[mpich-discuss] discuss Digest, Vol 91, Issue 5
hritikesh semwal
hritikesh.semwal at gmail.com
Tue May 5 07:07:45 CDT 2020
On Tue, May 5, 2020 at 5:17 PM <discuss-request at mpich.org> wrote:
>
> Today's Topics:
>
> 1. Better alternatives of MPI_Allreduce() (Benson Muite)
> 2. Re: Better alternatives of MPI_Allreduce() (Benson Muite)
> 3. Re: Better alternatives of MPI_Allreduce() (hritikesh semwal)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 05 May 2020 14:20:57 +0300
> From: "Benson Muite" <benson_muite at emailplus.org>
> To: "Benson Muite via discuss" <discuss at mpich.org>
> Cc: hritikesh.semwal at gmail.com
> Subject: [mpich-discuss] Better alternatives of MPI_Allreduce()
> Message-ID: <a38d489f-25a6-40a1-9640-42bd2f52a80f at www.fastmail.com>
> Content-Type: text/plain; charset="us-ascii"
>
>
> >> >
> >> > Hi Hritikesh,
> >> >
> >> > What hardware are you running on and what is the interconnect?
> >
> > Right now I am using a cluster.
>
> What is the interconnect?
> >
> >> > Have you tried changing any of the MPI settings?
> >
> > What do you mean by MPI settings?
> Given your comment on the barrier, this is probably not so useful at the
> moment.
> >
> >> > Can the reduction be done asynchronously?
> >
> > I did not get your question.
>
> For example, using a non-blocking allreduce:
> https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report/node135.htm
>
> >
> >> >
> >> > Regards,
> >> > Benson
> >>
> >> Also, is your work load balanced? One way to check this might be to
> >> place a barrier just before the all-reduce call. If the barrier ends up
> >> taking most of your time, then it is likely you will need to determine a
> >> better way to distribute the computational work.
> >
> > Thanks for your response.
> >
> > Yes, you are right. I have put a barrier just before the Allreduce, and of
> > the total time attributed to the Allreduce, 79% is actually spent in the
> > barrier. But my computational work is balanced. Right now, I have distributed
> > 97336 cells among 24 processors, and the maximum and minimum cell counts per
> > processor are 4057 and 4055 respectively, which is not too bad. Is there
> > any solution to get rid of this?
>
> ------------------------------
>
> Message: 2
> Date: Tue, 05 May 2020 14:36:38 +0300
> From: "Benson Muite" <benson_muite at emailplus.org>
> To: "Benson Muite via discuss" <discuss at mpich.org>
> Subject: Re: [mpich-discuss] Better alternatives of MPI_Allreduce()
> Message-ID: <2e8915e2-3c42-450c-921c-f282e46f1049 at www.fastmail.com>
> Content-Type: text/plain; charset="us-ascii"
>
>
>
> On Tue, May 5, 2020, at 2:16 PM, hritikesh semwal via discuss wrote:
> > I want to add two more questions about my solver.
> > 1. I am using MPI_Neighbor_alltoallw() to exchange data on a distributed
> > graph topology communicator. My concern is that most of the time my code
> > works fine, but sometimes it seems to go into a deadlock (it stops showing
> > any output). But MPI_Neighbor_alltoallw uses MPI_Waitall internally, so I
> > do not understand why exactly this is happening.
>
> You may want to check that you are sending and receiving the correct data.
> Perhaps also try MPI_Neighbor_alltoallw.
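
As a concrete starting point for that check, a minimal sketch (assuming the
distributed graph communicator is called graph_comm; these are placeholder
names, not the actual solver code):

    int indegree, outdegree, weighted;
    MPI_Dist_graph_neighbors_count(graph_comm, &indegree, &outdegree, &weighted);
    /* MPI_Neighbor_alltoallw expects sendcounts/sdispls/sendtypes arrays of
       length outdegree and recvcounts/rdispls/recvtypes arrays of length
       indegree; the displacements are byte offsets of type MPI_Aint. A
       mismatch between what one rank sends and what its neighbour expects to
       receive typically shows up as exactly this kind of intermittent hang. */
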
>
> > 2. Is it possible that the time the processors take to complete the task
> > varies from run to run? For example, in one run all processors take around
> > 100 seconds, and in another run all processors take 110 seconds.
>
> There is usually some variability. Do you solve the same system each time?
> What is the method of solution? If your code is available it can sometimes
> be easier to give suggestions.
>
Yes, the system of equations is the same. I am using the finite volume
method to solve the Navier-Stokes equations. By your first sentence, do you
mean that this is possible?
> >
> > Please help with the above two matters.
> >
> > On Tue, May 5, 2020 at 4:28 PM hritikesh semwal <
> hritikesh.semwal at gmail.com> wrote:
> >> Thanks for your response.
> >>
> >> Yes, you are right. I have put a barrier just before the Allreduce, and of
> >> the total time attributed to the Allreduce, 79% is actually spent in the
> >> barrier. But my computational work is balanced. Right now, I have distributed
> >> 97336 cells among 24 processors, and the maximum and minimum cell counts per
> >> processor are 4057 and 4055 respectively, which is not too bad. Is there
> >> any solution to get rid of this?
>
> Try profiling your code, not just looking at the cell distribution. Are any
> profiling tools already installed on your cluster?
>
gprof and valgrind are available on the cluster.
>
> >> On Tue, May 5, 2020 at 12:30 PM Joachim Protze <
> protze at itc.rwth-aachen.de> wrote:
> >>> Hello,
> >>>
> >>> it is important to understand that most of the time you see is not the
> >>> cost of the allreduce, but the cost of synchronization (caused by load
> >>> imbalance).
> >>>
> >>> You can do an easy experiment and add a barrier before the allreduce.
> >>> Then you will see the actual cost of the allreduce, while the cost of
> >>> synchronization will go into the barrier.
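
For reference, the experiment I ran follows roughly this pattern (placeholder
variable names, not the actual solver code):

    double t0 = MPI_Wtime();
    MPI_Barrier(MPI_COMM_WORLD);               /* absorbs the load imbalance */
    double t1 = MPI_Wtime();
    MPI_Allreduce(&local_err, &global_err, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);             /* the reduction itself       */
    double t2 = MPI_Wtime();
    /* t1 - t0 ~ synchronization/imbalance, t2 - t1 ~ cost of the allreduce;
       accumulated over all iterations, the barrier part is about 79%.       */
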
> >>>
> >>> Now, think about dependencies in your algorithm: do you need the output
> >>> value immediately? Is that the same moment at which the input value is
> >>> ready?
> >>> -> otherwise use non-blocking communication and perform independent work
> >>> in between
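
If I understand the suggestion correctly, the non-blocking variant would look
roughly like this (a sketch with hypothetical helper names, not the actual
solver code):

    MPI_Request req;
    double local_err = compute_local_error();      /* hypothetical helper    */
    double global_err;
    MPI_Iallreduce(&local_err, &global_err, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);
    do_work_not_needing_global_err();              /* hypothetical helper    */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    /* global_err is valid, and local_err may be modified, only after MPI_Wait */
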
> >>>
> >>> In any case: fix your load imbalance (the root cause of synchronization
> >>> cost).
> >>>
> >>> Best
> >>> Joachim
> >>>
> >>> On 05.05.20 at 07:38, hritikesh semwal via discuss wrote:
> >>> > Hello all,
> >>> >
> >>> > I am working on the development of a parallel CFD solver. I am using
> >>> > MPI_Allreduce for the global summation of the local errors calculated on
> >>> > all processes of a group, and the result is used by all the processes.
> >>> > My concern is that MPI_Allreduce takes almost 27-30% of the total run
> >>> > time, which is a significant amount. So I want to ask whether anyone can
> >>> > suggest a better alternative to MPI_Allreduce that would reduce this
> >>> > time.
> >>> >
> >>> > Thank you.
> >>> >
> >>>
> >>>
> >>> --
> >>> Dipl.-Inf. Joachim Protze
> >>>
> >>> IT Center
> >>> Group: High Performance Computing
> >>> Division: Computational Science and Engineering
> >>> RWTH Aachen University
> >>> Seffenter Weg 23
> >>> D 52074 Aachen (Germany)
> >>> Tel: +49 241 80- 24765
> >>> Fax: +49 241 80-624765
> >>> protze at itc.rwth-aachen.de
> >>> www.itc.rwth-aachen.de
> >>>
> >
>
> ------------------------------
>
> Message: 3
> Date: Tue, 5 May 2020 17:16:22 +0530
> From: hritikesh semwal <hritikesh.semwal at gmail.com>
> To: Benson Muite <benson_muite at emailplus.org>
> Cc: Benson Muite via discuss <discuss at mpich.org>
> Subject: Re: [mpich-discuss] Better alternatives of MPI_Allreduce()
> Message-ID: <CAA+35d3ZTpmksQR3FOyxJZny4og9gTynzk2zcesAmmYB2OBZhw at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> On Tue, May 5, 2020 at 4:51 PM Benson Muite <benson_muite at emailplus.org>
> wrote:
>
> >
> > >
> > > Hi Hritikesh,
> > >
> > > What hardware are you running on and what is the interconnect?
> >
> >
> > Right now I am using a cluster.
> >
> >
> > What is the interconnect?
> >
>
> I don't know about this. Is it relevant?
>
>
> >
> >
> > > Have you tried changing any of the MPI settings?
> >
> >
> > What do you mean by MPI settings?
> >
> > Given your comment on the barrier, this is probably not so useful at the
> > moment.
> >
> >
> >
> > > Can the reduction be done asynchronously?
> >
> >
> > I did not get your question.
> >
> >
> > For example, using a non-blocking allreduce:
> > https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report/node135.htm
> >
> >
> I tried using a non-blocking call, but with it the code does not work
> correctly.
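
One thing I still need to rule out on my side (noting it here in case it
helps): the result of MPI_Iallreduce is only valid after the matching
MPI_Wait, and the input buffer must not be modified before then. In sketch
form (placeholder names, not the actual code):

    /* wrong: the reduction may not have completed yet */
    MPI_Iallreduce(&local_err, &global_err, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);
    converged = (global_err < tolerance);          /* reads an undefined value */

    /* right: complete the request before using the result */
    MPI_Iallreduce(&local_err, &global_err, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    converged = (global_err < tolerance);
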
>
> >
> >
> > >
> > > Regards,
> > > Benson
> >
> > Also, is your work load balanced? One way to check this might be to place
> > a barrier just before the all-reduce call. If the barrier ends up taking
> > most of your time, then it is likely you will need to determine a better
> > way to distribute the computational work.
> >
> >
> > Thanks for your response.
> >
> > Yes, you are right. I have put a barrier just before the Allreduce, and of
> > the total time attributed to the Allreduce, 79% is actually spent in the
> > barrier. But my computational work is balanced. Right now, I have distributed
> > 97336 cells among 24 processors, and the maximum and minimum cell counts per
> > processor are 4057 and 4055 respectively, which is not too bad. Is there
> > any solution to get rid of this?
> >
> > Please help me in this regard.
>
> >
> >
> >
> >
>
> ------------------------------
>
> End of discuss Digest, Vol 91, Issue 5
> **************************************
>