[mpich-discuss] discuss Digest, Vol 91, Issue 3

hritikesh semwal hritikesh.semwal at gmail.com
Tue May 5 06:01:04 CDT 2020


On Tue, May 5, 2020 at 12:30 PM <discuss-request at mpich.org> wrote:

> Send discuss mailing list submissions to
>         discuss at mpich.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         https://lists.mpich.org/mailman/listinfo/discuss
> or, via email, send a message with subject or body 'help' to
>         discuss-request at mpich.org
>
> You can reach the person managing the list at
>         discuss-owner at mpich.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of discuss digest..."
>
>
> Today's Topics:
>
>    1.  Better alternatives of MPI_Allreduce() (hritikesh semwal)
>    2. Re:  Better alternatives of MPI_Allreduce() (Benson Muite)
>    3. Re:  Better alternatives of MPI_Allreduce() (Benson Muite)
>    4. Re:  Better alternatives of MPI_Allreduce() (Joachim Protze)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 5 May 2020 11:08:20 +0530
> From: hritikesh semwal <hritikesh.semwal at gmail.com>
> To: discuss at mpich.org
> Subject: [mpich-discuss] Better alternatives of MPI_Allreduce()
> Message-ID:
>         <CAA+35d1zgxVjcfH=
> jpio5rLm1FkjxMC3Bis-qB9D43qjfEogRw at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hello all,
>
> I am working on the development of a parallel CFD solver, and I am using
> MPI_Allreduce for the global summation of the local errors calculated on
> all processes of a group; the sum is then used by all the processes. My
> concern is that MPI_Allreduce is taking almost 27-30% of the total run
> time, which is a significant amount. Can anyone suggest a better
> alternative to MPI_Allreduce that would reduce this time?
>
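> For concreteness, the call pattern is essentially the following (a
> minimal sketch; the error computation here is just a stand-in):
>
>     #include <mpi.h>
>     #include <stdio.h>
>
>     int main(int argc, char **argv)
>     {
>         MPI_Init(&argc, &argv);
>         int rank;
>         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>         double local_err = 1.0 / (rank + 1);  /* stand-in for local error */
>         double global_err;
>         /* every rank contributes its value and receives the global sum */
>         MPI_Allreduce(&local_err, &global_err, 1, MPI_DOUBLE, MPI_SUM,
>                       MPI_COMM_WORLD);
>         if (rank == 0)
>             printf("global error = %f\n", global_err);
>         MPI_Finalize();
>         return 0;
>     }
>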
> Thank you.
>
> ------------------------------
>
> Message: 2
> Date: Tue, 05 May 2020 09:57:43 +0300
> From: "Benson Muite" <benson_muite at emailplus.org>
> To: discuss at mpich.org
> Subject: Re: [mpich-discuss] Better alternatives of MPI_Allreduce()
> Message-ID: <d3cfd641-1849-4ed8-a036-7db371ebb225 at www.fastmail.com>
> Content-Type: text/plain; charset="us-ascii"
>
>
>
>
> On Tue, May 5, 2020, at 8:38 AM, hritikesh semwal via discuss wrote:
> > Hello all,
> >
> > I am working on the development of a parallel CFD solver, and I am
> > using MPI_Allreduce for the global summation of the local errors
> > calculated on all processes of a group; the sum is then used by all
> > the processes. My concern is that MPI_Allreduce is taking almost
> > 27-30% of the total run time, which is a significant amount. Can
> > anyone suggest a better alternative to MPI_Allreduce that would
> > reduce this time?
> >
> > Thank you.
>
> Hi Hritikesh,
>
> What hardware are you running on and what is the interconnect?
> Have you tried changing any of the MPI settings?
> Can the reduction be done asynchronously?
>
> Regards,
> Benson
>
> ------------------------------
>
> Message: 3
> Date: Tue, 05 May 2020 09:59:37 +0300
> From: "Benson Muite" <benson_muite at emailplus.org>
> To: discuss at mpich.org
> Subject: Re: [mpich-discuss] Better alternatives of MPI_Allreduce()
> Message-ID: <5ed0cd5f-c68c-404b-9041-a5189dd5e0e2 at www.fastmail.com>
> Content-Type: text/plain; charset="us-ascii"
>
>
>
>
> On Tue, May 5, 2020, at 9:57 AM, Benson Muite wrote:
> >
> >
> >
> > On Tue, May 5, 2020, at 8:38 AM, hritikesh semwal via discuss wrote:
> >> Hello all,
> >>
> >> I am working on the development of a parallel CFD solver, and I am
> >> using MPI_Allreduce for the global summation of the local errors
> >> calculated on all processes of a group; the sum is then used by all
> >> the processes. My concern is that MPI_Allreduce is taking almost
> >> 27-30% of the total run time, which is a significant amount. Can
> >> anyone suggest a better alternative to MPI_Allreduce that would
> >> reduce this time?
> >>
> >> Thank you.
> >
> > Hi Hritikesh,
> >
> > What hardware are you running on and what is the interconnect?
>

Right now I am running on a cluster.


> > Have you tried changing any of the MPI settings?
>

What do you mean by MPI settings?


> > Can the reduction be done asynchronously?
>

I did not understand your question.


> >
> > Regards,
> > Benson
>
> Also, is your workload balanced? One way to check this might be to place
> a barrier just before the all-reduce call. If the barrier ends up taking
> most of your time, then you will likely need to find a better way to
> distribute the computational work.
>

Thanks for your response.

Yes, you are right. I put a barrier just before the Allreduce, and out of
the total time previously attributed to the Allreduce, 79% is consumed by
the barrier. But my computational work is balanced: I have distributed
97336 cells among 24 processors, and the maximum and minimum cell counts
per processor are 4057 and 4055 respectively, which is not too bad. Is
there any way to get rid of this overhead?

>
> ------------------------------
>
> Message: 4
> Date: Tue, 5 May 2020 09:00:20 +0200
> From: Joachim Protze <protze at itc.rwth-aachen.de>
> To: <discuss at mpich.org>
> Cc: hritikesh semwal <hritikesh.semwal at gmail.com>
> Subject: Re: [mpich-discuss] Better alternatives of MPI_Allreduce()
> Message-ID: <1189df11-a53f-059d-8fb3-16887890a606 at itc.rwth-aachen.de>
> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
> Hello,
>
> It is important to understand that most of the time you see is not the
> cost of the allreduce itself, but the cost of synchronization (caused by
> load imbalance).
>
> You can do an easy experiment and add a barrier before the allreduce.
> Then you will see the actual cost of the allreduce, while the cost of
> synchronization will go into the barrier.
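>
> For example (a sketch; it assumes the reduction buffers are set up as in
> the original code and uses MPI_Wtime() for timing):
>
>     double t0 = MPI_Wtime();
>     MPI_Barrier(MPI_COMM_WORLD);   /* absorbs the load imbalance */
>     double t1 = MPI_Wtime();
>     MPI_Allreduce(&local_err, &global_err, 1, MPI_DOUBLE, MPI_SUM,
>                   MPI_COMM_WORLD);
>     double t2 = MPI_Wtime();
>     /* t1 - t0 ~ synchronization cost; t2 - t1 ~ actual allreduce cost */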
>
> Now, think about the dependencies in your algorithm: do you need the
> output value immediately? Is that the same point at which the input
> value becomes ready?
> -> if not, use non-blocking communication and perform independent work
> in between (see the sketch below)
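>
> A minimal sketch of that pattern (MPI_Iallreduce requires MPI-3; the
> buffer names and do_independent_work() are hypothetical):
>
>     MPI_Request req;
>     /* start the reduction as soon as the local value is available */
>     MPI_Iallreduce(&local_err, &global_err, 1, MPI_DOUBLE, MPI_SUM,
>                    MPI_COMM_WORLD, &req);
>     do_independent_work();             /* work that does not need the sum */
>     MPI_Wait(&req, MPI_STATUS_IGNORE); /* global_err is now valid here */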
>
> In any case: fix your load imbalance (the root cause of the
> synchronization cost).
>
> Best
> Joachim
>
> On 05.05.20 at 07:38, hritikesh semwal via discuss wrote:
> > Hello all,
> >
> > I am working on the development of a parallel CFD solver, and I am
> > using MPI_Allreduce for the global summation of the local errors
> > calculated on all processes of a group; the sum is then used by all
> > the processes. My concern is that MPI_Allreduce is taking almost
> > 27-30% of the total run time, which is a significant amount. Can
> > anyone suggest a better alternative to MPI_Allreduce that would
> > reduce this time?
> >
> > Thank you.
> >
>
>
> --
> Dipl.-Inf. Joachim Protze
>
> IT Center
> Group: High Performance Computing
> Division: Computational Science and Engineering
> RWTH Aachen University
> Seffenter Weg 23
> D 52074  Aachen (Germany)
> Tel: +49 241 80-24765
> Fax: +49 241 80-624765
> protze at itc.rwth-aachen.de
> www.itc.rwth-aachen.de
>
>
> ------------------------------
>
> End of discuss Digest, Vol 91, Issue 3
> **************************************
>