[mpich-discuss] Question about non-blocking MPI - 3

teivml at gmail.com
Fri Jan 15 03:47:58 CST 2021


Dear Joachim,
Thank you for your response.
It is good to know that no special settings are required to use
non-blocking MPI collectives in MPICH, and that setting
MPICH_ASYNC_PROGRESS=1 can improve the performance of computations that
overlap communication.
I will look for tools to confirm this.
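For reference, a minimal sketch of this overlap pattern (MPI_Iallreduce
is just one example of a non-blocking collective; error handling
omitted):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        double local = 1.0, global = 0.0;
        MPI_Request req;

        /* Start the non-blocking collective. */
        MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                       MPI_COMM_WORLD, &req);

        /* ... independent computation here, overlapped with the
           reduction ... */

        /* Completion always comes from MPI_Wait/MPI_Test;
           MPICH_ASYNC_PROGRESS=1 may improve the overlap but must
           never be required for correctness. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        MPI_Finalize();
        return 0;
    }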
Best,
Viet

On Thu, Jan 7, 2021 at 10:09 PM Joachim Protze <protze at itc.rwth-aachen.de>
wrote:

> Hi Viet,
>
> If MPICH_ASYNC_PROGRESS=1 is necessary for your application to
> terminate, i.e., you see a deadlock without this flag, the application
> has a correctness/portability issue. This environment variable should
> only have a possible impact on performance (better compute/communication
> overlap), but should never be necessary to avoid a deadlock.
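>
> A portable way to get overlap without relying on asynchronous progress
> is to drive the progress engine from the compute loop (a sketch;
> chunk_of_work() is a placeholder for application code):
>
>     double local = 1.0, global = 0.0;
>     int done = 0;
>     MPI_Request req;
>
>     MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
>                    MPI_COMM_WORLD, &req);
>     while (!done) {
>         chunk_of_work();                 /* a slice of computation */
>         MPI_Test(&req, &done,
>                  MPI_STATUS_IGNORE);     /* drives MPI progress */
>     }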
>
> You can compare this to the classical send-send deadlock (Example 3.9
> in the MPI standard), where the application relies on buffering / eager
> communication, but all sends suddenly block. The portable fix for the
> application is not to increase the eager limit, but to fix the
> communication pattern.
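>
> In code, the pattern and its portable fix look like this (a sketch for
> two ranks; n, peer, sbuf, and rbuf are placeholders):
>
>     /* erroneous: both ranks send first and rely on buffering;
>        deadlocks once messages exceed the eager limit */
>     MPI_Send(sbuf, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
>     MPI_Recv(rbuf, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD,
>              MPI_STATUS_IGNORE);
>
>     /* portable fix: let MPI pair the send and the receive */
>     MPI_Sendrecv(sbuf, n, MPI_DOUBLE, peer, 0,
>                  rbuf, n, MPI_DOUBLE, peer, 0,
>                  MPI_COMM_WORLD, MPI_STATUS_IGNORE);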
>
> Best
> Joachim
>
> On 07.01.21, 13:08, viet via discuss wrote:
> > Dear Hui Zhou,
> >
> > Thank you for your response to this thread.
> > Could you give me some references on this issue?
> >
> > It seems that the environment setting below is necessary.
> >
> > MPICH_ASYNC_PROGRESS=1
> >
> > Thank you for anything you can provide.
> > Best, Viet.
> >
> >
> >
> > On Tue, Jan 5, 2021 at 5:59 AM Zhou, Hui <zhouh at anl.gov> wrote:
> >
> >> You don’t need any special settings to use non-blocking MPI
> >> collectives in MPICH.
> >>
> >>
> >>
> >> --
> >> Hui Zhou
> >>
> >>
> >>
> >>
> >>
> >> *From: *viet via discuss <discuss at mpich.org>
> >> *Date: *Friday, December 4, 2020 at 7:12 AM
> >> *To: *discuss at mpich.org <discuss at mpich.org>
> >> *Cc: *teivml at gmail.com <teivml at gmail.com>
> >> *Subject: *[mpich-discuss] Question about non-blocking MPI - 3
> >>
> >> Hello everyone,
> >>
> >> What is the environment setting for using non-blocking MPI-3
> >> collectives in MPICH 3.3?
> >>
> >> In Intel MPI, the setting is as follows:
> >>
> >> $ export I_MPI_ASYNC_PROGRESS=1
> >> $ export I_MPI_ASYNC_PROGRESS_PIN=<CPU list>
> >>
> >> Is there an equivalent setting in MPICH 3.3?
> >> Thank you,
> >> Viet.
> >>
> >
> >
> >
>
>
> --
> Dipl.-Inf. Joachim Protze
>
> IT Center
> Group: High Performance Computing
> Division: Computational Science and Engineering
> RWTH Aachen University
> Seffenter Weg 23
> D 52074  Aachen (Germany)
> Tel: +49 241 80-24765
> Fax: +49 241 80-624765
> protze at itc.rwth-aachen.de
> www.itc.rwth-aachen.de
>
>

