[mpich-devel] mpich3 error
Brent Morgan
brent.taylormorgan at gmail.com
Fri Jan 15 23:24:57 CST 2021
Hi MPICH dev team,
We downloaded MPICH 3.3.2 (using the default ch3 device) and implemented our
MPI code. For a small number of processes (<50), everything worked fine. For
>=50 processes, a ch3 error crashed the program after a random amount of time
(sometimes 10 seconds, sometimes 100 seconds). We then rebuilt MPICH 3.3.2
with ch4 instead of the default ch3, using the `--with-device=ch4:ofi` flag,
and this got rid of the error; however, for >12 processes the program suddenly
ran about 2x slower.
On Hui's suggestion, we upgraded to MPICH 3.4 and compiled with the
`--with-device=ch4:ofi` flag (ch4 is the default device in MPICH 3.4).
Everything worked fine until we hit 20 processes; at >=20 processes, the 2x
slowdown appears again.
We have tried a single communicator and multiple communicators in an attempt
to make the implementation faster, but we observed no significant difference.
We are using the MPI_Gather collective merely to calculate the sum of the
results of N processes, but we can't seem to maintain stable performance as
we increase N. Is there something we are missing that is ultimately causing
these problems? We are at a loss here, thank you.
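For reference, the gather-then-sum pattern we use looks roughly like the
sketch below (the one-double-per-process payload and the variable names are
illustrative, not our exact code):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Each process computes one partial result (placeholder work). */
    double partial = (double) rank;

    /* Rank 0 gathers every partial result, then sums them locally. */
    double *all = NULL;
    if (rank == 0)
        all = malloc(nprocs * sizeof(double));
    MPI_Gather(&partial, 1, MPI_DOUBLE, all, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        double sum = 0.0;
        for (int i = 0; i < nprocs; i++)
            sum += all[i];
        printf("sum = %f\n", sum);
        free(all);
    }

    MPI_Finalize();
    return 0;
}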
Best,
Brent
PS I am waiting for my subscription acceptance to discuss at mpich.org per
Hui's suggestion.
On Fri, Jan 15, 2021 at 2:57 PM Zhou, Hui <zhouh at anl.gov> wrote:
> Very glad to hear!
>
>
>
> For future reference, consider using discuss at mpich.org for usage
> questions. We have a larger community there and you may receive faster or
> broader help.
>
>
>
> --
> Hui Zhou
>
>
>
>
>
> From: Brent Morgan <brent.taylormorgan at gmail.com>
> Date: Friday, January 15, 2021 at 3:43 PM
> To: Zhou, Hui <zhouh at anl.gov>
> Cc: Robert Katona <robert.katona at hotmail.com>
> Subject: Re: [mpich-devel] mpich3 error
>
> Hi Hui,
>
>
>
> We tried the latest release (3.4) for ch4, and the timing issue is
> resolved and the runs have been error-free so far. I am still updating all
> the nodes to confirm with a large number of processes.
>
>
>
> Best,
>
> Brent
>
>
>
> On Fri, Jan 15, 2021 at 7:36 AM Zhou, Hui <zhouh at anl.gov> wrote:
>
> Please try with the latest release, for both ch3 and ch4 if you can.
>
>
>
> --
> Hui Zhou
>
>
>
>
>
> From: Brent Morgan <brent.taylormorgan at gmail.com>
> Date: Friday, January 15, 2021 at 7:58 AM
> To: Zhou, Hui <zhouh at anl.gov>, Robert Katona <robert.katona at hotmail.com>
> Subject: Re: [mpich-devel] mpich3 error
>
> Hi Hui,
>
>
>
> We compiled with `--with-device=ch4:ofi` and our program is now error-free,
> but it runs about 2x slower.
>
> 1) Is this slowdown of ch4 compared to ch3 expected?
>
> 2) It looks like there is also a `--with-device=ch4:ucx` option; will this
> be faster? Are there any other ch4 configure options that might be faster?
>
> 3) Does MPICH 3.4 have any speed benefit over 3.3.2?
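For what it's worth, one way to double-check which device a given MPICH build
is actually using is to print the library version string; as far as I know,
MPICH's MPI_Get_library_version output reports the device and the configure
options of the build. A minimal sketch:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        char version[MPI_MAX_LIBRARY_VERSION_STRING];
        int len;
        MPI_Get_library_version(version, &len);
        /* MPICH's version string typically includes the device
         * (e.g. ch4:ofi) and the configure line used for the build. */
        printf("%s\n", version);
    }

    MPI_Finalize();
    return 0;
}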
>
>
>
> Thank you, this will be very helpful for us.
>
>
>
> Best,
>
> Brent
>
>
>
>
>
> On Fri, Jan 15, 2021 at 4:44 AM Brent Morgan <brent.taylormorgan at gmail.com>
> wrote:
>
> Hi Hui,
>
>
>
> It looks like compiling MPICH with ch4, using `--with-device=ch4:ofi`,
> solved the issue. If you can provide reasons why this may be, that would
> be great. Thank you for the help with that syntax.
>
>
>
> Best,
>
> Brent
>
>
>
>
>
>
>
> On Fri, Jan 15, 2021 at 12:07 AM Brent Morgan <
> brent.taylormorgan at gmail.com> wrote:
>
> Hi Hui,
>
>
>
> My apologies for the confusion. We are calling a collective from 1 thread,
> across many processes, so it sounds like 1 communicator should indeed
> suffice. We are now trying to compile with ch4 to resolve our issue.
>
>
>
> Our attempt at using 1 communicator vs. many communicators gave us the same
> result, though the version with many communicators was a little faster.
>
>
>
> Best,
>
> Brent
>
>
>
> On Thu, Jan 14, 2021 at 10:52 PM Zhou, Hui <zhouh at anl.gov> wrote:
>
> You are confusing processes with threads, or you are confusing me.
> "Concurrently" refers to multiple threads, which you previously mentioned.
> All collectives of course require multiple processes to participate at the
> same time, hence the name collective. A lock only applies to multiple
> threads, not processes. There are no restrictions on the scale: you can
> have 1 million processes in one communicator if needed.
>
>
>
> --
> Hui Zhou
>
>
>
>
>
> From: Brent Morgan <brent.taylormorgan at gmail.com>
> Date: Thursday, January 14, 2021 at 11:46 PM
> To: Zhou, Hui <zhouh at anl.gov>
> Subject: Re: [mpich-devel] mpich3 error
>
> Hi Hui,
>
> Thanks for the response,
>
> "You can’t call collectives (MPI_Gather, MPI_Reduce, etc.) on the same
> communicator concurrently. If you use the same communicator, add a
> lock/critical section to serialize them. Or you can call collectives on
> different communicators concurrently."
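A minimal sketch of the two options described above, assuming
MPI_THREAD_MULTIPLE and two worker threads per process (the thread count,
names, and the MPI_Allreduce payload are illustrative):

#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

/* Option 1: all threads share one communicator and serialize their
 * collectives with a mutex.  Option 2: each thread gets its own
 * duplicate of MPI_COMM_WORLD so collectives can run concurrently. */

static pthread_mutex_t coll_lock = PTHREAD_MUTEX_INITIALIZER;

typedef struct {
    MPI_Comm comm;   /* MPI_COMM_WORLD (option 1) or a per-thread dup (option 2) */
    int use_lock;    /* serialize collectives when the communicator is shared */
} thread_arg;

static void *worker(void *p)
{
    thread_arg *arg = (thread_arg *) p;
    double partial = 1.0, sum = 0.0;

    if (arg->use_lock) pthread_mutex_lock(&coll_lock);
    MPI_Allreduce(&partial, &sum, 1, MPI_DOUBLE, MPI_SUM, arg->comm);
    if (arg->use_lock) pthread_mutex_unlock(&coll_lock);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    /* Option 2: one duplicated communicator per thread.  Option 1 would
     * instead pass MPI_COMM_WORLD to both threads with use_lock = 1. */
    MPI_Comm comm0, comm1;
    MPI_Comm_dup(MPI_COMM_WORLD, &comm0);
    MPI_Comm_dup(MPI_COMM_WORLD, &comm1);

    pthread_t t0, t1;
    thread_arg a0 = { comm0, 0 }, a1 = { comm1, 0 };
    pthread_create(&t0, NULL, worker, &a0);
    pthread_create(&t1, NULL, worker, &a1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);

    MPI_Comm_free(&comm0);
    MPI_Comm_free(&comm1);
    MPI_Finalize();
    return 0;
}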
>
>
>
> We will try this. From what I understand, for large # of processes, it
> sounds like MPI_Reduce (or any collective) should be done on smaller scales
> serially (with a lock), or with different communicators concurrently. Is
> this because a communicator can only handle so many processes?
>
>
>
> Best,
>
> Brent
>
>
>
>
>
>
>
> On Thu, Jan 14, 2021 at 10:36 PM Zhou, Hui <zhouh at anl.gov> wrote:
>
> You can’t call collectives (MPI_Gather, MPI_Reduce, etc.) on the same
> communicator concurrently. If you use the same communicator, add a
> lock/critical section to serialize them. Or you can call collectives on
> different communicators concurrently.
>
>
>
> The best and ultimate documentation is the MPI specification:
> https://www.mpi-forum.org/docs/
>
>
>
> --
> Hui Zhou
>
>
>
>
>
> From: Brent Morgan <brent.taylormorgan at gmail.com>
> Date: Thursday, January 14, 2021 at 11:23 PM
> To: Zhou, Hui <zhouh at anl.gov>
> Subject: Re: [mpich-devel] mpich3 error
>
> Hi Hui,
>
>
>
> Thanks; so should we still be using multiple communicators with
> MPI_Allreduce for a large number of processes? I can't find any
> documentation on the matter... We'll keep trying things to see what works.
> Thanks.
>
>
>
> Best,
>
> Brent
>
>
>
> On Thu, Jan 14, 2021 at 10:11 PM Zhou, Hui <zhouh at anl.gov> wrote:
>
> “Our MPI implementation is merely finding the sum of the results of
> the N processes, where N is large. Is MPI_Reduce going to be faster?”
>
>
>
> Oh, yeah, if you are doing reduce, you should call `MPI_Reduce`.
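For a global sum, the gather-and-add pattern shown earlier collapses into a
single call; a minimal sketch, assuming one double per process with the
result needed only on rank 0:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each process contributes one partial result (placeholder work). */
    double partial = (double) rank;
    double sum = 0.0;

    /* One reduction replaces MPI_Gather plus the root-side loop:
     * MPI can combine the values as they travel through the network. */
    MPI_Reduce(&partial, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %f\n", sum);

    MPI_Finalize();
    return 0;
}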
>
>
>
> However, I suspect there may be some usage errors involved. Could you post
> some sample/pseudo code?
>
>
>
> --
> Hui Zhou
>
>
>
>