<div dir="ltr"><div dir="ltr"><div>Hi MPICH dev team,</div><div><br></div><div>We downloaded MPICH 3.3.2 (which uses ch3 by default) and implemented our MPI program. For a small number of processes (<50), everything worked fine. With >=50 processes, a ch3 error crashed the program after a random amount of time (sometimes 10 seconds, sometimes 100 seconds). So we recompiled MPICH 3.3.2 with ch4 instead of the default ch3, using the `--with-device=ch4:ofi` flag, and this got rid of the error; but with >12 processes, the program suddenly ran about 2x slower.</div><div><br></div><div>At Hui's suggestion, we upgraded to MPICH 3.4 and compiled with the `--with-device=ch4:ofi` flag (ch4 is the default device in 3.4). Everything worked fine until we hit 20 processes; with >=20 processes, the 2x slowdown happens again.</div><div><br></div><div>We have tried both a single communicator and multiple communicators in an attempt to make our program faster, but there is no significant difference. We are using the MPI_Gather collective merely to compute the sum of the results of N processes, but we can't seem to maintain stable performance as we increase N. Is there something we are missing that is ultimately causing this behavior? We are at a loss here; thank you.</div><div><br></div><div>Best,</div><div>Brent</div><div>PS I am waiting for my subscription acceptance to <a href="mailto:discuss@mpich.org" target="_blank">discuss@mpich.org</a> per Hui's suggestion.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Jan 15, 2021 at 2:57 PM Zhou, Hui <<a href="mailto:zhouh@anl.gov">zhouh@anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div lang="EN-US" style="overflow-wrap: break-word;">
<div class="gmail-m_-2350283978886982936WordSection1">
<p class="MsoNormal">Very glad to hear!<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">For future reference, consider using <a href="mailto:discuss@mpich.org" target="_blank">
discuss@mpich.org</a> for usage questions. We have a larger community there, and you may receive faster and broader help.<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<div>
<div>
<div>
<p class="MsoNormal">-- <br>
Hui Zhou<u></u><u></u></p>
</div>
</div>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<div style="border-right:none;border-bottom:none;border-left:none;border-top:1pt solid rgb(181,196,223);padding:3pt 0in 0in">
<p class="MsoNormal" style="margin-bottom:12pt"><b><span style="font-size:12pt;color:black">From:
</span></b><span style="font-size:12pt;color:black">Brent Morgan <<a href="mailto:brent.taylormorgan@gmail.com" target="_blank">brent.taylormorgan@gmail.com</a>><br>
<b>Date: </b>Friday, January 15, 2021 at 3:43 PM<br>
<b>To: </b>Zhou, Hui <<a href="mailto:zhouh@anl.gov" target="_blank">zhouh@anl.gov</a>><br>
<b>Cc: </b>Robert Katona <<a href="mailto:robert.katona@hotmail.com" target="_blank">robert.katona@hotmail.com</a>><br>
<b>Subject: </b>Re: [mpich-devel] mpich3 error<u></u><u></u></span></p>
</div>
<div>
<p class="MsoNormal">Hi Hui,<u></u><u></u></p>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">We tried the latest release (3.4) with ch4, and the timing issue is resolved; it has been error-free so far. I am still updating all the nodes to confirm this for a large number of processes.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">Best,<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal">Brent<u></u><u></u></p>
</div>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
<div>
<div>
<p class="MsoNormal">On Fri, Jan 15, 2021 at 7:36 AM Zhou, Hui <<a href="mailto:zhouh@anl.gov" target="_blank">zhouh@anl.gov</a>> wrote:<u></u><u></u></p>
</div>
<blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0in 0in 0in 6pt;margin:5pt 0in 5pt 4.8pt">
<div>
<div>
<p class="MsoNormal">Please try with the latest release, for both ch3 and ch4 if you can.</p>
<p class="MsoNormal"> </p>
<div>
<div>
<div>
<p class="MsoNormal">-- <br>
Hui Zhou</p>
</div>
</div>
</div>
<p class="MsoNormal"> </p>
<p class="MsoNormal"> </p>
<div style="border-right:none;border-bottom:none;border-left:none;border-top:1pt solid rgb(181,196,223);padding:3pt 0in 0in">
<p class="MsoNormal" style="margin-bottom:12pt"><b><span style="font-size:12pt;color:black">From:
</span></b><span style="font-size:12pt;color:black">Brent Morgan <</span><a href="mailto:brent.taylormorgan@gmail.com" target="_blank"><span style="font-size:12pt">brent.taylormorgan@gmail.com</span></a><span style="font-size:12pt;color:black">><br>
<b>Date: </b>Friday, January 15, 2021 at 7:58 AM<br>
<b>To: </b>Zhou, Hui <</span><a href="mailto:zhouh@anl.gov" target="_blank"><span style="font-size:12pt">zhouh@anl.gov</span></a><span style="font-size:12pt;color:black">>, Robert Katona <</span><a href="mailto:robert.katona@hotmail.com" target="_blank"><span style="font-size:12pt">robert.katona@hotmail.com</span></a><span style="font-size:12pt;color:black">><br>
<b>Subject: </b>Re: [mpich-devel] mpich3 error</span></p>
</div>
<div>
<p class="MsoNormal">Hi Hui,</p>
<div>
<p class="MsoNormal"> </p>
</div>
<div>
<p class="MsoNormal">We compiled with `--with-device=ch4:ofi` and our program is now error-free, but it runs 2x slower.</p>
</div>
<div>
<p class="MsoNormal">1) Is this slowdown of ch4 compared to ch3 expected? </p>
</div>
<div>
<p class="MsoNormal">2) It looks like there's also a `--with-device=ch4:ucx` flag; will this be faster? Are there any other ch4 configure options that might be faster?</p>
</div>
<div>
<p class="MsoNormal">3) Does MPICH 3.4 have any speed benefit over 3.3.2?</p>
</div>
<div>
<p class="MsoNormal"> </p>
</div>
<div>
<p class="MsoNormal">Thank you, this will be very helpful for us.</p>
</div>
<div>
<p class="MsoNormal"> </p>
</div>
<div>
<p class="MsoNormal">Best,</p>
</div>
<div>
<p class="MsoNormal">Brent</p>
</div>
<div>
<p class="MsoNormal"> </p>
</div>
</div>
<p class="MsoNormal"> </p>
<div>
<div>
<p class="MsoNormal">On Fri, Jan 15, 2021 at 4:44 AM Brent Morgan <<a href="mailto:brent.taylormorgan@gmail.com" target="_blank">brent.taylormorgan@gmail.com</a>> wrote:</p>
</div>
<blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0in 0in 0in 6pt;margin:5pt 0in 5pt 4.8pt">
<div>
<p class="MsoNormal">Hi Hui,</p>
<div>
<p class="MsoNormal"> </p>
</div>
<div>
<p class="MsoNormal">It looks like compiling MPICH with ch4, using `--with-device=ch4:ofi`, solved the issue. If you can suggest reasons why this might be, that would be great. Thank you for the help
with that syntax,</p>
</div>
<div>
<p class="MsoNormal"> </p>
</div>
<div>
<p class="MsoNormal">Best,</p>
</div>
<div>
<p class="MsoNormal">Brent</p>
</div>
<div>
<p class="MsoNormal"> </p>
</div>
<div>
<p class="MsoNormal"> </p>
</div>
</div>
<p class="MsoNormal"> </p>
<div>
<div>
<p class="MsoNormal">On Fri, Jan 15, 2021 at 12:07 AM Brent Morgan <<a href="mailto:brent.taylormorgan@gmail.com" target="_blank">brent.taylormorgan@gmail.com</a>> wrote:</p>
</div>
<blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0in 0in 0in 6pt;margin:5pt 0in 5pt 4.8pt">
<div>
<p class="MsoNormal">Hi Hui,</p>
<div>
<p class="MsoNormal"> </p>
</div>
<div>
<p class="MsoNormal">My apologies for the confusion. We are calling a collective on one thread, across many processes, so it sounds like one communicator should indeed suffice. We are compiling with
ch4 to try to resolve our issue. </p>
</div>
<div>
<p class="MsoNormal"> </p>
</div>
<div>
<p class="MsoNormal">Our attempts with one communicator vs. many communicators gave the same result, though the version with many communicators was a little faster.</p>
</div>
<div>
<p class="MsoNormal"> </p>
</div>
<div>
<p class="MsoNormal">Best,</p>
</div>
<div>
<p class="MsoNormal">Brent </p>
</div>
</div>
<p class="MsoNormal"> </p>
<div>
<div>
<p class="MsoNormal">On Thu, Jan 14, 2021 at 10:52 PM Zhou, Hui <<a href="mailto:zhouh@anl.gov" target="_blank">zhouh@anl.gov</a>> wrote:</p>
</div>
<blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0in 0in 0in 6pt;margin:5pt 0in 5pt 4.8pt">
<div>
<div>
<p class="MsoNormal">You are confusing processes with threads, or you are confusing me. "Concurrently" refers to multiple threads, which you previously mentioned. All collectives require multiple processes
to participate simultaneously, of course; hence the name "collective". A lock only applies to multiple threads, not to processes. There are no restrictions on the scale: you can have 1 million processes in one communicator if needed.</p>
<p class="MsoNormal"> </p>
<div>
<div>
<div>
<p class="MsoNormal">-- <br>
Hui Zhou</p>
</div>
</div>
</div>
<p class="MsoNormal"> </p>
<p class="MsoNormal"> </p>
<div style="border-right:none;border-bottom:none;border-left:none;border-top:1pt solid rgb(181,196,223);padding:3pt 0in 0in">
<p class="MsoNormal" style="margin-bottom:12pt"><b><span style="font-size:12pt;color:black">From:
</span></b><span style="font-size:12pt;color:black">Brent Morgan <</span><a href="mailto:brent.taylormorgan@gmail.com" target="_blank"><span style="font-size:12pt">brent.taylormorgan@gmail.com</span></a><span style="font-size:12pt;color:black">><br>
<b>Date: </b>Thursday, January 14, 2021 at 11:46 PM<br>
<b>To: </b>Zhou, Hui <</span><a href="mailto:zhouh@anl.gov" target="_blank"><span style="font-size:12pt">zhouh@anl.gov</span></a><span style="font-size:12pt;color:black">><br>
<b>Subject: </b>Re: [mpich-devel] mpich3 error</span></p>
</div>
<div>
<p class="MsoNormal">Hi Hui,<br>
<br>
Thanks for the response,<br>
<br>
"You can’t call collectives (MPI_Gather, MPI_Reduce, etc.) on the same communicator concurrently. If you use the same communicator, add a lock/critical section to serialize them. Or you can call collectives on different communicators concurrently."</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal">We will try this. From what I understand, for a large number of processes, MPI_Reduce (or any collective) should either be serialized (with a lock) or called on
different communicators concurrently. Is this because a communicator can only handle so many processes?</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal">Best,</p>
<p class="MsoNormal">Brent</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal"> </p>
</div>
<p class="MsoNormal"> </p>
<div>
<div>
<p class="MsoNormal">On Thu, Jan 14, 2021 at 10:36 PM Zhou, Hui <<a href="mailto:zhouh@anl.gov" target="_blank">zhouh@anl.gov</a>> wrote:</p>
</div>
<blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0in 0in 0in 6pt;margin:5pt 0in 5pt 4.8pt">
<div>
<div>
<p class="MsoNormal">You can’t call collectives (MPI_Gather, MPI_Reduce, etc.) on the same communicator concurrently. If you use the same communicator, add a lock/critical section to serialize them.
Or you can call collectives on different communicators concurrently.</p>
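As a rough illustration, the two patterns look like this (function names are illustrative; assumes MPI was initialized with MPI_THREAD_MULTIPLE):

```c
#include <mpi.h>
#include <pthread.h>

/* One lock shared by every thread that uses the shared communicator. */
static pthread_mutex_t coll_lock = PTHREAD_MUTEX_INITIALIZER;

/* Pattern 1: serialize collectives on a shared communicator, so at most
   one thread is inside the collective at a time. */
double sum_serialized(double local, MPI_Comm comm_shared)
{
    double total = 0.0;
    pthread_mutex_lock(&coll_lock);
    MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, comm_shared);
    pthread_mutex_unlock(&coll_lock);
    return total;
}

/* Pattern 2: give each thread its own communicator (created once per
   thread, e.g. via MPI_Comm_dup of MPI_COMM_WORLD); no lock is needed
   because the concurrent collectives run on distinct communicators. */
double sum_private_comm(double local, MPI_Comm comm_private)
{
    double total = 0.0;
    MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, comm_private);
    return total;
}
```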
<p class="MsoNormal"> </p>
<p class="MsoNormal">The best and ultimate documentation is the MPI specification:
<a href="https://www.mpi-forum.org/docs/" target="_blank">https://www.mpi-forum.org/docs/</a></p>
<p class="MsoNormal"> </p>
<div>
<div>
<div>
<p class="MsoNormal">-- <br>
Hui Zhou</p>
</div>
</div>
</div>
<p class="MsoNormal"> </p>
<p class="MsoNormal"> </p>
<div style="border-right:none;border-bottom:none;border-left:none;border-top:1pt solid rgb(181,196,223);padding:3pt 0in 0in">
<p class="MsoNormal" style="margin-bottom:12pt"><b><span style="font-size:12pt;color:black">From:
</span></b><span style="font-size:12pt;color:black">Brent Morgan <</span><a href="mailto:brent.taylormorgan@gmail.com" target="_blank"><span style="font-size:12pt">brent.taylormorgan@gmail.com</span></a><span style="font-size:12pt;color:black">><br>
<b>Date: </b>Thursday, January 14, 2021 at 11:23 PM<br>
<b>To: </b>Zhou, Hui <</span><a href="mailto:zhouh@anl.gov" target="_blank"><span style="font-size:12pt">zhouh@anl.gov</span></a><span style="font-size:12pt;color:black">><br>
<b>Subject: </b>Re: [mpich-devel] mpich3 error</span></p>
</div>
<div>
<p class="MsoNormal">Hi Hui,</p>
<div>
<p class="MsoNormal"> </p>
</div>
<div>
<p class="MsoNormal">Thanks; so should we still use multiple communicators with MPI_Allreduce for a large number of processes? I can't find any documentation on the matter. We'll try things to see
what works. Thanks.</p>
<div>
<p class="MsoNormal"> </p>
</div>
<div>
<p class="MsoNormal">Best,</p>
</div>
<div>
<p class="MsoNormal">Brent</p>
</div>
</div>
</div>
<p class="MsoNormal"> </p>
<div>
<div>
<p class="MsoNormal">On Thu, Jan 14, 2021 at 10:11 PM Zhou, Hui <<a href="mailto:zhouh@anl.gov" target="_blank">zhouh@anl.gov</a>> wrote:</p>
</div>
<blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0in 0in 0in 6pt;margin:5pt 0in 5pt 4.8pt">
<div>
<div>
<p class="MsoNormal">"Our MPI implementation is merely finding the sum of the results of the N processes, where N is large. Is MPI_Reduce going to be faster?"</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal">Oh, yeah, if you are doing reduce, you should call `MPI_Reduce`.</p>
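As a rough sketch of that pattern (assuming each rank holds one partial value; the local value here is just a stand-in):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = (double)(rank + 1);  /* stand-in for each rank's result */
    double total = 0.0;

    /* Combines every rank's value with MPI_SUM in a single call; only the
       root (rank 0) receives the result. This replaces MPI_Gather followed
       by a manual loop over the gathered buffer. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %g\n", size, total);

    MPI_Finalize();
    return 0;
}
```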
<p class="MsoNormal"> </p>
<p class="MsoNormal">However, I suspect there may be some usage errors involved. Could you post some sample/pseudo code?</p>
<p class="MsoNormal"> </p>
<div>
<div>
<div>
<p class="MsoNormal">-- <br>
Hui Zhou</p>
</div>
</div>
</div>
<p class="MsoNormal" style="margin-left:11.55pt">
</p>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</blockquote>
</div>
</blockquote>
</div>
</blockquote>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</blockquote></div></div>