<div dir="ltr">I tested hpgmg on Mira and reproduced the scaling problem. I used mpiwrapper-gcc.<div>With 64000 MPI tasks, the output is:</div><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div>Duplicating MPI_COMM_WORLD...done (2.059304 seconds)</div>
<div>Building MPI subcommunicators..., level 1...done (2.291601 seconds)</div><div>Total time in MGBuild 28.085873 seconds</div></blockquote><div>With 125000 MPI tasks, the output is:</div><blockquote style="margin:0 0 0 40px;border:none;padding:0px">
<div>Duplicating MPI_COMM_WORLD...done (7.807612 seconds)</div><div>Building MPI subcommunicators..., level 1...done (8.158366 seconds)</div><div>Total time in MGBuild 88.644299 seconds</div></blockquote><br><div>Let me replace qsort with mergesort in Comm_split to see what happens.</div>
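<div><br></div><div>For reference, the kind of change I have in mind is sketched below. It only illustrates sorting the (color, key, rank) triples with a worst-case O(P log P) merge sort instead of qsort; the struct and function names here are mine, not MPICH's actual internals.</div><div><br></div>
<pre>
/* Sketch only: a stable bottom-up merge sort for (color, key, rank)
 * triples, as a drop-in replacement for the qsort call inside a
 * comm_split-style routine.  Struct and function names are illustrative. */
#include <stdlib.h>
#include <string.h>

typedef struct { int color, key, rank; } splittype;

/* order by color, then key, then original rank (keeps ties deterministic) */
static int cmp_split(const splittype *a, const splittype *b) {
    if (a->color != b->color) return (a->color < b->color) ? -1 : 1;
    if (a->key   != b->key)   return (a->key   < b->key)   ? -1 : 1;
    return (a->rank > b->rank) - (a->rank < b->rank);
}

/* bottom-up merge sort: O(n log n) worst case, even on presorted keys */
static void mergesort_split(splittype *a, int n) {
    splittype *tmp = malloc((size_t)n * sizeof *a);
    if (tmp == NULL) return;                  /* caller should handle failure */
    for (int width = 1; width < n; width *= 2) {
        for (int lo = 0; lo < n; lo += 2 * width) {
            int mid = lo + width;     if (mid > n) mid = n;
            int hi  = lo + 2 * width; if (hi  > n) hi  = n;
            int i = lo, j = mid, k = lo;
            while (i < mid && j < hi)
                tmp[k++] = (cmp_split(&a[j], &a[i]) < 0) ? a[j++] : a[i++];
            while (i < mid) tmp[k++] = a[i++];
            while (j < hi)  tmp[k++] = a[j++];
        }
        memcpy(a, tmp, (size_t)n * sizeof *a);
    }
    free(tmp);
}
</pre>
<div>Heap sort would also work; the point is simply that the worst case stays O(P log P) no matter how the keys arrive.</div>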
</div><div class="gmail_extra"><br clear="all"><div><div dir="ltr">--Junchao Zhang</div></div>
<br><br><div class="gmail_quote">On Sat, May 24, 2014 at 9:07 AM, Sam Williams <span dir="ltr"><<a href="mailto:swwilliams@lbl.gov" target="_blank">swwilliams@lbl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I saw the problem on Mira and K but not on Edison. I don't know whether that is due to scale or to the implementation.<br>
<br>
On Mira, I was running jobs with a 10-minute wallclock limit. I scaled to 46656 processes with 64 threads per process (c1, OMP_NUM_THREADS=64) and all jobs completed successfully. However, I was only looking at MGSolve times and not MGBuild times. I then decided to explore 8 threads per process (c8, OMP_NUM_THREADS=8) and started at high concurrency. With 373248 processes, the jobs timed out while still in MGBuild after 10 minutes, and again with a 20-minute limit. At that point I added the USE_SUBCOMM option to enable/disable the use of comm_split. I haven't tried scaling with the sub communicator on Mira since then.<br>
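<br>
A sketch of that kind of compile-time toggle is below; the identifiers are illustrative, not necessarily hpgmg's actual names:<br>
<br>
#include <mpi.h><br>
<br>
/* Sketch of a USE_SUBCOMM-style compile-time toggle; identifiers are<br>
   illustrative, not necessarily hpgmg's actual names. */<br>
static MPI_Comm build_level_comm(int mycolor, int myrank) {<br>
    MPI_Comm level_comm;<br>
#ifdef USE_SUBCOMM<br>
    /* build a per-level subcommunicator (the expensive comm_split path) */<br>
    MPI_Comm_split(MPI_COMM_WORLD, mycolor, myrank, &level_comm);<br>
#else<br>
    /* skip comm_split entirely and reuse the world communicator */<br>
    level_comm = MPI_COMM_WORLD;<br>
#endif<br>
    return level_comm;<br>
}<br>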
<div class="HOEnZb"><div class="h5"><br>
<br>
On May 24, 2014, at 6:39 AM, Junchao Zhang <<a href="mailto:jczhang@mcs.anl.gov">jczhang@mcs.anl.gov</a>> wrote:<br>
<br>
> Hi, Sam,<br>
> Could you give me the exact number of MPI ranks for your results on Mira?<br>
> I ran hpgmg on Edison with export OMP_NUM_THREADS=1 and aprun -n 64000 -ss -cc numa_node ./hpgmg-fv 6 1. The total time in MGBuild is about 0.005 seconds. I was wondering how many cores I need to request to reproduce the problem.<br>
> Thanks.<br>
><br>
><br>
> --Junchao Zhang<br>
><br>
><br>
> On Sat, May 17, 2014 at 9:06 AM, Sam Williams <<a href="mailto:swwilliams@lbl.gov">swwilliams@lbl.gov</a>> wrote:<br>
> I've been conducting scaling experiments on the Mira (Blue Gene/Q) and K (SPARC) supercomputers. I've noticed that the time required for MPI_Comm_split and MPI_Comm_dup can grow quickly with scale (roughly P^2), so their performance eventually becomes a bottleneck. That is, although the benefit of using a subcommunicator is huge (multigrid solves are weak-scalable), the penalty of creating one (multigrid build time) is also huge.<br>
><br>
> For example, when scaling from 1 to 46K nodes (node counts are cubes of integers) on Mira, the time (in seconds) required to build an MG solver (including a subcommunicator) scales as<br>
> 222335.output: Total time in MGBuild 0.056704<br>
> 222336.output: Total time in MGBuild 0.060834<br>
> 222348.output: Total time in MGBuild 0.064782<br>
> 222349.output: Total time in MGBuild 0.090229<br>
> 222350.output: Total time in MGBuild 0.075280<br>
> 222351.output: Total time in MGBuild 0.091852<br>
> 222352.output: Total time in MGBuild 0.137299<br>
> 222411.output: Total time in MGBuild 0.301552<br>
> 222413.output: Total time in MGBuild 0.606444<br>
> 222415.output: Total time in MGBuild 0.745272<br>
> 222417.output: Total time in MGBuild 0.779757<br>
> 222418.output: Total time in MGBuild 4.671838<br>
> 222419.output: Total time in MGBuild 15.123162<br>
> 222420.output: Total time in MGBuild 33.875626<br>
> 222421.output: Total time in MGBuild 49.494547<br>
> 222422.output: Total time in MGBuild 151.329026<br>
><br>
> If I disable the call to MPI_Comm_split, my time scales as<br>
> 224982.output: Total time in MGBuild 0.050143<br>
> 224983.output: Total time in MGBuild 0.052607<br>
> 224988.output: Total time in MGBuild 0.050697<br>
> 224989.output: Total time in MGBuild 0.078343<br>
> 224990.output: Total time in MGBuild 0.054634<br>
> 224991.output: Total time in MGBuild 0.052158<br>
> 224992.output: Total time in MGBuild 0.060286<br>
> 225008.output: Total time in MGBuild 0.062925<br>
> 225009.output: Total time in MGBuild 0.097357<br>
> 225010.output: Total time in MGBuild 0.061807<br>
> 225011.output: Total time in MGBuild 0.076617<br>
> 225012.output: Total time in MGBuild 0.099683<br>
> 225013.output: Total time in MGBuild 0.125580<br>
> 225014.output: Total time in MGBuild 0.190711<br>
> 225016.output: Total time in MGBuild 0.218329<br>
> 225017.output: Total time in MGBuild 0.282081<br>
><br>
> Although I didn't measure it directly, this suggests the time for MPI_Comm_split grows roughly quadratically with process concurrency.<br>
><br>
><br>
><br>
><br>
> I see the same effect on the K machine (8 to 64K nodes), where the code uses comm_split and comm_dup together:<br>
> run00008_7_1.sh.o2412931: Total time in MGBuild 0.026458 seconds<br>
> run00064_7_1.sh.o2415876: Total time in MGBuild 0.039121 seconds<br>
> run00512_7_1.sh.o2415877: Total time in MGBuild 0.086800 seconds<br>
> run01000_7_1.sh.o2414496: Total time in MGBuild 0.129764 seconds<br>
> run01728_7_1.sh.o2415878: Total time in MGBuild 0.224576 seconds<br>
> run04096_7_1.sh.o2415880: Total time in MGBuild 0.738979 seconds<br>
> run08000_7_1.sh.o2414504: Total time in MGBuild 2.123800 seconds<br>
> run13824_7_1.sh.o2415881: Total time in MGBuild 6.276573 seconds<br>
> run21952_7_1.sh.o2415882: Total time in MGBuild 13.634200 seconds<br>
> run32768_7_1.sh.o2415884: Total time in MGBuild 36.508670 seconds<br>
> run46656_7_1.sh.o2415874: Total time in MGBuild 58.668228 seconds<br>
> run64000_7_1.sh.o2415875: Total time in MGBuild 117.322217 seconds<br>
><br>
><br>
> A glance at the implementation on Mira (I don't know whether the implementation on K is stock) suggests it uses qsort to sort by key. Unfortunately, qsort is not performance-robust the way heap sort or merge sort is. If one calls comm_split in the obvious way...<br>
> MPI_Comm_split(...,mycolor,myrank,...)<br>
> then one runs the risk that the keys are already sorted. Presorted input hits qsort's worst-case computational complexity, O(P^2), and demanding that programmers avoid passing sorted keys seems unreasonable.<br>
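> <br>
> For concreteness, here is a stand-alone illustration of that call pattern (the grouping of 512 ranks per color is arbitrary, not hpgmg's actual decomposition); with key = myrank, the keys within every color are already in sorted order:<br>
> <br>
> #include <mpi.h><br>
> <br>
> int main(int argc, char **argv) {<br>
>     int myrank, mycolor;<br>
>     MPI_Comm subcomm;<br>
>     MPI_Init(&argc, &argv);<br>
>     MPI_Comm_rank(MPI_COMM_WORLD, &myrank);<br>
>     mycolor = myrank / 512;   /* arbitrary grouping; any deterministic color works */<br>
>     /* key = myrank, so the keys within each color are presorted */<br>
>     MPI_Comm_split(MPI_COMM_WORLD, mycolor, myrank, &subcomm);<br>
>     MPI_Comm_free(&subcomm);<br>
>     MPI_Finalize();<br>
>     return 0;<br>
> }<br>
> <br>
> A qsort with a naive pivot choice degrades on exactly this kind of input, whereas a merge or heap sort does not.<br>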
><br>
><br>
> I should note that I see a similar lack of scaling with MPI_Comm_dup on the K machine. Unfortunately, my BGQ data used an earlier version of the code that did not call comm_dup, so I can't definitively say whether it is a problem on that machine as well.<br>
><br>
> Thus, I'm asking that scalable implementations of comm_split/dup, using a merge or heap sort whose worst-case complexity is still O(P log P), be prioritized in the next update.<br>
><br>
><br>
> thanks<br>
<br>
_______________________________________________<br>
To manage subscription options or unsubscribe:<br>
<a href="https://lists.mpich.org/mailman/listinfo/devel" target="_blank">https://lists.mpich.org/mailman/listinfo/devel</a><br>
</div></div></blockquote></div><br></div>