Yes, there are. You can do more to take the interconnect topology into account, and more to take advantage of the SMP nodes. The algorithm in MPICH is fairly generic and tries not to overstress the network, since contention and queue-search times can have a significant impact on the performance of alltoall.

With the appropriate algorithm, you should be able to sustain close to the peak interconnect bandwidth (assuming also that the memory system on the node is fast enough to keep up with the network).
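For example (just a sketch of the idea, not the MPICH implementation), an SMP-aware scheme can group the ranks on each node with MPI_Comm_split_type, let one leader per node carry the combined traffic over the wire, and redistribute the data inside the node over shared memory. The buffer packing and reordering are omitted here:

/* Minimal sketch of the communicator setup for a hierarchical,
 * SMP-aware alltoall.  Ranks on the same node are grouped with
 * MPI_Comm_split_type; node "leaders" exchange the combined data
 * over the network; the result is redistributed inside each node. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Communicator containing only the ranks that share this node. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int node_rank;
    MPI_Comm_rank(node_comm, &node_rank);

    /* Communicator containing one "leader" (node_rank == 0) per node;
     * all other ranks get MPI_COMM_NULL and skip the leader exchange. */
    MPI_Comm leader_comm;
    MPI_Comm_split(MPI_COMM_WORLD, node_rank == 0 ? 0 : MPI_UNDEFINED,
                   world_rank, &leader_comm);

    /* Outline of the exchange (buffers and packing not shown):
     *   1. MPI_Gather   on node_comm   -> leader collects the node's blocks
     *   2. MPI_Alltoall on leader_comm -> one large message per node pair
     *   3. MPI_Scatter  on node_comm   -> leader hands back received blocks
     * Fewer, larger messages cross the wire, which helps when 8 ranks
     * per blade share a single 1 Gbit link. */

    if (leader_comm != MPI_COMM_NULL)
        MPI_Comm_free(&leader_comm);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}

The win here usually comes from aggregating the eight per-rank messages on each blade into one message per node pair before they hit the shared link.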
Bill

William Gropp
Director, Parallel Computing Institute
Thomas M. Siebel Chair in Computer Science
University of Illinois Urbana-Champaign
On May 19, 2014, at 9:19 PM, Jan T. Balewski wrote:

> Hi,
> I want to use MPI to transpose a big NxM matrix.
> Currently the size is: N=20k, M=40k, type=ushort, total size = 16 GB (it will be larger in the future).
>
> To evaluate the code I report the processing speed in MB/sec; it is dominated by the cost of MPI_Alltoall(...) between 32 MPI processes.
> By changing the order in which MPI jobs are assigned to the cores on the blades, I was able to increase the overall speed by ~15%, from 349 MB/sec to 398 MB/sec.
>
> My question is: are there more tricks I can play to accelerate MPI_Alltoall(...) further?
> (For now, using the existing hardware.)
> Thanks for any suggestions,
> Jan
>
> P.S. Below are the gory details:
>
> The transposition is done in 2 stages by 32 MPI jobs running on 4 8-core Dell 1950 blades.
> The 4 blades are connected via eth0 to the same card in the Cisco 1 Gbit switch.
> This is the order of operations:
> - the big matrix is divided into 32x32 non-square blocks
> - each of the 32 blocks is transposed individually on the CPU in each MPI job
> - the 32x32 blocks are exchanged between MPI jobs and transposed using MPI_Alltoall(...)
> - the time is measured using MPI_Wtime()
> FYI, the whole code is visible at:
> https://bitbucket.org/balewski/kruk-mpi/src/8d3c4b7deb566f2768f132135b12e58f28f252d9/kruk2.c?at=master
>
> ***** mode 1: uses 8 consecutive cores per blade, then fills the 2nd blade, etc.
> $ mpiexec -f machinefile -n 32 ./kruk2
> where machinefile is:
> oswrk139.lns.mit.edu:8
> oswrk140.lns.mit.edu:8
> oswrk145.lns.mit.edu:8
> oswrk150.lns.mit.edu:8
>
> Summary: Ncpu=32 avrT/sec=40.83, totMB=16257.0 avrSpeed=398.2(MB/sec)
>
> ***** mode 2: uses the 1st core from all blades, then the 2nd core from all, etc.
> $ mpiexec -f machinefile -n 32 ./kruk2
> where machinefile is:
> oswrk139.lns.mit.edu:1
> oswrk140.lns.mit.edu:1
> oswrk145.lns.mit.edu:1
> oswrk150.lns.mit.edu:1
>
> Summary: Ncpu=32 avrT/sec=46.61, totMB=16257.0 avrSpeed=348.8(MB/sec)
>
> _______________________________________________
> discuss mailing list     discuss@mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss
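For reference, the pattern described in the quoted message (transpose blocks locally, exchange them with MPI_Alltoall, time the exchange with MPI_Wtime) reduces to roughly the following sketch. This is illustrative only; the actual code is kruk2.c at the link above, and the block size here is a placeholder rather than the real 20k x 40k layout:

/* Illustrative sketch of the exchange-and-time pattern described above.
 * The block size is a placeholder; the real implementation is kruk2.c. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nproc;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    /* One block of the matrix per destination rank (placeholder size). */
    const size_t block_elems = 1 << 20;   /* ushort elements per block */
    unsigned short *sendbuf = malloc(nproc * block_elems * sizeof *sendbuf);
    unsigned short *recvbuf = malloc(nproc * block_elems * sizeof *recvbuf);

    /* ... fill sendbuf with the locally transposed blocks ... */

    MPI_Barrier(MPI_COMM_WORLD);          /* start all ranks together */
    double t0 = MPI_Wtime();

    /* Every rank sends one block to every other rank. */
    MPI_Alltoall(sendbuf, (int)block_elems, MPI_UNSIGNED_SHORT,
                 recvbuf, (int)block_elems, MPI_UNSIGNED_SHORT,
                 MPI_COMM_WORLD);

    double t1 = MPI_Wtime();

    /* Report the slowest rank's time and the aggregate bandwidth. */
    double local = t1 - t0, tmax;
    MPI_Reduce(&local, &tmax, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        double mb = (double)nproc * nproc * block_elems
                    * sizeof(unsigned short) / (1024.0 * 1024.0);
        printf("Alltoall: %.2f sec, %.1f MB/sec\n", tmax, mb / tmax);
    }

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}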