[mpich-discuss] optimization of MPI_Alltoall(..)

Jan T. Balewski balewski at MIT.EDU
Tue May 20 10:14:51 CDT 2014


Hi Daniel,
thanks a lot for this comparison test. I do not have access to an InfiniBand
setup, so your numbers will help me build the case that I need one locally.
Changing to InfiniBand is obviously the most promising way to increase the
throughput.
Would it be too much to ask you to run my code on more than 32 cores (if you
have that many)?
At some point the InfiniBand will saturate, and adding more cores will no longer
translate into a higher 'speed' in MB/sec. I'm curious where this limit is.
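
For reference, a minimal timing sketch of the kind of measurement being discussed
here (this is not Jan's actual benchmark; the message size and repetition count
are illustrative assumptions): it times repeated MPI_Alltoall calls and reports
aggregate throughput in MB/s, so one can watch where the number stops growing as
ranks are added.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nproc;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    const int count = 64 * 1024;   /* doubles sent to each rank (assumed size) */
    const int reps  = 20;          /* repetitions to average over (assumed) */
    double *sendbuf = malloc((size_t)count * nproc * sizeof(double));
    double *recvbuf = malloc((size_t)count * nproc * sizeof(double));
    for (int i = 0; i < count * nproc; i++) sendbuf[i] = rank + i;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int r = 0; r < reps; r++)
        MPI_Alltoall(sendbuf, count, MPI_DOUBLE,
                     recvbuf, count, MPI_DOUBLE, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0) {
        /* bytes each rank sends per call, summed over all ranks and reps */
        double bytes = (double)count * sizeof(double) * nproc * nproc * reps;
        printf("%d ranks: %.1f MB/s aggregate\n", nproc, bytes / (t1 - t0) / 1e6);
    }

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

Running this at increasing rank counts (mpiexec -n 32, 64, ...) would show the
saturation point directly as a plateau in the reported MB/s.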
Thank you very much
Jan

P.S. Due to hardcoded dimensions, my code will run as-is on up to 64 cores.
For a higher core count, these two variables need to be modified so that they
are divisible by the core count:
int  NchXtot=9*5*7*64*2,  NchTtot=9*5*7*64 ;
Jan
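
A minimal sketch of the divisibility requirement described in the P.S. (the
check itself is an assumption for illustration, not code from Jan's program;
only the two totals above come from it):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int nproc, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int NchXtot = 9*5*7*64*2, NchTtot = 9*5*7*64;   /* values from the P.S. */

    /* Both totals must divide evenly by the rank count, otherwise the
     * per-rank slices (NchXtot/nproc, NchTtot/nproc) would not cover the
     * full arrays. */
    if (NchXtot % nproc != 0 || NchTtot % nproc != 0) {
        if (rank == 0)
            fprintf(stderr,
                    "NchXtot=%d and NchTtot=%d must be divisible by %d ranks\n",
                    NchXtot, NchTtot, nproc);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    MPI_Finalize();
    return 0;
}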



Quoting "Kokron, Daniel S. (GSFC-606.2)[Computer Sciences Corporation]"
<daniel.s.kokron at nasa.gov>:

> Jan,
>
> Here is some output from your code on a cluster that uses FDR 
> Infiniband instead of 1Gbit Ethernet.  I also used the vendor MPI 
> which is SMP aware.
>



