[mpich-discuss] Question about mpich bandwidth vs message-size plot
Congiu, Giuseppe
gcongiu at anl.gov
Tue Nov 26 13:32:50 CST 2019
Hi Sajid,
You are correct, by default MPICH 3.3 uses ch3:nemesis:tcp. If you have Mellanox IB and you want to use it, you have to specify `--with-device=ch4:ucx` at MPICH configuration time. As for why you see that drop in bandwidth at some large message sizes, it's difficult to pinpoint the reason just by looking at the config.log. For large messages (you can probably look up the threshold in the CH3 code, I think it's either 16 or 32 KB) MPICH switches from the EAGER to the RENDEZVOUS protocol, which requires some additional handshaking between sender and receiver before data can actually be transferred; this might impact performance.
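As a rough sketch (the install prefix and UCX path below are just placeholders for your own setup), the reconfigure would look something like:

    ./configure --prefix=$HOME/soft/mpich-ucx \
                --with-device=ch4:ucx \
                --with-ucx=/path/to/ucx    # only needed if UCX is not installed in a default location
    make -j 8 && make install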
One thing that surprises me more, however, is that you get much better latency and bandwidth for small messages with MPICH than with the other MPI libraries. I would make sure you are actually running on separate nodes when using MPICH and not ending up with both processes on the same node. For some time there was a bug in MPICH's hydra with the slurm process manager that would silently fail to detect separate nodes when a node's hostname was not recognized properly.
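A quick sanity check, independent of MPICH itself, is to launch `hostname` through the same launcher you use for the benchmark and verify that two different node names come back, e.g.:

    mpiexec -n 2 hostname     # should print two different node names
    # or, if you go through slurm directly:
    srun -N 2 -n 2 hostname

If both lines show the same node name, your two ranks are sharing a node and you are measuring shared-memory performance rather than the network.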
Hope this helps,
Giuseppe Congiu
Postdoctoral Appointee
MCS Division
Argonne National Laboratory
9700 South Cass Ave., Lemont, IL 60439
On Nov 26, 2019, at 12:55 PM, Sajid Ali via discuss <discuss at mpich.org> wrote:
Hi MPICH-developers,
I've been trying to benchmark the MPI libraries currently available on my university cluster with the OSU micro-benchmarks. I used this script to build and run the benchmark: https://github.com/s-sajid-ali/nu-quest-mpi/blob/master/curr_mpi_bench/submit.sh
And here is a plot of the results I obtained:
[attachment: curr_mpi.png — bandwidth vs. message size plot]
Could someone give me a clue as to why the MPICH bandwidth vs. message size behaviour is so odd? Why does the bandwidth increase first and then fall off at large message sizes?
I'm also attaching the config.log for this particular MPICH build, which seems to indicate that MPICH was built with ch3 and nemesis:tcp. Does this mean that it didn't pick up the InfiniBand libraries, or that it is somehow using IPoIB? Is there a way to get this information?
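(Side note, and an assumption on my part: there is a `mpichversion` binary next to mpiexec in the install's bin directory; if that is the right way to query this, I would expect something like

    mpichversion    # should list the device and configure options used for this build

to report it, but I'm not sure how to interpret the output.)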
Thanks in advance for the help!
--
Sajid Ali | PhD Candidate
Applied Physics
Northwestern University
s-sajid-ali.github.io
<config.log>