[mpich-discuss] MPICH (v3.1) Performing slow on 10G Ethernet

Muhammad Ansar Javed muhammad.ansar at seecs.edu.pk
Tue Apr 15 15:29:46 CDT 2014


Hi Antonio,
Thanks for the response.
Here is the output of the iperf bandwidth test.
------------------------------------------------------------
Client connecting to 10g2, TCP port 5001
TCP window size: 1.41 MByte (default)
------------------------------------------------------------
[  5] local 192.168.1.33 port 34543 connected with 192.168.1.34 port 5001
[  4] local 192.168.1.33 port 5001 connected with 192.168.1.34 port 48531
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  11.2 GBytes  9.61 Gbits/sec
[  4]  0.0-10.0 sec  11.2 GBytes  9.59 Gbits/sec
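
For reference, bandwidth figures like the above come from a bidirectional
iperf run along the following lines (assuming iperf2, whose output format
matches the above; the exact invocation is a reconstruction):

    # on 10g2 (server side)
    iperf -s

    # on the other host (client side, bidirectional test)
    iperf -c 10g2 -d

Both directions sustain roughly 9.6 Gbits/sec, so the raw TCP path itself
looks healthy.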



On Tue, Apr 15, 2014 at 7:17 PM, "Antonio J. Peña" <apenya at mcs.anl.gov> wrote:

>
> Hi Ansar,
>
> The bandwidth numbers seem a little low. I'd expect something around 7,000
> Mbps. To rule out issues with MPI, you can try generic benchmarking tools
> such as those described here:
>
> http://linuxaria.com/article/tool-command-line-bandwidth-linux?lang=en
>
> If you get similar numbers, you'll have to check your setup. Otherwise,
> let us know and we'll try to see if there's any performance issue with
> MPICH.
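>
> As a concrete sketch of what "check your setup" usually involves on 10G
> Ethernet (the interface name and buffer values below are examples, not
> prescriptions): make sure MPICH actually runs over the 10G interface,
> e.g. via Hydra's -iface option, and that the TCP socket buffer limits
> are large enough for the bandwidth-delay product.
>
>     # run over a specific interface (the name is an example)
>     mpiexec -iface eth2 -n 2 -f machines ./bandwidth.out
>
>     # raise TCP socket buffer limits (values are illustrative)
>     sysctl -w net.core.rmem_max=16777216
>     sysctl -w net.core.wmem_max=16777216
>     sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
>     sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"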
>
> Best,
>   Antonio
>
>
>
> On 04/15/2014 06:35 AM, Muhammad Ansar Javed wrote:
>
> Hi,
> I am running benchmarks to evaluate MPICH v3.1 performance over a 10G
> Ethernet connection between two hosts. The results are lower than
> expected. Here is the complete set of numbers for the latency and
> bandwidth tests.
>
> mpj at host3:~/code/benchmarks$ mpiexec -n 2 -f machines ./latency.out
> Latency: MPICH-C
> Size (Bytes)   Time (us)
> 1              37.91
> 2              37.76
> 4              37.75
> 8              39.02
> 16             52.71
> 32             39.08
> 64             37.75
> 128            37.77
> 256            57.47
> 512            37.86
> 1024           37.76
> 2048           37.88
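>
> (The latency.out source is not attached; a minimal ping-pong sketch
> along the lines below, with the iteration count assumed, measures the
> same quantity, i.e. half of the averaged round-trip time. No warm-up
> iterations are shown; a real benchmark would add them.)
>
>     /* Minimal MPI ping-pong latency sketch; not the actual latency.out
>      * source. Run with: mpiexec -n 2 -f machines ./a.out */
>     #include <mpi.h>
>     #include <stdio.h>
>
>     int main(int argc, char **argv)
>     {
>         char buf[2048];
>         int rank;
>         const int reps = 1000;  /* assumed iteration count */
>
>         MPI_Init(&argc, &argv);
>         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>         for (int size = 1; size <= 2048; size *= 2) {
>             MPI_Barrier(MPI_COMM_WORLD);
>             double t0 = MPI_Wtime();
>             for (int i = 0; i < reps; i++) {
>                 if (rank == 0) {
>                     MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
>                     MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
>                              MPI_STATUS_IGNORE);
>                 } else {
>                     MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
>                              MPI_STATUS_IGNORE);
>                     MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
>                 }
>             }
>             if (rank == 0)  /* one-way latency in microseconds */
>                 printf("%d  %.2f\n", size,
>                        (MPI_Wtime() - t0) / (2.0 * reps) * 1e6);
>         }
>         MPI_Finalize();
>         return 0;
>     }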
>
> mpj at host3:~/code/benchmarks$ mpiexec -n 2 -f machines ./bandwidth.out
> Bandwidth: MPICH-C
> Size (Bytes)   Bandwidth (Mbps)
> 2048           412.32
> 4096           820.06
> 8192           827.77
> 16384          1644.36
> 32768          2207.52
> 65536          4368.76
> 131072         2942.93
> 262144         4281.17
> 524288         4773.78
> 1048576        5310.85
> 2097152        5382.94
> 4194304        5518.97
> 8388608        5508.87
> 16777216       5498.93
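>
> (Likewise, a minimal streaming sketch of what bandwidth.out presumably
> measures, with the repetition count assumed: rank 0 streams reps
> messages back-to-back and stops the clock only after a one-byte ack,
> so the result reflects sustained unidirectional throughput.)
>
>     /* Minimal MPI bandwidth sketch; not the actual bandwidth.out
>      * source. Run with: mpiexec -n 2 -f machines ./a.out */
>     #include <mpi.h>
>     #include <stdio.h>
>     #include <stdlib.h>
>
>     int main(int argc, char **argv)
>     {
>         const int reps = 64;                  /* assumed window size */
>         const int maxsize = 16 * 1024 * 1024;
>         char *buf = malloc(maxsize);
>         char ack;
>         int rank;
>
>         MPI_Init(&argc, &argv);
>         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>         for (int size = 2048; size <= maxsize; size *= 2) {
>             MPI_Barrier(MPI_COMM_WORLD);
>             double t0 = MPI_Wtime();
>             if (rank == 0) {
>                 for (int i = 0; i < reps; i++)
>                     MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
>                 MPI_Recv(&ack, 1, MPI_CHAR, 1, 1, MPI_COMM_WORLD,
>                          MPI_STATUS_IGNORE);
>                 /* bits transferred / elapsed seconds -> Mbps */
>                 printf("%d  %.2f\n", size,
>                        8.0 * size * reps / (MPI_Wtime() - t0) / 1e6);
>             } else {
>                 for (int i = 0; i < reps; i++)
>                     MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
>                              MPI_STATUS_IGNORE);
>                 MPI_Send(&ack, 1, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
>             }
>         }
>         free(buf);
>         MPI_Finalize();
>         return 0;
>     }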
>
> My environment consists of two hosts with a point-to-point (switch-less)
> 10 Gbps Ethernet connection. The environment (OS, user, directory
> structure, etc.) is exactly the same on both hosts. There is no NAS or
> shared file system between them. I have attached the output of
> mpiexec -all.
>
> Are these numbers okay? If not, please suggest a methodology for
> improving performance.
>
> Thanks
>
> --
> Regards
>
> Ansar Javed
> HPC Lab
> SEECS NUST
> Contact: +92 334 438 9394
> Email: muhammad.ansar at seecs.edu.pk
>
>
> --
> Antonio J. Peña
> Postdoctoral Appointee
> Mathematics and Computer Science Division
> Argonne National Laboratory
> 9700 South Cass Avenue, Bldg. 240, Of. 3148
> Argonne, IL 60439-4847
> apenya at mcs.anl.gov
> www.mcs.anl.gov/~apenya
>
>
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss
>



-- 
Regards

Ansar Javed
HPC Lab
SEECS NUST
Contact: +92 334 438 9394
Email: muhammad.ansar at seecs.edu.pk