[mpich-discuss] MPICH (v3.1) Performing slow on 10G Ethernet
Atchley, Scott
atchleyes at ornl.gov
Mon Apr 21 06:40:59 CDT 2014
From the netperf manual:
-r <sizespec>
This option sets the request (first value) and/or response (second value) sizes for an _RR test. By default the units are bytes, but a suffix of “G,” “M,” or “K” will specify the units to be 2^30 (GB), 2^20 (MB) or 2^10 (KB) respectively. A suffix of “g,” “m” or “k” will specify units of 10^9, 10^6 or 10^3 bytes respectively. For example:
-r 128,16K
Will set the request size to 128 bytes and the response size to 16 KB or 16384 bytes. [Default: 1 - a single-byte request and response ]
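As a sketch of how that option is used (assuming netperf is installed and netserver is already running on the remote host; "remotehost" is a placeholder hostname):

```shell
# Hypothetical invocation: TCP request/response test with a 128-byte
# request and a 16 KB response ("remotehost" is a placeholder).
netperf -H remotehost -t TCP_RR -- -r 128,16K
```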
I thought OSU had a streaming benchmark, but I have not used it in a long time.
On Apr 17, 2014, at 7:47 AM, Muhammad Ansar Javed <muhammad.ansar at seecs.edu.pk> wrote:
>
> I am running the bandwidth test from the OSU Micro-Benchmarks and a simple ping-pong program implemented using MPI Send and Recv. I have also tested bandwidth with the NetPipe MPI implementation, but there is no improvement in the results.
>
> Interrupt coalescing was ON by default, and disabling it makes the latency consistent at 37 us for all message sizes from 1 byte to 2 MB. I have applied the following vendor-recommended optimizations for the network buffers. Latency is not increasing with message size, maybe because of these settings.
> net.core.rmem_max = 16777216
> net.core.wmem_max = 16777216
> net.ipv4.tcp_rmem = 4096 87380 16777216
> net.ipv4.tcp_wmem = 4096 65536 16777216
> net.core.netdev_max_backlog = 250000
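For reference, one way those settings might be applied at runtime is with sysctl, with interrupt coalescing toggled via ethtool; a minimal sketch, assuming root privileges and with "eth2" standing in for the actual 10G device name:

```shell
# Apply the buffer settings above at runtime (root required);
# add them to /etc/sysctl.conf to persist across reboots.
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
sysctl -w net.core.netdev_max_backlog=250000

# Disable interrupt coalescing on the NIC
# ("eth2" is a placeholder device name).
ethtool -C eth2 rx-usecs 0 tx-usecs 0
```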
>
>
> I have not run a streaming benchmark. I found a streaming benchmark (http://www.cs.virginia.edu/stream/), but an MPI C implementation of it is not available. Can you please suggest an MPI-C streaming benchmark?
>
> The latency test with netperf TCP_RR gives 37 us for a message size of 1 byte. Moreover, according to the netperf docs, _RR tests support only a 1-byte message size by default. Please correct me if I am wrong.
>
> Thanks,
>
>
>
> On Wed, Apr 16, 2014 at 5:36 PM, Atchley, Scott <atchleyes at ornl.gov> wrote:
> On Apr 16, 2014, at 8:35 AM, Scott Atchley <atchleyes at ornl.gov> wrote:
>
> > Keep in mind that iperf is a streaming test. What MPI application are you running? IMB pingpong?
> >
> > You should review your NIC vendor's tuning FAQ. Is interrupt coalescing turned on? Normally it is, but it increases small-message latency.
> >
> > As for bandwidth, have you tried a streaming MPI benchmark?
>
> Or run Netperf's TCP_RR for each message size to see how it compares to pingpong.
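Such a per-size sweep could look like this (a sketch, again assuming netperf is installed, netserver runs on the remote host, and "remotehost" is a placeholder):

```shell
# Run TCP_RR for each power-of-two size from 1 byte to 2 KB, using the
# same size for request and response ("remotehost" is a placeholder).
for size in 1 2 4 8 16 32 64 128 256 512 1024 2048; do
  netperf -H remotehost -t TCP_RR -l 10 -- -r ${size},${size}
done
```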
>
> >
> > Scott
> >
> > On Apr 15, 2014, at 4:29 PM, Muhammad Ansar Javed <muhammad.ansar at seecs.edu.pk> wrote:
> >
> >> Hi Antonio,
> >> Thanks for response.
> >> Here is output of iperf bandwidth test.
> >> ------------------------------------------------------------
> >> Client connecting to 10g2, TCP port 5001
> >> TCP window size: 1.41 MByte (default)
> >> ------------------------------------------------------------
> >> [ 5] local 192.168.1.33 port 34543 connected with 192.168.1.34 port 5001
> >> [ 4] local 192.168.1.33 port 5001 connected with 192.168.1.34 port 48531
> >> [ ID] Interval Transfer Bandwidth
> >> [ 5] 0.0-10.0 sec 11.2 GBytes 9.61 Gbits/sec
> >> [ 4] 0.0-10.0 sec 11.2 GBytes 9.59 Gbits/sec
> >>
> >>
> >>
> >> On Tue, Apr 15, 2014 at 7:17 PM, "Antonio J. Peña" <apenya at mcs.anl.gov> wrote:
> >>
> >> Hi Ansar,
> >>
> >> The bandwidth numbers seem a little low; I'd expect something around 7,000 Mbps. To rule out issues with MPI, you can try generic benchmarking tools such as those described here:
> >>
> >> http://linuxaria.com/article/tool-command-line-bandwidth-linux?lang=en
> >>
> >> If you get similar numbers, you'll have to check your setup. Otherwise, let us know and we'll try to see if there's any performance issue with MPICH.
> >>
> >> Best,
> >> Antonio
> >>
> >>
> >>
> >> On 04/15/2014 06:35 AM, Muhammad Ansar Javed wrote:
> >>> Hi,
> >>> I am running benchmarks for an MPICH v3.1 performance evaluation over a 10G Ethernet connection between two hosts. The performance results are lower than expected. Here is the complete set of numbers for the latency and bandwidth tests.
> >>>
> >>> mpj@host3:~/code/benchmarks$ mpiexec -n 2 -f machines ./latency.out
> >>> Latency: MPICH-C
> >>> Size (Bytes) Time (us)
> >>> 1 _________ 37.91
> >>> 2 _________ 37.76
> >>> 4 _________ 37.75
> >>> 8 _________ 39.02
> >>> 16 ________ 52.71
> >>> 32 ________ 39.08
> >>> 64 ________ 37.75
> >>> 128 _______ 37.77
> >>> 256 _______ 57.47
> >>> 512 _______ 37.86
> >>> 1024 ______ 37.76
> >>> 2048 ______ 37.88
> >>>
> >>> mpj@host3:~/code/benchmarks$ mpiexec -n 2 -f machines ./bandwidth.out
> >>> Bandwidth: MPICH-C
> >>> Size(Bytes) Bandwidth (Mbps)
> >>> 2048 ______ 412.32
> >>> 4096 ______ 820.06
> >>> 8192 ______ 827.77
> >>> 16384 _____ 1644.36
> >>> 32768 _____ 2207.52
> >>> 65536 _____ 4368.76
> >>> 131072 ____ 2942.93
> >>> 262144 ____ 4281.17
> >>> 524288 ____ 4773.78
> >>> 1048576 ___ 5310.85
> >>> 2097152 ___ 5382.94
> >>> 4194304 ___ 5518.97
> >>> 8388608 ___ 5508.87
> >>> 16777216 __ 5498.93
> >>>
> >>> My environment consists of two hosts with a point-to-point (switch-less) 10 Gbps Ethernet connection. The environment (OS, user, directory structure, etc.) on both hosts is exactly the same. There is no NAS or shared file system between the hosts. I have attached the output of
> >>> mpiexec -all.
> >>>
> >>> Are these numbers okay? If not, then please suggest a performance-improvement methodology.
> >>>
> >>> Thanks
> >>>
> >>> --
> >>> Regards
> >>>
> >>> Ansar Javed
> >>> HPC Lab
> >>> SEECS NUST
> >>> Contact: +92 334 438 9394
> >>> Email: muhammad.ansar at seecs.edu.pk
> >>>
> >>>
> >>> _______________________________________________
> >>> discuss mailing list
> >>> discuss at mpich.org
> >>> To manage subscription options or unsubscribe:
> >>> https://lists.mpich.org/mailman/listinfo/discuss
> >>
> >>
> >> --
> >> Antonio J. Peña
> >> Postdoctoral Appointee
> >> Mathematics and Computer Science Division
> >> Argonne National Laboratory
> >> 9700 South Cass Avenue, Bldg. 240, Of. 3148
> >> Argonne, IL 60439-4847
> >>
> >> apenya at mcs.anl.gov
> >> www.mcs.anl.gov/~apenya
> >>
> >
>