[mpich-discuss] osu_latency test: why 8KB takes less time than 4KB and 2KB takes less time than 1KB?

Min Si msi at anl.gov
Wed Jun 20 12:39:30 CDT 2018


Hi Abu,

I think Jeff means that you should run your experiment with more 
iterations in order to get stable results.
- Increase the number of iterations of the timing loop in each execution 
(I think the OSU benchmark lets you set it).
- Run the experiment 10 or 100 times, and take the average and standard 
deviation.

If you see a very small standard deviation (e.g., <=5%), then the trend 
is stable and you might not see such gaps.
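
For example, here is a minimal sketch of such a measurement: a ping-pong
loop with warm-up iterations, repeated several times so that a mean and
standard deviation can be reported. This is not osu_latency's actual code;
the message size, loop counts, and the use of half the round-trip time as
the one-way latency are illustrative assumptions.

/* latency_sketch.c: run with exactly two ranks. */
#include <mpi.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_SIZE 1024    /* bytes per message (e.g. 1KB) */
#define WARMUP   100     /* untimed iterations before measuring */
#define ITERS    10000   /* timed ping-pong iterations per repetition */
#define REPS     10      /* independent repetitions for mean/stddev */

int main(int argc, char **argv)
{
    int rank;
    char *buf = malloc(MSG_SIZE);
    double lat[REPS];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int r = 0; r < REPS; r++) {
        double t0 = 0.0;
        MPI_Barrier(MPI_COMM_WORLD);
        for (int i = 0; i < WARMUP + ITERS; i++) {
            if (i == WARMUP)
                t0 = MPI_Wtime();          /* start timing after warm-up */
            if (rank == 0) {
                MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        /* one-way latency in microseconds: half the round-trip time */
        lat[r] = (MPI_Wtime() - t0) * 1e6 / (2.0 * ITERS);
    }

    if (rank == 0) {
        double mean = 0.0, var = 0.0;
        for (int r = 0; r < REPS; r++)
            mean += lat[r];
        mean /= REPS;
        for (int r = 0; r < REPS; r++)
            var += (lat[r] - mean) * (lat[r] - mean);
        double stddev = sqrt(var / REPS);
        printf("latency: mean %.2f us, stddev %.2f us (%.1f%%)\n",
               mean, stddev, 100.0 * stddev / mean);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Compile with something like "mpicc -O2 latency_sketch.c -o latency_sketch -lm"
and run with exactly two ranks; if the standard deviation stays small across
repetitions, the trend in the numbers can be trusted.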

Best regards,
Min
On 2018/06/20 12:14, Abu Naser wrote:
>
> Hello Jeff,
>
>
> Yes, I am using a switch, and the other machines are also connected to 
> that switch.
>
> If I remove the other machines and just use my two nodes with the switch, 
> will that improve the performance over 200 ~ 400 iterations?
>
> Meanwhile I will give it a try with a single dedicated cable.
>
>
> Thank you.
>
>
> Best Regards,
>
> Abu Naser
>
> ------------------------------------------------------------------------
> From: Jeff Hammond <jeff.science at gmail.com>
> Sent: Wednesday, June 20, 2018 12:52:06 PM
> To: MPICH
> Subject: Re: [mpich-discuss] osu_latency test: why 8KB takes less 
> time than 4KB and 2KB takes less time than 1KB?
> Is the ethernet connection a single dedicated cable between the two 
> machines or are you running through a switch that handles other traffic?
>
> My best guess is that this is noise and that you may be able to avoid 
> it by running for a very long time, e.g. 10000 iterations.
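>
> For example, assuming a recent OSU micro-benchmarks build in which
> osu_latency accepts -x (warm-up iterations) and -i (timed iterations);
> older versions may need the loop counts changed in the source. The
> hostnames here are placeholders:
>
>     $ mpiexec -n 2 -hosts node1,node2 ./osu_latency -x 1000 -i 10000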
>
> Jeff
>
> On Wed, Jun 20, 2018 at 6:53 AM, Abu Naser <an16e at my.fsu.edu> wrote:
>
>
>     Good day to all,
>
>
>     I ran the point-to-point osu_latency test on two nodes 200 times.
>     The following are the average times in microseconds for various
>     message sizes -
>
>     1KB     84.8514 us
>     2KB     73.52535 us
>     4KB     272.55275 us
>     8KB     234.86385 us
>     16KB    288.88 us
>     32KB    523.3725 us
>     64KB    910.4025 us
>
>
>     From the above, it looks like the 2KB message has lower latency than
>     1KB and the 8KB message has lower latency than 4KB.
>
>     I was looking for an explanation of this behavior but did not find any.
>
>
>      1. MPIR_CVAR_CH3_EAGER_MAX_MSG_SIZE is set to 128KB, so none of
>         the above message sizes uses the rendezvous protocol. Is there
>         any partitioning inside the eager protocol (e.g. 0 - 512 bytes,
>         1KB - 8KB, 16KB - 64KB)? If yes, what are the boundaries? Can I
>         log them with debug event logging? (A command sketch follows
>         below.)
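>
>     One way I could probe the eager/rendezvous boundary (though not any
>     internal eager partitioning) is to lower that CVAR at run time and
>     look for a jump in latency at the new threshold; the 4096-byte value
>     and the hostnames here are placeholders:
>
>     $ mpiexec -n 2 -genv MPIR_CVAR_CH3_EAGER_MAX_MSG_SIZE 4096 \
>           -hosts node1,node2 ./osu_latency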
>
>
>     Setup I am using:
>
>     - two nodes with Intel Core i7, one with 16GB memory and the other with 8GB
>
>     - mpich 3.2.1, configured and built to use nemesis tcp
>
>     - 1Gb Ethernet connection
>
>     - NFS is used for file sharing
>
>     - osu_latency: uses MPI_Send and MPI_Recv
>
>     - MPIR_CVAR_CH3_EAGER_MAX_MSG_SIZE = 131072 (128KB)
>
>
>     Can anyone help me on that? Thanks in advance.
>
>
>
>
>     Best Regards,
>
>     Abu Naser
>
>
>
>
>
>
> -- 
> Jeff Hammond
> jeff.science at gmail.com
> http://jeffhammond.github.io/
>
>
