[mpich-discuss] osu_latency test: why 8KB takes less time than 4KB and 2KB takes less time than 1KB?

Abu Naser an16e at my.fsu.edu
Mon Jul 2 17:52:49 CDT 2018


Hello Min,


Now it works in some cases but not in others.


Cases when it worked:

- When an application binary (e.g., cpi, osu_bw, osu_latency) is compiled with the other mpicc (the one generated when configured with tcp), the mpiexec generated when configured with sock can run it.


Cases not working:

- When an application binary (e.g., cpi) is compiled with the mpicc generated when configured with sock, the mpiexec generated when configured with sock cannot run it and produces the same error message. [The library path was set.]
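
For concreteness, the two combinations look like this (the install prefixes are just how I happen to have the two builds laid out):

# works: binary from the tcp-build mpicc, launcher from the sock build
~/mpich-tcp/bin/mpicc examples/cpi.c -o cpi
~/mpich-sock/bin/mpiexec -n 4 -hosts dhcp16198,dhcp16194 ./cpi

# fails with the same error: binary and launcher both from the sock build
~/mpich-sock/bin/mpicc examples/cpi.c -o cpi
~/mpich-sock/bin/mpiexec -n 4 -hosts dhcp16198,dhcp16194 ./cpi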


Thank you.



Best Regards,

Abu Naser

________________________________
From: Min Si <msi at anl.gov>
Sent: Monday, July 2, 2018 2:10:23 PM
To: discuss at mpich.org
Subject: Re: [mpich-discuss] osu_latency test: why 8KB takes less time than 4KB and 2KB takes less time than 1KB?

Could you please try mpich-3.3b3?
http://www.mpich.org/static/downloads/3.3b3/mpich-3.3b3.tar.gz
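
In case it matters, the usual build sequence for the sock channel would be something like this (the install prefix is just an example):

tar xzf mpich-3.3b3.tar.gz && cd mpich-3.3b3
./configure --prefix=$HOME/mpich-3.3b3-sock --with-device=ch3:sock
make && make install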

Min
On 2018/07/02 13:01, Abu Naser wrote:

Hello Min,


I downloaded it from http://www.mpich.org/static/downloads/3.2.1/mpich-3.2.1.tar.gz but it did not work. I received almost the same error, except this time there was no process information from my remote machine.

Previously I received this:

Process 3 of 4 is on dhcp16194
Process 1 of 4 is on dhcp16194
Process 0 of 4 is on dhcp16198
Process 2 of 4 is on dhcp16198

With the new source code:

Process 0 of 4 is on dhcp16198
Process 2 of 4 is on dhcp16198


The entire error message is:

Process 0 of 4 is on dhcp16198
Process 2 of 4 is on dhcp16198
Fatal error in PMPI_Bcast: Unknown error class, error stack:
PMPI_Bcast(1600)............................: MPI_Bcast(buf=0x7ffd1ee145f0, count=1, MPI_INT, root=0, MPI_COMM_WORLD) failed
MPIR_Bcast_impl(1452).......................:
MPIR_Bcast(1476)............................:
MPIR_Bcast_intra(1249)......................:
MPIR_SMP_Bcast(1081)........................:
MPIR_Bcast_binomial(285)....................:
MPIC_Send(303)..............................:
MPIC_Wait(226)..............................:
MPIDI_CH3i_Progress_wait(242)...............: an error occurred while handling an event returned by MPIDU_Sock_Wait()
MPIDI_CH3I_Progress_handle_sock_event(698)..:
MPIDI_CH3_Sockconn_handle_connect_event(597): [ch3:sock] failed to connnect to remote process
MPIDU_Socki_handle_connect(808).............: connection failure (set=0,sock=1,errno=111:Connection refused)
MPIR_SMP_Bcast(1088)........................:
MPIR_Bcast_binomial(310)....................: Failure during collective
Fatal error in PMPI_Bcast: Other MPI error, error stack:
PMPI_Bcast(1600)........: MPI_Bcast(buf=0x7ffe2eeb90f0, count=1, MPI_INT, root=0, MPI_COMM_WORLD) failed
MPIR_Bcast_impl(1452)...:
MPIR_Bcast(1476)........:
MPIR_Bcast_intra(1249)..:
MPIR_SMP_Bcast(1088)....:
MPIR_Bcast_binomial(310): Failure during collective


Again, if I configure the new source with tcp, it works fine.


Thank you.


Best Regards,

Abu Naser

________________________________
From: Min Si <msi at anl.gov>
Sent: Monday, July 2, 2018 11:56:51 AM
To: discuss at mpich.org
Subject: Re: [mpich-discuss] osu_latency test: why 8KB takes less time than 4KB and 2KB takes less time than 1KB?

Hi Abu,

Thanks for reporting this. Can you please try the latest release with ch3/sock and see if you still have this error?

Min
On 2018/07/01 21:47, Abu Naser wrote:

Hello Min,


After compiling my mpich-3.2.1 with sock, whenever I tried to run any program, including the OSU benchmarks or examples/cpi, across two machines, I received the following error:


Process 3 of 4 is on dhcp16194
Process 1 of 4 is on dhcp16194
Process 0 of 4 is on dhcp16198
Process 2 of 4 is on dhcp16198
Fatal error in PMPI_Bcast: Unknown error class, error stack:
PMPI_Bcast(1600)............................: MPI_Bcast(buf=0x7ffc1808542c, count=1, MPI_INT, root=0, MPI_COMM_WORLD) failed
MPIR_Bcast_impl(1452).......................:
MPIR_Bcast(1476)............................:
MPIR_Bcast_intra(1249)......................:
MPIR_SMP_Bcast(1081)........................:
MPIR_Bcast_binomial(285)....................:
MPIC_Send(303)..............................:
MPIC_Wait(226)..............................:
MPIDI_CH3i_Progress_wait(242)...............: an error occurred while handling an event returned by MPIDU_Sock_Wait()
MPIDI_CH3I_Progress_handle_sock_event(698)..:
MPIDI_CH3_Sockconn_handle_connect_event(597): [ch3:sock] failed to connnect to remote process
MPIDU_Socki_handle_connect(808).............: connection failure (set=0,sock=1,errno=111:Connection refused)
MPIR_SMP_Bcast(1088)........................:
MPIR_Bcast_binomial(310)....................: Failure during collective
Fatal error in PMPI_Bcast: Other MPI error, error stack:
PMPI_Bcast(1600)........: MPI_Bcast(buf=0x7ffd9eeebdac, count=1, MPI_INT, root=0, MPI_COMM_WORLD) failed
MPIR_Bcast_impl(1452)...:
MPIR_Bcast(1476)........:
MPIR_Bcast_intra(1249)..:
MPIR_SMP_Bcast(1088)....:
MPIR_Bcast_binomial(310): Failure during collective


I checked the MPICH FAQ and the mpich discussion list. Based on that, I verified the following and found them all fine on my machines:

- The firewall is disabled on both machines.

- Passwordless ssh works between both machines.

- /etc/hosts on both machines is configured with the proper IP addresses and host names.

- I have updated the library path and used the absolute path for mpiexec.

- Most importantly, when I configure and build MPICH with tcp, it works fine.
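
Since errno 111 is ECONNREFUSED, one more check I can try is a raw TCP connection between the two nodes on an arbitrary port (the exact flags depend on the netcat variant):

on dhcp16194:  nc -l 5555     (or: nc -l -p 5555)
on dhcp16198:  nc dhcp16194 5555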


I think I am missing something but have not been able to figure out what yet. Any help would be appreciated.


Thank you.






Best Regards,

Abu Naser

________________________________
From: Min Si <msi at anl.gov>
Sent: Tuesday, June 26, 2018 12:54:29 PM
To: discuss at mpich.org
Subject: Re: [mpich-discuss] osu_latency test: why 8KB takes less time than 4KB and 2KB takes less time than 1KB?

Hi Abu,

I think the results are stable enough. Perhaps you could also try the following tests and see if a similar trend exists:
- MPICH/socket (set `--with-device=ch3:sock` at configure)
- A socket-based pingpong test without MPI.
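
For the second test, a minimal sketch along these lines should do (untested; the port, message size, and iteration count are arbitrary placeholders):

/* Raw TCP pingpong, no MPI. Run the server first:
 *   ./pingpong              (server)
 *   ./pingpong <server>     (client, prints one-way latency)
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/time.h>

#define PORT  5555
#define SIZE  8192    /* message size to test: 1024, 2048, 4096, 8192, ... */
#define ITERS 10000

/* send or receive exactly n bytes */
static void xfer(int fd, char *buf, int n, int sending)
{
    int off = 0, r;
    while (off < n) {
        r = sending ? send(fd, buf + off, n - off, 0)
                    : recv(fd, buf + off, n - off, 0);
        if (r <= 0) { perror("xfer"); exit(1); }
        off += r;
    }
}

int main(int argc, char **argv)
{
    static char buf[SIZE];
    int fd, one = 1;

    if (argc > 1) {   /* client: ping, then wait for the pong */
        struct hostent *h = gethostbyname(argv[1]);
        struct sockaddr_in a = { .sin_family = AF_INET, .sin_port = htons(PORT) };
        if (!h) { fprintf(stderr, "unknown host\n"); return 1; }
        memcpy(&a.sin_addr, h->h_addr_list[0], h->h_length);
        fd = socket(AF_INET, SOCK_STREAM, 0);
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
        if (connect(fd, (struct sockaddr *)&a, sizeof a) < 0) { perror("connect"); return 1; }

        struct timeval t0, t1;
        gettimeofday(&t0, NULL);
        for (int i = 0; i < ITERS; i++) {
            xfer(fd, buf, SIZE, 1);
            xfer(fd, buf, SIZE, 0);
        }
        gettimeofday(&t1, NULL);
        double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
        printf("%d bytes: %.2f us one-way\n", SIZE, us / ITERS / 2.0);
    } else {          /* server: echo every message back */
        struct sockaddr_in a = { .sin_family = AF_INET, .sin_port = htons(PORT),
                                 .sin_addr.s_addr = htonl(INADDR_ANY) };
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
        if (bind(lfd, (struct sockaddr *)&a, sizeof a) < 0) { perror("bind"); return 1; }
        listen(lfd, 1);
        fd = accept(lfd, NULL, NULL);
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
        for (int i = 0; i < ITERS; i++) {
            xfer(fd, buf, SIZE, 0);
            xfer(fd, buf, SIZE, 1);
        }
        close(lfd);
    }
    close(fd);
    return 0;
}

Comparing its 1k/2k/4k/8k numbers against osu_latency would show whether the dips come from the network stack rather than from MPICH.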

At this point, I cannot think of any MPI-specific design that would single out 2k/8k messages. My guess is that it is related to your network connection.

Min

On 2018/06/24 11:09, Abu Naser wrote:

Hello Min and Jeff,


Here are my experiment results. The default number of iterations in osu_latency for 0B-8KB messages is 10,000. With that setting, I ran osu_latency 100 times and found a standard deviation of 33 for the 8KB message size.


So later I set the iteration count to 50,000 and 100,000 for the 1KB-16KB message sizes, ran osu_latency 100 times for each setting, and took the average and standard deviation.


Msg size    Avg time in us    Avg time in us     Std dev            Std dev
(bytes)     (50K iters)       (100K iters)       (50K iters)        (100K iters)
1k          85.10             84.9               0.55               0.45
2k          75.79             74.63              5.09               4.44
4k          273.80            274.71             4.18               2.45
8k          258.56            249.83             21.14              28
16k         281.31            281.02             3.22               4.10




The standard deviation for the 8K message is so high that it implies the latency is simply not consistent. That appears to be the reason 8K takes less time than 4K.


Meanwhile, 2K has a standard deviation below 5, but the 1K latency timings are more densely clustered than the 2K ones. That probably explains the lower latency for 2K messages.
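
Checking against your <=5% rule of thumb, the relative standard deviations at 50K iterations work out to roughly 0.6% for 1K (0.55/85.10), 6.7% for 2K (5.09/75.79), 1.5% for 4K (4.18/273.80), 8.2% for 8K (21.14/258.56, and 11.2% at 100K iterations), and 1.1% for 16K (3.22/281.31). So 2K and 8K are exactly the two unstable points.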


Thank you for your suggestions.




Best Regards,

Abu Naser

________________________________
From: Abu Naser
Sent: Wednesday, June 20, 2018 1:48:53 PM
To: discuss at mpich.org
Subject: Re: [mpich-discuss] osu_latency test: why 8KB takes less time than 4KB and 2KB takes less time than 1KB?


Hello Min,


Thanks for the clarification.  I will do the experiment.


Thanks.

Best Regards,

Abu Naser

________________________________
From: Min Si <msi at anl.gov>
Sent: Wednesday, June 20, 2018 1:39:30 PM
To: discuss at mpich.org
Subject: Re: [mpich-discuss] osu_latency test: why 8KB takes less time than 4KB and 2KB takes less time than 1KB?

Hi Abu,

I think Jeff means that you should run your experiment with more iterations in order to get stable results:
- Increase the number of iterations of the loop in each execution (I think the OSU benchmark allows you to set it).
- Run the experiments 10 or 100 times, and take the average and standard deviation.

If you see a very small standard deviation (e.g., <=5%), then the trend is stable and you might not see such gaps.
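
For example (the -i/-x option names below are from recent OSU micro-benchmark releases; please check osu_latency --help on your copy):

mpiexec -n 2 -hosts dhcp16198,dhcp16194 ./osu_latency -i 100000 -x 1000

Repeating that command 100 times and aggregating per message size would give you the averages and standard deviations.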

Best regards,
Min
On 2018/06/20 12:14, Abu Naser wrote:

Hello Jeff,


Yes, I am using a switch, and other machines are also connected to that switch.

If I remove the other machines and use only my two nodes on the switch, will that improve the results for runs of 200-400 iterations?

Meanwhile, I will give it a try with a single dedicated cable.


Thank you.


Best Regards,

Abu Naser

________________________________
From: Jeff Hammond <jeff.science at gmail.com>
Sent: Wednesday, June 20, 2018 12:52:06 PM
To: MPICH
Subject: Re: [mpich-discuss] osu_latency test: why 8KB takes less time than 4KB and 2KB takes less time than 1KB?

Is the ethernet connection a single dedicated cable between the two machines or are you running through a switch that handles other traffic?

My best guess is that this is noise and that you may be able to avoid it by running a very long time, e.g. 10000 iterations.

Jeff

On Wed, Jun 20, 2018 at 6:53 AM, Abu Naser <an16e at my.fsu.edu> wrote:


Good day to all,


I ran the point-to-point osu_latency test on two nodes 200 times. The following are the average times in microseconds for various message sizes:

1KB    84.8514 us
2KB    73.52535 us
4KB    272.55275 us
8KB    234.86385 us
16KB    288.88 us
32KB    523.3725 us
64KB    910.4025 us


