Huiwei,

Thanks for your email. Your answer leads to another question of mine about asynchronous MPI communication.

I am trying to overlap communication with computation to speed up my MPI code. I have read papers comparing different ways to do this: a "naive" implementation that only uses non-blocking MPI_Isend/MPI_Irecv, and a hybrid MPI+OpenMP approach in which a separate thread handles all of the non-blocking communication. Exactly as you said, their results indicate that current MPI implementations do not support true asynchronous communication.

With the naive approach, my code shows almost the same wall-clock time (MPI_Wtime) whether I use non-blocking or blocking send/recv: all of the communication is effectively postponed to MPI_Wait.

I have also tried calling MPI_Test to push the library to make progress during the iterations, and dedicating one thread to communication while the other threads only compute. However, the performance gain is very small or nonexistent. I wonder whether this is due to the hardware; the cluster I tested on uses 10G Ethernet cards.

Best,

Lei Shi

On Fri, Apr 3, 2015 at 8:49 AM, Huiwei Lu <huiweilu@mcs.anl.gov> wrote:

Hi Lei,

As far as I know, no current MPI implementation supports true asynchronous communication yet; i.e., if there are no MPI calls inside your iterations, MPICH will not be able to make progress on the communication.

One solution is to poll the MPI runtime regularly by inserting MPI_Test into your iterations (even though you do not want to check the data).

Another solution is to enable MPI's asynchronous progress thread to make progress for you.

--
Huiwei
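For reference, a minimal self-contained sketch of the first suggestion: polling MPI_Testall inside the compute loop so MPICH gets a chance to move the messages while the ranks keep computing. The ring exchange, message size, and dummy work loop below are placeholders for illustration, not Lei's actual code.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1 << 20;                        /* placeholder message size */
    double *sendbuf = malloc(n * sizeof(double));
    double *recvbuf = malloc(n * sizeof(double));
    for (int i = 0; i < n; i++) sendbuf[i] = rank;

    int right = (rank + 1) % size;                /* ring exchange: send right, receive from left */
    int left  = (rank - 1 + size) % size;

    MPI_Request reqs[2];
    MPI_Irecv(recvbuf, n, MPI_DOUBLE, left, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, n, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* "Computation" interleaved with MPI_Testall: each call returns
     * immediately, but it lets MPICH progress the two transfers. */
    double work = 0.0;
    int done = 0;
    for (int sweep = 0; sweep < 100; sweep++) {
        for (int i = 0; i < n; i++)
            work += sendbuf[i] * 1e-9;            /* stand-in for real computation */
        if (!done)
            MPI_Testall(2, reqs, &done, MPI_STATUSES_IGNORE);
    }

    /* The requests still have to be completed before recvbuf is safe to read;
     * this returns immediately if MPI_Testall already finished them. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    printf("rank %d: recvbuf[0] = %g, work = %g\n", rank, recvbuf[0], work);
    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

For the second suggestion, MPICH's asynchronous progress thread is, to the best of my knowledge, enabled in the 3.x series by setting MPIR_CVAR_ASYNC_PROGRESS=1 in the environment (older releases used MPICH_ASYNC_PROGRESS=1) and generally requires initializing MPI with MPI_THREAD_MULTIPLE; the exact knob is version-dependent, so check the documentation for your MPICH release.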
<br><div class="gmail_quote">On Thu, Apr 2, 2015 at 11:44 PM, Lei Shi <span dir="ltr"><<a href="mailto:lshi@ku.edu" target="_blank">lshi@ku.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><span style="font-size:12.8000001907349px">Hi Junchao,</span><div style="font-size:12.8000001907349px"><br></div><div style="font-size:12.8000001907349px">Thanks for your reply. For my case, I don't want to check the data have been received or not. So I don't want to call MPI_Test or any function to verify it. But my problem is like if I ignore calling the MPI_Wait, just call Isend/Irev, my program freezes for several sec and then continues to run. My guess is probably I messed up the MPI library internal buffer by doing this. </div><font face="yw-d1fcbaa1b12e0f6b1beef0b50d5ebbd873d1b8f9-de13265fec9b0ac73ef74c2840d1127c--to"></font><img src="https://t.yesware.com/t/d1fcbaa1b12e0f6b1beef0b50d5ebbd873d1b8f9/de13265fec9b0ac73ef74c2840d1127c/spacer.gif" style="border:0;width:0;min-height:0;overflow:hidden" width="0" height="0"><img src="http://t.yesware.com/t/d1fcbaa1b12e0f6b1beef0b50d5ebbd873d1b8f9/de13265fec9b0ac73ef74c2840d1127c/spacer.gif" style="border:0;width:0;min-height:0;overflow:hidden" width="0" height="0"></div><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Apr 2, 2015 at 7:25 PM, Junchao Zhang <span dir="ltr"><<a href="mailto:jczhang@mcs.anl.gov" target="_blank">jczhang@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Does MPI_Test fit your needs?</div><div class="gmail_extra"><br clear="all"><div><div><div dir="ltr">--Junchao Zhang</div></div></div>
<br><div class="gmail_quote"><div><div>On Thu, Apr 2, 2015 at 7:16 PM, Lei Shi <span dir="ltr"><<a href="mailto:lshi@ku.edu" target="_blank">lshi@ku.edu</a>></span> wrote:<br></div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div><div dir="ltr"><div style="font-size:12.8000001907349px">I want to use non-blocking send/rev MPI_Isend/MPI_Irev to do communication. But in my case, I don't really care what kind of data I get or it is ready to use or not. So I don't want to waste my time to do any synchronization by calling MPI_Wait or etc API. </div><div style="font-size:12.8000001907349px"><br></div><div style="font-size:12.8000001907349px">But when I avoid calling MPI_Wait, my program is freezed several secs after running some iterations (after multiple MPI_Isend/Irev callings), then continues. It takes even more time than the case with MPI_Wait. So my question is how to do a "true" non-blocking communication without waiting for the data ready or not. Thanks.</div><img src="https://t.yesware.com/t/d1fcbaa1b12e0f6b1beef0b50d5ebbd873d1b8f9/fb9de0cf5d9a327366801b8ac373d692/spacer.gif" style="border:0px;width:0px;min-height:0px;overflow:hidden" width="0" height="0"><img src="http://t.yesware.com/t/d1fcbaa1b12e0f6b1beef0b50d5ebbd873d1b8f9/fb9de0cf5d9a327366801b8ac373d692/spacer.gif" style="border:0px;width:0px;min-height:0px;overflow:hidden" width="0" height="0"><font face="yw-d1fcbaa1b12e0f6b1beef0b50d5ebbd873d1b8f9-fb9de0cf5d9a327366801b8ac373d692--to"></font></div>
_______________________________________________
discuss mailing list     discuss@mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss