<meta http-equiv="Content-Type" content="text/html; charset=utf-8"><div>MPI_Wait does not guarantee that the data sent with MPI_Isend has been received. It merely means you can reuse the send buffer. If you want to be sure the data has been received, use MPI_Issend (synchronous send).</div><div><br></div><div>I assume that you don't actually care whether the data has been received, in which case you should not use synchronous send, since it may be slower than a regular send due to the additional synchronization.</div><div><br></div><div>Jeff</div><br><div class="gmail_quote">On Fri, Apr 3, 2015 at 2:46 PM, Lei Shi <span dir="ltr"><<a href="mailto:lshi@ku.edu" target="_blank">lshi@ku.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Pavan,<div><br></div><div>In my case, I don't care whether the data is correct or not. I know it sounds crazy at first, but there are some numerical schemes designed for this situation. </div><div><br></div><div>According to Jeff's post <a href="http://blogs.cisco.com/performance/dont-leak-mpi_requests" target="_blank">http://blogs.cisco.com/performance/dont-leak-mpi_requests</a>, if we don't call MPI_Wait or MPI_Test, the request object will not be released even if the data transfer completes. </div><div><br></div><div>I will test your suggestion to call <span style="font-size:12.6666669845581px">MPI_Request_free to release it manually. </span></div><div><span style="font-size:12.6666669845581px"><br></span></div><div><span style="font-size:12.6666669845581px">BTW, attached are my test results: </span></div><div><span style="font-size:12.6666669845581px">1. The orange one uses MPI_Isend/MPI_Irecv with MPI_Wait to make sure the data has been received successfully. The WTime is linear in the number of iterations, which is correct.</span></div><div><span style="font-size:12.6666669845581px"><br></span></div><div><span style="font-size:12.6666669845581px">2. 
The blue one uses MPI_Isend and MPI_Irecv without calling MPI_Wait. There are some jumps in WTime. Are they due to a resource leak? I will try calling MPI_Request_free to eliminate those jumps. </span></div><div><span style="font-size:12.6666669845581px"><br></span></div><div><span style="font-size:12.6666669845581px">3. The green one uses MPI_Ibsend and MPI_Irecv (should I use MPI_Recv with MPI_Ibsend?) without calling MPI_Wait. It is even slower than case 1 with MPI_Wait. </span></div><div><span style="font-size:12.6666669845581px"><br></span></div><div><span style="font-size:12.6666669845581px"><br></span></div><div><br></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Apr 3, 2015 at 1:29 PM, Balaji, Pavan <span dir="ltr"><<a href="mailto:balaji@anl.gov" target="_blank">balaji@anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
No, the request object will get released fine when the data transfer completes. It is a correct program. What I meant is that since the application never knows when the request completes, it cannot use the data in any meaningful way, IMO.<br>
<span><font color="#888888"><br>
-- Pavan<br>
</font></span><div><div><br>
> On Apr 3, 2015, at 12:13 PM, Lei Shi <<a href="mailto:lshi@ku.edu" target="_blank">lshi@ku.edu</a>> wrote:<br>
><br>
> Pavan,<br>
><br>
> Thanks. You mean that since I don't call MPI_Wait at all, the request object never gets a chance to be released by the MPI library, right? I will give it a try. Thanks again!<br>
><br>
><br>
> On Fri, Apr 3, 2015 at 10:12 AM, Balaji, Pavan <<a href="mailto:balaji@anl.gov" target="_blank">balaji@anl.gov</a>> wrote:<br>
><br>
> You can free the request with MPI_Request_free if you don't want to wait on it. I have no idea how you'll write a correct program without waiting for receive completions at least, though.<br>
><br>
> -- Pavan<br>
><br>
> > On Apr 2, 2015, at 11:44 PM, Lei Shi <<a href="mailto:lshi@ku.edu" target="_blank">lshi@ku.edu</a>> wrote:<br>
> ><br>
> > Hi Junchao,<br>
> ><br>
> > Thanks for your reply. In my case, I don't want to check whether the data has been received or not, so I don't want to call MPI_Test or any other function to verify it. But my problem is that if I skip calling MPI_Wait and just call Isend/Irecv, my program freezes for several seconds and then continues to run. My guess is that this has probably exhausted the MPI library's internal buffers.<br>
> ><br>
> ><br>
> > On Thu, Apr 2, 2015 at 7:25 PM, Junchao Zhang <<a href="mailto:jczhang@mcs.anl.gov" target="_blank">jczhang@mcs.anl.gov</a>> wrote:<br>
> > Does MPI_Test fit your needs?<br>
> ><br>
> > --Junchao Zhang<br>
> ><br>
> > On Thu, Apr 2, 2015 at 7:16 PM, Lei Shi <<a href="mailto:lshi@ku.edu" target="_blank">lshi@ku.edu</a>> wrote:<br>
> > I want to use the non-blocking send/receive calls MPI_Isend/MPI_Irecv to do communication. But in my case, I don't really care what data I get or whether it is ready to use or not, so I don't want to waste time on synchronization by calling MPI_Wait or similar APIs.<br>
> ><br>
> > But when I avoid calling MPI_Wait, my program freezes for several seconds after running some iterations (after multiple MPI_Isend/MPI_Irecv calls), then continues. It takes even more time than the case with MPI_Wait. So my question is how to do "true" non-blocking communication without waiting for the data at all. Thanks.<br>
> ><br>
> ><br>
> > _______________________________________________<br>
> > discuss mailing list <a href="mailto:discuss@mpich.org" target="_blank">discuss@mpich.org</a><br>
> > To manage subscription options or unsubscribe:<br>
> > <a href="https://lists.mpich.org/mailman/listinfo/discuss" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
> ><br>
> ><br>
><br>
> --<br>
> Pavan Balaji<br>
> <a href="http://www.mcs.anl.gov/~balaji" target="_blank">http://www.mcs.anl.gov/~balaji</a><br>
><br>
><br>
<br>
--<br>
Pavan Balaji<br>
<a href="http://www.mcs.anl.gov/~balaji" target="_blank">http://www.mcs.anl.gov/~balaji</a><br>
<br>
</div></div></blockquote></div><br></div>
</div></div><br>_______________________________________________<br>
discuss mailing list <a href="mailto:discuss@mpich.org">discuss@mpich.org</a><br>
To manage subscription options or unsubscribe:<br>
<a href="https://lists.mpich.org/mailman/listinfo/discuss" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature">Jeff Hammond<br><a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a><br><a href="http://jeffhammond.github.io/" target="_blank">http://jeffhammond.github.io/</a></div>