<meta http-equiv="Content-Type" content="text/html; charset=utf-8"><div dir="ltr">Yes,<div><br></div><div>I tested it and it solved my issue. Thanks again!!</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Apr 4, 2015 at 7:01 PM, Balaji, Pavan <span dir="ltr"><<a href="mailto:balaji@anl.gov" target="_blank">balaji@anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
MPI_REQUEST_FREE. Please read the remaining emails on this thread.<br>
<span class="HOEnZb"><font color="#888888"><br>
-- Pavan<br>
</font></span><span class="im HOEnZb"><br>
> On Apr 3, 2015, at 7:56 PM, Lei Shi <<a href="mailto:lshi@ku.edu">lshi@ku.edu</a>> wrote:<br>
><br>
> Sorry, I had only taken a quick look at those slides this afternoon. I've got the details now.<br>
><br>
> So can I conclude that there is no way to avoid MPI_Wait or MPI_Test with nonblocking send/receive, even if I don't care whether the message has been received?<br>
><br>
><br>
</span><span class="im HOEnZb">> On Fri, Apr 3, 2015 at 5:23 PM, Jeff Squyres (jsquyres) <<a href="mailto:jsquyres@cisco.com">jsquyres@cisco.com</a>> wrote:<br>
> On Apr 3, 2015, at 6:09 PM, Balaji, Pavan <<a href="mailto:balaji@anl.gov">balaji@anl.gov</a>> wrote:<br>
> ><br>
> > I didn't look through the video, but what you are saying is not true. You can free the request in the application (this simply decrements the reference count on that request). MPI still holds one reference to the request and will only free it when the data transfer completes.<br>
><br>
> That's correct. The point of that blog entry is that you have to tell MPI that you're done with a request somehow, otherwise MPI *has* to keep those resources allocated.<br>
><br>
> MPI_TEST*/MPI_WAIT* are the usual ways to do this. MPI_REQUEST_FREE is also a valid method, but, as has been stated, isn't advisable.<br>
><br>
> --<br>
> Jeff Squyres<br>
> <a href="mailto:jsquyres@cisco.com">jsquyres@cisco.com</a><br>
> For corporate legal information go to: <a href="http://www.cisco.com/web/about/doing_business/legal/cri/" target="_blank">http://www.cisco.com/web/about/doing_business/legal/cri/</a><br>
><br>
><br>
<br>
</span><div class="HOEnZb"><div class="h5">--<br>
Pavan Balaji<br>
<a href="http://www.mcs.anl.gov/~balaji" target="_blank">http://www.mcs.anl.gov/~balaji</a><br>
<br>
</div></div></blockquote></div><br></div>