Hi,

I come back to this old discussion regarding MPI_Send potentially blocking until a matching MPI_Recv is posted.

I am writing a master-worker type program in which workers periodically send progress status to the master through two MPI_Send calls. The master has an MPI_Irecv posted for the first one and, when it completes, posts an MPI_Recv for the second one. Since these messages are small and the worker does not expect any answer before continuing its work, a systematically blocking MPI_Send could degrade performance whenever the master is busy doing something else.

So my question is: to your knowledge, do implementations of MPI (in particular MPICH, and those running on Cray Linux and BlueGene/P and Q) systematically block in MPI_Send? (I would understand if it blocked because too many MPI_Sends had not been matched on the receiver.)

Otherwise I might have to use MPI_Isend everywhere...
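For concreteness, here is roughly what I have in mind on the worker side if I switch to MPI_Isend; the master rank, tags, and datatypes below are invented for the example:

#include <mpi.h>

#define MASTER       0
#define TAG_STATUS_A 100
#define TAG_STATUS_B 101

/* Called by a worker each time it reports progress.  Both sends return
 * immediately, so the worker can go back to computing without waiting
 * for the master to post its receives. */
void report_progress(int step, double fraction_done)
{
    MPI_Request reqs[2];

    MPI_Isend(&step, 1, MPI_INT, MASTER, TAG_STATUS_A,
              MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&fraction_done, 1, MPI_DOUBLE, MASTER, TAG_STATUS_B,
              MPI_COMM_WORLD, &reqs[1]);

    /* The buffers are local to this call, so complete the requests before
     * returning; for small messages this normally finishes once the data
     * has been buffered, whether or not the receives have been posted. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
}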
Thanks!

Matthieu

----------------------------------------
From: "Wesley Bland" <wbland@mcs.anl.gov>
To: discuss@mpich.org
Sent: Wednesday, July 3, 2013 15:40:11
Subject: Re: [mpich-discuss] MPI_Send and recv

On Jul 2, 2013, at 9:03 PM, Matthieu Dorier <matthieu.dorier@irisa.fr> wrote:

> Hi,
>
> These messages piqued my curiosity.
>
> If I have two processes, process 0 does an MPI_Send to process 1 and eventually process 1 posts a matching MPI_Recv or MPI_Irecv; does process 0 wait for the MPI_Recv/Irecv to be posted before returning from the MPI_Send? (In other words, if process 0 does two MPI_Sends and process 1 does two MPI_Recvs, is the linearization "Send; Send; Recv; Recv" possible?)

It is possible that process 0 returns from the MPI_Send before process 1 posts the corresponding MPI_Recv, but it's not guaranteed. The only thing the standard guarantees is that when MPI_Send returns, the buffer is available for reuse. It is equally valid for the implementation to wait for the MPI_Recv to be posted. So to answer your example, S - S - R - R is valid, as is S - R - S - R.

> Second question: in the situation described by Sufeng, where a process does an MPI_Send to itself, MPI has enough information to see that the process is sending to itself without having posted a matching receive, so why would MPI block instead of printing an error and aborting? Is it because, in a multithreaded program, the matching receive could be posted by another thread? (Yet in that case I always thought it was unsafe to use the same communicator from multiple threads...)

There are probably many reasons why MPI doesn't try to do internal error checking. The first is performance. If a program is correct, then checking for all of the cases that might cause a deadlock would be not only very difficult, but also expensive. Therefore, we just say that the user needs to write a correct program, and MPI makes no guarantee that anything smart will happen in the background. It's possible that someone could write Nice-MPI that does lots of error checking and verification, but I would think that most users wouldn't want it, since it would add overhead.

Second, it's not always unsafe to use the same communicator in multiple threads. If you use different tags for each thread, then it shouldn't be a problem to communicate between them.
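For instance, something along these lines should be safe. This is only a sketch: it assumes the MPI library actually provides MPI_THREAD_MULTIPLE, and the tags and message contents are arbitrary.

#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

static int rank;

/* Each thread uses its own tag on MPI_COMM_WORLD, so the two message
 * streams cannot be confused even though they share the communicator. */
static void *worker(void *arg)
{
    int tag = *(int *)arg;
    int msg = tag;

    if (rank == 0)
        MPI_Send(&msg, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
    else if (rank == 1) {
        MPI_Recv(&msg, 1, MPI_INT, 0, tag, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1, tag %d: received %d\n", tag, msg);
    }
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, tags[2] = { 0, 1 };
    pthread_t threads[2];

    /* Calling MPI from several threads requires MPI_THREAD_MULTIPLE. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        MPI_Abort(MPI_COMM_WORLD, 1);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < 2; i++)
        pthread_create(&threads[i], NULL, worker, &tags[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(threads[i], NULL);

    MPI_Finalize();
    return 0;
}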
Wesley

> Thanks,
>
> Matthieu
>
> ----------------------------------------
> From: "Sufeng Niu" <sniu@hawk.iit.edu>
> To: "Pavan Balaji" <balaji@mcs.anl.gov>
> Cc: discuss@mpich.org
> Sent: Tuesday, July 2, 2013 17:20:25
> Subject: Re: [mpich-discuss] MPI_Send and recv
>
> Thank you so much! Now it makes sense.
>
> Sufeng
>
> On Tue, Jul 2, 2013 at 5:19 PM, Pavan Balaji <balaji@mcs.anl.gov> wrote:
>
>> On 07/02/2013 05:18 PM, Sufeng Niu wrote:
>>
>>> Thanks a lot, that is the most confusing part of MPI for me. I would like to clear up this type of problem.
>>>
>>> So the blocking send starts sending a message to the process itself, but there is no receive posted, and thus MPI blocks it. Am I right here?
>>
>> Correct.
>>
>>> Thus, I need a non-blocking receive to wait for the message here. Can I use a blocking receive?
>>
>> If you use a blocking receive, you'll block in the receive; how will you ever reach the send?
>>
>>  -- Pavan
>>
>> --
>> Pavan Balaji
>> http://www.mcs.anl.gov/~balaji
>
> --
> Best Regards,
> Sufeng Niu
> ECASP lab, ECE department, Illinois Institute of Technology
> Tel: 312-731-7219
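For reference, the working pattern Pavan describes above (post a non-blocking receive before the blocking send to yourself, then wait) looks roughly like this; the tag and payload are only illustrative:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, out = 42, in = 0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Post the receive first: if the send came first and the library
     * chose not to buffer the message, MPI_Send could block forever
     * waiting for a receive that is never posted. */
    MPI_Irecv(&in, 1, MPI_INT, rank, 0, MPI_COMM_WORLD, &req);
    MPI_Send(&out, 1, MPI_INT, rank, 0, MPI_COMM_WORLD);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    printf("rank %d received %d from itself\n", rank, in);

    MPI_Finalize();
    return 0;
}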