[mpich-discuss] MPI_Send and recv

Matthieu Dorier matthieu.dorier at irisa.fr
Fri Dec 6 00:46:28 CST 2013


Thanks Jeff for your answer. 
By "systematically" I mean "whether a remote buffer is full or not" (i.e. "every time"), as opposed to an MPI_Send that "may block under some conditions, but not always". 

Matthieu 

----- Original Message -----

> From: "Jeff Hammond" <jeff.science at gmail.com>
> To: discuss at mpich.org
> Sent: Friday, December 6, 2013 03:34:00
> Subject: Re: [mpich-discuss] MPI_Send and recv

> Blue Gene does not always block on MPI_Send. I don't know what you
> mean by "systematically"...

> If you are sending small messages (less than ~1KB), they will use the
> eager protocol and will not block until you blow up the remote buffer.

> Rendezvous very likely blocks until the recv is posted.

> There are slides online about Cray MPI protocols that may show
> similar behavior.

> If you want to be sure you don't block on send, you need to use
> isend. That much should be obvious.
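>
> A minimal sketch of that pattern (a two-rank program; the overlap
> comment marks where real work would go):
>
>     #include <mpi.h>
>     int main(int argc, char **argv) {
>         MPI_Init(&argc, &argv);
>         int rank, buf = 42;
>         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>         if (rank == 0) {
>             MPI_Request req;
>             /* returns immediately; buf must not be modified yet */
>             MPI_Isend(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
>             /* ... overlap useful work here ... */
>             MPI_Wait(&req, MPI_STATUS_IGNORE); /* buf reusable now */
>         } else if (rank == 1) {
>             MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
>                      MPI_STATUS_IGNORE);
>         }
>         MPI_Finalize();
>         return 0;
>     }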

> Jeff

> Sent from my iPhone

> On Dec 5, 2013, at 2:59 PM, Matthieu Dorier
> <matthieu.dorier at irisa.fr> wrote:

> > Hi,
> >
> > I come back to this old discussion, regarding MPI_Send potentially
> > blocking until a matching MPI_Recv is posted.
> >
> > I happen to be writing a master-worker program where workers
> > periodically send progress status to the master through two calls
> > to MPI_Send. The master has an MPI_Irecv posted for the first one,
> > and when it completes, it posts an MPI_Recv for the second one.
> > Since those messages are small and the worker does not expect any
> > answer before continuing to work, you can imagine that a
> > systematically blocking MPI_Send could degrade performance if the
> > master is busy doing something else.
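> >
> > A minimal sketch of that exchange (tags, variable names, and the
> > request handling are illustrative):
> >
> >     /* worker side: two small status messages, back to back */
> >     MPI_Send(&step, 1, MPI_INT, MASTER, TAG_STEP, MPI_COMM_WORLD);
> >     MPI_Send(&ratio, 1, MPI_DOUBLE, MASTER, TAG_RATIO,
> >              MPI_COMM_WORLD);
> >
> >     /* master side: nonblocking receive for the first message */
> >     MPI_Status st;
> >     MPI_Request req;
> >     MPI_Irecv(&step, 1, MPI_INT, MPI_ANY_SOURCE, TAG_STEP,
> >               MPI_COMM_WORLD, &req);
> >     /* ... do other master work, then complete the first receive */
> >     MPI_Wait(&req, &st);
> >     /* blocking receive for the second message, same worker */
> >     MPI_Recv(&ratio, 1, MPI_DOUBLE, st.MPI_SOURCE, TAG_RATIO,
> >              MPI_COMM_WORLD, MPI_STATUS_IGNORE);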

> > So my question is: to your knowledge, do implementations of MPI (in
> > particular MPICH, and those running on Cray Linux and Blue Gene/P
> > and Q) systematically block on MPI_Send? (I would understand if it
> > blocked because too many MPI_Sends had not been matched on the
> > receiver.)
> >
> > Otherwise I might have to use MPI_Isend everywhere...
> >
> > Thanks!
> >
> > Matthieu

> > ----- Original Message -----
> >
> > > From: "Wesley Bland" <wbland at mcs.anl.gov>
> > > To: discuss at mpich.org
> > > Sent: Wednesday, July 3, 2013 15:40:11
> > > Subject: Re: [mpich-discuss] MPI_Send and recv
> > >
> > > On Jul 2, 2013, at 9:03 PM, Matthieu Dorier
> > > <matthieu.dorier at irisa.fr> wrote:

> > > > Hi,
> > > >
> > > > These messages piqued my curiosity.

> > > > If I have two processes, process 0 does an MPI_Send to process
> > > > 1 and eventually process 1 posts a matching MPI_Recv or
> > > > MPI_Irecv; does process 0 wait for the MPI_Recv/Irecv to be
> > > > posted before returning from the MPI_Send? (In other words, if
> > > > process 0 does 2 MPI_Sends and process 1 does 2 MPI_Recvs, is
> > > > the linearization "Send; Send; Recv; Recv" possible?)
> > > It is possible that process 0 could return from the MPI_Send
> > > before process 1 posts the corresponding MPI_Recv, but it's not
> > > guaranteed. The only thing the standard guarantees is that when
> > > MPI_Send returns, the buffer is available to use again. It is
> > > also valid for the implementation to wait for the MPI_Recv to be
> > > posted. So, to answer your example, S - S - R - R is valid, as is
> > > S - R - S - R.
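> > >
> > > A small illustration: with messages this small, most
> > > implementations send eagerly, so the S - S - R - R ordering below
> > > typically works; with large messages the same code could stall in
> > > the first MPI_Send until a receive is posted.
> > >
> > >     #include <mpi.h>
> > >     #include <stdio.h>
> > >     int main(int argc, char **argv) {
> > >         MPI_Init(&argc, &argv);
> > >         int rank, a = 1, b = 2;
> > >         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> > >         if (rank == 0) {
> > >             /* both sends may return before any recv is posted */
> > >             MPI_Send(&a, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
> > >             MPI_Send(&b, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);
> > >             printf("both sends returned\n");
> > >         } else if (rank == 1) {
> > >             MPI_Recv(&a, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
> > >                      MPI_STATUS_IGNORE);
> > >             MPI_Recv(&b, 1, MPI_INT, 0, 1, MPI_COMM_WORLD,
> > >                      MPI_STATUS_IGNORE);
> > >         }
> > >         MPI_Finalize();
> > >         return 0;
> > >     }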

> > > > Second question: in the situation described by Sufeng, where a
> > > > process does an MPI_Send to itself, MPI has enough information
> > > > to see that the process is sending to itself while it has not
> > > > posted any matching receive. Why would MPI block instead of
> > > > printing an error and aborting? Is it because, in a
> > > > multithreaded program, the matching receive could be posted by
> > > > another thread? (Yet in that case I always thought it was
> > > > unsafe to use the same communicator from multiple threads...)
> > > There are probably many reasons why MPI doesn't try to do
> > > internal error checking. The first is performance: if a program
> > > is correct, then checking for all of the cases that might cause a
> > > deadlock would be not only very difficult but also expensive.
> > > Therefore, we just say that the user needs to write a correct
> > > program, and MPI makes no guarantees that anything smart will
> > > happen in the background. It's possible that someone could write
> > > Nice-MPI that does lots of error checking and verification, but I
> > > would think that most users wouldn't want it, as it would add
> > > overhead.

> > > Second, it's not always unsafe to use the same communicator in
> > > multiple threads. If you use different tags for each thread, then
> > > it shouldn't be a problem to communicate between them.
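> > >
> > > A minimal sketch of that idea (assumes the library provides
> > > MPI_THREAD_MULTIPLE; thread creation is omitted):
> > >
> > >     int provided;
> > >     MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
> > >
> > >     /* inside thread t on rank 0: send with a per-thread tag */
> > >     MPI_Send(&val, 1, MPI_INT, 1, t, MPI_COMM_WORLD);
> > >
> > >     /* inside thread t on rank 1: matching receive, same tag */
> > >     MPI_Recv(&val, 1, MPI_INT, 0, t, MPI_COMM_WORLD,
> > >              MPI_STATUS_IGNORE);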

> > > Wesley
> > >
> > > > Thanks,
> > > >
> > > > Matthieu
> > > >
> > > > ----- Original Message -----
> > > >
> > > > > De: "Sufeng Niu" < sniu at hawk.iit.edu >
> > > > 
> > > 
> > 
> 
> > > > > À: "Pavan Balaji" < balaji at mcs.anl.gov >
> > > > 
> > > 
> > 
> 
> > > > > Cc: discuss at mpich.org
> > > > 
> > > 
> > 
> 
> > > > > Envoyé: Mardi 2 Juillet 2013 17:20:25
> > > > 
> > > 
> > 
> 
> > > > > Objet: Re: [mpich-discuss] MPI_Send and recv
> > > > 
> > > 
> > 
> 

> > > > > Thank you so much! Now it makes sense.
> > > > >
> > > > > Sufeng

> > > > > On Tue, Jul 2, 2013 at 5:19 PM, Pavan Balaji
> > > > > <balaji at mcs.anl.gov> wrote:
> > > > >
> > > > > > On 07/02/2013 05:18 PM, Sufeng Niu wrote:

> > > > > > > Thanks a lot, that is the most confusing part of MPI for
> > > > > > > me. I would like to clear up this type of problem.
> > > > > > >
> > > > > > > So the blocking send starts sending a message to itself,
> > > > > > > but there is no receive posted, and thus MPI blocks it.
> > > > > > > Am I right here?

> > > > > > Correct.

> > > > > > > Thus, I need a nonblocking receive to wait for the
> > > > > > > message here. Can I use a blocking receive?

> > > > > > If you use a blocking receive, you'll block in the receive;
> > > > > > how will you reach the send?
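> > > > > >
> > > > > > A minimal sketch of the usual fix, posting the nonblocking
> > > > > > receive before the send to self (variable names are
> > > > > > illustrative):
> > > > > >
> > > > > >     MPI_Request req;
> > > > > >     int out = 7, in;
> > > > > >     /* post the receive first so the self-send can match */
> > > > > >     MPI_Irecv(&in, 1, MPI_INT, rank, 0, MPI_COMM_WORLD,
> > > > > >               &req);
> > > > > >     MPI_Send(&out, 1, MPI_INT, rank, 0, MPI_COMM_WORLD);
> > > > > >     MPI_Wait(&req, MPI_STATUS_IGNORE);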

> > > > > > -- Pavan
> > > > > >
> > > > > > --
> > > > > > Pavan Balaji
> > > > > > http://www.mcs.anl.gov/~balaji

> > > > > --
> > > > > Best Regards,
> > > > > Sufeng Niu
> > > > > ECASP lab, ECE department, Illinois Institute of Technology
> > > > > Tel: 312-731-7219
> _______________________________________________
> discuss mailing list discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

