<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix"><br>
Hi Ron,<br>
<br>
Depending on how the algorithm is structured, it may well be that
the faster computers are generating messages faster than the slow
computer can process them. As the problem size increases, the
amount of data held in unprocessed messages may grow too large for
that computer. Consider a very simple example (though note that
many other situations are possible):<br>
<br>
Hosts A and B:<br>
for(int i=0; i<n; i++) {<br>
MPI_Send(buf, count, MPI_INT, host_c, 0, MPI_COMM_WORLD);<br>
buf += count;<br>
}<br>
<br>
Host C:<br>
for(int i=0; i<2*n; i++) {<br>
MPI_Recv(buf, count, MPI_INT, MPI_ANY_SOURCE, 0,
MPI_COMM_WORLD, &status);<br>
process_data(buf);<br>
}<br>
<br>
In that case the senders keep sending data to host_c faster than
it calls MPI_Recv, and the messages are received by MPICH as
unexpected messages (MPICH does not yet know where to place the
data, so it stores it in temporary internal buffers). Note that
this situation is not necessarily caused by different processing
speeds, since the culprit may well be the algorithm itself. A very
simple way to avoid this potential problem could be:<br>
<br>
Hosts A and B:<br>
MPI_Barrier(MPI_COMM_WORLD);<br>
for(int i=0; i<n; i++) {<br>
MPI_Send(buf, count, MPI_INT, host_c, 0, MPI_COMM_WORLD);<br>
buf += count;<br>
}<br>
<br>
Host C:<br>
for(int i=0; i<2*n; i++) {<br>
MPI_Irecv(buf + i*count, count, MPI_INT, MPI_ANY_SOURCE, 0,
MPI_COMM_WORLD, &requests[i]);<br>
}<br>
MPI_Barrier(MPI_COMM_WORLD);<br>
for(int i=0; i<2*n; i++) {<br>
MPI_Waitany(2*n, requests, &idx, &status);<br>
process_data(buf + idx*count);<br>
}<br>
<br>
Now the data is placed directly in its final destination, since
the MPI implementation is guaranteed to know where each message
goes before it arrives. This solution assumes there is enough
memory to preallocate all receive buffers, which may not be the
case; if not, it may require, for example, implementing a
higher-level protocol (such as one based on credits) to
synchronize the sends and receives and ensure there is room on the
receiver for them.<br>
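<br>
A minimal sketch of such a credit-based scheme could look like the
following (purely illustrative: NUM_CREDITS, DATA_TAG, and
CREDIT_TAG are made-up names for this sketch, not MPI or MPICH
identifiers). Each sender starts with a fixed number of credits,
spends one per message, and blocks when it runs out; the receiver
returns a credit after processing each message.<br>

```c
/* Sender side (hosts A and B): at most NUM_CREDITS messages
 * can be outstanding at any time. */
int credits = NUM_CREDITS;
for (int i = 0; i < n; i++) {
    if (credits == 0) {
        /* Block until the receiver returns a credit
         * (a zero-byte message on CREDIT_TAG). */
        MPI_Recv(NULL, 0, MPI_INT, host_c, CREDIT_TAG,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        credits++;
    }
    MPI_Send(buf + i*count, count, MPI_INT, host_c, DATA_TAG,
             MPI_COMM_WORLD);
    credits--;
}

/* Receiver side (host C): process each message, then return
 * one credit to whichever sender it came from. */
for (int i = 0; i < 2*n; i++) {
    MPI_Recv(buf, count, MPI_INT, MPI_ANY_SOURCE, DATA_TAG,
             MPI_COMM_WORLD, &status);
    process_data(buf);
    MPI_Send(NULL, 0, MPI_INT, status.MPI_SOURCE, CREDIT_TAG,
             MPI_COMM_WORLD);
}
```

Note that this does not eliminate unexpected messages entirely;
it bounds the backlog on host C to NUM_CREDITS messages per
sender, independent of the problem size, which is usually enough
to keep the internal buffering under control.<br>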
<br>
I hope this helps.<br>
<br>
Best,<br>
Antonio<br>
<br>
<br>
On 05/24/2014 02:00 AM, Ron Palmer wrote:<br>
</div>
<blockquote cite="mid:53804393.2060408@pgcgroup.com.au" type="cite">
<meta http-equiv="Content-Type" content="text/html;
charset=ISO-8859-1">
<div class="moz-cite-prefix">Antonio, Rajeev and others, <br>
thanks for your replies and comments on possible causes for the
error messages and failure, I have passed them on to the
programmers of the underlying application. I must admit I do not
understand what unexpected messages are (I am but a mere user),
could you perhaps give examples of typical causes of them? E.g.,
the cluster it runs on consists of 3 dual xeon computers with
varying cpu clock rating - could these error messages be due to
getting out of synch, expecting results but not getting them
from the slower computer? I have re-started the process but
excluded the slowest computer (2.27GHz, the other two are
running at 2.87 and 3.2) as I was running out of ideas.<br>
<br>
For your information, this runs well on smaller problems (few
computations).<br>
<br>
Thanks,<br>
Ron <br>
<br>
On 24/05/2014 3:10 AM, Rajeev Thakur wrote:<br>
</div>
<blockquote
cite="mid:2F0CFAAE-3CFD-458D-B302-585DFF140C90@mcs.anl.gov"
type="cite">
<div>Yes. The message below says some process has received
261,895 messages for which no matching receives have been
posted yet.</div>
<div><br>
</div>
<img id="24dec2b4-4598-4631-a3de-76a9b374a383" apple-width="yes"
apple-height="yes"
src="cid:part1.07000002.09040502@mcs.anl.gov" height="19"
width="852"><br>
<br>
Rajeev
<div>
<div><br>
</div>
<div><br>
<blockquote type="cite">It looks like at least one of your
processes is receiving too many unexpected messages, which
leads it to run out of memory. Unexpected messages are those
that do not match a posted receive on the receiver side.
You may check with the application developers to have them
review the algorithm or look for any possible bug.<br>
<br>
Antonio<br>
</blockquote>
<br>
</div>
</div>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
discuss mailing list <a moz-do-not-send="true" class="moz-txt-link-abbreviated" href="mailto:discuss@mpich.org">discuss@mpich.org</a>
To manage subscription options or unsubscribe:
<a moz-do-not-send="true" class="moz-txt-link-freetext" href="https://lists.mpich.org/mailman/listinfo/discuss">https://lists.mpich.org/mailman/listinfo/discuss</a></pre>
</blockquote>
<br>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
discuss mailing list <a class="moz-txt-link-abbreviated" href="mailto:discuss@mpich.org">discuss@mpich.org</a>
To manage subscription options or unsubscribe:
<a class="moz-txt-link-freetext" href="https://lists.mpich.org/mailman/listinfo/discuss">https://lists.mpich.org/mailman/listinfo/discuss</a></pre>
</blockquote>
<br>
<br>
<pre class="moz-signature" cols="72">--
Antonio J. Peña
Postdoctoral Appointee
Mathematics and Computer Science Division
Argonne National Laboratory
9700 South Cass Avenue, Bldg. 240, Of. 3148
Argonne, IL 60439-4847
<a class="moz-txt-link-abbreviated" href="mailto:apenya@mcs.anl.gov">apenya@mcs.anl.gov</a>
<a class="moz-txt-link-abbreviated" href="http://www.mcs.anl.gov/~apenya">www.mcs.anl.gov/~apenya</a></pre>
</body>
</html>