[mpich-discuss] Failed to allocate memory for an unexpected message

Luiz Carlos da Costa Junior lcjunior at ufrj.br
Wed Oct 23 14:27:27 CDT 2013


Hi,

I am getting the following error when running my parallel application:

MPI_Recv(186)......................: MPI_Recv(buf=0x125bd840, count=2060,
MPI_CHARACTER, src=24, tag=94, comm=0x84000002, status=0x125fcff0) failed
MPIDI_CH3I_Progress(402)...........:
MPID_nem_mpich2_blocking_recv(905).:
MPID_nem_tcp_connpoll(1838)........:
state_commrdy_handler(1676)........:
MPID_nem_tcp_recv_handler(1564)....:
MPID_nem_handle_pkt(636)...........:
MPIDI_CH3_PktHandler_EagerSend(606): Failed to allocate memory for an
unexpected message. 261895 unexpected messages queued.
Fatal error in MPI_Send: Other MPI error, error stack:
MPI_Send(173)..............: MPI_Send(buf=0x765d2e60, count=2060,
MPI_CHARACTER, dest=0, tag=94, comm=0x84000004) failed
MPID_nem_tcp_connpoll(1826): Communication error with rank 1: Connection
reset by peer


I went to MPICH's FAQ (http://wiki.mpich.org/mpich/index.php/Frequently_Asked_Questions#Q:_Why_am_I_getting_so_many_unexpected_messages.3F).
It says that most likely the receiver process cannot keep up with the
number of messages it is receiving.

In my application, the worker processes perform a very large number of
small computations and, after each computation completes, they send the
data to a special "writer" process that is responsible for writing the
output to disk.
This scheme used to work reasonably well, until we faced new data with
larger parameters, which triggered the error above.
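
To make the structure concrete, the pattern is roughly the following (a
minimal sketch in C; the writer rank, tag, buffer size, and loop count are
illustrative, not our actual code):

    #include <mpi.h>
    #include <string.h>

    #define WRITER     0      /* rank dedicated to disk I/O */
    #define TAG_RESULT 94
    #define BUF_LEN    2060
    #define N_RESULTS  1000   /* results per worker (illustrative) */

    int main(int argc, char **argv)
    {
        char buf[BUF_LEN];
        int rank, nprocs;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        if (rank == WRITER) {
            /* Writer: receive every result and write it to disk. */
            int expected = (nprocs - 1) * N_RESULTS;
            for (int i = 0; i < expected; i++) {
                MPI_Recv(buf, BUF_LEN, MPI_CHAR, MPI_ANY_SOURCE,
                         TAG_RESULT, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                /* ... write buf to disk (the slow part) ... */
            }
        } else {
            /* Worker: compute small results and send each one off. */
            for (int i = 0; i < N_RESULTS; i++) {
                memset(buf, 'x', BUF_LEN);  /* stand-in for a computation */
                MPI_Send(buf, BUF_LEN, MPI_CHAR, WRITER,
                         TAG_RESULT, MPI_COMM_WORLD);
            }
        }

        MPI_Finalize();
        return 0;
    }

The workers never wait for the writer, so when the writer falls behind,
every un-received send piles up on the writer's side, which matches the
261895 unexpected messages reported in the error.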

Even though we could redesign the application, for example by creating a
pool of writer processes, we would still have only one hard disk, so the
bottleneck would remain. This does not seem to be a good approach.

As far as I understand, MPICH saves the content of every MPI_SEND in an
internal buffer (I don't know where the buffer is located: on the sender or
the receiver?) so that the sender can keep computing asynchronously while
the messages are being received.
The problem is that this buffer has been exhausted due to some resource
limitation.
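
If that is right, then for small messages MPICH uses an "eager" protocol:
the payload is shipped immediately and, when no matching receive has been
posted yet, the receiver stores it in an unexpected-message queue in its
own memory, which is presumably the buffer being exhausted here. A common
workaround I have seen (not sure if it is the recommended one) is to use
MPI_Ssend instead of MPI_Send in the workers, since a synchronous send
does not complete until the matching receive has started:

    /* Drop-in replacement for the MPI_Send in the sketch above:
       blocks until the writer has begun the matching receive, so a
       worker can never run far ahead of the writer. */
    MPI_Ssend(buf, BUF_LEN, MPI_CHAR, WRITER, TAG_RESULT, MPI_COMM_WORLD);

The price is that workers sit idle whenever the writer is busy with the
disk, but that is exactly the kind of throttling I am asking about below.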

Having such a buffer is very useful, but if the buffer in the writer
process is close to its limit, the worker processes should stop and wait
until some space is freed before sending any new data to be written to disk.
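
Something like credit-based flow control on top of MPI is what I have in
mind (again only a sketch; TAG_CREDIT and MAX_IN_FLIGHT are made-up names,
and leftover credits would need to be drained before MPI_Finalize in real
code). Each worker may have at most MAX_IN_FLIGHT results outstanding, and
the writer returns one zero-byte "credit" message per result it has written:

    /* Worker side: spend a credit per send, block when none remain. */
    #define TAG_CREDIT    95
    #define MAX_IN_FLIGHT 8

    int credits = MAX_IN_FLIGHT;
    for (int i = 0; i < N_RESULTS; i++) {
        if (credits == 0) {
            MPI_Recv(NULL, 0, MPI_CHAR, WRITER, TAG_CREDIT,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            credits++;
        }
        /* ... compute into buf ... */
        MPI_Send(buf, BUF_LEN, MPI_CHAR, WRITER,
                 TAG_RESULT, MPI_COMM_WORLD);
        credits--;
    }

    /* Writer side: after writing a result, return a credit to its sender. */
    MPI_Status st;
    MPI_Recv(buf, BUF_LEN, MPI_CHAR, MPI_ANY_SOURCE, TAG_RESULT,
             MPI_COMM_WORLD, &st);
    /* ... write buf to disk ... */
    MPI_Send(NULL, 0, MPI_CHAR, st.MPI_SOURCE, TAG_CREDIT, MPI_COMM_WORLD);

This would bound the writer's unexpected queue to roughly MAX_IN_FLIGHT
messages per worker while still letting workers overlap computation with
the writer's I/O.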

Is it possible to inspect this buffer in MPICH? Or is it possible to check
the number of messages waiting to be received?
Can anyone suggest a better (and easy to implement) solution?
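
The closest thing I could find is MPI_Iprobe, which only reports whether at
least one matching message is pending, not how many or how much memory they
occupy, e.g. on the writer side:

    int pending;
    MPI_Status st;
    MPI_Iprobe(MPI_ANY_SOURCE, TAG_RESULT, MPI_COMM_WORLD, &pending, &st);
    if (pending) {
        /* At least one result is already waiting; drain it first. */
        MPI_Recv(buf, BUF_LEN, MPI_CHAR, st.MPI_SOURCE, TAG_RESULT,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }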

Thanks in advance.

Regards,
Luiz