[mpich-discuss] Avoiding using two buffers in a non-blocking send receive

Balaji, Pavan balaji at anl.gov
Sat Apr 25 18:21:41 CDT 2015


We use the mailing list for MPICH-related questions, not general MPI questions. As for discuss at mpich.org, I get those emails, and the list has more people who might be able to help you as well. I’ve cc’ed the list. Please send MPICH-related emails to that list, and send general “how do I write an MPI program” questions to Stack Overflow or an equivalent forum.

I’ll respond to your email this time, but please avoid sending us general MPI questions in the future.

I’m not sure what you mean by using a single buffer. Do you mean that, once you send the data to your “send_to” neighbor, you don’t need that buffer anymore and would like to overwrite it with the data coming in from the “recv_from” neighbor?

Assuming that’s what you are looking for, there is no good function that directly does what you are asking for. However, pipelining the data transfer might help with the memory usage. Specifically, don’t allocate a full receive buffer; allocate a smaller buffer that holds only a part of the array. Once a chunk has been received and the corresponding part of your local array has been sent, you can overwrite that part with the contents of the receive buffer.

Something like this:

#define CHUNK_SIZE (1024)
#define TOTAL_ARRAY_SIZE (CHUNK_SIZE * SOME_LARGE_NUMBER)

struct foobar main_buffer[TOTAL_ARRAY_SIZE];
struct foobar temp_buffer[CHUNK_SIZE];
MPI_Request req[CHUNK_SIZE * 2];

for (j = 0; j < SOME_LARGE_NUMBER; j++) {
    for (i = 0; i < CHUNK_SIZE; i++) {
        Isend(… data from main buffer …, &req[2 * i]);
        Irecv(… data into temp buffer …, &req[2 * i + 1]);
    }
    WAITALL(…);
    memcpy(&main_buffer[j * CHUNK_SIZE], temp_buffer, …);
}
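
For concreteness, here is a minimal compilable sketch of the same idea, sending each chunk as one message rather than element by element. The element type, chunk count, datatype, and the send_to/recv_from ranks are placeholders, not anything from your code:

#include <mpi.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK_SIZE 1024
#define NUM_CHUNKS 64                               /* stand-in for SOME_LARGE_NUMBER */
#define TOTAL_ARRAY_SIZE (CHUNK_SIZE * NUM_CHUNKS)  /* size of the caller's main_buffer */

struct foobar { double x, y; };                     /* placeholder element type */

void ring_exchange(struct foobar *main_buffer, int send_to, int recv_from)
{
    /* The only extra memory is one chunk, not a full copy of the array. */
    struct foobar *temp_buffer = malloc(CHUNK_SIZE * sizeof *temp_buffer);
    MPI_Request req[2];

    for (int j = 0; j < NUM_CHUNKS; j++) {
        /* Send chunk j to the "send_to" neighbor and receive the matching
         * chunk from the "recv_from" neighbor into the temporary buffer. */
        MPI_Isend(&main_buffer[j * CHUNK_SIZE],
                  (int)(CHUNK_SIZE * sizeof(struct foobar)), MPI_BYTE,
                  send_to, j, MPI_COMM_WORLD, &req[0]);
        MPI_Irecv(temp_buffer,
                  (int)(CHUNK_SIZE * sizeof(struct foobar)), MPI_BYTE,
                  recv_from, j, MPI_COMM_WORLD, &req[1]);
        MPI_Waitall(2, req, MPI_STATUSES_IGNORE);

        /* Chunk j has now been sent, so it is safe to overwrite it with
         * the data that just arrived. */
        memcpy(&main_buffer[j * CHUNK_SIZE], temp_buffer,
               CHUNK_SIZE * sizeof(struct foobar));
    }
    free(temp_buffer);
}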

You can make this a little better by not waiting for all of the requests at once: use MPI_Waitany (or MPI_Testany) to pick up individual completions and pipeline the copy as well.
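
Sticking with the per-element requests from the pseudocode above, that might look like the following sketch (same placeholder types and sizes as before); an element is copied back as soon as both its send and its receive have completed, so the copies overlap the remaining communication:

#include <mpi.h>
#include <string.h>

#define CHUNK_SIZE 1024
#define NUM_CHUNKS 64                       /* stand-in for SOME_LARGE_NUMBER */

struct foobar { double x, y; };             /* placeholder element type */

void ring_exchange_pipelined(struct foobar *main_buffer, int send_to, int recv_from)
{
    struct foobar temp_buffer[CHUNK_SIZE];
    MPI_Request req[2 * CHUNK_SIZE];        /* even slots: sends, odd slots: receives */
    int pending[CHUNK_SIZE];                /* outstanding requests per element */

    for (int j = 0; j < NUM_CHUNKS; j++) {
        for (int i = 0; i < CHUNK_SIZE; i++) {
            MPI_Isend(&main_buffer[j * CHUNK_SIZE + i], (int)sizeof(struct foobar),
                      MPI_BYTE, send_to, i, MPI_COMM_WORLD, &req[2 * i]);
            MPI_Irecv(&temp_buffer[i], (int)sizeof(struct foobar),
                      MPI_BYTE, recv_from, i, MPI_COMM_WORLD, &req[2 * i + 1]);
            pending[i] = 2;
        }

        /* Drain the requests one at a time instead of all at once; element i
         * is overwritten only after both req[2*i] (its send) and
         * req[2*i + 1] (its receive) have completed. */
        for (int done = 0; done < 2 * CHUNK_SIZE; done++) {
            int idx, i;
            MPI_Waitany(2 * CHUNK_SIZE, req, &idx, MPI_STATUS_IGNORE);
            i = idx / 2;
            if (--pending[i] == 0)
                memcpy(&main_buffer[j * CHUNK_SIZE + i], &temp_buffer[i],
                       sizeof(struct foobar));
        }
    }
}

In practice you would probably still send a whole chunk (or at least several elements) per message rather than one element at a time, since the per-message overhead dominates for tiny messages; the same MPI_Waitany pattern applies either way.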

  — Pavan

On Apr 25, 2015, at 5:44 PM, Params <praman1 at ucsc.edu> wrote:

Hi Pavan,

Sorry for emailing you directly about this, but I am a new user of the mpich2 forums and was not sure whether my message to the mailing list reached you (please let me know about that too, in case I need to use the forum again). I have a question about MPICH usage and it would be great if any of you could help me out.

I have a use-case where I need to transmit an Array of Structures (AoS) among a group of processors in a circular ("ring") fashion.
I currently do that with a non-blocking send and receive, using two auxiliary buffers, one on the sender's side and one on the receiver's side.

Since the data I am transferring is extremely large (~60 GB before partitioning), allocating two buffers seems inefficient to me. My question is: is there a better way to do this that avoids the "two buffers"? (Perhaps using just one buffer in some way?)

Roughly my communication procedure looks like this:

//Pack AoS into send buffer (which is a vector<char>)
        char *msg_ptr = &(msg_send_buf_.front());
        for (auto& param : local_primal_params_) {
          param.pack_to(msg_ptr);
          msg_ptr += PARAM_BYTENUM;
        }

//Perform non-blocking send receive
        {
          MPI_Request send_request;
          MPI_Status send_stat, recv_stat;
          MPI_Isend(&(msg_send_buf_.front()), local_primal_params_.size() * PARAM_BYTENUM, MPI_CHAR,
                    send_to, 0,
                    MPI_COMM_WORLD, &send_request);
          MPI_Recv(&(msg_recv_buf_.front()), local_primal_params_.size() * PARAM_BYTENUM, MPI_CHAR,
                    recv_from, 0,
                    MPI_COMM_WORLD, &recv_stat);
          MPI_Wait(&send_request, &send_stat);
        }

//Unpack receive buffer back into AoS on receiver side
        {
          char *msg_ptr = &(msg_recv_buf_.front());
          for (auto& param : local_primal_params_) {
            param.unpack_from(msg_ptr);
            msg_ptr += PARAM_BYTENUM;
          }
        }

Any recommendations will be very helpful.

Thanks,
Parameswaran Raman
PhD Student, UC Santa Cruz


