[mpich-discuss] Avoiding using two buffers in a non-blocking send receive

Params parameshr at gmail.com
Sat Apr 25 00:34:44 CDT 2015


Hello,

I have a use-case where I need to pass an Array of Structures (AoS)
around a group of processes in a circular ("ring") fashion.
I currently do that with a non-blocking send paired with a blocking receive,
using two auxiliary buffers, one on the sending side and one on the receiving
side.

Since the data I am transferring is very large (~60 GB before partitioning),
allocating two buffers seems wasteful to me. My question is: is there a better
way to do this that avoids the two buffers, perhaps by reusing a single buffer
in some way? (A rough sketch of what I have in mind follows my current code
below.)

Roughly, my current communication procedure looks like this:

//Pack AoS into send buffer (which is a vector<char>)
        char *msg_ptr = &(msg_send_buf_.front());
        for (auto& param : local_primal_params_) {
          param.pack_to(msg_ptr);
          msg_ptr += PARAM_BYTENUM;
        }

//Perform non-blocking send receive
        {
          MPI_Request send_request;
          MPI_Status send_stat, recv_stat;
          MPI_Isend(&(msg_send_buf_.front()),
                    local_primal_params_.size() * PARAM_BYTENUM, MPI_CHAR,
                    send_to, 0,
                    MPI_COMM_WORLD, &send_request);
          MPI_Recv(&(msg_recv_buf_.front()),
                   local_primal_params_.size() * PARAM_BYTENUM, MPI_CHAR,
                   recv_from, 0,
                   MPI_COMM_WORLD, &recv_stat);
          MPI_Wait(&send_request, &send_stat);
        }

//Unpack receive buffer back into AoS on receiver side
        {
          char *msg_ptr = &(msg_recv_buf_.front());
          for (auto& param : local_primal_params_) {
            param.unpack_from(msg_ptr);
            msg_ptr += PARAM_BYTENUM;
          }
        }
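
For concreteness, the kind of single-buffer pattern I have in mind is something
like MPI_Sendrecv_replace, which sends and receives through the same buffer.
This is only a rough, untested sketch: msg_buf_ is a hypothetical single
vector<char> that would replace the two buffers above, and it assumes every
rank sends and receives the same number of params (which my current code
already assumes):

// Pack AoS into the single buffer
// (msg_buf_ would replace both msg_send_buf_ and msg_recv_buf_)
        char *msg_ptr = &(msg_buf_.front());
        for (auto& param : local_primal_params_) {
          param.pack_to(msg_ptr);
          msg_ptr += PARAM_BYTENUM;
        }

// Exchange in place: the packed bytes are sent to send_to and then overwritten
// by the bytes received from recv_from, using only this one buffer
        {
          MPI_Status stat;
          MPI_Sendrecv_replace(&(msg_buf_.front()),
                               local_primal_params_.size() * PARAM_BYTENUM, MPI_CHAR,
                               send_to, 0,    /* destination rank, send tag */
                               recv_from, 0,  /* source rank, receive tag */
                               MPI_COMM_WORLD, &stat);
        }

// Unpack from the same buffer
        {
          char *msg_ptr = &(msg_buf_.front());
          for (auto& param : local_primal_params_) {
            param.unpack_from(msg_ptr);
            msg_ptr += PARAM_BYTENUM;
          }
        }

Would that be a reasonable replacement, or is there a better approach?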

Any recommendations will be very helpful.

-- 
Thanks,
params
http://people.ucsc.edu/~praman1/