[mpich-discuss] Question on MPI_Pack/Unpack

Dries Kimpe dkimpe at mcs.anl.gov
Thu Jul 18 10:29:07 CDT 2013

MPI_Pack serves another purpose as well: it handles data-representation
conversion for heterogeneous systems.

For example, it might convert between different representations of
double-precision floating-point numbers.

Normally this conversion happens automatically (assuming a proper
datatype was given) in MPI_Send/MPI_Recv.

MPI_Pack performs the conversion while it encodes into the contiguous
buffer, which is why you're supposed to use the MPI_PACKED datatype when
sending packed data: the implementation then knows not to convert the
data a second time.
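To make this concrete, here is a plain-C sketch (no actual MPI calls; the struct layout and all names are hypothetical) of the copy that MPI_Pack performs on a homogeneous system: each field is copied into a contiguous buffer, leaving the padding holes behind. Real MPI_Pack would additionally convert data representations on heterogeneous systems, which this sketch does not attempt.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical struct with padding holes between fields (on most
 * 64-bit ABIs there are 7 bytes of padding after 'flag'). */
struct record {
    char   flag;
    double value;
    int    count;
};

/* Sketch of MPI_Pack on a homogeneous system: copy each field into a
 * contiguous buffer, skipping the holes. Returns the packed size. */
static size_t pack_record(const struct record *r, char *buf)
{
    size_t pos = 0;
    memcpy(buf + pos, &r->flag,  sizeof r->flag);  pos += sizeof r->flag;
    memcpy(buf + pos, &r->value, sizeof r->value); pos += sizeof r->value;
    memcpy(buf + pos, &r->count, sizeof r->count); pos += sizeof r->count;
    return pos;  /* typically 1 + 8 + 4 = 13 bytes, with no holes */
}

/* The inverse copy, corresponding to MPI_Unpack. */
static size_t unpack_record(const char *buf, struct record *r)
{
    size_t pos = 0;
    memcpy(&r->flag,  buf + pos, sizeof r->flag);  pos += sizeof r->flag;
    memcpy(&r->value, buf + pos, sizeof r->value); pos += sizeof r->value;
    memcpy(&r->count, buf + pos, sizeof r->count); pos += sizeof r->count;
    return pos;
}
```

In real MPI code the packed buffer would then be sent with count `pos` and datatype MPI_PACKED, so the library applies no further conversion.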


* Antonio J. Peña <apenya at mcs.anl.gov> [2013-07-18 10:16:04]:

> Hi Matthieu,

> These functions only copy all the data of the datatype into a contiguous 
> memory region; they don't perform any further transformation. I'd say they 
> are a more natural and efficient solution for your problem than the self 
> send-receive approach. Note that the send internally performs the pack 
> before sending, so you'd effectively be doing the same thing while 
> avoiding the send/receive step.

>   Antonio
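The local extraction described in the quoted message below can be sketched in plain C; this is essentially the copy MPI_Pack would perform when given a datatype built with MPI_Type_create_subarray (the sizes and names here are hypothetical, and no actual MPI calls are made). Because each interior row of the array is contiguous, the copy proceeds row by row.

```c
#include <string.h>

/* Hypothetical sizes: a 6x6 array of doubles with a 1-cell ghost
 * layer on every side, so the interior (non-ghost) region is 4x4. */
enum { NROW = 6, NCOL = 6, G = 1, IROW = NROW - 2*G, ICOL = NCOL - 2*G };

/* Sketch of the copy MPI_Pack would perform for a subarray datatype
 * describing the interior region: each interior row is contiguous in
 * the big array, so copy it as one block into the packed buffer. */
static void extract_interior(double big[NROW][NCOL],
                             double small[IROW][ICOL])
{
    for (int i = 0; i < IROW; i++)
        memcpy(small[i], &big[i + G][G], ICOL * sizeof(double));
}
```

With real MPI this would be a single MPI_Pack call on one element of the subarray datatype, writing into a contiguous output buffer, with no send/receive pair involved.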

> On Thursday, July 18, 2013 04:35:28 PM Matthieu Dorier wrote:

> Hi,

> I'd like some clarification on MPI_Pack/Unpack: the standard only says that 
> "some communication libraries provide pack/unpack functions for sending 
> non-contiguous data", but it's not clear to me what MPI_Pack/Unpack are 
> supposed to do. For example, if I have an MPI_Datatype representing a 
> structure with holes between fields, is MPI_Pack just supposed to put the 
> elements of the structure close to each other so that there are no more 
> holes, or can it do something different (compressing, adding metadata, 
> etc.)?

> The reason I'm asking is that I'm designing code to extract chunks of 
> multi-dimensional arrays (basically, a big array has ghost zones and I 
> want to extract the non-ghost data from this big array into a smaller 
> array that is contiguous in memory). This extraction is local and does not 
> involve communication, so right now, after using MPI_Type_create_subarray 
> to create a datatype adapted to my needs, I post an MPI_Irecv with a 
> contiguous array of MPI_CHAR as the type, and then an MPI_Send with one 
> element of the type I just created. I was wondering if MPI_Pack would be 
> equivalent (and maybe more efficient)?

> Thanks

> Matthieu Dorier

> PhD student at ENS Cachan Brittany and IRISA

> http://people.irisa.fr/Matthieu.Dorier

> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

