[mpich-discuss] From file reading to memory sharing

Jeff Hammond jeff.science at gmail.com
Wed Aug 12 11:46:52 CDT 2015


On Wed, Aug 12, 2015 at 9:27 AM, Dorier, Matthieu <mdorier at anl.gov> wrote:

> Hi,
>
> I'm trying to refactor an MPI code using MPI one-sided communications.
>
> The initial version of the code reads its data from a file containing a 3D
> array of floats. Each process has a series of subdomains (blocks) to load
> from the file, so they all open the file and then issue a series of
> MPI_File_set_view and MPI_File_read. The type passed to MPI_File_set_view
> is constructed using MPI_Type_create_subarray to match the block that needs
> to be loaded.
>
> This code performs very poorly even at small scale: the file is 7 GB, but
> the blocks are only a few hundred bytes each, and each process has many
> blocks to load.
>
>
What is the root cause for the poor performance here?


> Instead, I would like to have process rank 0 load the entire file, then
> expose it over RMA. I'm not
>

I don't understand why you want to serialize your I/O through rank 0, limit
your code to loading files that fit into the memory of one process, and
then force every single process to obtain data via a point-to-point
operation with one process.  This seems triply unscalable.

Perhaps you can address your performance issues without compromising
scalability.

Do you have an MCVE (http://stackoverflow.com/help/mcve) for this?

Jeff


> familiar at all with MPI one-sided operations, since I never used them
> before, but I guess there should be a simple way to reuse the subarray
> datatype of my MPI_File_set_view and use it in the context of an MPI_Get.
> I'm just not sure what the arguments of this MPI_Get would be. My guess is:
> origin_count would be the number of floats in a single block,
> origin_datatype would be MPI_FLOAT, target_rank = 0, target_disp = 0,
> target_count = 1, target_datatype = my subarray datatype. Would that be
> correct?
>
> Thanks,
>
> Matthieu
>
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss
>



-- 
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/