[mpich-discuss] MPI I/O split collective implementation: MPI_File_write_all_begin blocking or non-blocking?

pramod kumbhar pramod.s.kumbhar at gmail.com
Tue Feb 9 13:42:26 CST 2016


I have an additional question about the internal progress of these
non-blocking routines. Here is a simple test case:

err = MPI_File_iwrite_all( fh, buf, bufsize, MPI_INT, &req);
MPI_Wait(&req, &status);

Here MPI_File_iwrite_all returns immediately and profiling shows time spent
in MPI_Wait.
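
(For reference, a minimal timing sketch of what I mean here; the MPI_Wtime
timers and the print format are just illustrative, not my actual harness:)

double t0 = MPI_Wtime();
err = MPI_File_iwrite_all(fh, buf, bufsize, MPI_INT, &req);
double t1 = MPI_Wtime();
MPI_Wait(&req, &status);
double t2 = MPI_Wtime();
printf("iwrite_all: %.3f s  MPI_Wait: %.3f s\n", t1 - t0, t2 - t1);

Measured this way, t1 - t0 is negligible and essentially all of the time
shows up in the MPI_Wait interval.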

Next, I modified the example to overlap the I/O with some computation:

void some_compute_fun(void) {
    /* ...some flops here... */
}

err = MPI_File_iwrite_all( fh, buf, bufsize, MPI_INT, &req);
some_compute_fun();
MPI_Wait(&req, &status);

In this case I still see similar time spent in MPI_Wait. I then added an
MPI_Test call on the non-blocking request inside the compute routine:

void some_compute_fun(void) {
    MPI_Test(&req, &flag, &status);
    /* ...some flops here... */
}

err = MPI_File_iwrite_all( fh, buf, bufsize, MPI_INT, &req);
some_compute_fun();
MPI_Wait(&req, &status);

Now I see that the I/O operation progresses in the background and I don't see
any time spent in MPI_Wait.
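
For completeness, here is a self-contained sketch along the lines of what I am
running (the file name, buffer size, flop loop and the MPI_Test polling
interval are placeholders of my own, not from any particular benchmark):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define COUNT (1 << 20)   /* ints per rank; placeholder size */

/* placeholder compute kernel that also polls MPI_Test so the
   pending I/O request can make progress */
static void some_compute_fun(MPI_Request *req)
{
    MPI_Status status;
    int flag;
    double x = 0.0;
    for (int i = 0; i < 100; i++) {
        MPI_Test(req, &flag, &status);      /* drives internal progress */
        for (int j = 0; j < 1000000; j++)
            x += (double)j * 1e-9;          /* ...some flops here... */
    }
    if (x < 0.0)
        printf("%f\n", x);                  /* defeat dead-code elimination */
}

int main(int argc, char **argv)
{
    int rank;
    MPI_File fh;
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *buf = malloc(COUNT * sizeof(int));
    for (int i = 0; i < COUNT; i++)
        buf[i] = rank;

    MPI_File_open(MPI_COMM_WORLD, "testfile.out",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    /* each rank writes a contiguous block at its own offset */
    MPI_File_set_view(fh, (MPI_Offset)rank * COUNT * sizeof(int),
                      MPI_INT, MPI_INT, "native", MPI_INFO_NULL);

    double t0 = MPI_Wtime();
    MPI_File_iwrite_all(fh, buf, COUNT, MPI_INT, &req);
    some_compute_fun(&req);                 /* overlap compute with the write */
    double t1 = MPI_Wtime();
    MPI_Wait(&req, &status);
    double t2 = MPI_Wtime();

    if (rank == 0)
        printf("compute+iwrite: %.3f s  wait: %.3f s\n", t1 - t0, t2 - t1);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}

With the MPI_Test calls in the compute loop the final MPI_Wait returns almost
immediately; without them, the wait time looks the same as in the first example.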

A few months ago I was testing non-blocking collective operations in MVAPICH
and saw similar behaviour, where I had to use a few MPI_Test calls to make
progress on the non-blocking collectives (this will be addressed in upcoming
releases).

I am testing the above I/O example on my laptop and haven't tested other
non-blocking collectives. Before analysing/profiling this further on a
production cluster / BG-Q, I would like to know:
- what is the expected behaviour?
- are there any specific options I have to use to ensure internal progress?

-Pramod

P.S. I just built mpich-3.2 with configure, make, make install on OS X,
without looking into all the configure options...

On Tue, Feb 9, 2016 at 3:51 AM, Rob Latham <robl at mcs.anl.gov> wrote:

>
>
> On 02/08/2016 06:04 PM, pramod kumbhar wrote:
>
>> Thanks! MPI_File_iwrite_all is working as expected.
>>
>
> I'm always glad to see folks looking at the MPI-IO routines.  Let us know
> how your research progresses.
>
> ==rob
>
>
>> -Pramod
>>
>> On Tue, Feb 9, 2016 at 12:52 AM, Balaji, Pavan <balaji at anl.gov> wrote:
>>
>>
>>     I believe it is valid to do that in split collectives.  If you want
>>     truly nonblocking nature, the standard way to do it would be to use
>>     nonblocking collective I/O, i.e., MPI_File_iwrite_all.  This is
>>     guaranteed to not block during the MPI_File_iwrite_all call.
>>
>>        -- Pavan
>>
>>      > On Feb 8, 2016, at 5:48 PM, pramod kumbhar <pramod.s.kumbhar at gmail.com> wrote:
>>      >
>>      > Hello All,
>>      >
>>      > I am testing MPI_File_write_all_begin/_end and it seems like
>>      > MPI_File_write_all_begin waits for i/o completion.
>>      >
>>      > Could someone confirm the implementation of these routines?
>>      >
>>      > Thanks,
>>      > Pramod