[mpich-discuss] MPI I/O split collective implementation: MPI_File_write_all_begin blocking or non-blocking?

Rob Latham robl at mcs.anl.gov
Tue Feb 9 15:17:06 CST 2016



On 02/09/2016 01:42 PM, pramod kumbhar wrote:

>
> In this case I still see similar time spent in MPI_Wait. I added
> MPI_Test for the non-blocking request as:
>
> some_compute_fun() {
>     MPI_Test(&req, &flag, &status);
>     .....some_flops_here......
> }
>
> err = MPI_File_iwrite_all( fh, buf, bufsize, MPI_INT, &req);
> some_compute_fun();
> MPI_Wait(&req, &status);
>
> Now I see that the i/o operation progresses in the background and I don't see any
> time spent in MPI_Wait.


It's a subtle but important point: the standard suggests that progress 
might happen in the background.  It does not mandate it.

We have the notions of a "strict interpretation" of the progress rule, 
where background progress will happen, and a "weak interpretation", 
where progress only happens while one is inside an MPI call.

When you call MPI_Test, the MPI library can, as we say, "kick the 
progress engine" and allow outstanding requests to make some progress.
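
For concreteness, here is a minimal, self-contained sketch of that 
pattern (the file name "out.dat", the element count, and the "work 
slice" are placeholders of mine, and error checking is omitted):

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        const int count = 1 << 20;              /* placeholder size */
        int *buf = malloc(count * sizeof(int));
        for (int i = 0; i < count; i++)
            buf[i] = rank;

        /* each rank writes its own contiguous chunk of the file */
        MPI_File_seek(fh, (MPI_Offset)rank * count * sizeof(int), MPI_SEEK_SET);

        MPI_Request req;
        MPI_File_iwrite_all(fh, buf, count, MPI_INT, &req);

        int flag = 0;
        while (!flag) {
            /* ...some_flops_here...: do a slice of useful work, then
               poke the progress engine so the outstanding I/O can advance */
            MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
        }

        MPI_File_close(&fh);
        free(buf);
        MPI_Finalize();
        return 0;
    }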

MPICH takes a weak interpretation of the progress rule, but one can 
emulate strict interpretation by spawning a thread whose only job in 
life is to call MPI_Test repeatedly.
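
A rough sketch of that emulation, assuming the library was initialized 
with MPI_THREAD_MULTIPLE (progress_loop and the usage outline are 
made-up names, not an MPICH API):

    #include <mpi.h>
    #include <pthread.h>
    #include <unistd.h>

    /* a "progress thread" whose only job is to poke the MPI progress
       engine until the outstanding request completes; requires that
       MPI was initialized with MPI_THREAD_MULTIPLE */
    static void *progress_loop(void *p)
    {
        MPI_Request *req = p;
        int flag = 0;
        while (!flag) {
            MPI_Test(req, &flag, MPI_STATUS_IGNORE);
            usleep(100);    /* back off so we don't burn a whole core */
        }
        return NULL;
    }

    /* usage sketch:
           int provided;
           MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
           ...
           MPI_File_iwrite_all(fh, buf, count, MPI_INT, &req);
           pthread_t tid;
           pthread_create(&tid, NULL, progress_loop, &req);
           some_compute_fun();        // no MPI_Test calls needed here
           pthread_join(tid, NULL);   // I/O is complete once this returns
    */

Note that the main thread should not also call MPI_Wait or MPI_Test on 
the same request while the progress thread owns it; joining the thread 
is the completion point.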

==rob

>
> A few months ago I was testing non-blocking collective operations in
> mvapich and saw similar behaviour, where I had to use a few MPI_Test calls
> to make progress with non-blocking collectives (this will be addressed
> in upcoming releases).
>
> I am testing the above i/o example on my laptop and haven't tested other
> non-blocking collectives. Before analysing/profiling this further on a
> production cluster / bg-q, I would like to know:
> - what is the expected behaviour?
> - are there any specific options that I have to use to ensure
> internal progress?
>
> -Pramod
>
> p.s. I just built mpich-3.2 with configure, make, make install on OS X,
> without looking into all configure options...
>
> On Tue, Feb 9, 2016 at 3:51 AM, Rob Latham <robl at mcs.anl.gov> wrote:
>
>
>
>     On 02/08/2016 06:04 PM, pramod kumbhar wrote:
>
>         thanks! MPI_File_iwrite_all is working as expected.
>
>
>     I'm always glad to see folks looking at the MPI-IO routines.  Let us
>     know how your research progresses.
>
>     ==rob
>
>
>         -Pramod
>
>         On Tue, Feb 9, 2016 at 12:52 AM, Balaji, Pavan <balaji at anl.gov> wrote:
>
>
>              I believe it is valid to do that in split collectives.  If you want
>              truly nonblocking behaviour, the standard way to do it would be to use
>              nonblocking collective I/O, i.e., MPI_File_iwrite_all.  This is
>              guaranteed to not block during the MPI_File_iwrite_all call.
>
>                 -- Pavan
>
>               > On Feb 8, 2016, at 5:48 PM, pramod kumbhar <pramod.s.kumbhar at gmail.com> wrote:
>               >
>               > Hello All,
>               >
>               > I am testing MPI_File_write_all_begin/_end and it seems like
>               > MPI_File_write_all_begin waits for i/o completion.
>               >
>               > Could someone confirm the implementation of these routines?
>               >
>               > Thanks,
>               > Pramod
>
_______________________________________________
discuss mailing list     discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss


More information about the discuss mailing list