[mpich-discuss] Isend and Test

Zhen Wang toddwz at gmail.com
Mon Apr 25 15:25:14 CDT 2016


Rob,

Thanks for your reply. Let me rephrase my question. I'm simulating what I
do in a more complicated code: P0 Isends data to P1 and then does
computations, while P1 needs the data for its computation, so it calls
Recv. As you said, P0 calls MPI_Test periodically. What puzzles me is why
two calls to MPI_Test are needed before P1 receives the data. (With Intel
MPI, the number jumps to around 6.) The number of MPI_Test calls affects
performance. Thanks.
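
For reference, here is a minimal sketch of the pattern I mean (the message
size, tag, and the do_computation() placeholder are only illustrative, not
taken from my actual code):

#include <mpi.h>
#include <stdio.h>

#define N 1000000               /* illustrative message size */

static void do_computation(void) { /* placeholder for real work */ }

int main(int argc, char **argv)
{
    static double buf[N];
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Request req;
        int done = 0, ntests = 0;

        /* start the send, then overlap computation with it */
        MPI_Isend(buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
        while (!done) {
            do_computation();
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
            ntests++;
        }
        printf("MPI 0: Isend completed after %d MPI_Test calls\n", ntests);
    } else if (rank == 1) {
        MPI_Recv(buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("MPI 1: Recv finished\n");
    }

    MPI_Finalize();
    return 0;
}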

Best regards,
Zhen

On Mon, Apr 25, 2016 at 4:13 PM, Rob Latham <robl at mcs.anl.gov> wrote:

>
>
> On 04/25/2016 02:57 PM, Zhen Wang wrote:
>
>> Hi,
>>
>> I have questions regarding MPI_Isend and MPI_Test. A sample code is
>> attached; its output is shown below.
>>
>> For instance, both Isend and Recv of 0 start at 15:46:14, but two
>> MPI_Tests are called before Recv of 0 finishes. My understanding is that
>> the first Test receives the signal that the receiver is ready, and the
>> data transfer follows. Then the second Test sees that the transfer is
>> complete and frees the MPI_Request. Is my understanding correct? Thanks.
>>
>
> MPI_Test() is non-blocking.  MPI_Test() is not going to wait for anything
> to happen -- that's what MPI_Wait() does.
>
> Under the hood, MPI_Test "kicks the progress engine", which means
> everything that can execute right now will happen.  If anything still
> requires waiting (receiving a signal, in your case), MPI_Test simply
> returns.
>
> A data transfer might take several calls to MPI_Test to complete.  Or, if
> there's a background progress thread churning along, you may only need to
> make one call to MPI_Test.  It's implementation-dependent.
>
> You are asking the wrong question, though.  At this level, it doesn't
> matter what MPI_Test does.  If your code has something it can do while the
> Isend progresses, then go do that and call MPI_Test periodically.  If you
> have nothing productive to do, call MPI_Wait() (or one of its variants).
>
> ==rob
>
>
>>
>> Best regards,
>> Zhen
>>
>> MPI 1: Recv of 0 started at 15:46:14.
>> MPI 0: Isend of 0 started at 15:46:14.
>> MPI 0: Isend of 1 started at 15:46:14.
>> MPI 0: Isend of 2 started at 15:46:14.
>> MPI 0: Isend of 3 started at 15:46:14.
>> MPI 0: Isend of 4 started at 15:46:14.
>> MPI 0: MPI_Test of 0 at 15:46:16.
>> MPI 0: MPI_Test of 0 at 15:46:18.
>> MPI 0: Isend of 0 finished at 15:46:18.
>> MPI 1: Recv of 0 finished at 15:46:18.
>> MPI 1: Recv of 1 started at 15:46:18.
>> MPI 0: MPI_Test of 1 at 15:46:20.
>> MPI 0: MPI_Test of 1 at 15:46:22.
>> MPI 0: Isend of 1 finished at 15:46:22.
>> MPI 1: Recv of 1 finished at 15:46:22.
>> MPI 1: Recv of 2 started at 15:46:22.
>> MPI 0: MPI_Test of 2 at 15:46:24.
>> MPI 0: MPI_Test of 2 at 15:46:26.
>> MPI 0: Isend of 2 finished at 15:46:26.
>> MPI 1: Recv of 2 finished at 15:46:26.
>> MPI 1: Recv of 3 started at 15:46:26.
>> MPI 0: MPI_Test of 3 at 15:46:28.
>> MPI 0: MPI_Test of 3 at 15:46:30.
>> MPI 0: Isend of 3 finished at 15:46:30.
>> MPI 1: Recv of 3 finished at 15:46:30.
>> MPI 1: Recv of 4 started at 15:46:30.
>> MPI 0: MPI_Test of 4 at 15:46:32.
>> MPI 0: MPI_Test of 4 at 15:46:34.
>> MPI 0: Isend of 4 finished at 15:46:34.
>> MPI 1: Recv of 4 finished at 15:46:34.
>>
>>
>>
_______________________________________________
discuss mailing list     discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss