[mpich-discuss] Questions about MPICH multi-thread support

Guilherme Valarini guilherme.a.valarini at gmail.com
Tue Jan 26 15:24:06 CST 2021


Dear MPICH community,

I am currently developing an event system built on top of MPI, and I have a few
questions about the current state of MPICH's multithreading support in
*MPI_THREAD_MULTIPLE* mode.

*Question 1*. I am aware that the message envelope tuple (source,
destination, tag, communicator) is used to identify and match
messages. When using MPICH, can I rely on this envelope alone to
guarantee that each message is matched with the intended receive in a
multiprocess, multithreaded program? I know this is quite a broad
question, so I am mainly interested in the analysis of the following
scenarios.


   - Two threads in a process A want to send different messages to two
   different processes, B and C (each with one thread), using the
   *same communicator and tag*. Code example:

  // One process with N threads to N processes with one thread each (same tag)
  if (rank == 0) {
    std::thread t1([&]() {
      // Generate data ...
      MPI_Send(data, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    });
    std::thread t2([&]() {
      // Generate data ...
      MPI_Send(data, 1, MPI_CHAR, 2, 0, MPI_COMM_WORLD);
    });
    t1.join(); t2.join();
  } else {
    MPI_Recv(data, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  }

   - The same scenario as above, but now each thread uses a *different tag*
   to communicate with its respective process, sharing only the *same
   communicator*. Code example:

  // One process with N threads to N processes with one thread each (different tags)
  if (rank == 0) {
    std::thread t1([&]() {
      // Generate data ...
      MPI_Send(data, 1, MPI_CHAR, 1, 1, MPI_COMM_WORLD);
    });
    std::thread t2([&]() {
      // Generate data ...
      MPI_Send(data, 1, MPI_CHAR, 2, 2, MPI_COMM_WORLD);
    });
    t1.join(); t2.join();
  } else {
    MPI_Recv(data, 1, MPI_CHAR, 0, rank, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  }
>

   - Multiple threads of a process A each want to send a message to the
   same thread of another process B using the *same communicator and
   tag*. Code example:

  // One process with N threads to one process with one thread (same tag)
  if (rank == 0) {
    std::thread t1([&]() {
      // Generate data ...
      MPI_Send(data, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    });
    std::thread t2([&]() {
      // Generate data ...
      MPI_Send(data, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    });
    t1.join(); t2.join();
  } else if (rank == 1) {
    MPI_Recv(data, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    // Process data ...
    MPI_Recv(data, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    // Process data ...
  }
>

   - One thread of a process A sends multiple messages to a process B,
   each of which should be received by one of B's multiple threads, all
   using the *same communicator and tag*. Code example:

  // One process with one thread to one process with N threads (same tag)
  if (rank == 0) {
    MPI_Request requests[2];
    // Generate data1 ... (each pending MPI_Isend needs its own buffer,
    // which must stay untouched until the send completes)
    MPI_Isend(data1, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &requests[0]);
    // Generate data2 ...
    MPI_Isend(data2, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &requests[1]);

    MPI_Waitall(2, requests, MPI_STATUSES_IGNORE);
  } else if (rank == 1) {
    std::thread t1([&]() {
      // Receive into a per-thread buffer
      MPI_Recv(data1, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      // Process data ...
    });
    std::thread t2([&]() {
      MPI_Recv(data2, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      // Process data ...
    });
    t1.join(); t2.join();
  }
>

   - Two threads of a process A each want to send a message to two
   threads of a process B, using a different tag for each pair of threads
   (e.g. the pair A.1/B.1 uses a different tag from the pair A.2/B.2). Code
   example:

  // One process with N threads to one process with N threads (different tags)
  if (rank == 0) {
    std::thread t1([&]() {
      // Generate data ...
      MPI_Send(data, 1, MPI_CHAR, 1, 1, MPI_COMM_WORLD);
    });
    std::thread t2([&]() {
      // Generate data ...
      MPI_Send(data, 1, MPI_CHAR, 1, 2, MPI_COMM_WORLD);
    });
    t1.join(); t2.join();
  } else if (rank == 1) {
    std::thread t1([&]() {
      MPI_Recv(data, 1, MPI_CHAR, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      // Process data ...
    });
    std::thread t2([&]() {
      MPI_Recv(data, 1, MPI_CHAR, 0, 2, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      // Process data ...
    });
    t1.join(); t2.join();
  }
>

*Question 2*. Is there any problem with mixing blocking and non-blocking
calls on opposite sides of a message (e.g. matching an *MPI_Isend* with an
*MPI_Recv*, and vice versa), even in the multithreaded scenarios described
above?
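For concreteness, a minimal self-contained sketch of the mixed pairing I mean
(a non-blocking *MPI_Isend* on one side matched by a plain blocking *MPI_Recv*
on the other); the buffer names here are purely illustrative:

```cpp
// Two-rank sketch: rank 0 posts a non-blocking send, rank 1 matches it
// with a blocking receive. As I understand it, matching is driven by the
// (source, tag, communicator) envelope, independent of which send/receive
// variant each side uses.
#include <mpi.h>

int main(int argc, char **argv) {
  int provided, rank;
  MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  char data = 'x';
  if (rank == 0) {
    MPI_Request req;
    MPI_Isend(&data, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);   // non-blocking side
  } else if (rank == 1) {
    MPI_Recv(&data, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);         // blocking side
  }

  MPI_Finalize();
  return 0;
}
```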

Thank you.

Regards,
Guilherme Valarini