[mpich-discuss] How to terminate MPI_Comm_accept
Roy, Hirak
Hirak_Roy at mentor.com
Wed Oct 8 11:41:10 CDT 2014
Pavan,
Is there any way to come out of the blocking accept?
Thanks,
Hirak
Hmm. This will not work. You are simultaneously doing two collectives on the same communicator (MPI_COMM_SELF) from two different threads: MPI_Comm_accept and MPI_Comm_connect. This is not allowed by the MPI standard.
- Pavan
P.S.: Sorry, I should have caught that yesterday. It's amazing what a good night's sleep can do!
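
[Editorial sketch, not from the thread: one way to satisfy the constraint Pavan describes is to give each thread its own duplicate of MPI_COMM_SELF, so the accept and connect collectives never run concurrently on the same communicator. The names `accept_comm`/`connect_comm` are illustrative, and this assumes MPI was initialized with MPI_THREAD_MULTIPLE.]

```c
/* Sketch: duplicate MPI_COMM_SELF once per thread so that
 * MPI_Comm_accept and MPI_Comm_connect are collectives on two
 * *distinct* communicators, which the MPI standard permits to
 * proceed concurrently from different threads.
 * Assumes MPI_THREAD_MULTIPLE support; names are illustrative. */
#include <mpi.h>

MPI_Comm accept_comm;  /* passed to MPI_Comm_accept by the server thread   */
MPI_Comm connect_comm; /* passed to MPI_Comm_connect by the shutdown thread */

void init_comms(void)
{
    int provided;
    MPI_Init_thread(NULL, NULL, MPI_THREAD_MULTIPLE, &provided);
    /* MPI_Comm_dup creates new, distinct communicators: collectives on
     * accept_comm and connect_comm do not conflict with each other. */
    MPI_Comm_dup(MPI_COMM_SELF, &accept_comm);
    MPI_Comm_dup(MPI_COMM_SELF, &connect_comm);
}
```

The shutdown thread would then call MPI_Comm_connect with `connect_comm` while the accept loop blocks on `accept_comm`, instead of both using MPI_COMM_SELF directly.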
On Oct 8, 2014, at 12:14 AM, Roy, Hirak <Hirak_Roy at mentor.com> wrote:
> Hi Pavan,
>
> Here is my code for thread2 :
>
> do {
>     MPI_Comm newComm ;
>     MPI_Comm_accept (m_serverPort.c_str(), MPI_INFO_NULL, 0, MPI_COMM_SELF, &newComm);
>     Log ("Accepted a connection");
>     int buf = 0 ;
>     MPI_Status status ;
>     MPI_Recv(&buf, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, newComm, &status);
>
>     if (status.MPI_TAG == MPI_MSG_TAG_NEW_CONN) {
>         m_clientComs[m_clientCount] = newComm ;
>         m_clientCount++;
>     } else if (status.MPI_TAG == MPI_MSG_TAG_SHUTDOWN) {
>         Log ("Shutdown");
>         //MPI_Comm_disconnect (&newComm);
>         Log ("Disconnect");
>         break;
>     } else {
>         Log ("Unmatched Receive");
>     }
> } while (1);
>
>
> Here is my code for thread1 to terminate thread2 :
>
> MPI_Comm newComm ;
> MPI_Comm_connect (m_serverPort.c_str(), MPI_INFO_NULL, 0, MPI_COMM_SELF, &newComm);
> Log ("Connect to Self");
> int val = 0 ;
> MPI_Send(&val, 1, MPI_INT, 0, MPI_MSG_TAG_SHUTDOWN, newComm);
> Log ("Successful");
> //MPI_Request req ;  // left over from the non-blocking attempt
> //MPI_Status stat ;
> //MPI_Wait(&req, &stat);
> Log ("Complete");
>
> //MPI_Comm_disconnect(&newComm);
>
> The MPI_Send/MPI_Recv pair blocks indefinitely.
> I am using the sock channel.
> With the nemesis channel, I get the following crash:
> Assertion failed in file ./src/mpid/ch3/channels/nemesis/include/mpid_nem_inline.h at line 58: vc_ch->is_local
> internal ABORT - process 0
>
> I also tried a non-blocking send and receive followed by a wait, but that does not solve the problem either.
>
> Thanks,
> Hirak
>
>
>
> -----
>
> Hirak,
>
> Your approach should work fine. I'm not sure what issue you are facing. I assume thread 1 is doing this:
>
> while (1) {
>     MPI_Comm_accept(..);
>     MPI_Recv(.., tag, ..);
>     if (tag == REGULAR_CONNECTION)
>         continue;
>     else if (tag == TERMINATION) {
>         MPI_Send(..);
>         break;
>     }
> }
>
> In this case, all clients do an MPI_Comm_connect and then send a message with tag = REGULAR_CONNECTION. When thread 2 is done with its work, it'll do an MPI_Comm_connect and then send a message with tag = TERMINATION, wait for a response from thread 1, and call finalize.
>
> - Pavan
>