[mpich-discuss] How to terminate MPI_Comm_accept

Roy, Hirak Hirak_Roy at mentor.com
Thu Oct 9 11:26:19 CDT 2014


Thanks, Sangmin, for your suggestion.



I have taken a similar approach: I notify one of the already-connected clients to reconnect, and that extra connection terminates the blocking accept.
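For the archive, here is a minimal sketch of that pattern (not my exact code; the shutdown value, the tags, and the way the port name reaches the client are illustrative):

    /* Server thread (fragment, MPI_Init_thread/MPI_Finalize omitted):
     * sits in an accept loop; a client that reconnects can deliver a
     * shutdown command that ends the loop. */
    char port_name[MPI_MAX_PORT_NAME];
    MPI_Open_port(MPI_INFO_NULL, port_name);
    /* ... make port_name available to the clients (file, name service, ...) ... */
    int running = 1;
    while (running) {
        MPI_Comm client;
        MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);
        int cmd;
        MPI_Recv(&cmd, 1, MPI_INT, 0, 0, client, MPI_STATUS_IGNORE);
        if (cmd == -1)          /* -1 = "shut down" (arbitrary convention) */
            running = 0;
        MPI_Comm_disconnect(&client);
    }
    MPI_Close_port(port_name);

    /* Client side: when asked to, it connects once more purely to deliver
     * the shutdown command, which unblocks the accept on the server. */
    MPI_Comm server;
    MPI_Comm_connect(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &server);
    int cmd = -1;
    MPI_Send(&cmd, 1, MPI_INT, 0, 0, server);
    MPI_Comm_disconnect(&server);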



Regards,

Hirak


Hirak,



I would like to correct my previous response. We discussed this issue within the MPICH team and concluded that it falls into a gray area of the standard (the behavior is not clearly specified when these functions are used from multiple threads). So, because you were using different communicators, we are not sure whether your use case is right or wrong. By the way, you may be able to work around the issue by spawning a new process that connects back.
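Roughly, the spawn idea would look like the following (only a sketch; the helper executable name and the way the port name is passed are placeholders):

    /* Parent (the process whose thread blocks in MPI_Comm_accept):
     * spawn a small helper whose only job is to connect back to the
     * published port, so the pending accept completes and the server
     * can notice a shutdown request. */
    MPI_Comm helper;
    char *helper_argv[] = { port_name, NULL };   /* hand the port to the helper */
    MPI_Comm_spawn("accept_unblocker", helper_argv, 1, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &helper, MPI_ERRCODES_IGNORE);

    /* Helper program: the port name arrives as argv[1]; it just connects,
     * optionally sends a shutdown message, and disconnects:
     *     MPI_Comm server;
     *     MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_SELF, &server);
     *     MPI_Comm_disconnect(&server);
     */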



Thanks,

Sangmin


On Oct 9, 2014, at 7:27 AM, Roy, Hirak <Hirak_Roy at mentor.com> wrote:



Thanks, Sangmin.

I will change my strategy for terminating the accept.

Regards,

Hirak


Hi Hirak,



I checked your code, and I think it is incorrect because it tries to establish communication within a single process. You are using two threads and two duplicated communicators, but both are associated with the same process. MPI_Comm_accept() and MPI_Comm_connect() establish communication between "two sets of MPI processes" (i.e., at least two different processes are needed). Please refer to Section 10.4, Establishing Communication (p. 387), of the MPI 3.0 standard.
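As a rough illustration of the intended two-process usage (a sketch only, not tested; the service name is arbitrary, and with MPICH the name-publishing calls may require a name server, otherwise the port string can be exchanged through a file):

    /* Process A ("server"), started separately, e.g.:  mpiexec -n 1 ./server */
    char port[MPI_MAX_PORT_NAME];
    MPI_Open_port(MPI_INFO_NULL, port);
    MPI_Publish_name("example_service", MPI_INFO_NULL, port);
    MPI_Comm inter;
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &inter);  /* blocks until B connects */

    /* Process B ("client"), a separate launch, e.g.:  mpiexec -n 1 ./client */
    char port[MPI_MAX_PORT_NAME];
    MPI_Lookup_name("example_service", MPI_INFO_NULL, port);
    MPI_Comm inter;
    MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &inter);
    /* "inter" is now an intercommunicator connecting the two processes. */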



- Sangmin


From: Roy, Hirak

Sent: Thursday, October 09, 2014 2:17 AM

To: discuss at mpich.org

Subject: Re: How to terminate MPI_Comm_accept


Hi Pavan,

Here is the code attached. Let me know if you think the code is incorrect.

I used MPICH-3.0.4 with the sock device.

Compile command:

export MPI_ROOT = /home/hroy/local/mpich-3.0.4/linux_x86_64
#export MPI_ROOT = /home/hroy/local/mpich-3.0.4.nemesis/linux_x86_64
export MPI_BIN = ${MPI_ROOT}/bin
export MPI_LIB = ${MPI_ROOT}/lib
export INCLUDE_DIR = ${MPI_ROOT}/include

compile:
        g++ -g main.cpp -I ${INCLUDE_DIR} ${MPI_LIB}/libmpich.a ${MPI_LIB}/libmpl.a -lpthread

Thanks,

Hirak

Can you send us a simple program that reproduces the issue?

  - Pavan


On Oct 8, 2014, at 2:54 PM, Roy, Hirak <Hirak_Roy at mentor.com> wrote:

> Hi Pavan,
>
> I even tried duplicating the communicator (MPI_COMM_SELF -> DUP1, DUP2) before making any other MPI calls.
> Still it does not work.
>
> Thanks,
> Hirak