[mpich-discuss] MPI_Comm_connect hangs in client/server mode when run with multiple processes
Shuwei Zhao
shuweizhao1991 at gmail.com
Wed Feb 20 21:23:30 CST 2019
Hi, Joachim,
Thanks a lot for your response. Is it possible for rank 0 in the server
to talk to each process in the client? If so, do I just need to replace
MPI_COMM_WORLD with MPI_COMM_SELF on the server side?
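In other words, something like this fragment (just to check my
understanding; "port" is assumed to come from MPI_Open_port on the server
and to be passed to the client processes out of band):

    /* server: only rank 0 calls accept, over MPI_COMM_SELF */
    if (rank == 0)
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &newcomm);

    /* client: every rank calls connect, over MPI_COMM_WORLD */
    MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &newcomm);
    /* the resulting intercommunicator would have server rank 0 in one
       group and all client ranks in the other */
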
Thanks,
Shuwei
On Wed, Feb 20, 2019 at 5:11 PM Protze, Joachim <protze at itc.rwth-aachen.de>
wrote:
> Shuwei,
>
> Both MPI calls are collective. This means all ranks in the provided
> communicator must call the function.
> If you want to connect the two applications so that any rank in the server
> can communicate with any rank in the client, all processes on each side
> need to call the respective function.
> If only rank 0 should ever communicate, you can replace MPI_COMM_WORLD with
> MPI_COMM_SELF on both sides. The resulting newcomm will be an
> intercommunicator with one rank on each side.
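>
> A minimal, self-contained sketch of that MPI_COMM_SELF variant (the
> binary name a.out and passing the port on the command line are only an
> example; the port string must reach the client out of band, e.g. via a
> file or the shell):
>
>     #include <mpi.h>
>     #include <stdio.h>
>     #include <string.h>
>
>     /* run as: mpiexec -n 1 a.out server, then
>        mpiexec -n 1 a.out client '<port>' */
>     int main(int argc, char **argv)
>     {
>         char port[MPI_MAX_PORT_NAME];
>         MPI_Comm newcomm;
>         MPI_Init(&argc, &argv);
>         if (strcmp(argv[1], "server") == 0) {
>             MPI_Open_port(MPI_INFO_NULL, port);
>             printf("port: %s\n", port);
>             fflush(stdout);
>             /* accept over MPI_COMM_SELF: only this rank participates */
>             MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &newcomm);
>         } else {
>             /* connect over MPI_COMM_SELF, with the port from the server */
>             MPI_Comm_connect(argv[2], MPI_INFO_NULL, 0, MPI_COMM_SELF, &newcomm);
>         }
>         MPI_Comm_disconnect(&newcomm);
>         MPI_Finalize();
>         return 0;
>     }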
>
> Best
> Joachim
>
> --
> Sent from my mobile phone
>
> On Wed, Feb 20, 2019 at 10:54 PM +0100, "Shuwei Zhao via discuss" <
> discuss at mpich.org> wrote:
>
>> Hi,
>>
>> I was trying to study the usage of the MPI client-server model, based on
>> the example you provided at
>> https://www.mcs.anl.gov/research/projects/mpi/mpi-standard/mpi-report-2.0/node106.htm.
>> After writing simple, separate client and server source files based on the
>> samples on your website, I found the following:
>> 1) When the client and server each run as a single process (mpiexec -n 1
>> server, mpiexec -n 1 client), it works fine: the connection between server
>> and client is built successfully.
>> 2) When the client runs as a single process but the server runs with
>> multiple processes (2, for example), and only rank 0 of the server calls
>> MPI_Comm_accept while rank 1 just sleeps (see the sketch after this list),
>> e.g. mpiexec -n 1 client, mpiexec -n 2 server: the server blocks in
>> MPI_Comm_accept, which is expected. However, the client blocks in
>> MPI_Comm_connect, which is not expected.
>> 3) When the client and the server both run with multiple processes, and
>> only rank 0 of the client calls MPI_Comm_connect while only rank 0 of the
>> server calls MPI_Comm_accept: same observation as in 2).
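>>
>> For reference, the server in case 2) looks roughly like this (simplified;
>> error checks omitted):
>>
>>     #include <mpi.h>
>>     #include <stdio.h>
>>     #include <unistd.h>
>>
>>     int main(int argc, char **argv)
>>     {
>>         int rank;
>>         char port[MPI_MAX_PORT_NAME];
>>         MPI_Comm client;
>>         MPI_Init(&argc, &argv);
>>         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>         if (rank == 0) {
>>             MPI_Open_port(MPI_INFO_NULL, port);
>>             printf("port: %s\n", port);
>>             fflush(stdout);
>>             /* accept over MPI_COMM_WORLD, but called by rank 0 only */
>>             MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &client);
>>         } else {
>>             sleep(60);  /* rank 1 never calls MPI_Comm_accept */
>>         }
>>         MPI_Finalize();
>>         return 0;
>>     }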
>>
>> May I ask whether the hang of MPI_Comm_connect in 2) and 3) is expected?
>> Is there a way to build the connection successfully when the MPI client
>> and server run with multiple processes?
>>
>> Thanks a lot,
>> Shuwei
>>
>