[mpich-discuss] [EXTERNAL] Re: Calling MPI_Finalize or not

Mccall, Kurt E. (MSFC-EV41) kurt.e.mccall at nasa.gov
Sun Feb 21 16:12:52 CST 2021


Hui,

So are you saying that the worker can call MPI_Finalize and it won't wait for the manager that spawned it to do the same? That would be ideal.

Thanks,
Kurt

From: Zhou, Hui <zhouh at anl.gov>
Sent: Sunday, February 21, 2021 4:07 PM
To: discuss at mpich.org
Cc: Mccall, Kurt E. (MSFC-EV41) <kurt.e.mccall at nasa.gov>
Subject: [EXTERNAL] Re: Calling MPI_Finalize or not

The worker should call `MPI_Finalize` before it exits. If it hangs even though all of its resources have been freed/disconnected, that is a bug; please file an issue and we'll track it down. Each spawned process has a different `MPI_COMM_WORLD` from the process that spawned it.
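For illustration, a minimal worker-side shutdown sketch along those lines (the flow and names are illustrative only, not from this thread or the MPICH docs; error handling and the actual work are omitted):

    /* Worker: disconnect from the spawning manager, then finalize. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm parent;

        MPI_Init(&argc, &argv);
        MPI_Comm_get_parent(&parent);      /* inter-communicator to the manager */

        /* ... communicate with the manager over `parent` ... */

        if (parent != MPI_COMM_NULL)
            MPI_Comm_disconnect(&parent);  /* collective: the manager must make
                                              the matching call on its side */

        MPI_Finalize();  /* after the disconnect, this should not have to wait
                            on the manager's MPI_COMM_WORLD */
        return 0;
    }

Note that `MPI_Comm_disconnect` is collective over the inter-communicator, so the worker's disconnect cannot complete until the manager makes the matching call.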

--
Hui Zhou


From: Mccall, Kurt E. (MSFC-EV41) via discuss <discuss at mpich.org>
Date: Sunday, February 21, 2021 at 3:06 PM
To: discuss at mpich.org <discuss at mpich.org>
Cc: Mccall, Kurt E. (MSFC-EV41) <kurt.e.mccall at nasa.gov>
Subject: [mpich-discuss] Calling MPI_Finalize or not

Two questions:

I have a group of manager processes, each of which uses MPI_Comm_spawn() to create individual worker processes one at a time. Occasionally, a worker has to exit and is replaced by another worker via MPI_Comm_spawn(). If the exiting worker calls MPI_Finalize(), will it sit there waiting for all other managers and workers in MPI_COMM_WORLD to do the same, or will it wait only for the manager that created its inter-communicator to call MPI_Finalize()?

In either case, maybe it is better for the worker NOT to call MPI_Finalize at all, so that it doesn't sit there taking up a core's resources while it waits for the others. What are the implications of not calling MPI_Finalize in the workers?
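For reference, the spawn/replace pattern looks roughly like this (a sketch only; the executable name "worker", the single-process count, and the missing error handling are all placeholders):

    /* Manager: spawn one worker, replace it when it exits. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm worker;

        MPI_Init(&argc, &argv);

        MPI_Comm_spawn("worker", MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &worker, MPI_ERRCODES_IGNORE);

        /* ... work with this worker until it has to exit ... */

        MPI_Comm_disconnect(&worker);   /* matches the exiting worker's call */

        /* Replace the exited worker with a fresh one. */
        MPI_Comm_spawn("worker", MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &worker, MPI_ERRCODES_IGNORE);

        /* ... */

        MPI_Comm_disconnect(&worker);
        MPI_Finalize();
        return 0;
    }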

Thanks,
Kurt