[mpich-discuss] Why stuck in MPI_Finalize?

Erik Schnetter schnetter at gmail.com
Mon Nov 29 19:23:52 CST 2021


Hui,

Julia is garbage collected, and our MPI wrappers free MPI handles
automatically when the wrapping Julia objects are garbage collected. I
thus don't have direct control over when and which handles are freed
before MPI_Finalize is called at the end of the test case. Our
approach is to let things be as they are, ensuring only that no MPI
calls are made anymore once MPI_Finalized returns true. That means
that different MPI processes might (or might not) free handles before
calling MPI_Finalize. I notice that this seems to work fine on Linux,
but not always on macOS. I don't have a simple reproducer in C.
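
To illustrate the pattern, here is a rough sketch (not MPI.jl's actual
code; `free_handle` stands in for whatever call releases the wrapped
handle, e.g. MPI_Comm_free):

    using MPI

    mutable struct Wrapped{T}
        handle::T
    end

    function wrap(handle)
        obj = Wrapped(handle)
        finalizer(obj) do o
            # The GC may run this at any time, on any rank, in any order.
            # Once MPI_Finalize has been called we must not make any MPI
            # calls, so the finalizer simply does nothing in that case.
            MPI.Finalized() || free_handle(o.handle)  # free_handle is hypothetical
        end
        return obj
    end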

Pointers for setting up Julia CI: see e.g.
<https://github.com/JuliaParallel/MPI.jl/blob/master/.github/workflows/UnitTests.yml>
which runs the unit tests for the Julia package `MPI.jl`, which wraps
MPI implementations. There are several stanzas; the one called
`test-system-brew` runs MPI.jl against a system-provided ("external")
MPI installation that was installed via Homebrew on macOS. You would
replace the step "Install MPI via homebrew" with a step that downloads
and installs MPICH into /usr/local. I'll be happy to discuss details,
possibly over Zoom.

-erik


On Mon, Nov 29, 2021 at 5:12 PM Zhou, Hui <zhouh at anl.gov> wrote:
>
> Hi Erik,
>
> It is not just a barrier in MPI_Finalize. It also needs to make sure pending communications are completed. Sometimes this is the user's fault, such as mismatched communication or partially freed communicators. But sometimes it is due to the lower-level network library; for example, some libfabric providers do not always flush sends. We have tried to detect and prevent all cases that result in hanging, but there always seem to be cases that escape our fixes. If you can drill down to a simple reproducible case (even one that hangs only 5% of the time), please create a GitHub issue and we'll track it down.
>
> By the way, if you have pointers on setting up Julia CI testing, we can try to set up nightly testing on our end as well. Catching errors when they appear always makes troubleshooting easier.
>
> --
> Hui
> ________________________________
> From: Erik Schnetter via discuss <discuss at mpich.org>
> Sent: Monday, November 29, 2021 12:30 PM
> To: discuss at mpich.org <discuss at mpich.org>
> Cc: Erik Schnetter <schnetter at gmail.com>
> Subject: [mpich-discuss] Why stuck in MPI_Finalize?
>
> I have a Julia test case on macOS where MPICH randomly gets stuck in
> MPI_Finalize (with about a 5% chance). See e.g.
> https://github.com/JuliaParallel/MPI.jl/runs/4357341818
>
> Can you advise under what circumstances MPICH could get stuck there?
> The respective run uses 3 processes, and all 3 processes call into
> MPI_Finalize, but no process returns.
>
> I assume that MPI_Finalize internally contains the equivalent of an
> MPI_Barrier, but that should succeed here. Are there other actions
> taken in MPI_Finalize that would require some kind of consistent state
> across the application? For example, if a communicator was created on
> all processes, but freed only on some processes, could this cause such
> a deadlock?
>
> -erik
>
> --
> Erik Schnetter <schnetter at gmail.com>
> http://www.perimeterinstitute.ca/personal/eschnetter/
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss



-- 
Erik Schnetter <schnetter at gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/

