<meta http-equiv="Content-Type" content="text/html; charset=utf-8"><div dir="ltr">Hi Martin,<div>Thank you so much for taking the time to look at this so carefully - especially on a Friday before a holiday weekend!</div><div>You reproduced the behavior I am seeing precisely, where things run fine on an interactive node. Just hacking something to work intra-node would be fine as a start. This gives me a much better understanding so I can play around more.</div><div>I'll report back if I manage to make some progress or can come up with an intelligent question :)</div><div>Take care,<br>Heather</div><br><div class="gmail_quote"><div dir="ltr">On Fri, Aug 31, 2018 at 7:13 PM Martin Cuma <<a href="mailto:martin.cuma@utah.edu">martin.cuma@utah.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Heather,<br>
<br>
this is a nicely complex problem that I can't say I know a solution to,<br>
but let me share what I know and perhaps it'll shed some light on the<br>
problem.<br>
<br>
To answer your question on how mpirun interacts with srun (or SLURM in<br>
general): most MPIs (or, more precisely, the PMIs that MPIs use for process<br>
launch) these days have SLURM support, so when built against it they can<br>
leverage SLURM. Alternatively, SLURM may be set up to facilitate the remote<br>
node connection (e.g. by hijacking ssh through its own PMI - I don't know<br>
this for sure, just guessing). In any case, for the MPI distros that I tried<br>
(MPICH and its derivatives Intel MPI and MVAPICH2, as well as OpenMPI),<br>
mpirun at some point calls srun, whether or not it was built with SLURM<br>
support explicitly. That would explain the srun error you are getting.<br>
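<br>
If your MPI is MPICH-derived (i.e. it uses the Hydra process manager), you<br>
can get a quick idea of which launchers it was built with - a rough sketch,<br>
not tested in your exact setup:<br>
<br>
$ mpirun -info | grep -i launch   # Hydra only: the build info should list the available launchers<br>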
<br>
Now, what I think is happening in your case is that you are calling mpirun<br>
(or its equivalent inside mpi4py) from INSIDE the container, where there's<br>
no srun. Notice that most MPI container examples, including the very well<br>
written ANL page, instruct you to run mpirun (or aprun in Cray's case)<br>
OUTSIDE of the container (i.e. on the host), and launch N instances of the<br>
container through mpirun.<br>
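<br>
In other words, the typical pattern is to run something like this from the<br>
host (the container image and program names here are just for illustration):<br>
<br>
$ mpirun -np 4 singularity exec ./mycontainer.simg ./cpi<br>
<br>
so mpirun starts 4 copies of the container, each running one MPI rank, and<br>
the host's launcher (srun, ssh, ...) takes care of placing the processes.<br>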
<br>
I reproduced your problem on our system in the following way:<br>
1. Build a Singularity container with local MPI installation, e.g. <br>
<a href="https://github.com/CHPC-UofU/Singularity-ubuntu-mpi" rel="noreferrer" target="_blank">https://github.com/CHPC-UofU/Singularity-ubuntu-mpi</a><br>
2. Shell into the container and build some MPI program (e.g. I used the<br>
cpi.c example from MPICH: mpicc cpi.c -o cpi; the commands are sketched<br>
right after this list).<br>
3. This then runs OK in the container on an interactive node (i.e. outside<br>
a SLURM job, so mpirun does not use SLURM's PMI and therefore does not call<br>
srun).<br>
4. Launch the job, then shell into the container, and try to run mpirun <br>
-np 2 ./cpi - I get the same error you get, since<br>
$ which srun<br>
$<br>
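<br>
For concreteness, steps 2 and 4 look roughly like this (the image name and<br>
prompt are just illustrative):<br>
<br>
$ singularity shell ubuntu-mpi.simg    # inside the SLURM job<br>
Singularity> mpicc cpi.c -o cpi        # build against the MPI installed in the container<br>
Singularity> mpirun -np 2 ./cpi        # fails, since srun is not in the container's PATH<br>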
<br>
Now, I can try to set the path to the SLURM binaries<br>
$ export PATH="/uufs/notchpeak.peaks/sys/installdir/slurm/std/bin:$PATH"<br>
$ which srun<br>
/uufs/notchpeak.peaks/sys/installdir/slurm/std/bin/srun<br>
but then get another error:<br>
$ mpirun -np 2 ./cpi<br>
srun: error: Invalid user for SlurmUser slurm, ignored<br>
srun: fatal: Unable to process configuration file<br>
so the environment needs some more changes to get srun to work correctly<br>
from inside the container. Though I think this would still only be hackable<br>
for an intra-node MPI launch, as inter-node you'll rely on SLURM, which<br>
would have to be accessed from outside of the container.<br>
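<br>
If an intra-node launch is enough for a start, and the MPI inside the<br>
container is MPICH-derived (Hydra), it may be possible to sidestep srun<br>
altogether by forcing a local launcher - a rough, untested sketch:<br>
<br>
$ mpirun -launcher fork -np 2 ./cpi   # Hydra: fork all ranks locally, no srun/ssh needed<br>
<br>
Another thing worth trying might be bind-mounting the host's SLURM and<br>
munge directories into the container (singularity shell -B ...), but I<br>
haven't tested that either, so take it with a grain of salt.<br>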
<br>
So, bottom line, launching mpirun from the host is preferable.<br>
<br>
I am not sure exactly how you would launch the mpi4py code from the host,<br>
since I don't use mpi4py, but in theory it should not be any different from<br>
launching MPI binaries. Though I figure modifying your launch scripts around<br>
this may be complicated.<br>
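<br>
In principle it should be the same pattern as above, just with the Python<br>
interpreter from the container running your script (the script name is just<br>
a placeholder):<br>
<br>
$ mpirun -np 4 singularity exec ./mycontainer.simg python ./your_mpi4py_script.py<br>
<br>
As far as I understand, mpi4py just calls MPI_Init when you import it, so<br>
each instance picks up its rank and size from whatever mpirun set up on the<br>
host, same as a compiled MPI binary would.<br>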
<br>
BTW, I have had reasonable success with mixing ABI-compatible MPIs (MPICH,<br>
MVAPICH2, Intel MPI) inside and outside of the container. It often works,<br>
but sometimes it does not.<br>
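<br>
By mixing I mean something like this (again just a sketch - whether it<br>
works depends on the particular MPI versions involved):<br>
<br>
Singularity> mpicc cpi.c -o cpi                            # built against MPICH inside the container<br>
$ mpirun -np 2 singularity exec ./mycontainer.simg ./cpi   # launched with the host's Intel MPI or MVAPICH2<br>
<br>
If I understand it correctly, this works because of the MPICH ABI<br>
compatibility initiative, so it is limited to the MPICH family and won't<br>
help with OpenMPI.<br>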
<br>
HTH,<br>
MC<br>
<br>
-- <br>
Martin Cuma<br>
Center for High Performance Computing<br>
Department of Geology and Geophysics<br>
University of Utah<br>
<br>
</blockquote></div></div>