<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Hi Kurt,</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Did you run <code>mpiexec</code> inside an <code>sbatch</code> job script? <code>mpiexec</code> needs the node allocation that <code>sbatch</code> creates before it can launch across the nodes.</div>
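<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
For example, here is a minimal sketch of a job script that runs <code>mpiexec</code> inside the allocation. The script name, job name, and program name below are placeholders; keep whatever other <code>mpiexec</code> arguments you need:</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<pre>#!/bin/bash
#SBATCH --nodes=20
#SBATCH --ntasks=20
#SBATCH --exclusive
#SBATCH --job-name=my_job

# mpiexec picks up the 20-node list from the Slurm allocation that sbatch created
mpiexec -verbose -launcher ssh -print-all-exitcodes -np 20 -ppn 1 ./my_mpi_program
</pre>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Then submit it with <code>sbatch my_job.sh</code> rather than calling <code>mpiexec</code> directly on the login node.</div>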
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
-- <br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Hui<br>
</div>
<div id="appendonsend"></div>
<hr style="display:inline-block;width:98%" tabindex="-1">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Mccall, Kurt E. (MSFC-EV41) via discuss <discuss@mpich.org><br>
<b>Sent:</b> Friday, February 18, 2022 1:39 PM<br>
<b>To:</b> Raffenetti, Ken <raffenet@anl.gov>; discuss@mpich.org <discuss@mpich.org><br>
<b>Cc:</b> Mccall, Kurt E. (MSFC-EV41) <kurt.e.mccall@nasa.gov><br>
<b>Subject:</b> Re: [mpich-discuss] MPI_Init hangs under Slurm</font>
<div> </div>
</div>
<div class="BodyFragment"><font size="2"><span style="font-size:11pt;">
<div class="PlainText">Here is the --verbose output. Is it trying to launch the processes all on the head node rocci.ndc.nasa.gov?<br>
<br>
Kurt<br>
<br>
-----Original Message-----<br>
From: Raffenetti, Ken <raffenet@anl.gov> <br>
Sent: Friday, February 18, 2022 1:31 PM<br>
To: discuss@mpich.org<br>
Cc: Mccall, Kurt E. (MSFC-EV41) <kurt.e.mccall@nasa.gov><br>
Subject: [EXTERNAL] Re: [mpich-discuss] MPI_Init hangs under Slurm<br>
<br>
From the looks of it, the ssh launcher might not be able to access all the nodes. To confirm, can you try launching a non-MPI program? Something like<br>
<br>
mpiexec -verbose -launcher ssh -print-all-exitcodes -np 20 -ppn 1 hostname<br>
<br>
Ken<br>
<br>
On 2/17/22, 2:39 PM, "Mccall, Kurt E. (MSFC-EV41) via discuss" <discuss@mpich.org> wrote:<br>
<br>
Sorry, my attachment with an .out extension was blocked. Here is the file with a .txt extension.<br>
<br>
From: Mccall, Kurt E. (MSFC-EV41) <kurt.e.mccall@nasa.gov> <br>
Sent: Thursday, February 17, 2022 2:36 PM<br>
To: discuss@mpich.org<br>
Cc: Mccall, Kurt E. (MSFC-EV41) <kurt.e.mccall@nasa.gov><br>
Subject: MPI_Init hangs under Slurm<br>
<br>
<br>
<br>
Things were working fine when I was launching 1-node jobs under Slurm 20.11.8, but when I launch a 20-node job, MPICH hangs in MPI_Init. The output of “mpiexec -verbose” is attached, and the stack trace at the point where it hangs is below.<br>
<br>
In the “mpiexec -verbose” output, I wonder why variables such as PATH_modshare point to our Intel MPI implementation, which I am not using. I am using MPICH 4.0 with a patch that Ken Raffenetti provided (which makes MPICH recognize the “host” info key).
My $PATH and $LD_LIBRARY_PATH variables definitely point to the correct MPICH installation.<br>
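<br>
For example, checks like the following confirm this (the output is of course site-specific):<br>
<br>
which mpiexec          # resolves into the patched MPICH 4.0 installation<br>
mpichversion           # MPICH-only tool; prints the version and configure options<br>
echo $LD_LIBRARY_PATH  # the MPICH lib directory appears before any Intel MPI paths<br>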
<br>
I appreciate any help you can give.<br>
<br>
<br>
Here is the Slurm sbatch command:<br>
<br>
sbatch --nodes=20 --ntasks=20 --job-name $job_name --exclusive --verbose<br>
<br>
<br>
Here is the mpiexec command:<br>
<br>
mpiexec -verbose -launcher ssh -print-all-exitcodes -np 20 -wdir ${work_dir} -env DISPLAY localhost:10.0 --ppn 1 <many more args…><br>
<br>
<br>
Stack trace at MPI_Init:<br>
<br>
#0 0x00007f6d85f499b2 in read () from /lib64/libpthread.so.0<br>
#1 0x00007f6d87a5753a in PMIU_readline (fd=5, buf=buf@entry=0x7ffd6fb596e0 "", maxlen=maxlen@entry=1024)<br>
at ../mpich-slurm-patch-4.0/src/pmi/simple/simple_pmiutil.c:134<br>
#2 0x00007f6d87a57a56 in GetResponse (request=0x7f6d87b48351 "cmd=barrier_in\n",<br>
expectedCmd=0x7f6d87b48345 "barrier_out", checkRc=0) at ../mpich-slurm-patch-4.0/src/pmi/simple/simple_pmi.c:818<br>
#3 0x00007f6d87a29915 in MPIDI_PG_SetConnInfo (rank=rank@entry=0,<br>
connString=connString@entry=0x1bbf5a0 "description#n001$port#33403$ifname#172.16.56.1$")<br>
at ../mpich-slurm-patch-4.0/src/mpid/ch3/src/mpidi_pg.c:559<br>
#4 0x00007f6d87a38611 in MPID_nem_init (pg_rank=pg_rank@entry=0, pg_p=pg_p@entry=0x1bbf850, has_parent=<optimized out>)<br>
at ../mpich-slurm-patch-4.0/src/mpid/ch3/channels/nemesis/src/mpid_nem_init.c:393<br>
#5 0x00007f6d87a2ad93 in MPIDI_CH3_Init (has_parent=<optimized out>, pg_p=0x1bbf850, pg_rank=0)<br>
at ../mpich-slurm-patch-4.0/src/mpid/ch3/channels/nemesis/src/ch3_init.c:83<br>
#6 0x00007f6d87a1b3b7 in init_world () at ../mpich-slurm-patch-4.0/src/mpid/ch3/src/mpid_init.c:190<br>
#7 MPID_Init (requested=<optimized out>, provided=provided@entry=0x7f6d87e03540 <MPIR_ThreadInfo>)<br>
at ../mpich-slurm-patch-4.0/src/mpid/ch3/src/mpid_init.c:76<br>
#8 0x00007f6d879828eb in MPII_Init_thread (argc=argc@entry=0x7ffd6fb5a5cc, argv=argv@entry=0x7ffd6fb5a5c0,<br>
user_required=0, provided=provided@entry=0x7ffd6fb5a574, p_session_ptr=p_session_ptr@entry=0x0)<br>
at ../mpich-slurm-patch-4.0/src/mpi/init/mpir_init.c:208<br>
#9 0x00007f6d879832a5 in MPIR_Init_impl (argc=0x7ffd6fb5a5cc, argv=0x7ffd6fb5a5c0)<br>
at ../mpich-slurm-patch-4.0/src/mpi/init/mpir_init.c:93<br>
#10 0x00007f6d8786388e in PMPI_Init (argc=0x7ffd6fb5a5cc, argv=0x7ffd6fb5a5c0)<br>
at ../mpich-slurm-patch-4.0/src/binding/c/init/init.c:46<br>
#11 0x000000000040640d in main (argc=23, argv=0x7ffd6fb5ad68) at src/NeedlesMpiManagerMain.cpp:53<br>
<br>
</div>
</span></font></div>
</body>
</html>