<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Hi Kurt,</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
As Ken mentioned, these problems usually involve several distinct combinations of setup, and a mailing list is especially bad at untangling them: as we try different suggestions, the feedback gets mixed up. Could you check
<a href="https://github.com/pmodels/mpich/issues/5835">https://github.com/pmodels/mpich/issues/5835</a> and provide (or re-provide) the relevant information there? Particularly useful are: how MPICH is configured, how the job is invoked, the failure symptom, and the debug output from <code>mpiexec -verbose</code>. As Ken listed, there are three different scenarios, and each should work. It will probably help if we focus on one scenario at a time. A sketch of the commands to collect this information is below.<br>
</div>
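<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
For example, something along these lines should capture most of it. The install path, executable name, and process counts are only placeholders based on earlier messages in this thread; adjust them to whichever scenario you are testing:</div>
<pre style="font-family: Consolas, monospace; font-size: 10pt;">
# how MPICH was configured (mpichversion is installed alongside mpicc/mpiexec)
~/mpich-slurm-install-4.0_2/bin/mpichversion

# how the job is launched, e.g. one of
srun --mpi=pmi2 -n 2 ./NeedlesMpiMM
mpiexec -n 2 ./NeedlesMpiMM

# hydra debug output, captured to a file
mpiexec -verbose -n 2 ./NeedlesMpiMM 2&gt;&amp;1 | tee mpiexec-verbose.log
</pre>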
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
-- <br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Hui Zhou<br>
</div>
<div class="_Entity _EType_OWALinkPreview _EId_OWALinkPreview _EReadonly_1">
<div id="LPBorder_GTaHR0cHM6Ly9naXRodWIuY29tL3Btb2RlbHMvbXBpY2gvaXNzdWVzLzU4MzU." class="LPBorder348008" style="width: 100%; margin-top: 16px; margin-bottom: 16px; position: relative; max-width: 800px; min-width: 424px;">
<table id="LPContainer348008" role="presentation" style="padding: 12px 36px 12px 12px; width: 100%; border-width: 1px; border-style: solid; border-color: rgb(200, 200, 200); border-radius: 2px;">
<tbody>
<tr style="border-spacing: 0px;" valign="top">
<td>
<div id="LPImageContainer348008" style="position: relative; margin-right: 12px; height: 120px; overflow: hidden; width: 240px;">
<a target="_blank" id="LPImageAnchor348008" href="https://github.com/pmodels/mpich/issues/5835"><img id="LPThumbnailImageId348008" alt="" style="display: block;" width="240" height="120" src="https://opengraph.githubassets.com/f73eca0aa1fd8cd3bc03c57525990e22e36e447bdc6b0393f863e131273f0563/pmodels/mpich/issues/5835"></a></div>
</td>
<td style="width: 100%;">
<div id="LPTitle348008" style="font-size: 21px; font-weight: 300; margin-right: 8px; font-family: "wf_segoe-ui_light", "Segoe UI Light", "Segoe WP Light", "Segoe UI", "Segoe WP", Tahoma, Arial, sans-serif; margin-bottom: 12px;">
<a target="_blank" id="LPUrlAnchor348008" href="https://github.com/pmodels/mpich/issues/5835" style="text-decoration: none; color: var(--themePrimary);">MPI_Comm_spawn in Slurm environment · Issue #5835 · pmodels/mpich</a></div>
<div id="LPDescription348008" style="font-size: 14px; max-height: 100px; color: rgb(102, 102, 102); font-family: "wf_segoe-ui_normal", "Segoe UI", "Segoe WP", Tahoma, Arial, sans-serif; margin-bottom: 12px; margin-right: 8px; overflow: hidden;">
Originated from user email https://lists.mpich.org/pipermail/discuss/2022-January/006360.html. MPICH + Hydra + PMI1 (crashes) MPICH + Hydra + PMI2 (works but ignores "hosts" info key) MPI...</div>
<div id="LPMetadata348008" style="font-size: 14px; font-weight: 400; color: rgb(166, 166, 166); font-family: "wf_segoe-ui_normal", "Segoe UI", "Segoe WP", Tahoma, Arial, sans-serif;">
github.com</div>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<br>
<div id="appendonsend"></div>
<hr style="display:inline-block;width:98%" tabindex="-1">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Raffenetti, Ken via discuss <discuss@mpich.org><br>
<b>Sent:</b> Monday, February 7, 2022 4:28 PM<br>
<b>To:</b> Mccall, Kurt E. (MSFC-EV41) <kurt.e.mccall@nasa.gov>; discuss@mpich.org <discuss@mpich.org><br>
<b>Cc:</b> Raffenetti, Ken <raffenet@anl.gov><br>
<b>Subject:</b> Re: [mpich-discuss] MPI_Comm_spawn crosses node boundaries</font>
<div> </div>
</div>
<div class="BodyFragment"><font size="2"><span style="font-size:11pt;">
<div class="PlainText">Darn. I'm creating an issue to track this since it will likely take some time and effort to investigate each configuration.<br>
<br>
<a href="https://github.com/pmodels/mpich/issues/5835">https://github.com/pmodels/mpich/issues/5835</a><br>
<br>
Ken<br>
<br>
On 2/7/22, 12:41 PM, "Mccall, Kurt E. (MSFC-EV41)" <kurt.e.mccall@nasa.gov> wrote:<br>
<br>
Ken,<br>
<br>
To review, I configured as follows:<br>
<br>
$ configure CFLAGS=-DUSE_PMI2_API LIBS=-lpmi2 --with-pm=none --with-pmi=slurm --with-slurm=/opt/slurm < ...><br>
<br>
and ran srun with the argument --mpi=pmi2.<br>
<br>
The job is still segfaulting in MPI_Comm_spawn in one process and returning an error from MPI_Barrier in the other. Error messages below:<br>
<br>
<br>
The MPI_Comm_spawn error:<br>
<br>
<br>
backtrace for error: backtrace after receiving signal SIGSEGV:<br>
/home/kmccall/Needles2/./NeedlesMpiMM() [0x45ab36]<br>
/lib64/libpthread.so.0(+0x12c20) [0x7f4833387c20]<br>
/lib64/libc.so.6(+0x15d6b7) [0x7f483310d6b7]<br>
/lib64/libc.so.6(__strdup+0x12) [0x7f4833039802]<br>
/lib64/libpmi2.so.0(+0x171c) [0x7f483294371c]<br>
/lib64/libpmi2.so.0(+0x185e) [0x7f483294385e]<br>
/lib64/libpmi2.so.0(PMI2_Job_Spawn+0x1a7) [0x7f48329453d8]<br>
/home/kmccall/mpich-slurm-install-4.0_2/lib/libmpi.so.12(+0x23a7db) [0x7f4834e0a7db]<br>
/home/kmccall/mpich-slurm-install-4.0_2/lib/libmpi.so.12(+0x1fc805) [0x7f4834dcc805]<br>
/home/kmccall/mpich-slurm-install-4.0_2/lib/libmpi.so.12(MPI_Comm_spawn+0x507) [0x7f4834cea9f7]<br>
<br>
<br>
The MPI_Barrier error:<br>
<br>
MPI_Barrier returned the error MPI runtime error: Unknown error class, error stack:<br>
internal_Barrier(84).......................: MPI_Barrier(MPI_COMM_WORLD) failed<br>
MPIR_Barrier_impl(91)......................:<br>
MPIR_Barrier_allcomm_auto(45)..............:<br>
MPIR_Barrier_intra_dissemination(39).......:<br>
MPIDI_CH3U_Complete_posted_with_error(1090): Process failed<br>
<br>
<br>
<br>
<br>
Thanks,<br>
Kurt<br>
<br>
-----Original Message-----<br>
From: Raffenetti, Ken <raffenet@anl.gov> <br>
Sent: Friday, February 4, 2022 4:08 PM<br>
To: Mccall, Kurt E. (MSFC-EV41) <kurt.e.mccall@nasa.gov>; discuss@mpich.org<br>
Subject: [EXTERNAL] Re: [mpich-discuss] MPI_Comm_spawn crosses node boundaries<br>
<br>
:(. We need to link -lpmi2 instead of -lpmi. This really needs a patch in our configure script, but adding this to your configure is worth a shot:<br>
<br>
LIBS=-lpmi2<br>
<br>
Ken<br>
<br>
On 2/4/22, 1:56 PM, "Mccall, Kurt E. (MSFC-EV41)" <kurt.e.mccall@nasa.gov> wrote:<br>
<br>
I added the CFLAGS argument and the configuration completed, but make ended with a link error.<br>
<br>
lib/.libs/libmpi.so: undefined reference to `PMI2_Abort'<br>
lib/.libs/libmpi.so: undefined reference to `PMI2_Info_GetJobAttr'<br>
lib/.libs/libmpi.so: undefined reference to `PMI2_Job_Spawn'<br>
lib/.libs/libmpi.so: undefined reference to `PMI2_Nameserv_publish'<br>
lib/.libs/libmpi.so: undefined reference to `PMI2_Finalize'<br>
lib/.libs/libmpi.so: undefined reference to `PMI2_KVS_Put'<br>
lib/.libs/libmpi.so: undefined reference to `PMI2_Info_GetNodeAttr'<br>
lib/.libs/libmpi.so: undefined reference to `PMI2_KVS_Get'<br>
lib/.libs/libmpi.so: undefined reference to `PMI2_KVS_Fence'<br>
lib/.libs/libmpi.so: undefined reference to `PMI2_Nameserv_unpublish'<br>
lib/.libs/libmpi.so: undefined reference to `PMI2_Info_PutNodeAttr'<br>
lib/.libs/libmpi.so: undefined reference to `PMI2_Job_GetId'<br>
lib/.libs/libmpi.so: undefined reference to `PMI2_Init'<br>
lib/.libs/libmpi.so: undefined reference to `PMI2_Nameserv_lookup'<br>
collect2: error: ld returned 1 exit status<br>
<br>
-----Original Message-----<br>
From: Raffenetti, Ken <raffenet@anl.gov> <br>
Sent: Friday, February 4, 2022 1:23 PM<br>
To: Mccall, Kurt E. (MSFC-EV41) <kurt.e.mccall@nasa.gov>; discuss@mpich.org<br>
Subject: [EXTERNAL] Re: [mpich-discuss] MPI_Comm_spawn crosses node boundaries<br>
<br>
I think I see a new issue. The Slurm website documentation says that their PMI library doesn't support PMI_Spawn_multiple from the PMI 1 API. We can try to force PMI 2 and see what happens. Try adding this to your configure line.<br>
<br>
CFLAGS=-DUSE_PMI2_API<br>
<br>
Ken<br>
<br>
On 2/4/22, 11:58 AM, "Mccall, Kurt E. (MSFC-EV41)" <kurt.e.mccall@nasa.gov> wrote:<br>
<br>
Did that, and launched the job with "srun --mpi=none" and one of the processes failed when MPI_Comm_spawn was called. Note the error output:<br>
<br>
internal_Comm_spawn(101)......: MPI_Comm_spawn(command=NeedlesMpiMM, argv=0x226b030, maxprocs=1, info=0x9c000000, 0, MPI_COMM_SELF, intercomm=0x7ffffda9448c, array_of_errcodes=0x7ffffda94378) failed<br>
MPIDI_Comm_spawn_multiple(225): PMI_Spawn_multiple returned -1<br>
<br>
<br>
<br>
The other process failed when MPI_Barrier was called:<br>
<br>
<br>
internal_Barrier(84).......................: MPI_Barrier(MPI_COMM_WORLD) failed<br>
MPIR_Barrier_impl(91)......................:<br>
MPIR_Barrier_allcomm_auto(45)..............:<br>
MPIR_Barrier_intra_dissemination(39).......:<br>
MPIDI_CH3U_Complete_posted_with_error(1090): Process failed<br>
MPI runtime error: Unknown error class, error stack:<br>
internal_Barrier(84).......................: MPI_Barrier(MPI_COMM_WORLD) failed<br>
MPIR_Barrier_impl(91)......................:<br>
MPIR_Barrier_allcomm_auto(45)..............:<br>
MPIR_Barrier_intra_dissemination(39).......:<br>
MPIDI_CH3U_Complete_posted_with_error(1090): Process failed<br>
MPI runtime error: Unknown error class, error stack:<br>
internal_Barrier(84).......................: MPI_Barrier(MPI_COMM_WORLD) failed<br>
MPIR_Barrier_impl(91)......................:<br>
MPIR_Barrier_allcomm_auto(45)..............:<br>
MPIR_Barrier_intra_dissemination(39).......:<br>
MPIDI_CH3U_Complete_posted_with_error(1090): Process failed<br>
MPI manager 1 threw exception: MPI runtime error: Unknown error class, error stack:<br>
internal_Barrier(84).......................: MPI_Barrier(MPI_COMM_WORLD) failed<br>
MPIR_Barrier_impl(91)......................:<br>
MPIR_Barrier_allcomm_auto(45)..............:<br>
MPIR_Barrier_intra_dissemination(39).......:<br>
MPIDI_CH3U_Complete_posted_with_error(1090): Process failed<br>
<br>
-----Original Message-----<br>
From: Raffenetti, Ken <raffenet@anl.gov> <br>
Sent: Friday, February 4, 2022 11:42 AM<br>
To: Mccall, Kurt E. (MSFC-EV41) <kurt.e.mccall@nasa.gov>; discuss@mpich.org<br>
Subject: [EXTERNAL] Re: [mpich-discuss] MPI_Comm_spawn crosses node boundaries<br>
<br>
Yes, you should also use --with-pm=none. If using mpicc to build your application, you should not have to add -lpmi. The script will handle it for you.<br>
<br>
If using another method, you might have to add it. These days with shared libraries, linkers are often able to manage "inter-library" dependencies just fine. Static builds are a different story.<br>
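<br>
(For example, a rough sketch; the source file name here is just a placeholder:)<br>
<pre style="font-family: Consolas, monospace;">
# build the application with the wrapper; it adds MPICH's libraries itself
~/mpich-slurm-install-4.0_2/bin/mpicc -o NeedlesMpiMM needles_mm.c

# print the compile/link command the wrapper would run, to check what gets linked
~/mpich-slurm-install-4.0_2/bin/mpicc -show
</pre>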
<br>
Ken<br>
<br>
On 2/4/22, 11:35 AM, "Mccall, Kurt E. (MSFC-EV41)" <kurt.e.mccall@nasa.gov> wrote:<br>
<br>
Ken,<br>
<br>
>> configure --with-slurm=/opt/slurm --with-pmi=slurm<br>
<br>
That is similar to your first suggestion below. With the above, do I have to include --with-pm=none? I guess I also have to link my application with -lpmi, right?<br>
<br>
Thanks,<br>
Kurt<br>
<br>
-----Original Message-----<br>
From: Raffenetti, Ken <raffenet@anl.gov> <br>
Sent: Friday, February 4, 2022 11:02 AM<br>
To: Mccall, Kurt E. (MSFC-EV41) <kurt.e.mccall@nasa.gov>; discuss@mpich.org<br>
Subject: [EXTERNAL] Re: [mpich-discuss] MPI_Comm_spawn crosses node boundaries<br>
<br>
When running with srun you need to use the Slurm PMI library, not the embedded Simple PMI2 library. Simple PMI2 is API compatible, but uses a different wire protocol than the Slurm implementation. Try this instead:<br>
<br>
configure --with-slurm=/opt/slurm --with-pmi=slurm<br>
<br>
This will link the Slurm PMI library to MPICH. I do acknowledge how confusing this must be to users :). Probably a good FAQ topic for our Github discussions page.<br>
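<br>
(A rough sketch of that scenario end to end; the prefix is only an example, and the --with-pm=none flag discussed elsewhere in this thread goes with it:)<br>
<pre style="font-family: Consolas, monospace;">
# build MPICH against Slurm's PMI library instead of the embedded Simple PMI
./configure --prefix=$HOME/mpich-slurm-install --with-pm=none \
            --with-pmi=slurm --with-slurm=/opt/slurm
make &amp;&amp; make install

# rebuild the application against this install, then launch with srun;
# the srun --mpi plugin has to match the PMI API the build actually uses
# (see the PMI1 vs. PMI2 variants discussed in this thread)
srun -n 2 ./NeedlesMpiMM
</pre>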
<br>
Ken<br>
<br>
On 2/3/22, 7:00 PM, "Mccall, Kurt E. (MSFC-EV41)" <kurt.e.mccall@nasa.gov> wrote:<br>
<br>
Ken,<br>
<br>
I'm trying to build MPICH 4.0 in several ways, one of which will be what you suggested below. For this particular attempt suggested by the Slurm MPI guide, I built it with<br>
<br>
configure --with-slurm=/opt/slurm --with-pmi=pmi2/simple <etc><br>
<br>
and invoked it with<br>
<br>
srun --mpi=pmi2 <etc><br>
<br>
The job is crashing with this message. Any idea what is wrong?<br>
<br>
slurmstepd: error: mpi/pmi2: no value for key in req<br>
slurmstepd: error: mpi/pmi2: no value for key in req<br>
slurmstepd: error: mpi/pmi2: no value for key <99>èþ^? in req<br>
slurmstepd: error: mpi/pmi2: no value for key in req<br>
slurmstepd: error: mpi/pmi2: no value for key in req<br>
slurmstepd: error: mpi/pmi2: no value for key ´2¾ÿ^? in req<br>
slurmstepd: error: mpi/pmi2: no value for key ; in req<br>
slurmstepd: error: mpi/pmi2: no value for key in req<br>
slurmstepd: error: *** STEP 52227.0 ON n001 CANCELLED AT 2022-02-03T18:48:02 ***<br>
<br>
-----Original Message-----<br>
From: Raffenetti, Ken <raffenet@anl.gov> <br>
Sent: Friday, January 28, 2022 3:15 PM<br>
To: Mccall, Kurt E. (MSFC-EV41) <kurt.e.mccall@nasa.gov>; discuss@mpich.org<br>
Subject: [EXTERNAL] Re: [mpich-discuss] MPI_Comm_spawn crosses node boundaries<br>
<br>
On 1/28/22, 2:22 PM, "Mccall, Kurt E. (MSFC-EV41)" <kurt.e.mccall@nasa.gov> wrote:<br>
<br>
Ken,<br>
<br>
I confirmed that MPI_Comm_spawn fails completely if I build MPICH without the PMI2 option.<br>
<br>
Dang, I thought that would work :(.<br>
<br>
Looking at the Slurm documentation <a href="https://slurm.schedmd.com/mpi_guide.html#intel_mpiexec_hydra">https://slurm.schedmd.com/mpi_guide.html#intel_mpiexec_hydra</a><br>
it states "All MPI_comm_spawn work fine now going through hydra's PMI 1.1 interface." The full quote is below for reference.<br>
<br>
1) how do I build MPICH to support hydra's PMI 1.1 interface?<br>
<br>
That is the default, so no extra configuration should be needed. One thing I notice in your log output is that the Slurm envvars seem to have changed names from what we have in our source. E.g. SLURM_JOB_NODELIST vs. SLURM_NODELIST. Do your
initial processes launch on the right nodes?<br>
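<br>
(For reference, a sketch of that default scenario; the prefix, node count, and process count are only examples:)<br>
<pre style="font-family: Consolas, monospace;">
# default build: Hydra process manager with its PMI 1.1 interface
./configure --prefix=$HOME/mpich-default-install --with-slurm=/opt/slurm
make &amp;&amp; make install

# launch with mpiexec from inside a Slurm allocation; Hydra reads the
# allocation's node list from the environment
sbatch -N 2 --wrap "mpiexec -n 2 ./NeedlesMpiMM"
</pre>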
<br>
2) Can you offer any guesses on how to build Slurm to do the same? (I realize this isn't a Slurm forum 😊)<br>
<br>
Hopefully you don't need to rebuild Slurm to do it. What you could try is configuring the Slurm PMI library when building MPICH. Add "--with-pm=none --with-pmi=slurm --with-slurm=<path/to/install>". Then use srun instead of mpiexec and see
how it goes.<br>
<br>
Ken<br>
<br>
<br>
<br>
<br>
<br>
<br>
_______________________________________________<br>
discuss mailing list discuss@mpich.org<br>
To manage subscription options or unsubscribe:<br>
<a href="https://lists.mpich.org/mailman/listinfo/discuss">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
</div>
</span></font></div>
</body>
</html>