Oh okay - thank you!

-Melissa

On Mon, May 1, 2017 at 3:29 PM Kenneth Raffenetti <raffenet@mcs.anl.gov> wrote:
Hi Melissa,

Probably best to post this question to
mvapich-discuss@cse.ohio-state.edu and go from there.

Thanks,
Ken

On 05/01/2017 01:15 PM, Melissa Romanus wrote:
> Hello,
>
> I am experiencing issues on the SDSC Comet system when using the Intel
> compilers with MVAPICH2. The scheduler on Comet is Slurm. The code
> appears to segfault inside MPI_Comm_dup, but before that it seems to
> reject a connection request to "self" (i.e., from a node's IP address
> to that same IP address).
>
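> In case it helps with debugging, I can rerun with a backtrace enabled.
> My reading of the MVAPICH2 user guide is that something like the
> following would print one at the crash (the variable name and the
> application name here are my assumptions, not verified on Comet):
>
> ```
> # ask MVAPICH2 to print a backtrace when a process dies on an error
> # such as a segmentation fault (MV2_DEBUG_SHOW_BACKTRACE is my guess
> # at the right knob; the binary name is a placeholder)
> export MV2_DEBUG_SHOW_BACKTRACE=1
> srun -n 24 ./my_app
> ```
>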
> The modules loaded are:
>
> ```
> $ module list
>
> Currently Loaded Modulefiles:
>   1) intel/2013_sp1.2.144   2) mvapich2_ib/2.1   3) gnutools/2.69
> ```
>
> I am attempting to use the `ib0` interface. In my job script, I am
> launching 3 different applications. I am **not** using Slurm's
> `--multi-prog`; I am instead using 3 separate `srun` commands. My job
> has to be launched this way. A sketch of the script is below.
>
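> For concreteness, the structure of the job script is roughly the
> following (node counts, task counts, and application names are
> placeholders, not my real script):
>
> ```
> #!/bin/bash
> #SBATCH --nodes=4
> #SBATCH --ntasks-per-node=24
>
> # three cooperating applications, each launched by its own srun;
> # they run concurrently within the same allocation
> srun -n 32 ./app_server  &
> srun -n 8  ./app_staging &
> srun -n 56 ./app_client  &
> wait   # hold the job open until all three steps finish
> ```
>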
> Using OpenMPI, I can set the MCA parameters to allow connections from
> `self` at the byte-transfer layer, i.e., `OMPI_MCA_btl="self,openib"`,
> and tell Slurm that I would like to use `--mpi=pmi2`.
>
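> To make that concrete, the OpenMPI version of one of the launches
> looks roughly like this (the task count and binary name are
> placeholders):
>
> ```
> # let each rank talk to itself via the "self" BTL and to other
> # ranks over InfiniBand via the "openib" BTL
> export OMPI_MCA_btl="self,openib"
> srun --mpi=pmi2 -n 32 ./app_server
> ```
>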
> I think the MVAPICH2 errors I am experiencing stem from the "self"
> connection being rejected (i.e., a node connecting to itself). Is
> there a way to tell MVAPICH2 to allow the self connection? I think I
> want the `--with-device=ch3:nemesis:ib` configure option in some
> capacity, but I'm not sure whether that would be enough to allow a
> node to connect to itself. Is the self connection inherently a TCP
> connection? Do I still need `--mpi=pmi2` for srun? Can I use srun, or
> do I need to invoke `mpiexec` explicitly?
>
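> For reference, this is the kind of rebuild I was imagining (untested
> on my end, so the configure options and install prefix here are
> assumptions on my part):
>
> ```
> # rebuild MVAPICH2 with the ch3:nemesis:ib device instead of the
> # default InfiniBand channel (paths are placeholders)
> ./configure --with-device=ch3:nemesis:ib \
>             --prefix=$HOME/sw/mvapich2-2.1-nemesis-ib
> make -j 8 && make install
> ```
>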
> Could this also be the cause of the error described in this FAQ entry?
> https://wiki.mpich.org/mpich/index.php/Frequently_Asked_Questions#Q:_All_my_processes_get_rank_0
>
> Any help you can provide is greatly appreciated.
>
> -Melissa

_______________________________________________
discuss mailing list  discuss@mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss