<meta http-equiv="Content-Type" content="text/html; charset=utf-8"><div dir="ltr"><div class="gmail_default" style="font-family:tahoma,sans-serif;font-size:small"><br></div><div class="gmail_default" style="font-family:tahoma,sans-serif;font-size:small">Here you go!</div><div class="gmail_default" style="font-family:tahoma,sans-serif;font-size:small"><br></div><div class="gmail_default" style="font-family:tahoma,sans-serif;font-size:small">host machine:</div><div class="gmail_default" style="font-family:tahoma,sans-serif;font-size:small"><div class="gmail_default">~{ahassani@vulcan13:~/usr/bin}~{Tue Nov 25 10:56 PM}~</div><div class="gmail_default">$ echo $LD_LIBRARY_PATH</div><div class="gmail_default">/nethome/students/ahassani/usr/lib:/nethome/students/ahassani/usr/mpi/lib:</div><div class="gmail_default">~{ahassani@vulcan13:~/usr/bin}~{Tue Nov 25 10:56 PM}~</div><div class="gmail_default">$ echo $PATH</div><div class="gmail_default">/nethome/students/ahassani/usr/bin:/nethome/students/ahassani/usr/mpi/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/sbin:/usr/sbin:/usr/local/sbin:/opt/matlab-R2013a/bin</div><div class="gmail_default"><br></div><div class="gmail_default">oakmnt-0-a:</div><div class="gmail_default"><div class="gmail_default">$ echo $LD_LIBRARY_PATH</div><div class="gmail_default">/nethome/students/ahassani/usr/lib:/nethome/students/ahassani/usr/mpi/lib:</div><div class="gmail_default">~{ahassani@oakmnt-0-a:~/usr/bin}~{Tue Nov 25 10:56 PM}~</div><div class="gmail_default">$ echo $PATH</div><div class="gmail_default">/nethome/students/ahassani/usr/bin:/nethome/students/ahassani/usr/mpi/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/sbin:/usr/sbin:/usr/local/sbin</div></div><div class="gmail_default"><br></div><div class="gmail_default">oakmnt-0-b:</div><div class="gmail_default"><div class="gmail_default">~{ahassani@oakmnt-0-b:~}~{Tue Nov 25 10:56 PM}~</div><div class="gmail_default">$ echo $LD_LIBRARY_PATH</div><div 
class="gmail_default">/nethome/students/ahassani/usr/lib:/nethome/students/ahassani/usr/mpi/lib:</div><div class="gmail_default">~{ahassani@oakmnt-0-b:~}~{Tue Nov 25 10:56 PM}~</div><div class="gmail_default">$ echo $PATH</div><div class="gmail_default">/nethome/students/ahassani/usr/bin:/nethome/students/ahassani/usr/mpi/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/sbin:/usr/sbin:/usr/local/sbin</div><div><br></div></div></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature"><div dir="ltr">Amin Hassani,<br>CIS department at UAB,<br>
Birmingham, AL, USA.</div></div></div>
<br><div class="gmail_quote">On Tue, Nov 25, 2014 at 10:55 PM, Lu, Huiwei <span dir="ltr"><<a href="mailto:huiweilu@mcs.anl.gov" target="_blank">huiweilu@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">So your ssh connection is correct. And we confirmed the code itself is correct before. The problem may be somewhere else.<br>
<br>
Could you check the PATH and LD_LIBRARY_PATH on these three machines (oakmnt-0-a, oakmnt-0-b, and the host machine) to make sure they are the same, so that mpirun uses the same library on every machine?<br>
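A sketch of that check, using the LD_LIBRARY_PATH values quoted later in this message (the `normalize` helper is my own addition: the trailing `:` in those values is harmless, but it would trip a naive string comparison):

```shell
#!/bin/sh
# Strip a single trailing ':' so "a:b:" and "a:b" compare as equal.
normalize() { printf '%s' "${1%:}"; }

# The LD_LIBRARY_PATH quoted in this thread ends in ':' on every machine;
# after normalization the values are identical.
host_val="/nethome/students/ahassani/usr/lib:/nethome/students/ahassani/usr/mpi/lib:"
node_val="/nethome/students/ahassani/usr/lib:/nethome/students/ahassani/usr/mpi/lib:"
[ "$(normalize "$host_val")" = "$(normalize "$node_val")" ] && echo "LD_LIBRARY_PATH matches"

# Live values could be gathered per node like this (requires passwordless ssh):
#   for h in oakmnt-0-a oakmnt-0-b; do
#     ssh "$h" 'echo "$PATH"; echo "$LD_LIBRARY_PATH"'
#   done
```

The commented loop shows how the live values could be pulled from each node for the same comparison.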
<br>
—<br>
<span class=""><font color="#888888">Huiwei<br>
</font></span><div class=""><div class="h5"><br>
> On Nov 25, 2014, at 10:33 PM, Amin Hassani <<a href="mailto:ahassani@cis.uab.edu">ahassani@cis.uab.edu</a>> wrote:<br>
><br>
> Here you go!<br>
><br>
> $ mpirun -hostfile hosts-hydra -np 2 hostname<br>
> oakmnt-0-a<br>
> oakmnt-0-b<br>
><br>
> Thanks.<br>
><br>
> Amin Hassani,<br>
> CIS department at UAB,<br>
> Birmingham, AL, USA.<br>
><br>
> On Tue, Nov 25, 2014 at 10:31 PM, Lu, Huiwei <<a href="mailto:huiweilu@mcs.anl.gov">huiweilu@mcs.anl.gov</a>> wrote:<br>
> I can run your simplest code on my machine without a problem, so I guess there is some problem with the cluster connection. Could you give me the output of the following?<br>
><br>
> $ mpirun -hostfile hosts-hydra -np 2 hostname<br>
><br>
> —<br>
> Huiwei<br>
><br>
> > On Nov 25, 2014, at 10:24 PM, Amin Hassani <<a href="mailto:ahassani@cis.uab.edu">ahassani@cis.uab.edu</a>> wrote:<br>
> ><br>
> > Hi,<br>
> ><br>
> > The code I gave you had extra stuff in it that I didn't want to distract you with. Here is a simpler send/recv test that I just ran, and it failed as well.<br>
> ><br>
> > which mpirun: the specific directory where I install my MPIs<br>
> > /nethome/students/ahassani/usr/mpi/bin/mpirun<br>
> ><br>
> > mpirun with no argument:<br>
> > $ mpirun<br>
> > [mpiexec@oakmnt-0-a] set_default_values (../../../../src/pm/hydra/ui/mpich/utils.c:1528): no executable provided<br>
> > [mpiexec@oakmnt-0-a] HYD_uii_mpx_get_parameters (../../../../src/pm/hydra/ui/mpich/utils.c:1739): setting default values failed<br>
> > [mpiexec@oakmnt-0-a] main (../../../../src/pm/hydra/ui/mpich/mpiexec.c:153): error parsing parameters<br>
> ><br>
> ><br>
> ><br>
> > #include <mpi.h><br>
> > #include <stdio.h><br>
> > #include <malloc.h><br>
> > #include <unistd.h><br>
> > #include <stdlib.h><br>
> ><br>
> > int skip = 10;<br>
> > int iter = 30;<br>
> ><br>
> > int main(int argc, char** argv)<br>
> > {<br>
> >     int rank, size;<br>
> >     int i, j, k;<br>
> >     double t1, t2;<br>
> >     int rc;<br>
> ><br>
> >     MPI_Init(&argc, &argv);<br>
> >     MPI_Comm world = MPI_COMM_WORLD, newworld, newworld2;<br>
> >     MPI_Comm_rank(world, &rank);<br>
> >     MPI_Comm_size(world, &size);<br>
> >     int a = 0, b = 1;<br>
> >     if(rank == 0){<br>
> >         MPI_Send(&a, 1, MPI_INT, 1, 0, world);<br>
> >     }else{<br>
> >         MPI_Recv(&b, 1, MPI_INT, 0, 0, world, MPI_STATUS_IGNORE);<br>
> >     }<br>
> ><br>
> >     printf("b is %d\n", b);<br>
> >     MPI_Finalize();<br>
> ><br>
> >     return 0;<br>
> > }<br>
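Matching environment variables still may not guarantee that both nodes resolve the same MPICH shared library for the binary itself; `ldd` can confirm that. Below is a sketch of the filtering step only (the sample `ldd` output is fabricated for illustration, and the `test_dup` path in the comment is hypothetical):

```shell
#!/bin/sh
# Filter ldd output down to the MPI-related libraries.
# In practice one would run, per node:
#   ssh oakmnt-0-a 'ldd ./test_dup | grep -i mpi'
# The fabricated sample below stands in for real ldd output.
sample='linux-vdso.so.1 => (0x00007fff5a1fe000)
libmpich.so.12 => /nethome/students/ahassani/usr/mpi/lib/libmpich.so.12 (0x00007f2a3c000000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2a3ba00000)'
printf '%s\n' "$sample" | grep -i mpi
```

If the reported `libmpich.so` path differs between the nodes, the two ranks are running against different MPICH builds, which could explain a failure that only appears across nodes.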
> ><br>
> > Thank you.<br>
> ><br>
> ><br>
> > Amin Hassani,<br>
> > CIS department at UAB,<br>
> > Birmingham, AL, USA.<br>
> ><br>
> > On Tue, Nov 25, 2014 at 10:20 PM, Lu, Huiwei <<a href="mailto:huiweilu@mcs.anl.gov">huiweilu@mcs.anl.gov</a>> wrote:<br>
> > Hi, Amin,<br>
> ><br>
> > Could you quickly give us the output of the following command: "which mpirun"<br>
> ><br>
> > Also, your simplest code doesn't compile: "error: 't_avg' undeclared (first use in this function)". Can you fix it?<br>
> ><br>
> > —<br>
> > Huiwei<br>
> ><br>
> > > On Nov 25, 2014, at 2:58 PM, Amin Hassani <<a href="mailto:ahassani@cis.uab.edu">ahassani@cis.uab.edu</a>> wrote:<br>
> > ><br>
> > > This is the simplest code I have that doesn't run.<br>
> > ><br>
> > ><br>
> > > #include <mpi.h><br>
> > > #include <stdio.h><br>
> > > #include <malloc.h><br>
> > > #include <unistd.h><br>
> > > #include <stdlib.h><br>
> > ><br>
> > > int main(int argc, char** argv)<br>
> > > {<br>
> > >     int rank, size;<br>
> > >     int i, j, k;<br>
> > >     double t1, t2;<br>
> > >     int rc;<br>
> > ><br>
> > >     MPI_Init(&argc, &argv);<br>
> > >     MPI_Comm world = MPI_COMM_WORLD, newworld, newworld2;<br>
> > >     MPI_Comm_rank(world, &rank);<br>
> > >     MPI_Comm_size(world, &size);<br>
> > ><br>
> > >     t2 = 1;<br>
> > >     MPI_Allreduce(&t2, &t_avg, 1, MPI_DOUBLE, MPI_SUM, world);<br>
> > >     t_avg = t_avg / size;<br>
> > ><br>
> > >     MPI_Finalize();<br>
> > ><br>
> > >     return 0;<br>
> > > }<br>
> > ><br>
> > > Amin Hassani,<br>
> > > CIS department at UAB,<br>
> > > Birmingham, AL, USA.<br>
> > ><br>
> > > On Tue, Nov 25, 2014 at 2:46 PM, "Antonio J. Peña" <<a href="mailto:apenya@mcs.anl.gov">apenya@mcs.anl.gov</a>> wrote:<br>
> > ><br>
> > > Hi Amin,<br>
> > ><br>
> > > Can you share with us a minimal piece of code with which you can reproduce this issue?<br>
> > ><br>
> > > Thanks,<br>
> > > Antonio<br>
> > ><br>
> > ><br>
> > ><br>
> > > On 11/25/2014 12:52 PM, Amin Hassani wrote:<br>
> > >> Hi,<br>
> > >><br>
> > >> I am having a problem running MPICH on multiple nodes. When I run multiple MPI processes on one node, it works fine, but when I try to run on multiple nodes, it fails with the error below.<br>
> > >> My machines run Debian and have both InfiniBand and TCP interconnects. I'm guessing it has something to do with the TCP network, but I can run Open MPI on these machines with no problem; for some reason I cannot run MPICH across multiple nodes. Please let me know if more info is needed from my side. I suspect there is some configuration I am missing. I used MPICH 3.1.3 for this test. I googled this problem but couldn't find any solution.<br>
> > >><br>
> > >> In my MPI program, I am doing a simple allreduce over MPI_COMM_WORLD.<br>
> > >><br>
> > >> my host file (hosts-hydra) is something like this:<br>
> > >> oakmnt-0-a:1<br>
> > >> oakmnt-0-b:1 <br>
> > >><br>
> > >> I get this error:<br>
> > >><br>
> > >> $ mpirun -hostfile hosts-hydra -np 2 test_dup<br>
> > >> Assertion failed in file ../src/mpi/coll/helper_fns.c at line 490: status->MPI_TAG == recvtag<br>
> > >> Assertion failed in file ../src/mpi/coll/helper_fns.c at line 490: status->MPI_TAG == recvtag<br>
> > >> internal ABORT - process 1<br>
> > >> internal ABORT - process 0<br>
> > >><br>
> > >> ===================================================================================<br>
> > >> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES<br>
> > >> = PID 30744 RUNNING AT oakmnt-0-b<br>
> > >> = EXIT CODE: 1<br>
> > >> = CLEANING UP REMAINING PROCESSES<br>
> > >> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES<br>
> > >> ===================================================================================<br>
> > >> [mpiexec@vulcan13] HYDU_sock_read (../../../../src/pm/hydra/utils/sock/sock.c:239): read error (Bad file descriptor)<br>
> > >> [mpiexec@vulcan13] control_cb (../../../../src/pm/hydra/pm/pmiserv/pmiserv_cb.c:199): unable to read command from proxy<br>
> > >> [mpiexec@vulcan13] HYDT_dmxu_poll_wait_for_event (../../../../src/pm/hydra/tools/demux/demux_poll.c:76): callback returned error status<br>
> > >> [mpiexec@vulcan13] HYD_pmci_wait_for_completion (../../../../src/pm/hydra/pm/pmiserv/pmiserv_pmci.c:198): error waiting for event<br>
> > >> [mpiexec@vulcan13] main (../../../../src/pm/hydra/ui/mpich/mpiexec.c:344): process manager error waiting for completion<br>
> > >><br>
> > >> Thanks.<br>
> > >> Amin Hassani,<br>
> > >> CIS department at UAB,<br>
> > >> Birmingham, AL, USA.<br>
> > >><br>
> > >><br>
> > >> _______________________________________________<br>
> > >> discuss mailing list<br>
> > >> <a href="mailto:discuss@mpich.org">discuss@mpich.org</a><br>
> > >><br>
> > >> To manage subscription options or unsubscribe:<br>
> > >><br>
> > >> <a href="https://lists.mpich.org/mailman/listinfo/discuss" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
> > ><br>
> > ><br>
> > > --<br>
> > > Antonio J. Peña<br>
> > > Postdoctoral Appointee<br>
> > > Mathematics and Computer Science Division<br>
> > > Argonne National Laboratory<br>
> > > 9700 South Cass Avenue, Bldg. 240, Of. 3148<br>
> > > Argonne, IL 60439-4847<br>
> > ><br>
> > > <a href="mailto:apenya@mcs.anl.gov">apenya@mcs.anl.gov</a><br>
> > > <a href="http://www.mcs.anl.gov/~apenya" target="_blank">www.mcs.anl.gov/~apenya</a><br>
> > ><br>
> ><br>
><br>
<br>
</div></div></blockquote></div><br></div></div>