<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body dir="auto">
<div>Any chance you're using MPICH on one side and Open MPI on the other? You can get some weird situations when mixing the two. <br>
<br>
<br>
</div>
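One quick way to rule that out (a sketch; it assumes `mpirun` is the launcher on each node's PATH — run it on both hosts and compare):

```shell
# Print which mpirun is first on PATH and the first line of its version
# banner (MPICH's Hydra and Open MPI identify themselves there). The
# guard keeps the check from erroring out if no MPI is installed.
if command -v mpirun >/dev/null 2>&1; then
  which mpirun
  mpirun --version 2>&1 | head -n 1
else
  echo "mpirun not found on PATH"
fi
```

If the install path or the version banner differs between oakmnt-0-a and oakmnt-0-b, the two nodes are almost certainly launching different MPI stacks.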
<div><br>
On Nov 25, 2014, at 11:33 PM, Amin Hassani <<a href="mailto:ahassani@cis.uab.edu">ahassani@cis.uab.edu</a>> wrote:<br>
<br>
</div>
<blockquote type="cite">
<div>
<div dir="ltr">
<div class="gmail_default" style="">
<div class="gmail_default" style=""><font face="tahoma, sans-serif">Here you go!</font></div>
<div class="gmail_default" style=""><font face="tahoma, sans-serif"><br>
</font></div>
<div class="gmail_default" style=""><font face="tahoma, sans-serif">$ mpirun -hostfile hosts-hydra -np 2 hostname</font></div>
<div class="gmail_default" style="font-family:tahoma,sans-serif;font-size:small">
oakmnt-0-a</div>
<div class="gmail_default" style="font-family:tahoma,sans-serif;font-size:small">
oakmnt-0-b</div>
<div style="font-family:tahoma,sans-serif;font-size:small"><br>
</div>
<div style="font-family:tahoma,sans-serif;font-size:small">Thanks.</div>
</div>
</div>
<div class="gmail_extra"><br clear="all">
<div>
<div class="gmail_signature">
<div dir="ltr">Amin Hassani,<br>
CIS department at UAB,<br>
Birmingham, AL, USA.</div>
</div>
</div>
<br>
<div class="gmail_quote">On Tue, Nov 25, 2014 at 10:31 PM, Lu, Huiwei <span dir="ltr">
<<a href="mailto:huiweilu@mcs.anl.gov" target="_blank">huiweilu@mcs.anl.gov</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I can run your simplest code on my machine without a problem, so I suspect something is wrong with the cluster's network setup. Could you send me the output of the following?<br>
<br>
$ mpirun -hostfile hosts-hydra -np 2 hostname<br>
<br>
—<br>
<span class="HOEnZb"><font color="#888888">Huiwei<br>
</font></span>
<div class="HOEnZb">
<div class="h5"><br>
> On Nov 25, 2014, at 10:24 PM, Amin Hassani <<a href="mailto:ahassani@cis.uab.edu">ahassani@cis.uab.edu</a>> wrote:<br>
><br>
> Hi,<br>
><br>
> The code I sent you earlier had extra material in it that I didn't want to distract you with. Here is a simpler send/recv test that I just ran, and it failed.<br>
><br>
> which mpirun: it points to the specific directory where I install my MPI builds<br>
> /nethome/students/ahassani/usr/mpi/bin/mpirun<br>
><br>
> mpirun with no argument:<br>
> $ mpirun<br>
> [mpiexec@oakmnt-0-a] set_default_values (../../../../src/pm/hydra/ui/mpich/utils.c:1528): no executable provided<br>
> [mpiexec@oakmnt-0-a] HYD_uii_mpx_get_parameters (../../../../src/pm/hydra/ui/mpich/utils.c:1739): setting default values failed<br>
> [mpiexec@oakmnt-0-a] main (../../../../src/pm/hydra/ui/mpich/mpiexec.c:153): error parsing parameters<br>
><br>
><br>
><br>
> #include <mpi.h><br>
> #include <stdio.h><br>
> #include <malloc.h><br>
> #include <unistd.h><br>
> #include <stdlib.h><br>
><br>
> int skip = 10;<br>
> int iter = 30;<br>
><br>
> int main(int argc, char** argv)<br>
> {<br>
> int rank, size;<br>
> int i, j, k;<br>
> double t1, t2;<br>
> int rc;<br>
><br>
> MPI_Init(&argc, &argv);<br>
> MPI_Comm world = MPI_COMM_WORLD, newworld, newworld2;<br>
> MPI_Comm_rank(world, &rank);<br>
> MPI_Comm_size(world, &size);<br>
> int a = 0, b = 1;<br>
> if(rank == 0){<br>
> MPI_Send(&a, 1, MPI_INT, 1, 0, world);<br>
> }else{<br>
> MPI_Recv(&b, 1, MPI_INT, 0, 0, world, MPI_STATUS_IGNORE);<br>
> }<br>
><br>
> printf("b is %d\n", b);<br>
> MPI_Finalize();<br>
><br>
> return 0;<br>
> }<br>
><br>
> Thank you.<br>
><br>
><br>
> Amin Hassani,<br>
> CIS department at UAB,<br>
> Birmingham, AL, USA.<br>
><br>
> On Tue, Nov 25, 2014 at 10:20 PM, Lu, Huiwei <<a href="mailto:huiweilu@mcs.anl.gov">huiweilu@mcs.anl.gov</a>> wrote:<br>
> Hi, Amin,<br>
><br>
> Could you quickly give us the output of the following command: "which mpirun"<br>
><br>
> Also, your simplest code doesn't compile: "error: 't_avg' undeclared (first use in this function)". Can you fix it?<br>
><br>
> —<br>
> Huiwei<br>
><br>
> > On Nov 25, 2014, at 2:58 PM, Amin Hassani <<a href="mailto:ahassani@cis.uab.edu">ahassani@cis.uab.edu</a>> wrote:<br>
> ><br>
> > This is the simplest code I have that doesn't run.<br>
> ><br>
> ><br>
> > #include <mpi.h><br>
> > #include <stdio.h><br>
> > #include <malloc.h><br>
> > #include <unistd.h><br>
> > #include <stdlib.h><br>
> ><br>
> > int main(int argc, char** argv)<br>
> > {<br>
> > int rank, size;<br>
> > int i, j, k;<br>
> > double t1, t2;<br>
> > int rc;<br>
> ><br>
> > MPI_Init(&argc, &argv);<br>
> > MPI_Comm world = MPI_COMM_WORLD, newworld, newworld2;<br>
> > MPI_Comm_rank(world, &rank);<br>
> > MPI_Comm_size(world, &size);<br>
> ><br>
> > t2 = 1;<br>
> > MPI_Allreduce(&t2, &t_avg, 1, MPI_DOUBLE, MPI_SUM, world);<br>
> > t_avg = t_avg / size;<br>
> ><br>
> > MPI_Finalize();<br>
> ><br>
> > return 0;<br>
> > }<br>
> ><br>
> > Amin Hassani,<br>
> > CIS department at UAB,<br>
> > Birmingham, AL, USA.<br>
> ><br>
> > On Tue, Nov 25, 2014 at 2:46 PM, "Antonio J. Peña" <<a href="mailto:apenya@mcs.anl.gov">apenya@mcs.anl.gov</a>> wrote:<br>
> ><br>
> > Hi Amin,<br>
> ><br>
> > Can you share with us a minimal piece of code with which you can reproduce this issue?<br>
> ><br>
> > Thanks,<br>
> > Antonio<br>
> ><br>
> ><br>
> ><br>
> > On 11/25/2014 12:52 PM, Amin Hassani wrote:<br>
> >> Hi,<br>
> >><br>
> >> I am having a problem running MPICH on multiple nodes. When I run multiple MPI processes on a single node it works fine, but when I try to run across multiple nodes, it fails with the error below.<br>
> >> My machines run Debian and have both InfiniBand and TCP interconnects. I'm guessing it has something to do with the TCP network, but I can run Open MPI on these machines with no problem; for some reason I just cannot run MPICH across nodes. Please let me know if more info is needed from my side. I suspect there is some configuration I am missing. I used MPICH 3.1.3 for this test. I googled this problem but couldn't find a solution.<br>
> >><br>
> >> In my MPI program, I am doing a simple allreduce over MPI_COMM_WORLD.<br>
> >><br>
> >> My host file (hosts-hydra) looks like this:<br>
> >> oakmnt-0-a:1<br>
> >> oakmnt-0-b:1 <br>
> >><br>
> >> I get this error:<br>
> >><br>
> >> $ mpirun -hostfile hosts-hydra -np 2 test_dup<br>
> >> Assertion failed in file ../src/mpi/coll/helper_fns.c at line 490: status->MPI_TAG == recvtag<br>
> >> Assertion failed in file ../src/mpi/coll/helper_fns.c at line 490: status->MPI_TAG == recvtag<br>
> >> internal ABORT - process 1<br>
> >> internal ABORT - process 0<br>
> >><br>
> >> ===================================================================================<br>
> >> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES<br>
> >> = PID 30744 RUNNING AT oakmnt-0-b<br>
> >> = EXIT CODE: 1<br>
> >> = CLEANING UP REMAINING PROCESSES<br>
> >> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES<br>
> >> ===================================================================================<br>
> >> [mpiexec@vulcan13] HYDU_sock_read (../../../../src/pm/hydra/utils/sock/sock.c:239): read error (Bad file descriptor)<br>
> >> [mpiexec@vulcan13] control_cb (../../../../src/pm/hydra/pm/pmiserv/pmiserv_cb.c:199): unable to read command from proxy<br>
> >> [mpiexec@vulcan13] HYDT_dmxu_poll_wait_for_event (../../../../src/pm/hydra/tools/demux/demux_poll.c:76): callback returned error status<br>
> >> [mpiexec@vulcan13] HYD_pmci_wait_for_completion (../../../../src/pm/hydra/pm/pmiserv/pmiserv_pmci.c:198): error waiting for event<br>
> >> [mpiexec@vulcan13] main (../../../../src/pm/hydra/ui/mpich/mpiexec.c:344): process manager error waiting for completion<br>
> >><br>
> >> Thanks.<br>
> >> Amin Hassani,<br>
> >> CIS department at UAB,<br>
> >> Birmingham, AL, USA.<br>
> >><br>
> >><br>
> >> _______________________________________________<br>
> >> discuss mailing list<br>
> >> <a href="mailto:discuss@mpich.org">discuss@mpich.org</a><br>
> >><br>
> >> To manage subscription options or unsubscribe:<br>
> >><br>
> >> <a href="https://lists.mpich.org/mailman/listinfo/discuss" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
> ><br>
> ><br>
> > --<br>
> > Antonio J. Peña<br>
> > Postdoctoral Appointee<br>
> > Mathematics and Computer Science Division<br>
> > Argonne National Laboratory<br>
> > 9700 South Cass Avenue, Bldg. 240, Of. 3148<br>
> > Argonne, IL 60439-4847<br>
> ><br>
> > <a href="mailto:apenya@mcs.anl.gov">apenya@mcs.anl.gov</a><br>
> > <a href="http://www.mcs.anl.gov/~apenya" target="_blank">www.mcs.anl.gov/~apenya</a><br>
> ><br>
> ><br>
><br>
><br>
<br>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</blockquote>
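For reference, if the two stacks did get mixed: MPICH's Hydra launcher and Open MPI parse hostfiles differently, so the same file can behave differently under each launcher (syntax below is worth double-checking against your installed versions):

```
# MPICH / Hydra hostfile: "host:ranks"
oakmnt-0-a:1
oakmnt-0-b:1

# Open MPI hostfile: "host slots=n"
oakmnt-0-a slots=1
oakmnt-0-b slots=1
```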
</body>
</html>