<div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">2013/5/29 Jeff Hammond <span dir="ltr"><<a href="mailto:jhammond@alcf.anl.gov" target="_blank">jhammond@alcf.anl.gov</a>></span><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div class="im"><br>
> > Wouldn't a message such as "`pwd` directory does not exist on node velascoj"
> > be more illustrative?
>
> Yes. However, the set of improper uses of MPI that could generate
> helpful error messages is uncountable. Do you not think it is a good
> use of finite developer effort to implement an infinitesimal fraction
> of such warnings? There has to be a minimum requirement placed upon
> the user. I personally think that it should include running in a
> directory that actually exists.

Certainly! But then again, some developer must have thought it a good
idea, since under different circumstances, I get:

/bin/bash -c mpiexec -n 1 -hosts tauro,velascoj gmandel
[proxy:0:0@tauro] launch_procs (./pm/pmiserv/pmip_cb.c:648): unable to change wdir to /tmp/edscott/mnt/tauro-home/GIT/gmandel (No such file or directory)
[proxy:0:0@tauro] HYD_pmcd_pmip_control_cmd_cb (./pm/pmiserv/pmip_cb.c:893): launch_procs returned error
[proxy:0:0@tauro] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[proxy:0:0@tauro] main (./pm/pmiserv/pmip.c:206): demux engine error waiting for event
[mpiexec@velascoj] control_cb (./pm/pmiserv/pmiserv_cb.c:202): assert (!closed) failed
[mpiexec@velascoj] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[mpiexec@velascoj] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:197): error waiting for event
[mpiexec@velascoj] main (./ui/mpich/mpiexec.c:331): process manager error waiting for completion

This is inconsistent with the previous behavior. Anyway, it's no big deal.

BTW, would you happen to know why a process started with MPI_Comm_spawn
goes into what looks like an active wait after MPI_Comm_disconnect and
MPI_Finalize have been called? These spawned processes hog the CPU until
the parent process exits. Curiously enough, this behavior is not
mirrored in Open MPI.
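
For reference, here is a minimal, self-contained sketch of the pattern I
mean (this is not the actual gmandel code; spawning argv[0] and the
sleep() in the parent are just stand-ins for illustration):

/* Minimal sketch of the spawn/disconnect pattern described above. */
#include <mpi.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    MPI_Comm parent, child;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);

    if (parent == MPI_COMM_NULL) {
        /* Parent: spawn one child running this same executable. */
        MPI_Comm_spawn(argv[0], MPI_ARGV_NULL, 1, MPI_INFO_NULL, 0,
                       MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);
        MPI_Comm_disconnect(&child);
        /* Parent keeps working for a while before finalizing. */
        sleep(30);
    } else {
        /* Child: detach from the parent and finalize right away. */
        MPI_Comm_disconnect(&parent);
    }

    /* With MPICH the child seems to busy-wait (100% CPU) in or after
     * MPI_Finalize until the parent exits; with Open MPI it idles. */
    MPI_Finalize();
    return 0;
}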
</div><div class="gmail_quote"><br></div><div class="gmail_quote">Edscott<br></div><div class="gmail_quote"><br><br></div><br><div><div><div>-------------------------------<br></div>Dr. Edscott Wilson Garcia<br></div>
Applied Mathematics and Computing<br></div>Mexican Petroleum Institute</div></div>