Thank you, Kenneth. I figured as much. No matter how long I searched, this question keeps popping up, and there is no solution to date.

On the bright side I figured: oh well, we can launch the separate workers, connect them into a single intracommunicator via MPI_Comm_accept()/MPI_Comm_connect() followed by MPI_Intercomm_merge(), and pass the task in via shared memory. It seems to be working, although direct in-process control over the task data hand-off would still be nicer.

Thank you for the issue tracker!
-dmitriy
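
In case it helps anyone searching the archives later, the accepting side of that pattern looks roughly like this (a sketch only; the file name, the printf-based port exchange, and the MPI_Bcast hand-off are illustrative stand-ins for our actual shared-memory hand-off):

/* server.c -- sketch only; launch with: mpiexec -n 1 ./server */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char port_name[MPI_MAX_PORT_NAME];
    MPI_Comm inter, merged;
    int task = 42;                    /* stand-in for the in-memory task data */

    MPI_Init(&argc, &argv);

    MPI_Open_port(MPI_INFO_NULL, port_name);
    printf("port: %s\n", port_name); /* hand the port name to the client out of band */

    /* Block until the already-running client dials in. */
    MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);

    /* Collapse the resulting inter-communicator into one intra-communicator. */
    MPI_Intercomm_merge(inter, /* high = */ 0, &merged);

    /* Hand the task off over the merged communicator. */
    MPI_Bcast(&task, 1, MPI_INT, 0, merged);

    MPI_Comm_free(&merged);
    MPI_Comm_disconnect(&inter);
    MPI_Close_port(port_name);
    MPI_Finalize();
    return 0;
}

The connecting side is the mirror image, and both processes still have to be launched with mpiexec -n 1, per the discussion below.
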
On Tue, Apr 26, 2016 at 8:18 AM, Kenneth Raffenetti <raffenet@mcs.anl.gov> wrote:

Hi Dmitriy,
mpiexec is still needed for this type of operation with MPICH today. This ticket has some of the previous discussion about singleton init and spawn, but there hasn't been much demand for the feature: https://trac.mpich.org/projects/mpich/ticket/1074

Ken

On 04/21/2016 12:18 AM, Dmitriy Lyubimov wrote:

Hi,

I guess I need to ask more specific questions.

I saw that the same issue was raised a few years ago, and apparently there was no solution for it back then, so I thought I'd check again.

The question is about bypassing mpiexec: connecting already-running processes (which already hold some data in memory to share with the MPI tasks) into a single communicator, without having to spawn yet another process with mpiexec.

Suppose one of these processes calls MPI_Comm_accept(), and the other somehow knows the MPI port name and calls MPI_Comm_connect() to connect to it.
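
Roughly, the shape I have in mind on the connecting side (a sketch only, assuming the port name published by the accepting process arrives as argv[1]):

/* client.c -- sketch only; launch with: mpiexec -n 1 ./client "<port-name>" */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm inter, merged;
    int task = 0;

    if (argc < 2) {
        fprintf(stderr, "usage: client <port-name>\n");
        return 1;
    }

    MPI_Init(&argc, &argv);

    /* argv[1] carries the port name obtained out of band. */
    MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);

    /* high = 1 so the accepting side's ranks come first after the merge. */
    MPI_Intercomm_merge(inter, 1, &merged);

    /* Rank 0 of the merged communicator is the accepting side; receive the task. */
    MPI_Bcast(&task, 1, MPI_INT, 0, merged);
    printf("received task %d\n", task);

    MPI_Comm_free(&merged);
    MPI_Comm_disconnect(&inter);
    MPI_Finalize();
    return 0;
}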

Everything works as long as both server and client are started with "mpiexec -n 1". Without mpiexec, however, the same attempt produces errors and the connection never happens:

[mpiexec@Intel-Kubu] match_arg (utils/args/args.c:159): unrecognized argument pmi_args
[mpiexec@Intel-Kubu] HYDU_parse_array (utils/args/args.c:174): argument matching returned error
[mpiexec@Intel-Kubu] parse_args (ui/mpich/utils.c:1596): error parsing input array
[mpiexec@Intel-Kubu] HYD_uii_mpx_get_parameters (ui/mpich/utils.c:1648): unable to parse user arguments
[mpiexec@Intel-Kubu] main (ui/mpich/mpiexec.c:153): error parsing parameters

So there is no way to work around the necessity of spawning an entirely new process with mpiexec?

Thank you very much for your help.
-Dmitriy

_______________________________________________
discuss mailing list  discuss@mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss