Based on my understanding, MPI_COMM_WORLD is a pre-defined intra-communicator; I don't think you can change its size.

What MPI_Comm_accept and MPI_Comm_connect create is an inter-communicator, which consists of two disjoint groups of processes (see Chapter 6.6 of the MPI 3.0 standard). If you need an intra-communicator, you can use MPI_Intercomm_merge to create one from it; a sketch follows below.

--
Huiwei
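For example, here is a minimal sketch of the server side, assuming the same open-port/accept setup as in your code quoted below (the MPI_Comm_free and MPI_Close_port calls are additions for cleanup). The client side is symmetric: call MPI_Intercomm_merge(server, 1, &merged) after MPI_Comm_connect. Note that MPI_COMM_WORLD itself stays at size 1 in each program; it is the merged intra-communicator that reports size 2.

/* Server side: accept a connection, then merge the resulting
 * inter-communicator into one intra-communicator spanning both jobs. */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    MPI_Comm client, merged;
    char port_name[MPI_MAX_PORT_NAME];
    int merged_size;

    MPI_Init(&argc, &argv);
    MPI_Open_port(MPI_INFO_NULL, port_name);
    printf("server port_name is %s\n", port_name);
    MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &client);

    /* high = 0 on the accepting side, high = 1 on the connecting side,
     * so the client processes are ordered after the server processes
     * in the merged group. */
    MPI_Intercomm_merge(client, 0, &merged);
    MPI_Comm_size(merged, &merged_size);
    printf("merged intra-communicator size = %d\n", merged_size); /* 2 for one server + one client */

    MPI_Comm_free(&merged);
    MPI_Comm_disconnect(&client);
    MPI_Close_port(port_name);
    MPI_Finalize();
    return 0;
}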
On Thu, Jan 22, 2015 at 6:14 PM, haozi <yidanyiji@163.com> wrote:

> Thanks, Lu.
> My simple code is as follows.
>
> // server
> #include <stdio.h>   /* for printf */
> #include "mpi.h"
>
> int main(int argc, char *argv[])
> {
>     MPI_Comm client;
>     char port_name[MPI_MAX_PORT_NAME];
>     int size;
>
>     MPI_Init(&argc, &argv);
>     MPI_Comm_size(MPI_COMM_WORLD, &size);
>     MPI_Open_port(MPI_INFO_NULL, port_name);
>     printf("server port_name is %s\n\n", port_name);
>     MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &client);
>     MPI_Comm_size(MPI_COMM_WORLD, &size);
>     printf("At server, comm_size=%d @ MPI_COMM_WORLD=%x, Client_World=%x\n", size, MPI_COMM_WORLD, client);
>     MPI_Comm_size(client, &size);
>     printf("At server, client_size=%d @ MPI_COMM_WORLD=%x, Client_World=%x\n", size, MPI_COMM_WORLD, client);
>
>     MPI_Comm_disconnect(&client);
>     MPI_Finalize();
>     return 0;
> }
>
> // client
> #include <stdio.h>   /* for printf */
> #include <string.h>  /* for strcpy */
> #include "mpi.h"
>
> int main(int argc, char **argv)
> {
>     MPI_Comm server;
>     char port_name[MPI_MAX_PORT_NAME];
>     int size;
>
>     MPI_Init(&argc, &argv);
>     strcpy(port_name, argv[1]);  /* the server's port name is passed on the command line */
>     printf("server port name: %s\n", port_name);
>     MPI_Comm_connect(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &server);
>     MPI_Comm_size(MPI_COMM_WORLD, &size);
>     printf("At client, comm_size=%d @ MPI_COMM_WORLD=%x, Server_World=%x\n", size, MPI_COMM_WORLD, server);
>     MPI_Comm_size(server, &size);
>     printf("At client, server_size=%d @ MPI_COMM_WORLD=%x, Server_World=%x\n", size, MPI_COMM_WORLD, server);
>
>     MPI_Comm_disconnect(&server);
>     MPI_Finalize();
>     return 0;
> }
>
> The run commands are as follows.
> mpiexec -n 1 ./server
> mpiexec -n 1 ./client
>
> BUT, the SIZE is 1, NOT 2.
>
> My question is as before: how does the size of MPI_COMM_WORLD change to 2?
>
> At 2015-01-23 00:30:43, "Huiwei Lu" <huiweilu@mcs.anl.gov> wrote:
>
>> You may take a look at MPI_Comm_accept and MPI_Comm_connect, which will connect a new client process to a server process. See Chapter 10 of the MPI 3.0 standard (www.mpi-forum.org/docs/mpi-3.0/mpi30-report.pdf) for a detailed example.
>>
>> --
>> Huiwei Lu
>> Postdoc Appointee
>> Mathematics and Computer Science Division
>> Argonne National Laboratory
>> http://www.mcs.anl.gov/~huiweilu/
>>
>> On Thu, Jan 22, 2015 at 9:41 AM, haozi <yidanyiji@163.com> wrote:
>>
>>> Hi, guys.
>>>
>>> This web page (http://wiki.mpich.org/mpich/index.php/PMI_v2_Design_Thoughts) says:
>>>
>>>     "Singleton init. This is the process by which a program that was not started with mpiexec can become an MPI process and make use of all MPI features, including MPI_Comm_spawn, needs to be designed and documented, with particular attention to the disposition of standard I/O. Not all process managers will want to or even be able to create a new mpiexec process, so this needs to be negotiated. Similarly, the disposition of stdio needs to be negotiated between the singleton process and the process manager. To address these issues, a new singleton init protocol has been implemented and tested with the gforker process manager."
>>>
>>> I am very interested in this function.
>>> Can it solve the following problem?
>>> At the beginning, the MPI job uses the mpiexec command to start three MPI processes. That is to say, there are three MPI processes in MPI_COMM_WORLD. At some point, the job finds that it needs another MPI process to cooperate with the three existing ones. So the question is: could PMI help a non-MPI process become an MPI process of the current MPI_COMM_WORLD? That is to say, could the non-MPI process use the PMI function to become a member process of the current MPI job, which would then have FOUR MPI processes in MPI_COMM_WORLD?
>>>
>>> Is there some method to solve this problem?
>>> Does anybody have an example?
>>>
>>> Thanks!!!

_______________________________________________
discuss mailing list     discuss@mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss