<meta http-equiv="Content-Type" content="text/html; charset=utf-8"><div dir="ltr">Thank you very much.<div><br><div>I'm using MPICH 3.1.</div><div><br></div><div>The application's goal is to continue execution as long as at least the Master process is alive. The assumption is that processes can fail, but no more than one failure at a time. In that case the surviving processes must keep going "like a Terminator". (-:<br></div><div>At run time I can't afford to call MPI_Test "manually" on all requests; it's too heavy. My network traffic is roughly 400-500 Mb/s.</div><div><br></div><div>Implementation:</div><div>I pass the -disable-auto-cleanup flag to enable fault tolerance.</div><div>Every process posts MPI_Irecv for the other processes and then calls MPI_Waitany on all active requests. When data arrives correctly, I process it and post a new MPI_Irecv for the source process. If MPI_Waitany returns an error (some process failed), I identify the failed rank and stop communicating with it at the application level (no more sends or receives to it). In this mode the system continues running with the surviving processes. I don't use any collective operations; I simulate them with MPI_Irecv & MPI_Isend plus MPI_Waitany or MPI_Waitall (which return an error if some process failed).</div><div><br></div><div>I think this is an ugly solution, but I can't come up with anything more elegant.</div><div>Any alternative solution would be welcome.</div><div><br></div><div>The problem is in the "ready" state, when processes are mostly idle and only 3-4 messages are expected. 
In this state, all CPUs are busy executing the polling mechanism.</div><div><br></div><div><br></div><div><br></div><div>Regards,</div><div>Anatoly.</div><div><br></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Sep 22, 2014 at 4:46 PM, Wesley Bland <span dir="ltr"><<a href="mailto:wbland@anl.gov" target="_blank">wbland@anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word">Which version of MPICH are you using?<div><br></div><div>Which fault tolerance features are you using? Fault tolerance is currently undergoing some changes and has different features than it used to have.</div><div><br></div><div>AFAIK, neither version of FT has been tested with ch3:sock. It’s possible that it will work, but FT is still a very experimental feature and hasn’t been widely tested.</div><div><br></div><div>If you want to avoid polling, you can use non-blocking receive calls to post a receive and poll the system yourself periodically (using MPI_TEST). This will give your application an opportunity to do something else while waiting for the receives to complete.</div><div><br></div><div>Thanks,</div><div>Wesley</div><div><div class="h5"><div><br><div><blockquote type="cite"><div>On Sep 22, 2014, at 8:30 AM, Anatoly G <<a href="mailto:anatolyrishon@gmail.com" target="_blank">anatolyrishon@gmail.com</a>> wrote:</div><br><div><div dir="ltr">Dear MPICH.<div>I have a problem with the MPICH polling mechanism.</div><div>I'm working on a cluster. 
There are 2-4 processes on each computer (I can't run a single process per computer because of application requirements).</div><div>My system has 2 states:</div><div>Ready - slaves listen to the master (but no data flows)</div><div>Run - the master starts communication, then data flows.</div><div>When the system is in the ready state (all processes except the master have posted MPI_Recv requests on the master) but the master is not yet sending data, I see CPU usage > 100% (more than 1 core used) per process. When 4 processes are in the ready state (waiting for data), the computer begins to slow down other processes, I think because of polling.</div><div>I tried to build MPICH with <span style="font-family:arial,sans-serif;font-size:12.666666984558105px"> </span><span style="font-family:arial,sans-serif;font-size:12.666666984558105px">--with-device=ch3:sock; then I get 0% CPU usage in the ready state, but I have a problem with the fault tolerance feature.</span></div><div><span style="font-family:arial,sans-serif;font-size:12.666666984558105px">My questions are:</span></div><div><font face="arial, sans-serif"><span style="font-size:12.6666669845581px">1) Is it expected that building with </span></font><span style="font-family:arial,sans-serif;font-size:12.666666984558105px">--with-device=ch3:sock causes fault tolerance not to work? Is fault tolerance based on the polling mechanism?</span></div><div><span style="font-family:arial,sans-serif;font-size:12.666666984558105px">2) Can I change the polling rate to reduce the CPU load? 
I understand that the penalty would be a slower transfer rate.</span></div><div><span style="font-family:arial,sans-serif;font-size:12.666666984558105px">3) Can I use any other MPI API to check whether a message from the master has arrived without activating the polling mechanism?</span></div><div><span style="font-family:arial,sans-serif;font-size:12.666666984558105px"><br></span></div><div><span style="font-family:arial,sans-serif;font-size:12.666666984558105px">Regards,</span></div><div><span style="font-family:arial,sans-serif;font-size:12.666666984558105px">Anatoly. </span></div><div><span style="font-family:arial,sans-serif;font-size:12.666666984558105px"><br></span></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, May 8, 2014 at 3:57 PM, Balaji, Pavan <span dir="ltr"><<a href="mailto:balaji@anl.gov" target="_blank">balaji@anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
This is expected. Currently, the only way to not have MPICH poll is to configure with --with-device=ch3:sock. Please note that this can cause performance loss (the polling is helpful for performance in the common case).<br>
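[For reference, a build along the lines Pavan describes might look like the following; the source directory and install prefix are illustrative assumptions, not taken from the thread:]

```shell
# Hypothetical build of MPICH with the sock channel, which blocks in the OS
# instead of busy-polling. Paths and job count are illustrative.
cd mpich-3.1
./configure --with-device=ch3:sock --prefix=$HOME/mpich-sock
make -j4 && make install
```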
<br>
We are planning to allow this in the default build as well in the future.<br>
<br>
— Pavan<br>
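[Editor's note: the periodic-test approach Wesley mentions above (post a non-blocking receive, then check it yourself at intervals) could be sketched roughly as below. The 1 ms sleep interval, tag 0, and MPI_BYTE payload are illustrative assumptions, not from the thread.]

```c
/* Sketch: avoid busy-waiting by posting MPI_Irecv and checking it yourself
 * with MPI_Test at a low rate, sleeping between checks so the core is
 * released to other work instead of spinning inside the progress engine. */
#include <mpi.h>
#include <time.h>

static void recv_with_throttled_poll(void *buf, int count, int src)
{
    MPI_Request req;
    int done = 0;
    struct timespec pause = {0, 1000000L}; /* 1 ms between tests (assumed) */

    MPI_Irecv(buf, count, MPI_BYTE, src, 0, MPI_COMM_WORLD, &req);
    while (!done) {
        MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        if (!done)
            nanosleep(&pause, NULL); /* yield the CPU instead of spinning */
    }
}
```

The trade-off is latency: a message can sit unnoticed for up to one sleep interval, which is why the default polling build favors spinning.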
<div><div><br>
On May 8, 2014, at 7:54 AM, Anatoly G <<a href="mailto:anatolyrishon@gmail.com" target="_blank">anatolyrishon@gmail.com</a>> wrote:<br>
<br>
> Dear MPICH forum.<br>
> I created an endless MPI program.<br>
> In this program each process calls MPI_Recv from another process, without any matching MPI_Send.<br>
> When I execute this program, I see each process using ~100% of a CPU core.<br>
> Is this behavior (I suppose polling) normal?<br>
> Can I reduce the MPI_Recv CPU penalty?<br>
><br>
> Regards,<br>
> Anatoly.<br>
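[Editor's note: the attached reproducer is not shown in the archive; a minimal program matching the description above (every rank blocks in MPI_Recv with no sender, pinning a core under the default polling channel) might look like this hypothetical sketch:]

```c
/* Hypothetical reproducer: each rank posts a blocking MPI_Recv that is
 * never satisfied. With the default (polling) channel, every rank spins
 * at ~100% of a core inside MPI_Recv; with ch3:sock it sleeps in the OS. */
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, buf;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Receive from the next rank in a ring; no rank ever sends. */
    MPI_Recv(&buf, 1, MPI_INT, (rank + 1) % size, 0,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Finalize(); /* never reached */
    return 0;
}
```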
</div></div>> <mpi_polling.cpp>_______________________________________________<br>
> discuss mailing list <a href="mailto:discuss@mpich.org" target="_blank">discuss@mpich.org</a><br>
> To manage subscription options or unsubscribe:<br>
> <a href="https://lists.mpich.org/mailman/listinfo/discuss" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
</blockquote></div><br></div>
</div></blockquote></div><br></div></div></div></div></blockquote></div><br></div>