<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body dir="auto"><div>Probably, while all of the processes are in the MPI stack waiting, most of them are blocked on a mutex that will only let one of them into the MPI progress engine at a time. Depending on how your application is architected, you could have just one thread make progress in MPI by calling Waitany, and then just use Test or Testany in the other threads. Unless you're using per-object locking, you won't have multiple threads polling the network at the same time. <br></div><div><br></div><div>Wesley</div><div><br>On Sep 27, 2014, at 7:39 AM, Anatoly G <<a href="mailto:anatolyrishon@gmail.com">anatolyrishon@gmail.com</a>> wrote:<br><br></div><blockquote type="cite"><div><div dir="ltr"><div>You are absolutely right.</div><div>But some engineer designed the system to execute another application on the computer while the system is in the ready state.</div><div>That application executes only while my application is in the ready state.</div><div>I'm not familiar with the reason, but currently this is the situation.</div><div>One more question:</div><div>My application was not designed by me, so I can't explain the design rationale for this solution.</div><div>I have a number of communicators cloned from MPI_COMM_WORLD. On each communicator I execute MPI_Waitany in a separate thread. The data mostly arrives on a single communicator, while the other threads are just waiting. I mean that a single thread is actually working with MPI while the others busy-wait. Do I have a polling penalty in this case? </div><div><br></div><div>Regards,</div><div>Anatoly.</div><div><br><div><br></div></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Sep 23, 2014 at 4:49 PM, Rob Latham <span dir="ltr"><<a href="mailto:robl@mcs.anl.gov" target="_blank">robl@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
<br>
On 09/23/2014 09:38 AM, Wesley Bland wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Out of curiosity, what is the issue with using 100% of the CPU? If<br>
you’re not using it for your application (which it appears that you<br>
aren’t since you’re calling MPI_Wait), what difference does it make if<br>
MPI uses all of it?<br>
</blockquote>
<br></span>
power consumption, I'd imagine.<span class="HOEnZb"><font color="#888888"><br>
<br>
==rob<br>
<br>
-- <br>
Rob Latham<br>
Mathematics and Computer Science Division<br>
Argonne National Lab, IL USA</font></span><div class="HOEnZb"><div class="h5"><br>
_______________________________________________<br>
discuss mailing list <a href="mailto:discuss@mpich.org" target="_blank">discuss@mpich.org</a><br>
To manage subscription options or unsubscribe:<br>
<a href="https://lists.mpich.org/mailman/listinfo/discuss" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
</div></div></blockquote></div><br></div>
</div></blockquote></body></html>