[mpich-devel] MPI_Recv, blocking call concept

Jeff Hammond jeff.science at gmail.com
Thu Jun 7 17:00:09 CDT 2018


It spins because that is optimal for latency and because of how the
shared-memory protocols work.  If you want true blocking semantics, use the
ch3:sock channel, which parks the calling thread in the kernel.  That is
great for oversubscription but terrible for performance in the common case
of exact subscription or undersubscription.

You can't save much power unless you drop into lower P-/C-states, and the
states that save significant power also increase latency enormously.  Dell
did some work a while back that turned down the CPU frequency during MPI
calls (
http://www.hpcadvisorycouncil.com/events/2013/Spain-Workshop/pdf/5_Dell.pdf),
which saved a modest amount of power.

Jeff

On Thu, Jun 7, 2018 at 4:27 AM, Ali MoradiAlamdarloo <timndus at gmail.com>
wrote:

> Dear all,
>
> My understanding of a blocking call is something like this: when a
> process (P0) makes a blocking system call, the scheduler blocks the
> process and assigns another process (P1) to the core so the CPU is used
> efficiently.  Eventually P0's response becomes ready and the scheduler
> can map it onto a core again.
>
> But this is not what happens in MPICH's MPI_Recv.  You call it a
> BLOCKING call, but the process that calls this function doesn't actually
> block; it just keeps running on the core, WAITING for its response.
>
> Why did you decide to do this?  Why keep a process waiting on a valuable
> processing core and burning power?
>


-- 
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/