[mpich-devel] MPI_Recv, blocking call concept
Ali MoradiAlamdarloo
timndus at gmail.com
Wed Jun 13 23:18:14 CDT 2018
Thanks for your comment.
On Fri, Jun 8, 2018 at 9:08 PM, Jed Brown <jed at jedbrown.org> wrote:
> Your assumption is that exascale machines will be homogeneous
Of course, Exascale machines will be heterogeneous in a global view. Two
applications running on the system will probably see heterogeneous computing
resources compared to each other, but I think each application must have its
own homogeneous computing resources; otherwise this heterogeneity itself may
cause load imbalance.
Yes, we can do some static hardware reconfiguration and find the best
configuration (probably heterogeneous) for a specific application, but I
don't think this is going to be feasible in Exascale systems.
> filled with dirt-cheap hardware that requires lots of power, and operated by
> low-wage workers
This is not my assumption. Yes, the hardware people are building low-power,
highly efficient computing cores, but currently what runs on top of them is
software that wastes power during slack (see the sketch after this list).
1- There is a time frame in which software wastes power.
2- Power is the most constrained resource at Exascale.
3- There will be millions (maybe billions) of cores acting like this.
So this must be handled, no matter how power efficient our computing cores
are.
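
To make the "slack" concrete, here is a minimal sketch (not from the original
mail, and the 5-second sleep is just a stand-in for a long local computation):
rank 1 blocks in MPI_Recv while rank 0 is still computing, and during that
wait the core typically busy-polls at full power without doing useful work.

/* Minimal sketch: slack at a blocking MPI_Recv caused by an imbalanced peer. */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        sleep(5);                 /* stand-in for a long local computation */
        int data = 42;
        MPI_Send(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int data;
        double t0 = MPI_Wtime();
        /* Blocks (usually busy-polling) until rank 0 finally sends. */
        MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 slack: %.2f s\n", MPI_Wtime() - t0);
    }

    MPI_Finalize();
    return 0;
}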
> and also that applications will be highly latency
> tolerant while also having such atrocious load imbalance that reducing
> power while waiting on communication will significantly reduce total
> operating costs?
>
Yes, there will probably be plenty of latency-insensitive Exascale
applications.
We would like developers to write code where every processor does pretty much
the same thing, takes the same amount of time, communicates at exactly the
same time, and then repeats. In the real world this never happens. As long as
there is slack, we have a chance to improve efficiency (a sketch follows
below).
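
A small sketch of that idealized lockstep pattern (again, my own illustration,
with the per-rank iteration count chosen arbitrarily to force imbalance):
every rank computes and then communicates, but because the work is uneven,
the fast ranks reach the collective early and sit in slack until the slowest
rank arrives.

/* Bulk-synchronous loop with deliberate load imbalance: fast ranks wait. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = 0.0, global;
    for (int step = 0; step < 3; step++) {
        /* "Same" computation, but higher ranks get more work (imbalance). */
        long iters = 10000000L * (rank + 1);
        for (long i = 0; i < iters; i++)
            local += 1e-9 * i;

        double t0 = MPI_Wtime();
        /* Everyone communicates "at the same time"; fast ranks wait here. */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        printf("step %d rank %d time in Allreduce: %.3f s\n",
               step, rank, MPI_Wtime() - t0);
    }

    MPI_Finalize();
    return 0;
}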