[mpich-discuss] Hardware / Performance Recommendations?

Pavan Balaji balaji at mcs.anl.gov
Wed Dec 18 18:57:06 CST 2013


Lou,

Unfortunately, there’s no way we can say anything about this, since it depends on the applications/workflows.

FWIW, the best “supercomputer” will be one that has an infinitely fast processor and a ton of superfast memory.  But that’ll be too expensive.  So, really, it depends on what your application needs.
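
That said, measurement is cheap even before the workflows are nailed down.  On your clock-speed-vs-cores question: what usually decides it is whether the code is compute-bound or memory-bandwidth-bound.  A rough triad-style loop like the sketch below (a toy, not the official STREAM benchmark) gives a first answer: if the GB/s figure stops improving as you add threads, memory channels are the limit, and neither extra cores nor extra GHz will help much.

/* triad.c -- rough STREAM-triad-style sketch (NOT the official STREAM
 * benchmark; sizing and timing are simplified).  Each array is 32M
 * doubles (256 MB), well past any cache.
 * Build: gcc -O2 -fopenmp triad.c -o triad
 * Run with OMP_NUM_THREADS=1,2,4,... and watch where GB/s stops scaling. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N (32L * 1024 * 1024)

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    double t0 = omp_get_wtime();
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];              /* 2 reads + 1 write per i */
    double sec = omp_get_wtime() - t0;

    /* three arrays of N doubles cross the memory bus in the timed loop */
    printf("%d threads: %.2f GB/s\n", omp_get_max_threads(),
           3.0 * N * sizeof(double) / sec / 1e9);
    return a[0] == 7.0 ? 0 : 1;                /* defeat dead-code elimination */
}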

  — Pavan

On Dec 19, 2013, at 2:31 AM, Lou Picciano <loupicciano at comcast.net> wrote:

> MPICH Friends, 
> 
> Could we ask for some insights/recommendations on what hardware your sites are using in delivering these MPI(CH) workflows?
> We've been doing proof-of-concept work thus far on readily-available commodity systems; in the near future, we'll be doing all custom builds, so we're able to keep an 'open mind'!
> 
> At this point, we're looking at all-SuperMicro builds with ECC memory and Intel Xeon E2nnn v2 series CPUs, and perhaps leveraging nVidia GPUs as well.
> 
> Are there any big gotchas? Or no-gos?
> 
> What would your opinions be on the faster-clock-speed-vs.-more-cores argument?
> 
> Is there a good argument in favor of the most expensive Xeons over commodity 'consumer-grade' i7 CPUs (ECC aside)? Or compared to multicore Athlon chips?
> 
> Is the negotiation over the network (10GbE) trivial enough that adding more (smaller) nodes might be roughly equivalent to building monster servers? I.e., are there any numbers comparing the more-nodes-fewer-cores-each approach to the 1-node-many-cores approach? (We think we know the answer to this one, roughly, but are wondering if there are any hard numbers we could use in our planning decisions.) [A ping-pong sketch for measuring this follows the quoted message.]
> 
> Are there any specific hardware pieces / controllers / interfaces to be avoided?
> 
> I realize it's hard to make such assertions in a vacuum - i.e., without more specifics about the workflow - but we don't have these fully nailed down yet. Nor will we have the opportunity in the very near term to apply much code optimization expertise.
> 
> For the moment, we're projecting for clustered Fortran workflows - but would like to hear observations/insights into how best to keep options open.
> 
> Many Thanks for Insights, Lou Picciano
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss
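
P.S. on the network question: numbers from our systems wouldn't transfer to yours, but they are cheap to generate.  Below is a minimal MPI ping-pong sketch (assumptions: exactly two ranks, ideally placed on two different nodes; the host names in the run line are placeholders).  The polished versions of the same idea are osu_latency and osu_bw from the OSU micro-benchmarks.  The one-way latency and bandwidth at your typical message sizes are the numbers that decide more-small-nodes vs. monster-server.

/* pingpong.c -- minimal MPI ping-pong sketch (assumes exactly 2 ranks,
 * ideally on two different nodes).  Rough numbers only; see osu_latency
 * and osu_bw in the OSU micro-benchmarks for the real thing.
 * Build: mpicc -O2 pingpong.c -o pingpong
 * Run:   mpiexec -n 2 -hosts node1,node2 ./pingpong   (placeholder hosts) */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 1000;
    for (int bytes = 1; bytes <= 1 << 20; bytes <<= 4) {
        char *buf = malloc(bytes);
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {   /* rank 0 sends first, then waits for the echo */
                MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else {           /* rank 1 echoes everything back */
                MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double usec = (MPI_Wtime() - t0) * 1e6 / (2.0 * iters);
        if (rank == 0)
            printf("%8d bytes  %9.2f us one-way  %9.2f MB/s\n",
                   bytes, usec, bytes / usec);
        free(buf);
    }
    MPI_Finalize();
    return 0;
}

Roughly speaking: if small-message latency dominates your workflows, fewer and fatter nodes win; if large-message bandwidth dominates, 10GbE itself may be the bottleneck either way.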

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji

