[mpich-devel] ROMIO collective i/o memory use
Bob Cernohous
bobc at us.ibm.com
Mon May 6 14:30:15 CDT 2013
> From: Rob Ross <rross at mcs.anl.gov>
>
> Should we consider this as interest in working on this problem on
> the IBM side :)? -- Rob
Say what?! ;)
Meaning we can get rid of all O(p) allocations? I'm not sure how you do
internal collectives on behalf of the app/MPI-IO without at least some of
those. I was looking more for agreement that collective I/O is 'what it
is'... and maybe some idea of whether we just have known limitations on
scaling it. Yes, that BG alltoallv is a bigger problem, but we can avoid it
with an env var -- is that just going to have to be 'good enough'? (I
think Jeff P wrote that on BG/P and got good performance with the
alltoallv. Trading memory for performance isn't unusual, and at least it's
selectable.)