[mpich-discuss] Indexed MPI_Reduce_local

Jed Brown jedbrown at mcs.anl.gov
Fri Dec 28 19:33:40 CST 2012


The context is a comm library on top of MPI. I provide sparse communication
with respect to a user-provided "unit" datatype (usually basic types or
small contiguous structs).

I want to reduce incoming (contiguous) data into a local buffer at sparse
(indexed) locations. For example, suppose I have a local buffer of length 5
containing [0,100,200,300,400] and two indexed receives:

Recv A: indices=[0,2,3], values = [10,11,12]
Recv B: indices=[2,4], values=[23,24]

If I reduce using SUM, the operation should complete with

[10, 100, 234, 312, 424]

in the local buffer. It looks like if I want to use MPI_Reduce_local, I
have to create a bunch of tiny arrays:

[[0,10],
 [100],
 [200,11,23],
 [300,12],
 [400,24]]

and run MPI_Reduce_local on each row. This is annoying because (a) there is
no API for copying the unit datatype and (b) most of these short lists will
have length 2 or 3, so this is likely to be slow.
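Spelled out per entry (skipping the explicit rows and reducing each unit
straight into the local buffer), the loop would look roughly like the sketch
below; the function and variable names are placeholders, and `extent' stands
for the extent of the unit datatype:

#include <mpi.h>

/* Sketch only: apply one indexed receive by reducing each unit directly
 * into the local buffer, one MPI_Reduce_local call per destination entry.
 * `unit' is the user-provided datatype, `extent' its extent. */
static void reduce_indexed(void *local, MPI_Datatype unit, MPI_Aint extent,
                           MPI_Op op, int n, const int idx[], void *vals)
{
  for (int i = 0; i < n; i++) {
    void *in    = (char *)vals  + (MPI_Aint)i      * extent;
    void *inout = (char *)local + (MPI_Aint)idx[i] * extent;
    MPI_Reduce_local(in, inout, 1, unit, op); /* inout = in op inout */
  }
}

One MPI_Reduce_local call per destination entry is exactly the overhead I'd
like to avoid.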

I could implement this operation in-place using MPI_Accumulate on
COMM_SELF, but I'd rather avoid MPI_Accumulate because it doesn't work with
quad-precision or with MINLOC on {long; long}. Related tickets include:
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/338
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/318
https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/319
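
For reference, that in-place variant would look roughly like the following
sketch: a one-shot window over the local buffer on COMM_SELF plus an
indexed-block target datatype built from the receive's indices (all names
are placeholders):

#include <mpi.h>

/* Sketch only: the MPI_Accumulate-on-COMM_SELF variant.  A window exposes
 * the local buffer, and the indexed-block target datatype scatters the
 * contiguous incoming values into it in a single call. */
static void accumulate_indexed(void *local, int nlocal, MPI_Datatype unit,
                               MPI_Op op, int n, const int idx[], void *vals)
{
  MPI_Win      win;
  MPI_Aint     lb, extent;
  MPI_Datatype target;

  MPI_Type_get_extent(unit, &lb, &extent);
  MPI_Win_create(local, (MPI_Aint)nlocal * extent, (int)extent,
                 MPI_INFO_NULL, MPI_COMM_SELF, &win);
  MPI_Type_create_indexed_block(n, 1, (int *)idx, unit, &target);
  MPI_Type_commit(&target);

  MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
  MPI_Accumulate(vals, n, unit, 0, 0, 1, target, op, win);
  MPI_Win_unlock(0, win);

  MPI_Type_free(&target);
  MPI_Win_free(&win);
}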

Any other suggestions?

In lieu of a more elegant solution, I'm going to match on the types and ops
that I need to support and write the sparse reduction by hand.
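Concretely, that means something like the following for the
(MPI_DOUBLE, MPI_SUM) case (a sketch only), with analogous loops for the
other supported type/op combinations:

/* Sketch only: hand-rolled fallback once the (type, op) pair has been
 * matched; this is the (MPI_DOUBLE, MPI_SUM) case. */
static void sparse_sum_double(double *local, int n,
                              const int idx[], const double *vals)
{
  for (int i = 0; i < n; i++) local[idx[i]] += vals[i];
}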

In the longer term, I'm curious whether the MPICH developers, and
eventually the Forum, would be interested in providing better support for
higher-level comm libraries like this.