[mpich-discuss] Affinity with MPICH_ASYNC_PROGRESS

Jed Brown jedbrown at mcs.anl.gov
Sat Feb 23 19:10:22 CST 2013


On Sat, Feb 23, 2013 at 7:01 PM, Jeff Hammond <jhammond at alcf.anl.gov> wrote:

> >> For example, on an 8-core node, I was hoping to get async progress on
> >> 7 processes by pinning 7 comm threads to the 8th core.
> >
> > Did this work at all?
>
> What is your definition of work?


Does it make better async progress than the default?


> As far as I know, there is no API in MPICH for controlling thread
> affinity.  The right way to improve this code would be to move it
> inside of Nemesis and then add support for hwloc for comm threads.  I
> assume you can move the parent processes around such that your 7
> comm-intensive procs are closer to the NIC, though.  In any case, you
> should look at hwloc.
>

I'm thinking about how to interact with other threads _because_ I'm writing
hwloc-based affinity management now. The behavior I think people will want
is for MPI to set the process affinity and then to tell my code to apply
some affinity policy that _restricts_ the process affinity to each thread.
I prefer using MPI to set process affinity because it's messier for software
at a higher level than MPI to "figure out" what other processes are running
on the same node and agree among themselves how to divvy up resources. If
there were an important configuration that could not be supported this way,
however, then I suppose I could do it.
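
To make that concrete, here is a rough sketch of the restriction I have in
mind (my own illustration with plain hwloc; the function name is made up and
"topo" is a topology that has already been init'd and loaded): query the
binding the process inherited from the launcher, then clamp each thread to
one PU inside it.

#include <hwloc.h>

/* Illustration only: clamp the calling thread to one PU chosen from the
 * process binding that the launcher/MPI already established.  Round-robins
 * over the PUs in that binding if there are more threads than PUs. */
static int restrict_thread_to_process_binding(hwloc_topology_t topo,
                                              int thread_id)
{
    hwloc_bitmap_t proc_set = hwloc_bitmap_alloc();
    hwloc_bitmap_t thr_set  = hwloc_bitmap_alloc();
    int err, pu, i, npus;

    err = hwloc_get_cpubind(topo, proc_set, HWLOC_CPUBIND_PROCESS);
    if (err) goto done;
    npus = hwloc_bitmap_weight(proc_set);
    if (npus <= 0) { err = -1; goto done; }

    pu = hwloc_bitmap_first(proc_set);
    for (i = 0; i < thread_id % npus; i++)
        pu = hwloc_bitmap_next(proc_set, pu);

    hwloc_bitmap_only(thr_set, pu);                /* single-PU set */
    err = hwloc_set_cpubind(topo, thr_set, HWLOC_CPUBIND_THREAD);

done:
    hwloc_bitmap_free(proc_set);
    hwloc_bitmap_free(thr_set);
    return err;
}

The point is that every per-thread set is a subset of whatever MPI handed
the process, so nothing above MPI needs to know how the node was carved up.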


> My diff w.r.t. the SVN trunk (I did all of this before the SVN->Git
> conversion) is below.  It is clearly a hack and I don't care.  It only
> works on Linux or other systems that support CPU_SET.  It does not
> work on my Mac, for example.
>
> I have not done very much experimenting with this code other than to
> verify that it works (as in "does not crash and gives the same result
> for cpi").  Eventually, I am going to see how it works with ARMCI-MPI.


Cool, I'll be curious.
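
Since Jeff's diff isn't quoted above, for anyone reading the archive: the
Linux-only mechanism in question is the CPU_SET macro family (presumably via
pthread_setaffinity_np or sched_setaffinity). A standalone sketch, not his
actual patch, of pinning the calling thread to one core:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Pin the calling thread to a single core with the glibc CPU_SET macros.
 * Linux-only, which is why this kind of hack does not work on a Mac. */
static int pin_self_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

In the 8-core scenario above, each of the 7 ranks' progress threads would
call something like pin_self_to_core(7) (a made-up helper) so that they all
share the leftover core.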