[mpich-discuss] Implementation of MPICH collectives

Jiri Simsa jsimsa at cs.cmu.edu
Fri Sep 13 07:53:36 CDT 2013


Pavan,

Thank you for your answer. That's precisely what I was looking for. Any
chance there is a way to force the intranode communication to use tcp?
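
In the meantime, here is what I plan to try, assuming I am reading the
nemesis settings correctly (the MPICH_NO_LOCAL environment variable is
supposed to make every process behave as though it were on a different
node, which should push intranode traffic onto the tcp netmod):

    # assumption: MPICH_NO_LOCAL=1 disables the shared-memory path, so
    # intranode traffic falls back to the configured netmod (tcp here)
    MPICH_NO_LOCAL=1 mpiexec -n 4 ./mpi_program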

--Jiri

> Within the node, it uses shared memory.  Outside the node, it depends on
> the netmod you configured with.  tcp is the default netmod.
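>
> For example (assuming a standard build), the netmod is fixed when MPICH
> is configured; the default device already gives you tcp:
>
>     # ch3 over nemesis with the tcp netmod (the default device config)
>     ./configure --with-device=ch3:nemesis:tcp
>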
>  -- Pavan
> On Sep 12, 2013, at 2:24 PM, Jiri Simsa wrote:
> > The high-order bit of my question is: What OS interface(s) does MPICH
> > use to transfer data from one MPI process to another?
> >
> >
> > On Thu, Sep 12, 2013 at 1:36 PM, Jiri Simsa <jsimsa at cs.cmu.edu> wrote:
> > Hello,
> >
> > I have been trying to understand how MPICH implements collective
> > operations. To do so, I have been reading the MPICH source code and
> > stepping through mpiexec executions.
> >
> > For the sake of this discussion, let's assume that all MPI processes are
> > executed on the same computer using: mpiexec -n <n> <mpi_program>
> >
> > This is my current abstract understanding of MPICH:
> >
> > - mpiexec spawns a hydra_pmi_proxy process, which in turn spawns <n>
> > instances of <mpi_program>
> > - the hydra_pmi_proxy process uses socket pairs to communicate with the
> > instances of <mpi_program> (a sketch of this pattern follows the list)
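> >
> > To pin down the socket-pair part, this is the pattern I have in mind (a
> > self-contained sketch, not MPICH code; the "cmd=init" string is only a
> > stand-in for the line-oriented PMI wire protocol):
> >
> >     #include <stdio.h>
> >     #include <sys/socket.h>
> >     #include <sys/wait.h>
> >     #include <unistd.h>
> >
> >     int main(void) {
> >         int fds[2];
> >         /* connected UNIX-domain socket pair: one end per process */
> >         if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0)
> >             return 1;
> >         if (fork() == 0) {         /* child stands in for <mpi_program> */
> >             close(fds[0]);
> >             const char msg[] = "cmd=init";
> >             (void)write(fds[1], msg, sizeof msg);
> >             return 0;
> >         }
> >         close(fds[1]);             /* parent stands in for hydra_pmi_proxy */
> >         char buf[64];
> >         if (read(fds[0], buf, sizeof buf) > 0)
> >             printf("proxy received: %s\n", buf);
> >         wait(NULL);
> >         return 0;
> >     }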
> >
> > I am not quite sure, though, what happens under the hood when a
> > collective operation, such as MPI_Allreduce, is executed. I have noticed
> > that instances of <mpi_program> create and listen on a socket in the course
> > of executing MPI_Allreduce, but I am not sure who connects to these sockets.
> > Any chance someone could describe the data flow inside MPICH when a
> > collective operation, such as MPI_Allreduce, is executed? Thanks!
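> >
> > For concreteness, the collective I am stepping through is nothing more
> > exotic than this minimal program:
> >
> >     #include <mpi.h>
> >     #include <stdio.h>
> >
> >     int main(int argc, char **argv) {
> >         MPI_Init(&argc, &argv);
> >         int rank, sum;
> >         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> >         /* every rank contributes its rank; all ranks receive the sum */
> >         MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
> >         printf("rank %d: sum = %d\n", rank, sum);
> >         MPI_Finalize();
> >         return 0;
> >     }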
> >
> > Best,
> >
> > --Jiri Simsa
> >
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji