[mpich-discuss] Implementation of MPICH collectives
Antonio J. Peña
apenya at mcs.anl.gov
Fri Sep 13 07:56:36 CDT 2013
You can set the MPIR_PARAM_CH3_NO_LOCAL environment variable to 1.
Refer to the README.envvar file.
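For example, assuming a bash-like shell and a placeholder executable called ./my_mpi_program, a run with the shared-memory path disabled would look something like:

    export MPIR_PARAM_CH3_NO_LOCAL=1
    mpiexec -n 4 ./my_mpi_program

    # or, with Hydra's mpiexec, on the command line itself:
    mpiexec -genv MPIR_PARAM_CH3_NO_LOCAL 1 -n 4 ./my_mpi_program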
On Friday, September 13, 2013 08:53:36 AM Jiri Simsa wrote:
Thank you for your answer. That's precisely what I was looking for. Any
chance there is a way to force the intranode communication to use tcp?
Within the node, it uses shared memory. Outside the node, it depends on
the netmod you configured with. tcp is the default netmod.
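The netmod itself is chosen when MPICH is built; for instance, something along these lines selects the tcp netmod explicitly (it is also what you get by default):

    ./configure --with-device=ch3:nemesis:tcp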
On Sep 12, 2013, at 2:24 PM, Jiri Simsa wrote:
> The high-order bit of my question is: What OS interface(s) does MPICH
> use to transfer data from one MPI process to another?
>
> On Thu, Sep 12, 2013 at 1:36 PM, Jiri Simsa <jsimsa at cs.cmu.edu> wrote:
>> Hello,
>>
>> I have been trying to understand how MPICH implements collective
>> operations. To do so, I have been reading the MPICH source code and
>> stepping through mpiexec executions.
>>
>> For the sake of this discussion, let's assume that all MPI processes
>> are executed on the same computer using: mpiexec -n <n> <mpi_program>
>>
>> This is my current abstract understanding of MPICH:
>> - mpiexec spawns a hydra_pmi_proxy process, which in turn spawns <n>
>>   instances of <mpi_program>
>> - the hydra_pmi_proxy process uses socket pairs to communicate with
>>   the instances of <mpi_program>
>>
>> I am not quite sure, though, what happens under the hood when a
>> collective operation, such as MPI_Allreduce, is executed. I have
>> noticed that instances of <mpi_program> create and listen on a socket
>> in the course of executing MPI_Allreduce, but I am not sure who
>> connects to these sockets. Any chance someone could describe the data
>> flow inside of MPICH when a collective operation, such as
>> MPI_Allreduce, is executed? Thanks!
>>
>> Best,
>>
>> --Jiri Simsa
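For concreteness, a minimal program exercising the collective discussed above could look like the sketch below (file and program names are placeholders); built with mpicc and launched with mpiexec -n <n>, it reproduces the single-node scenario from the question:

    /* allreduce.c -- minimal MPI_Allreduce example (illustrative sketch only).
     * Build: mpicc allreduce.c -o allreduce
     * Run:   mpiexec -n 4 ./allreduce
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, sum;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each rank contributes its rank number; every rank gets back the sum. */
        MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
        printf("rank %d: sum of ranks = %d\n", rank, sum);

        MPI_Finalize();
        return 0;
    }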