[mpich-discuss] Implementation of MPICH collectives
Antonio J. Peña
apenya at mcs.anl.gov
Fri Sep 13 07:56:36 CDT 2013
You can set the MPIR_PARAM_CH3_NO_LOCAL environment variable to 1.
Refer to the README.envvar file.
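For example, with the default Hydra process manager you can either export the variable before launching or pass it explicitly with -genv (./allreduce below is just a placeholder for your MPI program):

    MPIR_PARAM_CH3_NO_LOCAL=1 mpiexec -n 4 ./allreduce
    mpiexec -genv MPIR_PARAM_CH3_NO_LOCAL 1 -n 4 ./allreduce

With that set, ranks on the same node should go through the configured netmod (tcp by default) rather than shared memory.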
Antonio
On Friday, September 13, 2013 08:53:36 AM Jiri Simsa wrote:
Pavan,
Thank you for your answer. That's precisely what I was looking for. Any
chance there is a way to force the intranode communication to use tcp?
--Jiri
Within the node, it uses shared memory. Outside the node, it depends on
the netmod you configured with. tcp is the default netmod.
-- Pavan
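(As a rough sketch of where the netmod comes from: it is selected when MPICH is configured, e.g.

    ./configure --with-device=ch3:nemesis:tcp
    make && make install

though the exact device/netmod strings depend on your MPICH version. Within a node, the nemesis channel keeps using shared memory unless local communication is explicitly disabled.)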
On Sep 12, 2013, at 2:24 PM, Jiri Simsa wrote:
The high-order bit of my question is: What OS interface(s) does MPICH use to transfer data from one MPI process to another?

On Thu, Sep 12, 2013 at 1:36 PM, Jiri Simsa <jsimsa at cs.cmu.edu> wrote:

Hello,

I have been trying to understand how MPICH implements collective operations. To do so, I have been reading the MPICH source code and stepping through mpiexec executions.

For the sake of this discussion, let's assume that all MPI processes are executed on the same computer using: mpiexec -n <n> <mpi_program>

This is my current abstract understanding of MPICH:

- mpiexec spawns a hydra_pmi_proxy process, which in turn spawns <n> instances of <mpi_program>
- the hydra_pmi_proxy process uses socket pairs to communicate with the instances of <mpi_program>

I am not quite sure, though, what happens under the hood when a collective operation, such as MPI_Allreduce, is executed. I have noticed that instances of <mpi_program> create and listen on a socket in the course of executing MPI_Allreduce, but I am not sure who connects to these sockets. Any chance someone could describe the data flow inside MPICH when a collective operation, such as MPI_Allreduce, is executed? Thanks!

Best,

--Jiri Simsa
--Pavan Balaji
http://www.mcs.anl.gov/~balaji
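For anyone who wants to retrace the experiment Jiri describes, a minimal MPI_Allreduce program along these lines (a sketch, not part of the original exchange) is enough to step through under mpiexec:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, sum = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank contributes its rank number; MPI_Allreduce combines the
         * values across all processes and returns the result to every rank. */
        MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        printf("rank %d of %d: sum of ranks = %d\n", rank, size, sum);

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched as mpiexec -n <n> ./allreduce, all ranks land on one node as in the setup above; watching the processes with a debugger or strace shows whether the data moves over shared memory or over the netmod's sockets.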