I don't see where you measure MPI_Allreduce. As it stands, you are basically only measuring some random numbers across processes, not the collective itself.
See osu_allreduce in the OSU micro-benchmarks (http://mvapich.cse.ohio-state.edu/benchmarks/) for an MPI_Allreduce benchmark.

--Junchao Zhang
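For comparison, the usual pattern for timing the collective itself looks roughly like the sketch below. This is only an illustrative sketch, not the OSU code; the message size of 2^20 doubles, the warm-up/iteration counts, and the use of MPI_SUM are arbitrary choices.

    // Minimal sketch of timing MPI_Allreduce itself (illustrative only).
    #include <mpi.h>
    #include <cstdio>
    #include <vector>

    int main(int argc, char** argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        const int count = 1 << 20;            // doubles per reduction (arbitrary)
        const int warmup = 10, iters = 100;   // arbitrary iteration counts
        std::vector<double> sendbuf(count, 1.0), recvbuf(count, 0.0);

        // Warm-up so connection setup is not included in the timing.
        for (int i = 0; i < warmup; ++i)
            MPI_Allreduce(sendbuf.data(), recvbuf.data(), count, MPI_DOUBLE,
                          MPI_SUM, MPI_COMM_WORLD);

        // Time only the collective: synchronize first, then loop.
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; ++i)
            MPI_Allreduce(sendbuf.data(), recvbuf.data(), count, MPI_DOUBLE,
                          MPI_SUM, MPI_COMM_WORLD);
        double local = (MPI_Wtime() - t0) / iters;

        // Report the slowest rank's average time per call.
        double tmax = 0.0;
        MPI_Reduce(&local, &tmax, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
        if (rank == 0)
            std::printf("%d procs: %.6f s per MPI_Allreduce (%d doubles)\n",
                        nprocs, tmax, count);

        MPI_Finalize();
        return 0;
    }

Running such a program with, e.g., mpiexec -n 2, 4, 8, ... gives a per-call Allreduce time that can be compared directly across MPI implementations.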

On Mon, May 4, 2015 at 2:35 AM, David Froger <david.froger.ml@mailoo.org> wrote:

> Hello,
>
> I've written a little MPI_Allreduce benchmark [0].
>
> The results with MPICH 1.4.1 [1] and Open MPI 1.8.4 [2] (one column for the
> number of processes and one column for the corresponding wall-clock time)
> show that the wall-clock time is roughly halved when the number of processes
> is doubled.
>
> But with MPICH 3.1.4, the wall-clock time increases for 7, 8 or more
> processes [3].
>
> In my real code, and with all three of the above MPI implementations, I
> observe the same problem for 7, 8 or more processes, while I expect my code
> to be scalable to at least 8 or 16 processes.
>
> So I'm trying to understand what could be happening with the little benchmark
> under MPICH 3.1.4.
>
> Thanks for reading.
>
> Best regards,
> David
>
> [0] https://github.com/dfroger/issue/blob/8b8bdd8e4b2b5e265c25fc2ba7077f6a108bb34a/mpi/bench_mpi.cxx
> [1] https://github.com/dfroger/issue/blob/8b8bdd8e4b2b5e265c25fc2ba7077f6a108bb34a/mpi/carla/conda-default.mpich2.1.4.1p1.txt
> [2] https://github.com/dfroger/issue/blob/8b8bdd8e4b2b5e265c25fc2ba7077f6a108bb34a/mpi/carla/conda-mpi4py-channel.openmpi.1.8.4.txt
> [3] https://github.com/dfroger/issue/blob/8b8bdd8e4b2b5e265c25fc2ba7077f6a108bb34a/mpi/carla/conda-mpi4py-channel.mpich.3.1.4.txt