[mpich-discuss] Why MPI_Waitsome only returns one request each time

Junchao Zhang jczhang at mcs.anl.gov
Sun Oct 19 21:39:20 CDT 2014


Try running it on another machine; you may get different results. My
configuration is:
$ mpichversion
MPICH Version:     3.1.3
MPICH Release date: unreleased development copy
MPICH Device:     ch3:nemesis
MPICH configure: --enable-romio --enable-nemesis-dbg-localoddeven
--enable-g=all --enable-fast=O0 --enable-fortran=f77,fc CC=gcc CXX=g++
F77=gfortran FC=gfortran
MPICH CC: gcc    -g -O0
MPICH CXX: g++   -g -O0
MPICH F77: gfortran   -g -O0
MPICH FC: gfortran   -g -O0
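
By the way, MPI_Waitsome blocks until at least one request has finished and
then returns every request that has completed at that point, so how the
completions are grouped is a matter of timing. Since MPI_Waitsome sets each
completed request to MPI_REQUEST_NULL and reports outcount = MPI_UNDEFINED
once every entry is inactive, the loop can also stop on that instead of
counting to 10. A small sketch (untested) that would replace the for(;;)
loop in your program:

=====
    for (;;) {
        MPI_Waitsome(10, req, &outcount, array_of_indices, stat);
        if (outcount == MPI_UNDEFINED)   /* all 10 requests already finished */
            break;
        printf("rank %d: %d completed in this call\n", rank, outcount);
    }
=====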

--Junchao Zhang

On Sun, Oct 19, 2014 at 9:23 PM, myself <chcdlf at 126.com> wrote:

>
> I ran the program several times and the results are the same as before. Could
> you tell me your MPICH version and build options? My MPICH version is 3.1.3,
> built as follows:
>
> $ mpirun --version
> HYDRA build details:
>     Version:                                 3.1.3
>     Release Date:                            Wed Oct  8 09:37:19 CDT 2014
>     CC:                              gcc  -fPIC
>     CXX:                             g++
>     F77:                             gfortran
>     F90:                             gfortran
>     Configure options:                       '--disable-option-checking'
> '--prefix=/usr/local/mpich3' '--with-device=ch3:nemesis' 'CFLAGS=-fPIC -O2'
> '--disable-fortran' '--cache-file=/dev/null' '--srcdir=.' 'CC=gcc'
> 'LDFLAGS= ' 'LIBS=-lpthread ' 'CPPFLAGS=
> -I/home/valder/Downloads/Software/mpi/mpich-3.1.3/src/mpl/include
> -I/home/valder/Downloads/Software/mpi/mpich-3.1.3/src/mpl/include
> -I/home/valder/Downloads/Software/mpi/mpich-3.1.3/src/openpa/src
> -I/home/valder/Downloads/Software/mpi/mpich-3.1.3/src/openpa/src
> -D_REENTRANT
> -I/home/valder/Downloads/Software/mpi/mpich-3.1.3/src/mpi/romio/include'
>     Process Manager:                         pmi
>     Launchers available:                     ssh rsh fork slurm ll lsf sge
> manual persist
>     Topology libraries available:            hwloc
>     Resource management kernels available:   user slurm ll lsf sge pbs
> cobalt
>     Checkpointing libraries available:
>     Demux engines available:                 poll select
>
> The whole program is:
> =========
>
> #include "mpi.h"
> #include <stdio.h>
> #include <stdlib.h>
> #include <sys/time.h>
>
> typedef unsigned char byte;
>
> int main(int argc, char *argv[]){
>     int rank,size;
>     byte buf[10][10];
>     MPI_Request req[10];
>     MPI_Status stat[10];
>     int outcount;
>     int array_of_indices[10];
>     int i;
>     int complete = 0;
>
>     MPI_Init(&argc, &argv);
>     MPI_Comm_size(MPI_COMM_WORLD, &size);
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>
>     if(rank==0) {
>         for(i =0; i< 10;i++){
>             MPI_Isend(buf[i], 10, MPI_BYTE, 1, 123, MPI_COMM_WORLD,
> &req[i]);
>         }
>         printf("post send\n");
>     }else {
>         for(i=0; i< 10;i++){
>             MPI_Irecv(buf[i], 10, MPI_BYTE, 0, 123, MPI_COMM_WORLD,
> &req[i]);
>         }
>         printf("post recv\n");
>     }
>
>     for (;;){
>         MPI_Waitsome(10, req, &outcount, array_of_indices, stat);
>         complete += outcount;
>         printf("rank %d %d index %d\n", rank, outcount,
> array_of_indices[0]);
>         if(complete == 10)
>             break;
>     }
>     printf("rank %d complete\n", rank);
>     MPI_Finalize();
> }
> =========
>
> I compile it as
> $ mpicc -o tests testsome.c
>
> and run as
> $ mpirun -n 2 ./tests
>
>
> At 2014-10-20 09:02:39, "Junchao Zhang" <jczhang at mcs.anl.gov> wrote:
>
> I think it is timing-dependent (effectively random). For example, I ran
> your code and its output was:
>
> post recv
> post send
> rank 0 10 index 0
> rank 0 complete
> rank 1 1 index 0
> rank 1 9 index 1
> rank 1 complete
>
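> How many requests complete per call simply depends on how many incoming
> messages MPICH has been able to match against the posted receives by the
> time MPI_Waitsome polls. As a hypothetical experiment (just a sketch, I
> have not run it), pausing rank 1 between posting the receives and the
> first MPI_Waitsome may mean that several messages have already arrived by
> the first poll, so they get reported together; nothing about the split is
> guaranteed, though:
>
> /* add #include <unistd.h> at the top of the file for usleep() */
>     }else {
>         for(i=0; i< 10;i++){
>             MPI_Irecv(buf[i], 10, MPI_BYTE, 0, 123, MPI_COMM_WORLD, &req[i]);
>         }
>         printf("post recv\n");
>         usleep(100000);   /* 0.1 s pause before entering the Waitsome loop */
>     }
>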
> --Junchao Zhang
>
> On Sun, Oct 19, 2014 at 9:48 AM, myself <chcdlf at 126.com> wrote:
>
>> Because I want to understand the behavior of MPI_Waitsome, I changed the
>> program as follows. MPI_Waitsome completes all 10 Isend requests in a single
>> call, but only 1 Irecv request per call. Why doesn't Waitsome return more
>> than one completed receive at a time, as I expected? (A polling variant
>> using MPI_Testsome is sketched after the output below.)
>> =====
>>      MPI_Init(&argc, &argv);
>>      MPI_Comm_size(MPI_COMM_WORLD, &size);
>>      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>
>>      if(rank==0) {
>>          for(i =0; i< 10;i++){
>>              MPI_Isend(buf[i], 10, MPI_BYTE, 1, 123, MPI_COMM_WORLD,
>> &req[i]);
>>          }
>>          printf("post send\n");
>>      }else {
>>          for(i=0; i< 10;i++){
>>              MPI_Irecv(buf[i], 10, MPI_BYTE, 0, 123, MPI_COMM_WORLD,
>> &req[i]);
>>          }
>>          printf("post recv\n");
>>      }
>>
>>      for (;;){
>>          MPI_Waitsome(10, req, &outcount, array_of_indices, stat);
>>          complete += outcount;
>>          printf("rank %d %d index %d\n", rank, outcount,
>> array_of_indices[0]);
>>          if(complete == 10)
>>              break;
>>      }
>>      printf("rank %d complete\n", rank);
>>      MPI_Finalize();
>> =====
>> $ mpirun -n 2 ./tests
>> post send
>> post recv
>> rank 0 10 index 0
>> rank 0 complete
>> rank 1 1 index 0
>> rank 1 1 index 1
>> rank 1 1 index 2
>> rank 1 1 index 3
>> rank 1 1 index 4
>> rank 1 1 index 5
>> rank 1 1 index 6
>> rank 1 1 index 7
>> rank 1 1 index 8
>> rank 1 1 index 9
>> rank 1 complete
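>>
>> For comparison, a polling variant I have not tried: MPI_Testsome takes the
>> same arguments but returns immediately, with outcount == 0 while nothing
>> has completed yet and outcount == MPI_UNDEFINED once every request in the
>> array is inactive. A sketch (it busy-polls, so only for experimenting):
>> =====
>>      for (;;){
>>          MPI_Testsome(10, req, &outcount, array_of_indices, stat);
>>          if (outcount == MPI_UNDEFINED)   /* all 10 requests are done */
>>              break;
>>          if (outcount > 0)
>>              printf("rank %d %d completed\n", rank, outcount);
>>      }
>> =====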
>>
>>
>>
>>
>>
>> At 2014-10-19 22:36:25, "Bland, Wesley B." <wbland at anl.gov> wrote:
>>
>> You need to move your waitany outside of the if statement so both ranks
>> execute it.
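>>
>> For example, one way to do that restructuring (just a sketch, using a single
>> MPI_Waitall for simplicity, with the variables from the posted program):
>>
>>      if(rank==0) {
>>          for(i=0; i< 10;i++)
>>              MPI_Isend(buf[i], 10, MPI_BYTE, 1, 123, MPI_COMM_WORLD, &req[i]);
>>      }else {
>>          for(i=0; i< 10;i++)
>>              MPI_Irecv(buf[i], 10, MPI_BYTE, 0, 123, MPI_COMM_WORLD, &req[i]);
>>      }
>>      MPI_Waitall(10, req, stat);   /* reached by both ranks */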
>>
>>
>>
>> On Oct 19, 2014, at 9:33 AM, myself <chcdlf at 126.com> wrote:
>>
>>   Here is my test program
>>
>> ======
>>
>>   #include "mpi.h"
>>  #include <stdio.h>
>>  #include <stdlib.h>
>>  #include <sys/time.h>
>>
>>   typedef unsigned char byte;
>>
>>  int main(int argc, char *argv[]){
>>      MPI_Init(&argc, &argv);
>>      int rank,size;
>>      MPI_Comm_size(MPI_COMM_WORLD, &size);
>>      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>      byte buf[10][10];
>>      MPI_Request req[10];
>>      MPI_Status stat[10];
>>      int outcount;
>>      int array_of_indices[10];
>>      int i;
>>
>>      if(rank==0) {
>>          for(i =0; i< 10;i++){
>>              MPI_Isend(buf[i], 10, MPI_BYTE, 1, 123, MPI_COMM_WORLD,
>> &req[i]);
>>          }
>>          printf("post send\n");
>>          MPI_Waitall(10, req, stat);
>>          printf("send over\n");
>>      }else {
>>          for(i=0; i< 10;i++){
>>              MPI_Irecv(buf[i], 10, MPI_BYTE, 0, 123, MPI_COMM_WORLD,
>> &req[i]);
>>          }
>>
>>          printf("post recv\n");
>>          for (i=0;i<10;i++){
>>              MPI_Waitsome(10, req, &outcount, array_of_indices, stat);
>>              printf("%d index %d\n", outcount, array_of_indices[0]);
>>          }
>>      }
>>      MPI_Finalize();
>>  }
>>
>>  ======
>> I get the following result:
>> ======
>>  $ mpirun -n 2 ./tests
>> post send
>> send over
>> post recv
>> 1 index 0
>> 1 index 1
>> 1 index 2
>> 1 index 3
>> 1 index 4
>> 1 index 5
>> 1 index 6
>> 1 index 7
>> 1 index 8
>> 1 index 9
>>
>>  ======
>>
>> I think I should get several completed receives in one call, not just one
>> at a time.
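>>
>> (The printf above only shows array_of_indices[0], so if outcount were ever
>> larger than 1 the remaining indices would be hidden. A sketch of a receive
>> loop that reports every index returned by each call; it assumes an extra
>> int j and int done are declared alongside the other locals:)
>>
>>      done = 0;
>>      while (done < 10) {
>>          MPI_Waitsome(10, req, &outcount, array_of_indices, stat);
>>          done += outcount;
>>          for (j = 0; j < outcount; j++)
>>              printf("outcount %d index %d\n", outcount, array_of_indices[j]);
>>      }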
>>
>>
>>
>>
>>
>
>
>
>
>