[mpich-discuss] MPI Reduce with MPI_IN_PLACE fails with non-0 root rank for message sizes over 256 with MPI version 4 and after

Solomonik, Edgar solomon2 at illinois.edu
Thu Jun 8 15:36:56 CDT 2023


Hello,

Our library's autobuild (CTF, which uses MPI extensively and in relatively sophisticated ways) started failing on multiple architectures after GitHub workflows moved to later OS versions (and therefore later MPI versions). I believe I have narrowed the issue down to an MPI bug triggered by very basic usage of MPI_Reduce. The following test code runs into a segmentation fault inside MPI when run with 2 MPI processes with the latest Ubuntu MPI build and MPI 4.0. It works for smaller message sizes (n) or if the root is rank 0. The usage of MPI_IN_PLACE adheres to the MPI standard.

Best,
Edgar Solomonik

#include <mpi.h>
#include <cstdint>  // int64_t
#include <cstdlib>  // malloc, free

int main(int argc, char ** argv){
  int64_t n = 257; // fails for message sizes over 256; n <= 256 works

  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  double * A = (double*)malloc(sizeof(double)*n);
  for (int i=0; i<n; i++){
    A[i] = (double)i;
  }

  if (rank == 1){
    // Root (rank 1) reduces in place: A serves as both input and result buffer.
    MPI_Reduce(MPI_IN_PLACE, A, n, MPI_DOUBLE, MPI_SUM, 1, MPI_COMM_WORLD);
  } else {
    // Non-root ranks: the receive buffer is not significant, so NULL is allowed.
    MPI_Reduce(A, NULL, n, MPI_DOUBLE, MPI_SUM, 1, MPI_COMM_WORLD);
  }

  free(A);

  MPI_Finalize();

  return 0;
}
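
For reference, one way to build and run the reproducer with two processes, assuming the MPICH compiler and launcher wrappers (mpicxx, mpiexec) are on the PATH; the source and binary names here are arbitrary:

  mpicxx reduce_inplace.cpp -o reduce_inplace
  mpiexec -n 2 ./reduce_inplace

With n = 257 and root rank 1 this crashes; with n <= 256 or root rank 0 it completes normally.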

