[mpich-discuss] Fwd: Re: Using MPI_Waitany for MPI_File Requests
Nils-Arne Dreier
n.dreier at uni-muenster.de
Fri Sep 29 05:15:58 CDT 2017
Hi Jeff,
Thanks for your answer!
I changed to MPI_Waitsome. While doing so, I ran into another issue:
MPI_Waitsome raises a segmentation fault, depending on the order in
which the requests are passed, when different types of requests are
mixed (file and point-to-point).
Here is the code:
#include <iostream>
#include <mpi.h>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Request recv_req;
    int int_buf = 0;
    MPI_Irecv(&int_buf, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD,
              &recv_req);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "output", MPI_MODE_CREATE | MPI_MODE_RDWR,
                  MPI_INFO_NULL, &fh);

    MPI_Request file_req;
    double data = 42;
    MPI_File_iwrite_at(fh, rank*sizeof(double), &data, 1, MPI_DOUBLE,
                       &file_req);

    //MPI_Request requests[2] = {recv_req, file_req}; // A
    MPI_Request requests[2] = {file_req, recv_req};   // B

    std::cout << rank << ":\twaiting..." << std::endl;
    int completed = 0;
    int indices[2];
    MPI_Status status[2];
    MPI_Waitsome(2, requests, &completed, indices, status);
    // int index = 0;
    // MPI_Waitany(2, requests, &index, MPI_STATUS_IGNORE);
    std::cout << rank << ":\tdone" << std::endl;

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
Exchanging line B for line A does not raise the fault.
Thanks,
Nils
On 29.09.2017 05:36, Jeff Hammond wrote:
> MPI_Waitsome also works, and I assume you can use that almost anywhere
> that MPI_Waitany is used.
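>
> A drop-in replacement could look roughly like the sketch below
> (untested; the helper name waitany_via_waitsome is just for
> illustration). One caveat: MPI_Waitsome may complete several requests
> in a single call, so a complete replacement would have to report every
> returned index, not only the first.
>
>     /* Sketch: emulate MPI_Waitany with MPI_Waitsome, which does poke
>      * generalized-request progress. Reports one completed index. */
>     int waitany_via_waitsome(int count, MPI_Request requests[], int *index)
>     {
>         int outcount;
>         int indices[count];  /* C99 VLA; assumes count > 0 */
>         int err = MPI_Waitsome(count, requests, &outcount, indices,
>                                MPI_STATUSES_IGNORE);
>         /* outcount is MPI_UNDEFINED if no request was active. */
>         *index = (outcount != MPI_UNDEFINED && outcount > 0)
>                      ? indices[0] : MPI_UNDEFINED;
>         return err;
>     }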
>
> I didn't determine for sure if it is the cause, but MPI_Waitall,
> MPI_Wait, and MPI_Waitsome all invoke MPI_Grequest progress, whereas
> MPI_Waitany does not, and I recall that ROMIO uses generalized requests.
>
> $ grep MPIR_Grequest ../src/mpi/pt2pt/wait*
> ../src/mpi/pt2pt/waitall.c:  mpi_errno = MPIR_Grequest_waitall(count, request_ptrs);
> ../src/mpi/pt2pt/wait.c:     mpi_errno = MPIR_Grequest_progress_poke(1, &request_ptr, status);
> ../src/mpi/pt2pt/waitsome.c: mpi_errno = MPIR_Grequest_progress_poke(incount,
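>
> (Background, for what it's worth: a generalized request only completes
> once MPI_Grequest_complete is called on it, which is presumably what
> the progress poke above gives ROMIO a chance to do. A minimal sketch
> of the generalized-request API, with no-op callbacks for brevity:)
>
>     /* Trivial callbacks; a real query_fn would fill in *status. */
>     int query_fn(void *extra_state, MPI_Status *status) { return MPI_SUCCESS; }
>     int free_fn(void *extra_state) { return MPI_SUCCESS; }
>     int cancel_fn(void *extra_state, int complete) { return MPI_SUCCESS; }
>
>     MPI_Request req;
>     MPI_Grequest_start(query_fn, free_fn, cancel_fn, NULL, &req);
>     /* ... the asynchronous work happens elsewhere ... */
>     MPI_Grequest_complete(req);  /* only now can a wait on req return */
>     MPI_Wait(&req, MPI_STATUS_IGNORE);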
>
>
> Jeff
>
> On Thu, Sep 28, 2017 at 9:32 AM, Jeff Hammond <jeff.science at gmail.com> wrote:
>
>
> There appears to be a bug in Waitany here. It works with 2x Wait
> and Waitall.
>
> Jeff
>
> jrhammon at klondike:~/Work/INTEL/BUGS$ mpicc mpiio.c && ./a.out
> blocking
> done
>
> jrhammon at klondike:~/Work/INTEL/BUGS$ mpicc -DNONBLOCKING -DWAITALL mpiio.c && ./a.out
> nonblocking
> waitall
> done
>
> jrhammon at klondike:~/Work/INTEL/BUGS$ mpicc -DNONBLOCKING -DWAIT mpiio.c && ./a.out
> nonblocking
> wait
> done
> wait
> done
>
> jrhammon at klondike:~/Work/INTEL/BUGS$ mpicc -DNONBLOCKING -DWAITANY mpiio.c && ./a.out
> nonblocking
> waitany
> ^C
>
> jrhammon at klondike:~/Work/INTEL/BUGS$ cat mpiio.c
>
> #include <stdio.h>
> #include <mpi.h>
>
> int main(int argc, char** argv)
> {
>     MPI_Init(&argc, &argv);
>
>     int rank;
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>
>     MPI_Request recv_req;
>     int int_buf;
>     MPI_Irecv(&int_buf, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD,
>               &recv_req);
>
>     MPI_File fh;
>     MPI_File_open(MPI_COMM_WORLD, "output", MPI_MODE_CREATE | MPI_MODE_RDWR,
>                   MPI_INFO_NULL, &fh);
>     MPI_File_set_view(fh, 2*sizeof(int)*rank, MPI_INT, MPI_INT,
>                       "native", MPI_INFO_NULL);
>
> #ifdef NONBLOCKING
>     printf("nonblocking\n");
>     MPI_Request requests[2] = {MPI_REQUEST_NULL, MPI_REQUEST_NULL};
>     MPI_File_iwrite(fh, &rank, 1, MPI_INT, &requests[0]);
>     MPI_File_iwrite(fh, &rank, 1, MPI_INT, &requests[1]);
> #if defined(WAITANY)
>     int index;
>     printf("waitany\n");
>     MPI_Waitany(2, requests, &index, MPI_STATUS_IGNORE);
>     MPI_Waitany(2, requests, &index, MPI_STATUS_IGNORE);
>     printf("done\n");
> #elif defined(WAIT)
>     printf("wait\n");
>     MPI_Wait(&requests[0], MPI_STATUS_IGNORE);
>     printf("done\n");
>     printf("wait\n");
>     MPI_Wait(&requests[1], MPI_STATUS_IGNORE);
>     printf("done\n");
> #elif defined(WAITALL)
>     printf("waitall\n");
>     MPI_Waitall(2, requests, MPI_STATUSES_IGNORE);
>     printf("done\n");
> #else
> #error Define WAITANY, WAIT, or WAITALL
> #endif
> #else
>     printf("blocking\n");
>     MPI_File_write(fh, &rank, 1, MPI_INT, MPI_STATUS_IGNORE);
>     MPI_File_write(fh, &rank, 1, MPI_INT, MPI_STATUS_IGNORE);
>     printf("done\n");
> #endif
>     MPI_File_close(&fh);
>
>     MPI_Finalize();
>
>     return 0;
> }
>
>
>
> On Thu, Sep 28, 2017 at 6:53 AM, Nils-Arne Dreier <n.dreier at uni-muenster.de> wrote:
>
> Dear MPICH community,
>
> I'm currently playing around with the File-IO interface of MPI. For
> some reason, the following code does not run to completion:
>
> #include <iostream>
> #include <mpi.h>
>
> int main(int argc, char** argv)
> {
>     MPI_Init(&argc, &argv);
>
>     int rank;
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>
>     MPI_Request recv_req;
>     int int_buf;
>     MPI_Irecv(&int_buf, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD,
>               &recv_req);
>
>     MPI_File fh;
>     MPI_File_open(MPI_COMM_WORLD, "output", MPI_MODE_CREATE | MPI_MODE_RDWR,
>                   MPI_INFO_NULL, &fh);
>     MPI_File_set_view(fh, 2*sizeof(int)*rank, MPI_INT, MPI_INT,
>                       "native", MPI_INFO_NULL);
>
>     MPI_Request file_req;
>     MPI_File_iwrite(fh, &rank, 1, MPI_INT, &file_req);
>
>     MPI_Request file_req2;
>     MPI_File_iwrite(fh, &rank, 1, MPI_INT, &file_req2);
>
>     MPI_Request requests[2] = {file_req, file_req2};
>     int index;
>     std::cout << rank << ":\twaiting..." << std::endl;
>     MPI_Waitany(2, requests, &index, MPI_STATUS_IGNORE);
>     //MPI_Wait(&file_req, MPI_STATUS_IGNORE);
>     std::cout << rank << ":\tdone" << std::endl;
>     MPI_File_close(&fh);
>
>     MPI_Finalize();
>
>     return 0;
> }
>
> MPI_Waitany does not return even if I exchange file_req2 with
> recv_req.
>
> Is it possible to use MPI_Waitany with File-IO calls? Did I
> overlook something in the MPI standard?
>
> Thank you for your help.
>
> Thanks,
> Nils
>
> PS: I'm running Ubuntu 16.04 LTS, gcc-4.9 (using -std=c++14)
> and MPICH
> 3.2. This is the output of mpirun --version:
> HYDRA build details:
>     Version:      3.2
>     Release Date: Wed Nov 11 22:06:48 CST 2015
>     CC:  gcc -Wl,-Bsymbolic-functions -Wl,-z,relro
>     CXX: g++ -Wl,-Bsymbolic-functions -Wl,-z,relro
>     F77: gfortran -Wl,-Bsymbolic-functions -Wl,-z,relro
>     F90: gfortran -Wl,-Bsymbolic-functions -Wl,-z,relro
> Configure options:
> '--disable-option-checking'
> '--prefix=/usr' '--build=x86_64-linux-gnu'
> '--includedir=${prefix}/include' '--mandir=${prefix}/share/man'
> '--infodir=${prefix}/share/info' '--sysconfdir=/etc'
> '--localstatedir=/var' '--disable-silent-rules'
> '--libdir=${prefix}/lib/x86_64-linux-gnu'
> '--libexecdir=${prefix}/lib/x86_64-linux-gnu'
> '--disable-maintainer-mode' '--disable-dependency-tracking'
> '--enable-shared' '--enable-fortran=all' '--disable-rpath'
> '--disable-wrapper-rpath' '--sysconfdir=/etc/mpich'
> '--libdir=/usr/lib/x86_64-linux-gnu'
> '--includedir=/usr/include/mpich'
> '--docdir=/usr/share/doc/mpich' '--with-hwloc-prefix=system'
> '--enable-checkpointing' '--with-hydra-ckpointlib=blcr' 'CPPFLAGS=
> -Wdate-time -D_FORTIFY_SOURCE=2
> -I/build/mpich-jQtQ8p/mpich-3.2/src/mpl/include
> -I/build/mpich-jQtQ8p/mpich-3.2/src/mpl/include
> -I/build/mpich-jQtQ8p/mpich-3.2/src/openpa/src
> -I/build/mpich-jQtQ8p/mpich-3.2/src/openpa/src -D_REENTRANT
> -I/build/mpich-jQtQ8p/mpich-3.2/src/mpi/romio/include'
> 'CFLAGS= -g -O2
> -fstack-protector-strong -Wformat -Werror=format-security -O2'
> 'CXXFLAGS= -g -O2 -fstack-protector-strong -Wformat
> -Werror=format-security -O2' 'FFLAGS= -g -O2
> -fstack-protector-strong
> -O2' 'FCFLAGS= -g -O2 -fstack-protector-strong -O2'
> 'build_alias=x86_64-linux-gnu' 'MPICHLIB_CFLAGS=-g -O2
> -fstack-protector-strong -Wformat -Werror=format-security'
> 'MPICHLIB_CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2'
> 'MPICHLIB_CXXFLAGS=-g -O2 -fstack-protector-strong -Wformat
> -Werror=format-security' 'MPICHLIB_FFLAGS=-g -O2
> -fstack-protector-strong' 'MPICHLIB_FCFLAGS=-g -O2
> -fstack-protector-strong' 'LDFLAGS=-Wl,-Bsymbolic-functions
> -Wl,-z,relro' 'FC=gfortran' 'F77=gfortran' 'MPILIBNAME=mpich'
> '--cache-file=/dev/null' '--srcdir=.' 'CC=gcc' 'LIBS=-lpthread '
>     Process Manager:                       pmi
>     Launchers available:                   ssh rsh fork slurm ll lsf sge manual persist
>     Topology libraries available:          hwloc
>     Resource management kernels available: user slurm ll lsf sge pbs cobalt
>     Checkpointing libraries available:     blcr
>     Demux engines available:               poll select
>
>
> --
> Nils-Arne Dreier, M.Sc.
> Institute for Computational and Applied Mathematics,
> University of Münster, Orleans-Ring 10, D-48149 Münster
> Tel: +49 251 83-35147
>
>
>
>
>
>
> --
> Jeff Hammond
> jeff.science at gmail.com
> http://jeffhammond.github.io/
>
>
>
>
> --
> Jeff Hammond
> jeff.science at gmail.com
> http://jeffhammond.github.io/
>
>
--
Nils-Arne Dreier, M.Sc.
Institute for Computational and Applied Mathematics,
University of Münster, Orleans-Ring 10, D-48149 Münster
Tel: +49 251 83-35147
_______________________________________________
discuss mailing list discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss