[mpich-discuss] Fortran 2008 MPI Interface Bug

Zhou, Hui zhouh at anl.gov
Wed Nov 13 16:45:51 CST 2019


Hi Pedro Ricardo,

Thanks for reporting the issue. I have opened a GitHub issue, https://github.com/pmodels/mpich/issues/4170, for better tracking.

`mpi_f08` tries to use iso_c_binding as much as it can, and it has not received as much testing as `use mpi`. One of the reasons is that compiler support has been sporadic. May I ask which compiler (including its version) you were using?
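
For context on the mechanism: per the MPI-3.1 specification, the `mpi_f08` bindings declare choice buffers as assumed-type, assumed-rank dummies, which the compiler passes into C as TS 29113 descriptors (CFI_cdesc_t). A simplified sketch of the MPI_Reduce binding (paraphrased from the standard; not MPICH's actual internal code):

subroutine MPI_Reduce(sendbuf, recvbuf, count, datatype, op, root, comm, ierror)
    type(*), dimension(..), intent(in) :: sendbuf  ! assumed-type, assumed-rank;
    type(*), dimension(..) :: recvbuf              ! crosses into C as a CFI descriptor
    integer, intent(in) :: count, root
    type(MPI_Datatype), intent(in) :: datatype
    type(MPI_Op), intent(in) :: op
    type(MPI_Comm), intent(in) :: comm
    integer, optional, intent(out) :: ierror
end subroutine

Since CFI descriptors carry zero-based lower bounds for nonallocatable arguments, a binding layer that let those bounds leak back into the caller's array would match the 0-based indexing you report below; that is only a guess until we dig in, though.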

Anyway, we’ll look into it and hopefully get it fixed shortly.

—
Hui Zhou
Principal Software Development Specialist
MCS | ANL



On Nov 13, 2019, at 1:59 PM, Pedro Ricardo Souza via discuss <discuss at mpich.org> wrote:

Hi there. My name is Pedro Ricardo, and I'm currently a researcher at a Fluid Dynamics Laboratory in Brazil.
We have a CFD code with 12 continuous years of development, mostly in Fortran, and we've been using MPICH and Open MPI since the very beginning.

Recently we started to refactor the code to the Fortran 2008 standard, which includes using the newest MPI module for Fortran, "mpi_f08".

So far no errors had shown up with Open MPI and everything worked like a charm, but when we started compiling with MPICH, lots of errors appeared at run time.

I was able to isolate and reproduce the error in a simpler code. Whenever "use mpi_f08" is present, arrays passed to an MPI subroutine return with C-like indexing: if I allocate a standard Fortran array (which starts at index 1), after a call to some MPI routine the array comes back starting at index 0.
As you can imagine, every access to that array is wrong after that.
Funnily enough, if you put "use mpi" instead of "use mpi_f08", the indexes do not change.

The test is simple: allocate an array, print its bound indexes, call an MPI_Reduce routine, and print the array bounds again. I've also tested other routines, such as MPI_Isend, and the error happens there too.

program check_bounds
    ! Works: bounds are preserved
    ! use mpi
    ! Fails: bounds change after the MPI call
    use mpi_f08

    implicit none
    integer :: mpi_ierr, proc_id, n_proc
    real, allocatable, dimension(:) :: t1, t1_0

    call MPI_Init(mpi_ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, proc_id, mpi_ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, n_proc, mpi_ierr)

    ! Default allocation: both arrays should have bounds 1..5
    allocate(t1(5), t1_0(5))
    t1 = 1.0; t1_0 = 1.0

    if (proc_id == 0) then
        write(*,*) 'Array Configuration Before:'
        write(*,'(a6,2(1x,i2))') 'Low:', lbound(t1), lbound(t1_0)
        write(*,'(a6,2(1x,i2))') 'Upper:', ubound(t1), ubound(t1_0)
    end if

    ! Any routine taking a choice buffer triggers it; MPI_Isend behaves the same
    call MPI_Reduce(t1, t1_0, 5, MPI_REAL, MPI_SUM, 0, MPI_COMM_WORLD, mpi_ierr)

    ! With mpi_f08, the bounds now come back as 0..4 instead of 1..5
    if (proc_id == 0) then
        write(*,*) 'Array Configuration After:'
        write(*,'(a6,2(1x,i2))') 'Low:', lbound(t1), lbound(t1_0)
        write(*,'(a6,2(1x,i2))') 'Upper:', ubound(t1), ubound(t1_0)
    end if

    deallocate(t1, t1_0)
    call MPI_Finalize(mpi_ierr)

end program check_bounds
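
For clarity, an illustrative sketch of the output with "use mpi_f08" (assuming both arrays are affected, per the behavior described above; exact spacing may differ):

Array Configuration Before:
  Low:  1  1
Upper:  5  5
Array Configuration After:
  Low:  0  0
Upper:  4  4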


Thanks for the attention,
