[mpich-devel] issue with 2018 ISO_Fortran_binding.h
Gilles Gouaillardet
gilles at rist.or.jp
Thu Apr 4 19:52:11 CDT 2019
Hi Paul,
thanks for the quick reply !
Would you mind double checking the definition of CFI_is_contiguous() in
the current standard ?
From https://j3-fortran.org/doc/year/12/12-119r1.pdf (this is a *draft*,
not the final standard, which I do not have):
> 8.3.5.6 int CFI_is_contiguous ( const CFI_cdesc_t * dv );
> Description. Test contiguity of an array.
> Formal Parameter.
> dv shall be the address of a C descriptor describing an array. The
> base_addr member of the C descriptor shall not be a null pointer.
> Result Value. CFI_is_contiguous returns 1 if the array described is
> determined to be contiguous, and 0 otherwise.
>
If the draft has been approved as-is, it means CFI_is_contiguous() is
not supposed to return CFI_SUCCESS (which is 0) for a contiguous array,
but 1 (and the Intel folks got it right in the 2019 compilers).
I can certainly see an interest in copy-in for INTENT(IN) in order to
enforce correctness of the Fortran code
(assuming this is not prohibited by the standard) *as an option*.
Generally speaking, I see that as quite undesirable for performance
reasons. For example, in the context of MPI (the Message Passing
Interface), which I am primarily interested in, we can
double precision :: a(0:1025,0:1025)
call mpi_send(a(1:1024,1:1024), 1024*1024, MPI_DOUBLE_PRECISION, ...)
in order to send a subarray (and the first parameter of mpi_send() is
declared with INTENT(IN)).
That currently works with gcc 8, but this involves a temporary
(contiguous) array to be allocated/filled/freed at runtime.
The primary motivation for using ISO_Fortran_binding.h is to avoid the
need for a temporary array and to use the data "in place" instead
(MPI will internally use a temporary derived datatype in order to send
the non-contiguous subarray, which is much more efficient with respect
to performance and memory footprint).
As a side note, MPI also provides the mpi_isend() subroutine that
returns immediately
(and the app must basically call mpi_wait() in order to make sure the
message has been sent).
In the case of gcc 8, and copy-in in current gcc9, I can see a risk of
the temporary array being accessed *after* being freed when mpi_isend
returns.
No copy-in would avoid this scenario (but to be fully honest, I am not
even sure this use-case is correct with respect to the MPI and/or
Fortran standards)
Feel free to post any patches you are working on and I will be happy to
test them.
Thanks !
Gilles
On 4/4/2019 7:15 PM, Paul Richard Thomas wrote:
> Hi Gilles,
>
> Many thanks for your testing of ISO_Fortran_binding with gfortran.
>
> With respect to the base address values, the difference comes about
> because of the use of INTENT(IN). In order to enforce the "INness", I
> chose to copy the data. Any other INTENT gives the value that you
> expect:
>
> module cdesc
>   interface
>     subroutine cdesc_f08(buf, expected) BIND(C, name="cdesc_c")
>       implicit none
>       type(*), dimension(..), INTENT(INOUT) :: buf
>       integer(kind=8), INTENT(IN) :: expected
>     end subroutine cdesc_f08
>   end interface
> end module
>
> I will check the standard to see if this copy-in for INTENT(IN)
> violates any requirement of the standard. I would be happy to get
> feedback as to whether this is considered to be desirable or not.
>
> The CFI_is_contiguous test exposes one problem with ifort and one with gfortran:
> (i) According to F2018 18.5.4, "The macro CFI_SUCCESS shall be defined
> to be the integer constant 0." Thus, ifort is giving the wrong result.
> Using:
> #include <stdio.h>
> #include <stdlib.h>
> #include <ISO_Fortran_binding.h>
>
> void cdesc_c(CFI_cdesc_t* x, long *expected)
> {
>     printf("x->rank = %d, CFI_is_contiguous(x) = %d, "
>            "x->base_addr = %p, expected %p\n",
>            x->rank, CFI_is_contiguous(x) == CFI_SUCCESS, x->base_addr,
>            (void *)*expected);
> }
> should yield the same result for both, up to and including rank 2.
>
> (ii) The gfortran CFI_is_contiguous gives the wrong result for rank
> >= 3. This is a simple, stupid bug that I have already fixed and will
> commit asap.
>
> Many thanks again.
>
> Paul