[mpich-devel] supporting Fortran INTEGER when it's 64b

Jeff Hammond jeff.science at gmail.com
Tue May 19 18:22:06 CDT 2015


It would be useful to modify buildiface to generate bindings that fail
in a well-defined way when the Fortran count exceeds INT_MAX.  The
current interface may fail silently or abort with a message that
doesn't help a novice.
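
For concreteness, here is a minimal sketch of what a range-checked
wrapper could look like.  The check and the MPI_Abort error path are
my own illustration, not MPICH's actual error machinery:

#include <limits.h>   /* INT_MAX */
#include <stdio.h>

FORT_DLL_SPEC void FORT_CALL mpi_send_ ( void*v1, MPI_Fint *v2,
    MPI_Fint *v3, MPI_Fint *v4, MPI_Fint *v5, MPI_Fint *v6,
    MPI_Fint *ierr ){
    /* refuse counts that cannot be represented in a C int instead
       of silently truncating them */
    if (*v2 < 0 || *v2 > INT_MAX) {
        fprintf(stderr, "MPI_SEND: count %lld does not fit in a C int\n",
                (long long)(*v2));
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    *ierr = MPI_Send( v1, (int)*v2, (MPI_Datatype)(*v3), (int)*v4,
                      (int)*v5, (MPI_Comm)(*v6) );
}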

As I do not know Perl, I don't think I can contribute patches.  It's
possible that I could use Todd Gamblin's Python interposition
generator to do something, but I don't know that this would ever
replace the current setup.

Jeff

On Tue, May 19, 2015 at 4:09 PM, William Gropp <wgropp at illinois.edu> wrote:
> I know of no plans to fix the underlying problems.  buildiface can be told to generate code for the case where MPI_Fint is different from int, but this doesn’t address the use of the C interface from within the Fortran wrappers.
>
> Bill
>
> On May 20, 2015, at 3:43 AM, Jeff Hammond <jeff.science at gmail.com> wrote:
>
>> I'm not aware of any place in the standard where it says that
>> implementations do not have to support the full 64b range of a Fortran
>> INTEGER when the compiler ordains that INTEGER is that size.  MPICH is
>> at best dangerous when a Fortran INTEGER is larger than a C int.
>>
>> The tail of mpif_h/sendf.c shows this problem:
>>
>> /* Prototypes for the Fortran interfaces */
>> #include "fproto.h"
>> FORT_DLL_SPEC void FORT_CALL mpi_send_ ( void*v1, MPI_Fint *v2,
>>     MPI_Fint *v3, MPI_Fint *v4, MPI_Fint *v5, MPI_Fint *v6,
>>     MPI_Fint *ierr ){
>>     *ierr = MPI_Send( v1, (int)*v2, (MPI_Datatype)(*v3), (int)*v4,
>>                       (int)*v5, (MPI_Comm)(*v6) );
>> }
>>
>> The origin of the issue appears to be the unsafe cast generated by
>> mpif_h/buildiface:
>>
>> sub fint2int_inout_decl {
>>    my $count = $_[0];
>>    if ($within_fint) {
>>        print $OUTFD "    int l$count = (int)*v$count;\n";
>>    }
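>>
>> A range-checked version of the generated temporary might look like
>> this (a sketch only; returning MPI_ERR_COUNT in *ierr is my
>> illustration, not what buildiface emits or how MPICH reports errors):
>>
>>     int l2;
>>     if (*v2 < 0 || *v2 > INT_MAX) {
>>         /* fail in a well-defined way instead of truncating the
>>            64-bit Fortran INTEGER */
>>         *ierr = MPI_ERR_COUNT;
>>         return;
>>     }
>>     l2 = (int)*v2;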
>>
>> This is not just academic for me.  I know that unsafe down-casting
>> inside of MPI breaks multiple quantum chemistry applications.  And
>> while I wrote BigMPI in part to address this issue, users would be
>> much happier if MPICH and its derivatives were sufficient.
>>
>> Is there any plan to address this?  It appears to require lots of code
>> changes.  For example, one could add an internal MPIR_Send that takes
>> mpid_size_t (or similar) counts, call it from the Fortran wrapper
>> instead of MPI_Send, and map the C MPI_Send onto it as well.  Of
>> course MPID_Send would also have to be converted to mpid_size_t
>> counts, and so on across all the routines and down the stack.  A
>> sketch of that layering follows.
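>>
>> (MPIR_Send and mpid_size_t are the names proposed above; the
>> signatures below are my own sketch, not existing MPICH code.)
>>
>> #include <stdint.h>
>>
>> typedef int64_t mpid_size_t;   /* hypothetical wide count type */
>>
>> /* internal entry point that carries the full-width count */
>> int MPIR_Send(const void *buf, mpid_size_t count, MPI_Datatype datatype,
>>               int dest, int tag, MPI_Comm comm);
>>
>> /* C binding: widening an int count is always safe */
>> int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
>>              int dest, int tag, MPI_Comm comm)
>> {
>>     return MPIR_Send(buf, (mpid_size_t)count, datatype, dest, tag, comm);
>> }
>>
>> /* Fortran wrapper: pass the 64-bit INTEGER through without narrowing */
>> FORT_DLL_SPEC void FORT_CALL mpi_send_ ( void*v1, MPI_Fint *v2,
>>     MPI_Fint *v3, MPI_Fint *v4, MPI_Fint *v5, MPI_Fint *v6,
>>     MPI_Fint *ierr ){
>>     *ierr = MPIR_Send( v1, (mpid_size_t)*v2, (MPI_Datatype)(*v3),
>>                        (int)*v4, (int)*v5, (MPI_Comm)(*v6) );
>> }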
>>
>> Comments?
>>
>> Thanks,
>>
>> Jeff
>>
>> --
>> Jeff Hammond
>> jeff.science at gmail.com
>> http://jeffhammond.github.io/



-- 
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/

