[mpich-discuss] Erroneous MPI type tag warning

Jeff Hammond jeff.science at gmail.com
Mon Dec 23 12:14:27 CST 2013


On Mon, Dec 23, 2013 at 10:55 AM, Markus Geimer <m.geimer at fz-juelich.de> wrote:
> Hi,
>
>>   I guess the warning has nothing to do with padding.
>
> Agreed. The warning is due to the type_tag attributes defined in
> MPICH's 'mpi.h' header file.
>
>> For MPI_Reduce(in, out, 1, MPI_2INT, ...), the compiler tries to
>> match int, i.e., typeof(*in), with MPI_2INT.  However the layout of
>> MPI_2INT is defined (e.g., struct {int i1; int i2;} or int[2]), the
>> compiler fails at matching.  If you want to use MPI_2INT, then in and
>> out should be declared as type struct {int v1; int v2;}.

This is not what the MPI standard says.  MPI-3 Section 5.9.4 defines
MPI_2INT _as if_ created by the following call:

MPI_Type_contiguous(2, MPI_INT, MPI_2INT);

Therefore, the resulting type corresponds to "int[2]", not
"struct {int i1; int i2;}".

> Well, according to the C99 standard "Each non-bit-field member of a
> structure or union object is aligned in an *implementation-defined*
> manner appropriate to its type." (§6.7.2.1) It is therefore not
> guaranteed that the two ints in the struct are contiguous -- which
> means that, strictly speaking, int[2] is the only portable choice.
> (Although I'm not aware of any compiler that would insert padding
> between the two ints, i.e., in practice the struct should be fine,
> too.)

This is interesting, because it seems to suggest that either the MPI
standard is wrong in its definition of mixed pair types (e.g.
MPI_FLOAT_INT) or at least assumes that the compiler does no padding.

To be explicit, the MPI standard (again, MPI-3 Section 5.9.4) says the
following:

type[0] = MPI_FLOAT
type[1] = MPI_INT
disp[0] = 0
disp[1] = sizeof(float)
block[0] = 1
block[1] = 1
MPI_TYPE_CREATE_STRUCT(2, block, disp, type, MPI_FLOAT_INT)

It would seem that disp[1] is wrong in the event that the compiler
adds any padding between the float and the int.  If it is intended to
be absolutely correct, then the definition of MPI_FLOAT_INT demands
that the user take whatever steps are required to ensure that the
compiler does no padding.
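
The assumption can be made explicit with a compile-time check (my
sketch, using C11 _Static_assert and an illustrative struct name, not
anything taken from the standard or from MPICH):

/* The MPI_FLOAT_INT definition is only correct if the int sits
 * immediately after the float, i.e., if the compiler inserts no
 * padding; otherwise disp[1] = sizeof(float) is wrong. */
#include <stddef.h>

struct float_int { float value; int index; };

_Static_assert(offsetof(struct float_int, index) == sizeof(float),
               "compiler padding breaks the assumed MPI_FLOAT_INT layout");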

>>   I agree that MPI_Reduce(in, out, 1, MPI_2INT, ...) is absolutely
>> right.  I think the only reason for the warning is that the compiler is
>> not smart enough.
>
> Compilers could always be smarter ;-) It would be great if you could
> come up with a datatype that is -- from the compiler's perspective --
> compatible with both struct {int i1; int i2} and int[2]. Otherwise,
> I suggest removing the type check for MPI_2INT, as the warning is
> misleading.

If compilers were capable of being smart, everyone would just write
naive UPC code and let the compiler do all the work instead of
bothering with MPI :-D

From what you've stated above, this requires a lot of introspection
into the compiler.  How does one know for sure that the compiler pads
the same way every time?  What if the user employs a flag that induces
more or less padding?  There seems to be no reasonable way to address
this.
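
As a concrete example of the latter (mine, not from the original
mail): packing directives such as #pragma pack change member offsets,
so a library header and user code compiled with different packing
settings can disagree about what "the" struct layout is.

/* Demonstration that packing directives change struct layout; the
 * struct names are illustrative.  A char/int pair is used because it
 * shows padding under any common ABI. */
#include <stddef.h>
#include <stdio.h>

struct unpacked { char c; int i; };   /* offsetof(i) is typically 4 */

#pragma pack(push, 1)
struct packed { char c; int i; };     /* offsetof(i) is 1 */
#pragma pack(pop)

int main(void)
{
    printf("unpacked: %zu, packed: %zu\n",
           offsetof(struct unpacked, i), offsetof(struct packed, i));
    return 0;
}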

Fortunately, the MPI standard is quite clear: the valid type check is
for equivalence to "int[2]", not any manner of struct, and thus the
following line of mpi.h.in should be changed accordingly:

struct mpich_struct_mpi_2int            { int i1; int i2; };
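
For anyone who has not looked at how -Wtype-safety is wired up, the
checking hangs off Clang's type_tag_for_datatype and
pointer_with_type_tag attributes, roughly as in the sketch below
(simplified from memory of the Clang attribute reference; the 'mpi'
kind, the names, and the tag value are illustrative, and this is not
MPICH's actual mpi.h).  Whatever the exact fix ends up being, the type
named in the MPI_2INT tag should describe int[2] rather than a
two-member struct.

/* Rough sketch of Clang's -Wtype-safety machinery; NOT MPICH's mpi.h.
 * The 'mpi' kind, the tag value, and my_send are illustrative names. */
typedef int MPI_Datatype;

/* Declare that the constant value 0x1234 tags buffers whose element
 * type is int. */
static const MPI_Datatype my_mpi_int
    __attribute__((type_tag_for_datatype(mpi, int))) = 0x1234;

/* Argument 1 is a buffer whose pointee type must match the type tag
 * passed as argument 3. */
extern int my_send(const void *buf, int count, MPI_Datatype datatype)
    __attribute__((pointer_with_type_tag(mpi, 1, 3)));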

Best,

Jeff

>> On Sun, Dec 22, 2013 at 3:56 AM, Markus Geimer
>> <m.geimer at fz-juelich.de> wrote:
>>
>>     Dear MPICH developers,
>>
>>     When compiling the attached example with MPICH 3.1rc2 using Clang 3.3,
>>     I get the following compiler warnings:
>>
>>     -------------------- snip --------------------
>>     mpi2int.c:17:20: warning: argument type 'int *' doesn't match specified
>>     'MPI' type tag [-Wtype-safety]
>>         MPI_Reduce(in, out, 1, MPI_2INT, MPI_MAXLOC, 0, MPI_COMM_WORLD);
>>                        ^~~     ~~~~~~~~
>>     mpi2int.c:17:16: warning: argument type 'int *' doesn't match specified
>>     'MPI' type tag [-Wtype-safety]
>>         MPI_Reduce(in, out, 1, MPI_2INT, MPI_MAXLOC, 0, MPI_COMM_WORLD);
>>                    ^~          ~~~~~~~~
>>     2 warnings generated.
>>     -------------------- snip --------------------
>>
>>     According to the MPI standard, however, MPI_2INT is a datatype as if
>>     defined by
>>
>>             MPI_Type_contiguous(2, MPI_INT, MPI_2INT)
>>
>>     i.e., 'int[2]' should be a perfect match. This is not necessarily true
>>     for the type used for comparison,
>>
>>             struct mpich_struct_mpi_2int { int i1; int i2; };
>>
>>     which will only be contiguous if the compiler does not add any padding.
>>
>>     Is there any chance this gets fixed for the final 3.1 release? Or did
>>     I miss something?
>>
>>     Thanks,
>>     Markus
>>
>
> --
> Dr. Markus Geimer
> Juelich Supercomputing Centre
> Institute for Advanced Simulation
> Forschungszentrum Juelich GmbH
> 52425 Juelich, Germany
>
> Phone:  +49-2461-61-1773
> Fax:    +49-2461-61-6656
> E-mail: m.geimer at fz-juelich.de
> WWW:    http://www.fz-juelich.de/jsc/
>
>



-- 
Jeff Hammond
jeff.science at gmail.com


