[mpich-discuss] attribute value of MPI_TAG_UB

Jeff Hammond jeff.science at gmail.com
Sat Apr 27 11:20:09 CDT 2019


On Fri, Apr 26, 2019 at 12:18 PM Wei-keng Liao via discuss <discuss at mpich.org> wrote:

> According to MPI 3.1, Section 8.1.2, the attribute value of
> MPI_TAG_UB attached to MPI_COMM_WORLD can be inquired by function
> MPI_Comm_get_attr(). It has the same value on all processes of
> MPI_COMM_WORLD.
>
> My first question is: should this value be the same across different runs?


I would expect it to be the same when running with the same options. It is
possible that the value will change for different numbers of processes, and
it may also change depending on the network or which back end is used
(e.g., the libfabric provider).
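
Within a single run, a quick way to check agreement across ranks is to
reduce the queried value. Something along these lines (an untested sketch,
using only standard MPI calls) should do it:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, flag;
    void *val;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &val, &flag);

    /* The attribute value is returned as a pointer to int. */
    int tag_ub = flag ? *(int *)val : -1;

    /* If the value is identical on every process, the global
       minimum and maximum must agree. */
    int min_ub, max_ub;
    MPI_Allreduce(&tag_ub, &min_ub, 1, MPI_INT, MPI_MIN, MPI_COMM_WORLD);
    MPI_Allreduce(&tag_ub, &max_ub, 1, MPI_INT, MPI_MAX, MPI_COMM_WORLD);
    if (rank == 0)
        printf("MPI_TAG_UB: min=%d max=%d (%s)\n", min_ub, max_ub,
               min_ub == max_ub ? "consistent" : "inconsistent");

    MPI_Finalize();
    return 0;
}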


> A small test program using MPICH 3.3 shows the values are not the same
> across runs or across processes. But when compiled with the latest master
> branch, they are the same. So, can I assume the answer to my question is
> YES and that MPICH has fixed this in the master branch?
>
> My second question: I notice MPI_TAG_UB is defined in mpi.h as the
> constant 0x64400001 (= 1681915905). That value is not the one returned
> by MPI_Comm_get_attr(), which is 10345120. Is this intended?
>

That constant is not the tag upper bound itself but the attribute key used
to query it. You need to call MPI_Comm_get_attr() with that key rather than
interpret the constant directly; the key is meaningless by itself.
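
For example (an untested sketch; per the MPI standard, the value behind the
key is returned as a pointer to int):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    void *val;
    int flag;
    /* MPI_TAG_UB is only the key; the upper bound itself comes
       from querying the attribute on the communicator. */
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &val, &flag);
    if (flag)
        printf("key MPI_TAG_UB = %d, tag upper bound = %d\n",
               (int)MPI_TAG_UB, *(int *)val);

    MPI_Finalize();
    return 0;
}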

Jeff


> I also tested OpenMPI 4.0.0. The inconsistency occurs across different
> runs as well as across different processes. I can see OpenMPI defines
> MPI_TAG_UB as an enum value rather than a macro constant.
>
> Wei-keng
>
-- 
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/