[mpich-discuss] Extent of support for MPI_T_... interface?
Halim Amer
aamer at anl.gov
Mon Jun 19 14:06:23 CDT 2017
The types of variables you can measure should all be visible at init
time. For the same type of variable, however, you can have multiple
bindings to different MPI objects. For example, when you create a new
communicator, you can bind a new performance variable to that
communicator and track the metric for it alone, instead of tracking the
same performance metric globally across all communicators. You can
refer to Section 14.3.2 of the MPI 3.1 standard for more details.
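
As a minimal sketch of this (the pvar index here is hypothetical and
would first be discovered with MPI_T_pvar_get_info; only a pvar whose
binding is MPI_T_BIND_MPI_COMM takes a communicator as its object
handle):

#include <mpi.h>
#include <stdio.h>

/* Track one pvar on a specific communicator instead of globally.
 * Assumes the pvar's binding is MPI_T_BIND_MPI_COMM and its datatype
 * is MPI_UNSIGNED_LONG_LONG with count 1; a real tool would verify
 * all of this with MPI_T_pvar_get_info first. */
static void track_on_comm(int pvar_index, MPI_Comm comm)
{
    MPI_T_pvar_session session;
    MPI_T_pvar_handle handle;
    int count;
    unsigned long long value = 0;

    MPI_T_pvar_session_create(&session);
    /* Passing &comm as the object handle binds the variable to this
     * communicator only. */
    MPI_T_pvar_handle_alloc(session, pvar_index, &comm, &handle, &count);
    MPI_T_pvar_start(session, handle);

    /* ... perform the communication to be measured on 'comm' ... */

    MPI_T_pvar_stop(session, handle);
    MPI_T_pvar_read(session, handle, &value);
    printf("pvar %d on this communicator: %llu\n", pvar_index, value);

    MPI_T_pvar_handle_free(session, &handle);
    MPI_T_pvar_session_free(&session);
}

Note that a continuous pvar cannot be started or stopped; the
'continuous' flag returned by MPI_T_pvar_get_info tells you which case
applies.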
Halim
www.mcs.anl.gov/~aamer
On 6/19/17 11:07 AM, Alexander Rast wrote:
> OK, that's working now. I see 10 performance variables. Are any more likely
> to become available as processes/comms are started? In particular, what will
> happen if I run MPI_Comm_spawn to generate new processes?
>
> Thanks for the help thus far.
>
>
>
> On Mon, Jun 19, 2017 at 3:40 PM, Halim Amer <aamer at anl.gov> wrote:
>
>> Tracking performance variables incurs overhead, so it is disabled by
>> default. You can enable pvar tracking by passing --enable-mpit-pvars to
>> configure when building MPICH. See "configure --help" for the pvars
>> tracked by the MPICH version you are using.
>>
>> Halim
>> www.mcs.anl.gov/~aamer
>>
>>
>> On 6/19/17 9:27 AM, Alexander Rast wrote:
>>
>>> I just ran a simple program to output the list of performance variables
>>> available at MPI initialisation. That it compiled and ran indicates that
>>> MPICH does at least support the interface; however, it returned 0
>>> available variables. Is this really the case - that MPICH supports the
>>> interface but exports no performance variables?
>>>
>>> The other possible scenario I could envisage is that MPICH exports
>>> many variables, but that these aren't available at initialisation time;
>>> rather, they are dynamically exported as processes and communicators
>>> are created. However, if this is the case, I don't understand the
>>> expected usage model. It's not clear how you could make use of this
>>> data without very awkward processing, because the underlying datatypes
>>> are only exported as opaque MPI_Datatype handles.
>>>
>>> Does either of these two scenarios reflect the extent of MPI_T support?
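
A minimal sketch of the kind of listing program described above; note
that the MPI standard restricts pvar datatypes to a small fixed set
(e.g. MPI_INT, MPI_UNSIGNED_LONG_LONG, MPI_DOUBLE), which keeps the
handling of the opaque MPI_Datatype tractable:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, num_pvar;

    /* The MPI_T interface can be initialized independently of MPI. */
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_Init(&argc, &argv);

    MPI_T_pvar_get_num(&num_pvar);
    printf("%d performance variables exported\n", num_pvar);

    for (int i = 0; i < num_pvar; i++) {
        char name[256], desc[256];
        int name_len = sizeof name, desc_len = sizeof desc;
        int verbosity, var_class, binding;
        int readonly, continuous, atomic;
        MPI_Datatype datatype;
        MPI_T_enum enumtype;

        MPI_T_pvar_get_info(i, name, &name_len, &verbosity, &var_class,
                            &datatype, &enumtype, desc, &desc_len,
                            &binding, &readonly, &continuous, &atomic);

        /* Dispatch on the small set of datatypes the standard allows
         * for pvars. */
        const char *type_str =
            datatype == MPI_UNSIGNED_LONG_LONG ? "unsigned long long" :
            datatype == MPI_UNSIGNED           ? "unsigned"           :
            datatype == MPI_DOUBLE             ? "double"             :
            datatype == MPI_INT                ? "int"                :
                                                 "other";
        printf("  [%d] %s (%s): %s\n", i, name, type_str, desc);
    }

    MPI_Finalize();
    MPI_T_finalize();
    return 0;
}

On a build configured without --enable-mpit-pvars, num_pvar will be 0,
matching the behaviour reported above.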
_______________________________________________
discuss mailing list discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss