[mpich-discuss] Reading buffers during MPI call in multithreaded application

Thakur, Rajeev thakur at mcs.anl.gov
Tue Aug 16 15:53:55 CDT 2016


I guess what is happening is that the allgather in the scatter-allgather algorithm happens everywhere, even at the root. The root must participate in the allgather, but the copy into the local buffer at the root can be avoided since the data is already there. Since all of this happens within a blocking MPI_Bcast operation, it was not considered a problem. I don’t know whether the buffer at the root is actually modified during the operation or just overwritten with the same values. If the latter, I don’t know why there would be a problem even in the multithreaded case.

Rajeev




> On Aug 16, 2016, at 3:47 PM, Mark Davis <markdavisinboston at gmail.com> wrote:
> 
> I'm glad that is the intention of MPI_Bcast. That agrees with my
> intuition. However, unless I'm mistaken, there may be a bug here as it
> seems that the buffer is being written to on the root process (see my
> original post for more details).
> 
> Given that there's only one buffer argument in MPI_Bcast, it does seem
> that it would be impossible to mark it as const, since it would be
> const on the root and non-const on the non-roots. But it would be
> nice if it were effectively const for the root process.
> 
> On Tue, Aug 16, 2016 at 4:34 PM, Balaji, Pavan <balaji at anl.gov> wrote:
>> 
>>> On Aug 16, 2016, at 3:12 PM, Mark Davis <markdavisinboston at gmail.com> wrote:
>>> 
>>>> At the non-root processes, MPI_Bcast behaves like an MPI_Recv call, and thus
>>>> you cannot touch the buffer until the function has returned.
>>> 
>>> That part makes sense. I'm not allowing the buffer to be read or
>>> otherwise used on non-root threads. It makes sense to me that this
>>> acts as a MPI_Recv call.
>>> 
>>> The thing that I'm confused by is on the root process, as it seems
>>> that the root process' buffer is also written to during the course of
>>> the MPI_Bcast; it should act like an MPI_Send. It seems like this is
>>> just an implementation detail, and as you pointed out, since the
>>> MPI_Bcast buffer is not marked const, anything could happen.
>> 
>> As of MPI-2.2, we allowed the root process to read from the buffer while communication is in progress for point-to-point and collective operations.  In MPI-3.0, we removed that wording when we added the "const" attributes, and we may have accidentally removed the wording for collectives as well, even though we didn't add const in all cases.  I haven't had a chance to look through the MPI-3.1 standard, but if we missed that, we should fix it.
>> 
>> The intention is that MPI_Bcast allows the user to read from the buffer at the root process, but not write to it.  So, for example, you can do multiple MPI_Ibcasts from the same buffer.
>> 
>>  -- Pavan
>> 
>> _______________________________________________
>> discuss mailing list     discuss at mpich.org
>> To manage subscription options or unsubscribe:
>> https://lists.mpich.org/mailman/listinfo/discuss


