<meta http-equiv="Content-Type" content="text/html; charset=utf-8"><div dir="ltr">I cannot find the text in MPI 2.1 that says one cannot initiate a shared lock at two processes at the same time. I have some vague recollection, but I read the chapter twice and couldn't find the restriction.<div><br></div><div>Jeff<br><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Mar 30, 2017 at 9:54 AM, Balaji, Pavan <span dir="ltr"><<a href="mailto:balaji@anl.gov" target="_blank">balaji@anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
No, it's not a bug. PAMID only supports MPI-2.1. Before you do locks to multiple targets, you need to check if MPI_VERSION >= 3. MPI-2 only supported locks to a single target. MPI-3 added multiple lock epochs.

Also, pamid is not supported anymore in MPICH. We recommend the MPICH-3.3/OFI/BGQ path for Blue Gene.
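For illustration, a minimal sketch of such a version guard (window creation, the target ranks, and the RMA traffic in between are assumed, not taken from the thread):

#include <mpi.h>

/* Sketch: take shared locks on two targets only when the library
 * advertises MPI-3; otherwise fall back to one epoch at a time. */
void shared_lock_two_targets(MPI_Win win, int rank_a, int rank_b)
{
#if MPI_VERSION >= 3
    /* MPI-3: a process may hold lock epochs on several targets of the
     * same window simultaneously. */
    MPI_Win_lock(MPI_LOCK_SHARED, rank_a, 0, win);
    MPI_Win_lock(MPI_LOCK_SHARED, rank_b, 0, win);
    /* ... MPI_Put/MPI_Get to both targets ... */
    MPI_Win_unlock(rank_b, win);
    MPI_Win_unlock(rank_a, win);
#else
    /* MPI-2.x: only one passive-target lock epoch at a time, so take
     * the locks one after the other. */
    MPI_Win_lock(MPI_LOCK_SHARED, rank_a, 0, win);
    /* ... MPI_Put/MPI_Get to rank_a ... */
    MPI_Win_unlock(rank_a, win);
    MPI_Win_lock(MPI_LOCK_SHARED, rank_b, 0, win);
    /* ... MPI_Put/MPI_Get to rank_b ... */
    MPI_Win_unlock(rank_b, win);
#endif
}

The check uses the compile-time MPI_VERSION macro from <mpi.h>; MPI_Get_version provides the same information at run time.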
<span class="gmail-HOEnZb"><font color="#888888"><br>
-- Pavan<br>
</font></span><div class="gmail-HOEnZb"><div class="gmail-h5"><br>
> On Mar 30, 2017, at 10:12 AM, Jeff Hammond <<a href="mailto:jeff.science@gmail.com">jeff.science@gmail.com</a>> wrote:<br>
><br>
> I guess this is a bug, but MPICH 3.1.x isn't the basis for the supported MPI on BGQ, so I doubt you will get much traction by reporting it. IBM made an effort to support MPI-3 with PAMID, but it was an open-source, best-effort project, and I recall there were some issues with it, including deadlock in certain asynchronous operations.
>
> You should try the supported MPI-2.2 implementation on BGQ or try the unsupported OFI-based implementation of MPI-3.
>
> Disclaimer: The comments above are spoken in my capacity as the person who used to support MPI at ALCF, not in my current job role.
>
> Best,
>
> Jeff
>
> On Thu, Mar 30, 2017 at 3:21 AM, Sebastian Rinke <rinke@cs.tu-darmstadt.de> wrote:
> Same window, i.e.:
>
> Process 0:
>
> Win_lock(MPI_LOCK_SHARED, target=A, window=win)
> Win_lock(MPI_LOCK_SHARED, target=B, window=win)
>
> Sebastian
>
>
> On 30 Mar 2017, at 06:24, Jeff Hammond <jeff.science@gmail.com> wrote:
>
>> Same window or different windows?
>>
>> Jeff
>>
>> On Wed, Mar 29, 2017 at 5:59 PM Sebastian Rinke <rinke@cs.tu-darmstadt.de> wrote:
>> Dear all,
>>
>> I have an issue with MPI_Win_lock in MPICH-3.1.4 on Blue Gene/Q.
>>
>> Here is my example:
>>
>> Process 0:
>>
>> Win_lock(MPI_LOCK_SHARED, target=A)
>> Win_lock(MPI_LOCK_SHARED, target=B)
>>
>> No matter what I use for A and B (given A != B), a process cannot acquire more than one lock at a time.
>>
>> To my understanding, it should be possible to acquire more than one lock.
>>
>> Can you confirm this issue?
>>
>> Thanks,
>> Sebastian
>> --
>> Jeff Hammond
>> jeff.science@gmail.com
>> http://jeffhammond.github.io/
>
> --
> Jeff Hammond
> jeff.science@gmail.com
> http://jeffhammond.github.io/

--
Jeff Hammond
jeff.science@gmail.com
http://jeffhammond.github.io/