[mpich-discuss] MPICH-3.1.4 with PAMID: Win_lock

Jeff Hammond jeff.science at gmail.com
Fri Mar 31 13:12:21 CDT 2017


Thanks.  I vaguely remember that discussion but I was confused by the text
until now.

Jeff


On Thu, Mar 30, 2017 at 7:57 PM, Balaji, Pavan <balaji at anl.gov> wrote:
>
>
> MPI-2.2 page 347, line 31:
>
> "Distinct access epochs for win at the same process must be disjoint."
>
> We had a passionate discussion about this at the Forum for an entire day
> many years ago, even though every single person agreed that this
> restriction was silly.  Kind of hard to forget.  Fun times.  :-)
>
>   -- Pavan
>
> > On Mar 30, 2017, at 1:53 PM, Jeff Hammond <jeff.science at gmail.com>
> > wrote:
> >
> > I cannot find the text in MPI 2.1 that says one cannot initiate a
> > shared lock at two processes at the same time.  I have some vague
> > recollection, but I read the chapter twice and couldn't find the
> > restriction.
> >
> > Jeff
> >
> > On Thu, Mar 30, 2017 at 9:54 AM, Balaji, Pavan <balaji at anl.gov> wrote:
> >
> > No, it's not a bug.  PAMID only supports MPI-2.1.  Before you do locks
> > to multiple targets, you need to check if MPI_VERSION >= 3.  MPI-2 only
> > supported locks to a single target.  MPI-3 added multiple lock epochs.
> >
> > Also, pamid is not supported anymore in MPICH.  We recommend the
> > MPICH-3.3/OFI/BGQ path for Blue Gene.
> >
> >   -- Pavan
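
As a rough illustration of the version check suggested above (again only a
sketch: the function name lock_both_targets and the placeholders rank_a,
rank_b, and win are not from the original mails), the multi-target locking
can be gated on the reported MPI version, either at compile time via the
MPI_VERSION macro or at run time via MPI_Get_version:

#include <mpi.h>

/* Use concurrent lock epochs only when the library reports MPI-3 or
 * newer; otherwise fall back to one epoch at a time, as MPI-2.x
 * implementations such as PAMID require. */
void lock_both_targets(MPI_Win win, int rank_a, int rank_b)
{
    int version, subversion;
    MPI_Get_version(&version, &subversion);  /* run-time equivalent of MPI_VERSION/MPI_SUBVERSION */

    if (version >= 3) {
        MPI_Win_lock(MPI_LOCK_SHARED, rank_a, 0, win);
        MPI_Win_lock(MPI_LOCK_SHARED, rank_b, 0, win);
        /* ... RMA operations to both targets ... */
        MPI_Win_unlock(rank_b, win);
        MPI_Win_unlock(rank_a, win);
    } else {
        MPI_Win_lock(MPI_LOCK_SHARED, rank_a, 0, win);
        /* ... RMA operations to rank_a ... */
        MPI_Win_unlock(rank_a, win);
        MPI_Win_lock(MPI_LOCK_SHARED, rank_b, 0, win);
        /* ... RMA operations to rank_b ... */
        MPI_Win_unlock(rank_b, win);
    }
}
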
> >
> > > On Mar 30, 2017, at 10:12 AM, Jeff Hammond <jeff.science at gmail.com>
> > > wrote:
> > >
> > > I guess this is a bug, but MPICH 3.1.x isn't the basis for the
> > > supported MPI on BGQ, so I doubt you will get much traction by reporting
> > > it.  IBM made an effort to support MPI-3 with PAMID, but it was an
> > > open-source, best-effort project, and I recall there were some issues with
> > > it, including deadlock in certain asynchronous operations.
> > >
> > > You should try the supported MPI-2.2 implementation on BGQ or try the
> > > unsupported OFI-based implementation of MPI-3.
> > >
> > > Disclaimer: The comments above are made in my capacity as the
> > > person who used to support MPI at ALCF, not in my current job role.
> > >
> > > Best,
> > >
> > > Jeff
> > >
> > > On Thu, Mar 30, 2017 at 3:21 AM, Sebastian Rinke <rinke at cs.tu-darmstadt.de> wrote:
> > > Same window, i.e.:
> > >
> > > Process 0:
> > >
> > > Win_lock(MPI_LOCK_SHARED, target=A, window=win)
> > > Win_lock(MPI_LOCK_SHARED, target=B, window=win)
> > >
> > > Sebastian
> > >
> > >
> > > On 30 Mar 2017, at 06:24, Jeff Hammond <jeff.science at gmail.com> wrote:
> > >
> > >> Same window or different windows?
> > >>
> > >> Jeff
> > >>
> > >> On Wed, Mar 29, 2017 at 5:59 PM Sebastian Rinke <rinke at cs.tu-darmstadt.de> wrote:
> > >> Dear all,
> > >>
> > >> I am having an issue with MPI_Win_lock in MPICH-3.1.4 on Blue Gene/Q.
> > >>
> > >> Here is my example:
> > >>
> > >> Process 0:
> > >>
> > >> Win_lock(MPI_LOCK_SHARED, target=A)
> > >> Win_lock(MPI_LOCK_SHARED, target=B)
> > >>
> > >> No matter what I use for A and B (given A != B), a process cannot
> > >> acquire more than one lock at a time.
> > >>
> > >> To my understanding, it should be possible to acquire more than one lock.
> > >>
> > >> Can you confirm this issue?
> > >>
> > >> Thanks,
> > >> Sebastian
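
A self-contained reproducer along the lines Sebastian describes might look
roughly as follows (a sketch, assuming at least three processes, with
target ranks 1 and 2 standing in for A and B):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, value = 0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Every process exposes one int through the same window. */
    MPI_Win_create(&value, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    if (rank == 0 && size >= 3) {
        /* Two concurrent shared-lock epochs on the same window:
         * permitted by MPI-3, disallowed by MPI-2.x. */
        MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);
        MPI_Win_lock(MPI_LOCK_SHARED, 2, 0, win);
        MPI_Win_unlock(2, win);
        MPI_Win_unlock(1, win);
        printf("acquired and released shared locks on targets 1 and 2\n");
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

With the MPI-2.1-level PAMID build discussed above, the second
MPI_Win_lock reportedly does not complete, whereas an MPI-3 library
should allow both epochs.
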
> >
>




--
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/
_______________________________________________
discuss mailing list     discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss

