<div dir="ltr"><p dir="ltr">Hi Bob,</p>
<p dir="ltr">Active target separates exposure and access epochs. It is erroneous for a process to engage in more than one exposure epoch or more than one access epoch per window. So it's ok for many processes to expose (post/wait) to me concurrently, but for a given window I can access only one (start/complete) at a time.</p>
Your example code looks ok to me. Are you getting an error message?

 ~Jim.

<div class="gmail_quote">On Jun 7, 2013 9:11 AM, "Bob Cernohous" <<a href="mailto:bobc@us.ibm.com" target="_blank">bobc@us.ibm.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<font face="sans-serif">I haven't worked on rma before but was
working on a problem and ran into this **comment in MPI_Win_post:</font>
<br>
<br><font face="sans-serif">"Starts an RMA exposure epoch for
the local window associated with win. **Only the processes belonging
to group should access the window with RMA calls on win during this epoch.
Each process in group must issue a matching call to MPI_Win_start.
MPI_Win_post does not block."</font>
<br>
<br><font face="sans-serif">Would overlapping epochs be violating
the ** line? I decided I probably need to support this but I wondered
if it's bending or breaking the 'rules'?</font>
<br>
<br><font face="sans-serif">The problem (code at the bottom of this
email) is using a cartesian communicator and alternating "left/right'
accumulates with 'up/down' accumulates on a single win. So:</font>
<br>
<br><font face="sans-serif">- Ranks 0,1,2,3 are doing a left/right
accumulate.</font>
<br><font face="sans-serif">- Ranks 4,5,6,7 are doing a left/right
accumulate.</font>
<br><font face="sans-serif">- ...</font>
<br>
<br><font face="sans-serif">and then sometimes...</font>
<br>
<br><font face="sans-serif">- Ranks 0,1,2,3 complete and enter the
'up/down' accumulate epoch</font>
<br><font face="sans-serif">-- Rank 0 does MPI_Win_post to ranks
4,12</font>
<br><font face="sans-serif">-- Rank 1 doesn MPI_Win_post to ranks
5,13</font>
<br><font face="sans-serif">... </font>
<br>
<br><font face="sans-serif">So is Rank 0 posting to Rank 4 while
4 is still in the epoch with 5/6/7 a violation of "Only the processes
belonging to group should access the window with RMA calls on win during
this epoch"? From Rank 4's point of view, rank 0 isn't in the
group for the current win/epoch.</font>
<br>
<br><font face="sans-serif">Putting a barrier (or something) in
between or using two different win's fixes it. I like using two win's
since it separates the epochs and clearly doesn't use the wrong group/rank
on the win.</font>
<br>
<br><font face="sans-serif"> /* RMA transfers in left-right
direction */</font>
<br><font face="sans-serif"> MPI_Win_post(grp_lr, 0,
win);</font>
<br><font face="sans-serif"> MPI_Win_start(grp_lr,
0, win);</font>
<br><font face="sans-serif"> MPI_Accumulate(&i,
1, MPI_INT, ranks_lr[LEFT] , 0, 1, MPI_INT, MPI_SUM, win);</font>
<br><font face="sans-serif"> MPI_Accumulate(&i,
1, MPI_INT, ranks_lr[RIGHT], 0, 1, MPI_INT, MPI_SUM, win);</font>
<br><font face="sans-serif"> MPI_Win_complete(win);</font>
<br><font face="sans-serif"> MPI_Win_wait(win);</font>
<br>
<br><font face="sans-serif"> /* RMA transfers in up-down
direction */</font>
<br><font face="sans-serif"> MPI_Win_post(grp_ud, 0,
win);</font>
<br><font face="sans-serif"> MPI_Win_start(grp_ud,
0, win);</font>
<br><font face="sans-serif"> MPI_Accumulate(&i,
1, MPI_INT, ranks_ud[UP] , 0, 1, MPI_INT, MPI_SUM, win);</font>
<br><font face="sans-serif"> MPI_Accumulate(&i,
1, MPI_INT, ranks_ud[DOWN], 0, 1, MPI_INT, MPI_SUM, win);</font>
<br><font face="sans-serif"> MPI_Win_complete(win);</font>
<br><font face="sans-serif"> MPI_Win_wait(win);</font>