I haven't worked on RMA before, but I was working on a problem and ran into
this comment in MPI_Win_post (the relevant line is marked with ** below):

<br><font size=2 face="sans-serif">"Starts an RMA exposure epoch for
the local window associated with win. **Only the processes belonging
to group should access the window with RMA calls on win during this epoch.
Each process in group must issue a matching call to MPI_Win_start.
MPI_Win_post does not block."</font>
<br>
<br><font size=2 face="sans-serif">Would overlapping epochs be violating
the ** line? I decided I probably need to support this but I wondered
if it's bending or breaking the 'rules'?</font>
<br>
<br><font size=2 face="sans-serif">The problem (code at the bottom of this
email) is using a cartesian communicator and alternating "left/right'
accumulates with 'up/down' accumulates on a single win. So:</font>
<br>
<br><font size=2 face="sans-serif">- Ranks 0,1,2,3 are doing a left/right
accumulate.</font>
<br><font size=2 face="sans-serif">- Ranks 4,5,6,7 are doing a left/right
accumulate.</font>
<br><font size=2 face="sans-serif">- ...</font>
<br>
<br><font size=2 face="sans-serif">and then sometimes...</font>
<br>
<br><font size=2 face="sans-serif">- Ranks 0,1,2,3 complete and enter the
'up/down' accumulate epoch</font>
<br><font size=2 face="sans-serif">-- Rank 0 does MPI_Win_post to ranks
4,12</font>
<br><font size=2 face="sans-serif">-- Rank 1 doesn MPI_Win_post to ranks
5,13</font>
<br><font size=2 face="sans-serif">... </font>
<br>
<br><font size=2 face="sans-serif">So is Rank 0 posting to Rank 4 while
4 is still in the epoch with 5/6/7 a violation of "Only the processes
belonging to group should access the window with RMA calls on win during
this epoch"? From Rank 4's point of view, rank 0 isn't in the
group for the current win/epoch.</font>
<br>
<br><font size=2 face="sans-serif">Putting a barrier (or something) in
between or using two different win's fixes it. I like using two win's
since it separates the epochs and clearly doesn't use the wrong group/rank
on the win.</font>
<br>
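For concreteness, here is a minimal sketch of the two-window variant,
assuming both windows can be created over the same buffer. comm_cart, buf,
and the win_lr/win_ud names are my assumptions, not taken from the original
code:

    /* Hypothetical two-window variant: one window object per direction,
     * so each PSCW epoch has its own win and a left/right epoch can
     * never be confused with an up/down epoch on the same window. */
    int buf = 0, i = 1;
    MPI_Win win_lr, win_ud;
    MPI_Win_create(&buf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   comm_cart, &win_lr);
    MPI_Win_create(&buf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   comm_cart, &win_ud);

    /* RMA transfers in left-right direction, on win_lr only */
    MPI_Win_post(grp_lr, 0, win_lr);
    MPI_Win_start(grp_lr, 0, win_lr);
    MPI_Accumulate(&i, 1, MPI_INT, ranks_lr[LEFT] , 0, 1, MPI_INT, MPI_SUM, win_lr);
    MPI_Accumulate(&i, 1, MPI_INT, ranks_lr[RIGHT], 0, 1, MPI_INT, MPI_SUM, win_lr);
    MPI_Win_complete(win_lr);
    MPI_Win_wait(win_lr);

    /* RMA transfers in up-down direction, on win_ud only */
    MPI_Win_post(grp_ud, 0, win_ud);
    MPI_Win_start(grp_ud, 0, win_ud);
    MPI_Accumulate(&i, 1, MPI_INT, ranks_ud[UP]  , 0, 1, MPI_INT, MPI_SUM, win_ud);
    MPI_Accumulate(&i, 1, MPI_INT, ranks_ud[DOWN], 0, 1, MPI_INT, MPI_SUM, win_ud);
    MPI_Win_complete(win_ud);
    MPI_Win_wait(win_ud);

    MPI_Win_free(&win_lr);
    MPI_Win_free(&win_ud);
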
<br><font size=2 face="sans-serif"> /* RMA transfers in left-right
direction */</font>
<br><font size=2 face="sans-serif"> MPI_Win_post(grp_lr, 0,
win);</font>
<br><font size=2 face="sans-serif"> MPI_Win_start(grp_lr,
0, win);</font>
<br><font size=2 face="sans-serif"> MPI_Accumulate(&i,
1, MPI_INT, ranks_lr[LEFT] , 0, 1, MPI_INT, MPI_SUM, win);</font>
<br><font size=2 face="sans-serif"> MPI_Accumulate(&i,
1, MPI_INT, ranks_lr[RIGHT], 0, 1, MPI_INT, MPI_SUM, win);</font>
<br><font size=2 face="sans-serif"> MPI_Win_complete(win);</font>
<br><font size=2 face="sans-serif"> MPI_Win_wait(win);</font>
<br>
<br><font size=2 face="sans-serif"> /* RMA transfers in up-down
direction */</font>
<br><font size=2 face="sans-serif"> MPI_Win_post(grp_ud, 0,
win);</font>
<br><font size=2 face="sans-serif"> MPI_Win_start(grp_ud,
0, win);</font>
<br><font size=2 face="sans-serif"> MPI_Accumulate(&i,
1, MPI_INT, ranks_ud[UP] , 0, 1, MPI_INT, MPI_SUM, win);</font>
<br><font size=2 face="sans-serif"> MPI_Accumulate(&i,
1, MPI_INT, ranks_ud[DOWN], 0, 1, MPI_INT, MPI_SUM, win);</font>
<br><font size=2 face="sans-serif"> MPI_Win_complete(win);</font>
<br><font size=2 face="sans-serif"> MPI_Win_wait(win);</font>