[mpich-commits] [mpich] MPICH primary repository branch, master, updated. v3.0.2-78-g5ac51ed
mysql vizuser
noreply at mpich.org
Fri Mar 15 16:35:59 CDT 2013
This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "MPICH primary repository".
The branch, master has been updated
via 5ac51edf49d664d33b4a88fc6a5cebeb51950149 (commit)
via 9f3b12348c4f94bc059fcc6350a660dabb020c4e (commit)
from e04dd4b64ff618f2df58789265b741a8e9fab081 (commit)
Those revisions listed above that are new to this repository have
not appeared in any other notification email, so we list those
revisions in full below.
- Log -----------------------------------------------------------------
http://git.mpich.org/mpich.git/commitdiff/5ac51edf49d664d33b4a88fc6a5cebeb51950149
commit 5ac51edf49d664d33b4a88fc6a5cebeb51950149
Author: James Dinan <dinan at mcs.anl.gov>
Date: Fri Mar 1 09:12:19 2013 -0600
Add additional wait/test cases to Req. ops. test
This adds cases to the request-generating RMA operations test that
ensure the implementation correctly handles waiting on the request
inside the same passive target epoch, in a different passive target
epoch, in a different fence epoch, and outside of any epoch.
Reviewer: goodell
diff --git a/test/mpi/rma/reqops.c b/test/mpi/rma/reqops.c
index 7622dd1..ef2636f 100644
--- a/test/mpi/rma/reqops.c
+++ b/test/mpi/rma/reqops.c
@@ -172,6 +172,106 @@ int main( int argc, char *argv[] )
}
MPI_Win_unlock(0, window);
+ MPI_Barrier(MPI_COMM_WORLD);
+
+ /* Wait inside of an epoch */
+ {
+ MPI_Request pn_req[4];
+ int val[4], res;
+ const int target = 0;
+
+ MPI_Win_lock_all(0, window);
+
+ MPI_Rget_accumulate(&val[0], 1, MPI_INT, &res, 1, MPI_INT, target, 0, 1, MPI_INT, MPI_REPLACE, window, &pn_req[0]);
+ MPI_Rget(&val[1], 1, MPI_INT, target, 1, 1, MPI_INT, window, &pn_req[1]);
+ MPI_Rput(&val[2], 1, MPI_INT, target, 2, 1, MPI_INT, window, &pn_req[2]);
+ MPI_Raccumulate(&val[3], 1, MPI_INT, target, 3, 1, MPI_INT, MPI_REPLACE, window, &pn_req[3]);
+
+ assert(pn_req[0] != MPI_REQUEST_NULL);
+ assert(pn_req[1] != MPI_REQUEST_NULL);
+ assert(pn_req[2] != MPI_REQUEST_NULL);
+ assert(pn_req[3] != MPI_REQUEST_NULL);
+
+ MPI_Waitall(4, pn_req, MPI_STATUSES_IGNORE);
+
+ MPI_Win_unlock_all(window);
+ }
+
+ MPI_Barrier(MPI_COMM_WORLD);
+
+ /* Wait outside of an epoch */
+ {
+ MPI_Request pn_req[4];
+ int val[4], res;
+ const int target = 0;
+
+ MPI_Win_lock_all(0, window);
+
+ MPI_Rget_accumulate(&val[0], 1, MPI_INT, &res, 1, MPI_INT, target, 0, 1, MPI_INT, MPI_REPLACE, window, &pn_req[0]);
+ MPI_Rget(&val[1], 1, MPI_INT, target, 1, 1, MPI_INT, window, &pn_req[1]);
+ MPI_Rput(&val[2], 1, MPI_INT, target, 2, 1, MPI_INT, window, &pn_req[2]);
+ MPI_Raccumulate(&val[3], 1, MPI_INT, target, 3, 1, MPI_INT, MPI_REPLACE, window, &pn_req[3]);
+
+ assert(pn_req[0] != MPI_REQUEST_NULL);
+ assert(pn_req[1] != MPI_REQUEST_NULL);
+ assert(pn_req[2] != MPI_REQUEST_NULL);
+ assert(pn_req[3] != MPI_REQUEST_NULL);
+
+ MPI_Win_unlock_all(window);
+
+ MPI_Waitall(4, pn_req, MPI_STATUSES_IGNORE);
+ }
+
+ /* Wait in a different epoch */
+ {
+ MPI_Request pn_req[4];
+ int val[4], res;
+ const int target = 0;
+
+ MPI_Win_lock_all(0, window);
+
+ MPI_Rget_accumulate(&val[0], 1, MPI_INT, &res, 1, MPI_INT, target, 0, 1, MPI_INT, MPI_REPLACE, window, &pn_req[0]);
+ MPI_Rget(&val[1], 1, MPI_INT, target, 1, 1, MPI_INT, window, &pn_req[1]);
+ MPI_Rput(&val[2], 1, MPI_INT, target, 2, 1, MPI_INT, window, &pn_req[2]);
+ MPI_Raccumulate(&val[3], 1, MPI_INT, target, 3, 1, MPI_INT, MPI_REPLACE, window, &pn_req[3]);
+
+ assert(pn_req[0] != MPI_REQUEST_NULL);
+ assert(pn_req[1] != MPI_REQUEST_NULL);
+ assert(pn_req[2] != MPI_REQUEST_NULL);
+ assert(pn_req[3] != MPI_REQUEST_NULL);
+
+ MPI_Win_unlock_all(window);
+
+ MPI_Win_lock_all(0, window);
+ MPI_Waitall(4, pn_req, MPI_STATUSES_IGNORE);
+ MPI_Win_unlock_all(window);
+ }
+
+ /* Wait in a fence epoch */
+ {
+ MPI_Request pn_req[4];
+ int val[4], res;
+ const int target = 0;
+
+ MPI_Win_lock_all(0, window);
+
+ MPI_Rget_accumulate(&val[0], 1, MPI_INT, &res, 1, MPI_INT, target, 0, 1, MPI_INT, MPI_REPLACE, window, &pn_req[0]);
+ MPI_Rget(&val[1], 1, MPI_INT, target, 1, 1, MPI_INT, window, &pn_req[1]);
+ MPI_Rput(&val[2], 1, MPI_INT, target, 2, 1, MPI_INT, window, &pn_req[2]);
+ MPI_Raccumulate(&val[3], 1, MPI_INT, target, 3, 1, MPI_INT, MPI_REPLACE, window, &pn_req[3]);
+
+ assert(pn_req[0] != MPI_REQUEST_NULL);
+ assert(pn_req[1] != MPI_REQUEST_NULL);
+ assert(pn_req[2] != MPI_REQUEST_NULL);
+ assert(pn_req[3] != MPI_REQUEST_NULL);
+
+ MPI_Win_unlock_all(window);
+
+ MPI_Win_fence(0, window);
+ MPI_Waitall(4, pn_req, MPI_STATUSES_IGNORE);
+ MPI_Win_fence(0, window);
+ }
+
MPI_Win_free(&window);
if (buf) MPI_Free_mem(buf);
http://git.mpich.org/mpich.git/commitdiff/9f3b12348c4f94bc059fcc6350a660dabb020c4e
commit 9f3b12348c4f94bc059fcc6350a660dabb020c4e
Author: James Dinan <dinan at mcs.anl.gov>
Date: Fri Mar 15 14:14:44 2013 -0500
Fix req. op. completion outside of PT epoch
This is a temporary fix for request-generating operations that allows their
requests to be completed after the user has called unlock on the given
target. This closes ticket #1801. Ticket #1741 remains open to track the
need for an implementation of req. ops that tracks and completes individual
operations, rather than the current approach, which just calls flush.
Reviewer: goodell
diff --git a/src/mpid/ch3/src/ch3u_rma_reqops.c b/src/mpid/ch3/src/ch3u_rma_reqops.c
index 09cf9d9..acfe1cb 100644
--- a/src/mpid/ch3/src/ch3u_rma_reqops.c
+++ b/src/mpid/ch3/src/ch3u_rma_reqops.c
@@ -30,10 +30,16 @@ static int MPIDI_CH3I_Rma_req_poll(void *state, MPI_Status *status)
MPIU_UNREFERENCED_ARG(status);
- /* Call flush to complete the operation */
+ /* Call flush to complete the operation. Check that a passive target epoch
+ * is still active first; the user could complete the request after calling
+ * unlock. */
/* FIXME: We need per-operation completion to make this more efficient. */
- mpi_errno = req_state->win_ptr->RMAFns.Win_flush(req_state->target_rank,
- req_state->win_ptr);
+ if (req_state->win_ptr->targets[req_state->target_rank].remote_lock_state
+ != MPIDI_CH3_WIN_LOCK_NONE)
+ {
+ mpi_errno = req_state->win_ptr->RMAFns.Win_flush(req_state->target_rank,
+ req_state->win_ptr);
+ }
if (mpi_errno != MPI_SUCCESS) { MPIU_ERR_POP(mpi_errno); }
-----------------------------------------------------------------------
Summary of changes:
src/mpid/ch3/src/ch3u_rma_reqops.c | 12 +++-
test/mpi/rma/reqops.c | 100 ++++++++++++++++++++++++++++++++++++
2 files changed, 109 insertions(+), 3 deletions(-)
hooks/post-receive
--
MPICH primary repository