<div dir="ltr"><div><div><div><div><div><div><div>I am having a problem where I cannot set the target_disp parameter to a positive value in any of the one-sided calls I've tried (e.g., MPI_Put, MPI_Get, MPI_Fetch_and_op).<br>
<br></div>I am trying to use a shared (lock_all) approach with flushes. When I set target_disp to zero, the messaging works as expected; with any positive value I always get a segmentation fault.<br><br></div>Obligatory disclaimer: I am not a C or MPI expert, so it's entirely possible I've made a newbie error here. But I am at my wit's end trying to figure this out and could use help.<br>
<br></div>Info: MPICH 3.0.4 built on Ubuntu 12.04 LTS, running on one node with an Intel® Core™ i5-3570K CPU @ 3.40GHz × 4.<br><br></div>The code I've isolated to show the problem is included below. With the targetDisp int set to 0, the data is transferred properly. If it is set to 1 or to sizeof(int), I get the following segfault from mpiexec:<br>
<br>corey@UbuntuDesktop:~/workspace/TargetDispBug/Release$ mpiexec -n 2 ./TargetDispBug<br><br>===================================================================================<br>= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES<br>
= EXIT CODE: 139<br>= CLEANING UP REMAINING PROCESSES<br>= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES<br>===================================================================================<br>YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault (signal 11)<br>
This typically refers to a problem with your application.<br>Please see the FAQ page for debugging suggestions<br><br></div>However, for targetDisp == 0 I get (as expected):<br><br>corey@UbuntuDesktop:~/workspace/TargetDispBug/Release$ mpiexec -n 2 ./TargetDispBug<br>
Received: 42.<br><br></div>For targetDisp > 0, the segfault occurs at the MPI_Win_flush on both processes, whether it follows the Put, the Get, or both.<br><br></div><div>Any help with this would be great.<br></div><div><br></div>Code follows:<br>
<br>#include "mpi.h"<br>#include &lt;stdio.h&gt; // needed for printf below<br><br>int main(int argc, char* argv[]){<br><br> // Test main for one-sided message queueing<br> int rank, numranks, targetDisp = 0;<br> int sizeInBytes = 10*sizeof(int), *buffer;<br>
MPI_Win window;<br><br> MPI_Init(&argc, &argv);<br><br> MPI_Comm_rank(MPI_COMM_WORLD, &rank);<br> MPI_Comm_size(MPI_COMM_WORLD, &numranks);<br><br> MPI_Win_allocate(sizeInBytes, MPI_INT, MPI_INFO_NULL, MPI_COMM_WORLD, &buffer, &window);<br>
<br> MPI_Win_lock_all(0, window);<br><br> int *sendBuffer;<br> int *receiveBuffer;<br><br> MPI_Alloc_mem(sizeof(int), MPI_INFO_NULL, &sendBuffer);<br> MPI_Alloc_mem(sizeof(int), MPI_INFO_NULL, &receiveBuffer);<br>
<br> if (rank == 1) {<br><br> sendBuffer[0] = 42;<br><br> MPI_Put(sendBuffer, 1, MPI_INT, 0, targetDisp, 1, MPI_INT, window);<br><br> MPI_Win_flush(0, window);<br><br> }<br><br> MPI_Barrier(MPI_COMM_WORLD);<br>
<br> if (rank == 0) {<br><br> MPI_Get(receiveBuffer, 1, MPI_INT, 0, targetDisp, 1, MPI_INT, window);<br><br> MPI_Win_flush(0, window);<br><br> printf("Received: %d.\n", receiveBuffer[0]);<br>
<br> }<br><br> MPI_Win_unlock_all(window);<br><br> MPI_Free_mem(sendBuffer);<br> MPI_Free_mem(receiveBuffer);<br><br> MPI_Win_free(&window);<br><br> MPI_Finalize();<br> return 0;<br><br>}<br><br></div>
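For reference, my understanding of how target_disp is interpreted (per the MPI standard's RMA chapter): the byte address targeted on the remote side is window_base + disp_unit * target_disp, where disp_unit is the integer displacement unit supplied when the window is created. Here is a minimal sketch of that arithmetic only (the helper name target_byte_offset is mine, not an MPI call):

```c
#include <stddef.h>

/* Per the MPI RMA rules, an origin-side call such as MPI_Put targets the
 * remote byte address:
 *     window_base + (size_t)disp_unit * target_disp
 * where disp_unit is the integer unit passed at window creation.
 * Hypothetical helper that computes just the byte offset. */
static size_t target_byte_offset(int disp_unit, size_t target_disp) {
    return (size_t)disp_unit * target_disp;
}
```

So if targetDisp is meant to count ints, the window's disp_unit should be sizeof(int); if targetDisp counts bytes, disp_unit should be 1. Either way, targetDisp = 1 with disp_unit = sizeof(int) and targetDisp = sizeof(int) with disp_unit = 1 address the same byte.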