<div dir="ltr"><div><div><div><div><div>Jeff,<br><br></div>Thank you for the help. Indeed, changing the disp_unit on MPI_Win_allocate from MPI_INT to sizeof(MPI_INT) does eliminate the seg faults in the subsequent window accesses. <br>
<br>Rajeev, I am not getting any compiler warnings for using int or literals for target_disp on my machine, but I did change all instances to MPI_Aint just in case. Possibly running this code with some other compiler would show warnings. I am using mpic++/g++ on Ubuntu. <br>
<br></div>I vaguely remember from the MPI-3.0 documentation (but can't seem to find it now) that the disp_unit on window creation and on any subsequent atomic operations had to match. I get the impression that the datatype passed to Get/Put does not determine the length in bytes of the stride used with target_disp. Instead the disp_unit at window creation is used. In this case it would seem to me that asking to Put/Get data with a target_disp of 1 and an original disp_unit of 1275069445 would definitely seg fault since the buffer and window are only a few tens of bytes long. But as I said earlier, I am by no means an expert. Still learning my way around. <br>
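In other words, my (possibly mistaken) mental model is that the target side resolves the address roughly like the sketch below. resolve_target_addr is just a name I made up for illustration, not anything from MPICH:

#include <mpi.h>

/* Illustrative only: how I believe the target address is computed.
   The Put/Get datatype does not enter into it; only disp_unit and
   target_disp do. */
void *resolve_target_addr(void *win_base, int disp_unit,
                          MPI_Aint target_disp)
{
    return (char *)win_base + (MPI_Aint)disp_unit * target_disp;
}

/* With disp_unit = 1275069445 (the integer value of the MPI_INT handle)
   and target_disp = 1, the address lands roughly 1.2 GB past a 40-byte
   window, which would explain the seg fault. */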
Thanks again to both of you for the help. I am not sure I would've ever caught that myself!

Corey

On Sat, Jan 18, 2014 at 4:56 PM, Jeff Hammond <jeff.science@gmail.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Corey,<br>
<br>
It was a simple bug. You passed MPI_INT as the disp_unit rather than<br>
sizeof(int), which is almost certainly what you meant. MPI_INT is an<br>
opaque handle that is 1275069445 when interpreted as an integer.<br>
<br>
I am not entirely sure why using such a large disp_unit caused the<br>
segfault though. I haven't yet figured out where disp_unit is used in<br>
the MPI_Get code.<br>
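Concretely, the difference is just the disp_unit argument. Using the variable names from your test program, something like:

/* what the test currently does: the MPI_INT handle (1275069445) becomes
   the displacement unit */
MPI_Win_allocate(sizeInBytes, MPI_INT, MPI_INFO_NULL, MPI_COMM_WORLD,
                 &buffer, &window);

/* what was almost certainly intended: displacements counted in units of
   sizeof(int) bytes */
MPI_Win_allocate(sizeInBytes, sizeof(int), MPI_INFO_NULL, MPI_COMM_WORLD,
                 &buffer, &window);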
Best,

Jeff
<div class="HOEnZb"><div class="h5"><br>
On Sat, Jan 18, 2014 at 2:41 PM, Corey A. Henderson<br>
<<a href="mailto:cahenderson@wisc.edu">cahenderson@wisc.edu</a>> wrote:<br>
> I am having a problem where I cannot set the target_disp parameter to a
> positive value in any of the one-sided calls I've tried (e.g., MPI_Put,
> MPI_Get, MPI_Fetch_and_op, etc.).
>
> I am trying to use a shared (lock_all) approach with flushes. When I set
> target_disp to zero, the messaging works fine as expected. If I use a
> positive value I always get a seg fault.
>
> Obligatory disclaimer: I am not a C or MPI expert, so it's entirely possible
> I've made some newbie error here. But I am at my wit's end trying to figure
> this out and could use help.
>
> Info: MPICH 3.0.4 built on Ubuntu 12.04 LTS, running one node on an Intel®
> Core™ i5-3570K CPU @ 3.40GHz × 4.
>
> I've attached the code I've isolated to show the problem. With the
> targetDisp int set to 0, the data is properly transferred. If it is set to
> 1, or to sizeof(int), I get the following seg fault from mpiexec:
>
> corey@UbuntuDesktop:~/workspace/TargetDispBug/Release$ mpiexec -n 2
> ./TargetDispBug
>
> ===================================================================================
> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
> = EXIT CODE: 139
> = CLEANING UP REMAINING PROCESSES
> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
> ===================================================================================
> YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault (signal 11)
> This typically refers to a problem with your application.
> Please see the FAQ page for debugging suggestions
>
> However, for targetDisp == 0 I get, as expected:
>
> corey@UbuntuDesktop:~/workspace/TargetDispBug/Release$ mpiexec -n 2
> ./TargetDispBug
> Received: 42.
>
> For targetDisp > 0, the seg fault occurs at the MPI_Win_flush on both
> processes, on either the Put or the Get or both.
>
> Any help with this would be great.
>
> Code follows:
>
> #include "mpi.h"<br>
><br>
> int main(int argc, char* argv[]){<br>
><br>
> // Test main for one sided message queueing<br>
> int rank, numranks, targetDisp = 0;<br>
> int sizeInBytes = 10*sizeof(int), *buffer;<br>
> MPI_Win window;<br>
><br>
> MPI_Init(&argc, &argv);<br>
><br>
> MPI_Comm_rank(MPI_COMM_WORLD, &rank);<br>
> MPI_Comm_size(MPI_COMM_WORLD, &numranks);<br>
><br>
> MPI_Win_allocate(sizeInBytes, MPI_INT, MPI_INFO_NULL, MPI_COMM_WORLD,<br>
> &buffer, &window);<br>
><br>
> MPI_Win_lock_all(0, window);<br>
><br>
> int *sendBuffer;<br>
> int *receiveBuffer;<br>
><br>
> MPI_Alloc_mem(sizeof(int), MPI_INFO_NULL, &sendBuffer);<br>
> MPI_Alloc_mem(sizeof(int), MPI_INFO_NULL, &receiveBuffer);<br>
><br>
> if (rank == 1) {<br>
><br>
> sendBuffer[0] = 42;<br>
><br>
> MPI_Put(sendBuffer, 1, MPI_INT, 0, targetDisp, 1, MPI_INT, window);<br>
><br>
> MPI_Win_flush(0, window);<br>
><br>
> }<br>
><br>
> MPI_Barrier(MPI_COMM_WORLD);<br>
><br>
> if (rank == 0) {<br>
><br>
> MPI_Get(receiveBuffer, 1, MPI_INT, 0, targetDisp, 1, MPI_INT,<br>
> window);<br>
><br>
> MPI_Win_flush(0, window);<br>
><br>
> printf("Received: %d.\n", receiveBuffer[0]);<br>
><br>
> }<br>
><br>
> MPI_Win_unlock_all(window);<br>
><br>
> MPI_Free_mem(sendBuffer);<br>
> MPI_Free_mem(receiveBuffer);<br>
><br>
> MPI_Win_free(&window);<br>
><br>
> MPI_Finalize();<br>
> return 0;<br>
><br>
> }<br>
>
> _______________________________________________
> discuss mailing list discuss@mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

--
Jeff Hammond
jeff.science@gmail.com