[mpich-discuss] MPI_Win_fence failed

Jim Dinan james.dinan at gmail.com
Wed Jul 10 10:12:45 CDT 2013


It's hard to tell where the segmentation fault is coming from.  Can you use
a debugger to generate a backtrace?

 ~Jim.
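
If attaching gdb to every rank is awkward, one low-effort way to get a backtrace is to install a SIGSEGV handler that dumps the call stack with glibc's backtrace facility before exiting. The sketch below is only an illustration (the handler name and stack depth are arbitrary, and the printing call is not strictly async-signal-safe, so treat it purely as a debugging aid):

/* Debugging aid (sketch): print a backtrace when a segfault is caught.
 * Call install_segv_handler() early in main(), e.g. right after MPI_Init.
 * Compile with -g so the frames are symbolized usefully. */
#include <execinfo.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

static void segv_handler(int sig)
{
    void *frames[64];
    int n = backtrace(frames, 64);                   /* capture up to 64 return addresses */
    backtrace_symbols_fd(frames, n, STDERR_FILENO);  /* write the symbolized frames to stderr */
    (void)sig;
    _exit(EXIT_FAILURE);                             /* do not return into the faulting code */
}

void install_segv_handler(void)
{
    signal(SIGSEGV, segv_handler);
}

Alternatively, you can enable core dumps (ulimit -c unlimited), rerun, and open the resulting core file in gdb; "bt" there gives the same backtrace.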


On Wed, Jul 10, 2013 at 11:07 AM, Sufeng Niu <sniu at hawk.iit.edu> wrote:

> Hello,
>
> I am using MPI RMA in my program, but the program stops at MPI_Win_fence. I
> have a master process that receives data from a UDP socket, and the other
> processes use MPI_Get to access the data.
>
> master process:
>
> MPI_Win_create(...);
> for (...) {
>     /* udp recv operation */
>
>     MPI_Barrier(...);   // let the other processes know the udp data is ready
>
>     MPI_Win_fence(0, win);
>     MPI_Win_fence(0, win);
> }
>
> other processes:
>
> for (...) {
>     MPI_Barrier(...);   // sync: the udp data is ready
>
>     MPI_Win_fence(0, win);
>
>     MPI_Get(...);
>
>     MPI_Win_fence(0, win);   <-- program stopped here
>
>     /* other operation */
> }
>
> I found that the program stopped at the second MPI_Win_fence; the terminal
> output is:
>
>
>
> ===================================================================================
> =   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
> =   EXIT CODE: 11
> =   CLEANING UP REMAINING PROCESSES
> =   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
>
> ===================================================================================
> YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault
> (signal 11)
> This typically refers to a problem with your application.
> Please see the FAQ page for debugging suggestions
>
> Do you have any suggestions? Thank you very much!
>
> --
> Best Regards,
> Sufeng Niu
> ECASP lab, ECE department, Illinois Institute of Technology
> Tel: 312-731-7219
>
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss
>
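
For reference, a self-contained sketch of the fence-synchronized pattern described in the quoted message might look like the following. The buffer length, datatypes, iteration count, and the assumption that rank 0 is the master are all placeholders, not taken from the original program, and the UDP receive is replaced by a simple local fill:

/* Minimal sketch (assumptions noted above): rank 0 exposes a window,
 * the other ranks read it with MPI_Get inside a fence epoch. */
#include <mpi.h>
#include <stdlib.h>

#define LEN 1024

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every process must take part in window creation, even if only
     * rank 0's contents are ever read. */
    int *winbuf = malloc(LEN * sizeof(int));
    MPI_Win win;
    MPI_Win_create(winbuf, LEN * sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    int *local = malloc(LEN * sizeof(int));

    for (int iter = 0; iter < 10; iter++) {
        if (rank == 0) {
            /* stand-in for the UDP receive: fill the exposed buffer */
            for (int i = 0; i < LEN; i++) winbuf[i] = iter;
        }
        MPI_Barrier(MPI_COMM_WORLD);   /* "data is ready" notification */

        MPI_Win_fence(0, win);         /* all ranks open the epoch */
        if (rank != 0)
            MPI_Get(local, LEN, MPI_INT, 0, 0, LEN, MPI_INT, win);
        MPI_Win_fence(0, win);         /* the gets complete here */
    }

    free(local);
    MPI_Win_free(&win);
    free(winbuf);
    MPI_Finalize();
    return 0;
}

Note that with fence synchronization the data movement started by MPI_Get is only guaranteed to complete at the closing fence, which is why an invalid origin buffer or displacement often surfaces as a crash at the second MPI_Win_fence rather than at the MPI_Get call itself.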