[mpich-discuss] Dataloop error message
William Gropp
wgropp at illinois.edu
Wed Mar 8 19:18:40 CST 2017
I’d still like to see MPICH adopt the Dataloop code that Tarun wrote, which should be much faster and, in particular, better suited for use with RMA. I think that would be more productive in the long term than continuing to maintain the current code.
Bill
William Gropp
Acting Director and Chief Scientist, NCSA
Director, Parallel Computing Institute
Thomas M. Siebel Chair in Computer Science
University of Illinois Urbana-Champaign
> On Mar 8, 2017, at 5:41 PM, Palmer, Bruce J <Bruce.Palmer at pnnl.gov> wrote:
>
> Rob,
>
> Attached are the valgrind logs for a failed run. I've checked out the code on our side and I don't see anything obviously bogus (not that that means much). Do these suggest anything to you? I'm still trying to create a short reproducer, but as you can imagine, all my efforts so far work just fine.
>
> Bruce
>
> -----Original Message-----
> From: Latham, Robert J. [mailto:robl at mcs.anl.gov]
> Sent: Wednesday, March 08, 2017 7:26 AM
> To: discuss at mpich.org
> Subject: Re: [mpich-discuss] Dataloop error message
>
> On Tue, 2017-03-07 at 19:31 +0000, Palmer, Bruce J wrote:
>> Hi,
>>
>> I’m trying to track down a possible race condition in a test program
>> that is using MPI RMA from MPICH 3.2. The program repeats a series of
>> put/get/accumulate operations targeting different processes. When I’m
>> running on 1 node with 4 processes everything is fine, but when I move to
>> 2 nodes with 4 processes I start getting failures. The error messages I’m
>> seeing are
>>
>> Assertion failed in file src/mpid/common/datatype/dataloop/dataloop.c
>> at line 265: 0
>
> That's a strange one! It comes from the "Dataloop_update" routine,
> which updates pointers after a copy operation. That particular assertion is raised by the "handle different types" switch
>
> switch(dataloop->kind & DLOOP_KIND_MASK)
>
> which means somehow this code got a datatype that was not one of CONTIG, VECTOR, BLOCKINDEXED, INDEXED, or STRUCT (in dataloop terms; the MPI type HINDEXED, for example, maps directly to INDEXED, so not all MPI types are explicitly handled).
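>
> In outline that switch looks roughly like this (a paraphrase of the
> shape of the MPICH source, not a verbatim copy):
>
> switch (dataloop->kind & DLOOP_KIND_MASK) {
>     case DLOOP_KIND_CONTIG:
>     case DLOOP_KIND_VECTOR:
>     case DLOOP_KIND_BLOCKINDEXED:
>     case DLOOP_KIND_INDEXED:
>     case DLOOP_KIND_STRUCT:
>         /* fix up the embedded dataloop pointer(s) for that kind */
>         break;
>     default:
>         DLOOP_Assert(0);   /* the "line 265: 0" failure you are seeing */
> }
>
> So a dataloop whose kind field has been stomped on (or was never
> initialized) falls through to the default branch and trips the assert.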
>
>
>> Assertion failed in file src/mpid/common/datatype/dataloop/dataloop.c
>> at line 157: dataloop->loop_params.cm_t.dataloop
>
> Also inside "Dataloop_update". This assertion
>
> DLOOP_Assert(dataloop->loop_params.cm_t.dataloop)
>
> basically suggests garbage was passed to the Dataloop_update routine.
>
>> Does anyone have a handle on what these routines do and what kind of
>> behavior is generating these errors? The test program is allocating
>> memory and using it to create a window, followed immediately by a call
>> to MPI_Win_lock_all to create a passive synchronization epoch.
>> I’ve been using request-based RMA calls (Rput, Rget, Raccumulate)
>> followed by an immediate call to MPI_Wait for the individual RMA
>> operations. Any suggestions about what these errors are telling me?
>> If I start putting in print statements to narrow down the location of
>> the error, the code runs to completion.
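>>
>> For reference, here is a minimal sketch of that pattern (not the actual
>> test program; the window size, MPI_DOUBLE data, and single Rput to the
>> next rank are just placeholders):
>>
>> #include <mpi.h>
>> #include <stdlib.h>
>>
>> int main(int argc, char **argv)
>> {
>>     int rank, nproc;
>>     MPI_Init(&argc, &argv);
>>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>     MPI_Comm_size(MPI_COMM_WORLD, &nproc);
>>
>>     /* allocate memory and expose it through a window */
>>     double *base = malloc(100 * sizeof(double));
>>     MPI_Win win;
>>     MPI_Win_create(base, 100 * sizeof(double), sizeof(double),
>>                    MPI_INFO_NULL, MPI_COMM_WORLD, &win);
>>
>>     /* passive-target epoch covering all ranks */
>>     MPI_Win_lock_all(0, win);
>>
>>     /* request-based put, waited on immediately */
>>     double buf[100];
>>     for (int i = 0; i < 100; i++) buf[i] = rank;
>>     MPI_Request req;
>>     MPI_Rput(buf, 100, MPI_DOUBLE, (rank + 1) % nproc, 0,
>>              100, MPI_DOUBLE, win, &req);
>>     MPI_Wait(&req, MPI_STATUS_IGNORE);
>>
>>     MPI_Win_unlock_all(win);
>>     MPI_Win_free(&win);
>>     free(base);
>>     MPI_Finalize();
>>     return 0;
>> }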
>
> The two assertions plus your observation that "printf debugging makes it go away" sure sound a lot like some kind of memory corruption. Any chance you can collect some valgrind logs?
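>
> Something along these lines usually does the trick (the binary name is
> just a placeholder; add whatever options you normally pass to mpiexec):
>
>   mpiexec -n 4 valgrind --leak-check=full --track-origins=yes \
>       --log-file=vg.%p.log ./test_rma
>
> where %p makes valgrind write one log file per MPI process.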
>
> ==rob
> _______________________________________________
> discuss mailing list discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss