[mpich-discuss] MPI_Get error with multiple threads on two nodes

Zhao, Xin xinzhao3 at illinois.edu
Wed Jul 30 11:10:11 CDT 2014


Hi all,

I looked through Sangmin's attached code and I think it is correct. If it does not work with the current mpich RMA code, we should create a ticket for it now, and rerun this test once the current code is replaced with the new RMA code.
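
For reference, here is a minimal sketch of the kind of test being discussed (this is not Sangmin's attached code; the thread count, buffer size, and passive-target lock/unlock synchronization below are my assumptions): several pthreads on rank 0 each issue MPI_Get against a window exposed by rank 1, which requires MPI_THREAD_MULTIPLE.

#include <mpi.h>
#include <pthread.h>
#include <stdlib.h>

#define NTHREADS 4
#define COUNT    1024

static MPI_Win win;

static void *getter(void *arg)
{
    (void)arg;
    int *buf = malloc(COUNT * sizeof(int));

    /* Each thread reads rank 1's window under a shared lock. */
    MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);
    MPI_Get(buf, COUNT, MPI_INT, 1, 0, COUNT, MPI_INT, win);
    MPI_Win_unlock(1, win);

    free(buf);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, rank;
    int *base;
    pthread_t th[NTHREADS];

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every rank exposes a window; only rank 0 issues gets. */
    MPI_Alloc_mem(COUNT * sizeof(int), MPI_INFO_NULL, &base);
    MPI_Win_create(base, COUNT * sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    if (rank == 0) {
        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&th[i], NULL, getter, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(th[i], NULL);
    }

    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Win_free(&win);
    MPI_Free_mem(base);
    MPI_Finalize();
    return 0;
}

Something along these lines, launched with mpiexec -n 2 across two nodes, should reproduce the pattern in question.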

Xin

________________________________
From: Seo, Sangmin [sseo at anl.gov]
Sent: Wednesday, July 30, 2014 10:50 AM
To: Zhao, Xin
Subject: Fwd: [mpich-discuss] MPI_Get error with multiple threads on two nodes

FYI,

Begin forwarded message:

From: Rob Latham <robl at mcs.anl.gov>
Subject: Re: [mpich-discuss] MPI_Get error with multiple threads on two nodes
Date: July 30, 2014 at 10:12:38 AM CDT
To: <discuss at mpich.org>
Reply-To: <discuss at mpich.org>



On 07/28/2014 09:28 AM, Balaji, Pavan wrote:

> It seems to work fine with the mpich-dev/new-op-rma branch, which will replace the mpich/master code soon.

To be clear, it replaces mpich/master code *only for ch3*.

> I haven’t tested it with mpich/master.

Can confirm the RMA code Sangmin sent blows up on Blue Gene, but in a different way:

*** glibc detected *** /gpfs/mira-home/robl/src/rma_get_pthread/./rma_get_pthread: free():
corrupted unsorted chunks: 0x0000001e8349b560 ***

Sangmin's original question remains unanswered: is the test correct? new-op-rma seems to think so, but old ch3 and pamid do not.
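
One thing worth ruling out in any test like this (just a guess on my part, not a statement about Sangmin's code) is whether the library actually granted MPI_THREAD_MULTIPLE, since calling MPI_Get from concurrent threads is only defined at that level. A minimal standalone guard, with hypothetical error handling:

/* Hypothetical guard, not part of the posted test: confirm the thread
 * level before spawning threads that make MPI calls concurrently. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not granted (provided=%d)\n",
                provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    /* ... multithreaded RMA test would go here ... */
    MPI_Finalize();
    return 0;
}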

==rob

--
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA
