<html dir="ltr">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<style id="owaParaStyle" type="text/css">P {margin-top:0;margin-bottom:0;}</style>
</head>
<body ocsi="0" fpstyle="1" style="word-wrap:break-word">
<div style="direction: ltr;font-family: Tahoma;color: #000000;font-size: 10pt;">I didn't mean there was a bug in memcpy. I meant the bug was in MPI_Get, which should indeed be checking that pointers are not aliased.<br>

-Nick

________________________________________
From: Brian Van Straalen [bvstraalen@lbl.gov]
Sent: Thursday, August 21, 2014 6:21 PM
To: discuss@mpich.org
Subject: Re: [mpich-discuss] MPI_Get on the same memory location

It depends on which specification of memcpy you are coding against. In
IEEE Std 1003.1-2004 (POSIX), memcpy is declared as

    void *memcpy(void *restrict s1, const void *restrict s2, size_t n);

and the restrict qualifiers promise the implementation that the source
and destination do not overlap. So this form of calling memcpy is
illegal, and the reported error is correct.
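
For comparison, memmove carries no restrict qualifiers and is defined
even when the buffers overlap:

    void *memmove(void *s1, const void *s2, size_t n);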

The routine calling memcpy should verify that the arguments are not
aliased; otherwise incorrect behavior is very likely on newer
architectures, where memcpy implementations exploit the no-overlap
guarantee. It would be better to check for aliased data before calling
memcpy.
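
As a rough sketch of what I mean (assumed helper name, not MPICH code),
the caller can classify the three cases before touching memcpy:

    #include <string.h>

    /* Rough sketch, not MPICH code: dispatch on aliasing before memcpy.
     * Identical buffers are a no-op, partially overlapping buffers go
     * through memmove (which permits overlap), and only disjoint buffers
     * reach memcpy. The overlap test assumes a flat address space. */
    static void copy_checked(void *dst, const void *src, size_t n)
    {
        char *d = (char *) dst;
        const char *s = (const char *) src;

        if (n == 0 || d == s)
            return;              /* same buffer: nothing to do */

        if (d < s + n && s < d + n)
            memmove(d, s, n);    /* overlapping: defined behavior */
        else
            memcpy(d, s, n);     /* disjoint: restrict contract holds */
    }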

Brian Van Straalen

On Aug 21, 2014, at 3:47 PM, Nick Radcliffe <nradclif@cray.com> wrote:
<blockquote type="cite">
<blockquote type="cite">MPIR_Localcopy(357): memcpy arguments alias each other, dst=0x19a5f40<br>
</blockquote>
src=0x19a5f40 len=4<br>
<br>
It looks like memcpy is doing a check to make sure the source and destination buffers don't overlap. This seems like a bug to me -- when doing an MPI_Get from a buffer to itself, the implementation should probably just do nothing and return.<br>
<br>
-Nick<br>
<br>
________________________________________<br>
From: <a href="mailto:alessandro.fanfarillo@gmail.com" target="_blank">alessandro.fanfarillo@gmail.com</a> [<a href="mailto:alessandro.fanfarillo@gmail.com" target="_blank">alessandro.fanfarillo@gmail.com</a>] on behalf of Alessandro Fanfarillo [<a href="mailto:fanfarillo@ing.uniroma2.it" target="_blank">fanfarillo@ing.uniroma2.it</a>]<br>
Sent: Thursday, August 21, 2014 5:25 PM<br>
To: <a href="mailto:discuss@mpich.org" target="_blank">discuss@mpich.org</a><br>
Subject: [mpich-discuss] MPI_Get on the same memory location<br>
<br>
Dear all,<br>
<br>
I'm having the following error:<br>
<br>
Fatal error in MPI_Get: Internal MPI error!, error stack:<br>
MPI_Get(156).......: MPI_Get(origin_addr=0x19a5f40, origin_count=4,<br>
MPI_BYTE, target_rank=0, target_disp=0, target_count=4, MPI_BYTE,<br>
win=0xa0000000) failed<br>
MPIDI_Get(247).....:<br>
MPIR_Localcopy(357): memcpy arguments alias each other, dst=0x19a5f40<br>
src=0x19a5f40 len=4<br>
<br>
if I try to execute MPI_Get on the same memory location on a shared<br>
memory machine (my laptop).<br>
<br>
I cannot find anything in the standard that denies it for the one-sided.<br>
<br>
Running with OpenMPI everything works fine.<br>
<br>
Is it a bug or I missed something in the standard?<br>
<br>
Thanks.<br>
<br>
Alessandro<br>
<br>
--<br>
<br>
Alessandro Fanfarillo<br>
Dip. di Ingegneria Civile ed Ingegneria Informatica<br>
Università di Roma "Tor Vergata"<br>
NCAR Office: +1 (303) 497-2442<br>
Tel: +39-06-7259 7719<br>
_______________________________________________<br>
discuss mailing list <a href="mailto:discuss@mpich.org" target="_blank">discuss@mpich.org</a><br>
To manage subscription options or unsubscribe:<br>
<a href="https://lists.mpich.org/mailman/listinfo/discuss" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
_______________________________________________<br>
discuss mailing list discuss@mpich.org<br>
To manage subscription options or unsubscribe:<br>
https://lists.mpich.org/mailman/listinfo/discuss<br>
</blockquote>