[mpich-discuss] MPI_Get on the same memory location

Alessandro Fanfarillo fanfarillo at ing.uniroma2.it
Sat Aug 23 18:17:19 CDT 2014


Thanks for the suggestions. There are two versions of the library: one based on
MPI-2/MPI-3 and one based on GASNet.

Thanks

Alessandro

On Sat, Aug 23, 2014 at 5:05 PM, Brian Van Straalen <bvstraalen at lbl.gov> wrote:
>
> I’d recommend
>
> @inproceedings{dotsenko2004multi,
>   title={A multi-platform co-array fortran compiler},
>   author={Dotsenko, Yuri and Coarfa, Cristian and Mellor-Crummey, John},
>   booktitle={Proceedings of the 13th International Conference on Parallel
> Architectures and Compilation Techniques},
>   pages={29--40},
>   year={2004},
>   organization={IEEE Computer Society}
> }
>
>
> @inproceedings{coarfa2005evaluation,
>   title={An evaluation of global address space languages: co-array fortran
> and unified parallel C},
>   author={Coarfa, Cristian and Dotsenko, Yuri and Mellor-Crummey, John and
> Cantonnet, Fran{\c{c}}ois and El-Ghazawi, Tarek and Mohanti, Ashrujit and
> Yao, Yiyi and Chavarr{\'\i}a-Miranda, Daniel},
>   booktitle={Proceedings of the tenth ACM SIGPLAN symposium on Principles
> and practice of parallel programming},
>   pages={36--47},
>   year={2005},
>   organization={ACM}
> }
>
>
> Both contain a substantial discussion of aliasing and performance with the
> Rice CAF compiler and the Berkeley UPC compiler.
>
> The short answer is to (a) avoid going through the communication layer when
> you are in a shared address space, and (b) trick Fortran with dummy variables
> so it thinks you are not aliasing, or drop directly to load/store
> instructions when handling accesses that share an address space.
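As a rough illustration of point (a), a PGAS runtime can special-case the fully local path. This is only a sketch; the function name `pgas_get` and its signature are hypothetical, not part of any of the libraries discussed here:

```c
#include <string.h>

/* Hypothetical sketch: a one-sided "get" that bypasses the
 * communication layer when source and destination live in the same
 * address space.  memmove (not memcpy) is used on the local path
 * because a coarray source and destination may overlap. */
void pgas_get(void *dst, const void *src, size_t nbytes,
              int same_address_space)
{
    if (same_address_space) {
        memmove(dst, src, nbytes);   /* plain load/store path */
    } else {
        /* ... fall through to MPI_Get / GASNet / ARMCI here ... */
    }
}
```

The remote branch is left as a comment on purpose: which communication layer fills it in is exactly the design decision the papers above evaluate.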
>
> You could also look at some messaging layers that are designed specifically
> to support global address space language development, like GASNet or ARMCI.
>
> Brian
>
>
>
> On Aug 23, 2014, at 7:44 AM, William Gropp <wgropp at illinois.edu> wrote:
>
> It is amusing that the original reason for the restriction on aliasing of
> arguments comes from Fortran: MPI-1 wanted the language-independent spec to
> be expressible in both Fortran and C, and this is a property of Fortran (and
> one which aids performance).
>
> Bill
>
> William Gropp
> Director, Parallel Computing Institute
> Thomas M. Siebel Chair in Computer Science
> University of Illinois Urbana-Champaign
>
>
>
>
>
> On Aug 22, 2014, at 6:04 PM, Brian Van Straalen <bvstraalen at lbl.gov> wrote:
>
>
> Disabling the check is probably a bad idea.  If you disable it in MPI, you
> will still push a call to memcpy that has aliased buffers.  On many systems
> memcpy still does the right thing, but modern fancy memcpy implementations
> have permission to produce wrong results if the regions overlap.
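To make the overlap hazard concrete, here is a minimal C illustration. The helper name `shift_right_two` is invented for the example; the point is the contract difference between the two library calls:

```c
#include <string.h>

/* memcpy on overlapping regions is undefined behavior; memmove is
 * specified to behave as if the bytes were first copied to a
 * temporary buffer, so it is safe when regions overlap. */
void shift_right_two(char *buf)
{
    /* memcpy(buf + 2, buf, 6) would be UB here because source and
     * destination overlap; a vectorized or backwards-copying memcpy
     * may legally corrupt the data. */
    memmove(buf + 2, buf, 6);   /* "abcdefgh" becomes "ababcdef" */
}
```

On a given platform memcpy may happen to produce the same result, which is exactly why this class of bug tends to surface only after a libc upgrade.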
>
> The MPI spec is clear, and the coarray spec is also clear, and clearly
> different.  I would recommend handling this in your library implementation
> and leaving your MPI build standard.  Other MPI builds will be standard too,
> and you would just be delaying the work.  You cannot call MPI_Get with
> aliased locations.
>
> Brian
>
>
>
> On Aug 22, 2014, at 3:18 PM, Alessandro Fanfarillo
> <fanfarillo at ing.uniroma2.it> wrote:
>
> Could you tell me which flag disables the check? I would suggest this
> procedure as a temporary workaround.
>
> Thanks
>
> On Fri, Aug 22, 2014 at 2:41 PM, Balaji, Pavan <balaji at anl.gov> wrote:
>
>
> On Aug 22, 2014, at 1:43 PM, Brian Van Straalen <bvstraalen at lbl.gov> wrote:
>
> Is there a significant cost to have the MPI implementation provide a check
> for aliasing?
>
>
> We do check for this error by default in MPICH (which is why this email
> thread started).
>
> There’s a way to turn off the checking through a configure flag as well as
> an environment variable, if needed.  But that’s essentially saying: I know
> it’s wrong, but I’m too lazy to fix my application.
>
>  — Pavan
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji
>
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss
>
>
>
>
> --
>
> Alessandro Fanfarillo
> Dip. di Ingegneria Civile ed Ingegneria Informatica
> Università di Roma "Tor Vergata"
> NCAR Office: +1 (303) 497-2442
> Tel: +39-06-7259 7719
>
>
> Brian Van Straalen         Lawrence Berkeley Lab
> BVStraalen at lbl.gov         Computational Research
> (510) 486-4976             Division (crd.lbl.gov)
>
>
>
>
>
>
>
>
> Brian Van Straalen         Lawrence Berkeley Lab
> BVStraalen at lbl.gov         Computational Research
> (510) 486-4976             Division (crd.lbl.gov)
>
>
>
>
>



-- 

Alessandro Fanfarillo
Dip. di Ingegneria Civile ed Ingegneria Informatica
Università di Roma "Tor Vergata"
NCAR Office: +1 (303) 497-2442
Tel: +39-06-7259 7719


