[mpich-discuss] halo update performance comparison

Kokron, Daniel S. (GSFC-606.2)[Computer Sciences Corporation] daniel.s.kokron at nasa.gov
Wed Jan 29 15:47:37 CST 2014


See the attached tarball.  The included README has instructions.

Daniel Kokron
NASA Ames (ARC-TN)
SciCon group
301-286-3959

________________________________________
From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Rajeev Thakur [thakur at mcs.anl.gov]
Sent: Monday, January 13, 2014 5:00 PM
To: discuss at mpich.org
Subject: Re: [mpich-discuss] halo update performance comparison

If you could send us your test case, that would be great.

Rajeev

On Jan 13, 2014, at 3:58 PM, "Kokron, Daniel S. (GSFC-606.2)[Computer Sciences Corporation]" <daniel.s.kokron at nasa.gov> wrote:

> I should be able to package up my test case.
> Please let me know if I can help.
>
> Daniel Kokron
> NASA Ames (ARC-TN)
> SciCon group
> 301-286-3959
>
> ________________________________________
> From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Rajeev Thakur [thakur at mcs.anl.gov]
> Sent: Monday, January 13, 2014 4:45 PM
> To: discuss at mpich.org
> Subject: Re: [mpich-discuss] halo update performance comparison
>
>> Is there room for improvement?
>
> Yes, we are looking into it.
>
> Rajeev
>
> On Dec 13, 2013, at 2:31 PM, "Kokron, Daniel S. (GSFC-610.1)[Computer Sciences Corporation]" <daniel.s.kokron at nasa.gov> wrote:
>
>> All,
>>
>> I have been interested in MPI datatypes since reading about them and playing with the benchmark found at http://unixer.de/research/datatypes/ddtbench/
>>
>> I am working with a weather model that spends considerable time doing halo (aka ghost cell) updates.  The existing code uses irecv/isend.  Since most of the halo-update time is actually spent packing and unpacking the MPI buffers, I was hoping that using derived datatypes would improve performance.  Unfortunately, the datatype version is quite a bit slower than the standard non-blocking pt2pt code.  I found this to be the case for both mpich-3.1rc1 and a recent version of a vendor MPI implementation (not MPICH based).  I have not run this comparison with OpenMPI.  (A minimal illustrative sketch of the two approaches appears at the end of this archived post.)
>>
>> Is data type performance an active area of development?  Is there room for improvement?
>>
>> Daniel Kokron
>> NASA Ames (ARC-TN)
>> SciCon group
>> 301-286-3959
>> _______________________________________________
>> discuss mailing list     discuss at mpich.org
>> To manage subscription options or unsubscribe:
>> https://lists.mpich.org/mailman/listinfo/discuss
>
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

_______________________________________________
discuss mailing list     discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ForMPICH.tgz
Type: application/x-compressed-tar
Size: 113961 bytes
Desc: ForMPICH.tgz
URL: <http://lists.mpich.org/pipermail/discuss/attachments/20140129/c9c159ed/attachment.bin>
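
For readers who want to see concretely what is being compared in the quoted Dec 13 message, here is a minimal sketch.  It is not taken from the attached ForMPICH.tgz test case; the grid sizes (NX, NY), the rank pairing, and all variable names are illustrative assumptions.  Each pair of ranks exchanges one non-contiguous column halo, first by manually packing and unpacking a contiguous buffer around MPI_Isend/MPI_Irecv, and then with an MPI_Type_vector derived datatype that leaves the packing to the MPI library.

/* Illustrative sketch only (not the attached test case): exchange one
 * non-contiguous "column" halo between paired ranks, first with manual
 * pack/unpack around MPI_Isend/MPI_Irecv, then with an MPI_Type_vector
 * derived datatype.  Dimensions and neighbor logic are assumptions. */
#include <mpi.h>
#include <stdlib.h>

#define NX 512   /* local rows    (assumed size) */
#define NY 512   /* local columns (assumed size) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *u = calloc((size_t)NX * NY, sizeof *u);  /* local field, row-major */
    int peer = rank ^ 1;                             /* pair ranks 0<->1, 2<->3, ... */

    if (peer < size) {
        MPI_Request reqs[2];

        /* --- Version 1: manual pack/unpack around Isend/Irecv --- */
        double *sendbuf = malloc(NX * sizeof *sendbuf);
        double *recvbuf = malloc(NX * sizeof *recvbuf);
        for (int i = 0; i < NX; i++)                 /* pack last interior column */
            sendbuf[i] = u[i * NY + (NY - 2)];
        MPI_Irecv(recvbuf, NX, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(sendbuf, NX, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        for (int i = 0; i < NX; i++)                 /* unpack into ghost column */
            u[i * NY + (NY - 1)] = recvbuf[i];
        free(sendbuf);
        free(recvbuf);

        /* --- Version 2: let the MPI library pack via a derived datatype --- */
        MPI_Datatype column;
        MPI_Type_vector(NX, 1, NY, MPI_DOUBLE, &column);  /* NX blocks of 1, stride NY */
        MPI_Type_commit(&column);
        MPI_Irecv(&u[NY - 1], 1, column, peer, 1, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&u[NY - 2], 1, column, peer, 1, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        MPI_Type_free(&column);
    }

    free(u);
    MPI_Finalize();
    return 0;
}

The derived-datatype version removes the explicit copy loops from user code, but it only pays off if the library packs (or avoids packing) at least as efficiently as the hand-written loops, which is exactly the gap reported in the quoted message.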

