[mpich-discuss] halo update performance comparison
Kokron, Daniel S. (GSFC-606.2)[Computer Sciences Corporation]
daniel.s.kokron at nasa.gov
Mon Jan 13 15:58:49 CST 2014
I should be able to package up my test case.
Please let me know if I can help.
NASA Ames (ARC-TN)
From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Rajeev Thakur [thakur at mcs.anl.gov]
Sent: Monday, January 13, 2014 4:45 PM
To: discuss at mpich.org
Subject: Re: [mpich-discuss] halo update performance comparison
> Is there room for improvement?
Yes, we are looking into it.
On Dec 13, 2013, at 2:31 PM, "Kokron, Daniel S. (GSFC-610.1)[Computer Sciences Corporation]" <daniel.s.kokron at nasa.gov> wrote:
> I have been interested in MPI datatypes since reading about them and playing with the benchmark found at http://unixer.de/research/datatypes/ddtbench/
> I am working with a weather model that spends considerable time doing halo (aka ghost cell) updates. The existing code uses irecv/isend. Since most of the time spent doing halo updates is actually spent packing and unpacking the MPI buffers, I was hoping the use of datatypes would improve performance.
> Unfortunately, the datatype version is actually quite a bit slower than the standard non-blocking pt2pt code. I found this to be the case for both mpich-3.1rc1 and a recent version of a vendor MPI implementation (not MPICH based). I have not run this comparison using OpenMPI.
> Is data type performance an active area of development? Is there room for improvement?
> Daniel Kokron
> NASA Ames (ARC-TN)
> SciCon group
> discuss mailing list discuss at mpich.org
> To manage subscription options or unsubscribe: