[mpich-discuss] MPI Benchmark Suite

Jiri Simsa jsimsa at cs.cmu.edu
Fri Jul 5 17:25:43 CDT 2013


Jeff,

I am not trying to benchmark the performance of MPI or to beat up on
poor old bulk-synchronous MPI-1. My goal is to understand which API
functions are most commonly used in representative MPI programs.
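
For concreteness, one standard way to gather such counts is MPI's PMPI
profiling interface; below is a minimal sketch of the idea (an
illustration, not an existing tool) that shadows MPI_Send, tallies
calls, and reports the per-rank total when the application finalizes:

#include <mpi.h>
#include <stdio.h>

static long send_count = 0;   /* MPI_Send calls seen by this process */

/* Shadow MPI_Send: tally the call, then forward to the MPI library. */
int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    send_count++;
    return PMPI_Send(buf, count, datatype, dest, tag, comm);
}

/* Report the tally as the application shuts MPI down. */
int MPI_Finalize(void)
{
    int rank;
    PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("rank %d: MPI_Send called %ld times\n", rank, send_count);
    return PMPI_Finalize();
}

Linked ahead of the MPI library, a wrapper like this profiles an
unmodified application, and the same pattern extends to any subset of
the API.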

I am doing a Ph.D. on systematic testing of concurrent software. In my
experience, software developers use concurrency for the sake of
performance, but are often unaware of the quantitative implications
their designs have on the ability to test their code thoroughly.
Recently, I have been involved in a project that explores the
performance-testability trade-off in the context of multi-threaded
programs, and I am interested in doing the same in the context of MPI
programs. As a start, I would like to get a better sense of how people
use MPI.
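
(To make "quantitative implications" concrete with a standard
back-of-the-envelope figure: n threads executing k steps each admit
(nk)!/(k!)^n interleavings, so even a 2-thread, 10-step program already
has 20!/(10!)^2 = 184,756 schedules for a systematic tester to cover.)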

I hope this puts to rest any worries you might have about unfair
performance comparisons and sheds some light on what I mean by a
"macro-benchmark".

Best,

--Jiri

On Fri, Jul 5, 2013 at 1:01 PM, Jeff Hammond <jeff.science at gmail.com> wrote:

> What is a macrobenchmark?  The more complexity you add to a benchmark,
> the harder it is to see the details.  This is why most MPI benchmarks
> are microbenchmarks.  As soon as you add complexity to the usage of
> MPI, you run into issues with algorithmic variation.  For example, all
> of the "macrobenchmarks" I have seen (e.g., NAS PB) are written to
> MPI-1.  They are also quite simple in their use of MPI, relying upon
> barrier synchronization when it may not be necessary or appropriate.
> You may see papers that show UPC beating MPI on the NAS PB, but these
> comparisons are apples-to-oranges; if one rewrote the code to use
> nonblocking collectives, RMA, or another less synchronous approach,
> the performance would likely be as good as, if not better than, UPC's.
> In this respect, one is not so much evaluating MPI performance (which
> is the point of an MPI benchmark, no?) as evaluating the use of a
> simplistic and perhaps overly synchronous programming model that
> happens to use MPI-1.
>
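> As an aside, here is a minimal MPI-3 sketch of that idea (the two
> application hooks are hypothetical placeholders): start the reduction,
> overlap it with independent work, and wait only when the result is
> needed, instead of stalling every rank at a barrier.
>
>   #include <mpi.h>
>
>   double compute_partial(void);      /* hypothetical local kernel   */
>   void   do_independent_work(void);  /* hypothetical unrelated work */
>
>   void reduce_with_overlap(void)
>   {
>       MPI_Request req;
>       double local = compute_partial(), global;
>       /* Start the reduction without blocking (MPI-3). */
>       MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
>                      MPI_COMM_WORLD, &req);
>       do_independent_work();         /* overlaps the collective */
>       MPI_Wait(&req, MPI_STATUS_IGNORE);
>       /* 'global' now holds the sum on every rank. */
>   }
>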
> What is it you really want to do?  Do you have some new-fangled
> programming model (NFPM) and you want to beat up on poor old
> bulk-synchronous MPI-1?  If so, just don't do that.  Write your own
> comparison between NFPM and MPI-3 using all the same algorithmic
> tricks that NFPM does.  If you want to understand the performance of
> MPI in detail when employed by nontrivial applications, then you
> should try to find a nontrivial application that uses MPI very
> effectively.  Unfortunately, many apps use MPI in a simplistic way
> because they are not communication-limited in the way that
> strong-scaled macrobenchmarks are, and hence no exotic use of MPI is
> required.  I find that better algorithms are usually a more effective
> way to improve application performance than going OCD on my use of
> MPI...
>
> It is entirely possible that there does not exist anything out there
> that you can reuse.  However, this is clearly no problem for you since
> you are now well-rested and can complete the project by yourself :-)
> I am, of course, referencing
> http://www.cs.cmu.edu/~jsimsa/images/dilbert.jpg on your home page :-)
>
> Best,
>
> Jeff
>
> On Fri, Jul 5, 2013 at 11:22 AM, Pavan Balaji <balaji at mcs.anl.gov> wrote:
> >
> > On 07/05/2013 10:38 AM, Jiri Simsa wrote:
> >>
> >> Could anyone point me in the direction of representative MPI
> >> programs? I am looking for something like PARSEC
> >> (http://parsec.cs.princeton.edu/) for MPI. In other words, I am
> >> interested in macro-benchmarks, not micro-benchmarks. Thank you.
> >
> >
> > You could try the Sequoia suite.  There are also the NAS Parallel
> > Benchmarks and the Graph500 benchmark.
> >
> >  -- Pavan
> >
> > --
> > Pavan Balaji
> > http://www.mcs.anl.gov/~balaji
>
> --
> Jeff Hammond
> jeff.science at gmail.com
>