[mpich-discuss] MPI_THREAD_MULTIPLE

Jeff Hammond jeff.science at gmail.com
Mon Jul 1 16:11:57 CDT 2013


I have done my own studies of this, but only in detail on Blue Gene/Q,
which offers both fine-grained and fat locking to implement
MPI_THREAD_MULTIPLE.
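
A minimal sketch of checking what a given MPI actually grants: request
the level you want and inspect 'provided', since an implementation is
allowed to hand back something weaker than you asked for:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        /* Ask for full multithreaded MPI; the library reports what it
           can really give back in 'provided'. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
            printf("MPI_THREAD_MULTIPLE unavailable; provided = %d\n",
                   provided);
        MPI_Finalize();
        return 0;
    }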

I believe that Cray MPI takes a noticeable hit under
MPI_THREAD_MULTIPLE because their network hardware has very low
latency, so the software overhead of locking (which is a fat lock in
their case, AFAIK) is significant by comparison.  Is that the vendor in
question?

Do you have the option to aggregate communication and/or otherwise use
MPI_THREAD_SERIALIZED instead?  If not, then there really isn't an
alternative, so a comparative study will only make you a sad panda.
However, if you can use MPI_THREAD_SERIALIZED, perhaps with some
overhead, then you can compare the two implementations.
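
If it helps, here is a minimal sketch of the MPI_THREAD_SERIALIZED
route, assuming pthreads; the mutex and the halo_exchange wrapper are
illustrative names, not anything from MPICH:

    #include <mpi.h>
    #include <pthread.h>

    /* Under MPI_THREAD_SERIALIZED only one thread may be inside the
       MPI library at a time; enforcing that is the application's job,
       e.g. with a single lock around every call site. */
    static pthread_mutex_t mpi_mutex = PTHREAD_MUTEX_INITIALIZER;

    /* Illustrative halo exchange with a single neighbor.  Packing
       many small messages into one buffer per neighbor, as here, is
       also what makes dropping from MPI_THREAD_MULTIPLE feasible. */
    void halo_exchange(double *sendbuf, double *recvbuf, int count,
                       int neighbor, MPI_Comm comm)
    {
        pthread_mutex_lock(&mpi_mutex);
        MPI_Sendrecv(sendbuf, count, MPI_DOUBLE, neighbor, 0,
                     recvbuf, count, MPI_DOUBLE, neighbor, 0,
                     comm, MPI_STATUS_IGNORE);
        pthread_mutex_unlock(&mpi_mutex);
    }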

It would be helpful if you could share code and system details.

Best,

Jeff

On Mon, Jul 1, 2013 at 2:05 AM, Bobby Philip
<bphilip.kondekeril at gmail.com> wrote:
> Hi:
>
> Are there any studies on the effect of turning on MPI_THREAD_MULTIPLE
> with the latest versions of MPICH? I have an AMR application where
> halo or ghost updates require lots of small messages to be
> sent/received, and I am currently seeing a performance hit with a
> vendor-specific implementation based on MPICH2. I am trying to find
> out whether there are any implementations out there that might
> deliver better performance.
>
> Thanks,
> Bobby
>



-- 
Jeff Hammond
jeff.science at gmail.com


