[mpich-announce] Announcing the availability of MPICH 3.3b1

Kenneth Raffenetti raffenet at mcs.anl.gov
Mon Feb 5 11:03:37 CST 2018


A new preview release of MPICH, 3.3b1, is now available for download. 
MPICH 3.3 contains a new (non-default) device layer implementation – 
CH4. CH4 is designed for low software overheads to better exploit 
next-generation hardware. An OFI (http://libfabric.org) or UCX 
(http://openucx.org) library is required to build CH4. Example configure 
lines:

./configure --with-device=ch4:ofi --with-libfabric=<path/to/ofi/install> 
./configure --with-device=ch4:ucx --with-ucx=<path/to/ucx/install>
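
As a quick sanity check after building and installing, the mpichversion 
tool reports the configured device, and the bundled cpi example (built 
under examples/ in the build tree) serves as a simple smoke test; the 
install path below is a placeholder:

<path/to/mpich/install>/bin/mpichversion
<path/to/mpich/install>/bin/mpiexec -n 4 examples/cpi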

CH4 is still in the beta stage, meaning there are known build issues and 
bugs, but most tests and common benchmarks complete successfully on 
64-bit Linux systems. Since 3.3a3, there have been further stability 
improvements, bug fixes, and code cleanup.

Also in this release is a reorganization of the MPI collectives that 
makes it easier to integrate new algorithms. A new framework allows a 
single collective implementation to move data over either generic or 
device-specific transport functions using a C++-template-style system. 
Support is added for creating communicators based on hardware topology 
hints, and the SLURM integration in Hydra is updated to work with the 
latest node list format. You can find the release on our downloads page 
(www.mpich.org/downloads).

Regards,
The MPICH team

===============================================================================
                                Changes in 3.3
===============================================================================

  # CH4 Device: A new device layer implementation designed for low software
    overheads. CH4 has experimental support for OFI and UCX network
    libraries, and POSIX shared memory. Thanks to Intel, Mellanox, and
    RIKEN AICS for participating in the CH4 coding effort.

  # Added support for splitting communicators based on hardware
    topology using info hints.
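
    A minimal sketch of the calling pattern, assuming a hypothetical
    "shmem_topo" info key with value "socket" (the exact key and value
    names MPICH accepts may differ; consult the documentation):

      /* Split MPI_COMM_WORLD into sub-communicators at a hardware
       * topology level requested through an info hint. */
      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv)
      {
          MPI_Comm topo_comm;
          MPI_Info info;
          int rank, topo_rank;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          MPI_Info_create(&info);
          /* hypothetical hint: restrict the split to a socket */
          MPI_Info_set(info, "shmem_topo", "socket");

          MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED,
                              0, info, &topo_comm);

          MPI_Comm_rank(topo_comm, &topo_rank);
          printf("world rank %d -> split rank %d\n", rank, topo_rank);

          MPI_Info_free(&info);
          MPI_Comm_free(&topo_comm);
          MPI_Finalize();
          return 0;
      }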

  # Fixed SLURM integration in Hydra for new node list format.

  # Better organization of collectives in the MPI layer. The new
    scheme, which decouples algorithm implementations from selection
    logic, enables easier integration of additional algorithms.
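
    At run time the selection logic can be steered with MPICH control
    variables (CVARs). An illustration, assuming a CVAR of the form
    MPIR_CVAR_BCAST_INTRA_ALGORITHM (names and accepted values may
    differ; ./my_app is a placeholder):

      MPIR_CVAR_BCAST_INTRA_ALGORITHM=binomial mpiexec -n 8 ./my_app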

  # TSP collectives framework: A C++-template style framework for
    collective algorithms is added, allowing a single collective
    implementation to move data over generic or device-specific
    transport functions.
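
    As a schematic illustration of the "C template" idea only (the real
    TSP framework uses different macro and function names), a single
    algorithm body can be written against a transport macro and
    instantiated once per transport:

      #include <stdio.h>

      /* One algorithm body, parameterized by a transport macro. */
      #define DEFINE_RING_STEP(PREFIX, TSP_SEND)                      \
          static void PREFIX##_ring_step(int rank, int size, int v) { \
              TSP_SEND(v, (rank + 1) % size);                         \
          }

      /* Instantiation over a "generic" transport. */
      #define GENERIC_SEND(v, dst) \
          printf("generic send %d -> rank %d\n", (v), (dst))
      DEFINE_RING_STEP(generic, GENERIC_SEND)

      /* Instantiation over a "device-specific" transport. */
      #define DEVICE_SEND(v, dst) \
          printf("device send %d -> rank %d\n", (v), (dst))
      DEFINE_RING_STEP(device, DEVICE_SEND)

      int main(void)
      {
          generic_ring_step(0, 4, 42); /* same algorithm, */
          device_ring_step(0, 4, 42);  /* two transports  */
          return 0;
      }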

  # Cleanup of whitespace (ch3 excluded) using the
    maint/code-cleanup.sh script. For instructions on how to update
    PRs/branches based on MPICH before the cleanup, see
    https://github.com/pmodels/mpich/wiki/Code-Cleanup-Procedure.

  # Removed the PAMI device and poe PMI client.

  # Several other minor bug fixes, memory leak fixes, and code cleanup.

    A full list of changes is available at the following link:

      http://git.mpich.org/mpich.git/shortlog/v3.2..v3.3b1

    A list of bugs that have been fixed is available at the following
    link:

      https://github.com/pmodels/mpich/milestone/25?closed=1

