[mpich-announce] Announcing the availability of MPICH 3.3b2

Kenneth Raffenetti raffenet at mcs.anl.gov
Tue Apr 10 09:45:11 CDT 2018


A new preview release of MPICH, 3.3b2, is now available for download. 
MPICH 3.3 contains a new (non-default) device layer implementation – 
CH4. CH4 is designed for low software overheads to better exploit 
next-generation hardware. An OFI (http://libfabric.org) or UCX 
(http://openucx.org) library is required to build CH4. Example configure 
lines:

./configure --with-device=ch4:ofi --with-libfabric=<path/to/ofi/install> 
./configure --with-device=ch4:ucx --with-ucx=<path/to/ucx/install>

CH4 is still in the beta stage, meaning there are known build issues and 
bugs, but most tests and common benchmarks complete on 64-bit Linux 
systems. Since 3.3b1, there have been further stability improvements, bug 
fixes, and code cleanup. PMIx client library support has been added to 
CH4 for launching with a compatible PMIx server.

Also in this release is a reorganization of the MPI collectives that makes 
it easier to integrate new algorithms. A new framework, built in a 
C++-template style, allows a single collective implementation to move data 
over either generic or device-specific transport functions. Support has 
been added for creating communicators based on hardware topology hints 
(see the sketch below), and SLURM integration in Hydra has been updated to 
work with the latest node list format. You can find the release on our 
downloads page (www.mpich.org/downloads).
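
As an illustration, a topology-based split might look like the following 
minimal C sketch. Note that the info key and value used here ("shmem_topo" 
and "socket") are placeholders for illustration only; the hint names 
actually recognized by this release are described in the MPICH 
documentation.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Attach a topology hint to the split. The key/value pair below is a
     * placeholder; see the MPICH documentation for the supported hints. */
    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "shmem_topo", "socket");

    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, info,
                        &node_comm);

    int local_rank;
    MPI_Comm_rank(node_comm, &local_rank);
    printf("world rank %d -> local rank %d\n", world_rank, local_rank);

    MPI_Comm_free(&node_comm);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}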

Regards,
The MPICH team

===============================================================================
                                Changes in 3.3
===============================================================================

  # CH4 Device: A new device layer implementation designed for low software
    overheads. CH4 has experimental support for OFI and UCX network
    libraries, and POSIX shared memory. Thanks to Intel, Mellanox, and
    RIKEN AICS for participating in the CH4 coding effort.

  # Added support for splitting communicators based on hardware
    topology using info hints.

  # Fixed SLURM integration in Hydra for new node list format.

  # Added support for PMIx (https://pmix.github.io/pmix/) client
    library in CH4 netmods. Note that you must use a compatible PMIx
    server in this configuration.

  # Better organization of collectives in the MPI layer. The new
    scheme, which de-couples implementation from selection logic,
    enables easier integration of additional algorithms.

  # TSP collectives framework: A C++-template style framework for
    collective algorithms is added to allow a single collective
    implementation to move data over generic or device-specific
    transport functions.

  # Improvements to derived datatype testing (DTPools -
    https://wiki.mpich.org/mpich/index.php/DTPools).

  # Added new "non-catastrophic" error codes to expose internal
    resource exhaustion (see the error-handling sketch after this
    list).

  # Cleanup of whitespace (ch3 excluded) using the
    maint/code-cleanup.sh script. For instructions on how to update
    PRs/branches based on MPICH before the cleanup, see
    https://github.com/pmodels/mpich/wiki/Code-Cleanup-Procedure.

  # Removed the PAMI device and poe PMI client.

  # Several other minor bug fixes, memory leak fixes, and code cleanup.
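
    As referenced above, the following is a minimal C sketch of how an
    application can observe returned error codes rather than aborting. It
    uses only standard MPI error handling (MPI_ERRORS_RETURN plus
    MPI_Error_class / MPI_Error_string); the specific new error codes are
    not named here and are documented by MPICH.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Return error codes to the caller instead of aborting, so that
         * non-catastrophic errors (e.g. internal resource exhaustion)
         * can be examined and handled by the application. */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        int rc = MPI_Barrier(MPI_COMM_WORLD);
        if (rc != MPI_SUCCESS) {
            char msg[MPI_MAX_ERROR_STRING];
            int len, errclass;
            MPI_Error_class(rc, &errclass);
            MPI_Error_string(rc, msg, &len);
            fprintf(stderr, "MPI error class %d: %s\n", errclass, msg);
        }

        MPI_Finalize();
        return 0;
    }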

    A full list of changes is available at the following link:

      http://git.mpich.org/mpich.git/shortlog/v3.2..v3.3b2

    A list of bugs that have been fixed is available at the following
    link:

      https://github.com/pmodels/mpich/milestone/32?closed=1
