[mpich-announce] MPICH 5.0.0b1 Released
Zhou, Hui
zhouh at anl.gov
Fri Nov 7 14:22:39 CST 2025
We are happy to announce the release of MPICH 5.0.0b1! Please test it and send us your feedback.
This is the first beta release for the MPICH 5.0.x release cycle. Major features added in this release include:
* Full support for the MPI 5.0 standard, including support for the MPI ABI
* Support for true MPI sessions
* Internal datatypes for mapping builtin MPI datatypes
* CH4:OFI native RNDV mode
The release tarball can be found on our downloads page.
For the full set of commits, see https://github.com/pmodels/mpich/compare/v4.3.2...v5.0.0b1.
Following is the complete CHANGELOG:
===============================================================================
Changes in 5.0
===============================================================================
# MPI_VERSION/MPI_SUBVERSION updated to 5 and 0. MPICH now supports the MPI 5.0
standard.
# MPIR_CHKLMEM_ and MPIR_CHKPMEM_ macros are simplified, removing non-essential
arguments such as the type cast and custom error messages.
# Renamed MPIR_CVAR_DEBUG_PROGRESS_TIMEOUT to MPIR_CVAR_PROGRESS_TIMEOUT; it is
now enabled whether or not --enable-g=progress is used at configure time.
# MPICH now generates the MPI-IO bindings when ROMIO is built inside MPICH.
Using ROMIO outside MPICH remains unchanged.
# Yaksa is now maintained inside MPICH rather than as an external submodule.
# Added internal builtin datatypes; external builtin datatypes are mapped to
internal types. For example, both MPI_INT and MPI_INT32_T are mapped to the
internal type MPIR_INT32. NOTE: direct usage of external MPI types is disallowed
inside MPICH. For example, use MPIR_INT_INTERNAL in place of MPI_INT; for other
commonly used types such as MPI_BYTE, MPI_CHAR, and MPI_AINT, use
MPIR_BYTE_INTERNAL, MPIR_CHAR_INTERNAL, and MPIR_AINT_INTERNAL instead. There is
no impact on users.
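As a rough illustration of the mapping idea (this is a toy sketch, not MPICH's
actual implementation; only the MPIR_INT32-style names come from the changelog,
everything else is assumed):

```c
/* Toy sketch of mapping builtin C types to fixed-width internal datatypes.
 * Only the MPIR_INT32-style names come from the changelog above; the enum,
 * function, and mapping rule here are illustrative assumptions. */
#include <assert.h>

typedef enum { MPIR_INT8, MPIR_INT16, MPIR_INT32, MPIR_INT64 } mpir_internal_t;

/* Map an integer type, identified by its size in bytes, to a fixed-width
 * internal type. On an LP64 platform both int (MPI_INT) and int32_t
 * (MPI_INT32_T) are 4 bytes, so both land on MPIR_INT32. */
static mpir_internal_t map_int_by_size(unsigned size_bytes)
{
    switch (size_bytes) {
        case 1:  return MPIR_INT8;
        case 2:  return MPIR_INT16;
        case 4:  return MPIR_INT32;
        default: return MPIR_INT64;
    }
}
```

The point of the indirection is that many external handles can collapse onto
one internal representation, so internal code paths only deal with the
fixed-width set.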
# Removed MPIR_Find_{local,external}. Added attr and hierarchical fields to the
MPIR_Comm struct to more efficiently query a communicator's hierarchical
structure.
# Use MPIR_Comm_get_{node,node_roots}_comm to obtain node_comm and node_roots_comm.
This provides a mechanism to create subcomms on demand. In general, use
MPIR_Subcomm_create to create subcomms. Subcomms are marked by the
MPIR_COMM_ATTR__SUBCOMM bit in the attr field; they can be much more lightweight
than user-visible communicators.
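The subcomm marking can be pictured with a minimal sketch (the bit's position
and the helper below are assumptions; only the MPIR_COMM_ATTR__SUBCOMM name is
from the changelog):

```c
/* Minimal sketch of testing the subcomm bit in a communicator's attr field.
 * The bit position and this helper are illustrative assumptions; only the
 * MPIR_COMM_ATTR__SUBCOMM name appears in the changelog above. */
#include <assert.h>

#define MPIR_COMM_ATTR__SUBCOMM (1u << 0)   /* assumed bit value */

/* Return nonzero if the attr bits mark this communicator as a subcomm. */
static int toy_is_subcomm(unsigned attr)
{
    return (attr & MPIR_COMM_ATTR__SUBCOMM) != 0;
}
```

Because a subcomm is just a bit in attr rather than a full user-visible
communicator, checks like this stay cheap on hot paths.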
# ADI: MPID_Comm_get_lpid removed. Lpids are looked up from the local_group and
remote_group in the MPIR_Comm struct.
# ADI: MPID_Intercomm_exchange_map renamed to MPID_Intercomm_exchange; its
parameters now include tag and context_id, and it performs both the context_id
exchange and the lpid exchange.
# Added MPI_LOGICAL1, MPI_LOGICAL2, MPI_LOGICAL4, MPI_LOGICAL8, and MPI_LOGICAL16.
# Added MPIX_BFLOAT16, and added software reduction support for MPIX_BFLOAT16
and MPIX_C_FLOAT16.
# Reworked AVX and AVX512 support. By default, MPICH uses a runtime check to
detect whether AVX or AVX512 is supported. If supported, MPICH will internally
use AVX and AVX512 for inter-NUMA and Intel GPU device memory copy operations.
The --enable-fast=avx and --enable-fast=avx512f options now do two things: 1)
skip the runtime check and force-enable AVX and AVX512; 2) enable the use of
AVX and AVX512 to build the entire libmpi.so (instead of only selected memory
copy operations). A new summary in ./configure reports how these options are
applied.
# PMI 2 is now deprecated. Please consider switching to PMI 1.
# PMI 2 thread support is removed.
# PMI 1 upgraded to PMI 1.2, adding the new API PMI_Barrier_group.
PMI_Barrier_group supports KVS exchange and barrier over a group of processes
rather than the world. Usage from multiple threads is supported.
# MPI_Session_init defaults to MPI_THREAD_MULTIPLE. Thread levels are global:
the first MPI_Session_init, MPI_Init, or MPI_Init_thread sets the thread level;
later init calls ignore the user's request and inherit the global thread level.
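The "first initializer wins" rule can be sketched as follows (toy code, not
MPICH's implementation; the constants are stand-ins for the MPI_THREAD_*
levels):

```c
/* Toy sketch of the global thread-level rule described above: the first
 * initializer sets the level, later initializers ignore the request and
 * inherit it. Not MPICH code; constants stand in for MPI_THREAD_*. */
#include <assert.h>

enum { TOY_THREAD_SINGLE = 0, TOY_THREAD_MULTIPLE = 3 };

static int g_thread_level = -1;     /* -1: no init has happened yet */

/* Returns the granted thread level. */
static int toy_init_thread(int requested)
{
    if (g_thread_level < 0)
        g_thread_level = requested;  /* first init sets the global level */
    return g_thread_level;           /* later inits inherit it */
}
```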
# The internal collective interface replaces the last parameter "MPIR_Errflag_t errflag"
with "int coll_attr". Use MPIR_COLL_ATTR_SYNC for internal collective usages
where completing the synchronization is more critical than batch latency.
# CH4:OFI added a native RNDV feature. Set MPIR_CVAR_CH4_OFI_EAGER_THRESHOLD to
enable the RNDV path. The RNDV path supports the following protocols:
pipeline, read, write, and direct.
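For example, one might enable the RNDV path from the environment like this (the
threshold value and application name are placeholders for illustration; only
the CVAR name comes from the changelog):

```shell
# Hypothetical invocation: messages above 64 KiB take the RNDV path.
# The value 65536 and ./my_app are placeholders, not recommendations.
MPIR_CVAR_CH4_OFI_EAGER_THRESHOLD=65536 mpiexec -n 2 ./my_app
```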
--
The MPICH Team