[mpich-announce] MPICH 4.2.0b1 is released!

Zhou, Hui zhouh at anl.gov
Fri Nov 10 14:34:06 CST 2023


I am happy to announce that a new release of MPICH, 4.2.0b1, is now available for download<https://www.mpich.org/static/downloads/4.2.0b1/mpich-4.2.0b1.tar.gz>. This is the first feature-complete release in the 4.2 series. It provides full support for the newly ratified MPI 4.1 specification<https://www.mpi-forum.org/docs> and also contains some new experimental features: MPIX thread communicators support MPI usage inside a thread-parallel region, and MPIX_Type_iov provides iovec access to MPI derived datatypes. The main changes are listed below. We welcome interested users to test and provide feedback.

===============================================================================
                               Changes in 4.2
===============================================================================
# Complete support for the MPI 4.1 specification

# Experimental thread communicator feature (e.g. MPIX_Threadcomm_init).
  See paper "Frustrated With MPI+Threads? Try MPIxThreads!",
  https://doi.org/10.1145/3615318.3615320.

# Experimental datatype functions MPIX_Type_iov_len and MPIX_Type_iov

# Experimental op MPIX_EQUAL for MPI_Reduce and MPI_Allreduce (intra
  communicator only)

# Use --with-{pmi,pmi2,pmix}=[path] to configure an external PMI library.
  The convenience options for Slurm and Cray are deprecated. Use
  --with-pmi=oldcray for older Cray environments.

# Error checking default changed to runtime (used to be all).

# Use the error handler bound to MPI_COMM_SELF as the default error handler.

# Use ierror instead of ierr in the "use mpi" Fortran interface. This affects
  user code that passes the argument by explicit keyword, e.g.
  call MPI_Init(ierr=arg). "ierror" is the argument name specified in the
  MPI standard. Explicit subroutine interfaces were only added to "mpi.mod"
  in 4.1.

# Handle conversion functions, such as MPI_Comm_c2f, MPI_Comm_f2c, etc., are
  no longer macros. MPI 4.1 requires these to be actual functions.

# Yaksa updated to auto-detect the GPU architecture and build only for
  the detected arch. This applies to CUDA and HIP support.

# MPI_Win_shared_query can now be used on windows created by MPI_Win_create
  and MPI_Win_allocate, in addition to windows created by
  MPI_Win_allocate_shared. MPI_Win_allocate will create shared memory
  whenever feasible, including between spawned processes on the same node.

# Fortran mpi.mod now supports Type(c_ptr) buffer output for MPI_Alloc_mem,
  MPI_Win_allocate, and MPI_Win_allocate_shared.

# New functions added in MPI-4.1: MPI_Remove_error_string, MPI_Remove_error_code,
  and MPI_Remove_error_class.

# New functions added in MPI-4.1: MPI_Request_get_status_all,
  MPI_Request_get_status_any, and MPI_Request_get_status_some.

# New function added in MPI-4.1: MPI_Type_get_value_index.

# New functions added in MPI-4.1: MPI_Comm_attach_buffer, MPI_Session_attach_buffer,
  MPI_Comm_detach_buffer, MPI_Session_detach_buffer,
  MPI_Buffer_flush, MPI_Comm_flush_buffer, MPI_Session_flush_buffer,
  MPI_Buffer_iflush, MPI_Comm_iflush_buffer, and MPI_Session_iflush_buffer.
  Also added constant MPI_BUFFER_AUTOMATIC to allow automatic buffers.

# Support for the "mpi_memory_alloc_kinds" info key. Memory allocation kind
  requests can be made via an argument to mpiexec, or as info during
  session creation. Supported kinds are "mpi" (with standard-defined
  restrictors) and "system". Queries for supported kinds can be made on
  MPI objects such as sessions, comms, windows, or files. MPI 4.1 states
  that supported kinds can also be found in MPI_INFO_ENV, but it was
  decided at the October 2023 meeting that this was a mistake and will
  be removed in an erratum.

--
Hui Zhou
Argonne National Laboratory