[mpich-discuss] Issue with MPICH + mpi4py on Ubuntu 24.04

Zhou, Hui zhouh at anl.gov
Mon Sep 16 09:53:32 CDT 2024


This is a known Ubuntu package issue: https://bugs.launchpad.net/ubuntu/+source/mpich/+bug/2072338
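
A quick way to confirm you are hitting this particular packaging issue is to check the size of MPI.COMM_WORLD under mpiexec. A minimal sketch (the script name is illustrative):

```
# check_world.py -- run as: mpiexec -n 2 python3 check_world.py
# On an affected Ubuntu 24.04 install both processes print "0/1"
# (each rank comes up as a singleton); a healthy build prints
# "0/2" and "1/2" in some order.
from mpi4py import MPI

comm = MPI.COMM_WORLD
print(f"{comm.rank}/{comm.size}")
```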

Hui
________________________________
From: Jørgen Dokken via discuss <discuss at mpich.org>
Sent: Monday, September 16, 2024 2:06 AM
To: discuss at mpich.org <discuss at mpich.org>
Cc: Jørgen Dokken <dokken at simula.no>
Subject: [mpich-discuss] Issue with MPICH + mpi4py on Ubuntu 24.04

When trying to install MPICH from the apt repository on Ubuntu 24.04 and then installing mpi4py with pip, I get a non-functional mpi4py installation. This does not happen if I use OpenMPI or if I downgrade to Ubuntu 22.04.

The same MPICH version has been tested on other systems (see https://github.com/mpi4py/mpi4py/issues/547 for where), leading one to believe that something is wrong with the Ubuntu build.
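
As an additional sanity check that the pip-installed mpi4py really linked against the apt-provided MPICH, one can print the MPI library version string. A minimal sketch (standard mpi4py API, nothing specific to this bug):

```
# Report which MPI library mpi4py was built against; in this image it
# should identify MPICH 4.2.0, the version seen in the mpiexec log below.
from mpi4py import MPI

print(MPI.Get_library_version())
```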

Minimal reproducible Dockerfile:
```

FROM ubuntu:24.04

# Switch to "openmpi" to reproduce the working comparison case
ARG MPI="mpich"
ENV OPENBLAS_NUM_THREADS=1 \
    OPENBLAS_VERBOSE=0
ENV DEB_PYTHON_INSTALL_LAYOUT=deb_system
ENV DEBIAN_FRONTEND=noninteractive


WORKDIR /tmp

RUN apt-get -qq update && \
    apt-get -yq  upgrade && \
    apt-get -y install \
    lib${MPI}-dev \
    python3-dev \
    python3-pip \
    python3-setuptools \
    python3-venv && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

ENV VIRTUAL_ENV=/test-env
ENV PATH=${VIRTUAL_ENV}/bin:$PATH
RUN python3 -m venv ${VIRTUAL_ENV}

# Install Python packages (via pip)
RUN python3 -m pip install --no-cache-dir mpi4py -v

# Only relevant when building with MPI="openmpi" (Open MPI refuses to
# run as root unless these are set); MPICH ignores them.
ENV OMPI_ALLOW_RUN_AS_ROOT=1 \
    OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1

# Launch two ranks; on a working install each prints "<rank>/2"
CMD ["mpiexec", "-v", "--np", "2", "python3", "-c", "from mpi4py import MPI; print(f'{MPI.COMM_WORLD.rank}/{MPI.COMM_WORLD.size}')"]

```
Building and running the image (for example: docker build -t mpich-test . && docker run --rm mpich-test; the image name is arbitrary) gives the output

```

host: 6a2e05040a65
[mpiexec at 6a2e05040a65] Timeout set to -1 (-1 means infinite)

==================================================================================================
mpiexec options:
----------------
  Base path: /usr/bin/
  Launcher: (null)
  Debug level: 1
  Enable X: -1

  Global environment:
  -------------------
    PATH=//test-env/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    HOSTNAME=6a2e05040a65
    TERM=xterm
    OPENBLAS_NUM_THREADS=1
    OPENBLAS_VERBOSE=0
    DEB_PYTHON_INSTALL_LAYOUT=deb_system
    DEBIAN_FRONTEND=noninteractive
    VIRTUAL_ENV=/test-env
    OMPI_ALLOW_RUN_AS_ROOT=1
    OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1
    HOME=/root

  Hydra internal environment:
  ---------------------------
    GFORTRAN_UNBUFFERED_PRECONNECTED=y


    Proxy information:
    *********************
      [1] proxy: 6a2e05040a65 (1 cores)
      Exec list: python3 (2 processes);


==================================================================================================


Proxy launch args: /usr/bin/hydra_pmi_proxy --control-port 6a2e05040a65:36819 --debug --rmk user --launcher ssh --demux poll --pgid 0 --retries 10 --usize -2 --pmi-port 0 --gpus-per-proc -2 --gpu-subdevs-per-proc -2 --proxy-id

Arguments being passed to proxy 0:
--version 4.2.0 --iface-ip-env-name MPIR_CVAR_CH3_INTERFACE_HOSTNAME --hostname 6a2e05040a65 --global-core-map 0,1,1 --pmi-id-map 0,0 --global-process-count 2 --auto-cleanup 1 --pmi-kvsname kvs_1_0_1589598326_6a2e05040a65 --pmi-process-mapping (vector,(0,1,2)) --global-inherited-env 11 'PATH=//test-env/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' 'HOSTNAME=6a2e05040a65' 'TERM=xterm' 'OPENBLAS_NUM_THREADS=1' 'OPENBLAS_VERBOSE=0' 'DEB_PYTHON_INSTALL_LAYOUT=deb_system' 'DEBIAN_FRONTEND=noninteractive' 'VIRTUAL_ENV=/test-env' 'OMPI_ALLOW_RUN_AS_ROOT=1' 'OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1' 'HOME=/root' --global-user-env 0 --global-system-env 1 'GFORTRAN_UNBUFFERED_PRECONNECTED=y' --proxy-core-count 1 --exec --exec-appnum 0 --exec-proc-count 2 --exec-local-env 0 --exec-wdir /tmp --exec-args 3 python3 -c from mpi4py import MPI; print(f'{MPI.COMM_WORLD.rank}/{MPI.COMM_WORLD.size}');

[mpiexec at 6a2e05040a65] Launch arguments: /usr/bin/hydra_pmi_proxy --control-port 6a2e05040a65:36819 --debug --rmk user --launcher ssh --demux poll --pgid 0 --retries 10 --usize -2 --pmi-port 0 --gpus-per-proc -2 --gpu-subdevs-per-proc -2 --proxy-id 0
[proxy:0 at 6a2e05040a65] Sending upstream hdr.cmd = CMD_PID_LIST
[proxy:0 at 6a2e05040a65] Sending upstream hdr.cmd = CMD_STDOUT
[proxy:0 at 6a2e05040a65] Sending upstream hdr.cmd = CMD_STDOUT
0/1
0/1
[proxy:0 at 6a2e05040a65] Sending upstream hdr.cmd = CMD_EXIT_STATUS
```
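
Note the two "0/1" lines near the end: both launched processes report rank 0 of a size-1 MPI_COMM_WORLD, i.e. each one initializes as a singleton instead of joining a common two-process world. With a working MPICH build the same command would print, in either order:

```
0/2
1/2
```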


Best,

Jørgen

--
Jørgen S. Dokken, PhD
Senior Research Engineer
Simula Research Laboratory
+47 45286467