[mpich-discuss] Maximum number of inter-communicators?

Mccall, Kurt E. (MSFC-EV41) kurt.e.mccall at nasa.gov
Tue Oct 26 02:54:58 CDT 2021


I'm trying to build MPICH 4.0a2 with the Portland Group compiler pgc++ 19.5-0. configure seems to finish without problems:

../mpich-4.0a2/configure -prefix=/home/kmccall/mpich-install-4.0a2  CC=pgcc CXX=pgc++ --with-pbs=/opt/torque --with-device=ch3:nemesis --disable-fortran --with-pm=hydra  -enable-debuginfo  2>&1 | tee c.txt

but when I run make, it ends with this error (actually, many instances of the same error):

/home/kmccall/mpich-build-4.0a2/src/pm/hydra/.libs/libhydra.a(args.o): In function `MPL_gpu_query_pointer_attr':
/home/kmccall/mpich-4.0a2/src/pm/hydra/mpl/include/mpl_gpu.h:44: multiple definition of `MPL_gpu_query_pointer_attr'
tools/bootstrap/persist/hydra_persist-persist_server.o:/home/kmccall/mpich-4.0a2/src/pm/hydra/mpl/include/mpl_gpu.h:44: first defined here

I've attached c.txt and m.txt from configure and make.    Thanks for any help.


From: Zhou, Hui <zhouh at anl.gov>
Sent: Sunday, October 24, 2021 6:46 PM
To: discuss at mpich.org
Cc: Mccall, Kurt E. (MSFC-EV41) <kurt.e.mccall at nasa.gov>
Subject: [EXTERNAL] Re: Maximum number of inter-communicators?

Hi Kurt,

There is indeed a limit on the maximum number of communicators that you can have, including both intra-communicators and inter-communicators. Try freeing the communicators that you no longer need. In older versions of MPICH, there may be an additional limit on how many dynamic processes one can connect. If you still hit the crash after making sure there aren't too many simultaneously active communicators, could you try the latest release -- http://www.mpich.org/static/downloads/4.0a2/mpich-4.0a2.tar.gz -- and see if the issue persists?
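Releasing a communicator looks like the following -- a minimal sketch, not from the original thread; the `worker_comm` name is illustrative, and it assumes an MPI installation (compile with mpicc):

```c
/* Minimal sketch: release an inter-communicator once it is no
 * longer needed, so its context id can be reclaimed. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm worker_comm = MPI_COMM_NULL;  /* inter-communicator to a worker group */

    MPI_Init(&argc, &argv);

    /* ... worker_comm obtained via MPI_Comm_spawn / MPI_Comm_connect ... */

    /* MPI_Comm_free marks the communicator for deallocation and sets
     * the handle to MPI_COMM_NULL; resources are reclaimed once any
     * pending communication on it completes. */
    if (worker_comm != MPI_COMM_NULL)
        MPI_Comm_free(&worker_comm);

    MPI_Finalize();
    return 0;
}
```

Calling MPI_Comm_free as soon as a connection is retired keeps the number of simultaneously active communicators (and context ids) low.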

From: Mccall, Kurt E. (MSFC-EV41) via discuss <discuss at mpich.org>
Sent: Sunday, October 24, 2021 2:37 PM
To: discuss at mpich.org
Cc: Mccall, Kurt E. (MSFC-EV41) <kurt.e.mccall at nasa.gov>
Subject: [mpich-discuss] Maximum number of inter-communicators?


Based on a paper I read about giving an MPI job some fault tolerance, I'm exclusively connecting my processes with inter-communicators.

I've found that if I increase the number of processes beyond a certain point, many processes don't get created at all and the whole job crashes. Am I running up against an operating system limit (like the number of open file descriptors, which is set at 1024), or some sort of MPICH limit?

If it matters, my process architecture (a tree) is as follows: one master process connected to 21 manager processes on 21 other nodes, and each manager connected to 8 worker processes on the manager's own node. This is the largest job I've been able to create without it crashing; attempting to increase the number of workers beyond 8 results in a crash.

I'm using MPICH 3.3.2 on CentOS (kernel 3.10.0). MPICH was compiled with the Portland Group compiler pgc++ 19.5-0.


-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: c.txt
URL: <http://lists.mpich.org/pipermail/discuss/attachments/20211026/52b2ab16/attachment-0002.txt>
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: m.txt
URL: <http://lists.mpich.org/pipermail/discuss/attachments/20211026/52b2ab16/attachment-0003.txt>
