[mpich-discuss] Maximum number of inter-communicators?
Mccall, Kurt E. (MSFC-EV41)
kurt.e.mccall at nasa.gov
Tue Oct 26 04:55:37 CDT 2021
Hui,
Please disregard my last message. I got MPICH to build with a newer version of pgc++, namely nvc++.
Thanks
Kurt
From: Mccall, Kurt E. (MSFC-EV41) <kurt.e.mccall at nasa.gov>
Sent: Tuesday, October 26, 2021 2:55 AM
To: discuss at mpich.org
Cc: Mccall, Kurt E. (MSFC-EV41) <kurt.e.mccall at nasa.gov>
Subject: Re: Maximum number of inter-communicators?
Hui,
I'm trying to build 4.0a2 with the Portland Group compiler pgc++ 19.5-0. configure seems to finish without problems.
../mpich-4.0a2/configure -prefix=/home/kmccall/mpich-install-4.0a2 CC=pgcc CXX=pgc++ --with-pbs=/opt/torque --with-device=ch3:nemesis --disable-fortran --with-pm=hydra -enable-debuginfo 2>&1 | tee c.txt
but when I run make, it ends with this error (actually, many instances of the same error):
/home/kmccall/mpich-build-4.0a2/src/pm/hydra/.libs/libhydra.a(args.o): In function `MPL_gpu_query_pointer_attr':
/home/kmccall/mpich-4.0a2/src/pm/hydra/mpl/include/mpl_gpu.h:44: multiple definition of `MPL_gpu_query_pointer_attr'
tools/bootstrap/persist/hydra_persist-persist_server.o:/home/kmccall/mpich-4.0a2/src/pm/hydra/mpl/include/mpl_gpu.h:44: first defined here
I've attached c.txt and m.txt from configure and make. Thanks for any help.
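In case it helps with diagnosis, my understanding of this class of linker error (the names below are hypothetical, for illustration only, not the real mpl_gpu.h): if a function body ends up fully defined in a header that several translation units include, every object file carries its own copy of the symbol and the link fails.

/* util.h -- hypothetical header, illustration only */
#ifndef UTIL_H
#define UTIL_H
/* A definition (not just a declaration) in a shared header is emitted
 * into every .o that includes it. */
int query_attr(void) { return 42; }
#endif

/* a.c and b.c each contain:  #include "util.h"
 * cc a.c b.c   ->   multiple definition of `query_attr'
 * Declaring the function 'static inline' in the header, or moving the
 * body into a single .c file, avoids the duplicate symbol. */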
Thanks,
Kurt
From: Zhou, Hui <zhouh at anl.gov>
Sent: Sunday, October 24, 2021 6:46 PM
To: discuss at mpich.org
Cc: Mccall, Kurt E. (MSFC-EV41) <kurt.e.mccall at nasa.gov>
Subject: [EXTERNAL] Re: Maximum number of inter-communicators?
Hi Kurt,
There is indeed a limit on the maximum number of communicators that you can have, including both intra-communicators and inter-communicators. Try freeing the communicators that you no longer need. In older versions of MPICH, there may be an additional limit on how many dynamic processes one can connect. If you still hit the crash after making sure there aren't too many simultaneously active communicators, could you try the latest release -- http://www.mpich.org/static/downloads/4.0a2/mpich-4.0a2.tar.gz -- and see if the issue persists?
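As a minimal sketch of what I mean by freeing (illustrative only, not taken from your code): disconnect or free each inter-communicator as soon as you are done with it, so its context id goes back to the pool.

#include <mpi.h>

/* Illustrative helper: release an inter-communicator once it is no longer
 * needed so its context id can be reused for new communicators. */
void release_intercomm(MPI_Comm *intercomm)
{
    /* MPI_Comm_disconnect waits for pending communication on the
     * communicator to finish and then frees it; it is the usual choice for
     * communicators obtained from MPI_Comm_spawn / MPI_Comm_accept /
     * MPI_Comm_connect. MPI_Comm_free(intercomm) works for the rest. */
    MPI_Comm_disconnect(intercomm);
}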
--
Hui
________________________________
From: Mccall, Kurt E. (MSFC-EV41) via discuss <discuss at mpich.org>
Sent: Sunday, October 24, 2021 2:37 PM
To: discuss at mpich.org <discuss at mpich.org>
Cc: Mccall, Kurt E. (MSFC-EV41) <kurt.e.mccall at nasa.gov>
Subject: [mpich-discuss] Maximum number of inter-communicators?
Hi,
Based on a paper I read about giving an MPI job some fault tolerance, I'm exclusively connecting my processes with inter-communicators.
I've found that if I increase the number of processes beyond a certain point, many processes don't get created at all and the whole job crashes.
Am I running up against an operating system limit (such as the number of open file descriptors, which is set to 1024), or some sort of MPICH limit?
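For reference, here is a minimal sketch of the pattern I mean (illustrative only; the "worker" executable name and the count of 8 are placeholders, not my actual code). MPI_Comm_spawn hands the parent back an inter-communicator to the group of children it just created:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm children;      /* inter-communicator to the spawned group */
    int errcodes[8];

    MPI_Init(&argc, &argv);
    /* Spawn 8 copies of a hypothetical "worker" binary; 'children' links
     * this process's group to the new group of 8 workers. */
    MPI_Comm_spawn("worker", MPI_ARGV_NULL, 8, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &children, errcodes);
    /* ... exchange messages over 'children'; the workers are addressed as
     * the remote group of the inter-communicator ... */
    MPI_Comm_disconnect(&children);
    MPI_Finalize();
    return 0;
}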
If it matters, my process architecture (a tree) is as follows: one master process connected to 21 manager processes on 21 other nodes,
and each manager connected to 8 worker processes on the manager's own node. This is the largest job I've been able to create
without it crashing. Attempting to increase the number of workers beyond 8 results in a crash.
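For scale, simple arithmetic on the layout above: 1 master + 21 managers + 21 x 8 = 168 workers is 190 processes in total. Assuming one inter-communicator per parent-child link (which is how I have set it up), the master holds 21 inter-communicators, each manager holds 9 (one up to the master and eight down to its workers), and each worker holds 1.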
I'm using MPICH 3.3.2 on CentOS (kernel 3.10.0). MPICH was compiled with the Portland Group compiler pgc++ 19.5-0.
Thanks,
Kurt