[mpich-discuss] MPICH3 Problem

Seo, Sangmin sseo at anl.gov
Tue Feb 3 17:56:37 CST 2015


Correct. libpgmp.so should be from your PGI compiler installation.
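
In case it helps to double-check, a rough sketch (the /opt/pgi path is from your earlier message, and /path/to/mpich-install is a placeholder for your actual MPICH prefix):

# libpgmp.so is the PGI OpenMP support library and lives inside the PGI installation tree
find /opt/pgi -name 'libpgmp.so*' 2>/dev/null

# the MPICH installation ships libmpi/libmpich and friends, but no libpgmp
ls /path/to/mpich-install/lib | grep -i pgmp    # expected: no output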

— Sangmin


On Feb 3, 2015, at 5:53 PM, Abhishek Bhat <abhat at trinityconsultants.com> wrote:

Sangmin,

I have installed MPICH on a shared drive, but PGI Fortran is installed in /opt/pgi, which the nodes do not have access to.  I am assuming that is the current issue.  I am trying to reinstall PGI on the shared drive to see if that fixes the problem.
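
A possibly lighter-weight alternative I could try first (untested sketch; /home/Earth/pgi_libs is just a made-up location, and the exact PGI lib path varies by version):

# copy only the PGI runtime shared libraries onto the shared drive
mkdir -p /home/Earth/pgi_libs
cp /opt/pgi/linux86-64/*/lib/*.so* /home/Earth/pgi_libs/

# and point the loader at that copy on every node
export LD_LIBRARY_PATH=/home/Earth/pgi_libs:$LD_LIBRARY_PATH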

Just to confirm: libpgmp.so is not an MPICH file, correct?

Thank You
Abhishek

………………………………………………………………………………………………….
Abhishek Bhat, PhD, EPI,
Senior Consultant


From: Seo, Sangmin [mailto:sseo at anl.gov]
Sent: Tuesday, February 03, 2015 5:50 PM
To: <discuss at mpich.org>
Subject: Re: [mpich-discuss] MPICH3 Problem

Hi Abhishek,

As the error message says, it looks like the node running the application doesn't have libpgmp.so. Can you confirm whether the node has libpgmp.so and, if it does, whether LD_LIBRARY_PATH is set correctly on that node?
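
A rough way to check both from the head node (node3 is taken from the error output, and the PGI lib path below is a placeholder; the exact launch line will differ on your system):

# does node3 have the PGI runtime, and what is its loader path?
# note: a non-interactive ssh shell may not source the same startup files
# as your normal environment, so an empty LD_LIBRARY_PATH here is itself a hint
ssh node3 'ls -l /opt/pgi/*/*/lib/libpgmp.so* 2>/dev/null; echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"'

# if the library exists on the node but is not on the loader path, Hydra's
# mpiexec can forward the variable to every rank with -genv:
mpiexec -genv LD_LIBRARY_PATH /path/to/pgi/lib -n 2 ./CAMx.v6.11.MPICH3.pgfomp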

Best regards,

Sangmin


On Feb 3, 2015, at 5:31 PM, Abhishek Bhat <abhat at trinityconsultants.com> wrote:

Hi All,

I installed MPICH3 on the master terminal along with PGI Fortran, then used MPICH3 and PGI to compile my software.  When I run the program on the master terminal only, I do not get any error messages, but when I try to run it on one of the nodes, I get the following error message:

/home/Earth/MODELS/camx/src_611/CAMx.v6.11.MPICH3.pgfomp: error while loading shared libraries: libpgmp.so: cannot open shared object file: No such file or directory
/home/Earth/MODELS/camx/src_611/CAMx.v6.11.MPICH3.pgfomp: error while loading shared libraries: libpgmp.so: cannot open shared object file: No such file or directory

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   PID 5200 RUNNING AT node3
=   EXIT CODE: 127
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
[proxy:0:0 at Earth] HYD_pmcd_pmip_control_cmd_cb (/home/Earth/MODELS/mpi/mpich-3.1.3/src/pm/hydra/pm/pmiserv/pmip_cb.c:885): assert (!closed) failed
[proxy:0:0 at Earth] HYDT_dmxu_poll_wait_for_event (/home/Earth/MODELS/mpi/mpich-3.1.3/src/pm/hydra/tools/demux/demux_poll.c:76): callback returned error status
[proxy:0:0 at Earth] main (/home/Earth/MODELS/mpi/mpich-3.1.3/src/pm/hydra/pm/pmiserv/pmip.c:206): demux engine error waiting for event
[mpiexec at Earth] HYDT_bscu_wait_for_completion (/home/Earth/MODELS/mpi/mpich-3.1.3/src/pm/hydra/tools/bootstrap/utils/bscu_wait.c:76): one of the processes terminated badly; aborting
[mpiexec at Earth] HYDT_bsci_wait_for_completion (/home/Earth/MODELS/mpi/mpich-3.1.3/src/pm/hydra/tools/bootstrap/src/bsci_wait.c:23): launcher returned error waiting for completion
[mpiexec at Earth] HYD_pmci_wait_for_completion (/home/Earth/MODELS/mpi/mpich-3.1.3/src/pm/hydra/pm/pmiserv/pmiserv_pmci.c:218): launcher returned error waiting for completion
[mpiexec at Earth] main (/home/Earth/MODELS/mpi/mpich-3.1.3/src/pm/hydra/ui/mpich/mpiexec.c:344): process manager error waiting for completion

/home/Earth is shared and mounted on all nodes.
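
In case it helps, here is a check that could be run on the failing node (node3 is taken from the output above) to list exactly which libraries do not resolve there:

# every "not found" line is a library node3 cannot locate
ssh node3 'ldd /home/Earth/MODELS/camx/src_611/CAMx.v6.11.MPICH3.pgfomp | grep "not found"'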

Any help is much appreciated.

Thank You
Abhishek
………………………………………………………………………………………………….
Abhishek Bhat, PhD, EPI,
Senior Consultant

_______________________________________________
discuss mailing list     discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss

