[mpich-discuss] Error Running MPICH for Photochemical Modeling

Abhishek Bhat abhat at trinityconsultants.com
Fri Sep 12 17:51:00 CDT 2014


Sangmin,

I updated to MPICH 3 and am getting the following error:

Fatal error in MPI_Recv: A process has failed, error stack:
MPI_Recv(187).............: MPI_Recv(buf=0x7fff93840c30, count=644490, MPI_REAL, src=1, tag=14131, MPI_COMM_WORLD, status=0x7fff94444f20) failed
dequeue_and_set_error(865): Communication error with rank 1
rank 1 in job 1  dfw-camx_55000   caused collective abort of all ranks
  exit status of rank 1: killed by signal 9

Same situation: the less resource-intensive runs succeed, and the intensive run succeeds with up to 7 processes but fails with more than 7.  Here is the mpich command I am using to run from my job file:

cat << ieof > nodes
dfw-camx:1
dfw-camx-n1:1
dfw-camx-n2:1
dfw-camx-n3:1
dfw-camx-n4:1
dfw-camx-n5:1
dfw-camx-n6:1
dfw-camx-n7:1
ieof
set NUMPROCS = 8
set RING = `wc -l nodes | awk '{print $1}'`
mpdboot -n $RING -f nodes -verbose

if( ! { mpiexec -machinefile nodes -np $NUMPROCS $EXEC } ) then
   mpdallexit
   exit
endif


For a successful run, NUMPROCS has to be <= 7.
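
For reference, a minimal sketch of what the same launch could look like under MPICH 3.x with its default Hydra process manager (this assumes the same nodes file and $EXEC as above; Hydra does not use the mpd ring, so mpdboot/mpdallexit are not needed):

# Sketch only: Hydra-based launch, assuming an MPICH 3.x mpiexec is first in PATH
set NUMPROCS = 8
if( ! { mpiexec -machinefile nodes -np $NUMPROCS $EXEC } ) then
   exit 1
endif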

Any help is much appreciated.

Thank You
Abhishek
................................................................................................................
Abhishek Bhat, PhD, EPI,
Senior Consultant


From: Seo, Sangmin [mailto:sseo at anl.gov]
Sent: Friday, September 12, 2014 1:11 PM
To: <discuss at mpich.org>
Subject: Re: [mpich-discuss] Error Running MPICH for Photochemical Modeling

Hi Abhishek,

Can you try the latest MPICH release to see if the same error happens? You can download the latest release, 3.1.2, from http://www.mpich.org/downloads/.
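
If it helps, a minimal sketch of the usual build-and-install steps for the tarball (the install prefix below is only an example; adjust it for your cluster):

# Sketch: typical MPICH build from the 3.1.2 release tarball
tar xzf mpich-3.1.2.tar.gz
cd mpich-3.1.2
./configure --prefix=/opt/mpich-3.1.2
make
make install
# afterwards, put /opt/mpich-3.1.2/bin first in PATH on every node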

Thanks,
Sangmin


On Sep 12, 2014, at 12:59 PM, Abhishek Bhat <abhat at trinityconsultants.com> wrote:


I am running a photochemical model on a Linux cluster (CentOS, 64-bit) with 1 master and 8 slave nodes, each with a quad-core Intel i7.  I have two scenarios.  In the first, I run a less data-intensive case on all 8 nodes (NUMPROCS = 9) and the run completes fine.  When I run the same configuration for a more data-intensive case, I get the following error:

Fatal error in MPI_Recv: Other MPI error, error stack:
MPI_Recv(187).....................: MPI_Recv(buf=0x7fff989d53b0, count=644490, MPI_REAL, src=1, tag=14131, MPI_COMM_WORLD, status=0x7fff995d96a0) failed
MPIDI_CH3I_Progress(150)..........:
MPID_nem_mpich2_blocking_recv(948):
MPID_nem_tcp_connpoll(1720).......:
state_commrdy_handler(1556).......:
MPID_nem_tcp_recv_handler(1446)...: socket closed
rank 1 in job 1  dfw-camx_55000   caused collective abort of all ranks
  exit status of rank 1: killed by signal 9

If I run the program with fewer processes (NUMPROCS of 7 or smaller), the run goes fine.

It appears that rank 1 (my first node) is causing the collective abort of all ranks, but I could not identify why.  I tried the following solutions:

1. Increased the master node's memory to 32 GB.
2. Increased the memory on all nodes to 32 GB.
3. Assigned rank 1 to a different node in the parallel run.

In all of these cases I get the same error.  Surprisingly, when I run the smaller (less data-intensive) cases, I do not get this error even if I increase NUMPROCS to 32 processes.

Any help will be highly appreciated.

I am running MPICH 1.4.
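
To confirm which installation a job actually picks up, I check along these lines (a sketch; the version tool is named mpich2version on MPICH2-era installs and mpichversion on MPICH 3.x):

# Sketch: check which MPICH mpiexec resolves to on a node
which mpiexec
mpich2version   # or mpichversion on an MPICH 3.x install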

Thank You
Abhishek
................................................................................................................
Abhishek Bhat, PhD, EPI,
Senior Consultant

Trinity Consultants
12770 Merit Drive, Suite 900  |  Dallas, Texas 75251
Office: 972-661-8100 | Mobile: 806-281-7617
Email: abhat at trinityconsultants.com | LinkedIn: www.linkedin.com/in/abhattrinityconsultants




