[mpich-discuss] Potentially odd memory behavior in MPI_Waitall

John Grime jgrime at uchicago.edu
Wed Jan 1 15:05:25 CST 2014


Hi all,

I was wondering if someone could help me understand what’s going on with an MPI_Waitall() call with regard to resident set memory usage. Platform information is at the end of the email.

If I run a 4-process MPI job on my desktop machine, I see a steady increase in memory usage over the course of the program. Apart from the places where I would reasonably expect memory usage to increase, I also see regular increases when I call MPI_Waitall().

I measure this behavior via:

getmem( &rss0, &vs );

result = MPI_Waitall( n_buffer_reqs, buffer_reqs, MPI_STATUSES_IGNORE );
if( result != MPI_SUCCESS ) ERROR_MACRO( "MPI_Waitall" );

getmem( &rss1, &vs );

… where getmem() queries the OS kernel to return the resident set size (“rss”) and virtual memory size (“vs”, ignored). The getmem() function does not allocate any memory itself, so I can hopefully see any difference in the resident set memory used before and after the MPI_Waitall() call via (rss1-rss0).
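For concreteness, a getmem()-style routine on OS X can be built on the Mach task_info() API; the following is only a minimal sketch along those lines (my actual routine may differ in details, and the size_t parameter types are an assumption):

#include <mach/mach.h>

/* Sketch of a getmem()-style query on OS X via the Mach task_info() API.
   Fills in the resident set size and virtual memory size in bytes;
   returns -1 on failure. */
static int getmem( size_t *rss, size_t *vs )
{
    mach_task_basic_info_data_t info;
    mach_msg_type_number_t count = MACH_TASK_BASIC_INFO_COUNT;

    if( task_info( mach_task_self(), MACH_TASK_BASIC_INFO,
                   (task_info_t)&info, &count ) != KERN_SUCCESS )
    {
        return -1;
    }

    *rss = (size_t)info.resident_size;  /* resident set size, bytes */
    *vs  = (size_t)info.virtual_size;   /* virtual memory size, bytes */

    return 0;
}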

After MPI_Waitall() returns, I very often see an increase in the resident set size ranging from 4096 bytes to 65536 bytes.

Running an “identical” 4-process job on a single node of the Blue Waters compute cluster (Cray XE system, GCC 4.8.1, Cray mpich 6.0.1) does not seem to produce this behavior.

Replacing the MPI_Waitall() with a loop over MPI_Waitany() calls also produces resident set increases.
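For clarity, the MPI_Waitany() variant I tried is roughly the following sketch (same variable names as the snippet above):

getmem( &rss0, &vs );

/* Wait on each outstanding request in turn; completed requests are set to
   MPI_REQUEST_NULL, so looping n_buffer_reqs times drains the whole array. */
for( int i = 0; i < n_buffer_reqs; i++ )
{
    int idx;

    result = MPI_Waitany( n_buffer_reqs, buffer_reqs, &idx, MPI_STATUS_IGNORE );
    if( result != MPI_SUCCESS ) ERROR_MACRO( "MPI_Waitany" );
}

getmem( &rss1, &vs );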

Does anyone have any comments or suggestions on how I might determine whether this behavior is errant, and how I might track down the cause? It’s bugging me, as I’d really like to produce code with as low a memory overhead as possible!

Cheers,

J.

------------------
Platform info:
------------------

OS: Apple OSX 10.9.1

mpicxx --version:

Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn)
Target: x86_64-apple-darwin13.0.0
Thread model: posix

mpiexec --version

HYDRA build details:
    Version:                                 3.0.4
    Release Date:                            Wed Apr 24 10:08:10 CDT 2013
    CC:                              /usr/bin/clang  -pipe -arch x86_64 -Wl,-headerpad_max_install_names -arch x86_64 
    CXX:                             /usr/bin/clang++  -pipe -arch x86_64 -Wl,-headerpad_max_install_names -arch x86_64 
    F77:                             /opt/local/bin/gfortran-mp-4.8 -pipe -m64 -Wl,-headerpad_max_install_names -arch x86_64 
    F90:                             /opt/local/bin/gfortran-mp-4.8 -pipe -m64 -Wl,-headerpad_max_install_names -arch x86_64 
    Configure options:                       '--disable-option-checking' '--prefix=/opt/local' '--disable-dependency-tracking' '--disable-silent-rules' '--enable-base-cache' '--enable-cache' '--enable-cxx' '--enable-fast=O2' '--enable-shared' '--enable-smpcoll' '--with-device=ch3:nemesis' '--with-pm=hydra' '--with-thread-package=posix' '--enable-versioning' 'F90FLAGS=' 'F90=' '--enable-timer-type=mach_absolute_time' '--libdir=/opt/local/lib/mpich-mp' '--sysconfdir=/opt/local/etc/mpich-mp' '--program-suffix=-mp' '--enable-f77' '--enable-fc' 'CC=/usr/bin/clang' 'CFLAGS=-pipe -arch x86_64 -O2' 'LDFLAGS=-Wl,-headerpad_max_install_names -arch x86_64 ' 'CXX=/usr/bin/clang++' 'CXXFLAGS=-pipe -arch x86_64 -O2' 'F77=/opt/local/bin/gfortran-mp-4.8' 'FFLAGS=-pipe -m64 -O2' 'FC=/opt/local/bin/gfortran-mp-4.8' 'FCFLAGS=-pipe -m64 -O2' '--cache-file=/dev/null' '--srcdir=.' 'LIBS=-lpthread ' 'CPPFLAGS= -I/opt/local/var/macports/build/_opt_mports_dports_science_mpich/mpich-default/work/mpich-3.0.4/src/mpl/include -I/opt/local/var/macports/build/_opt_mports_dports_science_mpich/mpich-default/work/mpich-3.0.4/src/mpl/include -I/opt/local/var/macports/build/_opt_mports_dports_science_mpich/mpich-default/work/mpich-3.0.4/src/openpa/src -I/opt/local/var/macports/build/_opt_mports_dports_science_mpich/mpich-default/work/mpich-3.0.4/src/openpa/src -I/opt/local/var/macports/build/_opt_mports_dports_science_mpich/mpich-default/work/mpich-3.0.4/src/mpi/romio/include'
    Process Manager:                         pmi
    Launchers available:                     ssh rsh fork slurm ll lsf sge manual persist
    Topology libraries available:            hwloc
    Resource management kernels available:   user slurm ll lsf sge pbs cobalt
    Checkpointing libraries available:       
    Demux engines available:                 poll select



