[mpich-discuss] Potentially odd memory behavior in MPI_Waitall

John Grime jgrime at uchicago.edu
Sat Jan 4 09:57:44 CST 2014


Hi Ken,

I tried to generate such an example, but unfortunately I couldn’t capture the behavior. That most likely indicates the problem is in my code rather than MPICH, but I was curious because the behavior does not seem to manifest on other platforms. I previously ran the code through valgrind without detecting any obvious memory leaks etc. (valgrind is not currently working on OSX 10.9, alas).

As the additional memory use seemed to appear mainly through the MPI_Waitall() / MPI_Waitany() calls, my guess was that temporary buffers were allocated through these calls, and that some sort of memory fragmentation prevented them from fitting into the previously free’d blocks in the underlying allocator, so progressively more memory was being attached to the process.

I took a look at the current MPICH source code, and e.g. MPIR_Waitall_impl() seems to use a fixed-size, stack-allocated array for the request pointers, only allocating a larger array if more than 16 requests are passed in (the default value of MPID_REQUEST_PTR_ARRAY_SIZE appears to be 16; the pattern is sketched below). I’m using 12 requests, so I should be okay there.
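My reading of that logic boils down to roughly the following (a simplified sketch for illustration only, not the actual MPICH source; the names here are made up):

    #include <stdlib.h>

    #define REQUEST_PTR_ARRAY_SIZE 16   /* cf. MPID_REQUEST_PTR_ARRAY_SIZE */

    static void waitall_sketch(int count)
    {
        void *fixed[REQUEST_PTR_ARRAY_SIZE];   /* stack storage for the common case */
        void **ptrs = fixed;

        /* The heap is only touched when more than 16 requests are passed in. */
        if (count > REQUEST_PTR_ARRAY_SIZE)
            ptrs = malloc(count * sizeof(*ptrs));

        /* ... convert request handles to pointers and wait for completion ... */

        if (ptrs != fixed)
            free(ptrs);
    }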

This is probably not a big deal. It could also be down to how I determine the process memory usage (via the task_info() routine of the Mach kernel, in case anyone is curious; sketched below), as OSX 10.9 can do weird things behind the scenes with dynamic memory compression.
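In case it helps, the measurement itself is essentially the following (a minimal sketch of a task_info()-based query; the real getmem() in my code may differ in detail):

    #include <mach/mach.h>
    #include <stddef.h>

    /* Return 0 on success, filling rss and vs with the resident set and
       virtual memory sizes (in bytes) of the calling process. */
    static int getmem(size_t *rss, size_t *vs)
    {
        struct mach_task_basic_info info;
        mach_msg_type_number_t count = MACH_TASK_BASIC_INFO_COUNT;

        kern_return_t kr = task_info(mach_task_self(), MACH_TASK_BASIC_INFO,
                                     (task_info_t)&info, &count);
        if (kr != KERN_SUCCESS)
            return -1;

        *rss = (size_t)info.resident_size;
        *vs  = (size_t)info.virtual_size;
        return 0;
    }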

Fundamentally, I was curious to know whether anyone else had observed this sort of behaviour, and whether there is a fairly straightforward explanation for it, bearing in mind the obvious difficulty of making definitive statements about something as potentially complicated as different memory allocators on different platforms.

Cheers,

J.


On Jan 2, 2014, at 9:53 PM, Kenneth Raffenetti <raffenet at mcs.anl.gov> wrote:

> Can you provide a minimal demonstration of this behavior? That way we can work from the same reference point to identify possible causes.
> 
> On 01/01/2014 03:05 PM, John Grime wrote:
>> Hi all,
>> 
>> I was wondering if someone could help me understand what’s going on with an MPI_Waitall() call with regard to resident set memory usage. Platform information is at the end of the email.
>> 
>> If I run a 4-process MPI job on my desktop machine, I get a steady increase in memory usage over the course of the program. Apart from the places where I would reasonably expect memory usage to increase, I also see regular increases when I call MPI_Waitall().
>> 
>> I measure this behavior via:
>> 
>> getmem( &rss0, &vs );
>> 
>> result = MPI_Waitall( n_buffer_reqs, buffer_reqs, MPI_STATUSES_IGNORE );
>> if( result != MPI_SUCCESS ) ERROR_MACRO( "MPI_Waitall" );
>> 
>> getmem( &rss1, &vs );
>> 
>> … where getmem() queries the OS kernel to return the resident set size (“rss”) and virtual memory size (“vs”, ignored). The getmem() function does not allocate any memory itself, so I can hopefully see any difference in the resident set memory used before and after the MPI_Waitall() call via (rss1 - rss0).
>> 
>> After MPI_Waitall() returns, I very often see an increase in the resident set size ranging from 4096 bytes to 65536 bytes.
>> 
>> Running an “identical” 4-process job on a single node of the Blue Waters compute cluster (Cray XE system, GCC 4.8.1, Cray mpich 6.0.1) does not seem to produce this behaviour.
>> 
>> Switching the MPI_Waitall() for a loop over MPI_Waitany() calls (roughly the loop sketched below) also produces resident set increases.
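>> For reference, the MPI_Waitany() variant was essentially the following (a sketch, with the error checking from above omitted):
>> 
>> for( int i = 0; i < n_buffer_reqs; i++ )
>> {
>>     int idx; /* each call completes one of the outstanding requests */
>>     MPI_Waitany( n_buffer_reqs, buffer_reqs, &idx, MPI_STATUS_IGNORE );
>> }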
>> 
>> Does anyone have any comments or suggestions as to how I might work out whether this behavior is errant, and how I might track down the cause? It’s irritating me, as I’d really like to produce code with as low a memory overhead as possible!
>> 
>> Cheers,
>> 
>> J.
>> 
>> ------------------
>> Platform info:
>> ------------------
>> 
>> OS: Apple OSX 10.9.1
>> 
>> mpicxx --version:
>> 
>> Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn)
>> Target: x86_64-apple-darwin13.0.0
>> Thread model: posix
>> 
>> mpiexec --version
>> 
>> HYDRA build details:
>>     Version:                                 3.0.4
>>     Release Date:                            Wed Apr 24 10:08:10 CDT 2013
>>     CC:                              /usr/bin/clang  -pipe -arch x86_64 -Wl,-headerpad_max_install_names -arch x86_64
>>     CXX:                             /usr/bin/clang++  -pipe -arch x86_64 -Wl,-headerpad_max_install_names -arch x86_64
>>     F77:                             /opt/local/bin/gfortran-mp-4.8 -pipe -m64 -Wl,-headerpad_max_install_names -arch x86_64
>>     F90:                             /opt/local/bin/gfortran-mp-4.8 -pipe -m64 -Wl,-headerpad_max_install_names -arch x86_64
>>     Configure options:                       '--disable-option-checking' '--prefix=/opt/local' '--disable-dependency-tracking' '--disable-silent-rules' '--enable-base-cache' '--enable-cache' '--enable-cxx' '--enable-fast=O2' '--enable-shared' '--enable-smpcoll' '--with-device=ch3:nemesis' '--with-pm=hydra' '--with-thread-package=posix' '--enable-versioning' 'F90FLAGS=' 'F90=' '--enable-timer-type=mach_absolute_time' '--libdir=/opt/local/lib/mpich-mp' '--sysconfdir=/opt/local/etc/mpich-mp' '--program-suffix=-mp' '--enable-f77' '--enable-fc' 'CC=/usr/bin/clang' 'CFLAGS=-pipe -arch x86_64 -O2' 'LDFLAGS=-Wl,-headerpad_max_install_names -arch x86_64 ' 'CXX=/usr/bin/clang++' 'CXXFLAGS=-pipe -arch x86_64 -O2' 'F77=/opt/local/bin/gfortran-mp-4.8' 'FFLAGS=-pipe -m64 -O2' 'FC=/opt/local/bin/gfortran-mp-4.8' 'FCFLAGS=-pipe -m64 -O2' '--cache-file=/dev/null' '--srcdir=.' 'LIBS=-lpthread ' 'CPPFLAGS= -I/opt/local/var/macports/build/_opt_mports_dports_science_mpich/mpich-default/work/mpich-3.0.4
>> /src/mpl/include -I/opt/local/var/macports/build/_opt_mports_dports_science_mpich/mpich-default/work/mpich-3.0.4/src/mpl/include -I/opt/local/var/macports/build/_opt_mports_dports_science_mpich/mpich-default/work/mpich-3.0.4/src/openpa/src -I/opt/local/var/macports/build/_opt_mports_dports_science_mpich/mpich-default/work/mpich-3.0.4/src/openpa/src -I/opt/local/var/macports/build/_opt_mports_dports_science_mpich/mpich-default/work/mpich-3.0.4/src/mpi/romio/include'
>>     Process Manager:                         pmi
>>     Launchers available:                     ssh rsh fork slurm ll lsf sge manual persist
>>     Topology libraries available:            hwloc
>>     Resource management kernels available:   user slurm ll lsf sge pbs cobalt
>>     Checkpointing libraries available:
>>     Demux engines available:                 poll select
>> 
