[mpich-discuss] Fwd: [petsc-dev] MPICH from --download-mpich reports inconsistent allocs/frees with valgrind

Patrick Sanan patrick.sanan at gmail.com
Tue Jul 23 08:19:07 CDT 2019


I'm trying to track down the cause of an inconsistency in valgrind's heap
allocation analysis.

This arose when using the MPICH downloaded by PETSc. To reproduce, I run
the following and note that the number of allocations differs from the
number of frees (yet valgrind reports that all heap blocks were freed).

     printf "#include<mpi.h>\nint main(int a,char
**b){MPI_Init(&a,&b);MPI_Finalize();}" > t.c &&
$PETSC_DIR/$PETSC_ARCH/bin/mpicc t.c && valgrind ./a.out
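
For reference, the program embedded in the printf one-liner is just the
following, written out as a plain source file (a sketch with the same
behavior; as in the one-liner, the MPI return codes are not checked).
The valgrind output from running the command is shown below.

    /* t.c - minimal reproducer: it only initializes and finalizes MPI,
       so every heap allocation valgrind reports comes from the MPI
       library (and the C runtime) rather than from user code. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        MPI_Finalize();
        return 0;
    }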

==8578== Memcheck, a memory error detector
==8578== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==8578== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==8578== Command: ./a.out
==8578==
==8578==
==8578== HEAP SUMMARY:
==8578==     in use at exit: 0 bytes in 0 blocks
==8578==   total heap usage: *1,979 allocs, 1,974 frees*, 4,720,483 bytes allocated
==8578==
==8578== All heap blocks were freed -- no leaks are possible
==8578==
==8578== For counts of detected and suppressed errors, rerun with: -v
==8578== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

The above was generated on Ubuntu 18.04.2 LTS, using MPICH 3.3b1 configured
as shown in the attached config.log (the configuration was done
automatically by PETSc master).
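
As a side note, one way to confirm at runtime which MPICH build an
executable is actually linked against (before running it under valgrind)
is to print the library version string, e.g. with a small program along
these lines (a sketch, not part of the reproducer above):

    /* version.c - sketch: print the MPI library version string so the
       MPICH build in use (e.g. 3.3b1 vs. 3.3) can be confirmed at
       runtime. MPI_Get_library_version is an MPI-3 call available in
       MPICH 3.x. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        char version[MPI_MAX_LIBRARY_VERSION_STRING];
        int len;

        MPI_Init(&argc, &argv);
        MPI_Get_library_version(version, &len);
        printf("%s\n", version);
        MPI_Finalize();
        return 0;
    }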

---------- Forwarded message ---------
From: Smith, Barry F. <bsmith at mcs.anl.gov>
Date: Mon, 22 Jul 2019 at 20:24
Subject: Re: [petsc-dev] MPICH from --download-mpich reports inconsistent
allocs/frees with valgrind
To: petsc-dev <petsc-dev at mcs.anl.gov>
Cc: Patrick Sanan <patrick.sanan at gmail.com>, Balay, Satish <balay at mcs.anl.gov>



  Bug report to MPICH.

> On Jul 22, 2019, at 1:22 PM, Balay, Satish via petsc-dev <petsc-dev at mcs.anl.gov> wrote:
>
> Hm - I don't think we were monitoring the leaks via valgrind that closely.
>
> Looking at my old mpich install - I don't see a problem - so it's likely
> an issue with newer versions of mpich.
>
> Satish
>
> -------
> balay at sb /home/balay/tmp
> $ mpichversion
> MPICH Version:        3.3
> MPICH Release date:   Wed Nov 21 11:32:40 CST 2018
> MPICH Device:         ch3:sock
> MPICH configure:      --prefix=/home/balay/soft/mpich-3.3
>   MAKE=/usr/bin/gmake --libdir=/home/balay/soft/mpich-3.3/lib CC=gcc
>   CFLAGS=-fPIC -g -O AR=/usr/bin/ar ARFLAGS=cr CXX=g++ CXXFLAGS=-g -O -fPIC
>   F77=gfortran FFLAGS=-fPIC -g -O FC=gfortran FCFLAGS=-fPIC -g -O
>   --enable-shared --with-device=ch3:sock --with-pm=hydra --enable-fast=no
>   --enable-error-messages=all --enable-g=meminit
> MPICH CC:     gcc -fPIC -g -O   -O0
> MPICH CXX:    g++ -g -O -fPIC  -O0
> MPICH F77:    gfortran -fPIC -g -O  -O0
> MPICH FC:     gfortran -fPIC -g -O  -O0
> MPICH Custom Information:
> balay at sb /home/balay/tmp
> $ printf "#include<mpi.h>\nint main(int a,char**b){MPI_Init(&a,&b);MPI_Finalize();}" > t.c &&
>   mpicc t.c && valgrind ./a.out
> ==9024== Memcheck, a memory error detector
> ==9024== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
> ==9024== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info
> ==9024== Command: ./a.out
> ==9024==
> ==9024==
> ==9024== HEAP SUMMARY:
> ==9024==     in use at exit: 0 bytes in 0 blocks
> ==9024==   total heap usage: 1,886 allocs, 1,886 frees, 4,884,751 bytes allocated
> ==9024==
> ==9024== All heap blocks were freed -- no leaks are possible
> ==9024==
> ==9024== For lists of detected and suppressed errors, rerun with: -s
> ==9024== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
> balay at sb /home/balay/tmp
>
>
>
> On Mon, 22 Jul 2019, Patrick Sanan via petsc-dev wrote:
>
>> It was pointed out to me that valgrind memcheck reports inconsistent heap
>> usage information when running PETSc examples. All blocks are reported
>> freed, yet the numbers of allocations and frees differ. My guess as
>> to what's going on is that this is an MPICH issue, as I can reproduce the
>> behavior with a minimal MPI program.
>>
>> From the PETSc perspective, is this a known issue? I'm wondering if this
>> inconsistency was always there, whether it's worth looking into more, etc.
>>
>> Here's a 1-liner to reproduce, using a PETSc master build with
>> --download-mpich (though note that this doesn't use anything from PETSc
>> except the MPICH it builds for you).
>>
>>     printf "#include<mpi.h>\nint main(int a,char **b){MPI_Init(&a,&b);MPI_Finalize();}" > t.c &&
>>         $PETSC_DIR/$PETSC_ARCH/bin/mpicc t.c && valgrind ./a.out
>>
>> ==14242== Memcheck, a memory error detector
>> ==14242== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
>> ==14242== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
>> ==14242== Command: ./a.out
>> ==14242==
>> ==14242==
>> ==14242== HEAP SUMMARY:
>> ==14242==     in use at exit: 0 bytes in 0 blocks
>> ==14242==   total heap usage: *1,979 allocs, 1,974 frees*, 4,720,483 bytes allocated
>> ==14242==
>> ==14242== All heap blocks were freed -- no leaks are possible
>> ==14242==
>> ==14242== For counts of detected and suppressed errors, rerun with: -v
>> ==14242== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
>>
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: config.log
Type: application/octet-stream
Size: 1008105 bytes
Desc: not available
URL: <http://lists.mpich.org/pipermail/discuss/attachments/20190723/f5bc7a42/attachment-0001.obj>

