<div dir="ltr">Thank you very much! That will be helpful.<div style>Still what I meant was something a little different.</div><div style>I would like to have a mode in mpi where every little error is reported.</div><div style>
"Like process 1 has send data of length 10 but process 0 needed only 9</div><div style>at line 400 and 500"</div><div style>"mpi process request at line 240 has not been honored"</div><div style>"process has left without mpi_finalize"</div>
<div style>"mpi_sent uses MPI_FLOAT but corresponding mpi_recv uses MPI_DOUBLE"</div><div style><br></div><div style>Also deadly embrace could be detected by the mpi library I think.</div><div style><br></div><div style>
All this kind of debugging would be extremely helpful in practice and I</div><div style>would gladly sacrifice speed if that allows to avoid the long thankless</div><div style>work of debugging.</div><div style><br></div>
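For illustration, the datatype-mismatch case I have in mind looks roughly like this (a contrived sketch, not code from my actual program):

----8<----
/* mismatch.c -- send MPI_FLOAT, receive MPI_DOUBLE.
   Build with mpicc, run with: mpiexec -n 2 ./mismatch */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        float x = 1.0f;
        MPI_Send(&x, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        double y = 0.0;
        /* datatype does not match the sender's MPI_FLOAT; erroneous
           per the MPI standard, but it usually goes undetected */
        MPI_Recv(&y, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %g\n", y);  /* garbage in practice */
    }

    MPI_Finalize();
    return 0;
}
----8<----

In the checking mode I am imagining, the library would abort here with a message naming both mismatched calls.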
 Mathieu

On Tue, Feb 26, 2013 at 5:38 PM, Dave Goodell <goodell@mcs.anl.gov> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">You can run your application under valgrind with a debug build of MPICH and you should be able to eventually find your leaks that way (though the output might be a bit hard to read at first, since you will be tracking leaks in memory associated with the MPI object, not the object itself).<br>
<br>
Just run your app like this:<br>
<br>
----8<----<br>
mpiexec -n NUMPROCS valgrind /path/to/your/app<br>
----8<----<br>
<br>
If the valgrind output gets too jumbled, you can separate them by passing the "--log-file" option to Valgrind, like this:<br>
<br>
----8<----<br>
mpiexec -n NUMPROCS valgrind --log-file='vg_out.%q{PMI_RANK}' /path/to/app<br>
----8<----<br>
<br>
This will deposit one file per process, suffixed by the rank in MPI_COMM_WORLD.<br>
<span class="HOEnZb"><font color="#888888"><br>
-Dave<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
On Feb 26, 2013, at 10:17 AM CST, Mathieu Dutour wrote:

> Thank you, I will try TotalView.
> Now another question:
> --- In C you can run a program under valgrind and it tells you exactly where memory is lost, where you use uninitialized values, and so on.
> --- In Fortran with ifort you can compile with
> -warn interfaces,nouncalled -fpp -gen-interface -g -traceback -check uninit -check bounds -check pointers
> and get the same kind of checking.
>
> It would be extremely helpful to have similar tools for MPI that sacrifice speed but let you find any error at runtime.
>
> Mathieu
>
> On Tue, Feb 26, 2013 at 5:01 PM, Dave Goodell <goodell@mcs.anl.gov> wrote:
> On Feb 26, 2013, at 9:39 AM CST, Mathieu Dutour wrote:
>
> > I used mpich-3.0.1 with debugging options and the program ran correctly,
> > but at the end it returned the errors indicated below.
> > I thank MPICH for finding errors that other MPI implementations did
> > not find, but I wonder if there is a way to turn this into more useful
> > debugging information.
>
> High-quality patches to improve the output are welcome. We primarily view these leak-checking messages as tools for us (the core developers of MPICH), not for end-user consumption, so we probably won't spend any time changing them ourselves.
>
> > Mathieu
> >
> > PS: the errors returned at exit:
> > leaked context IDs detected: mask=0x9d7380 mask[0]=0x3fffffff
> > In direct memory block for handle type GROUP, 3 handles are still allocated
>
> […]
>
> In case you have not yet found your bug, these messages indicate that you are leaking MPI objects, especially communicators, groups, and datatypes. It could be that they are leaked indirectly because you have not completed an outstanding request (via MPI_Wait or similar), as indicated by the lines with "REQUEST" in them.
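> As a contrived sketch (buffer size and arguments invented, not taken from your code), the request-leak pattern typically looks like this:
>
> ----8<----
> double buf[10];
> MPI_Request req;
> MPI_Irecv(buf, 10, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);
> /* ... if the code moves on without completing the request, it is
>    leaked, and a debug build of MPICH reports it at exit ... */
> MPI_Wait(&req, MPI_STATUS_IGNORE);  /* completes and releases the request */
> ----8<----
>
> Likewise, each MPI_Comm_dup needs a matching MPI_Comm_free, and each MPI_Comm_group a matching MPI_Group_free.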
>
> -Dave
>

_______________________________________________
discuss mailing list     discuss@mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss