[mpich-discuss] MPE Profiling?

William Gropp wgropp at illinois.edu
Sat Aug 23 09:58:07 CDT 2014


I also have some updates, and they preserve the ability to work with MPI-2 and MPI-3 implementations.  

Bill

William Gropp
Director, Parallel Computing Institute
Thomas M. Siebel Chair in Computer Science
University of Illinois Urbana-Champaign





On Aug 22, 2014, at 5:07 PM, James <jamesqf at charter.net> wrote:

> That seems to have helped: I'm down from ~1600 lines of error messages
> to under 500.
> 
> I'll probably have to do the rest by hand, and will try to make a
> patch file of the changes.
> 
> James
> 
> 
> On Fri, 22 Aug 2014 13:49:14 -0700, Balaji, Pavan <balaji at anl.gov> wrote:
> 
>> 
>> Can you try the patch from this ticket?
>> 
>> https://trac.mpich.org/projects/mpich/ticket/2090
>> 
>>  — Pavan
>> 
>> On Aug 22, 2014, at 3:47 PM, James <jamesqf at charter.net> wrote:
>> 
>>> Hi,
>>> 
>>> (Apologies if this is the wrong place to post, but all the links
>>> on the MPE page seem to redirect to MPICH.)
>>> 
>>> I am trying to build the MPE profiling libraries to work with MPICH.
>>> However, I am getting about a thousand lines of compile error messages,
>>> because some MPE routines use K&R-style function definitions, which
>>> don't match up with the prototypes in mpi.h.
>>> 
>>> log_mpi_core.c: In function 'MPI_Allgather':
>>> log_mpi_core.c:1550:8: error: argument 'sendbuf' doesn't match prototype
>>>  void * sendbuf;
>>>         ^
>>> In file included from log_mpi_core.c:10:0:
>>> /opt/mpich/include/mpi.h:1002:5: error: prototype declaration
>>>  int MPI_Allgather(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf,
>>>      ^
>>> 
>>> Before I go and try to fix all these myself, could I ask 1) Has it
>>> already been done somewhere? or 2) Has it not been done because there's
>>> now something better than MPE for tracing/profiling?
>>> 
>>> To be clear, I'm not interested in profiling MPI itself, but what the
>>> program is doing between calls to MPI routines.
>>> 
>>> Thanks,
>>> James
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss
