[mpich-devel] MPICH memory pools

Dave Goodell (dgoodell) dgoodell at cisco.com
Tue Jan 20 10:47:34 CST 2015


The existing mempool stuff isn't particularly time efficient, as I recall.  You might want to benchmark it for your use case against a proper memory allocator like Hoard or tcmalloc and make sure it meets your needs.  It's also not overly space efficient, since IIRC it won't ever return memory to the OS or even to other memory users in the same process.

The mempool stuff only really exists for two reasons:

1. So that the "encode the predefined type width in the handle value" optimization can be implemented in MPICH (see the sketch after this list).  IMO this is a pretty questionable optimization on modern processors, but if we were to argue about that we should probably do some benchmarking rather than waving our hands.

2. So that one can implement all handles as integers, which simplifies the implementation of the Fortran bindings and avoids penalizing Fortran codes with one or more handle translation lookups on every MPI call.  The "kind" field of the handle value helps with type checking, which you would otherwise get from the compiler if pointers were used as the handle type instead of integers.
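To make (1) concrete: the trick is that a predefined type's size can live in a fixed bit field of the handle itself, so MPI_Type_size() on a builtin is a shift-and-mask instead of a table lookup or pointer chase.  A minimal sketch; the field placement and names here are made up for illustration, not copied from mpich's headers:

/* illustrative only: pretend the builtin size sits in bits 8-15 */
#define MY_DTYPE_SIZE_MASK  0x0000ff00
#define MY_DTYPE_SIZE_SHIFT 8

static inline int my_builtin_type_size(int handle)
{
    /* no table, no pointer dereference */
    return (handle & MY_DTYPE_SIZE_MASK) >> MY_DTYPE_SIZE_SHIFT;
}

Whether that actually beats a cache-hot lookup table on a modern core is exactly the thing we'd want to benchmark.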

It seems unlikely that you need either of these features for an internal device subsystem.

If I needed some new allocation logic in my netmod/device/whatever, I'd look for something off the shelf first, then roll my own second.  I'd probably stay away from the existing mempool stuff unless there was a killer feature there I'm forgetting about.
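FWIW, "roll my own" for fixed-size internal objects is only a screenful of C.  A minimal sketch, with all names made up and the usual caveats (no locking, and like the existing mempool code it never returns memory to the OS):

#include <stdlib.h>

typedef struct pool_elem { struct pool_elem *next; } pool_elem;

typedef struct my_pool {
    pool_elem *free_list;   /* singly linked list of free slots */
    size_t     elem_size;   /* fixed at init, >= sizeof(pool_elem) */
} my_pool;

static void my_pool_init(my_pool *p, size_t elem_size)
{
    p->free_list = NULL;
    p->elem_size = elem_size < sizeof(pool_elem) ? sizeof(pool_elem) : elem_size;
}

static void *my_pool_alloc(my_pool *p)
{
    pool_elem *e;
    if (!p->free_list) {
        /* out of slots: carve 64 more out of one malloc'd chunk */
        char *chunk = malloc(64 * p->elem_size);
        int i;
        if (!chunk)
            return NULL;
        for (i = 0; i < 64; i++) {
            e = (pool_elem *) (chunk + i * p->elem_size);
            e->next = p->free_list;
            p->free_list = e;
        }
    }
    e = p->free_list;
    p->free_list = e->next;
    return e;
}

static void my_pool_free(my_pool *p, void *obj)
{
    pool_elem *e = obj;
    e->next = p->free_list;
    p->free_list = e;
}

Once you need thread safety or want pages handed back to the OS, that's where Hoard/tcmalloc start to earn their keep.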

-Dave

On Jan 20, 2015, at 10:06 AM, Archer, Charles J <charles.j.archer at intel.com> wrote:

> Hi.
> 
> MPICH has some pretty nice functionality for memory pools implemented, but as far as I can tell, it’s a bit limited for internal device use because each pool you implement needs to consume an entry in the handle space.
> 
> Looking at the available “kinds” of memory pools already implemented:
> 
> typedef enum MPID_Object_kind {
>  MPID_COMM       = 0x1,
>  MPID_GROUP      = 0x2,
>  MPID_DATATYPE   = 0x3,
>  MPID_FILE       = 0x4, /* only used obliquely inside MPID_Errhandler objs */
>  MPID_ERRHANDLER = 0x5,
>  MPID_OP         = 0x6,
>  MPID_INFO       = 0x7,
>  MPID_WIN        = 0x8,
>  MPID_KEYVAL     = 0x9,
>  MPID_ATTR       = 0xa,
>  MPID_REQUEST    = 0xb,
>  MPID_PROCGROUP  = 0xc,               /* These are internal device objects */
>  MPID_VCONN      = 0xd,
>  MPID_GREQ_CLASS = 0xf
>  } MPID_Object_kind;
> 
> It looks like only 0xe is available for implementing a new type of memory pool, limiting me to one additional pool.
> Furthermore, the internal device objects don’t need publishable handles, right?
> It looks like the handle contains 2 bits for (internal, valid, invalid, direct), and 4 bits to contain the object kind.
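> 
> To check my reading, that layout would decode as something like the macros below.  Bit positions and names are my assumption from the description above, not copied out of the headers:
> 
> /* assumed: top 2 bits = handle type, next 4 = object kind, rest = index */
> #define MY_HANDLE_TYPE(h)  (((h) & 0xc0000000) >> 30)
> #define MY_HANDLE_KIND(h)  (((h) & 0x3c000000) >> 26)
> #define MY_HANDLE_INDEX(h) ((h) & 0x03ffffff)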
> 
> Are there any memory pool routines that I’m missing somewhere that aren’t restricted to the limits of what we can publish in a handle?
> Since my object pools are internal, I don’t need to encode anything into a handle.
> 
> Furthermore, if we had a set of non-handle pool routines, internal pools like procgroup and vconn wouldn’t consume entries in the handle space that could be used by future versions of MPI.
> 
> Looking for some guidance here: I don’t want to publish any internal device gorp into the object_kind space…but I want to use memory pools.
> I’ve used the mpich pools on internal objects with a garbage kind value (unintentionally set to the wrong enum value), and it appears I get a new pool and everything works, but just because it works doesn’t mean it’s correct.
> 
> What should I do?  Bracing for “implement your own memory pools, lazy”.


