[mpich-discuss] How To Limit Memory for each MPI Process

Lu, Huiwei huiweilu at mcs.anl.gov
Thu Sep 11 07:49:31 CDT 2014


Hi Steven,

It’s good to know that you have enabled MPICH to work with Hadoop YARN.

So you have a Java container inside an MPI process and want to limit the container's memory usage. An MPI process is the same as an ordinary system process, so you can limit its resource usage with either ulimit or cgroups. However, if the Java container is not aware of the limit and continues to allocate memory, it will still run over it. I don't know if I understand the problem correctly, but it may be better to limit the memory usage of the Java container itself. Is there a way to limit the memory usage of a Java container?
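For example, here is a minimal sketch of the ulimit-style approach using the setrlimit() system call (this assumes you can add the call to the MPI process itself before it starts the per-process work, and the 1 GiB cap is only an illustrative value, not something MPICH or mpich2-yarn requires):

/* Sketch: cap the virtual address space of each MPI process with
 * setrlimit(), the programmatic counterpart of `ulimit -v`.
 * Allocations beyond the cap fail, the process exits, and mpiexec
 * then tears down the rest of the job. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Illustrative limit: 1 GiB of address space per process. */
    struct rlimit rl;
    rl.rlim_cur = 1024UL * 1024UL * 1024UL;   /* soft limit */
    rl.rlim_max = 1024UL * 1024UL * 1024UL;   /* hard limit */
    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* ... run the per-process work (e.g. launch the Java container) ... */

    MPI_Finalize();
    return 0;
}

A cgroup gives you stricter enforcement (the kernel kills the process on the memory limit rather than just failing allocations), but the idea is the same: the limit is applied per process, and a process that dies takes the whole MPI job down with it.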

Thanks.
—
Huiwei Lu
Postdoc Appointee
Mathematics and Computer Science Division, Argonne National Laboratory
http://www.mcs.anl.gov/~huiweilu/

On Sep 11, 2014, at 6:55 AM, Bowen Yu <jxb002 at qq.com> wrote:

> Hi,
> 
> I'm developing an application that enables MPICH executables to run on a Hadoop YARN cluster, and most of the functionality is finished: https://github.com/alibaba/mpich2-yarn. MPICH-YARN uses MPICH-3.1.2 to run the MPI executables.
> 
> YARN allocates resources in containers, and each container has a specific amount of memory and a number of CPU virtual cores. MPICH-YARN assumes a one-to-one correspondence between an MPI process and a container, so each MPI process's memory should be limited, but I have no idea how. How can I run mpiexec so that each process runs with limited resources, such as memory and CPU utilization, and so that the whole MPI program fails if one process exceeds its memory limit?
> 
> I know two ways to implement resource limits on Linux: one is to use system calls in the program or the ulimit command in the shell; the other is to use the kernel's cgroup mechanism.
> 
> Thanks!
> Steven
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss



