<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix"><br>
Hi Steven,<br>
<br>
    Sorry, that functionality is not built into MPICH. You'll
    have to find an external alternative like those you mentioned.<br>
<br>
Best,<br>
Antonio<br>
<br>
<br>
On 09/12/2014 01:09 AM, Bowen Yu wrote:<br>
</div>
<blockquote cite="mid:tencent_04B6A7136E3E5EDC45706CF3@qq.com"
type="cite">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<div>Hi,</div>
<div><br>
</div>
      <div>It's easy to control the memory usage of Java containers:
        specify <span style="line-height: 1.5;">JVM parameters such as </span><span
          style="line-height: 1.5;">-Xmx when launching the containers, and
          the JVM will enforce the maximum heap size. However, limiting
          the containers is not enough. When the Application Master
          invokes mpiexec, Hydra spawns a proxy on each of the nodes
          needed by the application via ssh. All the MPI processes are
          forked by the Hydra process manager proxies, and are therefore not
          controlled by the containers. </span></div>
<div><span style="line-height: 1.5;"><br>
</span></div>
      <div><span style="line-height: 1.5;">I don't know whether MPI has a
          mechanism to limit the memory of each MPI process. What I
          really want is to specify something like MAX_MEMORY_USAGE_PER_PROCESS as an
          environment variable and be assured that none of the MPI
          processes will exceed that amount. </span></div>
<div>
<div><br>
</div>
<div><br>
</div>
<div style="font-size: 12px;font-family: Arial
Narrow;padding:2px 0 2px 0;">------------------ Original ------------------</div>
<div style="font-size: 12px;background:#efefef;padding:8px;">
<div><b>From: </b> "Lu, Huiwei";<a class="moz-txt-link-rfc2396E" href="mailto:huiweilu@mcs.anl.gov"><huiweilu@mcs.anl.gov></a>;</div>
<div><b>Date: </b> Thu, Sep 11, 2014 08:49 PM</div>
<div><b>To: </b> <a class="moz-txt-link-rfc2396E" href="mailto:discuss@mpich.org">"discuss@mpich.org"</a><a class="moz-txt-link-rfc2396E" href="mailto:discuss@mpich.org"><discuss@mpich.org></a>;
<wbr></div>
<div><b>Subject: </b> Re: [mpich-discuss] How To Limit Memory
for each MPI Process</div>
</div>
<div><br>
</div>
Hi Steven,<br>
<br>
It’s good to know that you have enabled MPICH to work with
Hadoop YARN.<br>
<br>
    So you have a Java container inside an MPI process and want to
    limit the memory usage of that container. An MPI process is the
    same as any other system process, so you can limit its resource
    usage with either ulimit or cgroups. However, if the Java
    container is not aware of the limit and continues to allocate
    memory, it will still exceed the cap. I don't know if I understand
    the problem correctly, but maybe it's better to limit the memory
    usage of the Java container directly. Is there a way to limit the
    memory usage of a Java container?<br>
<br>
Thanks.<br>
—<br>
Huiwei Lu<br>
Postdoc Appointee<br>
Mathematics and Computer Science Division, Argonne National
Laboratory<br>
<a class="moz-txt-link-freetext" href="http://www.mcs.anl.gov/~huiweilu/">http://www.mcs.anl.gov/~huiweilu/</a><br>
<br>
On Sep 11, 2014, at 6:55 AM, Bowen Yu <a class="moz-txt-link-rfc2396E" href="mailto:jxb002@qq.com"><jxb002@qq.com></a>
wrote:<br>
<br>
> Hi,<br>
> <br>
    &gt; I'm developing an application that enables MPICH executables
    to run on a Hadoop YARN cluster, and most of the functionality is
    finished: <a class="moz-txt-link-freetext" href="https://github.com/alibaba/mpich2-yarn">https://github.com/alibaba/mpich2-yarn</a>. This
    MPICH-YARN uses MPICH-3.1.2 to run MPI executables.<br>
> <br>
    &gt; YARN allocates resources as containers, and each container
    has a specific amount of memory and number of CPU virtual cores.
    MPICH-YARN assumes one MPI process corresponds one-to-one
    with one container, so each MPI process's memory should be
    limited, but I have no idea how. How can I run
    mpiexec so that each process runs with limited resources, such
    as memory and CPU utilization, and the whole MPI program fails
    if one process exceeds its memory limit?<br>
> <br>
    &gt; I know two ways to implement resource limits on Linux:
    one is to use a system call in the program or the ulimit command in
    the shell; the other is to use the cgroup kernel subsystem.<br>
> <br>
> Thanks!<br>
> Steven<br>
> _______________________________________________<br>
> discuss mailing list <a class="moz-txt-link-abbreviated" href="mailto:discuss@mpich.org">discuss@mpich.org</a><br>
> To manage subscription options or unsubscribe:<br>
> <a class="moz-txt-link-freetext" href="https://lists.mpich.org/mailman/listinfo/discuss">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
<br>
</div>
<br>
</blockquote>
<br>
<br>
<pre class="moz-signature" cols="72">--
Antonio J. Peña
Postdoctoral Appointee
Mathematics and Computer Science Division
Argonne National Laboratory
9700 South Cass Avenue, Bldg. 240, Of. 3148
Argonne, IL 60439-4847
<a class="moz-txt-link-abbreviated" href="mailto:apenya@mcs.anl.gov">apenya@mcs.anl.gov</a>
<a class="moz-txt-link-abbreviated" href="http://www.mcs.anl.gov/~apenya">www.mcs.anl.gov/~apenya</a></pre>
</body>
</html>