[mpich-discuss] Limitation of open files on Mira
jt.meng at siat.ac.cn
Thu Apr 20 10:18:44 CDT 2017
I noticed that only after sending the email.
Sorry about that.
jt.meng at siat.ac.cn
From: Jeff Hammond
Date: 2017-04-20 23:05
To: MPICH
Subject: Re: [mpich-discuss] Limitation of open files on Mira
This is the wrong email list for such a question. Please contact ALCF support regarding Mira.
Jeff
On Thu, Apr 20, 2017 at 1:35 AM, jt.meng at siat.ac.cn <jt.meng at siat.ac.cn> wrote:
Hi Rob,
Can we increase the number of open files on Mira to 1 million? Currently the soft limit on open files is set to 65536, and users can only raise it manually as far as the hard limit of 81920.
Could this hard limit be raised to 1 million on Mira for a one-week test of MPI IO? Thanks.
--------------- /etc/security/limits.conf---------------------------------
root soft nofile 65536
root hard nofile 81920 <--
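For completeness, a process can also raise its own soft limit up to whatever hard limit is configured; only the hard limit itself needs the limits.conf change above. Below is a minimal sketch (not Mira-specific; the file name raise-nofile.c is just an example) that queries RLIMIT_NOFILE with getrlimit() and lifts the soft limit to the hard limit with setrlimit().
--------------- raise-nofile.c (example sketch) ---------------------------
/* Sketch: raise this process's soft open-file limit up to the hard limit.
 * Raising the hard limit itself requires privilege or a limits.conf change. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* A process may raise its soft limit only as far as the hard limit. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}
---------------------------------------------------------------------------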
Best,
jt.meng at siat.ac.cn
From: Latham, Robert J.
Date: 2017-04-19 22:35
To: discuss at mpich.org
Subject: Re: [mpich-discuss] Installation on Blue Gene/Q
On Wed, 2017-04-19 at 13:58 +0200, pramod kumbhar wrote:
> Hi Rob,
>
> Is this already installed/available on Mira? I would like to try this
> to check some MPI I/O hints.
Yes, there are several versions available:
manual progress optimized:
/projects/aurora_app/mpich3-ch4-ofi/install/gnu/bin
auto progress optimized:
/projects/aurora_app/mpich3-ch4-ofi/install-auto/gnu/bin
manual progress debug:
/projects/aurora_app/mpich3-ch4-ofi/install/gnu.debug/bin
auto progress debug:
/projects/aurora_app/mpich3-ch4-ofi/install-auto/gnu.debug/bin
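If it helps, here is a minimal sketch of how one might check which MPI I/O hints a given build actually applies: open a file with an MPI_Info object and read back the hints with MPI_File_get_info. The file name and the hint keys/values (romio_cb_write, cb_buffer_size) are only examples; which hints are honored depends on the ROMIO build.
--------------- check-hints.c (example sketch) ----------------------------
/* Sketch: set a couple of example I/O hints, then print the hints the
 * library actually applied to the file handle. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Info info;
    MPI_Info_create(&info);
    /* Example hints only; supported keys/values depend on the ROMIO build. */
    MPI_Info_set(info, "romio_cb_write", "enable");
    MPI_Info_set(info, "cb_buffer_size", "16777216");

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "testfile",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

    /* Query the hints the implementation actually set. */
    MPI_Info used;
    MPI_File_get_info(fh, &used);

    int rank, nkeys;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Info_get_nkeys(used, &nkeys);
    if (rank == 0) {
        for (int i = 0; i < nkeys; i++) {
            char key[MPI_MAX_INFO_KEY + 1], value[MPI_MAX_INFO_VAL + 1];
            int flag;
            MPI_Info_get_key(used, i, key);
            MPI_Info_get(used, key, MPI_MAX_INFO_VAL, value, &flag);
            printf("%s = %s\n", key, value);
        }
    }

    MPI_Info_free(&used);
    MPI_Info_free(&info);
    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
---------------------------------------------------------------------------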
There are some performance considerations, both positive and negative:
Current performance improvements versus [IBM's vendor-supplied MPICH]:
1.) Pt2pt latency - better than 2x speedup for small messages; large
messages with few ranks per node benefit less because of detriment #2
listed below (see the ping-pong sketch after this list).
2.) RMA latency - far better for small messages, with the gap gradually
closing for large messages in the case of put and get.
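For reference, the small-message pt2pt numbers above are the kind of thing a simple ping-pong reproduces. Below is a minimal sketch (iteration count and message size are arbitrary examples) that times a round trip between ranks 0 and 1 with MPI_Wtime.
--------------- pingpong.c (example sketch) -------------------------------
/* Sketch: small-message ping-pong between ranks 0 and 1 to compare
 * pt2pt latency across MPICH builds. Run with at least 2 ranks. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 10000;
    char buf[8] = {0};              /* 8-byte "small" message */

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("half round-trip latency: %.3f us\n",
               (t1 - t0) / iters / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}
---------------------------------------------------------------------------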
Current performance detriments versus [IBM's vendor-supplied MPICH]:
1.) No optimized collectives -- all collectives are pt2pt algorithms in
MPICH CH4. There is a plan to develop optimized collectives via OFI
triggered operations but that work wouldn't begin until the April-May
timeframe at the earliest. There is also a plan to make optimized
pt2pt collectives available sooner.
2.) Only 1 rget injection FIFO is used per rank for pt2pt communication
--- PAMI can use all 11 to maximize internode bandwidth. You would
mainly see an impact if you are running < 8 ranks per node and passing
large messages.
==rob
--
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/
_______________________________________________
discuss mailing list discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss
More information about the discuss mailing list