[mpich-discuss] (no subject)
Reuti
reuti at staff.uni-marburg.de
Tue Feb 4 10:53:33 CST 2014
Hi,
On 04.02.2014 at 17:45, Andreas Gross wrote:
> I am looking for help with MPICH.
>
> I installed mpich-3.0.4 on a CentOS 6.5 PC with 24 cores.
> When I launch with mpiexec on 17 processes, all of them run at 100% CPU.
> When I launch on more than 17 processes, the job gets slower and most of them run at less than 100%.
>
> I have spent a lot of time googling this problem but did not find any answers.
> Somebody suggested it might be a memory problem, but the computer has 32 GB of RAM and the same problem occurs even with a very small case.
> On bigger computers I have run the same code on 2,048 processors with no problem at all.
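For reference, the test described above could be reproduced and monitored with something like the following; "./your_solver" is just a placeholder for the actual binary, and mpstat is part of the sysstat package:

  mpiexec -n 17 ./your_solver   # all ranks reported at ~100% CPU
  mpiexec -n 18 ./your_solver   # ranks reported below 100%, run slows down

  # In a second terminal, watch the per-CPU load while the job runs,
  # e.g. with top (press '1' for the per-CPU view) or:
  mpstat -P ALL 2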
>
>
>
> "lscpu" gives the following response:
>
> Architecture: x86_64
> CPU op-mode(s): 32-bit, 64-bit
> Byte Order: Little Endian
> CPU(s): 24
> On-line CPU(s) list: 0-23
> Thread(s) per core: 2
There are only 12 physical cores; the other 12 logical CPUs are hyper-threading siblings. Depending on the application, performance can degrade once those extra threads are used. For our applications I found that roughly 1.5 times the physical core count can be used effectively (12 x 1.5 = 18), which matches the limit of 17 you observed.
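One way to test this might be to run with only one rank per physical core and pin the ranks there, so the hyper-threading siblings stay idle. A rough sketch, assuming MPICH's Hydra launcher (the binding flag's spelling differs between releases, so check "mpiexec -help"; "./your_solver" is again just a placeholder):

  # Physical cores = Socket(s) x Core(s) per socket = 2 x 6 = 12 on this box.
  lscpu | grep -E 'Socket\(s\)|Core\(s\) per socket|Thread\(s\) per core'

  # Logical CPUs sharing a physical core can be read from sysfs,
  # e.g. cpu0 and its hyper-threading sibling:
  cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list

  # Launch one rank per physical core and bind each rank to its core;
  # newer Hydra releases use -bind-to, older ones -binding.
  mpiexec -n 12 -bind-to core ./your_solver

If that scales cleanly up to 12 ranks, the slowdown beyond that point would indeed come from ranks sharing physical cores.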
-- Reuti
> Core(s) per socket: 6
> Socket(s): 2
> NUMA node(s): 2
> Vendor ID: GenuineIntel
> CPU family: 6
> Model: 45
> Stepping: 7
> CPU MHz: 1200.000
> BogoMIPS: 4590.92
> Virtualization: VT-x
> L1d cache: 32K
> L1i cache: 32K
> L2 cache: 256K
> L3 cache: 15360K
> NUMA node0 CPU(s): 0-5,12-17
> NUMA node1 CPU(s): 6-11,18-23
>
>
>
> "less /proc/cpuinfo" shows:
>
> processor : 0
> vendor_id : GenuineIntel
> cpu family : 6
> model : 45
> model name : Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz
> stepping : 7
> cpu MHz : 1200.000
> cache size : 15360 KB
> physical id : 0
> siblings : 12
> core id : 0
> cpu cores : 6
> apicid : 0
> initial apicid : 0
> fpu : yes
> fpu_exception : yes
> cpuid level : 13
> wp : yes
>