[mpich-discuss] Understanding process bindings in MPICH
Benson Muite
benson_muite at emailplus.org
Fri May 15 13:02:47 CDT 2020
On Fri, May 15, 2020, at 8:52 PM, hritikesh semwal via discuss wrote:
> Hello,
>
> I am working on a parallel CFD solver with MPI, and I am using an account on a cluster to run my executable. The hardware of the node I am using is as follows:
>
> Architecture: x86_64
> CPU op-mode(s): 32-bit, 64-bit
> Byte Order: Little Endian
> CPU(s): 32
> On-line CPU(s) list: 0-31
> Thread(s) per core: 2
> Core(s) per socket: 8
> CPU socket(s): 2
> NUMA node(s): 2
> Vendor ID: GenuineIntel
> CPU family: 6
> Model: 62
> Stepping: 4
> CPU MHz: 2600.079
> BogoMIPS: 5199.25
> Virtualization: VT-x
> L1d cache: 32K
> L1i cache: 32K
> L2 cache: 256K
> L3 cache: 20480K
> NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30
> NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31
>
> Initially, I was running my executable without any binding options, and in that case, whenever I switched from 2 to 4 processes, the computation time increased along with the communication time inside an iterative loop.
>
> Today, I read about binding options in MPI, through which I can manage the allocation of processes to processors. I used the "-bind-to core" option and the results were different: the time decreased up to 16 processes, but with 24 and 32 processes it started increasing again. The timing results are as follows:
> 2 procs- 160 seconds, 4 procs- 84 seconds, 8 procs- 45 seconds, 16 procs- 28 seconds, 24 procs- 38 seconds, 32 procs- 34 seconds.
This seems reasonable. Are you able to turn off hyperthreading? For most numerical codes it is not useful, as they are typically memory-bandwidth limited, so with more than 16 processes you will not see much speedup.
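
One way to check what the binding option is actually doing is to print, from each rank, the logical CPU it is running on and compare that against the NUMA node CPU lists from lscpu above. Below is a minimal sketch, assuming Linux and glibc's sched_getcpu() (a GNU extension); it is only meant to illustrate the check, not part of your solver.

#define _GNU_SOURCE        /* for sched_getcpu() */
#include <mpi.h>
#include <sched.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);

    /* sched_getcpu() returns the logical CPU this rank is currently
       running on; with "-bind-to core" each rank should stay on one
       core (i.e. on one of that core's two hardware threads). */
    printf("rank %d of %d on %s: logical CPU %d\n",
           rank, size, host, sched_getcpu());

    MPI_Finalize();
    return 0;
}

Compiled with mpicc and run as, for example, "mpiexec -n 16 -bind-to core ./check_binding" (the name is just an example), the reported CPUs should fall on distinct cores spread over both NUMA nodes. If two ranks report CPUs that are hardware-thread siblings of the same core (hwloc's lstopo shows which logical CPUs share a core), that would be consistent with the poor scaling you see past 16 processes.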