[mpich-discuss] Custom rank for processes

Zhou, Hui zhouh at anl.gov
Mon Jul 1 16:09:12 CDT 2024


> root at ampere-altra-2-1:/# mpirun -n 5   -bind-to user:10,11,12,13 -hosts 192.168.2.200,192.168.2.100 /mpitutorial/tutorials/mpi-hello-world/code/mpi_hello_world
Hello world from processor ampere-altra-2-1, rank 1 out of 5 processors
Hello world from processor ampere-altra-2-1, rank 3 out of 5 processors
Hello world from processor dpr740, rank 0 out of 5 processors
Hello world from processor dpr740, rank 4 out of 5 processors
Hello world from processor dpr740, rank 2 out of 5 processors

> -bind-to user:10,11,12,13
> This would mean on host 192.168.2.100
> P0=>10, P2=>11
> This would mean on host 192.168.2.200
> P0=>10, P2=>11, P3=>12
> Is this a correct understanding? Is it also possible to say which rank will be pinned to which core?

Yes, that is correct: the core list after "user:" is applied per host, so the first local process on each host is bound to core 10, the second to core 11, and so on. The ranks those local processes end up with are the ones shown in your hello world output.
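
For instance, here is a minimal sketch of the same mechanism on a smaller run (the core numbers are placeholders, and I am assuming the default placement puts two processes on each host): with two entries after "user:", the first local process on each host is bound to core 10 and the second to core 11:

  mpirun -n 4 -bind-to user:10,11 -hosts 192.168.2.200,192.168.2.100 /mpitutorial/tutorials/mpi-hello-world/code/mpi_hello_world

The list after "user:" starts over on every host, which is why both of your hosts begin again at core 10 in the 5-process run above.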


> About the rankmap: trying to understand if I can select where a particular rank would be from the list of hosts. Currently, the first host in the list always gets rank 0.
> Can I specify the below ranks?
> mpirun -n 5   -bind-to user:10,11,12,13 -hosts 192.168.2.200,192.168.2.100 /mpitutorial/tutorials/mpi-hello-world/code/mpi_hello_world
>
> Hello world from processor ampere-altra-2-1, rank 1 out of 5 processors => rank0
> Hello world from processor ampere-altra-2-1, rank 3 out of 5 processors => rank1
> Hello world from processor dpr740, rank 0 out of 5 processors           => rank2
> Hello world from processor dpr740, rank 4 out of 5 processors           => rank3
> Hello world from processor dpr740, rank 2 out of 5 processors           => rank4

Yes. You can use "-rankmap (vector,1,1,0,0,0)". Alternatively, you can use "-hosts 192.168.2.100:2,192.168.2.200:3"; the colon syntax specifies how many processes to assign to each host.
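
Concretely, with the hosts from your run, either of the following sketches should give ranks 0 and 1 to ampere-altra-2-1 (192.168.2.100) and ranks 2 through 4 to dpr740 (192.168.2.200). I am assuming here that each -rankmap entry is the index, in -hosts order, of the host that rank should run on, and that the colon counts hand out consecutive ranks host by host; the rankmap argument is quoted so the shell does not interpret the parentheses:

  mpirun -n 5 -rankmap "(vector,1,1,0,0,0)" -bind-to user:10,11,12,13 -hosts 192.168.2.200,192.168.2.100 /mpitutorial/tutorials/mpi-hello-world/code/mpi_hello_world

  mpirun -n 5 -bind-to user:10,11,12,13 -hosts 192.168.2.100:2,192.168.2.200:3 /mpitutorial/tutorials/mpi-hello-world/code/mpi_hello_world

In the second form the order of -hosts matters: listing 192.168.2.100 first is what moves ranks 0 and 1 onto that machine.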

--
Hui