[mpich-discuss] MPI_DIMS_CREATE

Balaji, Pavan balaji at anl.gov
Thu Apr 2 14:17:42 CDT 2015


There's already a ticket on the Forum trac for deprecating this function and replacing it with a better one that, at the very least, takes a communicator argument.  As it stands, the function is pretty useless, but unfortunately it is heavily used.
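
Purely as an illustration of the shape such a replacement could take (the name and argument list below are hypothetical, not anything the Forum has adopted), the idea is to let the implementation consult the communicator instead of a bare process count:

   interface
      ! Hypothetical sketch only: neither this name nor this signature
      ! exists in MPI or MPICH.  With a communicator available, the
      ! implementation could use its size and any topology information
      ! rather than just factoring an integer handed in by the user.
      subroutine MPIX_Dims_create_from_comm( comm, ndims, dims, ierror )
         integer, intent(in)    :: comm
         integer, intent(in)    :: ndims
         integer, intent(inout) :: dims(ndims)
         integer, intent(out)   :: ierror
      end subroutine MPIX_Dims_create_from_comm
   end interface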

  -- Pavan

> On Apr 2, 2015, at 1:07 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
> 
> FWIW, this function is described as a "Cartesian Convenience Function"
> that "helps the user select a balanced distribution of processes per
> coordinate direction", which means it really isn't required if the
> user knows the math.
> 
> Obviously, your case is trivially solved because you already know
> sqrt(nnodes), but I'm sure you are not using it this way in the real
> application.  However, there is nothing stopping you from writing a
> better prime-factorization routine than MPICH's, or whatever else is
> necessary to get the behavior you want.
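> 
> For a 2-D grid, for example, all the factorization you need is the
> largest divisor of nnodes that does not exceed sqrt(nnodes).  A rough
> sketch (the program and routine names here are just made up for
> illustration):
> 
> program diy_dims
>   implicit none
>   integer :: dims(2)
>   call my_dims_create_2d( 361, dims )
>   write(*,*) dims                        ! prints 19 19
> contains
>   subroutine my_dims_create_2d( nnodes, dims )
>     integer, intent(in)  :: nnodes
>     integer, intent(out) :: dims(2)
>     integer :: d
>     dims = (/ nnodes, 1 /)               ! worst case: fall back to 1-D
>     d = 2
>     do while ( d * d <= nnodes )
>        ! keep the largest divisor d <= sqrt(nnodes); the pair
>        ! (nnodes/d, d) is then as close to square as possible
>        if( mod( nnodes, d ) == 0 ) dims = (/ nnodes / d, d /)
>        d = d + 1
>     enddo
>   end subroutine my_dims_create_2d
> end program diy_dims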
> 
> Given the number of bugs in this function I've seen reported over the
> years, I am inclined to ask the MPI Forum to deprecate it and
> encourage users to do the math themselves, since it seems more often
> than not that they can do better.
> 
> Best,
> 
> Jeff
> 
> On Tue, Mar 31, 2015 at 12:10 AM, Valery <valeryweber at hotmail.com> wrote:
>> Dear All
>> 
>> I noticed that MPI_DIMS_CREATE cannot split, e.g., 361 into a
>> 19 * 19 processor grid. Is that a feature?
>> 
>> The code follows.
>> 
>> I used
>> MPICH-3.1.4
>> gcc-4.9.2
>> 
>> valery
>> 
>> 
>> cat mpi_dims.f90
>> program test
>>  use mpi
>>  implicit none
>>  integer :: ierr, nnodes, ndims, dims(2), i
>>  call MPI_INIT( ierr )
>>  ndims = 2
>>  do i = 1, 200
>>     nnodes = i**2
>>     dims(:) = 0
>>     call MPI_DIMS_CREATE( nnodes, ndims, dims, ierr )
>>     ! report mismatches: print i and the returned dims(1), dims(2)
>>     if( dims(1) /= i ) write(*,*) i, dims
>>  enddo
>>  call MPI_FINALIZE(ierr)
>> end program test
>> 
>> mpif90 mpi_dims.f90
>> 
>> 
>> mpiexec -n 1 ./a.out
>>          19         361           1
>>          41        1681           1
>>          43        1849           1
>>          71        5041           1
>>          73        5329           1
>>          79        6241           1
>>          83        6889           1
>>          89        7921           1
>>         137       18769           1
>>         139       19321           1
>>         149       22201           1
>>         151       22801           1
>>         157       24649           1
>>         163       26569           1
>>         167       27889           1
>>         173       29929           1
>>         179       32041           1
>>         181       32761           1
>> 
>> 
>> 
> 
> 
> 
> -- 
> Jeff Hammond
> jeff.science at gmail.com
> http://jeffhammond.github.io/

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji

_______________________________________________
discuss mailing list     discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss

