[mpich-discuss] mpi windows with groups mpich2

Helen Kershaw hkershaw at ucar.edu
Mon Apr 28 12:27:33 CDT 2014


Will do.

On 4/28/14 11:24 AM, Rajeev Thakur wrote:
> Can you try the latest release, 3.1?
>
> Rajeev
>
> On Apr 28, 2014, at 12:21 PM, Helen Kershaw <hkershaw at ucar.edu> wrote:
>
>> Version 3.0.4.
>>
>> Helen
>>
>> On 4/28/14 10:38 AM, Rajeev Thakur wrote:
>>> The program runs fine on my Mac laptop. Which version of MPICH are you using?
>>>
>>> Rajeev
>>>
>>> On Apr 28, 2014, at 10:41 AM, Helen Kershaw <hkershaw at ucar.edu> wrote:
>>>
>>>> Hi,
>>>>
>>>> I'm trying to use MPI one-sided communication with sub-communicators.
>>>> I've put my test code at the end of the email.  It runs on 8 processes, creates two sub-communicators, creates a window on each, then tries to do some one-sided communication.
>>>>
>>>> I can use groups OR windows without a problem, but when I use them
>>>> together (in the test code) I get a core dump from the first 4 tasks.  I
>>>> think the one-sided communication may be using the local rank in the
>>>> sub-communicator (mpi_comm_grid) as if it were the global rank in
>>>> mpi_comm_world.
>>>>
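>>>> As a quick sanity check on that theory, the world-to-group mapping can
>>>> be verified explicitly (a minimal sketch; it reuses rank, group_all and
>>>> subgroup from the test code below and declares its own locals):
>>>>
>>>> integer :: world_ranks(1), grid_ranks(1)
>>>> world_ranks(1) = rank
>>>> ! translate my mpi_comm_world rank into the subgroup's numbering
>>>> call mpi_group_translate_ranks(group_all, 1, world_ranks, subgroup, grid_ranks, ierr)
>>>> ! grid_ranks(1) should match what mpi_comm_rank returns on mpi_comm_grid
>>>>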
>>>> I can run the test code fine using Open MPI, but not MPICH2.  I might be
>>>> missing something obvious.
>>>>
>>>> Thanks for your help,
>>>> Helen
>>>>
>>>> -----
>>>> program test_group_win
>>>>
>>>> use mpi
>>>>
>>>> implicit none
>>>>
>>>> ! mpi
>>>> integer            :: rank, size, ierr
>>>> ! groups
>>>> integer, parameter :: group_size = 4
>>>> integer            :: group_members(group_size)
>>>> integer            :: group_all ! mpi_comm_world group
>>>> integer            :: subgroup
>>>> integer            :: mpi_comm_grid
>>>> integer            :: local_rank ! rank within subgroup
>>>>
>>>>
>>>> ! window
>>>> integer                        :: win
>>>> integer                        :: bytesize !> size in bytes of each element in the window
>>>> integer(KIND=MPI_ADDRESS_KIND) :: window_size
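>>>> ! Cray-pointer pair: aa holds the address, duplicate is the pointee array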
>>>> pointer(aa, duplicate)
>>>> real                           :: duplicate(*)
>>>> integer(KIND=MPI_ADDRESS_KIND) :: target_disp !> displacement in window
>>>> integer                        :: owner
>>>> real                           :: result
>>>>
>>>> call mpi_init(ierr)
>>>> call mpi_comm_size(mpi_comm_world, size, ierr)
>>>> call mpi_comm_rank(mpi_comm_world, rank, ierr)
>>>>
>>>> ! create groups
>>>> call mpi_comm_group(mpi_comm_world, group_all, ierr)
>>>>
>>>> if (rank < 4 ) then
>>>>    group_members = (/0,1,2,3/)
>>>> else
>>>>    group_members = (/4,5,6,7/)
>>>> endif
>>>>
>>>> call mpi_group_incl(group_all, group_size, group_members, subgroup, ierr)
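>>>> ! each rank passes only the group it belongs to; because the two groups
>>>> ! are disjoint, this one collective call creates both sub-communicators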
>>>> call mpi_comm_create(mpi_comm_world, subgroup, mpi_comm_grid, ierr)
>>>> call mpi_comm_rank(mpi_comm_grid, local_rank, ierr) ! rank within group
>>>>
>>>> ! create window
>>>> call mpi_type_size(mpi_real, bytesize, ierr)
>>>> window_size = bytesize ! one element in the window
>>>> call MPI_ALLOC_MEM(window_size, mpi_info_null, aa, ierr) ! sets aa directly; no separate malloc needed
>>>>
>>>> duplicate(1) = rank
>>>>
>>>> call mpi_win_create(duplicate, window_size, bytesize, MPI_INFO_NULL, &
>>>>                     mpi_comm_grid, win, ierr)
>>>>
>>>> ! grab data from the next rank in my group; owner is a rank in
>>>> ! mpi_comm_grid, not mpi_comm_world
>>>> if (rank < 3 ) then
>>>>    owner = rank + 1
>>>> elseif (rank < 7) then
>>>>    owner = rank + 1 - 4
>>>> else
>>>>    owner = 0
>>>> endif
>>>>
>>>> print*, 'rank owner', rank, owner
>>>>
>>>> target_disp = 0
>>>>
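>>>> ! target_disp counts in units of the window's disp_unit (bytesize here),
>>>> ! so 0 addresses the first real in the target's window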
>>>> call mpi_win_lock(MPI_LOCK_SHARED, owner, 0, win, ierr)
>>>> call mpi_get(result, 1, mpi_real, owner, target_disp, 1, mpi_real, &
>>>>              win, ierr)
>>>> call mpi_win_unlock(owner, win, ierr)
>>>>
>>>> print*, 'rank, result ', rank, result
>>>>
>>>> ! free window
>>>> call mpi_win_free(win, ierr)
>>>> call mpi_free_mem(duplicate, ierr)
>>>>
>>>> call mpi_finalize(ierr)
>>>>
>>>> end program test_group_win
>>>> ----
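>>>>
>>>> For reference, this is roughly how I build and run it (the
>>>> -fcray-pointer flag is a gfortran-specific guess for the Cray-pointer
>>>> declaration; adjust for your compiler):
>>>>
>>>> mpif90 -fcray-pointer test_group_win.f90 -o test_group_win
>>>> mpiexec -n 8 ./test_group_win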