[mpich-discuss] gather, send
Ryan Crocker
rcrocker at uvm.edu
Wed May 1 19:46:24 CDT 2013
I dropped down to two processes to debug:
The quick check:
print*,'send_buffer',irank,maxval(send_buffer),minval(send_buffer)
print*,'gsend_buffer',irank,maxval(gsend_buffer),minval(gsend_buffer)
send_buffer 1 1.0000000000000000 0.0000000000000000
send_buffer 2 1.0000000000000000 0.0000000000000000
gsend_buffer 2 0.0000000000000000 0.0000000000000000
gsend_buffer 1 0.0000000000000000 0.0000000000000000
The long print check:
do k=kmin_,kmax_
   do j=gy_wmin-1,gy_wmax+1
      do i=imin_,imax_
         print*,irank,i,j,k,send_buffer(nodes_n(i,j,k)),gsend_buffer(nodes_n(i,j,k))
      end do
   end do
end do
1 6 50 6 0.0000000000000000 0.0000000000000000
1 7 50 6 0.0000000000000000 0.0000000000000000
1 8 50 6 0.0000000000000000 0.0000000000000000
1 9 50 6 0.0000000000000000 0.0000000000000000
2 26 50 6 0.0000000000000000 0.0000000000000000
2 27 50 6 0.0000000000000000 0.0000000000000000
2 28 50 6 0.0000000000000000 0.0000000000000000
2 29 50 6 0.0000000000000000 0.0000000000000000
2 30 50 6 0.0000000000000000 0.0000000000000000
2 31 50 6 0.0000000000000000 0.0000000000000000
2 32 50 6 0.0000000000000000 0.0000000000000000
2 33 50 6 0.0000000000000000 0.0000000000000000
2 34 50 6 0.0000000000000000 0.0000000000000000
1 10 50 6 0.0000000000000000 0.0000000000000000
2 35 50 6 0.0000000000000000 0.0000000000000000
2 36 50 6 0.0000000000000000 0.0000000000000000
2 37 50 6 0.0000000000000000 0.0000000000000000
2 38 50 6 0.0000000000000000 0.0000000000000000
2 39 50 6 0.0000000000000000 0.0000000000000000
2 40 50 6 0.0000000000000000 0.0000000000000000
2 41 50 6 0.0000000000000000 0.0000000000000000
2 42 50 6 0.0000000000000000 0.0000000000000000
2 43 50 6 0.0000000000000000 0.0000000000000000
2 44 50 6 0.0000000000000000 0.0000000000000000
2 45 50 6 0.0000000000000000 0.0000000000000000
2 26 51 6 1.0000000000000000 0.0000000000000000
2 27 51 6 1.0000000000000000 0.0000000000000000
2 28 51 6 1.0000000000000000 0.0000000000000000
2 29 51 6 1.0000000000000000 0.0000000000000000
2 30 51 6 1.0000000000000000 0.0000000000000000
2 31 51 6 1.0000000000000000 0.0000000000000000
2 32 51 6 1.0000000000000000 0.0000000000000000
2 33 51 6 1.0000000000000000 0.0000000000000000
2 34 51 6 1.0000000000000000 0.0000000000000000
2 35 51 6 1.0000000000000000 0.0000000000000000
2 36 51 6 1.0000000000000000 0.0000000000000000
2 37 51 6 1.0000000000000000 0.0000000000000000
2 38 51 6 1.0000000000000000 0.0000000000000000
2 39 51 6 1.0000000000000000 0.0000000000000000
2 40 51 6 1.0000000000000000 0.0000000000000000
2 41 51 6 1.0000000000000000 0.0000000000000000
2 42 51 6 1.0000000000000000 0.0000000000000000
2 43 51 6 1.0000000000000000 0.0000000000000000
2 44 51 6 1.0000000000000000 0.0000000000000000
2 45 51 6 1.0000000000000000 0.0000000000000000
2 26 52 6 0.0000000000000000 0.0000000000000000
1 11 50 6 0.0000000000000000 0.0000000000000000
1 12 50 6 0.0000000000000000 0.0000000000000000
1 13 50 6 0.0000000000000000 0.0000000000000000
1 14 50 6 0.0000000000000000 0.0000000000000000
1 15 50 6 0.0000000000000000 0.0000000000000000
1 16 50 6 0.0000000000000000 0.0000000000000000
1 17 50 6 0.0000000000000000 0.0000000000000000
1 18 50 6 0.0000000000000000 0.0000000000000000
1 19 50 6 0.0000000000000000 0.0000000000000000
1 20 50 6 0.0000000000000000 0.0000000000000000
1 21 50 6 0.0000000000000000 0.0000000000000000
1 22 50 6 0.0000000000000000 0.0000000000000000
1 23 50 6 0.0000000000000000 0.0000000000000000
1 24 50 6 0.0000000000000000 0.0000000000000000
1 25 50 6 0.0000000000000000 0.0000000000000000
1 6 51 6 1.0000000000000000 0.0000000000000000
1 7 51 6 1.0000000000000000 0.0000000000000000
1 8 51 6 1.0000000000000000 0.0000000000000000
1 9 51 6 1.0000000000000000 0.0000000000000000
1 10 51 6 1.0000000000000000 0.0000000000000000
1 11 51 6 1.0000000000000000 0.0000000000000000
1 12 51 6 1.0000000000000000 0.0000000000000000
1 13 51 6 1.0000000000000000 0.0000000000000000
1 14 51 6 1.0000000000000000 0.0000000000000000
1 15 51 6 1.0000000000000000 0.0000000000000000
1 16 51 6 1.0000000000000000 0.0000000000000000
1 17 51 6 1.0000000000000000 0.0000000000000000
1 18 51 6 1.0000000000000000 0.0000000000000000
1 19 51 6 1.0000000000000000 0.0000000000000000
1 20 51 6 1.0000000000000000 0.0000000000000000
1 21 51 6 1.0000000000000000 0.0000000000000000
1 22 51 6 1.0000000000000000 0.0000000000000000
1 23 51 6 1.0000000000000000 0.0000000000000000
1 24 51 6 1.0000000000000000 0.0000000000000000
1 25 51 6 1.0000000000000000 0.0000000000000000
1 6 52 6 0.0000000000000000 0.0000000000000000
1 7 52 6 0.0000000000000000 0.0000000000000000
1 8 52 6 0.0000000000000000 0.0000000000000000
1 9 52 6 0.0000000000000000 0.0000000000000000
1 10 52 6 0.0000000000000000 0.0000000000000000
1 11 52 6 0.0000000000000000 0.0000000000000000
1 12 52 6 0.0000000000000000 0.0000000000000000
1 13 52 6 0.0000000000000000 0.0000000000000000
1 14 52 6 0.0000000000000000 0.0000000000000000
1 15 52 6 0.0000000000000000 0.0000000000000000
1 16 52 6 0.0000000000000000 0.0000000000000000
1 17 52 6 0.0000000000000000 0.0000000000000000
1 18 52 6 0.0000000000000000 0.0000000000000000
1 19 52 6 0.0000000000000000 0.0000000000000000
1 20 52 6 0.0000000000000000 0.0000000000000000
1 21 52 6 0.0000000000000000 0.0000000000000000
1 22 52 6 0.0000000000000000 0.0000000000000000
1 23 52 6 0.0000000000000000 0.0000000000000000
1 24 52 6 0.0000000000000000 0.0000000000000000
1 25 52 6 0.0000000000000000 0.0000000000000000
2 27 52 6 0.0000000000000000 0.0000000000000000
2 28 52 6 0.0000000000000000 0.0000000000000000
2 29 52 6 0.0000000000000000 0.0000000000000000
2 30 52 6 0.0000000000000000 0.0000000000000000
2 31 52 6 0.0000000000000000 0.0000000000000000
2 32 52 6 0.0000000000000000 0.0000000000000000
2 33 52 6 0.0000000000000000 0.0000000000000000
2 34 52 6 0.0000000000000000 0.0000000000000000
2 35 52 6 0.0000000000000000 0.0000000000000000
2 36 52 6 0.0000000000000000 0.0000000000000000
2 37 52 6 0.0000000000000000 0.0000000000000000
2 38 52 6 0.0000000000000000 0.0000000000000000
2 39 52 6 0.0000000000000000 0.0000000000000000
2 40 52 6 0.0000000000000000 0.0000000000000000
2 41 52 6 0.0000000000000000 0.0000000000000000
2 42 52 6 0.0000000000000000 0.0000000000000000
2 43 52 6 0.0000000000000000 0.0000000000000000
2 44 52 6 0.0000000000000000 0.0000000000000000
2 45 52 6 0.0000000000000000 0.0000000000000000
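For reference, a minimal root-side check (an illustrative sketch only; src and off are local integers introduced here, not in the original code): since iroot = 1 is passed straight to MPI_GATHER and MPI ranks are 0-based, the gathered data lands on real rank 1, i.e. irank = iroot+1 = 2 with the 1-based numbering quoted below, and the block contributed by process src (1-based numbering) starts at offset (src-1)*max_buff:

if (irank == iroot+1) then                    ! real MPI rank 1, the root actually used
   do src = 1, nproc
      off = (src-1)*max_buff                  ! each rank's block is max_buff elements long
      print *, 'block from irank', src, &
               maxval(gsend_buffer(off+1:off+max_buff)), &
               minval(gsend_buffer(off+1:off+max_buff))
   end do
end if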
On May 1, 2013, at 5:34 PM, Rajeev Thakur wrote:
> The gather is sending everything to real rank 1. Since you have incremented irank, you need to print the values at irank=2.
>
> On May 1, 2013, at 7:31 PM, Ryan Crocker wrote:
>
>> My call for the memory allocation of the buffers:
>>
>> call parallel_min_int_0d(imaxo_*jmaxo_*kmaxo_,max_buff)
>> allocate(send_buffer(max_buff))
>> max_global_buff = max_buff*nproc
>> allocate(gsend_buffer (max_global_buff))
>>
>> In my MPI init subroutine:
>>
>>
>> call MPI_INIT(ierr)
>> call MPI_COMM_RANK(MPI_COMM_WORLD,irank,ierr)
>> call MPI_COMM_SIZE(MPI_COMM_WORLD,nproc,ierr)
>> irank = irank+1
>> iroot = 1
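A minimal sketch of one way to reconcile this bookkeeping with MPI's 0-based ranks (illustrative only; iroot_mpi is a name introduced here, and comm and max_buff are as in the gather call quoted elsewhere in the thread): convert the 1-based root back to 0-based at the call site, so the gathered data ends up on irank = iroot = 1 rather than on real rank 1:

iroot_mpi = iroot - 1                          ! MPI_GATHER expects a 0-based root rank
call MPI_GATHER(send_buffer, max_buff, MPI_REAL, &
                gsend_buffer, max_buff, MPI_REAL, &
                iroot_mpi, comm, ierr)
if (irank == iroot) then                       ! 1-based check: real rank 0, the root above
   ! gsend_buffer is valid (only) here
end if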
>>
>>
>> On May 1, 2013, at 5:21 PM, Rajeev Thakur wrote:
>>
>>> Is the send_buffer allocated to be large enough to hold max_buff on all ranks?
>>>
>>> Is gsend_buffer large enough to hold max_buff * nprocs?
>>>
>>> Is rank 1, not rank 0, the root?
>>>
>>> On May 1, 2013, at 7:16 PM, Ryan Crocker wrote:
>>>
>>>> I'm checking just the gather call, and the gsend_buffer from it is still empty:
>>>>
>>>> max_buff is now the largest core domain.
>>>>
>>>> print*,max_buff -> 24750
>>>>
>>>> call MPI_GATHER(send_buffer, max_buff, MPI_REAL, gsend_buffer, max_buff, MPI_REAL, iroot, comm, ierr)
>>>>
>>>> A printout of the send_buffer and the gsend_buffer where they should have non-zero terms:
>>>>
>>>> print*,irank,i,j,k,send_buffer(i,j,k),gsend_buffer(i,j,k) ->
>>>>
>>>> 1 6 50 6 0.0000000000000000 0.0000000000000000
>>>> 1 7 50 6 0.0000000000000000 0.0000000000000000
>>>> 1 8 50 6 0.0000000000000000 0.0000000000000000
>>>> 1 9 50 6 0.0000000000000000 0.0000000000000000
>>>> 1 10 50 6 0.0000000000000000 0.0000000000000000
>>>> 1 11 50 6 0.0000000000000000 0.0000000000000000
>>>> 1 12 50 6 0.0000000000000000 0.0000000000000000
>>>> 1 13 50 6 0.0000000000000000 0.0000000000000000
>>>> 1 14 50 6 0.0000000000000000 0.0000000000000000
>>>> 1 15 50 6 0.0000000000000000 0.0000000000000000
>>>> 1 16 50 6 0.0000000000000000 0.0000000000000000
>>>> 1 17 50 6 0.0000000000000000 0.0000000000000000
>>>> 1 18 50 6 0.0000000000000000 0.0000000000000000
>>>> 1 19 50 6 0.0000000000000000 0.0000000000000000
>>>> 1 20 50 6 0.0000000000000000 0.0000000000000000
>>>> 1 21 50 6 0.0000000000000000 0.0000000000000000
>>>> 1 22 50 6 0.0000000000000000 0.0000000000000000
>>>> 1 23 50 6 0.0000000000000000 0.0000000000000000
>>>> 1 24 50 6 0.0000000000000000 0.0000000000000000
>>>> 1 25 50 6 0.0000000000000000 0.0000000000000000
>>>> 1 6 51 6 1.0000000000000000 0.0000000000000000
>>>> 1 7 51 6 1.0000000000000000 0.0000000000000000
>>>> 1 8 51 6 1.0000000000000000 0.0000000000000000
>>>> 1 9 51 6 1.0000000000000000 0.0000000000000000
>>>> 1 10 51 6 1.0000000000000000 0.0000000000000000
>>>> 1 11 51 6 1.0000000000000000 0.0000000000000000
>>>> 1 12 51 6 1.0000000000000000 0.0000000000000000
>>>> 1 13 51 6 1.0000000000000000 0.0000000000000000
>>>> 1 14 51 6 1.0000000000000000 0.0000000000000000
>>>> 1 15 51 6 1.0000000000000000 0.0000000000000000
>>>> 1 16 51 6 1.0000000000000000 0.0000000000000000
>>>> 1 17 51 6 1.0000000000000000 0.0000000000000000
>>>> 1 18 51 6 1.0000000000000000 0.0000000000000000
>>>> 1 19 51 6 1.0000000000000000 0.0000000000000000
>>>> 1 20 51 6 1.0000000000000000 0.0000000000000000
>>>> 1 21 51 6 1.0000000000000000 0.0000000000000000
>>>> 1 22 51 6 1.0000000000000000 0.0000000000000000
>>>> 1 23 51 6 1.0000000000000000 0.0000000000000000
>>>> 1 24 51 6 1.0000000000000000 0.0000000000000000
>>>> 1 25 51 6 1.0000000000000000 0.0000000000000000
>>>> 1 6 52 6 0.0000000000000000 0.0000000000000000
>>>> 1 7 52 6 0.0000000000000000 0.0000000000000000
>>>> 1 8 52 6 0.0000000000000000 0.0000000000000000
>>>> 1 9 52 6 0.0000000000000000 0.0000000000000000
>>>> 1 10 52 6 0.0000000000000000 0.0000000000000000
>>>> 1 11 52 6 0.0000000000000000 0.0000000000000000
>>>> 1 12 52 6 0.0000000000000000 0.0000000000000000
>>>> 1 13 52 6 0.0000000000000000 0.0000000000000000
>>>> 1 14 52 6 0.0000000000000000 0.0000000000000000
>>>> 1 15 52 6 0.0000000000000000 0.0000000000000000
>>>> 1 16 52 6 0.0000000000000000 0.0000000000000000
>>>> 1 17 52 6 0.0000000000000000 0.0000000000000000
>>>> 1 18 52 6 0.0000000000000000 0.0000000000000000
>>>> 1 19 52 6 0.0000000000000000 0.0000000000000000
>>>> 1 20 52 6 0.0000000000000000 0.0000000000000000
>>>> 1 21 52 6 0.0000000000000000 0.0000000000000000
>>>> 1 22 52 6 0.0000000000000000 0.0000000000000000
>>>> 1 23 52 6 0.0000000000000000 0.0000000000000000
>>>> 1 24 52 6 0.0000000000000000 0.0000000000000000
>>>> 1 25 52 6 0.0000000000000000 0.0000000000000000
>>>>
>>>>
>>>> On May 1, 2013, at 5:06 PM, Rajeev Thakur wrote:
>>>>
>>>>> Just before the call to gather, check the value of send_count and check that send_buffer is not all zeros.
>>>>>
>>>>> send_count has to be the same on all ranks. Otherwise you have to use Gatherv.
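For the unequal-count case, a rough MPI_GATHERV sketch (illustrative only; recvcounts(nproc), displs(nproc) and the loop index r are helper variables introduced here, and the root is passed the same way as in the thread's own gather call): each rank sends its own send_count, and the root supplies per-rank counts and displacements:

! Collect every rank's send_count on the root, then build displacements.
call MPI_GATHER(send_count, 1, MPI_INTEGER, recvcounts, 1, MPI_INTEGER, iroot, comm, ierr)
if (irank == iroot+1) then
   displs(1) = 0
   do r = 2, nproc
      displs(r) = displs(r-1) + recvcounts(r-1)
   end do
end if
call MPI_GATHERV(send_buffer, send_count, MPI_REAL, &
                 gsend_buffer, recvcounts, displs, MPI_REAL, &
                 iroot, comm, ierr)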
>>>>>
>>>>> On May 1, 2013, at 7:03 PM, Ryan Crocker wrote:
>>>>>
>>>>>> That's where I checked it, and its min and max are still zero. I amended it so that the send buffers from the different cores are all the same size, and when I check it on the root, gsend_buffer is still incorrect.
>>>>>>
>>>>>> On May 1, 2013, at 5:00 PM, Rajeev Thakur wrote:
>>>>>>
>>>>>>> gsend_buffer will be valid only on the root.
>>>>>>>
>>>>>>> On May 1, 2013, at 6:55 PM, Ryan Crocker wrote:
>>>>>>>
>>>>>>>> I just looked at the gsend_buffer; that's actually all zeros as well, while each send_buffer is not. So I think my problem is there.
>>>>>>>>
>>>>>>>> On May 1, 2013, at 4:47 PM, Rajeev Thakur wrote:
>>>>>>>>
>>>>>>>>> The count passed to MPI_Scatter should be the local size, i.e., the amount that gets sent to each process. Looks like what is being passed is the global size.
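Concretely, an illustrative sketch of that point (loc_count is a name introduced here; the index variables are from the code quoted below): both count arguments of MPI_SCATTER should be the per-process block size, with the root's gsend_buffer holding nproc such blocks laid out in rank order:

loc_count = (imax_-imin_+1)*(jmax_-jmin_+1)*(kmax_-kmin_+1)   ! size of one process's block
call MPI_SCATTER(gsend_buffer, loc_count, MPI_REAL, &
                 recieve_buffer, loc_count, MPI_REAL, &
                 iroot, comm, ierr)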
>>>>>>>>>
>>>>>>>>> On May 1, 2013, at 6:21 PM, Ryan Crocker wrote:
>>>>>>>>>
>>>>>>>>>> Hi all,
>>>>>>>>>>
>>>>>>>>>> I don't know what the issue with this code snippet is, but I cannot get the scatter call to work. When I print out the recieve_buffer it comes out as all zeros. The counters without the underscore are the core domains; the ones with the 'o' are the global domains.
>>>>>>>>>>
>>>>>>>>>> rec_count = (imaxo-imino+1)*(jmaxo-jmino+1)*(kmaxo-kmino+1)
>>>>>>>>>> send_count = (imax_-imin_+1)*(jmax_-jmin_+1)*(kmax_-kmin_+1)
>>>>>>>>>>
>>>>>>>>>> call MPI_GATHER (send_buffer, send_count, MPI_REAL, gsend_buffer, send_count, MPI_REAL, iroot, comm, ierr)
>>>>>>>>>>
>>>>>>>>>> rec_count = (imaxo-imino+1)*(jmaxo-jmino+1)*(kmaxo-kmino+1)
>>>>>>>>>> send_count = (imaxo-imino+1)*(jmaxo-jmino+1)*(kmaxo-kmino+1)
>>>>>>>>>>
>>>>>>>>>> call MPI_SCATTER (gsend_buffer, send_count, MPI_REAL, recieve_buffer, rec_count, MPI_REAL, iroot, comm, ierr)
>>>>>>>>>>
>>>>>>>>>> Thanks for the help.
Ryan Crocker
University of Vermont, School of Engineering
Mechanical Engineering Department
rcrocker at uvm.edu
315-212-7331