[mpich-discuss] problem with MPI_SCATTERV

Danyang Su danyang.su at gmail.com
Fri Nov 29 11:49:33 CST 2013


Hi Jeff,

I changed recvcnt to the value of scounts(rank+1). Is it correct now?
The results are correct.

recvcnt = scounts(rank+1)       ! Fortran arrays are 1-based
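
For reference, a minimal self-contained sketch of the pattern I am using now
(the counts and buffer sizes below are made up for illustration, and I use
MPI_INTEGER as the Fortran datatype for default INTEGER):

program scatterv_sketch
  use mpi
  implicit none
  integer :: ierr, rank, nprocs, i, recvcnt
  integer, allocatable :: scounts(:), displs(:), ja_in(:), ja(:)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

  ! sendcounts and displacements: one entry per rank, significant at root
  allocate(scounts(nprocs), displs(nprocs))
  scounts = [(i, i = 1, nprocs)]          ! e.g. ranks receive 1, 2, 3, ... elements
  displs(1) = 0
  do i = 2, nprocs
     displs(i) = displs(i-1) + scounts(i-1)
  end do

  ! the send buffer only needs real data on the root
  if (rank == 0) then
     allocate(ja_in(sum(scounts)))
     ja_in = [(i, i = 1, size(ja_in))]
  else
     allocate(ja_in(1))                   ! dummy on non-root ranks
  end if

  ! recvcnt is a single integer: this rank's own count
  recvcnt = scounts(rank+1)               ! Fortran arrays are 1-based
  allocate(ja(recvcnt))

  call MPI_SCATTERV(ja_in, scounts, displs, MPI_INTEGER, &
                    ja, recvcnt, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

  call MPI_FINALIZE(ierr)
end program scatterv_sketch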

Thanks,

Danyang



On 28/11/2013 6:38 PM, Jeff Hammond wrote:
> That's still wrong. Please read the documentation on how this function 
> must be used.
>
> Jeff
>
> Sent from my iPhone
>
> On Nov 28, 2013, at 5:48 PM, Danyang Su <danyang.su at gmail.com> wrote:
>
>> Hi Junchao,
>>
>> Thanks for pointing this out. It works after I changed the recvcnt 
>> value to the maximum number of elements in the receive buffer (an integer).
>>
>> Danyang
>>
>> On 28/11/2013 3:13 PM, Junchao Zhang wrote:
>>> The second scounts in your code is suspicious.
>>> According to 
>>> http://www.mcs.anl.gov/research/projects/mpi/www/www3/MPI_Scatterv.html, 
>>> sendcnts and recvcnt are of different types: sendcnts is an integer
>>> array with one count per rank, while recvcnt is a single integer.
>>>
>>>
>>> --Junchao Zhang
>>>
>>>
>>> On Thu, Nov 28, 2013 at 2:29 PM, Danyang Su <danyang.su at gmail.com> wrote:
>>>
>>>     Hi All,
>>>
>>>     I ran into a problem with MPI_SCATTERV. When the sendcounts
>>>     (scounts in the code) are the same for every process, it works
>>>     fine, but if the sendcounts differ (e.g., 1, 2, 3, 4 for 4
>>>     processes, respectively), the following error occurs:
>>>
>>>     Fatal error in PMPI_Scatterv: Message truncated, error stack:
>>>     PMPI_Scatterv(376)................:
>>>     MPI_Scatterv(sbuf=0000000000000000, scnts=0000000000E16CD0,
>>>     displs=0000000000E16CA0,
>>>      MPI_INT, rbuf=0000000002BD0050, rcount=1, MPI_INT, root=0,
>>>     MPI_COMM_WORLD) failed
>>>     MPIR_Scatterv_impl(187)...........:
>>>     MPIR_Scatterv(144)................:
>>>     MPIDI_CH3U_Receive_data_found(129): Message from rank 0 and tag
>>>     6 truncated; 16 bytes received but buffer size is 4
>>>
>>>     The code is simple as follows:
>>>
>>>     call MPI_SCATTERV(ja_in, scounts, displs, MPI_INT, ja,
>>>     scounts, MPI_INT, 0, comm, ierr)
>>>
>>>     ja_in is allocated on process #0 and ja is allocated on every
>>>     process. Both ja_in and ja are large enough to hold the dataset.
>>>
>>>     Thanks and regards,
>>>
>>>     Danyang
>>>
>>
>
>
