[mpich-discuss] problem with MPI_SCATTERV
jeff.science at gmail.com
Thu Nov 28 20:38:47 CST 2013
That's still wrong. Please read the documentation on how this function must be used.
Sent from my iPhone
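What the documentation requires: recvcnt is a scalar, and each rank passes the number of elements it itself will receive, not the sendcounts array and not the receive buffer's maximum size. In the trace below, rank 0 sent 16 bytes (4 integers) to a rank whose rcount was 1 (4 bytes), hence the truncation. A minimal sketch of a corrected call, assuming the variable names from the original post, that scounts holds the same values on every rank, and a hypothetical myrank obtained from MPI_COMM_RANK:

    call MPI_COMM_RANK(comm, myrank, ierr)
    ! each rank receives exactly its own entry of scounts (1-based indexing)
    recvcnt = scounts(myrank + 1)
    ! MPI_INTEGER is the Fortran datatype for a default INTEGER buffer
    ! (MPI_INT is the C int datatype)
    call MPI_SCATTERV(ja_in, scounts, displs, MPI_INTEGER, &
                      ja, recvcnt, MPI_INTEGER, 0, comm, ierr)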
> On Nov 28, 2013, at 5:48 PM, Danyang Su <danyang.su at gmail.com> wrote:
> Hi Junchao,
> Thanks for pointing this out. It works after I changed the recvcnt value to the maximum number of elements in the receive buffer (an integer).
>> On 28/11/2013 3:13 PM, Junchao Zhang wrote:
>> The second scounts in your code is suspicious.
>> According to http://www.mcs.anl.gov/research/projects/mpi/www/www3/MPI_Scatterv.html, sendcnts and recvcnt are of different types: sendcnts is an integer array with one entry per rank (significant only at the root), while recvcnt is a single integer.
>> --Junchao Zhang
>>> On Thu, Nov 28, 2013 at 2:29 PM, Danyang Su <danyang.su at gmail.com> wrote:
>>> Hi All,
>>> I ran into a problem with MPI_SCATTERV. When the sendcounts (scounts in the code) are the same for every process, it works fine, but when the sendcounts differ (e.g., 1, 2, 3, 4 for 4 processes, respectively), the following error occurs:
>>> Fatal error in PMPI_Scatterv: Message truncated, error stack:
>>> PMPI_Scatterv(376)................: MPI_Scatterv(sbuf=0000000000000000, scnts=0000000000E16CD0, displs=0000000000E16CA0,
>>> MPI_INT, rbuf=0000000002BD0050, rcount=1, MPI_INT, root=0, MPI_COMM_WORLD) failed
>>> MPIDI_CH3U_Receive_data_found(129): Message from rank 0 and tag 6 truncated; 16 bytes received but buffer size is 4
>>> The code is simple, as follows:
>>> call MPI_SCATTERV(ja_in, scounts, displs, MPI_INT, ja, scounts, MPI_INT, 0, comm , ierr)
>>> ja_in is allocated on process #0 and ja is allocated on every process. Both ja_in and ja are big enough to hold the large dataset.
>>> Thanks and regards,
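With unequal counts, the displacements must also line up with the counts: each rank's displacement is the sum of the counts of the ranks before it. A sketch of one common way to build displs as a prefix sum, assuming the 1, 2, 3, 4 example above with 4 processes:

    integer :: scounts(4), displs(4), i
    scounts = (/ 1, 2, 3, 4 /)
    displs(1) = 0
    do i = 2, 4
       displs(i) = displs(i-1) + scounts(i-1)
    end do
    ! rank 0 gets 1 element at offset 0, rank 1 gets 2 at offset 1,
    ! rank 2 gets 3 at offset 3, rank 3 gets 4 at offset 6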