[mpich-discuss] MPI_Send and recv
Sufeng Niu
sniu at hawk.iit.edu
Tue Jul 2 17:18:05 CDT 2013
Hi, Pavan,
Thanks a lot, this is the most confusing part of MPI for me, and I would like to
clear up this kind of problem.
So the blocking send starts delivering the message to itself, but no receive has
been posted yet, and therefore MPI blocks. Am I right here?
So I need a nonblocking receive posted to wait for the message. Could I use a
blocking receive instead?
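If I understand it correctly, a blocking receive cannot help on rank 0: posted
before the send loop it would wait forever for a message rank 0 has not sent yet,
and posted after the loop it is exactly what the current code does. So my attempt
at the fix, following your suggestion (untested, I only moved the receive in front
of the sends, made it nonblocking, and added a wait), would be:

//--------------------------------------------------------------------------------
#include "mpi.h"
#include <stdio.h>
#define SIZE 4

int main(int argc, char *argv[])
{
    int numtasks, rank, source = 0, tag = 1, i;
    float a[SIZE][SIZE] =
        {{ 1.0,  2.0,  3.0,  4.0},
         { 5.0,  6.0,  7.0,  8.0},
         { 9.0, 10.0, 11.0, 12.0},
         {13.0, 14.0, 15.0, 16.0}};
    float b[SIZE];

    MPI_Request req;
    MPI_Status stat;
    MPI_Datatype columntype;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);

    /* one column of the 4x4 matrix: SIZE floats, a stride of SIZE apart */
    MPI_Type_vector(SIZE, 1, SIZE, MPI_FLOAT, &columntype);
    MPI_Type_commit(&columntype);

    if (numtasks == SIZE) {
        /* post the receive first, so rank 0's send to itself has a match */
        MPI_Irecv(b, SIZE, MPI_FLOAT, source, tag, MPI_COMM_WORLD, &req);

        if (rank == 0)
            for (i = 0; i < numtasks; i++)
                MPI_Send(&a[0][i], 1, columntype, i, tag, MPI_COMM_WORLD);

        /* now it is safe to wait for the column to arrive */
        MPI_Wait(&req, &stat);
        printf("rank= %d b= %3.1f %3.1f %3.1f %3.1f\n",
               rank, b[0], b[1], b[2], b[3]);
    }
    else
        printf("Must specify %d processors. Terminating.\n", SIZE);

    MPI_Type_free(&columntype);
    MPI_Finalize();
    return 0;
}
//--------------------------------------------------------------------------------

Does that look right?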
Hi Jeff,
The web link is https://computing.llnl.gov/tutorials/mpi/#Derived_Data_Types
and the code is the second example there. Thank you!
Thanks a lot!
Sufeng
On Tue, Jul 2, 2013 at 4:48 PM, Pavan Balaji <balaji at mcs.anl.gov> wrote:
>
> This is an incorrect program. You need to post an Irecv before sending to
> yourself.
>
> -- Pavan
>
>
> On 07/02/2013 04:39 PM, Sufeng Niu wrote:
>
>> Hi,
>>
>> I am trying to familiarize myself with the MPI derived datatype API, and I
>> directly ran a very simple code from the LLNL webpage:
>>
>> //--------------------------------------------------------------------------------
>>
>> #include"mpi.h"
>> #include <stdio.h>
>> #define SIZE 4
>>
>> int main(argc,argv)
>> int argc;
>> char *argv[]; {
>> int numtasks, rank, source=0, dest, tag=1, i;
>> float a[SIZE][SIZE] =
>> {1.0, 2.0, 3.0, 4.0,
>> 5.0, 6.0, 7.0, 8.0,
>> 9.0, 10.0, 11.0, 12.0,
>> 13.0, 14.0, 15.0, 16.0};
>> float b[SIZE];
>>
>> MPI_Status stat;
>> MPI_Datatype columntype;
>>
>> MPI_Init(&argc,&argv);
>> MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>> MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
>>
>> MPI_Type_vector(SIZE, 1, SIZE, MPI_FLOAT, &columntype);
>> MPI_Type_commit(&columntype);
>>
>> if (numtasks == SIZE) {
>> if (rank == 0) {
>> for (i=0; i<numtasks; i++)
>> MPI_Send(&a[0][i], 1, columntype, i, tag, MPI_COMM_WORLD);
>> }
>>
>> MPI_Recv(b, SIZE, MPI_FLOAT, source, tag, MPI_COMM_WORLD, &stat);
>> printf("rank= %d b= %3.1f %3.1f %3.1f %3.1f\n",
>> rank,b[0],b[1],b[2],b[3]);
>> }
>> else
>> printf("Must specify %d processors. Terminating.\n",SIZE);
>>
>> MPI_Type_free(&columntype);
>> MPI_Finalize();
>> }
>>
>> //--------------------------------------------------------------------------------
>>
>> However, it doesn't work; the program just hangs there. I figured out that
>> the problem is at the MPI_Send and MPI_Recv lines: the MPI_Send on rank 0
>> also sends to itself, but I don't know why that doesn't work. Is it caused
>> by a deadlock? Why?
>>
>> Thank you very much!
>>
>> --
>> Best Regards,
>> Sufeng Niu
>> ECASP lab, ECE department, Illinois Institute of Technology
>> Tel: 312-731-7219
>>
>>
>> _______________________________________________
>> discuss mailing list discuss at mpich.org
>> To manage subscription options or unsubscribe:
>> https://lists.mpich.org/mailman/listinfo/discuss
>>
>>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji
>
--
Best Regards,
Sufeng Niu
ECASP lab, ECE department, Illinois Institute of Technology
Tel: 312-731-7219