[mpich-discuss] Query about MPI_Bsend
Pavan Balaji
balaji at mcs.anl.gov
Sun Apr 28 17:25:51 CDT 2013
Vihari,
Maybe I'm not fully understanding this, but perhaps you are
misunderstanding how MPI_Bsend works. MPI_Bsend only guarantees that it
will not block as long as enough buffering is available. It does not,
however, guarantee that it will fail when the user-attached buffer is
full. For example, the network (TCP/IP in this case) also has some
buffering, so a message can be pushed down to TCP/IP to free up the
user buffer before the next send needs it.
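To make this concrete, here is a minimal, self-contained sketch (not
your program -- STR_LEN, NUM_MSGS, and the ranks are made up for
illustration): rank 0 attaches a buffer sized for exactly one message
via MPI_Pack_size + MPI_BSEND_OVERHEAD and then issues ten MPI_Bsends
in a row.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define STR_LEN  64   /* illustrative payload size  */
#define NUM_MSGS 10   /* illustrative message count */

/* Run with at least 2 processes: rank 0 Bsends, rank 1 receives. */
int main(int argc, char **argv)
{
    int rank, packSize, bufSize, i;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        char message[STR_LEN] = "hello";
        char *buf;

        /* Attach a buffer large enough for exactly ONE message. */
        MPI_Pack_size(STR_LEN, MPI_CHAR, MPI_COMM_WORLD, &packSize);
        bufSize = packSize + MPI_BSEND_OVERHEAD;
        buf = malloc(bufSize);
        MPI_Buffer_attach(buf, bufSize);

        for (i = 0; i < NUM_MSGS; i++) {
            /* By the time each Bsend needs the attached buffer, the
             * previous message may already have been pushed down into
             * the TCP socket buffers, so every call can keep returning
             * MPI_SUCCESS even though the attached buffer only holds
             * one message at a time. */
            int err = MPI_Bsend(message, STR_LEN, MPI_CHAR, 1, 0,
                                MPI_COMM_WORLD);
            if (err != MPI_SUCCESS)
                fprintf(stderr, "Bsend %d failed\n", i);
        }

        /* Detach blocks until all buffered messages have left. */
        MPI_Buffer_detach(&buf, &bufSize);
        free(buf);
    } else if (rank == 1) {
        char message[STR_LEN];
        for (i = 0; i < NUM_MSGS; i++)
            MPI_Recv(message, STR_LEN, MPI_CHAR, 0, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}

With mpich over TCP you will typically see all ten calls return
MPI_SUCCESS, because each buffered message is drained toward the
socket buffers before the next MPI_Bsend needs the attached space; the
"insufficient space in buffer" error only appears when the attached
buffer genuinely cannot be freed in time.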
-- Pavan
On 04/25/2013 08:02 AM US Central Time, Vihari Piratla wrote:
> I am trying to implement a logical ring with MPI, where every process
> receives a message from the process whose id is one less than its own
> and forwards it to the next process, in cyclic fashion. My aim is to
> overload the buffering so that it loses some messages or maybe
> delivers them out of order.
> A cycle of communication finishes when a message dispatched by the
> root node comes back to the root.
> Here is the code that I have tried; I am including only the relevant
> parts of it.
>
> | if (procId != root)
> {
>     sleep(100);
>     while (1)
>     {
>         tm = MPI_Wtime();
>         MPI_Irecv(&message, STR_LEN, MPI_CHAR,
>                   ((procId-1) >= 0 ? (procId-1) : (numProc-1)),
>                   RETURN_DATA_TAG, MPI_COMM_WORLD, &receiveRequest);
>
>         MPI_Wait(&receiveRequest, &status);
>         printf("%d: Received\n", procId);
>
>         if (!strncmp(message, "STOP", 4) && (procId == (numProc-1)))
>             break;
>
>         MPI_Ssend(message, STR_LEN, MPI_CHAR,
>                   (procId+1) % numProc, SEND_DATA_TAG, MPI_COMM_WORLD);
>         if (!strncmp(message, "STOP", 4))
>             break;
>         printf("%d: Sent\n", procId);
>     }
> }
> else
> {
>     for (iter = 0; iter < benchmarkSize; iter++)
>     {
>         // Synthesize the message
>         message[STR_LEN-1] = '\0';
>         iErr = MPI_Bsend(message, STR_LEN, MPI_CHAR,
>                          (root+1) % numProc, SEND_DATA_TAG, MPI_COMM_WORLD);
>         if (iErr != MPI_SUCCESS) {
>             char error_string[BUFSIZ];
>             int length_of_error_string;
>             MPI_Error_string(iErr, error_string, &length_of_error_string);
>             fprintf(stderr, "%3d: %s\n", procId, error_string);
>         }
>
>         tm = MPI_Wtime();
>         while (((MPI_Wtime()-tm)*1000) < delay);
>         printf("Root: Sending\n");
>     }
>
>     for (iter = 0; iter < benchmarkSize; iter++)
>     {
>         MPI_Recv(message, STR_LEN, MPI_CHAR,
>                  (numProc-1), RETURN_DATA_TAG, MPI_COMM_WORLD, &status);
>
>         // We should not wait for the messages to be received but
>         // wait for a certain amount of time
>
>         // Extract the fields in the message
>         if (((prevRcvdSeqNum+1) != atoi(seqNum)) && (prevRcvdSeqNum != 0))
>             outOfOrderMsgs++;
>         prevRcvdSeqNum = atoi(seqNum);
>         printf("Seq Num: %d\n", atoi(seqNum));
>         rcvdMsgs++;
>         printf("Root: Receiving\n");
>     }
>
>     /* This is to ask all other processes to terminate, when the work
>        is done */
>     MPI_Isend("STOP", 4, MPI_CHAR,
>               (root+1) % numProc, SEND_DATA_TAG, MPI_COMM_WORLD, &sendRequest);
>     MPI_Wait(&sendRequest, &status);
> }|
>
> Now, I have these questions:
> 1) Why is it that when I inject some sleep into the other processes
> (I mean other than root) of the ring, NO receive takes place?
> 2) Even when the buffer size is only one message, how is it that the
> root node is able to dispatch messages through MPI_Bsend without an
> error? For example, in the case where it needs to send a total of 10
> messages at a rate of 1000 per second with a buffer size of 1,
> MPI_Bsend is able to dispatch all the messages without any "buffer
> full" error, irrespective of the presence of sleep() in the other
> processes of the ring!
>
> In short, how on earth is MPI_Bsend not giving me an "Insufficient
> space in buffer" error? Even though my buffer is only big enough to
> accommodate a single message, MPI_Bsend did not report an error for
> 65 of the total 100 messages sent. I intentionally made the receive
> buffers unavailable by making the other processes sleep before they
> start processing their receives and sends.
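>
> (For illustration only -- this is not my exact code, but the buffer I
> attach is sized roughly like this, i.e. with room for one message:)
>
> | int packSize, buffSize;
> char *bsendBuf;
> MPI_Pack_size(STR_LEN, MPI_CHAR, MPI_COMM_WORLD, &packSize);
> buffSize = packSize + MPI_BSEND_OVERHEAD;  /* space for ONE message */
> bsendBuf = malloc(buffSize);
> MPI_Buffer_attach(bsendBuf, buffSize); |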
>
> Thanks a ton!
> --
> V
>
>
> _______________________________________________
> discuss mailing list discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss
>
--
Pavan Balaji
http://www.mcs.anl.gov/~balaji