[mpich-discuss] Failed to allocate memory for an unexpected message
    Luiz Carlos da Costa Junior
    lcjunior at ufrj.br
    Thu Jan 16 12:07:51 CST 2014

Hi Pavan and Antonio,
I implemented the scheme you suggested and it was much easier than I
thought. Very nice, thanks for your help.
However, I noticed that the execution times were much higher than in the
cases where the failure didn't occur.
Is there any reason, apart from a possible mistake in my implementation,
that would explain this behavior?
I don't know if it would help, but I am including part of the receiver
process's Fortran code below.
Thanks in advance.
Best regards,
Luiz
c-----------------------------------------------------------------------
      subroutine my_receiver
c     ------------------------------------------------------------------
      (...)
c     Local
c     -----
      integer*4 m_stat(MPI_STATUS_SIZE)
      integer*4 m_request(zrecv)        ! request identifiers for
                                        ! asynchronous receives
      character card(zrecv)*(zbuf)      ! buffer for receiving messages
      logical   keep_receiving          ! loop flag (assumed local)
c     Pre-post RECVs
c     --------------
      do irecv = 1, zrecv
        call MPI_IRECV(card(irecv), zbuf, MPI_CHARACTER,
     .                 MPI_ANY_SOURCE, MPI_ANY_TAG, M_COMM_SDDP,
     .                 m_request(irecv), m_ierr )
      end do !irecv
      keep_receiving = .true.           ! assumed initialization
      do while( keep_receiving )
c       Wait for any of the pre-posted requests to arrive
c       -------------------------------------------------
        call MPI_WAITANY(zrecv, m_request, irecv, m_stat, m_ierr)
c       Process message: disk IO
c       ------------------------
        <DO SOMETHING>
        if( SOMETHING_ELSE ) then
          keep_receiving = .false.
        end if
c       Re-post RECV
c       ------------
        call MPI_IRECV(card(irecv), zbuf, MPI_CHARACTER,
     .                 MPI_ANY_SOURCE, MPI_ANY_TAG, M_COMM_SDDP,
     .                 m_request(irecv), m_ierr)
      end do
c     Cancel unused RECVs
c     -------------------
      do irecv = 1, zrecv
        call MPI_CANCEL( m_request(irecv), m_ierr )
      end do !irecv
      (...)
      return
      end
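One remark on the cancel loop at the end of the routine above: the MPI
standard requires that a cancelled request still be completed (with
MPI_WAIT, MPI_WAITALL or MPI_REQUEST_FREE) before it is released, and
MPI_TEST_CANCELLED reports whether the cancellation actually took effect.
Below is a minimal sketch of that shutdown path; it reuses the zrecv,
m_request, irecv and m_ierr variables from my_receiver, while m_stats and
m_flag are new names introduced only for this example.

c     Additional locals (would go with the other declarations above)
c     ---------------------------------------------------------------
      integer*4 m_stats(MPI_STATUS_SIZE,zrecv)  ! one status each
      logical   m_flag                          ! cancellation outcome
c     Cancel the unused RECVs, then complete the cancelled requests
c     --------------------------------------------------------------
      do irecv = 1, zrecv
        call MPI_CANCEL( m_request(irecv), m_ierr )
      end do !irecv
      call MPI_WAITALL( zrecv, m_request, m_stats, m_ierr )
c     Optionally, check whether each cancellation actually succeeded
      do irecv = 1, zrecv
        call MPI_TEST_CANCELLED( m_stats(1,irecv), m_flag, m_ierr )
      end do !irecv

If the statuses are not needed, calling MPI_REQUEST_FREE on each cancelled
request instead of MPI_WAITALL also completes them.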
On 1 November 2013 22:14, Luiz Carlos da Costa Junior <lcjunior at ufrj.br> wrote:
> Thanks
>
>
> On 1 November 2013 22:00, Pavan Balaji <balaji at mcs.anl.gov> wrote:
>
>>
>> On Nov 1, 2013, at 4:30 PM, Luiz Carlos da Costa Junior <lcjunior at ufrj.br>
>> wrote:
>> > I understand that I will have to have N buffers, one for each posted
>> MPI_Irecv. I will also have to TEST (using MPI_PROBE or MPI_WAITANY) until
>> a message comes. The result of this test will identify which one of the
>> posted MPI_Irecv has actually received the message and then process the
>> right buffer. Is this correct?
>>
>> Correct.
>>
>> > Should I have to change anything at the sender's processes?
>>
>> Likely not.  But you need to think through your algorithm to confirm that.
>>
>> > At the end, my receiver process receives a message identifying that it
>> should exit this routine. What should I do with the already posted
>> MPI_Irecv's? Can I cancel them?
>>
>> Yes, you can with MPI_CANCEL.
>>
>>   -- Pavan
>>
>> --
>> Pavan Balaji
>> http://www.mcs.anl.gov/~balaji
>
>