Yes. The Send did not have a matching tagged receive, as the thread was
waiting for something else.

Would switching to Bsend help?
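For reference, a minimal sketch of the two options discussed in this thread:
pre-posting an MPI_Irecv before the blocking send to self, or using MPI_Bsend
with an attached buffer so the send completes without a matching receive
having been posted first. The tag and message size below are illustrative
assumptions, not taken from barrier_test.c.

/* Sketch only: two ways to avoid blocking on a send-to-self.
 * Tag and sizes are arbitrary assumptions. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define TAG 42

int main(int argc, char **argv)
{
    int provided, rank, val = 7, out;
    MPI_Request req;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        fprintf(stderr, "warning: MPI_THREAD_MULTIPLE not available\n");
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Option 1: pre-post the receive, so the blocking send to self can
     * complete even if the library does not buffer it internally. */
    MPI_Irecv(&out, 1, MPI_INT, rank, TAG, MPI_COMM_WORLD, &req);
    MPI_Send(&val, 1, MPI_INT, rank, TAG, MPI_COMM_WORLD);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    /* Option 2: buffered send; MPI_Bsend returns once the message has been
     * copied into the attached buffer, so no receive needs to be posted at
     * send time. */
    int bufsize = MPI_BSEND_OVERHEAD + (int)sizeof(int);
    void *buf = malloc(bufsize);
    MPI_Buffer_attach(buf, bufsize);
    MPI_Bsend(&val, 1, MPI_INT, rank, TAG, MPI_COMM_WORLD);
    MPI_Recv(&out, 1, MPI_INT, rank, TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Buffer_detach(&buf, &bufsize);
    free(buf);

    MPI_Finalize();
    return 0;
}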
    <div class="moz-cite-prefix">On 7/17/2015 9:03 PM, Jeff Hammond
      wrote:<br>
    </div>
    <blockquote cite="mid:CAGKz=uLA_q65JpFzKXsZizQv8xqW+4HyDtzDGdoHP5vB=-eOSQ@mail.gmail.com" type="cite">Blocking Send to self will hang if Irecv not
      pre-posted. You doing that?
      <div><br>
      </div>
      <div>Jeff </div>
      <div><br>
        On Friday, July 17, 2015, Nenad Vukicevic <<a moz-do-not-send="true" href="mailto:nenad@intrepid.com"><a class="moz-txt-link-abbreviated" href="mailto:nenad@intrepid.com">nenad@intrepid.com</a></a>>
        wrote:<br>
        <blockquote class="gmail_quote" style="margin:0 0 0
          .8ex;border-left:1px #ccc solid;padding-left:1ex">I will via
          separate mail.  But, I see that I made a mistake in my
          description as pthread does MPI_Send () to itself and not to
          another rank.<br>
          I'll try reversing the order (send to others then to
          yourself).<br>
          <br>
          On 7/17/2015 4:26 PM, Balaji, Pavan wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0
            .8ex;border-left:1px #ccc solid;padding-left:1ex">
            You should not need to pass any additional configuration
            options.  MPICH is thread-safe by default.<br>
            <br>
            Can you send us a simple program that reproduces the error?<br>
            <br>
               -- Pavan<br>
            <br>
            <br>
            <br>
            <br>
            <br>
            On 7/17/15, 6:16 PM, "Nenad Vukicevic" <<a moz-do-not-send="true"><a class="moz-txt-link-abbreviated" href="mailto:nenad@intrepid.com">nenad@intrepid.com</a></a>> wrote:<br>
>>>> I am having a problem where the system locks up inside the MPI_Send()
>>>> routine.  In my test, each MPI rank has an additional pthread, and the
>>>> system locks up when:
>>>>
>>>> - the main thread does MPI_Recv from ANY rank
>>>> - the pthread does MPI_Send to another rank
>>>>
>>>> I verified with MPI_Init_thread() that I can run in an
>>>> MPI_THREAD_MULTIPLE environment.
>>>>
>>>> The same thing happened with the MPICH shipped with Fedora 20 (3.0.4)
>>>> and with one built from the 3.2b3 sources.  When building from sources
>>>> I provided the '--enable-threads=multiple' option.  I also tried to
>>>> play with the '--enable-thread-cs' option, but got a build failure when
>>>> 'per-object' was selected.
>>>>
>>>> Is this supposed to work?
>>>>
>>>> Thanks,
>>>> Nenad
>>>>
>>>> I am attaching traces from GDB for the rank that locks up.
>>>>
>>>> (gdb) info thread
>>>>    Id   Target Id         Frame
>>>>    2    Thread 0x7ffff6a5a700 (LWP 29570) "barrier_test"  0x0000003ef040bca0 in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib64/libpthread.so.0
>>>> *  1    Thread 0x7ffff7a64b80 (LWP 29568) "barrier_test"  0x00007ffff7c58717 in MPIU_Thread_CS_yield_lockname_recursive_impl_ (lockname=0x7ffff7cdc8b1 "global_mutex", mutex=<optimized out>, kind=MPIU_Nest_global_mutex) at ../src/src/include/mpiimplthreadpost.h:190
>>>> (gdb) where
>>>> #0  0x00007ffff7c58717 in MPIU_Thread_CS_yield_lockname_recursive_impl_ (lockname=0x7ffff7cdc8b1 "global_mutex", mutex=<optimized out>, kind=MPIU_Nest_global_mutex) at ../src/src/include/mpiimplthreadpost.h:190
>>>> #1  0x00007ffff7c5db42 in MPIDI_CH3I_Progress (progress_state=progress_state@entry=0x7fffffffd2c0, is_blocking=is_blocking@entry=1) at ../src/src/mpid/ch3/channels/nemesis/src/ch3_progress.c:507
>>>> #2  0x00007ffff7b5e795 in PMPI_Recv (buf=0x7fffffffd61c, count=1, datatype=1275069445, source=-2, tag=299, comm=1140850688, status=0x7fffffffd620) at ../src/src/mpi/pt2pt/recv.c:157
>>>> #3  0x0000000000401732 in receive_int () at comm.c:52
>>>> #4  0x0000000000400bf2 in main (argc=1, argv=0x7fffffffd758) at barrier_test.c:39
>>>> (gdb) thread 2
>>>> [Switching to thread 2 (Thread 0x7ffff6a5a700 (LWP 29570))]
>>>> #0  0x0000003ef040bca0 in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib64/libpthread.so.0
>>>> (gdb) where
>>>> #0  0x0000003ef040bca0 in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib64/libpthread.so.0
>>>> #1  0x00007ffff7c5d614 in MPIDI_CH3I_Progress_delay (completion_count=<optimized out>) at ../src/src/mpid/ch3/channels/nemesis/src/ch3_progress.c:566
>>>> #2  MPIDI_CH3I_Progress (progress_state=progress_state@entry=0x7ffff6a59710, is_blocking=is_blocking@entry=1) at ../src/src/mpid/ch3/channels/nemesis/src/ch3_progress.c:347
>>>> #3  0x00007ffff7b632ec in PMPI_Send (buf=0x7ffff6a5985c, count=1, datatype=1275069445, dest=0, tag=199, comm=1140850688) at ../src/src/mpi/pt2pt/send.c:145
>>>> #4  0x0000000000400e42 in barrier_thread_release (id=0) at barrier.c:115
>>>> #5  0x0000000000401098 in barrier_helper (arg=0x0) at barrier.c:186
>>>> #6  0x0000003ef0407ee5 in start_thread () from /usr/lib64/libpthread.so.0
>>>> #7  0x0000003eef8f4d1d in clone () from /usr/lib64/libc.so.6
>
> --
> Jeff Hammond
> jeff.science@gmail.com
> http://jeffhammond.github.io/
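For completeness, a hypothetical reproducer along the lines described above
(a sketch, not the original barrier_test.c): the helper pthread does a
blocking MPI_Send to its own rank with one tag while the main thread sits in
MPI_Recv on a different tag, so no matching receive is ever posted for the
self-send and the rank is expected to hang. The tags 299 and 199 are taken
from the traces above; everything else is assumed.

/* Hypothetical reproducer (not the original barrier_test.c).  The tags
 * deliberately differ, so the blocking send-to-self never finds a matching
 * receive and this program is expected to hang, mirroring the traces above. */
#include <mpi.h>
#include <pthread.h>

#define RECV_TAG 299   /* tag the main thread waits on (as in the trace) */
#define SEND_TAG 199   /* tag the helper thread sends with */

static int my_rank;

static void *helper(void *arg)
{
    int val = 1;
    (void)arg;
    /* Blocking send to our own rank; no receive with SEND_TAG is posted. */
    MPI_Send(&val, 1, MPI_INT, my_rank, SEND_TAG, MPI_COMM_WORLD);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, val;
    pthread_t tid;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    pthread_create(&tid, NULL, helper, NULL);

    /* Main thread blocks waiting for a message with a different tag. */
    MPI_Recv(&val, 1, MPI_INT, MPI_ANY_SOURCE, RECV_TAG, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);

    pthread_join(tid, NULL);
    MPI_Finalize();
    return 0;
}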