<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=Windows-1252">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    It seems something is wrong with the process group ID. Can you try the
    execution again with MPICH debug messages enabled? You can enable
    debugging as follows:<br>
    - configure MPICH again with --enable-g=all, then make &&
    make install<br>
    - before executing:<br>
       mkdir -p log/<br>
       export MPICHD_DBG_PG=yes<br>
       export MPICH_DBG_FILENAME="log/dbg-%d.log"<br>
       export MPICH_DBG_CLASS=ALL<br>
       export MPICH_DBG_LEVEL=VERBOSE<br>
    <br>
    Then can you send me the output and the log files in log/?<br>
    <br>
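    For reference, here is the whole sequence as a rough shell sketch (the
    source directory ~/mpich-src, the install prefix /tmp/mpich/install and
    the hostfile name are placeholders; substitute your own paths and the
    hostfile you already use):<br>
    <pre>
# rebuild MPICH with debug logging enabled (placeholder paths)
cd ~/mpich-src
./configure --prefix=/tmp/mpich/install --enable-g=all
make &amp;&amp; make install

# before executing, enable the debug output described above
mkdir -p log/
export MPICHD_DBG_PG=yes
export MPICH_DBG_FILENAME="log/dbg-%d.log"
export MPICH_DBG_CLASS=ALL
export MPICH_DBG_LEVEL=VERBOSE

# rerun the failing case, then collect the output and log/*.log
mpiexec -np 2 -f hostfile ./helloworld
    </pre>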
    Min<br>
    <br>
    <div class="moz-cite-prefix">On 1/23/17 10:46 AM, Doha Ehab wrote:<br>
    </div>
    <blockquote cite="mid:CAEjFr-ZDs-9xZdjDcoMterRJSpEY8z9Pz+3Y9aoAj_-uqacSRg@mail.gmail.com" type="cite">
      
      <div dir="ltr">Hi Min,
        <div> I have attached the two config.log files, and here is the code:</div>
        <div><br>
        </div>
        <div>
          <pre>
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int i = 0;

    MPI_Init(&argc, &argv);     /* starts MPI */

    /* Find out rank, size */
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int number;
    if (world_rank == 0) {
        /* rank 0 sends the number to every other rank */
        number = -1;
        for (i = 1; i < world_size; i++) {
            MPI_Send(&number, 1, MPI_INT, i, 0, MPI_COMM_WORLD);
        }
    } else {
        /* all other ranks receive the number from rank 0 */
        MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Process %d received number %d from process 0\n",
               world_rank, number);
    }

    MPI_Finalize();

    return 0;
}
          </pre>
        </div>
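        <div><br>
        </div>
        <div>For completeness, a rough sketch of how the program is built
          and launched (assuming the MPICH install's bin directory is on
          PATH; the hostfile name is a placeholder):</div>
        <pre>
# build with the MPICH compiler wrapper
mpicc -o helloworld helloworld.c

# launch across the nodes listed in the hostfile (placeholder name)
mpiexec -np 2 -f hostfile ./helloworld
        </pre>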
        <div><br>
        </div>
        <div>Regards,</div>
        <div>Doha</div>
      </div>
      <div class="gmail_extra"><br>
        <div class="gmail_quote">On Sun, Jan 22, 2017 at 10:47 PM, Min
          Si <span dir="ltr"><<a moz-do-not-send="true" href="mailto:msi@anl.gov" target="_blank">msi@anl.gov</a>></span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0
            .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div bgcolor="#FFFFFF" text="#000000"> Hi Doha,<br>
              <br>
              Can you please send us the config.log file for each MPICH
              build and your helloworld source code? The config.log file
              should be under your MPICH build directory where you
              executed ./configure.<span class="HOEnZb"><font color="#888888"><br>
                  <br>
                  Min</font></span>
              <div>
                <div class="h5"><br>
                  <div class="m_4759687804894361769moz-cite-prefix">On
                    1/21/17 4:53 AM, Doha Ehab wrote:<br>
                  </div>
                  <blockquote type="cite">
                    <div dir="ltr">I have tried what you mentioned in
                      the previous E-mail. 
                      <div><br>
                        <div>1- I have built MPICH for the CPU node and the
                          ARM node.</div>
                        <div>2- Uploaded the <span style="font-size:12.8px">binaries</span> to the
                          same path on the 2 nodes.</div>
                        <div>3- Compiled helloWorld (it sends a number
                          from process zero to all other processes) for
                          both nodes, then tried <span style="font-size:12.8px">mpiexec -np 2 -f
                            <hostfile with mic
                            hostnames> ./helloworld</span></div>
                        <div><br>
                        </div>
                        <div>I got this error:</div>
                        <pre>
Fatal error in MPI_Recv: Other MPI error, error stack:
MPI_Recv(200)...............................: MPI_Recv(buf=0xbe9460d0, count=1, MPI_INT, src=0, tag=0, MPI_COMM_WORLD, status=0x1) failed
MPIDI_CH3i_Progress_wait(242)...............: an error occurred while handling an event returned by MPIDU_Sock_Wait()
MPIDI_CH3I_Progress_handle_sock_event(554)...:
MPIDI_CH3_Sockconn_handle_connopen_event(899): unable to find the process group structure with id <>
                        </pre>
                        <div><br>
                        </div>
                        <div>Regards,</div>
                        <div>Doha</div>
                        <div><br>
                        </div>
                      </div>
                    </div>
                    <div class="gmail_extra"><br>
                      <div class="gmail_quote">On Wed, Nov 16, 2016 at
                        6:38 PM, Min Si <span dir="ltr"><<a moz-do-not-send="true" href="mailto:msi@anl.gov" target="_blank">msi@anl.gov</a>></span>
                        wrote:<br>
                        <blockquote class="gmail_quote" style="margin:0
                          0 0 .8ex;border-left:1px #ccc
                          solid;padding-left:1ex">I guess you might need
                          to put all the MPICH binaries (e.g.,
                          hydra_pmi_proxy) in the same path on each
                          node. I have executed MPICH on Intel MIC chips
                          from the host CPU node, where the OSes are
                          different. The steps I followed were (a rough
                          shell sketch follows this list):<br>
                          1. build MPICH for both the CPU node and the
                          MIC on the CPU node (you have done this step).<br>
                          2. upload the MIC binaries to the same path on
                          the MIC chip as on the CPU node<br>
                             For example:<br>
                             - on CPU node : /tmp/mpich/install/bin
                          holds the CPU version<br>
                             - on MIC :          /tmp/mpich/install/bin
                          holds the MIC version<br>
                          3. compile helloworld.c with the MIC version of
                          mpicc<br>
                          4. execute on the CPU node: mpiexec -np 2 -f
                          <hostfile with mic hostnames> ./helloworld<br>
                          <br>
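                          As a rough shell sketch of steps 2 and 4 (the
                          hostnames cpu-node and mic0 and the install-mic
                          build path are hypothetical placeholders):<br>
                          <pre>
# step 2: copy the MIC build's binaries to the same path on the MIC
# (mic0 is a placeholder hostname; adjust the source path to wherever
#  your MIC build was installed on the CPU node)
scp -r /tmp/mpich/install-mic/bin/* mic0:/tmp/mpich/install/bin/

# step 4: the hostfile simply lists one hostname per line, e.g.
#   cpu-node
#   mic0
# then launch from the CPU node:
mpiexec -np 2 -f hostfile ./helloworld
                          </pre>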
                          I think you should be able to follow step 2,
                          but since your helloworld binary is also built
                          for a different OS, you might want to put it
                          in the same path on both nodes as well, just
                          as we do for the MPICH binaries.<span class="m_4759687804894361769HOEnZb"><font color="#888888"><br>
                              <br>
                              Min</font></span>
                          <div class="m_4759687804894361769HOEnZb">
                            <div class="m_4759687804894361769h5"><br>
                              <br>
                              On 11/16/16 8:29 AM, Kenneth Raffenetti
                              wrote:<br>
                              <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px
                                #ccc solid;padding-left:1ex"> Have you
                                disabled any and all firewalls on both
                                nodes? It sounds like they are unable to
                                communicate during initialization.<br>
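                                If it helps, the firewall state can be
                                checked roughly like this (which tool
                                applies depends on the distribution
                                running on each node):<br>
                                <pre>
# systemd-based distributions
sudo systemctl status firewalld

# Ubuntu/Debian-style hosts
sudo ufw status

# or inspect the packet filter rules directly
sudo iptables -L -n
                                </pre>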
                                <br>
                                Ken<br>
                                <br>
                                On 11/16/2016 07:34 AM, Doha Ehab wrote:<br>
                                <blockquote class="gmail_quote" style="margin:0 0 0
                                  .8ex;border-left:1px #ccc
                                  solid;padding-left:1ex"> Yes, I built
                                  MPICH-3 on both systems. I tried the code
                                  on each node separately and it worked,
                                  and I tried each node with other nodes
                                  that have the same operating system and
                                  it worked as well.<br>
                                  When I try the code on the 2 nodes that
                                  have different operating systems, no
                                  result or error message appears.<br>
                                  <br>
                                  Regards<br>
                                  Doha<br>
                                  <br>
                                  On Mon, Nov 14, 2016 at 6:25 PM,
                                  Kenneth Raffenetti<br>
                                  <<a moz-do-not-send="true" href="mailto:raffenet@mcs.anl.gov" target="_blank">raffenet@mcs.anl.gov</a>
                                  <mailto:<a moz-do-not-send="true" href="mailto:raffenet@mcs.anl.gov" target="_blank">raffenet@mcs.anl.gov</a>>>
                                  wrote:<br>
                                  <br>
                                      It may be possible to run in such
                                  a setup, but it would not be<br>
                                      recommended. Did you build MPICH
                                  on both systems you are trying to<br>
                                      run on? What exactly happened when
                                  the code didn't work?<br>
                                  <br>
                                      Ken<br>
                                  <br>
                                  <br>
                                      On 11/13/2016 12:36 AM, Doha Ehab
                                  wrote:<br>
                                  <br>
                                          Hello,<br>
                                           I tried to run a parallel
                                  (Hello World) C code on a cluster<br>
                                          that has 2 nodes.<br>
                                          The nodes have different
                                  operating systems, so the code did not<br>
                                          work and no results were
                                  printed.<br>
                                           How can I make such a cluster
                                  work? Are there extra steps that<br>
                                          should be<br>
                                          done?<br>
                                  <br>
                                          Regards,<br>
                                          Doha<br>
                                  <br>
                                  <br>
                                  <br>
                                  <br>
                                  <br>
                                  <br>
                                  <br>
                                  <br>
                                </blockquote>
                              </blockquote>
                              <br>
                            </div>
                          </div>
                        </blockquote>
                      </div>
                      <br>
                    </div>
                    <br>
    </blockquote>
    

  </div></div></div>


</blockquote></div>
</div>


<fieldset class="mimeAttachmentHeader"></fieldset>
<pre wrap="">_______________________________________________
discuss mailing list     <a class="moz-txt-link-abbreviated" href="mailto:discuss@mpich.org">discuss@mpich.org</a>
To manage subscription options or unsubscribe:
<a class="moz-txt-link-freetext" href="https://lists.mpich.org/mailman/listinfo/discuss">https://lists.mpich.org/mailman/listinfo/discuss</a></pre>

</blockquote>
</body></html>