Dear All,

I came across the thread below in the archives about MPICH 3.2 on BG-Q.

I am testing non-blocking collectives and I/O functions on a cluster and would like to do the same on BG-Q. I have the following questions:

1. The last email from Dominic suggests that the last "compilable" version doesn't support async progress. Is there any version that has non-blocking support and compiles on BG-Q? (the fork from Rob?)

2. Is there anything specific I have to consider on BG-Q while benchmarking the non-blocking functions (from MPI-3)? (A rough sketch of the kind of benchmark loop I have in mind is below.)

Thanks in advance!

Regards,
Pramod

p.s. I am copying the email thread from the archive; not sure if this will be delivered to the correct thread...
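For reference, the kind of non-blocking benchmark loop I have in mind looks roughly like this (purely illustrative; the buffer size and the dummy work loop are placeholders, not taken from any test suite):
==================================================================
program nbc_bench
  implicit none
  include 'mpif.h'
  integer :: ierror, rank, nprocs, request, i
  double precision :: sendbuf(1024), recvbuf(1024), work, t0, t1

  call MPI_INIT(ierror)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierror)

  sendbuf = 1.0d0
  work = 0.0d0

  t0 = MPI_WTIME()
  ! start the collective, overlap some local work, then wait on it
  call MPI_IALLREDUCE(sendbuf, recvbuf, 1024, MPI_DOUBLE_PRECISION, &
                      MPI_SUM, MPI_COMM_WORLD, request, ierror)
  do i = 1, 1000000
     work = work + 1.0d-6
  end do
  call MPI_WAIT(request, MPI_STATUS_IGNORE, ierror)
  t1 = MPI_WTIME()

  if (rank == 0) print *, 'iallreduce + overlap time (s):', t1 - t0, work
  call MPI_FINALIZE(ierror)
end program nbc_bench
==================================================================
How much of that overlap actually happens presumably depends on whether async progress is available, which is what question 1 above is about.
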
Hi All,

Here is an update:
MPICH 3.1.3 is the last version that passed the nonblocking test, even without setting PAMID_THREAD_MULTIPLE. However, setting PAMID_ASYNC_PROGRESS=1 causes an error (Abort(1) on node 7: 'locking' async progress not applicable...).
[chiensh at cumulus coll.bak]$ which mpif90
~/apps/mpich/3.1.3/bin/mpif90
[chiensh at cumulus coll.bak]$ make nonblocking
  CC       nonblocking.o
  CCLD     nonblocking
[chiensh at cumulus coll.bak]$ srun -n 2 ./nonblocking
No errors
[chiensh at cumulus coll.bak]$ srun -n 4 ./nonblocking
No errors
[chiensh at cumulus coll.bak]$ srun -n 16 ./nonblocking
No errors
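
One thing worth ruling out for the 'locking' async progress abort mentioned above: as Jeff notes further down in the thread, async progress needs PAMID_THREAD_MULTIPLE, and it may also matter what thread level the application itself initializes with. A minimal check of what is requested versus what is granted (illustrative sketch, not part of the test suite):
==================================================================
program check_thread
  implicit none
  include 'mpif.h'
  integer :: ierror, provided, rank

  ! request MPI_THREAD_MULTIPLE instead of plain MPI_INIT and report what is granted
  call MPI_INIT_THREAD(MPI_THREAD_MULTIPLE, provided, ierror)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)

  if (rank == 0) then
     print *, 'requested:', MPI_THREAD_MULTIPLE, ' provided:', provided
  end if

  call MPI_FINALIZE(ierror)
end program check_thread
==================================================================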
Thanks all!
Regards,
Dominic

On 11 Jan, 2016, at 2:29 pm, Dominic Chien <chiensh.acrc at gmail.com> wrote:
> Thank you Jeff and Halim,
>
> Halim, I have tried 3.1.4, but it does not return 0 (i.e., it errors out) when the program finishes, e.g. for a hello-world program:
> ==================================================================
>   program hello
>   include 'mpif.h'
>   integer rank, size, ierror, tag, status(MPI_STATUS_SIZE)
>
>   call MPI_INIT(ierror)
>   call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierror)
>   call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
>   print*, 'node', rank, ': Hello world'
>   call MPI_FINALIZE(ierror)
>   end
> ==================================================================
>
> Using MPICH 3.1.rc4
> ==================================================================
> [chiensh at cumulus test]$ which mpif90
> ~/apps/mpich/3.1.rc4/bin/mpif90
> [chiensh at cumulus test]$ srun -n 2 ./a.out
> node 1 : Hello world
> node 0 : Hello world
> [chiensh at cumulus test]$
> ==================================================================
> Using MPICH 3.1.4
> ==================================================================
> [chiensh at cumulus test]$ which mpif90
> ~/apps/mpich/3.1.4/bin/mpif90
> [chiensh at cumulus test]$ srun -n 2 ./a.out
> node 1 : Hello world
> node 0 : Hello world
> 2016-01-11 14:24:25.968 (WARN ) [0xfff7ef48b10] 75532:ibm.runjob.client.Job: terminated by signal 11
> 2016-01-11 14:24:25.968 (WARN ) [0xfff7ef48b10] 75532:ibm.runjob.client.Job: abnormal termination by signal 11 from rank 1
> ==================================================================
>
>
> Jeff, after I set PAMID_THREAD_MULTIPLE=1 and PAMID_ASYNC_PROGRESS=1, there seems to be some "improvement": the nonblocking test sometimes runs with up to 4 processes, but sometimes it just "deadlocks" (see below).
> ==========================================================
> [chiensh at cumulus coll.bak]$ srun --nodes=4 --ntasks-per-node=1 nonblocking
> MPIDI_Process.*
>   verbose : 2
>   statistics : 1
>   contexts : 32
>   async_progress : 1
>   context_post : 1
>   pt2pt.limits
>     application
>       eager
>         remote, local : 4097, 4097
>       short
>         remote, local : 113, 113
>     internal
>       eager
>         remote, local : 4097, 4097
>       short
>         remote, local : 113, 113
>   rma_pending : 1000
>   shmem_pt2pt : 1
>   disable_internal_eager_scale : 524288
>   optimized.collectives : 0
>   optimized.select_colls: 2
>   optimized.subcomms : 1
>   optimized.memory : 0
>   optimized.num_requests: 1
>   mpir_nbc : 1
>   numTasks : 4
>   mpi thread level : 'MPI_THREAD_SINGLE'
> MPIU_THREAD_GRANULARITY : 'per object'
> ASSERT_LEVEL : 0
> MPICH_LIBDIR : not defined
> The following MPICH_* environment variables were specified:
> The following PAMID_* environment variables were specified:
>   PAMID_STATISTICS=1
>   PAMID_ASYNC_PROGRESS=1
>   PAMID_THREAD_MULTIPLE=1
>   PAMID_VERBOSE=2
> The following PAMI_* environment variables were specified:
> The following COMMAGENT_* environment variables were specified:
> The following MUSPI_* environment variables were specified:
> The following BG_* environment variables were specified:
> No errors
> ==========================================================
> [chiensh at cumulus coll.bak]$ srun --nodes=4 --ntasks-per-node=1 nonblocking
> MPIDI_Process.*
>   verbose : 2
>   statistics : 1
>   contexts : 32
>   async_progress : 1
>   context_post : 1
>   pt2pt.limits
>     application
>       eager
>         remote, local : 4097, 4097
>       short
>         remote, local : 113, 113
>     internal
>       eager
>         remote, local : 4097, 4097
>       short
>         remote, local : 113, 113
>   rma_pending : 1000
>   shmem_pt2pt : 1
>   disable_internal_eager_scale : 524288
>   optimized.collectives : 0
>   optimized.select_colls: 2
>   optimized.subcomms : 1
>   optimized.memory : 0
>   optimized.num_requests: 1
>   mpir_nbc : 1
>   numTasks : 4
>   mpi thread level : 'MPI_THREAD_SINGLE'
> MPIU_THREAD_GRANULARITY : 'per object'
> ASSERT_LEVEL : 0
> MPICH_LIBDIR : not defined
> The following MPICH_* environment variables were specified:
> The following PAMID_* environment variables were specified:
>   PAMID_STATISTICS=1
>   PAMID_ASYNC_PROGRESS=1
>   PAMID_THREAD_MULTIPLE=1
>   PAMID_VERBOSE=2
> The following PAMI_* environment variables were specified:
> The following COMMAGENT_* environment variables were specified:
> The following MUSPI_* environment variables were specified:
> The following BG_* environment variables were specified:
> (never returns from here)
> ==========================================================
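
One detail that stands out in both verbose dumps above: the library reports mpi thread level 'MPI_THREAD_SINGLE' even though PAMID_THREAD_MULTIPLE=1 is set, so it is worth confirming from inside the test what thread level is actually in effect. A small helper one could drop in right after MPI initialization (sketch only; MPI_QUERY_THREAD is standard MPI, but the helper itself is made up):
==================================================================
subroutine report_thread_level()
  implicit none
  include 'mpif.h'
  integer :: ierror, provided, rank

  ! ask the library what thread level is actually in effect for this run
  call MPI_QUERY_THREAD(provided, ierror)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
  if (rank == 0) then
     if (provided == MPI_THREAD_MULTIPLE) then
        print *, 'thread level: MPI_THREAD_MULTIPLE'
     else
        print *, 'thread level is NOT multiple, provided =', provided
     end if
  end if
end subroutine report_thread_level
==================================================================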
>
> Thanks!
>
> Regards,
> Dominic
>
>
> On 11 Jan, 2016, at 12:08 pm, Halim Amer <aamer at anl.gov> wrote:
>
>> Dominic,
>>
>> There were a bunch of fixes that went to PAMID since v3.1rc4. You could try a release from the 3.1 series (i.e. from 3.1 through 3.1.4).
>>
>> Regards,
>> --Halim
>>
>> www.mcs.anl.gov/~aamer
>
> On 11 Jan, 2016, at 11:30 am, Jeff Hammond <jeff.science at gmail.com> wrote:
>> I recall MPI-3 RMA on BGQ deadlocks if you set PAMID_THREAD_MULTIPLE (please see ALCF MPI docs to verify exact name), which is required for async progress.
>>
>> ARMCI-MPI test suite is one good way to validate MPI-3 RMA is working.
>>
>> Jeff
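
As a first smoke test before running the full ARMCI-MPI suite, even a tiny fence-synchronized MPI_PUT exercises the RMA path and may already expose a hang if progress is broken. A minimal sketch (window size, values, and the neighbour pattern are arbitrary choices, not taken from ARMCI-MPI):
==================================================================
program rma_smoke
  implicit none
  include 'mpif.h'
  integer :: ierror, rank, nprocs, win, peer
  integer(kind=MPI_ADDRESS_KIND) :: winsize, disp
  double precision :: winbuf(1), val(1)

  call MPI_INIT(ierror)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierror)

  winbuf(1) = -1.0d0
  winsize = 8          ! one double precision element, in bytes
  disp = 0
  call MPI_WIN_CREATE(winbuf, winsize, 8, MPI_INFO_NULL, &
                      MPI_COMM_WORLD, win, ierror)

  ! every rank puts its own rank number into the window of the next rank
  call MPI_WIN_FENCE(0, win, ierror)
  peer = mod(rank + 1, nprocs)
  val(1) = dble(rank)
  call MPI_PUT(val, 1, MPI_DOUBLE_PRECISION, peer, disp, 1, &
               MPI_DOUBLE_PRECISION, win, ierror)
  call MPI_WIN_FENCE(0, win, ierror)

  print *, 'rank', rank, 'received', winbuf(1)

  call MPI_WIN_FREE(win, ierror)
  call MPI_FINALIZE(ierror)
end program rma_smoke
==================================================================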