<meta http-equiv="Content-Type" content="text/html; charset=utf-8"><div dir="ltr">Thanks a lot. <div><br></div><div>In fact, I have tried MPI_REAL16, and the same thing happened: mpi_gather does not work for it either.<div>I have decided to write my own subroutine to replace mpi_reduce.</div></div><div><br></div><div>Best regards,</div><div><br></div><div><br></div><br><div class="gmail_quote"><div dir="ltr">On Fri, Aug 10, 2018 at 12:38 AM Jeff Hammond <<a href="mailto:jeff.science@gmail.com">jeff.science@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">The specific issue here is that MPI_LONG_DOUBLE does not map to REAL*16. You should try MPI_REAL16. I suspect it will generate an error telling you that it is not supported.<div><br></div><div>MPI_LONG_DOUBLE corresponds to C "long double", which is an awful type that means a bunch of different things depending on your hardware and compiler. If you are using x86 Linux, it should be the x87 80-bit type. See <a href="https://en.wikipedia.org/wiki/Long_double" target="_blank">https://en.wikipedia.org/wiki/Long_double</a> for details.<div><br></div><div>Jeff</div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Aug 9, 2018 at 8:33 AM, Jeff Hammond <span dir="ltr"><<a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">There is a challenge here because there is no standard type in C corresponding to REAL*16. Either the reduction operation needs to be written in Fortran, or MPICH needs to figure out the compiler-dependent equivalent of REAL*16 that works in C. 
While GCC __float128 and Intel _Quad might be equivalent, this is not a rigorous assumption.<div><br></div><div>I recommend that you write your own user-defined reduction for REAL*16 with the reduction operation callback in Fortran.<br><div><br></div><div>Jeff</div></div></div><div class="gmail_extra"><br><div class="gmail_quote"><div><div class="m_-3896592350245035959h5">On Thu, Aug 9, 2018 at 2:31 AM, Jiancang Zhuang <span dir="ltr"><<a href="mailto:zhuangjc@ism.ac.jp" target="_blank">zhuangjc@ism.ac.jp</a>></span> wrote:<br></div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="m_-3896592350245035959h5"><div dir="ltr"><div>I have found that the Fortran version of mpi_reduce does not work for real*16. This can be shown by the following program. I have not tested the C version of mpi_reduce. <br></div><div><br></div><div><br></div><div>c-----------------------Fortran code begins -----------------------------<br></div><div><div> implicit real*8 (a-h, o-z)<br> include 'mpif.h'<br> real*16 h1, h<br><br><br> call mpi_init(ierr)<br> call mpi_comm_size(mpi_comm_world, nprocs, ierr)<br> call mpi_comm_rank(mpi_comm_world, myrank, ierr)<br><br><br> h1 = (myrank+4) *2.00000000000000<br> write(*,'(''before reduce --'', i4,2f12.8)')myrank, h1,h<br><br><br> call mpi_reduce(h1,h,1,mpi_long_double,mpi_sum,0,<br> & mpi_comm_world,ierr)<br> write(*,'(''after reduce --'',i4,2f12.8)')myrank, h1,h<br><br> call mpi_bcast(h,1,mpi_long_double,0,<br> & mpi_comm_world,ierr)<br><br> write(*,'(''bcastvalue -- '',i4,2f12.8)')myrank, h1,h<br><br> call mpi_finalize(ierr)<br> end<br><br>
<div>c-----------------------Fortran code ends -----------------------------</div><div><br></div><div><br></div><div>$ mpif77 a.f -o a.out<br>$ mpirun -np 3 ./a.out<br>before reduce -- 1 10.00000000 0.00000000<br>after reduce -- 1 10.00000000 0.00000000<br>before reduce -- 2 12.00000000 0.00000000<br>before reduce -- 0 8.00000000 0.00000000<br>after reduce -- 2 12.00000000 0.00000000<br>after reduce -- 0 8.00000000 8.00000000<br>bcastvalue -- 0 8.00000000 8.00000000<br>bcastvalue -- 1 10.00000000 8.00000000<br>bcastvalue -- 2 12.00000000 8.00000000<br><br><br></div>
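The user-defined reduction recommended above (combine callback in Fortran) could be sketched roughly like this. This is an untested illustration, not code from the thread: the names qsum and qsumop are invented for the example, and it assumes the MPI library accepts mpi_real16 as a transfer datatype even though its built-in mpi_sum does not handle it.

```fortran
c     Hypothetical sketch (untested): sum REAL*16 values with a
c     user-defined MPI_Op. The combine step runs entirely in
c     Fortran, so no C equivalent of REAL*16 is needed for the
c     arithmetic.
      subroutine qsum(invec, inoutvec, n, dtype)
      integer n, dtype
      real*16 invec(n), inoutvec(n)
      integer i
      do i = 1, n
         inoutvec(i) = inoutvec(i) + invec(i)
      end do
      end

c     In the main program, register the callback and use it in
c     place of mpi_sum:
c         external qsum
c         integer qsumop
c         call mpi_op_create(qsum, .true., qsumop, ierr)
c         call mpi_reduce(h1, h, 1, mpi_real16, qsumop, 0,
c        &                mpi_comm_world, ierr)
c         call mpi_op_free(qsumop, ierr)
```

If mpi_real16 itself is rejected by the implementation, the same callback should work over an opaque 16-byte element built with mpi_type_contiguous(16, mpi_byte, ...) and mpi_type_commit, since with a user-defined op the library only has to move the bytes, not interpret them.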
</div></div></div>
<br></div></div>_______________________________________________<br>
discuss mailing list <a href="mailto:discuss@mpich.org" target="_blank">discuss@mpich.org</a><br>
To manage subscription options or unsubscribe:<br>
<a href="https://lists.mpich.org/mailman/listinfo/discuss" rel="noreferrer" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
<br></blockquote></div><span class="m_-3896592350245035959HOEnZb"><font color="#888888"><br><br clear="all"><div><br></div>-- <br><div class="m_-3896592350245035959m_-3099678010932859931gmail_signature" data-smartmail="gmail_signature">Jeff Hammond<br><a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a><br><a href="http://jeffhammond.github.io/" target="_blank">http://jeffhammond.github.io/</a></div>
</font></span></div>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="m_-3896592350245035959gmail_signature" data-smartmail="gmail_signature">Jeff Hammond<br><a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a><br><a href="http://jeffhammond.github.io/" target="_blank">http://jeffhammond.github.io/</a></div>
</div>
</blockquote></div><br clear="all"><div><br></div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature">Address: 10-3 Midori-cho, Tachikawa, Tokyo 190-8562, Japan<br>Phone: +81-50-5533-8532 <br>Fax: +81-42-526-4335<br>email: <a href="mailto:zhuangjc@ism.ac.jp" target="_blank">zhuangjc@ism.ac.jp</a><br>homepage: <a href="http://bemlar.ism.ac.jp/zhuang" target="_blank">http://bemlar.ism.ac.jp/zhuang</a><br> ----------------------------------------------------------------------------------------</div></div>