<meta http-equiv="Content-Type" content="text/html; charset=utf-8"><div dir="ltr">Indeed, Rob is right. I only tested np<=2 and didn't see a failure until nproc=16 because of how malloc works, but in any case, it is trivial to fix by allocating C to hold nproc elements.<div><br></div><div>Jeff<br><div><br></div><div>
<p class=""><span class="">#include <mpi.h></span></p>
<p class=""><span class="">#include <stdio.h></span></p>
<p class=""><span class="">#include <stdlib.h></span></p>
<p class=""><span class=""></span><br></p>
<p class=""><span class="">int main(int argc, char *argv[])</span></p>
<p class=""><span class="">{</span></p>
<p class=""><span class=""> MPI_Init(&argc, &argv);</span></p>
<p class=""><span class=""></span><br></p>
<p class=""><span class=""> MPI_Comm comm = MPI_COMM_WORLD;</span></p>
<p class=""><span class=""> uint64_t *A, *C;</span></p>
<p class=""><span class=""> int rnk, siz;</span></p>
<p class=""><span class=""></span><br></p>
<p class=""><span class=""> MPI_Comm_rank(comm, &rnk);</span></p>
<p class=""><span class=""> MPI_Comm_size(comm, &siz);</span></p>
<p class=""><span class=""> A = calloc(1, sizeof(uint64_t));</span></p>
<p class=""><span class=""> C = calloc(siz, sizeof(uint64_t));</span></p>
<p class=""><span class=""> A[0] = rnk + 1;</span></p>
<p class=""><span class=""></span><br></p>
<p class=""><span class=""> MPI_Allgather(A, 1, MPI_UINT64_T, C, 1, MPI_UINT64_T, comm);</span></p>
<p class=""><span class=""></span><br></p>
<p class=""><span class=""> free(C);</span></p>
<p class=""><span class=""> free(A);</span></p>
<p class=""><span class=""></span><br></p>
<p class=""><span class=""> MPI_Finalize();</span></p>
<p class=""><span class=""> return 0;</span></p>
<p class=""><span class="">}</span></p></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jul 26, 2016 at 9:00 AM, Rob Latham <span dir="ltr"><<a href="mailto:robl@mcs.anl.gov" target="_blank">robl@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
<br>
On 07/26/2016 10:17 AM, Andreas Noack wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On my El Capitan MacBook I get a segfault when running the program below<br>
with more than a single process, but only when MPICH has been compiled<br>
with Clang.<br>
<br>
I don't get very good debug info, but here is some of what I got<br>
</blockquote>
<br>
<br></span>
valgrind is pretty good at sussing out these sorts of things:<br>
<br>
==18132== Unaddressable byte(s) found during client check request<br>
==18132== at 0x504D1D7: MPIR_Localcopy (helper_fns.c:84)<br>
==18132== by 0x4EC8EA1: MPIR_Allgather_intra (allgather.c:169)<br>
==18132== by 0x4ECA5EC: MPIR_Allgather (allgather.c:791)<br>
==18132== by 0x4ECA7A4: MPIR_Allgather_impl (allgather.c:832)<br>
==18132== by 0x4EC8B5C: MPID_Allgather (mpid_coll.h:61)<br>
==18132== by 0x4ECB9F7: PMPI_Allgather (allgather.c:978)<br>
==18132== by 0x4008F5: main (noack_segv.c:18)<br>
==18132== Address 0x6f2f138 is 8 bytes after a block of size 16 alloc'd<br>
==18132== at 0x4C2FB55: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)<br>
==18132== by 0x4008B0: main (noack_segv.c:15)<br>
==18132==<br>
==18132== Invalid write of size 8<br>
==18132== at 0x4C326CB: memcpy@@GLIBC_2.14 (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)<br>
==18132== by 0x504D31B: MPIR_Localcopy (helper_fns.c:84)<br>
==18132== by 0x4EC8EA1: MPIR_Allgather_intra (allgather.c:169)<br>
==18132== by 0x4ECA5EC: MPIR_Allgather (allgather.c:791)<br>
==18132== by 0x4ECA7A4: MPIR_Allgather_impl (allgather.c:832)<br>
==18132== by 0x4EC8B5C: MPID_Allgather (mpid_coll.h:61)<br>
==18132== by 0x4ECB9F7: PMPI_Allgather (allgather.c:978)<br>
==18132== by 0x4008F5: main (noack_segv.c:18)<br>
==18132== Address 0x6f2f138 is 8 bytes after a block of size 16 alloc'd<br>
==18132== at 0x4C2FB55: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)<br>
==18132== by 0x4008B0: main (noack_segv.c:15)<span class=""><br>
<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
MPI_Comm_rank(comm, &rnk);<br>
A = calloc(1, sizeof(uint64_t));<br>
C = calloc(2, sizeof(uint64_t));<br>
A[0] = rnk + 1;<br>
<br>
MPI_Allgather(A, 1, MPI_UINT64_T, C, 1, MPI_UINT64_T, comm);<br>
</blockquote>
<br></span>
Your 'buf count tuple' is OK for A: every process sends one uint64.<br>
<br>
Your 'buf count tuple' is too small for C if there are more than 2 processes.<br>
<br>
When you say "more than one"... do you mean 2?<span class="HOEnZb"><font color="#888888"><br>
<br>
==rob</font></span><div class="HOEnZb"><div class="h5"><br>
_______________________________________________<br>
discuss mailing list <a href="mailto:discuss@mpich.org" target="_blank">discuss@mpich.org</a><br>
To manage subscription options or unsubscribe:<br>
<a href="https://lists.mpich.org/mailman/listinfo/discuss" rel="noreferrer" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature" data-smartmail="gmail_signature">Jeff Hammond<br><a href="mailto:jeff.science@gmail.com" target="_blank">jeff.science@gmail.com</a><br><a href="http://jeffhammond.github.io/" target="_blank">http://jeffhammond.github.io/</a></div>
</div>