<div dir="ltr">Rajeev,<div><br></div><div>Thank you !</div><div>This was indeed a bug in my code that called Dist_graph_create,</div><div>good thing I copied the resulting values exactly into the demo code.</div><div>I should actually be specifying n=1 at all times for my application.</div><div>Things are working as I expect now, sorry if I wasted someone's time.</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Oct 24, 2015 at 9:32 PM, Thakur, Rajeev <span dir="ltr"><<a href="mailto:thakur@mcs.anl.gov" target="_blank">thakur@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On rank 0, if you are passing n=2, shouldn’t the sources and degrees arrays be of size 2?<br>
<span class="HOEnZb"><font color="#888888"><br>
Rajeev<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
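For reference, a minimal sketch of the corrected rank-0 arguments, based on the n=1 fix described above; the edge data is taken from the reproducer quoted below, and newcomm is assumed declared as it is there:

    /* Rank 0 contributes one (source, degree) entry, so n must be 1. */
    int sources[1]      = {0};
    int degrees[1]      = {2};      /* rank 0 specifies two outgoing edges */
    int destinations[2] = {1, 0};   /* including the self-edge 0 -> 0 */
    int weights[2]      = {2, 1};
    MPI_Dist_graph_create(MPI_COMM_WORLD, 1, sources, degrees,
                          destinations, weights, MPI_INFO_NULL, 0, &newcomm);
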
> On Oct 24, 2015, at 6:37 PM, Daniel Ibanez <dan.a.ibanez@gmail.com> wrote:
>
> Pavan,
>
> Here is a short program that reproduces the issue:
>
> #include <mpi.h>
> #include <assert.h>
>
> int main(int argc, char** argv)
> {
>   MPI_Init(&argc, &argv);
>   int size;
>   MPI_Comm_size(MPI_COMM_WORLD, &size);
>   assert(size == 2);
>   int rank;
>   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>   MPI_Comm newcomm;
>   if (rank == 0) {
>     int sources[1] = {0};
>     int degrees[1] = {2};
>     int destinations[2] = {1,0};
>     int weights[2] = {2,1};
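>     /* NOTE: the n=2 below is the bug this thread resolves -- sources and
>        degrees have only one entry each, so n should be 1. */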
>     MPI_Dist_graph_create(MPI_COMM_WORLD, 2, sources, degrees, destinations,
>                           weights, MPI_INFO_NULL, 0, &newcomm);
>   } else {
>     int sources[1] = {1};
>     int degrees[1] = {1};
>     int destinations[1] = {0};
>     int weights[1] = {2};
>     MPI_Dist_graph_create(MPI_COMM_WORLD, 1, sources, degrees, destinations,
>                           weights, MPI_INFO_NULL, 0, &newcomm);
>   }
>   MPI_Comm_free(&newcomm);
>   MPI_Finalize();
> }
>
> Thanks,
>
> Dan
>
> On Sat, Oct 24, 2015 at 7:21 PM, Balaji, Pavan <balaji@anl.gov> wrote:
> Daniel,
>
> Do you have a test program that shows these errors?
>
> The algorithms are a first cut, for now, but we are hoping to optimize them in 2016 (together with one of our vendor partners, who will be contributing code for it).
>
> Thanks,
>
> -- Pavan
>
> From: Daniel Ibanez <dan.a.ibanez@gmail.com>
> Reply-To: "devel@mpich.org" <devel@mpich.org>
> Date: Saturday, October 24, 2015 at 5:05 PM
> To: "devel@mpich.org" <devel@mpich.org>
> Subject: [mpich-devel] MPI_Dist_graph_create self-edge
>
> Hello,
>
> Using MPICH 3.1.4, I'm getting internal assertions of this kind
> when using MPI_Dist_graph_create:
>
> Assertion failed in file src/mpi/topo/dist_gr_create.c at line 223: s_rank >= 0
> Assertion failed in file src/mpi/topo/dist_gr_create.c at line 195: sources[i] < comm_size
>
> I've checked that the ranks passed in are in the proper range;
> instead, I think the issue is caused by requesting an edge
> from rank 0 to itself.
> (Another hint is that it's non-deterministic which assertion
> I get; it depends on the order in which the ranks get scheduled
> by the OS, so there is a bit of a bug in the implementation,
> or at least no check for self-edges.)
>
> Does the MPI standard allow for self-edges in these graphs?
>
> Thank you,
>
> P.S. - I'll throw in a bigger question while I'm at it: are
> MPI_Dist_graph_create and MPI_Neighbor_alltoallv
> implemented with optimally scalable algorithms?
> I'm betting my scalability on them being roughly
> O(log(P)), where P is the communicator size, assuming
> neighborhood sizes and message sizes are constant.
>
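For context, a minimal sketch (not part of the original thread) of how a communicator returned by MPI_Dist_graph_create typically drives MPI_Neighbor_alltoallv; the helper name exchange_with_neighbors and the one-int-per-neighbor payload are illustrative assumptions:

    #include <mpi.h>
    #include <stdlib.h>

    /* Query the neighborhood of a dist-graph communicator and exchange
       one int with each neighbor via MPI_Neighbor_alltoallv. */
    static void exchange_with_neighbors(MPI_Comm newcomm)
    {
      int indegree, outdegree, weighted;
      MPI_Dist_graph_neighbors_count(newcomm, &indegree, &outdegree, &weighted);
      int* sendbuf    = malloc(sizeof(int) * outdegree);
      int* sendcounts = malloc(sizeof(int) * outdegree);
      int* sdispls    = malloc(sizeof(int) * outdegree);
      int* recvbuf    = malloc(sizeof(int) * indegree);
      int* recvcounts = malloc(sizeof(int) * indegree);
      int* rdispls    = malloc(sizeof(int) * indegree);
      for (int i = 0; i < outdegree; ++i) {
        sendbuf[i] = 42;      /* illustrative payload */
        sendcounts[i] = 1;
        sdispls[i] = i;
      }
      for (int i = 0; i < indegree; ++i) {
        recvcounts[i] = 1;
        rdispls[i] = i;
      }
      MPI_Neighbor_alltoallv(sendbuf, sendcounts, sdispls, MPI_INT,
                             recvbuf, recvcounts, rdispls, MPI_INT, newcomm);
      free(sendbuf); free(sendcounts); free(sdispls);
      free(recvbuf); free(recvcounts); free(rdispls);
    }

A self-edge, as in the reproducer, would simply make the process appear in both its own send and receive neighbor lists.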