[mpich-devel] MPI_Dist_graph_create self-edge

Daniel Ibanez dan.a.ibanez at gmail.com
Sat Oct 24 20:48:14 CDT 2015


Rajeev,

Thank you!
This was indeed a bug in my code that calls Dist_graph_create;
it's a good thing I copied the resulting values exactly into the demo code.
I should actually be specifying n=1 at all times for my application.
Things are working as I expect now; sorry if I wasted anyone's time.
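
For reference, the corrected rank-0 call with n=1 (reusing the values
from the demo program quoted below) ends up looking roughly like this:

  int sources[1] = {0};
  int degrees[1] = {2};
  int destinations[2] = {1,0};
  int weights[2] = {2,1};
  MPI_Dist_graph_create(MPI_COMM_WORLD, 1, sources, degrees, destinations,
      weights, MPI_INFO_NULL, 0, &newcomm);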

On Sat, Oct 24, 2015 at 9:32 PM, Thakur, Rajeev <thakur at mcs.anl.gov> wrote:

> On rank 0, if you are passing n=2, shouldn’t the sources and degrees
> arrays be of size 2?
>
> Rajeev
>
> > On Oct 24, 2015, at 6:37 PM, Daniel Ibanez <dan.a.ibanez at gmail.com> wrote:
> >
> > Pavan,
> >
> > Here is a short program that reproduces the issue:
> >
> >
> >
> > #include <mpi.h>
> > #include <assert.h>
> >
> > int main(int argc, char** argv)
> > {
> >   MPI_Init(&argc, &argv);
> >   int size;
> >   MPI_Comm_size(MPI_COMM_WORLD, &size);
> >   assert(size == 2);
> >   int rank;
> >   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> >   MPI_Comm newcomm;
> >   if (rank == 0) {
> >     int sources[1] = {0};
> >     int degrees[1] = {2};
> >     int destinations[2] = {1,0};
> >     int weights[2] = {2,1};
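> >     /* Note: n is passed as 2 below even though sources and degrees
> >        hold only one entry each; this mismatch (not the self-edge)
> >        turns out to be the real bug, as Rajeev points out above. */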
> >     MPI_Dist_graph_create(MPI_COMM_WORLD, 2, sources, degrees, destinations,
> >         weights, MPI_INFO_NULL, 0, &newcomm);
> >   } else {
> >     int sources[1] = {1};
> >     int degrees[1] = {1};
> >     int destinations[1] = {0};
> >     int weights[1] = {2};
> >     MPI_Dist_graph_create(MPI_COMM_WORLD, 1, sources, degrees, destinations,
> >         weights, MPI_INFO_NULL, 0, &newcomm);
> >   }
> >   MPI_Comm_free(&newcomm);
> >   MPI_Finalize();
> > }
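> >
> > (Build with mpicc or equivalent and run with mpiexec -n 2; which of
> > the two assertions fires varies from run to run.)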
> >
> >
> >
> >
> > Thanks,
> >
> > Dan
> >
> > On Sat, Oct 24, 2015 at 7:21 PM, Balaji, Pavan <balaji at anl.gov> wrote:
> > Daniel,
> >
> > Do you have a test program that shows these errors?
> >
> > The algorithms are a first cut, for now, but we are hoping to optimize
> > them in 2016 (together with one of our vendor partners, who will be
> > contributing code for it).
> >
> > Thanks,
> >
> >   -- Pavan
> >
> > From: Daniel Ibanez <dan.a.ibanez at gmail.com>
> > Reply-To: "devel at mpich.org" <devel at mpich.org>
> > Date: Saturday, October 24, 2015 at 5:05 PM
> > To: "devel at mpich.org" <devel at mpich.org>
> > Subject: [mpich-devel] MPI_Dist_graph_create self-edge
> >
> > Hello,
> >
> > Using MPICH 3.1.4, I'm getting internal assertions of this kind
> > when using MPI_Dist_graph_create:
> >
> > Assertion failed in file src/mpi/topo/dist_gr_create.c at line 223: s_rank >= 0
> > Assertion failed in file src/mpi/topo/dist_gr_create.c at line 195: sources[i] < comm_size
> >
> > I've checked that the ranks passed in are within the proper range;
> > rather, I think the issue is caused by requesting an edge
> > from rank 0 to itself.
> > (Another hint is that it's non-deterministic which assertion
> > I get; it depends on the order in which the ranks are scheduled
> > by the OS, so there is a bit of a bug in the implementation,
> > or at least a missing check for self-edges.)
> >
> > Does the MPI standard allow self-edges in these graphs?
> >
> > Thank you,
> >
> > P.S. - I'll throw in a bigger question while I'm at it: are
> > MPI_Dist_graph_create and MPI_Neighbor_alltoallv
> > implemented with optimally scalable algorithms?
> > I'm betting my scalability on them being roughly
> > O(log(P)), where P is the communicator size, assuming
> > neighborhood sizes and message sizes are constant.
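> >
> > For concreteness, the pattern I'm counting on looks roughly like the
> > sketch below (assuming <stdlib.h>, a dist-graph communicator comm
> > returned by MPI_Dist_graph_create, and constant per-neighbor message
> > sizes of one int; frees and error checking omitted):
> >
> >   int indeg, outdeg, weighted, rank;
> >   MPI_Comm_rank(comm, &rank);
> >   MPI_Dist_graph_neighbors_count(comm, &indeg, &outdeg, &weighted);
> >   int* sendbuf = malloc(sizeof(int) * outdeg);
> >   int* scounts = malloc(sizeof(int) * outdeg);
> >   int* sdispls = malloc(sizeof(int) * outdeg);
> >   int* recvbuf = malloc(sizeof(int) * indeg);
> >   int* rcounts = malloc(sizeof(int) * indeg);
> >   int* rdispls = malloc(sizeof(int) * indeg);
> >   for (int i = 0; i < outdeg; ++i) {
> >     sendbuf[i] = rank;   /* one int to each outgoing neighbor */
> >     scounts[i] = 1;
> >     sdispls[i] = i;
> >   }
> >   for (int i = 0; i < indeg; ++i) {
> >     rcounts[i] = 1;      /* one int from each incoming neighbor */
> >     rdispls[i] = i;
> >   }
> >   MPI_Neighbor_alltoallv(sendbuf, scounts, sdispls, MPI_INT,
> >                          recvbuf, rcounts, rdispls, MPI_INT, comm);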
> >