<div dir="ltr">Yes, I understand that. I'll try to make my standalone test closer to the real application. Thank you.</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Mon, Dec 9, 2013 at 9:31 PM, Pavan Balaji <span dir="ltr"><<a href="mailto:balaji@mcs.anl.gov" target="_blank">balaji@mcs.anl.gov</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
It sounds like MPICH is working correctly. Without a test case, it’s unfortunately quite hard for us to even know what to look for. It’s also possible that there’s a bug in your code which might be causing some bad behavior.<br>
<span class="HOEnZb"><font color="#888888"><br>
— Pavan<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
On Dec 9, 2013, at 1:27 PM, Anatoly G <<a href="mailto:anatolyrishon@gmail.com">anatolyrishon@gmail.com</a>> wrote:<br>
<br>
> Yes, I actually need fault tolerance; it was the main reason for choosing MPICH2. I use fault tolerance to survive unpredictable bugs in the future: my system should keep running partially. But in the regular case I just need full performance. I suspect that I don't use MPI correctly, but at a slow rate everything works fine. The failure is caused by increasing the rate of MPI_Isend or increasing the data buffer size. I haven't found any strong dependence yet, only a general trend.<br>
><br>
> Unfortunately I have a complex system which has a number of threads in each process. Part of the threads use different communicators.<br>
><br>
> I tried to simulate the same MPI behavior in a simple standalone test, but the standalone test works fine. It shows full network performance; when I slow down the master (in the standalone test), all slaves stop too and wait for the master to continue. Can I enable any MPICH log and send you the results?<br>
><br>
><br>
> On Mon, Dec 9, 2013 at 8:10 PM, Pavan Balaji <<a href="mailto:balaji@mcs.anl.gov">balaji@mcs.anl.gov</a>> wrote:<br>
><br>
> Do you actually need Fault Tolerance (one of your previous emails seemed to indicate that)?<br>
><br>
> It sounds like there is a bug in either your application or in the MPICH stack and you are trying to track it down, and don’t really care about fault tolerance. Is that a correct assessment?<br>
><br>
> Do you have a simplified program that reproduces this error, that we can try?<br>
><br>
> — Pavan<br>
><br>
> On Dec 9, 2013, at 11:44 AM, Anatoly G <<a href="mailto:anatolyrishon@gmail.com">anatolyrishon@gmail.com</a>> wrote:<br>
><br>
> > No. The hardware is OK. The master process allocates memory (checking with MemoryScape doesn't show any significant memory allocation in my code). Then network traffic becomes low, and then the master process crashes without saving a core file. I have an unlimited core file size. I see the same failure (without a core) when I call MPI_Abort, but I don't call it.<br>
> ><br>
> ><br>
> > On Mon, Dec 9, 2013 at 7:28 PM, Wesley Bland <<a href="mailto:wbland@mcs.anl.gov">wbland@mcs.anl.gov</a>> wrote:<br>
> > Are you actually seeing hardware failure or is your code just crashing? It's odd that one specific process would fail so often in the same way if it were a hardware problem.<br>
> ><br>
> > Thanks,<br>
> > Wesley<br>
> ><br>
> > On Dec 9, 2013, at 11:15 AM, Anatoly G <<a href="mailto:anatolyrishon@gmail.com">anatolyrishon@gmail.com</a>> wrote:<br>
> ><br>
> >> One more interesting fact: each time there is a failure, only the master process fails, while the slaves still exist together with mpiexec.hydra. I thought the slaves should fail too, but they stay alive.<br>
> >><br>
> >><br>
> >> On Mon, Dec 9, 2013 at 10:30 AM, Anatoly G <<a href="mailto:anatolyrishon@gmail.com">anatolyrishon@gmail.com</a>> wrote:<br>
> >> I configured the MPICH-3.1rc2 build without ".so" files, but as with MPICH2 & MPICH-3.0.4 I still get .so files. What should I change in the configure line to link MPI with my application statically?<br>
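A configure invocation along these lines usually produces static-only libraries (a sketch under the assumption of a standard MPICH source tree; the prefix path is illustrative):

```shell
# Build MPICH with static libraries only; --disable-shared tells libtool
# not to produce .so files, so mpicc links the application statically.
./configure --prefix=/opt/mpich-3.1rc2 --disable-shared --enable-static
make && make install

# Link the application against the static libmpich.a via the wrapper:
/opt/mpich-3.1rc2/bin/mpicc -o my_app my_app.c
```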
> >><br>
> >><br>
> >><br>
> >> On Mon, Dec 9, 2013 at 9:47 AM, Pavan Balaji <<a href="mailto:balaji@mcs.anl.gov">balaji@mcs.anl.gov</a>> wrote:<br>
> >><br>
> >> Can you try mpich-3.1rc2? There were several fixes for this in this version and it’ll be good to try that out before we go digging too far into this.<br>
> >><br>
> >> — Pavan<br>
> >><br>
> >> On Dec 9, 2013, at 1:46 AM, Anatoly G <<a href="mailto:anatolyrishon@gmail.com">anatolyrishon@gmail.com</a>> wrote:<br>
> >><br>
> >> > With MPICH-3.0.4 the situation repeated. It looks like MPI allocates memory for messages.<br>
> >> > Can you please advise about a scenario in which MPI, or perhaps TCP underneath MPI, allocates memory due to a high transfer rate?<br>
> >> ><br>
> >> ><br>
> >> > On Mon, Dec 9, 2013 at 9:32 AM, Anatoly G <<a href="mailto:anatolyrishon@gmail.com">anatolyrishon@gmail.com</a>> wrote:<br>
> >> > Thank you very much.<br>
> >> > Issend is not so good: it can't give me fault tolerance. If a slave process fails, the master stalls.<br>
> >> > I tried mpich-3.0.4 with hydra-3.0.4, but my program, which uses MPI fault tolerance, doesn't recognize the failure of a slave process, though it does recognize the failure with MPICH2. Maybe you can suggest a solution?<br>
> >> > I tried using hydra from MPICH2 while linking my program with MPICH3. This combination recognizes failures, but I'm not sure that such a combination is stable enough.<br>
> >> > Can you please advise?<br>
> >> > Anatoly.<br>
> >> ><br>
> >> ><br>
> >> ><br>
> >> > On Sat, Dec 7, 2013 at 5:20 PM, Pavan Balaji <<a href="mailto:balaji@mcs.anl.gov">balaji@mcs.anl.gov</a>> wrote:<br>
> >> ><br>
> >> > As much as I hate saying this — some people find it easier to think of it as “MPICH3”.<br>
> >> ><br>
> >> > — Pavan<br>
> >> ><br>
> >> > On Dec 7, 2013, at 7:37 AM, Wesley Bland <<a href="mailto:wbland@mcs.anl.gov">wbland@mcs.anl.gov</a>> wrote:<br>
> >> ><br>
> >> > > MPICH is just the new version of MPICH2. We renamed it when we went past version 3.0.<br>
> >> > ><br>
> >> > > On Dec 7, 2013, at 3:55 AM, Anatoly G <<a href="mailto:anatolyrishon@gmail.com">anatolyrishon@gmail.com</a>> wrote:<br>
> >> > ><br>
> >> > >> Ok. I'll try Issend first, and as a next step upgrade MPICH to 3.0.4.<br>
> >> > >> I thought before that MPICH & MPICH2 were two different branches, where MPICH2 partially supports fault tolerance but MPICH does not. Now I understand that I was wrong and MPICH is simply the newer version of MPICH2.<br>
> >> > >><br>
> >> > >> Thank you very much,<br>
> >> > >> Anatoly.<br>
> >> > >><br>
> >> > >><br>
> >> > >><br>
> >> > >> On Thu, Dec 5, 2013 at 11:01 PM, Rajeev Thakur <<a href="mailto:thakur@mcs.anl.gov">thakur@mcs.anl.gov</a>> wrote:<br>
> >> > >> The master is receiving more incoming messages than it can match quickly enough with Irecvs. Try using MPI_Issend instead of MPI_Isend.<br>
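The suggested change affects only the send call; a minimal sketch of a slave's send step (buffer size matches the 2 KB mentioned later in the thread; the tag, master rank, and function name are illustrative):

```c
/* MPI_Issend is a nonblocking *synchronous-mode* send: the request
 * completes only once the receiver has matched the message, so a slave
 * cannot run ahead of the master's posted Irecvs. This gives natural
 * flow control instead of unbounded buffering of unexpected messages. */
#include <mpi.h>

#define BUF_SIZE 2048   /* 2 KB payload, as in the scenario below */

void slave_send(MPI_Comm comm, const char *buf)
{
    MPI_Request req;
    /* Same signature as MPI_Isend; only the completion semantics differ. */
    MPI_Issend(buf, BUF_SIZE, MPI_BYTE, /* master rank */ 0, /* tag */ 0,
               comm, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);  /* blocks until the receive matched */
}
```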
> >> > >><br>
> >> > >> Rajeev<br>
> >> > >><br>
> >> > >> On Dec 5, 2013, at 2:58 AM, Anatoly G <<a href="mailto:anatolyrishon@gmail.com">anatolyrishon@gmail.com</a>> wrote:<br>
> >> > >><br>
> >> > >> > Hello.<br>
> >> > >> > I'm using MPICH2 1.5.<br>
> >> > >> > My system contains a master and 16 slaves.<br>
> >> > >> > The system uses a number of communicators.<br>
> >> > >> > A single communicator is used for the scenario below:<br>
> >> > >> > Each slave sends 2 KB data buffers non-stop using MPI_Isend and waits using MPI_Wait.<br>
> >> > >> > The master starts with an MPI_Irecv for each slave,<br>
> >> > >> > then in an endless loop:<br>
> >> > >> > MPI_Waitany, followed by MPI_Irecv on the rank returned by MPI_Waitany.<br>
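The master loop described above can be sketched as follows (a hedged sketch, not the original code; the buffer size matches the 2 KB mentioned, but the tag, rank layout, and function name are illustrative, and slaves are assumed to have ranks 1..nslaves):

```c
/* Sketch of the master's receive loop: one outstanding Irecv per slave,
 * serviced by MPI_Waitany and immediately reposted. */
#include <mpi.h>

#define BUF_SIZE   2048   /* 2 KB buffers, as described */
#define MAX_SLAVES 16

void master_loop(MPI_Comm comm, int nslaves)
{
    static char bufs[MAX_SLAVES][BUF_SIZE];
    MPI_Request reqs[MAX_SLAVES];

    /* Post one Irecv per slave up front (slave i has rank i + 1). */
    for (int i = 0; i < nslaves; i++)
        MPI_Irecv(bufs[i], BUF_SIZE, MPI_BYTE, i + 1, 0, comm, &reqs[i]);

    for (;;) {
        int idx;
        MPI_Status status;
        /* Wait until any slave's message arrives... */
        MPI_Waitany(nslaves, reqs, &idx, &status);
        /* ...process bufs[idx] here, then repost the Irecv for that slave. */
        MPI_Irecv(bufs[idx], BUF_SIZE, MPI_BYTE, idx + 1, 0, comm,
                  &reqs[idx]);
    }
}
```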
> >> > >> ><br>
> >> > >> > Another communicator is used for broadcast communication (commands between master and slaves),<br>
> >> > >> > but it is not used in parallel with the previous communicator,<br>
> >> > >> > only before or after data transfer.<br>
> >> > >> ><br>
> >> > >> > The system runs on two computers linked by 1 Gbit/s Ethernet.<br>
> >> > >> > The master runs on the first computer, all slaves on the other one.<br>
> >> > >> > Network traffic is ~800 Mbit/s.<br>
> >> > >> ><br>
> >> > >> > After 1-2 minutes of execution, the master process starts to increase its memory allocation and network traffic becomes low.<br>
> >> > >> > This memory growth & network slowdown continue until MPI fails,<br>
> >> > >> > without saving a core file.<br>
> >> > >> > My program doesn't allocate memory. Can you please explain this behaviour?<br>
> >> > >> > How can I make MPI stop the slaves from sending when the master can't keep up with the traffic, instead of allocating memory and failing?<br>
> >> > >> ><br>
> >> > >> ><br>
> >> > >> > Thank you,<br>
> >> > >> > Anatoly.<br>
> >> > >> ><br>
> >> > >> > P.S.<br>
> >> > >> > In my standalone test I simulate similar behaviour, but with a single thread in each process (master & slaves).<br>
> >> > >> > When I run the standalone test, the master stops the slaves until it completes processing the accumulated data, and MPI doesn't increase memory allocation.<br>
> >> > >> > When the master is free, the slaves continue to send data.<br>
> >> > >> > _______________________________________________<br>
> >> > >> > discuss mailing list <a href="mailto:discuss@mpich.org">discuss@mpich.org</a><br>
> >> > >> > To manage subscription options or unsubscribe:<br>
> >> > >> > <a href="https://lists.mpich.org/mailman/listinfo/discuss" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
> >> > >><br>
> >> ><br>
> >> > --<br>
> >> > Pavan Balaji<br>
> >> > <a href="http://www.mcs.anl.gov/~balaji" target="_blank">http://www.mcs.anl.gov/~balaji</a><br>
> >> ><br>
> >> ><br>
> >> ><br>
> >><br>
> >> --<br>
> >> Pavan Balaji<br>
> >> <a href="http://www.mcs.anl.gov/~balaji" target="_blank">http://www.mcs.anl.gov/~balaji</a><br>
> >><br>
> >><br>
> >><br>
><br>
> --<br>
> Pavan Balaji<br>
> <a href="http://www.mcs.anl.gov/~balaji" target="_blank">http://www.mcs.anl.gov/~balaji</a><br>
><br>
<br>
--<br>
Pavan Balaji<br>
<a href="http://www.mcs.anl.gov/~balaji" target="_blank">http://www.mcs.anl.gov/~balaji</a><br>
<br>
</div></div></blockquote></div><br></div>