<div dir="auto"><div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, 5 May, 2020, 10:30 PM , <<a href="mailto:discuss-request@mpich.org" target="_blank" rel="noreferrer">discuss-request@mpich.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Send discuss mailing list submissions to<br>
<a href="mailto:discuss@mpich.org" rel="noreferrer noreferrer" target="_blank">discuss@mpich.org</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
<a href="https://lists.mpich.org/mailman/listinfo/discuss" rel="noreferrer noreferrer noreferrer" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
or, via email, send a message with subject or body 'help' to<br>
<a href="mailto:discuss-request@mpich.org" rel="noreferrer noreferrer" target="_blank">discuss-request@mpich.org</a><br>
<br>
You can reach the person managing the list at<br>
<a href="mailto:discuss-owner@mpich.org" rel="noreferrer noreferrer" target="_blank">discuss-owner@mpich.org</a><br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of discuss digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
1. Better alternatives of MPI_Allreduce() (Benson Muite)<br>
<br>
<br>
----------------------------------------------------------------------<br>
<br>
Message: 1<br>
Date: Tue, 05 May 2020 15:17:08 +0300<br>
From: "Benson Muite" <<a href="mailto:benson_muite@emailplus.org" rel="noreferrer noreferrer" target="_blank">benson_muite@emailplus.org</a>><br>
To: "Benson Muite via discuss" <<a href="mailto:discuss@mpich.org" rel="noreferrer noreferrer" target="_blank">discuss@mpich.org</a>><br>
Subject: [mpich-discuss] Better alternatives of MPI_Allreduce()<br>
Message-ID: <<a href="mailto:ef00483f-25a7-48d8-a04c-964e8001def7@www.fastmail.com" rel="noreferrer noreferrer" target="_blank">ef00483f-25a7-48d8-a04c-964e8001def7@www.fastmail.com</a>><br>
Content-Type: text/plain; charset="us-ascii"<br>
<br>
> 1. I am using MPI_Neighbor_alltoallw() to exchange data, after creating a distributed graph topology communicator. My concern is that most of the time my code works fine, but sometimes I suspect it deadlocks (it stops producing output). Since MPI_Neighbor_alltoallw uses MPI_Waitall internally, I do not understand why exactly this is happening.<br>
>> <br>
>> You may want to check that you are sending and receiving the correct data. Perhaps also try the nonblocking variant, MPI_Ineighbor_alltoallw.<br>
>> <br>
>> > 2. Is it possible that the completion time varies from run to run? For example, in one run all processors take around 100 seconds, and in another run they take 110 seconds.<br>
>> <br>
>> There is usually some variability. Do you solve the same system each time? What is the method of solution? If your code is available it can sometimes be easier to give suggestions.<br>
>> <br>
> Yes, the system of equations is the same. I am using the finite volume method to solve the Navier-Stokes equations. By your first sentence, do you mean that such variability is possible?<br>
<br>
Is the method implicit or explicit?<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"></blockquote></div></div><div dir="auto">It's an explicit method.</div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
> <br>
>> > <br>
>> > Please help with the above two matters.<br>
>> > <br>
>> > On Tue, May 5, 2020 at 4:28 PM hritikesh semwal <<a href="mailto:hritikesh.semwal@gmail.com" rel="noreferrer noreferrer" target="_blank">hritikesh.semwal@gmail.com</a>> wrote:<br>
>> >> Thanks for your response.<br>
>> >> <br>
>> >> Yes, you are right. I have put a barrier just before the Allreduce, and of the total time attributed to Allreduce, 79% is consumed by the barrier. But my computational work is balanced: I have distributed 97336 cells among 24 processors, and the maximum and minimum cell counts per processor are 4057 and 4055 respectively, which is not too bad. Is there any way to get rid of this?<br>
>> <br>
>> Try profiling your code rather than just looking at the cell distribution. Are any profiling tools already installed on your cluster?<br>
> <br>
> gprof and valgrind are there.<br>
<br>
While not ideal, gprof may be helpful. Perhaps initially try running on 12 processors; with gprof you will get 12 files to examine. Check whether all subroutines take similar times on each processor. You can also time the subroutines individually using MPI_Wtime to get the same information.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"></blockquote></div></div><div dir="auto">Yes, I have already timed my code before posting this question. I will try with gprof.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
<br>
Also, try not to reply to the digest; or if you do, change the subject of the message. This helps readers decide what to read.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"></blockquote></div></div><div dir="auto">Is it fine this time? I have changed the subject line. Is that what you meant?</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
------------------------------<br>
<br>
End of discuss Digest, Vol 91, Issue 7<br>
**************************************<br>
</blockquote></div></div></div>