On Sun, May 10, 2020, at 6:33 AM, hritikesh semwal via discuss wrote:

> On Sun, May 10, 2020 at 4:18 AM Jeff Hammond <jeff.science@gmail.com> wrote:
>
>> On Sat, May 9, 2020 at 10:10 AM hritikesh semwal via discuss <discuss@mpich.org> wrote:
>>
>>> Hello,
>>>
>>> I have the following two questions about MPI:
>>>
>>> 1. Do I need to give a separate input file to each processor in a group, or can all of them read the input from a single file? All data inside the file should be read by every processor.
>>
>> You can read the input file from every process, but it isn't a good idea. Read the input file from rank 0 and broadcast the contents. This assumes that your input file is small. If your input file is huge, then you should consider designing it for parallel reading with the MPI_File_* functions.
>
> Thanks for your response. I am reading two input files in my parallel CFD solver at different places in the code. The first input file has 7 integer values and the second has 15. Is that too large to read with conventional C file-reading functions such as fscanf(), and is broadcast suitable for that amount of data?

fscanf on process 0 with a broadcast should be sufficient.
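Something along the lines of the sketch below should do for the first file; the file name "input.dat" and the count of 7 integers are placeholders taken from your description, so adjust them to your solver:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int values[7] = {0};
        if (rank == 0) {
            /* "input.dat" is a placeholder name */
            FILE *fp = fopen("input.dat", "r");
            if (fp == NULL) {
                fprintf(stderr, "cannot open input.dat\n");
                MPI_Abort(MPI_COMM_WORLD, 1);
            }
            for (int i = 0; i < 7; i++) {
                if (fscanf(fp, "%d", &values[i]) != 1) {
                    fprintf(stderr, "bad input file\n");
                    MPI_Abort(MPI_COMM_WORLD, 1);
                }
            }
            fclose(fp);
        }

        /* every rank receives the 7 integers that rank 0 read */
        MPI_Bcast(values, 7, MPI_INT, 0, MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }

The second file with 15 integers can be handled the same way; a single MPI_Bcast of a small integer array is much cheaper than having every rank open and parse the file.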
>>> 2. Can you please tell me what the difference is between MPI_Neighbor_alltoallw and MPI_Ineighbor_alltoallw? As I read in the MPI 3.0 document, MPI_Neighbor_alltoallw also uses non-blocking send and receive internally.
>>
>> MPI_Neighbor_alltoallw is a blocking function while MPI_Ineighbor_alltoallw is not. Using non-blocking send and receive internally does not mean that MPI_Neighbor_alltoallw doesn't block - if that is how it is implemented, then there will be the equivalent of an MPI_Waitall before the function returns. But in any case, just because the MPI document suggests a possible implementation does not mean that is how all implementations work.
>
> Yes, there is an MPI_Waitall inside MPI_Neighbor_alltoallw before it returns.
>
>> Jeff
>>
>>> Thank you.
>>
>> --
>> Jeff Hammond
>> jeff.science@gmail.com
>> http://jeffhammond.github.io/
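For what it is worth, here is a rough sketch of how the two calls differ in use; the 1-D periodic Cartesian topology and the buffers are made up purely for illustration, not taken from your solver:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int nprocs;
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* 1-D periodic Cartesian topology: every rank has two neighbors */
        int dims[1] = {nprocs}, periods[1] = {1};
        MPI_Comm cart;
        MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &cart);

        double sendbuf[2] = {1.0, 2.0}, recvbuf[2];
        int counts[2] = {1, 1};
        MPI_Aint sdispls[2] = {0, sizeof(double)};
        MPI_Aint rdispls[2] = {0, sizeof(double)};
        MPI_Datatype types[2] = {MPI_DOUBLE, MPI_DOUBLE};

        /* Blocking: when this returns, recvbuf is ready and sendbuf may be reused. */
        MPI_Neighbor_alltoallw(sendbuf, counts, sdispls, types,
                               recvbuf, counts, rdispls, types, cart);

        /* Non-blocking: returns immediately; neither buffer may be touched
           until the request completes. */
        MPI_Request req;
        MPI_Ineighbor_alltoallw(sendbuf, counts, sdispls, types,
                                recvbuf, counts, rdispls, types, cart, &req);
        /* ... independent local computation can go here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        MPI_Comm_free(&cart);
        MPI_Finalize();
        return 0;
    }

The non-blocking form only pays off if there is independent work to overlap between the call and the MPI_Wait; otherwise the two behave the same from the caller's point of view.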