[mpich-discuss] 1. Single file reading by all processors in MPI. 2. Difference between blocking and non-blocking calls of MPI_Neighbor_alltoallw
Benson Muite
benson_muite at emailplus.org
Sun May 10 01:19:39 CDT 2020
On Sun, May 10, 2020, at 6:33 AM, hritikesh semwal via discuss wrote:
>
>
> On Sun, May 10, 2020 at 4:18 AM Jeff Hammond <jeff.science at gmail.com> wrote:
>>
>>
>> On Sat, May 9, 2020 at 10:10 AM hritikesh semwal via discuss <discuss at mpich.org> wrote:
>>> Hello,
>>>
>>> I have following two questions with MPI,
>>>
>>> 1. Do I need to give a separate input file to each processor in a group, or can all of them read the input from a single file? All data inside the file should be read by every processor.
>>
>> You can read the input file from every process, but it isn't a good idea. Read the input file from rank 0 and broadcast the contents. This assumes that your input file is small. If your input file is huge, then you should consider designing it for parallel reading with the MPI_File_* functions.
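For the huge-file case Jeff mentions, parallel reading could look like the following minimal sketch (the file name "grid.bin" and the per-rank count N are hypothetical; the file is assumed to hold nprocs * N binary ints):

    #include <mpi.h>

    #define N 1024   /* ints per rank, illustrative */

    int main(int argc, char **argv)
    {
        int rank, buf[N];
        MPI_File fh;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* All ranks open the same file collectively... */
        MPI_File_open(MPI_COMM_WORLD, "grid.bin", MPI_MODE_RDONLY,
                      MPI_INFO_NULL, &fh);

        /* ...and each reads its own contiguous block at a rank-dependent
           byte offset; the _all variant lets the library optimize the
           accesses collectively. */
        MPI_Offset offset = (MPI_Offset)rank * N * sizeof(int);
        MPI_File_read_at_all(fh, offset, buf, N, MPI_INT,
                             MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }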
>
> Thanks for your response. I am reading two input files in my parallel CFD solver at different places in the code. The first input file has 7 integer values and the second has 15. Is that too much data to read through conventional C file-reading functions such as fscanf(), and is broadcast suitable for this amount of data?
>
fscanf on process 0 with a broadcast should be sufficient.
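A minimal sketch of that pattern (the file name "input1.txt" stands in for your first input file with its 7 integers):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, params[7];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Only rank 0 touches the file... */
        if (rank == 0) {
            FILE *fp = fopen("input1.txt", "r");
            for (int i = 0; i < 7; i++)
                fscanf(fp, "%d", &params[i]);
            fclose(fp);
        }

        /* ...then every rank receives the same 7 integers. */
        MPI_Bcast(params, 7, MPI_INT, 0, MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }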
>>
>>
>>
>>>
>>> 2. Can you please tell me the difference between MPI_Neighbor_alltoallw and MPI_Ineighbor_alltoallw? I read in the MPI 3.0 document that MPI_Neighbor_alltoallw also uses non-blocking sends and receives inside its execution.
>>>
>>
>> MPI_Neighbor_alltoallw is a blocking function, while MPI_Ineighbor_alltoallw is not. Using non-blocking sends and receives internally does not mean that MPI_Neighbor_alltoallw doesn't block - if that is how it is implemented, then there will be the equivalent of an MPI_Waitall before the function returns. In any case, just because the MPI document suggests a possible implementation does not mean that is how all implementations work.
>
> Yes, there is an MPI_Waitall inside MPI_Neighbor_alltoallw before it returns.
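To make the contrast concrete, here is a minimal sketch of the non-blocking variant on a 1-D periodic Cartesian communicator (the buffer contents, counts, and displacements are purely illustrative):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* 1-D periodic Cartesian topology: every rank has exactly
           two neighbours (left and right). */
        int nprocs, dims[1] = {0}, periods[1] = {1};
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        MPI_Dims_create(nprocs, 1, dims);
        MPI_Comm cart;
        MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &cart);

        double sendbuf[2] = {1.0, 2.0}, recvbuf[2];
        int counts[2] = {1, 1};
        MPI_Aint sdispls[2] = {0, sizeof(double)};   /* byte offsets */
        MPI_Aint rdispls[2] = {0, sizeof(double)};
        MPI_Datatype types[2] = {MPI_DOUBLE, MPI_DOUBLE};

        /* Non-blocking form: returns immediately with a request;
           neither buffer may be touched until the request completes. */
        MPI_Request req;
        MPI_Ineighbor_alltoallw(sendbuf, counts, sdispls, types,
                                recvbuf, counts, rdispls, types,
                                cart, &req);

        /* ... independent computation can overlap here ... */

        MPI_Wait(&req, MPI_STATUS_IGNORE);

        /* The blocking form does both steps in one call:
           MPI_Neighbor_alltoallw(sendbuf, counts, sdispls, types,
                                  recvbuf, counts, rdispls, types, cart); */

        MPI_Comm_free(&cart);
        MPI_Finalize();
        return 0;
    }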
>
>>
>>
>> Jeff
>>
>>>
>>> Thank you.
>>
>>
>> --
>> Jeff Hammond
>> jeff.science at gmail.com
>> http://jeffhammond.github.io/