[mpich-discuss] MPI IO, reading MPI Writes without MPI read

Ryan Crocker rcrocker at uvm.edu
Thu Mar 28 14:34:00 CDT 2013


buffer3_hexa(1,1) would just write out the first point in the buffer3_hexa array, while buffer3_hexa(:,1) tells it to write the whole vector, which is just a stacked version of my domain:

m = 0
do k=1,nz_            ! loop over the local domain (the bound names here are schematic)
  do j=1,ny_
    do i=1,nx_
      m = m+1
      buffer(m) = U(i,j,k)
    end do
  end do
end do

I think maybe my disp values are messed up.  If I'm writing from each process with the disp a function of the number of nodes on that process, does reading it in with the disp a function of the total number of nodes across all processes make sense?  Like I said, ParaView and EnSight have no problem reading these files, and the only information they get from the header is the total number of nodes over all the processes.
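
For reference, the layout my disp arithmetic assumes is just the EnSight Gold variable record: three 80-character strings plus one 4-byte integer of header, then ncells single-precision values per component, one component after another.  Something like this (the component index c is only for illustration):

  integer(kind=MPI_OFFSET_KIND) :: disp
  ! 3 x 80-char strings + one 4-byte integer of header, then ncells 4-byte reals per component
  disp = 3*80 + 4 + c*int(ncells,MPI_OFFSET_KIND)*4   ! c = component index 0, 1 or 2 (illustrative)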


On Mar 28, 2013, at 12:22 PM, Rajeev Thakur wrote:

> The other thing I would try is to use buffer3_hexa(1,1) instead of buffer3_hexa(:,1). I'm not sure what Fortran compilers do with the latter form, but there is a discussion of issues with Fortran subscript notation on pages 626-628 of the MPI-3 standard.
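> 
> Something like this, keeping the same count, so the compiler just hands MPI the address of the first element rather than possibly making a temporary copy (just a sketch, untested):
> 
>   call MPI_FILE_WRITE_ALL(iunit,buffer3_hexa(1,1),ncells_hexa_,MPI_REAL_SP,status,ierr)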
> 
> On Mar 28, 2013, at 2:16 PM, Ryan Crocker wrote:
> 
>> Yes.
>> 
>> On Mar 28, 2013, at 12:11 PM, Rajeev Thakur wrote:
>> 
>>> Have you declared disp as integer (kind=MPI_OFFSET_KIND) instead of just integer?
>>> 
>>> On Mar 28, 2013, at 1:33 PM, Ryan Crocker wrote:
>>> 
>>>> Okay, so I coded the MPI-IO.  All the characters and such show up, but when I try to read in my data buffers I just get the same number for the whole vector.  All the characters in the header read in and are what they're supposed to be, and the program finishes without any errors.
>>>> 
>>>> My write is:
>>>> 
>>>> call MPI_FILE_OPEN(comm,file,IOR(MPI_MODE_WRONLY,MPI_MODE_CREATE),mpi_info,iunit,ierr)
>>>> 
>>>> ! Write header (only root)
>>>> if (irank.eq.iroot) then
>>>>  buffer = trim(adjustl(name))
>>>>  size = 80
>>>>  call MPI_FILE_WRITE(iunit,buffer,size,MPI_CHARACTER,status,ierr)
>>>>  buffer = 'part'
>>>>  size = 80
>>>>  call MPI_FILE_WRITE(iunit,buffer,size,MPI_CHARACTER,status,ierr)
>>>>  ibuffer = 1
>>>>  size = 1
>>>>  call MPI_FILE_WRITE(iunit,ibuffer,size,MPI_INTEGER,status,ierr)
>>>>  buffer = 'hexa8'
>>>>  size = 80
>>>>  call MPI_FILE_WRITE(iunit,buffer,size,MPI_CHARACTER,status,ierr)
>>>> end if
>>>> 
>>>> ! Write the data
>>>> disp = 3*80+4+0*ncells_hexa*4
>>>> call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_hexa,"native",mpi_info,ierr)
>>>> call MPI_FILE_WRITE_ALL(iunit,buffer3_hexa(:,1),ncells_hexa_,MPI_REAL_SP,status,ierr)
>>>> disp = 3*80+4+1*ncells_hexa*4
>>>> call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_hexa,"native",mpi_info,ierr)
>>>> call MPI_FILE_WRITE_ALL(iunit,buffer3_hexa(:,2),ncells_hexa_,MPI_REAL_SP,status,ierr)
>>>> disp = 3*80+4+2*ncells_hexa*4
>>>> call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_hexa,"native",mpi_info,ierr)
>>>> call MPI_FILE_WRITE_ALL(iunit,buffer3_hexa(:,3),ncells_hexa_,MPI_REAL_SP,status,ierr)
>>>> 
>>>> ! Close the file
>>>> call MPI_FILE_CLOSE(iunit,ierr)
>>>> 
>>>> And the read is:
>>>> 
>>>> ncells = parallel_sum(ncells_hexa_,ncells)
>>>> 
>>>> allocate(buffer3(ncells,3))
>>>> 
>>>> openfile=trim(workdir)//'/'//'V/V.000002'
>>>> call MPI_FILE_OPEN(comm,openfile,MPI_MODE_RDONLY,mpi_info,iunit,ierr)
>>>> 
>>>> ! Read header 
>>>>  bsize = 80
>>>>  call MPI_FILE_READ(iunit,cbuffer,bsize,MPI_CHARACTER,status,ierr)
>>>>  print*,trim(cbuffer)
>>>>  bsize = 80
>>>>  call MPI_FILE_READ(iunit,cbuffer,bsize,MPI_CHARACTER,status,ierr)
>>>>  print*,trim(cbuffer)
>>>>  bsize = 1
>>>>  call MPI_FILE_READ(iunit,ibuffer,bsize,MPI_INTEGER,status,ierr)
>>>>  print*,ibuffer
>>>>  bsize = 80
>>>>  call MPI_FILE_READ(iunit,cbuffer,bsize,MPI_CHARACTER,status,ierr)
>>>>  print*,trim(cbuffer),ncells
>>>> 
>>>> ! Read the data
>>>> disp = 3*80+4+0*ncells*4
>>>> call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_hexa,"native",mpi_info,ierr)
>>>> call MPI_FILE_READ_ALL(iunit,buffer3(:,1),ncells,MPI_REAL_SP,status,ierr)
>>>> disp = 3*80+4+1*ncells*4
>>>> call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_hexa,"native",mpi_info,ierr)
>>>> call MPI_FILE_READ_ALL(iunit,buffer3(:,2),ncells,MPI_REAL_SP,status,ierr)
>>>> disp = 3*80+4+2*ncells*4
>>>> call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_hexa,"native",mpi_info,ierr)
>>>> call MPI_FILE_READ_ALL(iunit,buffer3(:,3),ncells,MPI_REAL_SP,status,ierr)
>>>> 
>>>> ! Close the file
>>>> call MPI_FILE_CLOSE(iunit,ierr) 
>>>> 
>>>> On Mar 28, 2013, at 8:48 AM, Rob Latham wrote:
>>>> 
>>>>> On Thu, Mar 28, 2013 at 08:19:23AM -0700, Ryan Crocker wrote:
>>>>>> From the file, those points are most definitely not in the same spot.  We're using a set view, so the data is laid out non-contiguously in the file and you can't just read it straight back in, and I can see the offsets between the data blocks.
>>>>> 
>>>>> If you write the data with MPI_File_write_at, then you should read it
>>>>> back with MPI_File_read_at.  Reading it with Fortran I/O instead will expect
>>>>> the input to have compiler-specific record markers and padding; I don't
>>>>> properly know the details, other than it's a mess.
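>>>>> 
>>>>> Roughly this kind of pairing, with the same explicit byte offset on both sides (everything here is a placeholder sketch):
>>>>> 
>>>>>   ! offset is an integer(kind=MPI_OFFSET_KIND), in bytes
>>>>>   call MPI_FILE_WRITE_AT(fh, offset, buf, n, MPI_REAL, status, ierr)
>>>>>   ! ... and later, the same offset and count to get the same bytes back ...
>>>>>   call MPI_FILE_READ_AT(fh, offset, buf, n, MPI_REAL, status, ierr)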
>>>>> 
>>>>> ==rob
>>>>> 
>>>>> 
>>>>>> 
>>>>>> On Mar 28, 2013, at 6:47 AM, Rajeev Thakur wrote:
>>>>>> 
>>>>>>> Most MPI-IO implementations write plain binary files without any additional information, so the data should be exactly what you wrote, at the locations you specified. The disp is not stored in the file; it is just used to offset to the right location. The file won't be portable with the default "native" data representation used for writing. In other words, it will be readable only on the same type of machine architecture. 
>>>>>>> 
>>>>>>> Try writing a simple MPI-IO program to write a file and then read it back using POSIX I/O calls (lseek, read) from a C program. I don't know if Fortran I/O expects files to be formatted in a particular way, which could cause a problem when reading from Fortran.
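>>>>>>> 
>>>>>>> If you would rather stay in Fortran instead of C, a stream-access read gives you the same sanity check (a rough sketch; the unit number, file name, and number of values are just examples):
>>>>>>> 
>>>>>>>   real(4) :: vals(10)
>>>>>>>   open(10, file='V.000002', access='stream', form='unformatted', action='read')
>>>>>>>   read(10, pos=3*80+4+1) vals   ! pos is 1-based, so the first value after the 244-byte header is at pos 245
>>>>>>>   close(10)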
>>>>>>> 
>>>>>>> Rajeev
>>>>>>> 
>>>>>>> 
>>>>>>> On Mar 28, 2013, at 2:00 AM, Ryan Crocker wrote:
>>>>>>> 
>>>>>>>> So I'm not sure if this is crazy or not, but I have file outputs from my code that write EnSight Gold files with MPI-IO.  Here is the write:
>>>>>>>> 
>>>>>>>> disp = 3*80+4+0*ncells_hexa*4
>>>>>>>> call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_hexa,"native",mpi_info,ierr)
>>>>>>>> call MPI_FILE_WRITE_ALL(iunit,buffer3_hexa(:,1),ncells_hexa_,MPI_REAL_SP,status,ierr)
>>>>>>>> 
>>>>>>>> If I wanted to read that binary back in Fortran or C (preferably Fortran), what exactly would I need to do?  I can't seem to write code that reads these files and produces anything that looks like the plot I get in ParaView.  I know that the MPI write puts out each processor's data vector with that disp in between, but I just can't make that structure make sense to me when I try to read it back in.
>>>>>>>> 
>>>>>>>> Thanks for the help, 
>>>>>>>> 
>>>>>> 
>>>>> 
>>>>> -- 
>>>>> Rob Latham
>>>>> Mathematics and Computer Science Division
>>>>> Argonne National Lab, IL USA
>>>> 
>>> 
>> 
> 

Ryan Crocker
University of Vermont, School of Engineering
Mechanical Engineering Department
rcrocker at uvm.edu
315-212-7331



