[mpich-discuss] MPI IO, reading MPI Writes without MPI read

Ryan Crocker rcrocker at uvm.edu
Thu Mar 28 02:34:21 CDT 2013


Here is the whole code snippet for writing a vector; it will probably help:

  inquire(file=file,exist=file_is_there)
  if (file_is_there .and. irank.eq.iroot) call MPI_FILE_DELETE(file,mpi_info,ierr)
  call MPI_BARRIER(comm,ierr)  ! make sure the delete has completed before any rank reopens the file
  call MPI_FILE_OPEN(comm,file,IOR(MPI_MODE_WRONLY,MPI_MODE_CREATE),mpi_info,iunit,ierr)
  
  ! Write header (only root)
  if (irank.eq.iroot) then
     buffer = trim(adjustl(name))
     size = 80
     call MPI_FILE_WRITE(iunit,buffer,size,MPI_CHARACTER,status,ierr)
     buffer = 'part'
     size = 80
     call MPI_FILE_WRITE(iunit,buffer,size,MPI_CHARACTER,status,ierr)
     ibuffer = 1
     size = 1
     call MPI_FILE_WRITE(iunit,ibuffer,size,MPI_INTEGER,status,ierr)
     buffer = 'hexa8'
     size = 80
     call MPI_FILE_WRITE(iunit,buffer,size,MPI_CHARACTER,status,ierr)
  end if
  
  ! Write the data - hexa (the header above occupies 3 records of 80 characters plus one 4-byte integer)
  disp = 3*80+4+0*ncells_hexa*4
  call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_hexa,"native",mpi_info,ierr)
  call MPI_FILE_WRITE_ALL(iunit,buffer3_hexa(:,1),ncells_hexa_,MPI_REAL_SP,status,ierr)
  disp = 3*80+4+1*ncells_hexa*4
  call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_hexa,"native",mpi_info,ierr)
  call MPI_FILE_WRITE_ALL(iunit,buffer3_hexa(:,2),ncells_hexa_,MPI_REAL_SP,status,ierr)
  disp = 3*80+4+2*ncells_hexa*4
  call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_hexa,"native",mpi_info,ierr)
  call MPI_FILE_WRITE_ALL(iunit,buffer3_hexa(:,3),ncells_hexa_,MPI_REAL_SP,status,ierr)
  
  ! Write the data - wedge
  if (ncells_wedge.gt.0) then
     disp = 3*80+4+3*ncells_hexa*4+0*ncells_wedge*4
     call MPI_FILE_SET_VIEW(iunit,disp,MPI_CHARACTER,MPI_CHARACTER,"native",mpi_info,ierr)
     if (irank.eq.iroot) then
        buffer = 'penta6'
        size = 80
        call MPI_FILE_WRITE(iunit,buffer,size,MPI_CHARACTER,status,ierr)
     end if
     disp = 3*80+4+3*ncells_hexa*4+80+0*ncells_wedge*4
     call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_wedge,"native",mpi_info,ierr)
     call MPI_FILE_WRITE_ALL(iunit,buffer3_wedge(:,1),ncells_wedge_,MPI_REAL_SP,status,ierr)
     disp = 3*80+4+3*ncells_hexa*4+80+1*ncells_wedge*4
     call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_wedge,"native",mpi_info,ierr)
     call MPI_FILE_WRITE_ALL(iunit,buffer3_wedge(:,2),ncells_wedge_,MPI_REAL_SP,status,ierr)
     disp = 3*80+4+3*ncells_hexa*4+80+2*ncells_wedge*4
     call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_wedge,"native",mpi_info,ierr)
     call MPI_FILE_WRITE_ALL(iunit,buffer3_wedge(:,3),ncells_wedge_,MPI_REAL_SP,status,ierr)
  end if
  
  ! Close the file
  call MPI_FILE_CLOSE(iunit,ierr)

On Mar 28, 2013, at 12:00 AM, Ryan Crocker wrote:

> So I'm not sure if this is crazy or not, but I have file outputs from my code that write Ensight Gold files with MPI.  Here is the write, 
> 
>  disp = 3*80+4+0*ncells_hexa*4
>  call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_hexa,"native",mpi_info,ierr)
>  call MPI_FILE_WRITE_ALL(iunit,buffer3_hexa(:,1),ncells_hexa_,MPI_REAL_SP,status,ierr)
> 
> If I wanted to read that in binary in Fortran or C (preferably Fortran), what exactly would I need to do?  I can't seem to write code that reads these in and produces anything that looks like the plot I get in ParaView.  I know that the MPI write puts out each processor's data vector with that disp in between them, but I just can't make that structure make sense to me when I try to read it in.
> 
> Thanks for the help, 
> 
> Ryan Crocker
> University of Vermont, School of Engineering
> Mechanical Engineering Department
> rcrocker at uvm.edu
> 315-212-7331
> 
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

Ryan Crocker
University of Vermont, School of Engineering
Mechanical Engineering Department
rcrocker at uvm.edu
315-212-7331
