[mpich-discuss] MPI IO, reading MPI Writes without MPI read
Rob Latham
robl at mcs.anl.gov
Mon Apr 1 16:44:04 CDT 2013
On Thu, Mar 28, 2013 at 12:53:42PM -0700, Ryan Crocker wrote:
> I just went through the standard; I'm not doing anything too untoward that I could see.
>
> I also would really rather not have to re-run these simulations. There has to be a way to get access to those MPI writes with an MPI read, and like I said, other programs have no trouble reading these in and using them (ParaView and EnSight). I would just like to have my own stand-alone reader in Fortran. The writes come out of a Fortran code compiled with MPICH, so there should be a way to read them back in with a Fortran, MPICH-compiled code. Shouldn't there?
If you have confirmed that the data really does live in that file,
with a 244-byte header and three 'ncells_hexa*4'-byte regions of data,
then heck yeah we can read it all back into Fortran.
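(One quick sanity check, before any MPI at all: the file should be exactly
3*80 + 4 + 3*ncells*4 bytes long.  Something like this little sketch would
confirm it -- the path and cell count here are made up, obviously:)

! Sketch only: confirm the file size matches a 244-byte header plus three
! blocks of ncells 4-byte reals.
program check_layout
  implicit none
  integer, parameter :: ncells = 1000000   ! assumed global cell count
  integer(kind=8) :: fsize
  inquire(file='V/V.000002', size=fsize)
  print *, 'actual size   =', fsize
  print *, 'expected size =', 3*80 + 4 + 3_8*ncells*4
end program check_layout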
The quoting's making it hard to read, so here's the code again:
ncells = parallel_sum(ncells_hexa_,ncells)
allocate(buffer3(ncells,3))
openfile=trim(workdir)//'/'//'V/V.000002'
call MPI_FILE_OPEN(comm,openfile,MPI_MODE_RDONLY,mpi_info,iunit,ierr)
! Read header
bsize = 80
call MPI_FILE_READ(iunit,cbuffer,bsize,MPI_CHARACTER,status,ierr)
print*,trim(cbuffer)
bsize = 80
call MPI_FILE_READ(iunit,cbuffer,bsize,MPI_CHARACTER,status,ierr)
print*,trim(cbuffer)
bsize = 1
call MPI_FILE_READ(iunit,ibuffer,bsize,MPI_INTEGER,status,ierr)
print*,ibuffer
bsize = 80
call MPI_FILE_READ(iunit,cbuffer,bsize,MPI_CHARACTER,status,ierr)
print*,trim(cbuffer),ncells
! Read the data
disp = 3*80+4+0*ncells*4
call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_hexa,"native",mpi_info,ierr)
call MPI_FILE_READ_ALL(iunit,buffer3(:,1),ncells,MPI_REAL_SP,status,ierr)
disp = 3*80+4+1*ncells*4
call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_hexa,"native",mpi_info,ierr)
call MPI_FILE_READ_ALL(iunit,buffer3(:,2),ncells,MPI_REAL_SP,status,ierr)
disp = 3*80+4+2*ncells*4
call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_hexa,"native",mpi_info,ierr)
call MPI_FILE_READ_ALL(iunit,buffer3(:,3),ncells,MPI_REAL_SP,status,ierr)
! Close the file
call MPI_FILE_CLOSE(iunit,ierr)
The symptom is that 'buffer3' ends up with all the same values, right?
I think what Rajeev was suggesting is that you not pass 'buffer3(:,1)'
-- the entire array section -- to MPI_File_read_all, but rather pass in
the first element of that (row? column? I have very little Fortran
experience, sorry).
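In code, that suggestion looks something like this (a sketch only, reusing
the declarations from your snippet; the only change is the buffer argument):

! Pass the first element of the column instead of the array section; the MPI
! library then sees the starting address of a contiguous buffer, much as a C
! caller would pass a pointer.
call MPI_FILE_SET_VIEW(iunit,disp,MPI_REAL_SP,fileview_hexa,"native",mpi_info,ierr)
call MPI_FILE_READ_ALL(iunit,buffer3(1,1),ncells,MPI_REAL_SP,status,ierr)
! ...and likewise buffer3(1,2), buffer3(1,3) for the other two components.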
Your use of MPI_FILE_SET_VIEW to move the file pointer to the right
place is uncommon but should be correct.
One way to rationalize this is that many aspects of the Fortran
interface are designed to accommodate "speaking to C".
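For what it's worth, the more common way to express that pattern is to leave
the default file view alone (so offsets are plain byte offsets) and hand each
collective read an explicit offset.  A sketch, assuming every rank really does
read the whole ncells-long block (which is what the buffer3(ncells,3)
allocation suggests) and that fileview_hexa isn't doing any per-rank
partitioning:

integer(kind=MPI_OFFSET_KIND) :: offset
integer :: comp
do comp = 1,3
   ! header is 3*80 + 4 = 244 bytes; each component is ncells 4-byte reals
   offset = 244 + int(comp-1,MPI_OFFSET_KIND)*int(ncells,MPI_OFFSET_KIND)*4
   call MPI_FILE_READ_AT_ALL(iunit,offset,buffer3(1,comp),ncells,MPI_REAL_SP,status,ierr)
end do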
==rob
--
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA