[mpich-discuss] reading from a read-only directory

Jeff Hammond jeff.science at gmail.com
Fri Nov 29 22:35:23 CST 2013


Personally, I'd address the paranoia by saving a copy of your files
into a read-only directory.  Since I know where you're running, I can
say that we should be able to afford the extra storage.  If not, let
me know and I'll address it.  Note also that NERSC $SCRATCH is not
backed up, so the safe place to store anything valuable is $HOME.  I'm
not sure about project directories, but I imagine there are docs on
this.

It would be nice if read-only MPI-IO worked in a read-only directory,
but I don't know what the technical challenges are.  In the meantime,
could you make just the files read-only while still letting new files
be created in the directory?  Something like the sketch below.
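
A minimal sketch of that permission scheme, using the paths from your
error message (plain POSIX chmod(); untested, so treat it as a guess):

    #include <cstdio>
    #include <sys/stat.h>

    int main()
    {
        // Directory stays writable so ROMIO can create its .shfp temp file.
        if (chmod("all-1", 0755) != 0)
            std::perror("chmod all-1");

        // The data file itself becomes read-only for everyone.
        if (chmod("all-1/slice-17.pentago", 0444) != 0)
            std::perror("chmod all-1/slice-17.pentago");

        return 0;
    }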

Jeff

On Wed, Nov 27, 2013 at 11:32 PM, Geoffrey Irving <irving at naml.us> wrote:
> I got the following error trying to slurp in a large file
> (slice-17.pentago) with MPI_File_read_ordered:
>
> rank 0: pentago/mpi/io.cpp:read_sections:397:
> MPI_File_read_ordered(file,raw.data(),raw.size(),MPI_BYTE,MPI_STATUS_IGNORE)
> ADIOI_CRAY_OPEN(102): Access denied to file all-1/.slice-17.pentago.shfp.670064
>
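> For reference, the call pattern is roughly the following (a compilable
> sketch, not the actual pentago code; the buffer size and communicator
> are placeholders):
>
>     #include <mpi.h>
>     #include <vector>
>
>     int main(int argc, char** argv)
>     {
>         MPI_Init(&argc, &argv);
>         std::vector<char> raw(1 << 20);  // placeholder size
>         MPI_File file;
>         MPI_File_open(MPI_COMM_WORLD, "all-1/slice-17.pentago",
>                       MPI_MODE_RDONLY, MPI_INFO_NULL, &file);
>         // The collective ordered read that fails above.
>         MPI_File_read_ordered(file, raw.data(), (int)raw.size(),
>                               MPI_BYTE, MPI_STATUS_IGNORE);
>         MPI_File_close(&file);
>         MPI_Finalize();
>         return 0;
>     }
>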
> The directory all-1 was read-only due to paranoia over accidentally
> deleting expensively obtained data.  Looking at ADIOI_Shfp_fname in
> the mpich source, it looks like this error is intentional: the shared
> file pointer routines generate temporary files in the same directory
> as the file being read (see the sketch below).  I couldn't find any
> attempt at recovery if the temporary file cannot be created there.
> This behavior doesn't seem to have changed since the beginning of the
> git history.
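>
> Judging from the name in the error, ADIOI_Shfp_fname seems to build
> the temporary file name like this (my reconstruction from the error
> message alone, not the actual ROMIO code):
>
>     #include <cstdio>
>
>     int main()
>     {
>         char shfp[4096];
>         // same directory as the data file: dot prefix, ".shfp.", an id
>         std::snprintf(shfp, sizeof shfp, "%s/.%s.shfp.%d",
>                       "all-1", "slice-17.pentago", 670064);
>         std::puts(shfp);  // prints all-1/.slice-17.pentago.shfp.670064
>         return 0;
>     }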
>
> Is my reading of the code correct: MPI_File_read_ordered can't be used
> on a file in a read-only directory?
>
> I can see the motivation for this in the case of noncollective shared
> routines: since the other processes aren't necessarily doing any MPI
> at the moment, the only way to synchronize is through the file system.
> And because there might be all sorts of different filesystems in
> operation, the easiest way to ensure that we're touching the right one
> is to use the same directory.  I can't imagine any reasonable use of
> the noncollective shared routines, but maybe that's a different
> discussion.  Is this an unfortunate leak between broken routines that
> need questionable trickery and perfectly good routines like
> MPI_File_read_ordered?
>
> Thanks,
> Geoffrey
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss



-- 
Jeff Hammond
jeff.science at gmail.com


