Setting the driver as outlined by Rob fixed the issue with the program. Thanks!
Luke

On Tue, Nov 1, 2016 at 11:52 AM, Rob Latham <robl@mcs.anl.gov> wrote:
On 11/01/2016 11:10 AM, Wei-keng Liao wrote:

Hi, Luke

Could you try the attached program and run it with the "lustre:" prefix added
to the filename? I.e. mpiexec -n 2 a.out lustre:/path/to/Lustre/testfile

This checks whether Intel MPI calls the Lustre driver correctly.

Intel's MPI does driver selection a little differently. Does prefixing the file name work?

http://press3.mcs.anl.gov/romio/2014/06/12/romio-and-intel-mpi/

You request a file system with the I_MPI_EXTRA_FILESYSTEM and
I_MPI_EXTRA_FILESYSTEM_LIST environment variables.
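
With Intel MPI that is typically something along these lines (the file system value and path here are just examples):

    export I_MPI_EXTRA_FILESYSTEM=on
    export I_MPI_EXTRA_FILESYSTEM_LIST=lustre
    mpiexec -n 2 ./a.out /path/to/Lustre/testfile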

==rob

Wei-keng


On Nov 1, 2016, at 10:00 AM, Luke Van Roekel wrote:

Thanks, Rob. No luck with the MPI_Info object, and our computing folks are not willing to set the necessary mount option. Do you know why this is specific to Intel MPI? Open MPI has no issue. Our HPC people thought MPI_File_write_at_all always requires a file lock, but Open MPI seems to be fine.

On Tue, Nov 1, 2016 at 8:42 AM, Rob Latham <robl@mcs.anl.gov> wrote:

On 10/31/2016 11:47 PM, Luke Van Roekel wrote:

Hello,

I've been trying to compile and run a very simple MPI test on our
cluster with Intel MPI and Open MPI. The test program is below. When I
run with Open MPI everything is fine. When I run with Intel MPI, I
receive the following error:

This requires fcntl(2) to be implemented. As of 8/25/2011 it is not.
Generic MPICH Message: File locking failed in ADIOI_Set_lock(fd 6,cmd
F_SETLKW/7,type F_WRLCK/1,whence 0) with return value FFFFFFFF and errno 26.

- If the file system is NFS, you need to use NFS version 3, ensure that
  the lockd daemon is running on all the machines, and mount the directory
  with the 'noac' option (no attribute caching).

- If the file system is LUSTRE, ensure that the directory is mounted
  with the 'flock' option.

ADIOI_Set_lock:: Function not implemented

ADIOI_Set_lock:offset 0, length 4

Your site administrator needs to enable fcntl locking with the 'flock' mount option.
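
Enabling it typically amounts to remounting the Lustre client with flock, roughly like the line below (the MGS address, filesystem name, and mount point are placeholders), or adding 'flock' to the options field of the corresponding /etc/fstab entry:

    mount -t lustre -o flock mgs@tcp0:/lustrefs /mnt/lustre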

You can try disabling data sieving: create an MPI_Info object and add the key "romio_ds_write" with the value "disable".
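
In the test program below that would look something like this; only the open call changes, and "romio_ds_write"/"disable" is the ROMIO hint named above:

    MPI_Info info;
    MPI_Info_create(&info);
    /* ask ROMIO to skip data sieving on writes */
    MPI_Info_set(info, "romio_ds_write", "disable");

    err = MPI_File_open(MPI_COMM_WORLD, argv[1], MPI_MODE_CREATE | MPI_MODE_RDWR,
                        info, &fh);

    MPI_Info_free(&info);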

==rob

Any thoughts on how to proceed? The size/format of the file read in
seems to make no difference.

Regards,
Luke

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int buf = 0, err;
    MPI_File fh;
    MPI_Status status;

    MPI_Init(&argc, &argv);

    if (argc != 2) {
        printf("Usage: %s filename\n", argv[0]);
        MPI_Finalize();
        return 1;
    }

    /* collective open of the file named on the command line */
    err = MPI_File_open(MPI_COMM_WORLD, argv[1], MPI_MODE_CREATE | MPI_MODE_RDWR,
                        MPI_INFO_NULL, &fh);
    if (err != MPI_SUCCESS) printf("Error: MPI_File_open()\n");

    /* collective write of one int from each rank at its individual file pointer */
    err = MPI_File_write_all(fh, &buf, 1, MPI_INT, &status);
    if (err != MPI_SUCCESS) printf("Error: MPI_File_write_all()\n");

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

_______________________________________________
discuss mailing list     discuss@mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss