<div dir="ltr">Hi Rob,<div><br></div><div>The liborangefsposix library works when I use standard file path like /mnt/pfs/data instead of pvfs2://mnt/pfs/data.</div><div>I want to use that because it gives better performance in metadata operation which I need for my project.</div><div>I'm not using MPI I/O to access the files on OrangeFS. It is just a standard C interface call (fopen and fclose).</div><div>CH4 and liborangefsposix is the only combination that produces this error. Here is the result matrix I have tried.</div><div><table cellspacing="0" cellpadding="0" dir="ltr" border="1" style="table-layout:fixed;font-size:10pt;font-family:Arial;width:0px;border-collapse:collapse;border:none"><colgroup><col width="100"><col width="118"><col width="100"></colgroup><tbody><tr style="height:21px"><td style="overflow:hidden;padding:2px 3px;vertical-align:bottom;border:1px solid rgb(204,204,204)">MPICH device</td><td style="overflow:hidden;padding:2px 3px;vertical-align:bottom;border:1px solid rgb(204,204,204)">Linking OrangeFS</td><td style="overflow:hidden;padding:2px 3px;vertical-align:bottom;border:1px solid rgb(204,204,204)">Result</td></tr><tr style="height:21px"><td style="overflow:hidden;padding:2px 3px;vertical-align:bottom;border:1px solid rgb(204,204,204)">CH3</td><td style="overflow:hidden;padding:2px 3px;vertical-align:bottom;border:1px solid rgb(204,204,204)">non direct interface</td><td style="overflow:hidden;padding:2px 3px;vertical-align:bottom;border:1px solid rgb(204,204,204)">success</td></tr><tr style="height:21px"><td style="overflow:hidden;padding:2px 3px;vertical-align:bottom;border:1px solid rgb(204,204,204)">CH3</td><td style="overflow:hidden;padding:2px 3px;vertical-align:bottom;border:1px solid rgb(204,204,204)">direct interface</td><td style="overflow:hidden;padding:2px 3px;vertical-align:bottom;border:1px solid rgb(204,204,204)">success</td></tr><tr style="height:21px"><td style="overflow:hidden;padding:2px 3px;vertical-align:bottom;border:1px solid rgb(204,204,204)">CH4</td><td style="overflow:hidden;padding:2px 3px;vertical-align:bottom;border:1px solid rgb(204,204,204)">non direct interface</td><td style="overflow:hidden;padding:2px 3px;vertical-align:bottom;border:1px solid rgb(204,204,204)">success</td></tr><tr style="height:21px"><td style="overflow:hidden;padding:2px 3px;vertical-align:bottom;border:1px solid rgb(204,204,204)">CH4</td><td style="overflow:hidden;padding:2px 3px;vertical-align:bottom;color:rgb(255,0,0);border:1px solid rgb(204,204,204)">direct interface</td><td style="overflow:hidden;padding:2px 3px;vertical-align:bottom;color:rgb(255,0,0);border:1px solid rgb(204,204,204)">no output</td></tr></tbody></table></div><div><br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Thanks<div>Kun</div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Oct 8, 2019 at 10:10 AM Latham, Robert J. <<a href="mailto:robl@mcs.anl.gov">robl@mcs.anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Sun, 2019-10-06 at 12:20 -0500, Kun Feng via discuss wrote:<br>
> Hi Min,<br>
> <br>
> If that is the case, please ignore this email. Nothing is wrong<br>
> without the OrangeFS direct interface. I will try "ch4:ucx". Thank you<br>
> for the info.<br>
<br>
Does the 'pvfs2' driver still work? The liborangefsposix library might<br>
be intercepting system calls MPICH expects to use natively.<br>
<br>
The liborangefsposix library is intended more for non-MPI applications<br>
-- Hadoop workflows, for example. MPICH's pvfs2 driver (pvfs2 being the old name<br>
for OrangeFS) speaks directly to the OrangeFS servers. It also uses a<br>
few optimizations that are not available if MPICH treats OrangeFS like a<br>
traditional UNIX-like file system.<br>
<br>
==rob<br>
<br>
> <br>
> On Sun, Oct 6, 2019 at 10:25 AM Si, Min via discuss <<br>
> <a href="mailto:discuss@mpich.org" target="_blank">discuss@mpich.org</a>> wrote:<br>
> > Hi Kun,<br>
> > <br>
> > Can you please try to reproduce the issue in a simple MPI program<br>
> > which does not use OrangeFS? It is hard for the MPICH community to<br>
> > help when mixing MPI and OrangeFS together, because we are not<br>
> > OrangeFS experts.<br>
> > <br>
> > Besides, for InfiniBand networks, you might want to use `ch4:ucx`<br>
> > instead of `ch4:ofi`. But I do not think it causes the failure in<br>
> > your use case.<br>
> > <br>
> > Best regards,<br>
> > Min<br>
> > <br>
> > On 2019/10/04 12:21, Kun Feng via discuss wrote:<br>
> > > To Whom It May Concern,<br>
> > > <br>
> > > Recently, I switched to the CH4 device in MPICH 3.3.1 for better<br>
> > > network performance over the RoCE network we are using.<br>
> > > I realized that my code fails to run when I use the direct interface<br>
> > > of OrangeFS 2.9.7. It exits without any error; even a simple<br>
> > > helloworld cannot print anything. This happens only when I enable the<br>
> > > direct interface of OrangeFS by linking -lorangefsposix.<br>
> > > Could you please help me with this issue?<br>
> > > Here is some information that might be useful:<br>
> > > Output of ibv_devinfo of 40Gbps Mellanox ConnectX-4 Lx adapter:<br>
> > > hca_id: mlx5_0<br>
> > > transport: InfiniBand (0)<br>
> > > fw_ver: 14.20.1030<br>
> > > node_guid: 248a:0703:0015:a800<br>
> > > sys_image_guid: 248a:0703:0015:a800<br>
> > > vendor_id: 0x02c9<br>
> > > vendor_part_id: 4117<br>
> > > hw_ver: 0x0<br>
> > > board_id: LNV2430110027<br>
> > > phys_port_cnt: 1<br>
> > > port: 1<br>
> > > state: PORT_ACTIVE (4)<br>
> > > max_mtu: 4096 (5)<br>
> > > active_mtu: 1024 (3)<br>
> > > sm_lid: 0<br>
> > > port_lid: 0<br>
> > > port_lmc: 0x00<br>
> > > link_layer: Ethernet<br>
> > > <br>
> > > hca_id: i40iw0<br>
> > > transport: iWARP (1)<br>
> > > fw_ver: 0.2<br>
> > > node_guid: 7cd3:0aef:3da0:0000<br>
> > > sys_image_guid: 7cd3:0aef:3da0:0000<br>
> > > vendor_id: 0x8086<br>
> > > vendor_part_id: 14289<br>
> > > hw_ver: 0x0<br>
> > > board_id: I40IW Board ID<br>
> > > phys_port_cnt: 1<br>
> > > port: 1<br>
> > > state: PORT_ACTIVE (4)<br>
> > > max_mtu: 4096 (5)<br>
> > > active_mtu: 1024 (3)<br>
> > > sm_lid: 0<br>
> > > port_lid: 1<br>
> > > port_lmc: 0x00<br>
> > > link_layer: Ethernet<br>
> > > MPICH 3.3.1 configuration command:<br>
> > > ./configure --with-device=ch4:ofi --with-pvfs2=/home/kfeng/install<br>
> > > --enable-shared --enable-romio --with-file-system=ufs+pvfs2+zoidfs<br>
> > > --enable-fortran=no --with-libfabric=/home/kfeng/install<br>
> > > OrangeFS 2.9.7 configuration command:<br>
> > > ./configure --prefix=/home/kfeng/install --enable-shared --enable-jni<br>
> > > --with-jdk=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-0.el7_5.x86_64<br>
> > > --with-kernel=/usr/src/kernels/3.10.0-862.el7.x86_64<br>
> > > Compile command: mpicc -o ~/hello ~/hello.c<br>
> > > -L/home/kfeng/install/lib -lorangefsposix<br>
> > > The verbose outputs of mpiexec are attached.<br>
> > > <br>
> > > Thanks<br>
> > > Kun<br>
> > > <br>
> > > <br>
> > <br>
> <br>
<br>
</blockquote></div>
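For reference, here is a minimal sketch of the kind of test program I am running. The path /mnt/pfs/data/testfile is a placeholder for whatever file the real test touches on the OrangeFS mount; the build is the mpicc command with -lorangefsposix shown in the quoted thread.<br>
<pre>
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

/* Minimal hello-world plus a plain fopen/fclose on the OrangeFS mount.
 * /mnt/pfs/data/testfile is a placeholder path. With -lorangefsposix the
 * library is expected to intercept these libc calls and talk to OrangeFS
 * directly: no pvfs2:// prefix and no MPI I/O involved. */
int main(int argc, char **argv)
{
    int rank = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* With CH4 + -lorangefsposix, nothing is ever printed. */
    printf("hello from rank %d\n", rank);

    FILE *fp = fopen("/mnt/pfs/data/testfile", "w");
    if (fp == NULL) {
        fprintf(stderr, "rank %d: fopen failed\n", rank);
        MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
    }
    fprintf(fp, "rank %d was here\n", rank);
    fclose(fp);

    MPI_Finalize();
    return 0;
}
</pre>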
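And for comparison, this is roughly what the MPI I/O route Rob describes would look like, going through MPICH's own pvfs2 (ROMIO) driver instead of liborangefsposix. The "pvfs2:" prefix tells ROMIO which file-system driver to use; the path is again a placeholder, and I have not tested this exact variant.<br>
<pre>
#include <stdio.h>
#include <mpi.h>

/* Sketch of the MPI I/O route: ROMIO's pvfs2 driver (enabled above via
 * --with-file-system=ufs+pvfs2+zoidfs) speaks to the OrangeFS servers
 * itself, so no liborangefsposix interception is needed. The "pvfs2:"
 * prefix and the path below are placeholders. */
int main(int argc, char **argv)
{
    MPI_File fh;
    char buf[] = "hello through ROMIO\n";

    MPI_Init(&argc, &argv);
    MPI_File_open(MPI_COMM_WORLD, "pvfs2:/mnt/pfs/data/testfile",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write(fh, buf, (int)sizeof(buf) - 1, MPI_CHAR, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
</pre>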