From m.geimer at fz-juelich.de Sat Jun 1 09:21:35 2013 From: m.geimer at fz-juelich.de (Markus Geimer) Date: Sat, 1 Jun 2013 16:21:35 +0200 Subject: [mpich-discuss] Problems running MPICH jobs under SLURM Message-ID: <51AA036F.2030204@fz-juelich.de> Dear MPICH developers, We are experiencing some problems getting MPICH jobs to run under SLURM (Debian package slurm-llnl 2.3.4-2+b1) on our small test cluster. When starting an MPI job with more than one rank, the program crashes immediately with the following output: ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- *** glibc detected *** ./hello: double free or corruption (fasttop): 0x00000000014c9680 *** ======= Backtrace: ========= /lib/x86_64-linux-gnu/libc.so.6(+0x76d76)[0x7f4ee51fed76] /lib/x86_64-linux-gnu/libc.so.6(cfree+0x6c)[0x7f4ee5203aac] /opt/mpich/3.0.4-gcc/lib/libmpich.so.10(MPIDI_Populate_vc_node_ids+0x3f9)[0x7f4ee5dec5d9] /opt/mpich/3.0.4-gcc/lib/libmpich.so.10(MPID_Init+0x136)[0x7f4ee5de6dd6] /opt/mpich/3.0.4-gcc/lib/libmpich.so.10(MPIR_Init_thread+0x23f)[0x7f4ee5e9cf7f] /opt/mpich/3.0.4-gcc/lib/libmpich.so.10(MPI_Init+0xae)[0x7f4ee5e9c90e] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd)[0x7fa889cddead] ./hello[0x400799] ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- Single rank jobs run fine, but they are obviously of little interest ;-) The test program is a simple 'hello world' printing the rank, started using a minimal batch script: ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- #!/bin/sh #SBATCH -n 4 mpiexec ./hello ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- We have tried with MPICH 3.0.4 as well as MPICH2 1.5, configured using only --prefix=... --enable-shared --enable-debuginfo Both are showing the same symptoms. MPICH2 1.4.1p1, however, works without problems. Any idea what's going wrong in the newer versions? Thanks, Markus -- Dr. Markus Geimer Juelich Supercomputing Centre Institute for Advanced Simulation Forschungszentrum Juelich GmbH 52425 Juelich, Germany Phone: +49-2461-61-1773 Fax: +49-2461-61-6656 E-mail: m.geimer at fz-juelich.de WWW: http://www.fz-juelich.de/jsc/ ------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------ Forschungszentrum Juelich GmbH 52425 Juelich Sitz der Gesellschaft: Juelich Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498 Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender), Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt, Prof. Dr. Sebastian M. Schmidt ------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------ From jeff.science at gmail.com Sat Jun 1 10:15:40 2013 From: jeff.science at gmail.com (Jeff Hammond) Date: Sat, 1 Jun 2013 10:15:40 -0500 Subject: [mpich-discuss] Problems running MPICH jobs under SLURM In-Reply-To: <51AA036F.2030204@fz-juelich.de> References: <51AA036F.2030204@fz-juelich.de> Message-ID: <-2104451799775482129@unknownmsgid> Have you tried MPICH 3.0.4? Hydra has been improved a great deal since the 2.4 release. 
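For reference, the 'hello world' test program from the original report is not included in the thread; a minimal MPI program of the kind described -- a sketch, not the actual source -- is:

----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< -----

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* the reported crash occurs in here */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process' rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 -----

Built with mpicc and launched through the batch script above, each rank prints one line; the backtrace places the failure inside MPI_Init, before any user code runs.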
Jeff Sent from my iPhone On Jun 1, 2013, at 9:21 AM, Markus Geimer wrote: > Dear MPICH developers, > > We are experiencing some problems getting MPICH jobs to run under > SLURM (Debian package slurm-llnl 2.3.4-2+b1) on our small test > cluster. When starting an MPI job with more than one rank, the > program crashes immediately with the following output: > > ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- > > *** glibc detected *** ./hello: double free or corruption (fasttop): > 0x00000000014c9680 *** > ======= Backtrace: ========= > /lib/x86_64-linux-gnu/libc.so.6(+0x76d76)[0x7f4ee51fed76] > /lib/x86_64-linux-gnu/libc.so.6(cfree+0x6c)[0x7f4ee5203aac] > /opt/mpich/3.0.4-gcc/lib/libmpich.so.10(MPIDI_Populate_vc_node_ids+0x3f9)[0x7f4ee5dec5d9] > /opt/mpich/3.0.4-gcc/lib/libmpich.so.10(MPID_Init+0x136)[0x7f4ee5de6dd6] > /opt/mpich/3.0.4-gcc/lib/libmpich.so.10(MPIR_Init_thread+0x23f)[0x7f4ee5e9cf7f] > /opt/mpich/3.0.4-gcc/lib/libmpich.so.10(MPI_Init+0xae)[0x7f4ee5e9c90e] > /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd)[0x7fa889cddead] > ./hello[0x400799] > > ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- > > Single rank jobs run fine, but they are obviously of little interest ;-) > The test program is a simple 'hello world' printing the rank, started > using a minimal batch script: > > ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- > > #!/bin/sh > #SBATCH -n 4 > mpiexec ./hello > > ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- > > We have tried with MPICH 3.0.4 as well as MPICH2 1.5, configured using > only > > --prefix=... --enable-shared --enable-debuginfo > > Both are showing the same symptoms. MPICH2 1.4.1p1, however, works > without problems. Any idea what's going wrong in the newer versions? > > Thanks, > Markus > > -- > Dr. Markus Geimer > Juelich Supercomputing Centre > Institute for Advanced Simulation > Forschungszentrum Juelich GmbH > 52425 Juelich, Germany > > Phone: +49-2461-61-1773 > Fax: +49-2461-61-6656 > E-mail: m.geimer at fz-juelich.de > WWW: http://www.fz-juelich.de/jsc/ > > > ------------------------------------------------------------------------------------------------ > ------------------------------------------------------------------------------------------------ > Forschungszentrum Juelich GmbH > 52425 Juelich > Sitz der Gesellschaft: Juelich > Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498 > Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher > Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender), > Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt, > Prof. Dr. Sebastian M. Schmidt > ------------------------------------------------------------------------------------------------ > ------------------------------------------------------------------------------------------------ > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss From m.geimer at fz-juelich.de Sat Jun 1 10:29:16 2013 From: m.geimer at fz-juelich.de (Markus Geimer) Date: Sat, 1 Jun 2013 17:29:16 +0200 Subject: [mpich-discuss] Problems running MPICH jobs under SLURM In-Reply-To: <-2104451799775482129@unknownmsgid> References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> Message-ID: <51AA134C.2030407@fz-juelich.de> Jeff, > Have you tried MPICH 3.0.4? Hydra has been improved a great deal since > the 2.4 release. 
As I wrote: We first tried the latest and greatest, i.e., MPICH 3.0.4.
Only afterwards did we try MPICH2 1.5, to see whether previous versions
behave the same. Unfortunately, both show the same behavior.

The predecessor version MPICH2 1.4.1p1, however, works. Therefore we
assume that some bug sneaked in between 1.4.1p1 and 1.5.

Markus

--
Dr. Markus Geimer
Juelich Supercomputing Centre
Institute for Advanced Simulation
Forschungszentrum Juelich GmbH
52425 Juelich, Germany

Phone: +49-2461-61-1773
Fax: +49-2461-61-6656
E-mail: m.geimer at fz-juelich.de
WWW: http://www.fz-juelich.de/jsc/

------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt
------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------

From thakur at mcs.anl.gov Sat Jun 1 14:39:31 2013
From: thakur at mcs.anl.gov (Rajeev Thakur)
Date: Sat, 1 Jun 2013 14:39:31 -0500
Subject: [mpich-discuss] Problems running MPICH jobs under SLURM
In-Reply-To: <51AA134C.2030407@fz-juelich.de>
References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de>
Message-ID:

I have created a ticket for it: http://trac.mpich.org/projects/mpich/ticket/1871

Rajeev

On Jun 1, 2013, at 10:29 AM, Markus Geimer wrote:

> Jeff,
>
>> Have you tried MPICH 3.0.4? Hydra has been improved a great deal since
>> the 2.4 release.
>
> As I wrote: We first tried the latest and greatest, i.e., MPICH 3.0.4.
> Only afterwards did we try MPICH2 1.5, to see whether previous versions
> behave the same. Unfortunately, both show the same behavior.
>
> The predecessor version MPICH2 1.4.1p1, however, works. Therefore we
> assume that some bug sneaked in between 1.4.1p1 and 1.5.
>
> Markus
>
> --
> Dr. Markus Geimer
> Juelich Supercomputing Centre
> Institute for Advanced Simulation
> Forschungszentrum Juelich GmbH
> 52425 Juelich, Germany
>
> Phone: +49-2461-61-1773
> Fax: +49-2461-61-6656
> E-mail: m.geimer at fz-juelich.de
> WWW: http://www.fz-juelich.de/jsc/
>
> ------------------------------------------------------------------------------------------------
> ------------------------------------------------------------------------------------------------
> Forschungszentrum Juelich GmbH
> 52425 Juelich
> Sitz der Gesellschaft: Juelich
> Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
> Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
> Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender),
> Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
> Prof. Dr. Sebastian M. Schmidt
> ------------------------------------------------------------------------------------------------
> ------------------------------------------------------------------------------------------------
> _______________________________________________
> discuss mailing list discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

From balaji at mcs.anl.gov Sat Jun 1 17:06:23 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Sat, 01 Jun 2013 17:06:23 -0500
Subject: [mpich-discuss] Problems running MPICH jobs under SLURM
In-Reply-To:
References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de>
Message-ID: <51AA705F.3090500@mcs.anl.gov>

Thanks. I'll look into this.

Markus: does this happen only with SLURM or can you reproduce this without SLURM as well?

-- Pavan

On 06/01/2013 02:39 PM, Rajeev Thakur wrote:
> I have created a ticket for it: http://trac.mpich.org/projects/mpich/ticket/1871
>
> Rajeev
>
> On Jun 1, 2013, at 10:29 AM, Markus Geimer wrote:
>
>> Jeff,
>>
>>> Have you tried MPICH 3.0.4? Hydra has been improved a great deal since
>>> the 2.4 release.
>>
>> As I wrote: We first tried the latest and greatest, i.e., MPICH 3.0.4.
>> Only afterwards did we try MPICH2 1.5, to see whether previous versions
>> behave the same. Unfortunately, both show the same behavior.
>>
>> The predecessor version MPICH2 1.4.1p1, however, works. Therefore we
>> assume that some bug sneaked in between 1.4.1p1 and 1.5.
>>
>> Markus
>>
>> --
>> Dr. Markus Geimer
>> Juelich Supercomputing Centre
>> Institute for Advanced Simulation
>> Forschungszentrum Juelich GmbH
>> 52425 Juelich, Germany
>>
>> Phone: +49-2461-61-1773
>> Fax: +49-2461-61-6656
>> E-mail: m.geimer at fz-juelich.de
>> WWW: http://www.fz-juelich.de/jsc/
>>
>> ------------------------------------------------------------------------------------------------
>> ------------------------------------------------------------------------------------------------
>> Forschungszentrum Juelich GmbH
>> 52425 Juelich
>> Sitz der Gesellschaft: Juelich
>> Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
>> Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
>> Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender),
>> Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
>> Prof. Dr. Sebastian M. Schmidt
>> ------------------------------------------------------------------------------------------------
>> ------------------------------------------------------------------------------------------------
>> _______________________________________________
>> discuss mailing list discuss at mpich.org
>> To manage subscription options or unsubscribe:
>> https://lists.mpich.org/mailman/listinfo/discuss
>
> _______________________________________________
> discuss mailing list discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji

From balaji at mcs.anl.gov Sat Jun 1 17:12:03 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Sat, 01 Jun 2013 17:12:03 -0500
Subject: [mpich-discuss] weird behavior with mpiexe (3.0.4)
In-Reply-To:
References:
Message-ID: <51AA71B3.6090906@mcs.anl.gov>

We certainly want to give better error messages where possible.
I've created a ticket for it: https://trac.mpich.org/projects/mpich/ticket/1872

-- Pavan

On 05/29/2013 04:37 PM, Edscott Wilson wrote:
>
> 2013/5/29 Jeff Hammond
>
>>> Wouldn't a message such as "`pwd` directory does not exist on
>>> node velascoj" be more illustrative?
>>
>> Yes. However, the set of improper uses of MPI that could generate
>> helpful error messages is uncountable. Do you not think it is a good
>> use of finite developer effort to implement an infinitesimal fraction
>> of such warnings? There has to be a minimum requirement placed upon
>> the user. I personally think that it should include running in a
>> directory that actually exists.
>
> Certainly! But then again, some developer must have thought it a good
> idea, since under different circumstances, I get:
>
> /bin/bash -c mpiexec -n 1 -hosts tauro,velascoj gmandel
> [proxy:0:0 at tauro] launch_procs (./pm/pmiserv/pmip_cb.c:648): unable to
> change wdir to /tmp/edscott/mnt/tauro-home/GIT/gmandel (No such file or
> directory)
> [proxy:0:0 at tauro] HYD_pmcd_pmip_control_cmd_cb
> (./pm/pmiserv/pmip_cb.c:893): launch_procs returned error
> [proxy:0:0 at tauro] HYDT_dmxu_poll_wait_for_event
> (./tools/demux/demux_poll.c:77): callback returned error status
> [proxy:0:0 at tauro] main (./pm/pmiserv/pmip.c:206): demux engine error
> waiting for event
> [mpiexec at velascoj] control_cb (./pm/pmiserv/pmiserv_cb.c:202): assert
> (!closed) failed
> [mpiexec at velascoj] HYDT_dmxu_poll_wait_for_event
> (./tools/demux/demux_poll.c:77): callback returned error status
> [mpiexec at velascoj] HYD_pmci_wait_for_completion
> (./pm/pmiserv/pmiserv_pmci.c:197): error waiting for event
> [mpiexec at velascoj] main (./ui/mpich/mpiexec.c:331): process manager
> error waiting for completion
>
> Which is inconsistent with the previous behavior. Anyways, it's no big deal.
>
> BTW, would you happen to know why a process which is started with
> MPI_Comm_spawn will go into what seems like an active wait after
> MPI_Comm_disconnect and MPI_Finalize have been called? These spawned
> processes will hog the CPU until the parent process exits. Curiously
> enough, this behavior is not mirrored in Open MPI.
>
> Edscott
>
> -------------------------------
> Dr. Edscott Wilson Garcia
> Applied Mathematics and Computing
> Mexican Petroleum Institute
>
> _______________________________________________
> discuss mailing list discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji

From m.geimer at fz-juelich.de Sun Jun 2 04:31:34 2013
From: m.geimer at fz-juelich.de (Markus Geimer)
Date: Sun, 2 Jun 2013 11:31:34 +0200
Subject: [mpich-discuss] Problems running MPICH jobs under SLURM
In-Reply-To: <51AA705F.3090500@mcs.anl.gov>
References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov>
Message-ID: <51AB10F6.9070109@fz-juelich.de>

Hi Pavan,

> Markus: does this happen only with SLURM or can you reproduce this
> without SLURM as well?

It seems to happen only when hydra queries the host list from SLURM.
I tried executing two different setups on the head node, both listing
two compute nodes in a hostfile:

1) mpiexec -f hostfile -n 4 ./hello

Since the SLURM PAM module was disabled, SSH login to the nodes
was possible and the job ran as expected, with two ranks on each
node.
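The hostfile passed via -f here is a plain Hydra machine file: one host per line, optionally suffixed with ':N' to allow up to N processes on that host. A plausible two-node file for this test -- node names are hypothetical -- would be:

----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< -----

node01:2
node02:2

----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 -----

With -n 4, Hydra fills the listed hosts in order up to their ':N' limits, which yields the two-ranks-per-node placement described above.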
SLURM's sinfo showed both nodes as state 'idle' and the
HYDRA_DEBUG output said '--rmk user --launcher ssh'.

2) mpiexec -f hostfile -rmk slurm -n 4 ./hello

This job ran as well, with the nodes allocated via SLURM and
shown as 'alloc'. Debug output: '--rmk slurm --launcher slurm'.

Specifying a hostfile with mpiexec within a SLURM batch job also
worked, but that's obviously not what you normally want to do...

Hope this helps. If there is anything else I should try out to help
tracking down the issue, please let me know.

Thanks,
Markus

--
Dr. Markus Geimer
Juelich Supercomputing Centre
Institute for Advanced Simulation
Forschungszentrum Juelich GmbH
52425 Juelich, Germany

Phone: +49-2461-61-1773
Fax: +49-2461-61-6656
E-mail: m.geimer at fz-juelich.de
WWW: http://www.fz-juelich.de/jsc/

------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt
------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------

From johnd9886 at gmail.com Sun Jun 2 11:43:28 2013
From: johnd9886 at gmail.com (john donald)
Date: Sun, 2 Jun 2013 18:43:28 +0200
Subject: [mpich-discuss] ckpoint-num error
Message-ID:

I used mpiexec with checkpointing and created two checkpoint files:

mpiexec -ckpointlib blcr -ckpoint-prefix /home/john/ckpts/app.ckpoint -ckpoint-interval 4 -n 4 /home/john/app/md

context-num0-0-0
context-num1-0-0

I am trying to make a restart:

mpiexec -ckpointlib blcr -ckpoint-prefix /home/john/ckpts/app.ckpoint -n 4 -ckpoint-num 1

but nothing happened; it just hangs. I also tried:

mpiexec -ckpointlib blcr -ckpoint-prefix /home/john/ckpts/app.ckpoint -n 4 -ckpoint-num 0-0-0

which also hangs.

From balaji at mcs.anl.gov Sun Jun 2 12:35:13 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Sun, 02 Jun 2013 12:35:13 -0500
Subject: [mpich-discuss] Problems running MPICH jobs under SLURM
In-Reply-To: <51AB10F6.9070109@fz-juelich.de>
References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov> <51AB10F6.9070109@fz-juelich.de>
Message-ID: <51AB8251.6030007@mcs.anl.gov>

Hi Markus,

On 06/02/2013 04:31 AM, Markus Geimer wrote:
>> Markus: does this happen only with SLURM or can you reproduce this
>> without SLURM as well?
>
> It seems to happen only when hydra queries the host list from SLURM.
> I tried executing two different setups on the head node, both listing
> two compute nodes in a hostfile:
>
> 1) mpiexec -f hostfile -n 4 ./hello
>
> Since the SLURM PAM module was disabled, SSH login to the nodes
> was possible and the job ran as expected, with two ranks on each
> node. SLURM's sinfo showed both nodes as state 'idle' and the
> HYDRA_DEBUG output said '--rmk user --launcher ssh'.

That sounds good.

> 2) mpiexec -f hostfile -rmk slurm -n 4 ./hello
>
> This job ran as well, with the nodes allocated via SLURM and
> shown as 'alloc'.
Debug output: '--rmk slurm --launcher slurm'. > > Specifying a hostfile with mpiexec within a SLURM batch job also > worked, but that's obviously not what you normally want to do... Hmm. -f hostfile and -rmk slurm are contradictory options, since both are just ways to get the host list. This should throw an error. I'll add that into Hydra. > Hope this helps. If there is anything else I should try out to help > tracking down the issue, please let me know. I'm still trying to find the exact option that fails. Can you try the following: # Use the slurm launcher, and user-specified resources % mpiexec -f hostfile -launcher slurm -n 4 ./hello # Use the ssh launcher, and user-specified resources % mpiexec -f hostfile -launcher ssh -n 4 ./hello # Use the ssh launcher, and slurm-specified resources % mpiexec -rmk slurm -launcher ssh -n 4 ./hello # Use the slurm launcher, and slurm-specified resources % mpiexec -rmk slurm -launcher slurm -n 4 ./hello At least one of them should throw the error you reported. -- Pavan -- Pavan Balaji http://www.mcs.anl.gov/~balaji From m.geimer at fz-juelich.de Sun Jun 2 14:46:12 2013 From: m.geimer at fz-juelich.de (Markus Geimer) Date: Sun, 2 Jun 2013 21:46:12 +0200 Subject: [mpich-discuss] Problems running MPICH jobs under SLURM In-Reply-To: <51AB8251.6030007@mcs.anl.gov> References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov> <51AB10F6.9070109@fz-juelich.de> <51AB8251.6030007@mcs.anl.gov> Message-ID: <51ABA104.1070502@fz-juelich.de> Hi Pavan, > Can you try the following: > > # Use the slurm launcher, and user-specified resources > % mpiexec -f hostfile -launcher slurm -n 4 ./hello Works. > # Use the ssh launcher, and user-specified resources > % mpiexec -f hostfile -launcher ssh -n 4 ./hello Works. > # Use the ssh launcher, and slurm-specified resources > % mpiexec -rmk slurm -launcher ssh -n 4 ./hello Fails. > # Use the slurm launcher, and slurm-specified resources > % mpiexec -rmk slurm -launcher slurm -n 4 ./hello Fails. So it seems as if the SLURM-provided host list is somehow causing trouble... Markus -- Dr. Markus Geimer Juelich Supercomputing Centre Institute for Advanced Simulation Forschungszentrum Juelich GmbH 52425 Juelich, Germany Phone: +49-2461-61-1773 Fax: +49-2461-61-6656 E-Mail: m.geimer at fz-juelich.de WWW: http://www.fz-juelich.de/ias/jsc ------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------ Forschungszentrum Juelich GmbH 52425 Juelich Sitz der Gesellschaft: Juelich Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498 Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender), Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt, Prof. Dr. Sebastian M. 
Schmidt ------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------ From balaji at mcs.anl.gov Sun Jun 2 21:09:37 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Sun, 02 Jun 2013 21:09:37 -0500 Subject: [mpich-discuss] Problems running MPICH jobs under SLURM In-Reply-To: <51ABA104.1070502@fz-juelich.de> References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov> <51AB10F6.9070109@fz-juelich.de> <51AB8251.6030007@mcs.anl.gov> <51ABA104.1070502@fz-juelich.de> Message-ID: <51ABFAE1.8020808@mcs.anl.gov> Markus, I'm not able to reproduce this at all. Looking through the code didn't give any information either. Just to make sure, are these the only configure options you are using: --prefix=... --enable-shared --enable-debuginfo Also, can you run mpiexec with the -verbose option for one of the failing tests (probably just mpiexec -n 4 ./hello) and send me the output? -- Pavan On 06/02/2013 02:46 PM, Markus Geimer wrote: > Hi Pavan, > >> Can you try the following: >> >> # Use the slurm launcher, and user-specified resources >> % mpiexec -f hostfile -launcher slurm -n 4 ./hello > > Works. > >> # Use the ssh launcher, and user-specified resources >> % mpiexec -f hostfile -launcher ssh -n 4 ./hello > > Works. > >> # Use the ssh launcher, and slurm-specified resources >> % mpiexec -rmk slurm -launcher ssh -n 4 ./hello > > Fails. > >> # Use the slurm launcher, and slurm-specified resources >> % mpiexec -rmk slurm -launcher slurm -n 4 ./hello > > Fails. > > So it seems as if the SLURM-provided host list is somehow causing > trouble... > > Markus > > -- > Dr. Markus Geimer > Juelich Supercomputing Centre > Institute for Advanced Simulation > Forschungszentrum Juelich GmbH > 52425 Juelich, Germany > > Phone: +49-2461-61-1773 > Fax: +49-2461-61-6656 > E-Mail: m.geimer at fz-juelich.de > WWW: http://www.fz-juelich.de/ias/jsc > > > ------------------------------------------------------------------------------------------------ > ------------------------------------------------------------------------------------------------ > Forschungszentrum Juelich GmbH > 52425 Juelich > Sitz der Gesellschaft: Juelich > Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498 > Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher > Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender), > Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt, > Prof. Dr. Sebastian M. 
Schmidt > ------------------------------------------------------------------------------------------------ > ------------------------------------------------------------------------------------------------ > -- Pavan Balaji http://www.mcs.anl.gov/~balaji From balaji at mcs.anl.gov Mon Jun 3 08:30:31 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Mon, 03 Jun 2013 08:30:31 -0500 Subject: [mpich-discuss] Problems running MPICH jobs under SLURM In-Reply-To: <51AC3E6E.6090305@fz-juelich.de> References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov> <51AB10F6.9070109@fz-juelich.de> <51AB8251.6030007@mcs.anl.gov> <51ABA104.1070502@fz-juelich.de> <51ABFAE1.8020808@mcs.anl.gov> <51AC3E6E.6090305@fz-juelich.de> Message-ID: <51AC9A77.5040609@mcs.anl.gov> On 06/03/2013 01:57 AM, Markus Geimer wrote: > (reply intentionally not sent to the list -- I don't like such logs > to show up in mailing list archives...) Ok, I'm cc'ing discuss at mpich.org back. >> Just to make sure, are these the only configure options you are using: >> >> --prefix=... --enable-shared --enable-debuginfo > > Yes, these are the only options (besides explicitly specifying the > GNU compiler, but this shouldn't do any harm). Please find the full > configure log attached. Thanks. >> Also, can you run mpiexec with the -verbose option for one of the >> failing tests (probably just mpiexec -n 4 ./hello) and send me the output? > > Output attached. Hmm. The double free error seems to be coming from the executable, rather than from mpiexec or the proxy. So we might be looking in the wrong place. 1. Can you run your application processes using "ddd" or some other debugger to see where the double free is coming from? You might have to build mpich with --enable-g=dbg to get the debug symbols in. 2. Can you send me the output with the ssh launcher as well? I want to see if there are any critical differences in the environment variables being propagated (e.g., LD_LIBRARY_PATH/LD_PRELOAD) that might affect shared library builds. Feel free to send me the logs off-list. Thanks, -- Pavan -- Pavan Balaji http://www.mcs.anl.gov/~balaji From m.geimer at fz-juelich.de Mon Jun 3 09:35:48 2013 From: m.geimer at fz-juelich.de (Markus Geimer) Date: Mon, 3 Jun 2013 16:35:48 +0200 Subject: [mpich-discuss] Problems running MPICH jobs under SLURM In-Reply-To: <51AC9A77.5040609@mcs.anl.gov> References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov> <51AB10F6.9070109@fz-juelich.de> <51AB8251.6030007@mcs.anl.gov> <51ABA104.1070502@fz-juelich.de> <51ABFAE1.8020808@mcs.anl.gov> <51AC3E6E.6090305@fz-juelich.de> <51AC9A77.5040609@mcs.anl.gov> Message-ID: <51ACA9C4.4080203@fz-juelich.de> Pavan, > 1. Can you run your application processes using "ddd" or some other > debugger to see where the double free is coming from? You might have to > build mpich with --enable-g=dbg to get the debug symbols in. 
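For anyone trying to reproduce such a report: a per-rank backtrace like the one below can be captured without a parallel debugger. The following recipes are generic sketches (core-file names and locations vary by system), not commands taken from this thread:

----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< -----

# let the aborting rank write a core file, then inspect it post mortem
ulimit -c unlimited
mpiexec -n 4 ./hello
gdb ./hello core           # then type 'bt' to print the backtrace

# alternatively, run each rank under its own debugger (requires X11)
mpiexec -n 4 xterm -e gdb ./hello

----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 -----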
Here is the full stack backtrace:

----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< -----

#0 0x00007ffff6deb475 in *__GI_raise (sig=) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1 0x00007ffff6dee6f0 in *__GI_abort () at abort.c:92
#2 0x00007ffff6e2652b in __libc_message (do_abort=, fmt=) at ../sysdeps/unix/sysv/linux/libc_fatal.c:189
#3 0x00007ffff6e2fd76 in malloc_printerr (action=3, str=0x7ffff6f081e0 "double free or corruption (fasttop)", ptr=) at malloc.c:6283
#4 0x00007ffff6e34aac in *__GI___libc_free (mem=) at malloc.c:3738
#5 0x00007ffff7a1d5d9 in populate_ids_from_mapping (did_map=, num_nodes=, mapping=, pg=) at src/mpid/ch3/src/mpid_vc.c:1063
#6 MPIDI_Populate_vc_node_ids (pg=pg at entry=0x604910, our_pg_rank=our_pg_rank at entry=0) at src/mpid/ch3/src/mpid_vc.c:1193
#7 0x00007ffff7a17dd6 in MPID_Init (argc=argc at entry=0x7fffffffd97c, argv=argv at entry=0x7fffffffd970, requested=requested at entry=0, provided=provided at entry=0x7fffffffd8e8, has_args=has_args at entry=0x7fffffffd8e0, has_env=has_env at entry=0x7fffffffd8e4) at src/mpid/ch3/src/mpid_init.c:156
#8 0x00007ffff7acdf7f in MPIR_Init_thread (argc=argc at entry=0x7fffffffd97c, argv=argv at entry=0x7fffffffd970, required=required at entry=0, provided=provided at entry=0x7fffffffd944) at src/mpi/init/initthread.c:431
#9 0x00007ffff7acd90e in PMPI_Init (argc=0x7fffffffd97c, argv=0x7fffffffd970) at src/mpi/init/init.c:136
#10 0x000000000040086d in main ()

----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 -----

> 2. Can you send me the output with the ssh launcher as well?

See mail sent off-list.

Thanks,
Markus

--
Dr. Markus Geimer
Juelich Supercomputing Centre
Institute for Advanced Simulation
Forschungszentrum Juelich GmbH
52425 Juelich, Germany

Phone: +49-2461-61-1773
Fax: +49-2461-61-6656
E-mail: m.geimer at fz-juelich.de
WWW: http://www.fz-juelich.de/jsc/

------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt
------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------
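The abort above happens at the free() call inside populate_ids_from_mapping() (frame #5), which means the block being freed there had already been freed, or its heap metadata corrupted, earlier. As a generic, self-contained illustration of this class of bug -- hypothetical C, not MPICH's actual code; the helper name merely echoes frame #5 -- consider:

----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< -----

#include <stdlib.h>

/* Hypothetical helper that frees its input on an error path. */
static int parse_mapping(char *mapping)
{
    if (mapping[0] == '\0') {
        free(mapping);      /* first free, hidden inside the helper */
        return -1;
    }
    return 0;
}

int main(void)
{
    char *mapping = calloc(1, 16);  /* empty string -> takes the error path */
    if (mapping == NULL)
        return 1;
    parse_mapping(mapping);
    free(mapping);          /* second free: glibc aborts with a
                               "double free or corruption" message */
    return 0;
}

----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 -----

Compiling and running this reproduces the same kind of glibc abort (the exact message wording varies across glibc versions).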
From bruno.guerraz at orange.com Tue Jun 4 05:10:49 2013
From: bruno.guerraz at orange.com (bruno.guerraz at orange.com)
Date: Tue, 4 Jun 2013 10:10:49 +0000
Subject: [mpich-discuss] MPICH2 logging on windows
Message-ID: <17300_1370340650_51ADBD2A_17300_255_1_EF57EDE3FD880F448F471B5B3416152206730E@PEXCVZYM12.corporate.adroot.infra.ftgroup>

Hi,

I am trying to use MPE logging with my MPI program on Windows. It works fine with the cpi example. I have only linked my program with mpe.lib and run on a single machine with the -localonly -log options. I use dynamic processes via MPI_Comm_spawn. The program crashes in MPI_Intercomm_merge with the log

-------------------------------------------------------
.\src\logging\src\clog_commset.c:CLOG_CommSet_get_IDs() - PMPI_Comm_get_attr() fails!
Backtrace of the callstack at rank 0:
unable to read the cmd header on the pmi context, Error = -1

I really appreciate any help

Bruno

_________________________________________________________________________________________________________________________

Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration, France Telecom - Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci.

This message and its attachments may contain confidential or privileged information that may be protected by law; they should not be distributed, used or copied without authorisation. If you have received this email in error, please notify the sender and delete this message and its attachments. As emails may be altered, France Telecom - Orange is not liable for messages that have been modified, changed or falsified. Thank you.

From jayesh at mcs.anl.gov Tue Jun 4 09:47:55 2013
From: jayesh at mcs.anl.gov (Jayesh Krishna)
Date: Tue, 4 Jun 2013 09:47:55 -0500 (CDT)
Subject: [mpich-discuss] MPICH2 logging on windows
In-Reply-To: <17300_1370340650_51ADBD2A_17300_255_1_EF57EDE3FD880F448F471B5B3416152206730E@PEXCVZYM12.corporate.adroot.infra.ftgroup>
Message-ID: <710851674.1970050.1370357275265.JavaMail.root@mcs.anl.gov>

Hi,

Can you send us a simple test case (extracted from your code) that fails?

Regards,
Jayesh

----- Original Message -----
From: "bruno guerraz"
To: discuss at mpich.org
Sent: Tuesday, June 4, 2013 5:10:49 AM
Subject: [mpich-discuss] MPICH2 logging on windows

Hi,

I am trying to use MPE logging with my MPI program on Windows. It works fine with the cpi example. I have only linked my program with mpe.lib and run on a single machine with the -localonly -log options. I use dynamic processes via MPI_Comm_spawn. The program crashes in MPI_Intercomm_merge with the log

-------------------------------------------------------
.\src\logging\src\clog_commset.c:CLOG_CommSet_get_IDs() - PMPI_Comm_get_attr() fails!
Backtrace of the callstack at rank 0:
unable to read the cmd header on the pmi context, Error = -1

I really appreciate any help

Bruno

_________________________________________________________________________________________________________________________

Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc pas etre diffuses, exploites ou copies sans autorisation.
Si vous avez recu ce message par erreur, veuillez le signaler a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration, France Telecom - Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci.

This message and its attachments may contain confidential or privileged information that may be protected by law; they should not be distributed, used or copied without authorisation. If you have received this email in error, please notify the sender and delete this message and its attachments. As emails may be altered, France Telecom - Orange is not liable for messages that have been modified, changed or falsified. Thank you.

_______________________________________________
discuss mailing list discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss

From fernando_luz at tpn.usp.br Tue Jun 4 10:59:02 2013
From: fernando_luz at tpn.usp.br (fernando_luz)
Date: Tue, 04 Jun 2013 12:59:02 -0300
Subject: [mpich-discuss] MPE is available in mpich-3.0.4 ? rlog doesn't work in configure process.
Message-ID: <51AE0EC6.7040903@tpn.usp.br>

Hi,

I didn't find the MPE sources in the mpich-3.0.4 package. Where can I download them? Are they still compatible with MPICH?

I also tried to install the logging support available in this release, but my attempt was not successful. I received the following error:

/home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure: line 3694: PAC_CC_SUBDIR_SHLIBS: command not found
configure: creating ./config.status
config.status: error: cannot find input file: `Makefile.in'
configure: error: src/util/logging/rlog configure failed

I attached the c.txt file used in the configuration.

Regards,
Fernando

-------------- next part --------------
Configuring MPICH version 3.0.4 with '--prefix=/home/fernando_luz/mpich-3.0.4' '--enable-cxx' '--disable-fc' '--disable-f77' '--enable-timing=log' '--with-logging=rlog' '--enable-timer-type=gettimeofday' Running on system: Linux TPN000300 3.2.0-45-generic #70-Ubuntu SMP Wed May 29 20:12:06 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux checking for icc... no checking for pgcc... no checking for xlc... no checking for xlC... no checking for pathcc... no checking for cc... cc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether cc accepts -g... yes checking for cc option to accept ISO C89... none needed checking whether cc understands -c and -o together... yes checking how to run the C preprocessor... cc -E checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /bin/mkdir -p checking for gawk... gawk checking whether make sets $(MAKE)... yes checking for style of include used by make... GNU checking whether make supports nested variables... yes checking dependency style of cc... gcc3 checking whether to enable maintainer-specific portions of Makefiles... yes checking for ar... ar checking the archiver (ar) interface... ar checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking how to print strings... printf checking for a sed that does not truncate output...
/bin/sed checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for fgrep... /bin/grep -F checking for ld used by cc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B checking the name lister (/usr/bin/nm -B) interface... BSD nm checking whether ln -s works... yes checking the maximum length of command line arguments... 1572864 checking whether the shell understands some XSI constructs... yes checking whether the shell understands "+="... yes checking how to convert x86_64-unknown-linux-gnu file names to x86_64-unknown-linux-gnu format... func_convert_file_noop checking how to convert x86_64-unknown-linux-gnu file names to toolchain format... func_convert_file_noop checking for /usr/bin/ld option to reload object files... -r checking for objdump... objdump checking how to recognize dependent libraries... pass_all checking for dlltool... no checking how to associate runtime and link libraries... printf %s\n checking for archiver @FILE support... @ checking for strip... strip checking for ranlib... ranlib checking command to parse /usr/bin/nm -B output from cc object... ok checking for sysroot... no checking for mt... mt checking if mt is a manifest tool... no checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for dlfcn.h... yes checking for objdir... .libs checking if cc supports -fno-rtti -fno-exceptions... no checking for cc option to produce PIC... -fPIC -DPIC checking if cc PIC flag -fPIC -DPIC works... yes checking if cc static flag -static works... yes checking if cc supports -c -o file.o... yes checking if cc supports -c -o file.o... (cached) yes checking whether the cc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... no checking whether to build static libraries... yes checking whether make supports nested variables... (cached) yes checking for icpc... no checking for pgCC... no checking for xlC... no checking for pathCC... no checking for c++... c++ checking whether we are using the GNU C++ compiler... yes checking whether c++ accepts -g... yes checking dependency style of c++... gcc3 checking how to run the C++ preprocessor... c++ -E checking for ld used by c++... /usr/bin/ld -m elf_x86_64 checking if the linker (/usr/bin/ld -m elf_x86_64) is GNU ld... yes checking whether the c++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking for c++ option to produce PIC... -fPIC -DPIC checking if c++ PIC flag -fPIC -DPIC works... yes checking if c++ static flag -static works... yes checking if c++ supports -c -o file.o... yes checking if c++ supports -c -o file.o... (cached) yes checking whether the c++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking dynamic linker characteristics... (cached) GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether we are using the GNU Fortran 77 compiler... 
no checking whether no accepts -g... no checking whether we are using the GNU Fortran compiler... no checking whether no accepts -g... no configure: RUNNING PREREQ FOR ch3:nemesis checking for getpagesize... yes configure: ===== configuring src/mpl ===== configure: running /bin/bash /home/fernando_luz/software/mpich-3.0.4/src/mpl/configure --disable-option-checking '--prefix=/home/fernando_luz/mpich-3.0.4' '--enable-cxx' '--disable-fc' '--disable-f77' '--enable-timing=log' '--with-logging=rlog' '--enable-timer-type=gettimeofday' --cache-file=/dev/null --srcdir=/home/fernando_luz/software/mpich-3.0.4/src/mpl checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /bin/mkdir -p checking for gawk... gawk checking whether make sets $(MAKE)... yes checking for style of include used by make... GNU checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether cc accepts -g... yes checking for cc option to accept ISO C89... none needed checking dependency style of cc... gcc3 checking the archiver (ar) interface... ar checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking how to print strings... printf checking for a sed that does not truncate output... /bin/sed checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for fgrep... /bin/grep -F checking for ld used by cc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B checking the name lister (/usr/bin/nm -B) interface... BSD nm checking whether ln -s works... yes checking the maximum length of command line arguments... 1572864 checking whether the shell understands some XSI constructs... yes checking whether the shell understands "+="... yes checking how to convert x86_64-unknown-linux-gnu file names to x86_64-unknown-linux-gnu format... func_convert_file_noop checking how to convert x86_64-unknown-linux-gnu file names to toolchain format... func_convert_file_noop checking for /usr/bin/ld option to reload object files... -r checking for objdump... objdump checking how to recognize dependent libraries... pass_all checking for dlltool... no checking how to associate runtime and link libraries... printf %s\n checking for archiver @FILE support... @ checking for strip... strip checking for ranlib... ranlib checking command to parse /usr/bin/nm -B output from cc object... ok checking for sysroot... no checking for mt... mt checking if mt is a manifest tool... no checking how to run the C preprocessor... cc -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for dlfcn.h... yes checking for objdir... .libs checking if cc supports -fno-rtti -fno-exceptions... no checking for cc option to produce PIC... -fPIC -DPIC checking if cc PIC flag -fPIC -DPIC works... yes checking if cc static flag -static works... yes checking if cc supports -c -o file.o... 
yes checking if cc supports -c -o file.o... (cached) yes checking whether the cc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... no checking whether to build static libraries... yes checking whether make supports nested variables... yes checking for an ANSI C-conforming const... yes checking for C/C++ restrict keyword... __restrict checking for variable argument list macro functionality... yes checking for gcov... gcov checking whether the compiler supports __typeof(variable)... yes checking stdio.h usability... yes checking stdio.h presence... yes checking for stdio.h... yes checking for stdlib.h... (cached) yes checking for string.h... (cached) yes checking stdarg.h usability... yes checking stdarg.h presence... yes checking for stdarg.h... yes checking ctype.h usability... yes checking ctype.h presence... yes checking for ctype.h... yes checking search.h usability... yes checking search.h presence... yes checking for search.h... yes checking for inttypes.h... (cached) yes checking for stdint.h... (cached) yes checking valgrind.h usability... no checking valgrind.h presence... no checking for valgrind.h... no checking memcheck.h usability... no checking memcheck.h presence... no checking for memcheck.h... no checking valgrind/valgrind.h usability... yes checking valgrind/valgrind.h presence... yes checking for valgrind/valgrind.h... yes checking valgrind/memcheck.h usability... yes checking valgrind/memcheck.h presence... yes checking for valgrind/memcheck.h... yes checking helgrind.h usability... no checking helgrind.h presence... no checking for helgrind.h... no checking valgrind/helgrind.h usability... yes checking valgrind/helgrind.h presence... yes checking for valgrind/helgrind.h... yes checking drd.h usability... no checking drd.h presence... no checking for drd.h... no checking valgrind/drd.h usability... yes checking valgrind/drd.h presence... yes checking for valgrind/drd.h... yes checking whether the valgrind headers are broken or too old... no checking for strdup... yes checking whether strdup needs a declaration... no checking for snprintf... yes checking whether snprintf needs a declaration... no checking for strncmp... yes checking whether strncmp needs a declaration... no checking for putenv... yes checking whether putenv needs a declaration... no checking whether __attribute__ allowed... yes checking whether __attribute__((format)) allowed... yes checking that generated files are newer than configure... 
done configure: creating ./config.status config.status: creating Makefile config.status: creating localdefs config.status: creating include/config.h config.status: include/config.h is unchanged config.status: executing depfiles commands config.status: executing libtool commands config.status: executing include/mplconfig.h commands config.status: creating include/mplconfig.h - prefix MPL for include/config.h defines config.status: include/mplconfig.h is unchanged configure: ===== done with src/mpl configure ===== configure: sourcing src/mpl/localdefs configure: ===== configuring src/openpa ===== configure: running /bin/bash /home/fernando_luz/software/mpich-3.0.4/src/openpa/configure --disable-option-checking '--prefix=/home/fernando_luz/mpich-3.0.4' --with-atomic-primitives=auto_allow_emulation '--enable-cxx' '--disable-fc' '--disable-f77' '--enable-timing=log' '--with-logging=rlog' '--enable-timer-type=gettimeofday' --cache-file=/dev/null --srcdir=/home/fernando_luz/software/mpich-3.0.4/src/openpa checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /bin/mkdir -p checking for gawk... gawk checking whether make sets $(MAKE)... yes checking for style of include used by make... GNU checking for gcc... cc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether cc accepts -g... yes checking for cc option to accept ISO C89... none needed checking dependency style of cc... gcc3 checking the archiver (ar) interface... ar checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking how to print strings... printf checking for a sed that does not truncate output... /bin/sed checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for fgrep... /bin/grep -F checking for ld used by cc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B checking the name lister (/usr/bin/nm -B) interface... BSD nm checking whether ln -s works... yes checking the maximum length of command line arguments... 1572864 checking whether the shell understands some XSI constructs... yes checking whether the shell understands "+="... yes checking how to convert x86_64-unknown-linux-gnu file names to x86_64-unknown-linux-gnu format... func_convert_file_noop checking how to convert x86_64-unknown-linux-gnu file names to toolchain format... func_convert_file_noop checking for /usr/bin/ld option to reload object files... -r checking for objdump... objdump checking how to recognize dependent libraries... pass_all checking for dlltool... no checking how to associate runtime and link libraries... printf %s\n checking for archiver @FILE support... @ checking for strip... strip checking for ranlib... ranlib checking command to parse /usr/bin/nm -B output from cc object... ok checking for sysroot... no checking for mt... mt checking if mt is a manifest tool... no checking how to run the C preprocessor... cc -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... 
yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for dlfcn.h... yes checking for objdir... .libs checking if cc supports -fno-rtti -fno-exceptions... no checking for cc option to produce PIC... -fPIC -DPIC checking if cc PIC flag -fPIC -DPIC works... yes checking if cc static flag -static works... yes checking if cc supports -c -o file.o... yes checking if cc supports -c -o file.o... (cached) yes checking whether the cc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... no checking whether to build static libraries... yes checking whether make supports nested variables... yes checking for gcc... (cached) cc checking whether we are using the GNU C compiler... (cached) yes checking whether cc accepts -g... (cached) yes checking for cc option to accept ISO C89... (cached) none needed checking dependency style of cc... (cached) gcc3 checking whether to enable assertions... yes checking pthread.h usability... yes checking pthread.h presence... yes checking for pthread.h... yes checking atomic.h usability... no checking atomic.h presence... no checking for atomic.h... no checking intrin.h usability... no checking intrin.h presence... no checking for intrin.h... no checking for inttypes.h... (cached) yes checking for stdint.h... (cached) yes checking stddef.h usability... yes checking stddef.h presence... yes checking for stddef.h... yes checking for C/C++ restrict keyword... __restrict checking for inline... inline checking for an ANSI C-conforming const... yes checking for pthread_create in -lpthread... yes checking for pthread_yield... yes checking if 100 threads can be run at once... yes checking size of void *... 8 checking size of int... 4 checking whether __attribute__ allowed... yes checking whether __attribute__((format)) allowed... yes checking if compiler rejects bogus asm statements... yes checking for support for gcc x86/x86_64 primitives... yes checking for support for gcc x86 primitives for pre-Pentium 4... yes checking for support for gcc ia64 primitives... no checking for support for gcc PowerPC atomics... no checking for support for gcc ARM atomics... no checking for support for gcc SiCortex atomics... no checking for support for gcc atomic intrinsics... yes checking for support for Windows NT atomic intrinsics... no checking for support for Sun atomic operations library... no checking whether to enable strict fairness checks... no checking that generated files are newer than configure... done configure: creating ./config.status config.status: creating Makefile config.status: creating src/Makefile config.status: creating test/Makefile config.status: creating openpa.pc config.status: creating src/config.h config.status: src/config.h is unchanged config.status: executing depfiles commands config.status: executing libtool commands config.status: executing src/opa_config.h commands config.status: creating src/opa_config.h - prefix OPA for src/config.h defines config.status: src/opa_config.h is unchanged configure: ===== done with src/openpa configure ===== sourcing /home/fernando_luz/software/mpich-3.0.4/src/pm/hydra/mpichprereq checking whether the compiler defines __func__... 
yes checking whether the compiler defines __FUNC__... no checking whether the compiler sets __FUNCTION__... yes checking whether C compiler accepts option -O2... yes checking whether C compiler option -O2 works with an invalid prototype program... yes checking whether routines compiled with -O2 can be linked with ones compiled without -O2... yes checking for type of weak symbol alias support... pragma weak checking whether __attribute__ ((weak)) allowed... yes checking whether __attribute__ ((weak_import)) allowed... yes checking whether __attribute__((weak,alias(...))) allowed... no checking for multiple weak symbol support... yes checking for shared library (esp. rpath) characteristics of CC... done (results in src/env/cc_shlib.conf) checking whether the C++ compiler c++ can build an executable... yes checking whether C++ compiler works with string... yes checking whether the compiler supports exceptions... yes checking whether the compiler recognizes bool as a built-in type... yes checking whether the compiler implements namespaces... yes checking whether available... yes checking whether the compiler implements the namespace std... yes checking whether available... no checking for GNU g++ version... 4 . 6 checking for shared library (esp. rpath) characteristics of CXX... done (results in src/env/cxx_shlib.conf) checking whether C++ compiler accepts option -O2... yes checking whether routines compiled with -O2 can be linked with ones compiled without -O2... yes checking for perl... /usr/bin/perl checking for ar... ar checking for ranlib... ranlib checking for killall... killall checking whether install works... yes checking whether mkdir -p works... yes checking for make... make checking whether clock skew breaks make... no checking whether make supports include... yes checking whether make allows comments in actions... yes checking for virtual path format... VPATH checking whether make sets CFLAGS... yes checking for bash... /bin/bash checking whether /bin/bash supports arrays... yes checking for doctext... false checking for an ANSI C-conforming const... yes checking for working volatile... yes checking for C/C++ restrict keyword... __restrict checking for inline... inline checking whether __attribute__ allowed... yes checking whether __attribute__((format)) allowed... yes checking whether byte ordering is bigendian... no checking whether C compiler allows unaligned doubles... yes checking whether cc supports __func__... yes checking whether long double is supported... yes checking whether long long is supported... yes checking for max C struct integer alignment... eight checking for max C struct floating point alignment... sixteen checking for max C struct alignment of structs with doubles... eight checking for max C struct floating point alignment with long doubles... sixteen configure: WARNING: Structures containing long doubles may be aligned differently from structures with floats or longs. MPICH does not handle this case automatically and you should avoid assumed extents for structures containing float types. checking if alignment of structs with doubles is based on position... no checking if alignment of structs with long long ints is based on position... no checking if double alignment breaks rules, find actual alignment... no checking for alignment restrictions on pointers... int or better checking size of char... 1 checking size of unsigned char... 1 checking size of short... 2 checking size of unsigned short... 2 checking size of int... 4 checking size of unsigned int... 
4 checking size of long... 8 checking size of unsigned long... 8 checking size of long long... 8 checking size of unsigned long long... 8 checking size of float... 4 checking size of double... 8 checking size of long double... 16 checking size of void *... 8 checking for ANSI C header files... (cached) yes checking stddef.h usability... yes checking stddef.h presence... yes checking for stddef.h... yes checking size of wchar_t... 4 checking size of float_int... 8 checking size of double_int... 16 checking size of long_int... 16 checking size of short_int... 8 checking size of two_int... 8 checking size of long_double_int... 32 checking sys/bitypes.h usability... yes checking sys/bitypes.h presence... yes checking for sys/bitypes.h... yes checking for inttypes.h... (cached) yes checking for stdint.h... (cached) yes checking for int8_t... yes checking for int16_t... yes checking for int32_t... yes checking for int64_t... yes checking for uint8_t... yes checking for uint16_t... yes checking for uint32_t... yes checking for uint64_t... yes checking stdbool.h usability... yes checking stdbool.h presence... yes checking for stdbool.h... yes checking complex.h usability... yes checking complex.h presence... yes checking for complex.h... yes checking size of _Bool... 1 checking size of float _Complex... 8 checking size of double _Complex... 16 checking size of long double _Complex... 32 checking for _Bool... yes checking for float _Complex... yes checking for double _Complex... yes checking for long double _Complex... yes checking size of bool... 1 checking complex usability... yes checking complex presence... yes checking for complex... yes checking size of Complex... 8 checking size of DoubleComplex... 16 checking size of LongDoubleComplex... 32 checking for alignment restrictions on int64_t... no checking for alignment restrictions on int32_t... no checking size of MPIR_Bsend_data_t... 96 checking for gcc __asm__ and pentium cmpxchgl instruction... no checking for gcc __asm__ and AMD x86_64 cmpxchgq instruction... yes checking for gcc __asm__ and IA64 xchg4 instruction... no checking for gcov... gcov checking for ANSI C header files... (cached) yes checking for stdlib.h... (cached) yes checking stdarg.h usability... yes checking stdarg.h presence... yes checking for stdarg.h... yes checking for sys/types.h... (cached) yes checking for string.h... (cached) yes checking for inttypes.h... (cached) yes checking limits.h usability... yes checking limits.h presence... yes checking for limits.h... yes checking for stddef.h... (cached) yes checking errno.h usability... yes checking errno.h presence... yes checking for errno.h... yes checking sys/socket.h usability... yes checking sys/socket.h presence... yes checking for sys/socket.h... yes checking sys/time.h usability... yes checking sys/time.h presence... yes checking for sys/time.h... yes checking for unistd.h... (cached) yes checking endian.h usability... yes checking endian.h presence... yes checking for endian.h... yes checking assert.h usability... yes checking assert.h presence... yes checking for assert.h... yes checking sys/param.h usability... yes checking sys/param.h presence... yes checking for sys/param.h... yes checking for sys/uio.h... yes checking for size_t... yes checking for setitimer... yes checking for alarm... yes checking for vsnprintf... yes checking for vsprintf... yes checking whether vsnprintf needs a declaration... no checking for strerror... yes checking for strncasecmp... yes checking whether strerror_r is declared... 
yes checking for strerror_r... yes checking whether strerror_r returns char *... no checking whether strerror_r needs a declaration... no checking for snprintf... yes checking whether snprintf needs a declaration... no checking for qsort... yes checking for va_copy... yes checking for variable argument list macro functionality... yes checking for working alloca.h... yes checking for alloca... yes checking for strdup... yes checking whether strdup needs a declaration... no checking for mkstemp... yes checking whether mkstemp needs a declaration... no checking for fdopen... yes checking whether fdopen needs a declaration... yes checking for putenv... yes checking whether putenv needs a declaration... no checking for gettimeofday... yes checking pthread.h usability... yes checking pthread.h presence... yes checking for pthread.h... yes checking for pthread_key_create in -lpthread... yes checking for pthread_yield... yes checking for pthread_key_create... yes checking for pthread_cleanup_push... no checking whether pthread_cleanup_push is available (may be a macro in pthread.h)... no checking whether pthread.h defines PTHREAD_MUTEX_RECURSIVE_NP... yes checking whether pthread.h defines PTHREAD_MUTEX_RECURSIVE... yes checking whether pthread.h defines PTHREAD_MUTEX_ERRORCHECK_NP... yes checking whether pthread.h defines PTHREAD_MUTEX_ERRORCHECK... yes checking whether pthread_mutexattr_settype needs a declaration... no checking for thread local storage specifier... __thread checking for getpid... yes checking sched.h usability... yes checking sched.h presence... yes checking for sched.h... yes checking for unistd.h... (cached) yes checking sys/select.h usability... yes checking sys/select.h presence... yes checking for sys/select.h... yes checking for sched_yield... yes checking for yield... no checking for usleep... yes checking for sleep... yes checking for select... yes checking whether usleep needs a declaration... no checking for sched_setaffinity... yes checking for sched_getaffinity... yes checking for bindprocessor... no checking for thread_policy_set... no checking whether cpu_set_t available... yes checking whether the CPU_SET and CPU_ZERO macros are defined... no checking for unistd.h... (cached) yes checking for string.h... (cached) yes checking for stdlib.h... (cached) yes checking for sys/socket.h... (cached) yes checking for strings.h... (cached) yes checking for assert.h... (cached) yes checking for snprintf... (cached) yes checking whether snprintf needs a declaration... (cached) no checking for strncasecmp... (cached) yes checking for sys/types.h... (cached) yes checking for sys/param.h... (cached) yes checking for sys/socket.h... (cached) yes checking netinet/in.h usability... yes checking netinet/in.h presence... yes checking for netinet/in.h... yes checking netinet/tcp.h usability... yes checking netinet/tcp.h presence... yes checking for netinet/tcp.h... yes checking sys/un.h usability... yes checking sys/un.h presence... yes checking for sys/un.h... yes checking netdb.h usability... yes checking netdb.h presence... yes checking for netdb.h... yes checking for library containing socket... none required checking for library containing gethostbyname... none required checking for socket... yes checking for setsockopt... yes checking for gethostbyname... yes checking whether socklen_t is defined (in sys/socket.h if present)... yes checking whether struct hostent contains h_addr_list... yes checking whether __attribute__ allowed... 
(cached) yes checking whether __attribute__((format)) allowed... (cached) yes configure: RUNNING CONFIGURE FOR CH3 DEVICE checking for assert.h... (cached) yes checking for limits.h... (cached) yes checking for string.h... (cached) yes checking for sys/types.h... (cached) yes checking for sys/uio.h... (cached) yes checking uuid/uuid.h usability... no checking uuid/uuid.h presence... no checking for uuid/uuid.h... no checking time.h usability... yes checking time.h presence... yes checking for time.h... yes checking ctype.h usability... yes checking ctype.h presence... yes checking for ctype.h... yes checking for unistd.h... (cached) yes checking arpa/inet.h usability... yes checking arpa/inet.h presence... yes checking for arpa/inet.h... yes checking for sys/socket.h... (cached) yes checking for net/if.h... yes checking for pid_t... yes checking for inet_pton... yes checking for gethostname... yes checking whether gethostname needs a declaration... no checking for CFUUIDCreate... no checking for uuid_generate... no checking for time... yes checking for OpenPA atomic primitive availability... yes checking whether byte ordering is bigendian... (cached) no configure: RUNNING CONFIGURE FOR ch3:nemesis checking for net/if.h... yes checking for assert.h... (cached) yes checking for netdb.h... (cached) yes checking for unistd.h... (cached) yes checking for sched.h... (cached) yes checking sys/mman.h usability... yes checking sys/mman.h presence... yes checking for sys/mman.h... yes checking sys/ioctl.h usability... yes checking sys/ioctl.h presence... yes checking for sys/ioctl.h... yes checking for sys/socket.h... (cached) yes checking sys/sockio.h usability... no checking sys/sockio.h presence... no checking for sys/sockio.h... no checking for sys/types.h... (cached) yes checking for errno.h... (cached) yes checking sys/ipc.h usability... yes checking sys/ipc.h presence... yes checking for sys/ipc.h... yes checking sys/shm.h usability... yes checking sys/shm.h presence... yes checking for sys/shm.h... yes checking for netinet/in.h... (cached) yes checking signal.h usability... yes checking signal.h presence... yes checking for signal.h... yes checking for signal... yes checking for mkstemp... (cached) yes checking for rand... yes checking for srand... yes checking for mmap... yes checking for munmap... yes configure: Using a memory-mapped file for shared memory checking whether struct hostent contains h_addr_list... (cached) yes checking whether we can use struct ifconf... yes checking whether we can use struct ifreq... yes checking knem_io.h usability... no checking knem_io.h presence... no checking for knem_io.h... no configure: ===== configuring src/util/logging/rlog ===== configure: running /bin/bash /home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure --disable-option-checking '--prefix=/home/fernando_luz/mpich-3.0.4' '--enable-cxx' '--disable-fc' '--disable-f77' '--enable-timing=log' '--with-logging=rlog' '--enable-timer-type=gettimeofday' --cache-file=/dev/null --srcdir=/home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog RUNNING CONFIGURE FOR RLOG /home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure: line 2025: PAC_ARG_CACHING: command not found /home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure: line 2058: PAC_PROG_CC: command not found checking for ar... /usr/bin/ar checking for ranlib... ranlib checking for a BSD-compatible install... 
/usr/bin/install -c
/home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure: line 2238: PAC_PROG_CHECK_INSTALL_WORKS: command not found
/home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure: line 2239: PAC_PROG_INSTALL_BREAKS_LIBS: command not found
/home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure: line 2240: PAC_PROG_MKDIR_P: command not found
/home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure: line 2241: PAC_PROG_MAKE: command not found
checking for perl... /usr/bin/perl
checking for gcc... cc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether cc accepts -g... yes
checking for cc option to accept ISO C89... none needed
checking for an ANSI C-conforming const... yes
checking for working volatile... yes
checking for C/C++ restrict keyword... __restrict
checking for inline... inline
checking how to run the C preprocessor... cc -E
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for unistd.h... (cached) yes
/home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure: line 3694: PAC_CC_SUBDIR_SHLIBS: command not found
configure: creating ./config.status
config.status: error: cannot find input file: `Makefile.in'
configure: error: src/util/logging/rlog configure failed

From jhammond at alcf.anl.gov Tue Jun 4 12:48:22 2013
From: jhammond at alcf.anl.gov (Jeff Hammond)
Date: Tue, 4 Jun 2013 10:48:22 -0700
Subject: [mpich-discuss] MPE is available in mpich-3.0.4 ? rlog doesn't work in configure process.
In-Reply-To: <51AE0EC6.7040903@tpn.usp.br>
References: <51AE0EC6.7040903@tpn.usp.br>
Message-ID:

MPE isn't actively developed and should sit strictly on top of any MPI
implementation, so you can just grab MPE from an older release of MPICH.

My guess is that MPE will be a standalone download at some point in the future.

Jeff

On Tue, Jun 4, 2013 at 10:59 AM, fernando_luz wrote:
> Hi,
>
> I didn't find the MPE source in the mpich-3.0.4 package. Where can I
> download the source? Is it still compatible with mpich?
>
> And I tried to install the logging support available in this release,
> but my attempt was not successful. I received the following error:
>
> /home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure:
> line 3694: PAC_CC_SUBDIR_SHLIBS: command not found
> configure: creating ./config.status
> config.status: error: cannot find input file: `Makefile.in'
> configure: error: src/util/logging/rlog configure failed
>
> I attached the c.txt file used in the configuration.
>
> Regards
>
> Fernando
>
> _______________________________________________
> discuss mailing list discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

--
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
ALCF docs: http://www.alcf.anl.gov/user-guides

From thakur at mcs.anl.gov Tue Jun 4 13:05:10 2013
From: thakur at mcs.anl.gov (Rajeev Thakur)
Date: Tue, 4 Jun 2013 13:05:10 -0500
Subject: [mpich-discuss] MPE is available in mpich-3.0.4 ? rlog doesn't work in configure process.
In-Reply-To:
References: <51AE0EC6.7040903@tpn.usp.br>
Message-ID:

It can be downloaded from http://www.mcs.anl.gov/research/projects/perfvis/download/index.htm. The source repository is at http://git.mpich.org/mpe.git/

Rajeev

On Jun 4, 2013, at 12:48 PM, Jeff Hammond wrote:

> MPE isn't actively developed and should sit strictly on top of any MPI
> implementation, so you can just grab MPE from an older release of
> MPICH.
>
> My guess is that MPE will be a standalone download at some point in the future.
>
> Jeff
>
> On Tue, Jun 4, 2013 at 10:59 AM, fernando_luz wrote:
>> Hi,
>>
>> I didn't find the MPE source in the mpich-3.0.4 package. Where can I
>> download the source? Is it still compatible with mpich?
>>
>> And I tried to install the logging support available in this release,
>> but my attempt was not successful. I received the following error:
>>
>> /home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure:
>> line 3694: PAC_CC_SUBDIR_SHLIBS: command not found
>> configure: creating ./config.status
>> config.status: error: cannot find input file: `Makefile.in'
>> configure: error: src/util/logging/rlog configure failed
>>
>> I attached the c.txt file used in the configuration.
>>
>> Regards
>>
>> Fernando
>>
>> _______________________________________________
>> discuss mailing list discuss at mpich.org
>> To manage subscription options or unsubscribe:
>> https://lists.mpich.org/mailman/listinfo/discuss
>
> --
> Jeff Hammond
> Argonne Leadership Computing Facility
> University of Chicago Computation Institute
> jhammond at alcf.anl.gov / (630) 252-5381
> http://www.linkedin.com/in/jeffhammond
> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
> ALCF docs: http://www.alcf.anl.gov/user-guides
> _______________________________________________
> discuss mailing list discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

From bruno.guerraz at orange.com Wed Jun 5 02:15:15 2013
From: bruno.guerraz at orange.com (bruno.guerraz at orange.com)
Date: Wed, 5 Jun 2013 07:15:15 +0000
Subject: [mpich-discuss] MPICH2 logging on windows
In-Reply-To: <710851674.1970050.1370357275265.JavaMail.root@mcs.anl.gov>
References: <17300_1370340650_51ADBD2A_17300_255_1_EF57EDE3FD880F448F471B5B3416152206730E@PEXCVZYM12.corporate.adroot.infra.ftgroup> <710851674.1970050.1370357275265.JavaMail.root@mcs.anl.gov>
Message-ID: <27655_1370416516_51AEE584_27655_1043_1_EF57EDE3FD880F448F471B5B3416152206741E@PEXCVZYM12.corporate.adroot.infra.ftgroup>

Attached is a simple test: a simple Master/Slave program. The master launches several slaves with MPI_Comm_spawn.
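(Bruno's master.cpp and slave.cpp attachments were scrubbed from the archive. For readers following along, here is a minimal, hypothetical sketch of the pattern he describes, spawn with MPI_Comm_spawn and then merge; it is not his actual code, and the executable name "slave" and the count of 4 slaves are assumptions.)

/* master.c -- a minimal spawning master (hypothetical sketch).
   Compile: mpicc master.c -o master */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm inter, merged;
    int rank;

    MPI_Init(&argc, &argv);

    /* Launch 4 copies of the "slave" executable. */
    MPI_Comm_spawn("slave", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &inter, MPI_ERRCODES_IGNORE);

    /* Merge parent and children into one intracommunicator.
       This is the call Bruno reports crashing once MPE logging is on. */
    MPI_Intercomm_merge(inter, 0, &merged);
    MPI_Comm_rank(merged, &rank);
    printf("master: rank %d in merged communicator\n", rank);

    MPI_Comm_free(&merged);
    MPI_Comm_free(&inter);
    MPI_Finalize();
    return 0;
}

/* slave.c -- the spawned side. Compile: mpicc slave.c -o slave */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm parent, merged;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);

    /* high = 1 orders the slaves after the master in the merged group. */
    MPI_Intercomm_merge(parent, 1, &merged);
    MPI_Comm_rank(merged, &rank);
    printf("slave: rank %d in merged communicator\n", rank);

    MPI_Comm_free(&merged);
    MPI_Finalize();
    return 0;
}

Launched as a single process (matching the -n 1 in Bruno's command line below), the slaves are created by the spawn call rather than by mpiexec.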
I run the program with the command line:

mpiexec.exe -log -n 1 -localonly master.exe

With or without the lines MPE_Init_log(); and MPE_Finalize_log(), it fails with the same error:

.\src\logging\src\clog_commset.c:CLOG_CommSet_get_IDs() - PMPI_Comm_get_attr() fails!
Backtrace of the callstack at rank 0:
unable to read the cmd header on the pmi context, Error = -1

thanks in advance

Bruno

-----Original Message-----
From: Jayesh Krishna [mailto:jayesh at mcs.anl.gov]
Sent: Tuesday, June 4, 2013 16:48
To: discuss at mpich.org
Cc: GUERRAZ Bruno OLNC/OLPS
Subject: Re: [mpich-discuss] MPICH2 logging on windows

Hi,
Can you send us a simple test case (extracted from your code) that fails?

Regards,
Jayesh

----- Original Message -----
From: "bruno guerraz"
To: discuss at mpich.org
Sent: Tuesday, June 4, 2013 5:10:49 AM
Subject: [mpich-discuss] MPICH2 logging on windows

Hi,

I am trying to use MPE logging with my MPI program on Windows. It works fine on the cpi example. I have only linked my program with mpe.lib and run on a single machine with the -localonly -log options. I use dynamic processes via MPI_Comm_spawn. The program crashed in MPI_Intercomm_merge with the log

-------------------------------------------------------
.\src\logging\src\clog_commset.c:CLOG_CommSet_get_IDs() - PMPI_Comm_get_attr() fails!
Backtrace of the callstack at rank 0:
unable to read the cmd header on the pmi context, Error = -1

I really appreciate any help

Bruno

This message and its attachments may contain confidential or privileged information that may be protected by law; they should not be distributed, used or copied without authorisation. If you have received this email in error, please notify the sender and delete this message and its attachments. As emails may be altered, France Telecom - Orange is not liable for messages that have been modified, changed or falsified. Thank you.

_______________________________________________
discuss mailing list discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: master.cpp
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: slave.cpp
URL:

From johnd9886 at gmail.com Wed Jun 5 16:09:25 2013
From: johnd9886 at gmail.com (john donald)
Date: Wed, 5 Jun 2013 23:09:25 +0200
Subject: [mpich-discuss] Fwd: ckpoint-num error
In-Reply-To:
References:
Message-ID:

---------- Forwarded message ----------
From: john donald
Date: 2013/6/3
Subject: ckpoint-num error
To: mpich-discuss at mcs.anl.gov

I used mpiexec with checkpointing and created two checkpoint files:

mpiexec -ckpointlib blcr -ckpoint-prefix /home/john/ckpts/app.ckpoint -ckpoint-interval 4 -n 4 /home/john/app/md

context-num0-0-0
context-num1-0-0

I am trying to make a restart:

mpiexec -ckpointlib blcr -ckpoint-prefix /home/john/ckpts/app.ckpoint -n 4 -ckpoint-num 1

but nothing happened; it just hangs. I also tried:

mpiexec -ckpointlib blcr -ckpoint-prefix /home/john/ckpts/app.ckpoint -n 4 -ckpoint-num 0-0-0

It also hangs.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From thakur at mcs.anl.gov Wed Jun 5 16:14:35 2013
From: thakur at mcs.anl.gov (Rajeev Thakur)
Date: Wed, 5 Jun 2013 16:14:35 -0500
Subject: [mpich-discuss] Fwd: ckpoint-num error
In-Reply-To:
References:
Message-ID: <88F47ECA-32D7-4A78-85AD-E6E69D74CC06@mcs.anl.gov>

I don't know, but see if anything on this page helps:
http://wiki.mpich.org/mpich/index.php/Checkpointing

On Jun 5, 2013, at 4:09 PM, john donald wrote:

> ---------- Forwarded message ----------
> From: john donald
> Date: 2013/6/3
> Subject: ckpoint-num error
> To: mpich-discuss at mcs.anl.gov
>
> I used mpiexec with checkpointing and created two checkpoint files:
>
> mpiexec -ckpointlib blcr -ckpoint-prefix /home/john/ckpts/app.ckpoint -ckpoint-interval 4 -n 4 /home/john/app/md
>
> context-num0-0-0
> context-num1-0-0
>
> I am trying to make a restart:
> mpiexec -ckpointlib blcr -ckpoint-prefix /home/john/ckpts/app.ckpoint -n 4 -ckpoint-num 1
>
> but nothing happened; it just hangs. I also tried:
> mpiexec -ckpointlib blcr -ckpoint-prefix /home/john/ckpts/app.ckpoint -n 4 -ckpoint-num 0-0-0
> It also hangs.
>
> _______________________________________________
> discuss mailing list discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

From Harry.Miller at gov.bc.ca Wed Jun 5 16:55:23 2013
From: Harry.Miller at gov.bc.ca (Miller, Harry CSNR:EX)
Date: Wed, 5 Jun 2013 14:55:23 -0700
Subject: [mpich-discuss] MPICH problem
Message-ID:

Hello,

We are having problems after having installed the MPICH2 (32-bit) along with TauDEM 5.1 32-bit Install Package on one of our Windows Server 2008 64-bit Citrix 6.5 test servers. When registering users using wpmiregister.exe, it appears that only the admin. credentials will work and NOT Citrix regular user accounts. Is there some way to allow Citrix regular users to be able to run commands that call the TauDEM applications from the server?

For example, opening up a command prompt on the test server and typing in ...

mpiexec -n 8 pitremove -z "W:\FOR\RSI\RSI\Projects\wburt\projects\2013\006_HSCM_Boundary\data\Raster\dem25.tif" -fel "T:\test.tif"

requires users' credentials to launch the processes (see screenshot below).
Regular Citrix credentials don't seem to be accepted whilst my admin. ones are? Also, do we add the credentials logged in to the server as 'admin' or do individual regular users register their credentials from the Citrix server session? Our users won't have admin. accounts but they will all log into the same test server with the MPICH2 and TauDEM 5.1 install ... and yet the processes ask for credentials even though the users are already logged in with their regular IDIR Citrix login creds???

I registered my regular Citrix user creds. and also my admin. ones but ONLY the admin one is accepted. When I installed MPICH2 I also checked off that it be available to everyone. But we still get the Aborting: Unable to connect message???

Thank you in advance for a reply.

H.J. Miller
Spatial Technology Analyst | Infrastructure Services Section
Information Management Branch (IMB)
Phone: 250-356-5217 | FAX: 250-953-3493 | E-Mail: Harry.Miller at gov.bc.ca
Corporate Services For The Natural Resources Sector (CSNR)

From: David Tarboton [mailto:dtarb at usu.edu]
Sent: Tuesday, June 4, 2013 7:23 PM
To: Miller, Harry CSNR:EX
Subject: Re: FW: TauDEM problem

Harry,

I do not understand MPICH2 credentials very well so I do not know what to say. I have successfully run on Windows 2008 as a Windows user but not with Citrix. You might want to make sure MPICH2 was installed for all users. Also see http://nick-goodman.blogspot.com/2012/02/using-mpich-from-ms-visual-studio-2010.html for some suggestions that may help. Also you might try mpiexec -register with the admin user. If it is running as admin maybe this will work for other users without having to give them admin credentials. Also, is wpmiregister.exe the same as mpiexec -register? Sometimes there are different versions of MPI on the same machine from different software and if the wrong one runs it does not work. It is frustrating to me to have to have TauDEM depend on this software that is so hard to install and get configured right.

Good luck.

Dave

On 6/4/2013 5:06 PM, Miller, Harry CSNR:EX wrote:

Hello,

I have installed the MPICH2 (32-bit) and TauDEM 5.1 32-bit Install Package on one of our Windows Server 2008 64-bit Citrix 6.5 test servers. However, when registering users using wpmiregister.exe, it appears that only the admin. credentials will work and NOT Citrix regular user accounts. Is there some way to allow Citrix regular users to be able to run commands that call the TauDEM applications from the server?

For example, opening up a command prompt on the test server and typing in ...

mpiexec -n 8 pitremove -z "W:\FOR\RSI\RSI\Projects\wburt\projects\2013\006_HSCM_Boundary\data\Raster\dem25.tif" -fel "T:\test.tif"

requires users' credentials to launch the processes (see screenshot below). Regular Citrix credentials don't seem to be accepted whilst my admin. ones are? Our users won't have admin. accounts but they will all log into the same test server with the MPICH2 and TauDEM 5.1 install ... and yet the processes ask for credentials even though the users are already logged in with their regular IDIR Citrix login creds???

I registered my regular Citrix user creds. and also my admin. ones.

Is something still missing here??

Thanks for a reply.

Harry

H.J. Miller
Spatial Technology Analyst | Infrastructure Services Section
Information Management Branch (IMB)
Phone: 250-356-5217 | FAX: 250-953-3493 | E-Mail: Harry.Miller at gov.bc.ca
Corporate Services For The Natural Resources Sector (CSNR)
Government of British Columbia

From: Miller, Harry CSNR:EX
Sent: Tuesday, June 4, 2013 3:20 PM
To: Burt, William FLNR:EX
Subject: RE: TauDEM fix

Will,

Looks like the creds have to be admin ones. Do you have an admin. account? My admin. creds work but not my regular IDIR creds.

Harry

H.J. Miller
Spatial Technology Analyst | Infrastructure Services Section
Information Management Branch (IMB)
Phone: 250-356-5217 | FAX: 250-953-3493 | E-Mail: Harry.Miller at gov.bc.ca
Corporate Services For The Natural Resources Sector (CSNR)

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ATT00001.png
Type: image/png
Size: 56457 bytes
Desc: ATT00001.png
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.jpg
Type: image/jpeg
Size: 66323 bytes
Desc: image001.jpg
URL:

From wbland at mcs.anl.gov Wed Jun 5 22:19:08 2013
From: wbland at mcs.anl.gov (Wesley Bland)
Date: Wed, 5 Jun 2013 22:19:08 -0500 (CDT)
Subject: [mpich-discuss] Fwd: ckpoint-num error
In-Reply-To: <88F47ECA-32D7-4A78-85AD-E6E69D74CC06@mcs.anl.gov>
References: <88F47ECA-32D7-4A78-85AD-E6E69D74CC06@mcs.anl.gov>
Message-ID: <089CF900-3487-42BF-91EF-57984AE3943D@mcs.anl.gov>

Is there actually anything in those checkpoints? With a checkpoint happening every 4 seconds you may be overdoing it.

Wesley

On Jun 5, 2013, at 2:14 PM, Rajeev Thakur wrote:

> I don't know, but see if anything on this page helps:
> http://wiki.mpich.org/mpich/index.php/Checkpointing
>
> On Jun 5, 2013, at 4:09 PM, john donald wrote:
>
>> ---------- Forwarded message ----------
>> From: john donald
>> Date: 2013/6/3
>> Subject: ckpoint-num error
>> To: mpich-discuss at mcs.anl.gov
>>
>> I used mpiexec with checkpointing and created two checkpoint files:
>>
>> mpiexec -ckpointlib blcr -ckpoint-prefix /home/john/ckpts/app.ckpoint -ckpoint-interval 4 -n 4 /home/john/app/md
>>
>> context-num0-0-0
>> context-num1-0-0
>>
>> I am trying to make a restart:
>> mpiexec -ckpointlib blcr -ckpoint-prefix /home/john/ckpts/app.ckpoint -n 4 -ckpoint-num 1
>>
>> but nothing happened; it just hangs. I also tried:
>> mpiexec -ckpointlib blcr -ckpoint-prefix /home/john/ckpts/app.ckpoint -n 4 -ckpoint-num 0-0-0
>> It also hangs.
>>
>> _______________________________________________
>> discuss mailing list discuss at mpich.org
>> To manage subscription options or unsubscribe:
>> https://lists.mpich.org/mailman/listinfo/discuss

From wbland at mcs.anl.gov Wed Jun 5 23:04:39 2013
From: wbland at mcs.anl.gov (Wesley Bland)
Date: Wed, 5 Jun 2013 23:04:39 -0500 (CDT)
Subject: [mpich-discuss] MPICH problem
In-Reply-To:
References:
Message-ID: <446D93AA-A51E-46E0-A265-4284DF444E74@mcs.anl.gov>

It sounds like your problem is outside of MPICH's control but just to be sure: you can run a simple MPI application and get expected output as long as you're using a regular Windows user account, correct? (You'll have to forgive me if I don't know all the Windowsisms.
There aren't many Windows people left on this project as we stopped officially supporting Windows on version 1.4.1p1.)

Wesley

On Jun 5, 2013, at 2:55 PM, "Miller, Harry CSNR:EX" wrote:

> Hello,
>
> We are having problems after having installed the MPICH2 (32-bit) along with TauDEM 5.1 32-bit Install Package on one of our Windows Server 2008 64-bit Citrix 6.5 test servers. When registering users using wpmiregister.exe, it appears that only the admin. credentials will work and NOT Citrix regular user accounts. Is there some way to allow Citrix regular users to be able to run commands that call the TauDEM applications from the server?
>
> For example, opening up a command prompt on the test server and typing in ...
>
> mpiexec -n 8 pitremove -z "W:\FOR\RSI\RSI\Projects\wburt\projects\2013\006_HSCM_Boundary\data\Raster\dem25.tif" -fel "T:\test.tif"
>
> requires users' credentials to launch the processes (see screenshot below). Regular Citrix credentials don't seem to be accepted whilst my admin. ones are? Also, do we add the credentials logged in to the server as 'admin' or do individual regular users register their credentials from the Citrix server session? Our users won't have admin. accounts but they will all log into the same test server with the MPICH2 and TauDEM 5.1 install ... and yet the processes ask for credentials even though the users are already logged in with their regular IDIR Citrix login creds???
>
> I registered my regular Citrix user creds. and also my admin. ones but ONLY the admin one is accepted. When I installed MPICH2 I also checked off that it be available to everyone. But we still get the Aborting: Unable to connect message???
>
> Thank you in advance for a reply.
>
> H.J. Miller
> Spatial Technology Analyst | Infrastructure Services Section
> Information Management Branch (IMB)
> Phone: 250-356-5217 | FAX: 250-953-3493 | E-Mail: Harry.Miller at gov.bc.ca
> Corporate Services For The Natural Resources Sector (CSNR)
>
> From: David Tarboton [mailto:dtarb at usu.edu]
> Sent: Tuesday, June 4, 2013 7:23 PM
> To: Miller, Harry CSNR:EX
> Subject: Re: FW: TauDEM problem
>
> Harry,
>
> I do not understand MPICH2 credentials very well so I do not know what to say. I have successfully run on Windows 2008 as a Windows user but not with Citrix. You might want to make sure MPICH2 was installed for all users. Also see http://nick-goodman.blogspot.com/2012/02/using-mpich-from-ms-visual-studio-2010.html for some suggestions that may help. Also you might try mpiexec -register with the admin user. If it is running as admin maybe this will work for other users without having to give them admin credentials. Also, is wpmiregister.exe the same as mpiexec -register? Sometimes there are different versions of MPI on the same machine from different software and if the wrong one runs it does not work. It is frustrating to me to have to have TauDEM depend on this software that is so hard to install and get configured right.
>
> Good luck.
>
> Dave
>
> On 6/4/2013 5:06 PM, Miller, Harry CSNR:EX wrote:
> Hello,
>
> I have installed the MPICH2 (32-bit) and TauDEM 5.1 32-bit Install Package on one of our Windows Server 2008 64-bit Citrix 6.5 test servers. However, when registering users using wpmiregister.exe, it appears that only the admin. credentials will work and NOT Citrix regular user accounts. Is there some way to allow Citrix regular users to be able to run commands that call the TauDEM applications from the server?
>
> For example, opening up a command prompt on the test server and typing in ...
>
> mpiexec -n 8 pitremove -z "W:\FOR\RSI\RSI\Projects\wburt\projects\2013\006_HSCM_Boundary\data\Raster\dem25.tif" -fel "T:\test.tif"
>
> requires users' credentials to launch the processes (see screenshot below). Regular Citrix credentials don't seem to be accepted whilst my admin. ones are? Our users won't have admin. accounts but they will all log into the same test server with the MPICH2 and TauDEM 5.1 install ... and yet the processes ask for credentials even though the users are already logged in with their regular IDIR Citrix login creds???
>
> I registered my regular Citrix user creds. and also my admin. ones.
>
> Is something still missing here??
>
> Thanks for a reply.
>
> Harry
>
> H.J. Miller
> Spatial Technology Analyst | Infrastructure Services Section
> Information Management Branch (IMB)
> Phone: 250-356-5217 | FAX: 250-953-3493 | E-Mail: Harry.Miller at gov.bc.ca
> Corporate Services For The Natural Resources Sector (CSNR)
> Government of British Columbia
>
> From: Miller, Harry CSNR:EX
> Sent: Tuesday, June 4, 2013 3:20 PM
> To: Burt, William FLNR:EX
> Subject: RE: TauDEM fix
>
> Will,
>
> Looks like the creds have to be admin ones. Do you have an admin. account? My admin. creds work but not my regular IDIR creds.
>
> Harry
>
> H.J. Miller
> Spatial Technology Analyst | Infrastructure Services Section
> Information Management Branch (IMB)
> Phone: 250-356-5217 | FAX: 250-953-3493 | E-Mail: Harry.Miller at gov.bc.ca
> Corporate Services For The Natural Resources Sector (CSNR)
>
> _______________________________________________
> discuss mailing list discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From biddisco at cscs.ch Thu Jun 6 01:39:20 2013
From: biddisco at cscs.ch (Biddiscombe, John A.)
Date: Thu, 6 Jun 2013 06:39:20 +0000
Subject: [mpich-discuss] Problems running MPICH jobs under SLURM
In-Reply-To: <51ACA9C4.4080203@fz-juelich.de>
References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov> <51AB10F6.9070109@fz-juelich.de> <51AB8251.6030007@mcs.anl.gov> <51ABA104.1070502@fz-juelich.de> <51ABFAE1.8020808@mcs.anl.gov> <51AC3E6E.6090305@fz-juelich.de> <51AC9A77.5040609@mcs.anl.gov> <51ACA9C4.4080203@fz-juelich.de>
Message-ID: <50320452A334BD42A5EC72BAD21450990862847A@MBX10.d.ethz.ch>

Just FYI. I am also getting the double free error when I run under slurm (mpich 3.0.4). Please don't take correspondence off list as I'm following the thread.

I can't add anything more useful than Markus has already provided with his stack trace and logs.

[I did find that if I configure --with-slurm and use srun instead of mpiexec, then all works, as expected, but I need mpiexec to pass env vars to processes using mpmd syntax]

JB

-----Original Message-----
From: discuss-bounces at mpich.org [mailto:discuss-bounces at mpich.org] On Behalf Of Markus Geimer
Sent: 03 June 2013 16:36
To: Pavan Balaji
Cc: discuss at mpich.org
Subject: Re: [mpich-discuss] Problems running MPICH jobs under SLURM

Pavan,

> 1. Can you run your application processes using "ddd" or some other
> debugger to see where the double free is coming from? You might have
> to build mpich with --enable-g=dbg to get the debug symbols in.
Here is the full stack backtrace:

----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< -----

#0  0x00007ffff6deb475 in *__GI_raise (sig=) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1  0x00007ffff6dee6f0 in *__GI_abort () at abort.c:92
#2  0x00007ffff6e2652b in __libc_message (do_abort=, fmt=) at ../sysdeps/unix/sysv/linux/libc_fatal.c:189
#3  0x00007ffff6e2fd76 in malloc_printerr (action=3, str=0x7ffff6f081e0 "double free or corruption (fasttop)", ptr=) at malloc.c:6283
#4  0x00007ffff6e34aac in *__GI___libc_free (mem=) at malloc.c:3738
#5  0x00007ffff7a1d5d9 in populate_ids_from_mapping (did_map=, num_nodes=, mapping=, pg=) at src/mpid/ch3/src/mpid_vc.c:1063
#6  MPIDI_Populate_vc_node_ids (pg=pg@entry=0x604910, our_pg_rank=our_pg_rank@entry=0) at src/mpid/ch3/src/mpid_vc.c:1193
#7  0x00007ffff7a17dd6 in MPID_Init (argc=argc@entry=0x7fffffffd97c, argv=argv@entry=0x7fffffffd970, requested=requested@entry=0, provided=provided@entry=0x7fffffffd8e8, has_args=has_args@entry=0x7fffffffd8e0, has_env=has_env@entry=0x7fffffffd8e4) at src/mpid/ch3/src/mpid_init.c:156
#8  0x00007ffff7acdf7f in MPIR_Init_thread (argc=argc@entry=0x7fffffffd97c, argv=argv@entry=0x7fffffffd970, required=required@entry=0, provided=provided@entry=0x7fffffffd944) at src/mpi/init/initthread.c:431
#9  0x00007ffff7acd90e in PMPI_Init (argc=0x7fffffffd97c, argv=0x7fffffffd970) at src/mpi/init/init.c:136
#10 0x000000000040086d in main ()

----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 -----

> 2. Can you send me the output with the ssh launcher as well?

See mail sent off-list.

Thanks,
Markus

--
Dr. Markus Geimer
Juelich Supercomputing Centre
Institute for Advanced Simulation
Forschungszentrum Juelich GmbH
52425 Juelich, Germany

Phone: +49-2461-61-1773
Fax: +49-2461-61-6656
E-mail: m.geimer at fz-juelich.de
WWW: http://www.fz-juelich.de/jsc/

------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt
------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------
_______________________________________________
discuss mailing list discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss

From apenya at mcs.anl.gov Thu Jun 6 01:44:22 2013
From: apenya at mcs.anl.gov (Antonio J. Peña)
Date: Wed, 05 Jun 2013 23:44:22 -0700
Subject: Re: [mpich-discuss] Problems running MPICH jobs under SLURM
In-Reply-To: <50320452A334BD42A5EC72BAD21450990862847A@MBX10.d.ethz.ch>
References: <51AA036F.2030204@fz-juelich.de> <51ACA9C4.4080203@fz-juelich.de> <50320452A334BD42A5EC72BAD21450990862847A@MBX10.d.ethz.ch>
Message-ID: <2604634.5QX7XPuoOc@localhost.localdomain>

Thanks for your inputs JB. Everything related to this issue will be discussed through this thread.
You can also check the corresponding ticket: http://trac.mpich.org/projects/mpich/ticket/1871 Antonio On Thursday, June 06, 2013 06:39:20 AM Biddiscombe, John A. wrote: > Just FYI. I am also getting the double free error when I run under slurm > (mpich 3.0.4). Please don't take correspondence off list as I'm following > the thread. > > I can't add anything more useful than Markus has already provided with his > stack trace and logs. > > [I did find that if I configure --with-slurm and use srun instead of mpiexec > , then all works, as expected, but I need mpiexec to pass env vars to > processes using mpmd syntax] > > JB > > -----Original Message----- > From: discuss-bounces at mpich.org [mailto:discuss-bounces at mpich.org] On Behalf > Of Markus Geimer Sent: 03 June 2013 16:36 > To: Pavan Balaji > Cc: discuss at mpich.org > Subject: Re: [mpich-discuss] Problems running MPICH jobs under SLURM > > Pavan, > > > 1. Can you run your application processes using "ddd" or some other > > debugger to see where the double free is coming from? You might have > > to build mpich with --enable-g=dbg to get the debug symbols in. > > Here is the full stack backtrace: > > ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- > > #0 0x00007ffff6deb475 in *__GI_raise (sig=) > at ../nptl/sysdeps/unix/sysv/linux/raise.c:64 > #1 0x00007ffff6dee6f0 in *__GI_abort () at abort.c:92 > #2 0x00007ffff6e2652b in __libc_message (do_abort=, > fmt=) at ../sysdeps/unix/sysv/linux/libc_fatal.c:189 > #3 0x00007ffff6e2fd76 in malloc_printerr (action=3, > str=0x7ffff6f081e0 "double free or corruption (fasttop)", > ptr=) at malloc.c:6283 > #4 0x00007ffff6e34aac in *__GI___libc_free (mem=) > at malloc.c:3738 > #5 0x00007ffff7a1d5d9 in populate_ids_from_mapping ( > did_map=, num_nodes=, > mapping=, pg=) > at src/mpid/ch3/src/mpid_vc.c:1063 > #6 MPIDI_Populate_vc_node_ids (pg=pg at entry=0x604910, > our_pg_rank=our_pg_rank at entry=0) at src/mpid/ch3/src/mpid_vc.c:1193 > #7 0x00007ffff7a17dd6 in MPID_Init (argc=argc at entry=0x7fffffffd97c, > argv=argv at entry=0x7fffffffd970, requested=requested at entry=0, > provided=provided at entry=0x7fffffffd8e8, > has_args=has_args at entry=0x7fffffffd8e0, > has_env=has_env at entry=0x7fffffffd8e4) at > src/mpid/ch3/src/mpid_init.c:156 > #8 0x00007ffff7acdf7f in MPIR_Init_thread (argc=argc at entry=0x7fffffffd97c, > argv=argv at entry=0x7fffffffd970, required=required at entry=0, > provided=provided at entry=0x7fffffffd944) at src/mpi/init/initthread.c:431 > #9 0x00007ffff7acd90e in PMPI_Init (argc=0x7fffffffd97c, > argv=0x7fffffffd970) > at src/mpi/init/init.c:136 > #10 0x000000000040086d in main () > > ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- > > > 2. Can you send me the output with the ssh launcher as well? > > See mail sent off-list. > > Thanks, > Markus > > -- > Dr. Markus Geimer > Juelich Supercomputing Centre > Institute for Advanced Simulation > Forschungszentrum Juelich GmbH > 52425 Juelich, Germany > > Phone: +49-2461-61-1773 > Fax: +49-2461-61-6656 > E-mail: m.geimer at fz-juelich.de > WWW: http://www.fz-juelich.de/jsc/ > > > > ---------------------------------------------------------------------------- > -------------------- > --------------------------------------------------------------------------- > --------------------- Forschungszentrum Juelich GmbH > 52425 Juelich > Sitz der Gesellschaft: Juelich > Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498 > Vorsitzender des Aufsichtsrats: MinDir Dr. 
Karl Eugen Huthmacher > Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender), Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt, Prof. Dr. Sebastian M. Schmidt
> ------------------------------------------------------------------------------------------------
> ------------------------------------------------------------------------------------------------

_______________________________________________
discuss mailing list discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss

From alejandro.allievi at gmail.com Thu Jun 6 08:18:08 2013
From: alejandro.allievi at gmail.com (Alejandro Allievi)
Date: Thu, 6 Jun 2013 10:48:08 -0230
Subject: [mpich-discuss] Linking to personal libraries
Message-ID:

Hi

When linking to personal libraries using MPICH2, how does each process access them in a distributed environment?? Does the linker actually combine everything into a single executable program and "send a copy" of the entire executable to each process even though not all processes may use the libraries?? Can somebody shed some light on the entire linking process??

Thanks for any help!!

Alejandro

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From wbland at mcs.anl.gov Thu Jun 6 08:45:17 2013
From: wbland at mcs.anl.gov (Wesley Bland)
Date: Thu, 6 Jun 2013 08:45:17 -0500 (CDT)
Subject: [mpich-discuss] Linking to personal libraries
In-Reply-To:
References:
Message-ID:

It depends on whether you are linking your library statically or dynamically. If you are linking statically, the linker puts everything in one executable that is sent to all of the nodes. If you do it dynamically, you need to make sure your libraries are available on all of the nodes you will be using (usually by using something like NFS to mirror your home directory across the cluster). You will also need to make sure your environment is set up correctly to allow the libraries to be found on the remote process, usually via an environment variable such as LD_LIBRARY_PATH.

Wesley

On Jun 6, 2013, at 6:18 AM, Alejandro Allievi wrote:

> Hi
>
> When linking to personal libraries using MPICH2, how does each process access them in a distributed environment?? Does the linker actually combine everything into a single executable program and "send a copy" of the entire executable to each process even though not all processes may use the libraries?? Can somebody shed some light on the entire linking process??
>
> Thanks for any help!!
>
> Alejandro
> _______________________________________________
> discuss mailing list discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alejandro.allievi at gmail.com Thu Jun 6 13:09:16 2013
From: alejandro.allievi at gmail.com (Alejandro Allievi)
Date: Thu, 6 Jun 2013 15:39:16 -0230
Subject: [mpich-discuss] Linking to personal libraries
In-Reply-To:
References:
Message-ID:

Hi Wesley,

Just to clarify: provided our NFS mirrors my home directory and that LD_LIBRARY_PATH is set correctly, the dynamic compile/link step for the user (me) is the same as for static compile/link??

Thanks again Wesley!!
Alejandro

On Thu, Jun 6, 2013 at 11:15 AM, Wesley Bland wrote:

> It depends on whether you are linking your library statically or
> dynamically. If you are linking statically, the linker puts everything in
> one executable that is sent to all of the nodes. If you do it dynamically,
> you need to make sure your libraries are available on all of the nodes you
> will be using (usually by using something like NFS to mirror your home
> directory across the cluster). You will also need to make sure your
> environment is set up correctly to allow the libraries to be found on the
> remote process, usually via an environment variable such as
> LD_LIBRARY_PATH.
>
> Wesley
>
> On Jun 6, 2013, at 6:18 AM, Alejandro Allievi <
> alejandro.allievi at gmail.com> wrote:
>
> Hi
>
> When linking to personal libraries using MPICH2, how does each process
> access them in a distributed environment?? Does the linker actually
> combine everything into a single executable program and "send a copy"
> of the entire executable to each process even though not all processes
> may use the libraries?? Can somebody shed some light on the entire
> linking process??
>
> Thanks for any help!!
>
> Alejandro
>
> _______________________________________________
> discuss mailing list discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

--
Alejandro Allievi
http://www.ace-net.ca/wiki/Alejandro_Allievi

"Look, my friend, if you think it over carefully, you will find that in everything our true sentiment is not the one from which we have never wavered, but the one to which we have most habitually returned." Denis Diderot.

This e-mail may contain confidential information, and is intended only for the named recipient and may be privileged. Distribution or copying of this email is prohibited. If you are not the named recipient, please notify us immediately and permanently destroy this email and all copies of it.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From wbland at mcs.anl.gov Thu Jun 6 13:10:09 2013
From: wbland at mcs.anl.gov (Wesley Bland)
Date: Thu, 6 Jun 2013 11:10:09 -0700
Subject: [mpich-discuss] Linking to personal libraries
In-Reply-To:
References:
Message-ID: <7B9CC100-E89E-4FB4-BA26-C5C660BD9125@mcs.anl.gov>

That's true and for most smallish clusters, that's how people use it.

On Jun 6, 2013, at 11:09 AM, Alejandro Allievi wrote:

> Hi Wesley,
>
> Just to clarify: provided our NFS mirrors my home directory and that
> LD_LIBRARY_PATH is set correctly, the dynamic compile/link step for the
> user (me) is the same as for static compile/link??
>
> Thanks again Wesley!!
>
> Alejandro
>
> On Thu, Jun 6, 2013 at 11:15 AM, Wesley Bland wrote:
>
>> It depends on whether you are linking your library statically or
>> dynamically. If you are linking statically, the linker puts everything in
>> one executable that is sent to all of the nodes. If you do it dynamically,
>> you need to make sure your libraries are available on all of the nodes you
>> will be using (usually by using something like NFS to mirror your home
>> directory across the cluster). You will also need to make sure your
>> environment is set up correctly to allow the libraries to be found on the
>> remote process, usually via an environment variable such as
>> LD_LIBRARY_PATH.
>>
>> Wesley
>>
>> On Jun 6, 2013, at 6:18 AM, Alejandro Allievi <
>> alejandro.allievi at gmail.com> wrote:
>>
>> Hi
>>
>> When linking to personal libraries using MPICH2, how does each process
>> access them in a distributed environment?? Does the linker actually
>> combine everything into a single executable program and "send a copy"
>> of the entire executable to each process even though not all processes
>> may use the libraries?? Can somebody shed some light on the entire
>> linking process??
>>
>> Thanks for any help!!
>>
>> Alejandro
>>
>> _______________________________________________
>> discuss mailing list discuss at mpich.org
>> To manage subscription options or unsubscribe:
>> https://lists.mpich.org/mailman/listinfo/discuss
>
> --
> Alejandro Allievi
> http://www.ace-net.ca/wiki/Alejandro_Allievi
>
> _______________________________________________
> discuss mailing list discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

From alejandro.allievi at gmail.com Thu Jun 6 13:12:38 2013
From: alejandro.allievi at gmail.com (Alejandro Allievi)
Date: Thu, 6 Jun 2013 15:42:38 -0230
Subject: [mpich-discuss] Linking to personal libraries
In-Reply-To: <7B9CC100-E89E-4FB4-BA26-C5C660BD9125@mcs.anl.gov>
References: <7B9CC100-E89E-4FB4-BA26-C5C660BD9125@mcs.anl.gov>
Message-ID:

Is there a better way??
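(To make the two modes Wesley describes concrete, here is a minimal sketch. The personal library libmylib, its function my_func(), and the /home/user/lib path are hypothetical placeholders, not anything from this thread; the build commands are in the leading comment.)

/* app.c -- toy MPI program calling a personal library on every rank.
 *
 * Static link (the library code is copied into the single executable
 * that gets started on every node):
 *     mpicc app.c /home/user/lib/libmylib.a -o app
 *
 * Dynamic link (libmylib.so must be resolvable on every node at
 * startup, e.g. via an NFS-mirrored home directory plus
 * LD_LIBRARY_PATH=/home/user/lib):
 *     mpicc app.c -L/home/user/lib -lmylib -o app
 */
#include <mpi.h>
#include <stdio.h>

int my_func(void);  /* provided by the hypothetical libmylib */

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every rank that reaches this call needs the library code,
       whether it arrived inside the executable (static) or is
       found by the runtime loader (dynamic). */
    printf("rank %d: my_func() = %d\n", rank, my_func());

    MPI_Finalize();
    return 0;
}

Either way the compile/link step the user performs is the same, which is the point of the question above; the difference is only where the library code lives at run time.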
On Thu, Jun 6, 2013 at 3:40 PM, Wesley Bland wrote:

> That's true and for most smallish clusters, that's how people use it.
>
> On Jun 6, 2013, at 11:09 AM, Alejandro Allievi <
> alejandro.allievi at gmail.com> wrote:
>
> Hi Wesley,
>
> Just to clarify: provided our NFS mirrors my home directory and
> that LD_LIBRARY_PATH is set correctly, the dynamic compile/link step for
> the user (me) is the same as for static compile/link??
>
> Thanks again Wesley!!
>
> Alejandro
>
> On Thu, Jun 6, 2013 at 11:15 AM, Wesley Bland wrote:
>
>> It depends on whether you are linking your library statically or
>> dynamically. If you are linking statically, the linker puts everything in
>> one executable that is sent to all of the nodes. If you do it dynamically,
>> you need to make sure your libraries are available on all of the nodes you
>> will be using (usually by using something like NFS to mirror your home
>> directory across the cluster). You will also need to make sure your
>> environment is set up correctly to allow the libraries to be found on the
>> remote process, usually via an environment variable such as
>> LD_LIBRARY_PATH.
>>
>> Wesley
>>
>> On Jun 6, 2013, at 6:18 AM, Alejandro Allievi <
>> alejandro.allievi at gmail.com> wrote:
>>
>> Hi
>>
>> When linking to personal libraries using MPICH2, how does each process
>> access them in a distributed environment?? Does the linker actually
>> combine everything into a single executable program and "send a copy"
>> of the entire executable to each process even though not all processes
>> may use the libraries?? Can somebody shed some light on the entire
>> linking process??
>>
>> Thanks for any help!!
>>
>> Alejandro
>>
>> _______________________________________________
>> discuss mailing list discuss at mpich.org
>> To manage subscription options or unsubscribe:
>> https://lists.mpich.org/mailman/listinfo/discuss
>
> --
> Alejandro Allievi
> http://www.ace-net.ca/wiki/Alejandro_Allievi
From wbland at mcs.anl.gov Thu Jun 6 13:14:55 2013
From: wbland at mcs.anl.gov (Wesley Bland)
Date: Thu, 6 Jun 2013 11:14:55 -0700
Subject: [mpich-discuss] Linking to personal libraries
In-Reply-To:
References: <7B9CC100-E89E-4FB4-BA26-C5C660BD9125@mcs.anl.gov>
Message-ID:

No, this is the correct way to do it.

On Jun 6, 2013, at 11:12 AM, Alejandro Allievi wrote:

> Is there a better way?
From alejandro.allievi at gmail.com Thu Jun 6 13:22:37 2013
From: alejandro.allievi at gmail.com (Alejandro Allievi)
Date: Thu, 6 Jun 2013 15:52:37 -0230
Subject: [mpich-discuss] Linking to personal libraries
In-Reply-To:
References: <7B9CC100-E89E-4FB4-BA26-C5C660BD9125@mcs.anl.gov>
Message-ID:

Thanks!!

On Thu, Jun 6, 2013 at 3:44 PM, Wesley Bland wrote:

> No, this is the correct way to do it.
From wbland at mcs.anl.gov Thu Jun 6 13:28:20 2013
From: wbland at mcs.anl.gov (Wesley Bland)
Date: Thu, 6 Jun 2013 11:28:20 -0700
Subject: [mpich-discuss] MPICH problem
In-Reply-To:
References: <446D93AA-A51E-46E0-A265-4284DF444E74@mcs.anl.gov>
Message-ID: <9A755467-3360-4BD8-9ACD-FAB3F09B8DC7@mcs.anl.gov>

(Re-adding mpich-discuss)

Sorry I can't help you more. I'm afraid I'll have to defer to Jayesh on
this one; he's the only one who still knows anything about Windows
support.

On Jun 6, 2013, at 11:17 AM, "Miller, Harry CSNR:EX" wrote:

> I am at a loss, as when I even 'google' the problem, everyone writing
> in seems to have a similar problem but no final (fix) resolution is
> ever offered:
>
> http://trac.mpich.org/projects/mpich/ticket/1151
> http://trac.mpich.org/projects/mpich/ticket/1577
> http://trac.mpich.org/projects/mpich/ticket/1692
> http://lists.mcs.anl.gov/pipermail/mpich-discuss/2011-March/009303.html
> http://lists.mcs.anl.gov/pipermail/mpich-discuss/2010-May/007293.html
> http://lists.mcs.anl.gov/pipermail/mpich-discuss/2009-July/005324.html
> http://lists.mcs.anl.gov/pipermail/mpich-discuss/2008-November/000032.html
> http://lists.mcs.anl.gov/pipermail/mpich-discuss/2005-September/000768.html
> https://groups.google.com/forum/?fromgroups#!topic/fds-smv/3nSxMLMWRvY
> http://auriza.site40.net/notes/mpi/mpich2-on-windows-xp/
>
> I can go on ...
>
> mpiexec -register ONLY works with the admin user credentials (mine). I
> installed MPICH as 'admin', as indicated one should, but now it doesn't
> work if I register other regular Citrix user credentials using either
> wmpiexec or mpiexec. Is wpmiregister.exe the same as 'mpiexec.exe
> -register'? Maybe I have to remove the credentials registered with the
> wmpiexec.exe app and make sure to register with mpiexec.exe using the
> 'mpiexec -register' command?
>
> Confusing.
>
> Harry
>
> From: Wesley Bland [mailto:wbland at mcs.anl.gov]
> Sent: Wednesday, June 5, 2013 9:05 PM
> Subject: Re: [mpich-discuss] MPICH problem
>
> It sounds like your problem is outside of MPICH's control, but just to
> be sure: you can run a simple MPI application and get expected output
> as long as you're using a regular Windows user account, correct?
> (You'll have to forgive me if I don't know all the Windowsisms. There
> aren't many Windows people left on this project, as we stopped
> officially supporting Windows on version 1.4.1p)
>
> On Jun 5, 2013, at 2:55 PM, "Miller, Harry CSNR:EX" wrote:
>
> Hello,
>
> We are having problems after having installed the MPICH2 (32-bit) along
> with the TauDEM 5.1 32-bit install package on one of our Windows Server
> 2008 64-bit Citrix 6.5 test servers. When registering users using
> wpmiregister.exe, it appears that only the admin credentials will work
> and NOT regular Citrix user accounts. Is there some way to allow
> regular Citrix users to run commands that call the TauDEM applications
> from the server?
>
> For example, opening up a command prompt on the test server and typing
>
> mpiexec -n 8 pitremove -z "W:\FOR\RSI\RSI\Projects\wburt\projects\2013\006_HSCM_Boundary\data\Raster\dem25.tif" -fel "T:\test.tif"
>
> requires user credentials to launch the processes (see screenshot
> below). Regular Citrix credentials don't seem to be accepted, whilst my
> admin ones are. Also, do we add the credentials logged in to the server
> as 'admin', or do individual regular users register their credentials
> from the Citrix server session? Our users won't have admin accounts,
> but they will all log into the same test server with the MPICH2 and
> TauDEM 5.1 install ... and yet the processes ask for credentials even
> though the users are already logged in with their regular IDIR Citrix
> login creds.
>
> I registered my regular Citrix user creds and also my admin ones, but
> ONLY the admin one is accepted. When I installed MPICH2 I also checked
> off that it be available to everyone. But we still get the "Aborting:
> Unable to connect" message.
>
> Thank you in advance for a reply.
>
> From: David Tarboton [mailto:dtarb at usu.edu]
> Sent: Tuesday, June 4, 2013 7:23 PM
> Subject: Re: FW: TauDEM problem
>
> Harry,
>
> I do not understand MPICH2 credentials very well, so I do not know what
> to say. I have successfully run on Windows 2008 as a Windows user, but
> not with Citrix. You might want to make sure MPICH2 was installed for
> all users. Also see
> http://nick-goodman.blogspot.com/2012/02/using-mpich-from-ms-visual-studio-2010.html
> for some suggestions that may help. Also, you might try mpiexec
> -register with the admin user; if it is running as admin, maybe this
> will work for other users without having to give them admin
> credentials. Also, is wpmiregister.exe the same as mpiexec -register?
> Sometimes there are different versions of MPI on the same machine from
> different software, and if the wrong one runs it does not work. It is
> frustrating to me to have TauDEM depend on this software that is so
> hard to install and get configured right.
>
> Good luck.
>
> Dave
>
> From: Miller, Harry CSNR:EX
> Sent: Tuesday, June 4, 2013 3:20 PM
> To: Burt, William FLNR:EX
> Subject: RE: TauDEM fix
>
> Will,
>
> Looks like the creds have to be admin ones. Do you have an admin
> account? My admin creds work, but not my regular IDIR creds.
>
> Harry

From balaji at mcs.anl.gov Fri Jun 7 01:40:18 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Fri, 07 Jun 2013 01:40:18 -0500
Subject: [mpich-discuss] Problems running MPICH jobs under SLURM
In-Reply-To: <50320452A334BD42A5EC72BAD21450990862847A@MBX10.d.ethz.ch>
References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid>
 <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov>
 <51AB10F6.9070109@fz-juelich.de> <51AB8251.6030007@mcs.anl.gov>
 <51ABA104.1070502@fz-juelich.de> <51ABFAE1.8020808@mcs.anl.gov>
 <51AC3E6E.6090305@fz-juelich.de> <51AC9A77.5040609@mcs.anl.gov>
 <51ACA9C4.4080203@fz-juelich.de>
 <50320452A334BD42A5EC72BAD21450990862847A@MBX10.d.ethz.ch>
Message-ID: <51B18052.2080603@mcs.anl.gov>

FYI, I believe this is now fixed.
Please try out the latest nightly snapshot and let us know if you are
still running into this issue:

http://www.mpich.org/static/tarballs/nightly/master/hydra/
http://www.mpich.org/static/tarballs/nightly/master/mpich/

 -- Pavan

On 06/06/2013 01:39 AM, Biddiscombe, John A. wrote:
> Just FYI. I am also getting the double free error when I run under
> slurm (mpich 3.0.4). Please don't take correspondence off list, as I'm
> following the thread.
>
> I can't add anything more useful than Markus has already provided with
> his stack trace and logs.
>
> [I did find that if I configure --with-slurm and use srun instead of
> mpiexec, then all works, as expected, but I need mpiexec to pass env
> vars to processes using mpmd syntax]
>
> JB
>
> -----Original Message-----
> From: discuss-bounces at mpich.org [mailto:discuss-bounces at mpich.org] On Behalf Of Markus Geimer
> Sent: 03 June 2013 16:36
> To: Pavan Balaji
> Cc: discuss at mpich.org
> Subject: Re: [mpich-discuss] Problems running MPICH jobs under SLURM
>
> Pavan,
>
>> 1. Can you run your application processes using "ddd" or some other
>> debugger to see where the double free is coming from? You might have
>> to build mpich with --enable-g=dbg to get the debug symbols in.
>
> Here is the full stack backtrace:
>
> ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< -----
>
> #0  0x00007ffff6deb475 in *__GI_raise (sig=)
>     at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
> #1  0x00007ffff6dee6f0 in *__GI_abort () at abort.c:92
> #2  0x00007ffff6e2652b in __libc_message (do_abort=,
>     fmt=) at ../sysdeps/unix/sysv/linux/libc_fatal.c:189
> #3  0x00007ffff6e2fd76 in malloc_printerr (action=3,
>     str=0x7ffff6f081e0 "double free or corruption (fasttop)",
>     ptr=) at malloc.c:6283
> #4  0x00007ffff6e34aac in *__GI___libc_free (mem=)
>     at malloc.c:3738
> #5  0x00007ffff7a1d5d9 in populate_ids_from_mapping (
>     did_map=, num_nodes=, mapping=, pg=)
>     at src/mpid/ch3/src/mpid_vc.c:1063
> #6  MPIDI_Populate_vc_node_ids (pg=pg@entry=0x604910,
>     our_pg_rank=our_pg_rank@entry=0) at src/mpid/ch3/src/mpid_vc.c:1193
> #7  0x00007ffff7a17dd6 in MPID_Init (argc=argc@entry=0x7fffffffd97c,
>     argv=argv@entry=0x7fffffffd970, requested=requested@entry=0,
>     provided=provided@entry=0x7fffffffd8e8,
>     has_args=has_args@entry=0x7fffffffd8e0,
>     has_env=has_env@entry=0x7fffffffd8e4)
>     at src/mpid/ch3/src/mpid_init.c:156
> #8  0x00007ffff7acdf7f in MPIR_Init_thread (argc=argc@entry=0x7fffffffd97c,
>     argv=argv@entry=0x7fffffffd970, required=required@entry=0,
>     provided=provided@entry=0x7fffffffd944) at src/mpi/init/initthread.c:431
> #9  0x00007ffff7acd90e in PMPI_Init (argc=0x7fffffffd97c,
>     argv=0x7fffffffd970) at src/mpi/init/init.c:136
> #10 0x000000000040086d in main ()
>
> ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 -----
>
>> 2. Can you send me the output with the ssh launcher as well?
>
> See mail sent off-list.
>
> Thanks,
> Markus

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji

From biddisco at cscs.ch Fri Jun 7 03:40:26 2013
From: biddisco at cscs.ch (Biddiscombe, John A.)
Date: Fri, 7 Jun 2013 08:40:26 +0000
Subject: [mpich-discuss] Problems running MPICH jobs under SLURM
In-Reply-To: <51B18052.2080603@mcs.anl.gov>
Message-ID: <50320452A334BD42A5EC72BAD21450990862C3FE@MBX10.d.ethz.ch>

I downloaded the nightly tarball and recompiled/installed mpich (used
mpich-master-v3.0.4-259-gf322ce79). I still get this (output below) with
a simple hello world program.

Now you must understand that I have no idea what I'm doing (really). I
wanted to test some debugging features under slurm, so I installed slurm
myself on a workstation with just 2 cores and have the bare minimum
setup. I'm doing the following

sudo munged &
sudo slurmd &
sudo slurmctld -D

and then I can run jobs on the local machine, and it seems to be OK,
except that MPI jobs always give the double free error as below when run
under slurm, but are just fine when run from the command line.

My suspicion is that slurm is not actually using the Hydra PM that I
just compiled. I installed slurm from RPMs. Should I recompile slurm
myself and somehow tell it which MPI to use?

My job script looks as follows:

######################
#!/bin/bash
#
# Create the job script from the supplied parameters
#
#SBATCH --job-name=pvserver
#SBATCH --time=00:04:00
#SBATCH --nodes=1
#SBATCH --partition=normal
#SBATCH --output=/home/biddisco/slurm.out
#SBATCH --error=/home/biddisco/slurm.err
#SBATCH --mem=2048MB

#export
# echo "Path is $PATH"
# echo "LD_LIBRARY_PATH is " $LD_LIBRARY_PATH
# cd /home/biddisco/build/pv-38/bin/
#export PMI_DEBUG=9
#ulimit -s unlimited
#ulimit -c 0

/home/biddisco/apps/mpich-3.0.4/bin/mpiexec -rmk slurm -n 2 /home/biddisco/build/hello/hello
######################

It gives the same result with or without the -rmk slurm and the #ulimit
settings.

Apologies for wasting your time. I'm certain I'm doing something wrong -
I just don't know what.
JB

biddisco at breno2 ~ $ more ~/slurm.err
*** glibc detected *** /home/biddisco/build/hello/hello: double free or corruption (fasttop): 0x0000000001896340 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x7eb96)[0x7f9a1695cb96]
/home/biddisco/build/hello/hello(MPIDI_Populate_vc_node_ids+0x3f9)[0x427c89]
/home/biddisco/build/hello/hello(MPID_Init+0x136)[0x4253f6]
/home/biddisco/build/hello/hello(MPIR_Init_thread+0x22f)[0x414cbf]
/home/biddisco/build/hello/hello(MPI_Init+0xae)[0x4146ee]
/home/biddisco/build/hello/hello(main+0x22)[0x413f2e]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7f9a168ff76d]
/home/biddisco/build/hello/hello[0x413e31]
======= Memory map: ========
00400000-0051a000 r-xp 00000000 08:01 8661191 /home/biddisco/build/hello/hello
0071a000-00727000 r--p 0011a000 08:01 8661191 /home/biddisco/build/hello/hello
00727000-00729000 rw-p 00127000 08:01 8661191 /home/biddisco/build/hello/hello
00729000-00751000 rw-p 00000000 00:00 0
01895000-018b6000 rw-p 00000000 00:00 0 [heap]
7f9a166c8000-7f9a166dd000 r-xp 00000000 08:01 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1
7f9a166dd000-7f9a168dc000 ---p 00015000 08:01 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1
7f9a168dc000-7f9a168dd000 r--p 00014000 08:01 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1
7f9a168dd000-7f9a168de000 rw-p 00015000 08:01 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1
7f9a168de000-7f9a16a93000 r-xp 00000000 08:01 9050358 /lib/x86_64-linux-gnu/libc-2.15.so
7f9a16a93000-7f9a16c92000 ---p 001b5000 08:01 9050358 /lib/x86_64-linux-gnu/libc-2.15.so
7f9a16c92000-7f9a16c96000 r--p 001b4000 08:01 9050358 /lib/x86_64-linux-gnu/libc-2.15.so
7f9a16c96000-7f9a16c98000 rw-p 001b8000 08:01 9050358 /lib/x86_64-linux-gnu/libc-2.15.so
7f9a16c98000-7f9a16c9d000 rw-p 00000000 00:00 0
7f9a16c9d000-7f9a16cb5000 r-xp 00000000 08:01 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so
7f9a16cb5000-7f9a16eb4000 ---p 00018000 08:01 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so
7f9a16eb4000-7f9a16eb5000 r--p 00017000 08:01 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so
7f9a16eb5000-7f9a16eb6000 rw-p 00018000 08:01 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so
7f9a16eb6000-7f9a16eba000 rw-p 00000000 00:00 0
7f9a16eba000-7f9a16edc000 r-xp 00000000 08:01 9050344 /lib/x86_64-linux-gnu/ld-2.15.so
7f9a170c1000-7f9a170c4000 rw-p 00000000 00:00 0
7f9a170d9000-7f9a170dc000 rw-p 00000000 00:00 0
7f9a170dc000-7f9a170dd000 r--p 00022000 08:01 9050344 /lib/x86_64-linux-gnu/ld-2.15.so
7f9a170dd000-7f9a170df000 rw-p 00023000 08:01 9050344 /lib/x86_64-linux-gnu/ld-2.15.so
7fff52f27000-7fff52f48000 rw-p 00000000 00:00 0 [stack]
7fff52fff000-7fff53000000 r-xp 00000000 00:00 0 [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]
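For orientation, the failing runs above amount to a minimal MPI program
of roughly this shape (a sketch only; the sources of the hello binaries
used in the reports were not posted). Note that the crash is inside
MPI_Init itself, before any user code runs.

----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< -----

/* hello.c -- sketch of the kind of test program exercised above.
 * Build: mpicc -o hello hello.c */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    /* The double-free backtraces above all point here:
     * MPI_Init -> MPID_Init -> MPIDI_Populate_vc_node_ids. */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 -----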
From alejandro.allievi at gmail.com Fri Jun 7 06:48:30 2013
From: alejandro.allievi at gmail.com (Alejandro Allievi)
Date: Fri, 7 Jun 2013 09:18:30 -0230
Subject: [mpich-discuss] Linking to personal libraries
In-Reply-To:
References: <7B9CC100-E89E-4FB4-BA26-C5C660BD9125@mcs.anl.gov>
Message-ID:

Good morning Wesley,

Under NFS, is there a way to bypass dynamic linking and "force" static
linking?

Thanks.

Alejandro

From Steffen.Weise at iec.tu-freiberg.de Fri Jun 7 07:50:05 2013
From: Steffen.Weise at iec.tu-freiberg.de (Weise Steffen)
Date: Fri, 7 Jun 2013 12:50:05 +0000
Subject: [mpich-discuss] crash on 2^31 size in MPI_Win_allocate_shared(...)
Message-ID: <4B76CCC2-F6D0-41BB-B0A7-6389448D8008@iec.tu-freiberg.de>

Dear mailing-list,

This is my first time posting here. I found that with version 3.0.4,
using MPI_Win_allocate_shared I get an error with a size of exactly
2^31; everything below and above is OK. Though I also had the same issue
with 2^34. Some kind of division or type conversion seems to be off.
(/dev/shm has 4G, so it is not a size issue... I know what those errors
look like.)

I attach my code and the output I get on a Linux (Debian 6.0) 64-bit
machine (same issue on a Mac, though). I'll be happy to provide more
machine details or anything you need to analyse what's going on.

With kind regards,
Steffen Weise

-------------- next part --------------
A non-text attachment was scrubbed... Name: limit.c Type: application/octet-stream Size: 1409 bytes Desc: limit.c URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed... Name: limit.txt URL:

From balaji at mcs.anl.gov Fri Jun 7 08:32:45 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Fri, 07 Jun 2013 08:32:45 -0500
Subject: [mpich-discuss] Linking to personal libraries
In-Reply-To:
References: <7B9CC100-E89E-4FB4-BA26-C5C660BD9125@mcs.anl.gov>
Message-ID: <51B1E0FD.1010307@mcs.anl.gov>

Static vs. dynamic linking has nothing to do with the NFS. With static
linking, the executable needs to be available on all nodes (either
through the NFS or through explicit copies). With dynamic linking, the
executable and all its dependency libraries need to be available on all
nodes (again, either through the NFS or through explicit copies).

So far, mpich has used static builds by default.
All released versions so far use this model. You can ask it to build
dynamic libraries as well by passing --enable-shared to configure.

We recently moved to building mpich as a dynamic library by default as
well (both static and dynamic libraries will be built, but the dynamic
library is prioritized for linking). You'll see this model in the
upcoming mpich-3.1.x series and beyond.

 -- Pavan

On 06/07/2013 06:48 AM, Alejandro Allievi wrote:
> Good morning Wesley,
>
> Under NFS, is there a way to bypass dynamic linking and "force" static
> linking?
>
> Thanks.
>
> Alejandro

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji
prohibido. Si Usted no fuera el > destinatario designado, por favor notif?quenos de inmediato y > destruya este e-mail en forma permanente as? como todas las copias > del mismo. _Ce courriel peut renfermer des renseignements > confidentiels et privil?gi?s et s'adresse au destinataire d?sign? > seulement. La distribution ou la copie de ce courriel est interdit. > Si vous n'?tes pas le destinataire d?sign?, veuillez nous en aviser > imm?diatement et d?truire de fa?on permanente ce courriel ainsi que > toute copie de celui-ci_. This e-mail may contain confidential > information, and is intended only for the named recipient and may be > privileged. Distribution or copying of this email is prohibited. If > you are not the named recipient, please notify us immediately and > permanently destroy this email and all copies of it. > > > > > -- > Alejandro Allievi > http://www.ace-net.ca/wiki/Alejandro_Allievi > "Tenez, mon ami, si vous y pensez bien, vous trouverez qu'en tout, notre > v?ritable sentiment n'est pas celui dans lequel nous n'avons jamais > vacill?; mais celui auquel nous sommes le plus habituellement revenus". > Denis Diderot. > Este e-mail esta consignado s?lo para el destinatario designado en el > mismo y puede contener informaci?n confidencial y privilegiada. Su > distribuci?n o copiado est? prohibido. Si Usted no fuera el destinatario > designado, por favor notif?quenos de inmediato y destruya este e-mail en > forma permanente as? como todas las copias del mismo. _Ce courriel peut > renfermer des renseignements confidentiels et privil?gi?s et s'adresse > au destinataire d?sign? seulement. La distribution ou la copie de ce > courriel est interdit. Si vous n'?tes pas le destinataire d?sign?, > veuillez nous en aviser imm?diatement et d?truire de fa?on permanente ce > courriel ainsi que toute copie de celui-ci_. This e-mail may contain > confidential information, and is intended only for the named recipient > and may be privileged. Distribution or copying of this email is > prohibited. If you are not the named recipient, please notify us > immediately and permanently destroy this email and all copies of it. > > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > -- Pavan Balaji http://www.mcs.anl.gov/~balaji From wbland at mcs.anl.gov Fri Jun 7 08:35:17 2013 From: wbland at mcs.anl.gov (Wesley Bland) Date: Fri, 7 Jun 2013 08:35:17 -0500 (CDT) Subject: [mpich-discuss] Linking to personal libraries In-Reply-To: References: <7B9CC100-E89E-4FB4-BA26-C5C660BD9125@mcs.anl.gov> Message-ID: As long as you have *.a versions of your library, you can use those instead of the *.so versions. You'll need to pass -static to mpicc to make that work. You also need to link MPICH statically, but as of all of the released versions of MPICH, that is the default behavior. In future releases, that will change. I believe the configure flag that you'll have to use is --enable-static, but I could be wrong about that. Wesley On Jun 7, 2013, at 4:48 AM, Alejandro Allievi wrote: > Good morning Wesley, > > Under NFS, is there a way to bypass dynamic linking and "force" static linking?? > > Thanks. > > Alejandro > > > On Thu, Jun 6, 2013 at 3:52 PM, Alejandro Allievi wrote: > Thanks!! > > > On Thu, Jun 6, 2013 at 3:44 PM, Wesley Bland wrote: > No, this is the correct way to do it. > > On Jun 6, 2013, at 11:12 AM, Alejandro Allievi wrote: > >> Is there a better way?? 
>> >> >> On Thu, Jun 6, 2013 at 3:40 PM, Wesley Bland wrote: >> That's true and for most smallish clusters, that's how people use it. >> >> On Jun 6, 2013, at 11:09 AM, Alejandro Allievi wrote: >> >>> Hi Wesley, >>> >>> Just to clarify myself: provided our NFS mirrors my home directory and that LD_LIBRARY_PATH is set correctly, the dynamic compile/link step for the user (me) is the same as for static compile/link?? >>> >>> Thanks again Wesley!! >>> >>> Alejandro >>> >>> >>> On Thu, Jun 6, 2013 at 11:15 AM, Wesley Bland wrote: >>> It depends on whether you are linking your library statically or dynamically. If you are linking statically, the linker puts everything in one executable that is sent to all of the nodes. If you do it dynamically, you need to make sure your libraries are available on all of the nodes you will be using (usually by using something like NFS to mirror your home directory across the cluster. You will also need to make sure your environment is set up correctly to allow the libraries to be found on the remote process, usually via an environment variable such as LD_LIBRARY_PATH. >>> >>> Wesley >>> >>> >>> On Jun 6, 2013, at 6:18 AM, Alejandro Allievi wrote: >>> >>>> Hi >>>> >>>> When linking to personal libraries using MPICH2, how does each process access them in a distributed environment?? Does the linker actually combines everything into a single executable program and "sends a copy" of entire executable to each process even though not all processes may use the libraries?? Can somebody shed some light on the entire linking process?? >>>> >>>> Thanks for any help!! >>>> >>>> Alejandro >>>> _______________________________________________ >>>> discuss mailing list discuss at mpich.org >>>> To manage subscription options or unsubscribe: >>>> https://lists.mpich.org/mailman/listinfo/discuss >>> >>> _______________________________________________ >>> discuss mailing list discuss at mpich.org >>> To manage subscription options or unsubscribe: >>> https://lists.mpich.org/mailman/listinfo/discuss >>> >>> >>> >>> -- >>> Alejandro Allievi >>> http://www.ace-net.ca/wiki/Alejandro_Allievi >>> >>> "Tenez, mon ami, si vous y pensez bien, vous trouverez qu'en tout, notre v?ritable sentiment n'est pas celui dans lequel nous n'avons jamais vacill?; mais celui auquel nous sommes le plus habituellement revenus". Denis Diderot. >>> >>> Este e-mail esta consignado s?lo para el destinatario designado en el mismo y puede contener informaci?n confidencial y privilegiada. Su distribuci?n o copiado est? prohibido. Si Usted no fuera el destinatario designado, por favor notif?quenos de inmediato y destruya este e-mail en forma permanente as? como todas las copias del mismo. Ce courriel peut renfermer des renseignements confidentiels et privil?gi?s et s'adresse au destinataire d?sign? seulement. La distribution ou la copie de ce courriel est interdit. Si vous n'?tes pas le destinataire d?sign?, veuillez nous en aviser imm?diatement et d?truire de fa?on permanente ce courriel ainsi que toute copie de celui-ci. This e-mail may contain confidential information, and is intended only for the named recipient and may be privileged. Distribution or copying of this email is prohibited. If you are not the named recipient, please notify us immediately and permanently destroy this email and all copies of it. 
>>
>> --
>> Alejandro Allievi
>> http://www.ace-net.ca/wiki/Alejandro_Allievi
>>
>> _______________________________________________
>> discuss mailing list     discuss at mpich.org
>> To manage subscription options or unsubscribe:
>> https://lists.mpich.org/mailman/listinfo/discuss
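A minimal sketch of the static-linking recipe described in the thread above; the install prefix and the library name "libmylib.a" are made-up examples for illustration, not paths from this thread:

----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< -----

# Build MPICH with static libraries (the default up to 3.0.4).
./configure --prefix=$HOME/apps/mpich --enable-static
make && make install

# Link a personal static library into a fully static executable.
$HOME/apps/mpich/bin/mpicc -static hello.c -o hello -L$HOME/lib -lmylib

# A fully static binary has no .so dependencies to mirror over NFS.
ldd ./hello    # should report "not a dynamic executable"

----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 -----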
From balaji at mcs.anl.gov Fri Jun 7 08:38:21 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Fri, 07 Jun 2013 08:38:21 -0500
Subject: [mpich-discuss] Linking to personal libraries
In-Reply-To: 
References: <7B9CC100-E89E-4FB4-BA26-C5C660BD9125@mcs.anl.gov>
Message-ID: <51B1E24D.9040804@mcs.anl.gov>

On 06/07/2013 08:35 AM, Wesley Bland wrote:
> As long as you have *.a versions of your library, you can use those
> instead of the *.so versions. You'll need to pass -static to mpicc to
> make that work.
>
> You also need to link MPICH statically, but in all released versions
> of MPICH so far that is the default behavior. In future releases, that
> will change. I believe the configure flag that you'll have to use
> is --enable-static, but I could be wrong about that.

FYI, --enable-static is the right flag, but it's not required. Till mpich-3.0.4, the default is --enable-static --disable-shared. From mpich-3.1.x, the default will be --enable-static --enable-shared (though shared will be prioritized by the system, unless you pass -static to mpicc).

 -- Pavan

-- 
Pavan Balaji
http://www.mcs.anl.gov/~balaji

From jhammond at alcf.anl.gov Fri Jun 7 09:00:40 2013
From: jhammond at alcf.anl.gov (Jeff Hammond)
Date: Fri, 7 Jun 2013 07:00:40 -0700
Subject: [mpich-discuss] crash on 2^31 size in MPI_Win_allocate_shared(...)
In-Reply-To: <4B76CCC2-F6D0-41BB-B0A7-6389448D8008@iec.tu-freiberg.de>
References: <4B76CCC2-F6D0-41BB-B0A7-6389448D8008@iec.tu-freiberg.de>
Message-ID: 

First, "long unsigned int window_size=2147483648" is not correct. The type you need to use there is MPI_Aint. The syntax of this function is

int MPI_Win_allocate_shared(MPI_Aint size, int disp_unit, MPI_Info info, MPI_Comm comm, void *baseptr, MPI_Win *win)

It may be true that "long unsigned int" is safely cast to MPI_Aint, but that's a very dangerous way to write code and it may be broken on some platforms. In any case, everything above 2^31 is probably not okay.
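A minimal sketch of the type-correct call (untested; the 2^31 size is just an example, and all ranks of the communicator must sit on one shared-memory node):

----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< -----

#include <mpi.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_Aint size = (MPI_Aint)1 << 31;  /* size is MPI_Aint, not long unsigned int */
    void    *mem;
    MPI_Win  win;

    MPI_Init(&argc, &argv);
    MPI_Win_allocate_shared(size, 1, MPI_INFO_NULL, MPI_COMM_WORLD,
                            &mem, &win);

    /* debugging check only: touch every byte to verify that the
     * implementation really handed back 'size' bytes */
    memset(mem, 0, (size_t)size);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 -----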
Unless absolutely every integer type used in the code paths you are hitting is size_t (or equivalent) and not int, you're going to hit overflow somewhere.

Maybe I'm wrong, but you should verify (as a debugging mechanism, not in general) that MPI_Win_allocate_shared is behaving as desired by memset-ing the resulting data (mem) to verify that you're actually getting e.g. 2^34 bytes back. If /dev/shm is 4G, I'm not sure how that's possible, but maybe the implementation doesn't use that.

I'm going to be on a plane today but I'll run your code on my machine and try to figure out more about how "count-safe" MPI_Win_allocate_shared is.

Jeff

PS Installing MPICH in ~/git/openmpi is just dirty :-)

On Fri, Jun 7, 2013 at 5:50 AM, Weise Steffen wrote:

> Dear mailing-list,
>
> this is my first time posting here. I found that with version 3.0.4, using
> MPI_Win_allocate_shared I get an error at a size of exactly 2^31; everything
> below and above is OK. Though I also had the same issue with 2^34. Some kind
> of division or type conversion seems to be off. (/dev/shm has 4G, so it is
> not a size issue... I know what those errors look like.)
>
> I attach my code and the output I get on a Linux (Debian 6.0) 64-bit machine
> (same issue on a Mac, though).
>
> I'll be happy to provide more machine details or anything you guys need to
> analyse what's going on.
>
> with kind regards,
> Steffen Weise
>
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

-- 
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
ALCF docs: http://www.alcf.anl.gov/user-guides

From m.geimer at fz-juelich.de Fri Jun 7 09:02:25 2013
From: m.geimer at fz-juelich.de (Markus Geimer)
Date: Fri, 7 Jun 2013 16:02:25 +0200
Subject: [mpich-discuss] Problems running MPICH jobs under SLURM
In-Reply-To: <51B18052.2080603@mcs.anl.gov>
References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov> <51AB10F6.9070109@fz-juelich.de> <51AB8251.6030007@mcs.anl.gov> <51ABA104.1070502@fz-juelich.de> <51ABFAE1.8020808@mcs.anl.gov> <51AC3E6E.6090305@fz-juelich.de> <51AC9A77.5040609@mcs.anl.gov> <51ACA9C4.4080203@fz-juelich.de> <50320452A334BD42A5EC72BAD21450990862847A@MBX10.d.ethz.ch> <51B18052.2080603@mcs.anl.gov>
Message-ID: <51B1E7F1.906@fz-juelich.de>

Hi Pavan,

The nightly snapshot fixes the issue for me. Many thanks!

@John: Did you also recompile your sample application? From what
I understand, the issue is not in hydra but in the MPI library
(please correct me if I'm wrong, Pavan).

Best regards,
Markus

On 06/07/13 08:40, Pavan Balaji wrote:
>
> FYI, I believe this is now fixed. Please try out the latest nightly
> snapshot and let us know if you are still running into this issue:
>
> http://www.mpich.org/static/tarballs/nightly/master/hydra/
> http://www.mpich.org/static/tarballs/nightly/master/mpich/
>
>  -- Pavan
>
> On 06/06/2013 01:39 AM, Biddiscombe, John A. wrote:
>> Just FYI. I am also getting the double free error when I run under
>> slurm (mpich 3.0.4). Please don't take correspondence off list as I'm
>> following the thread.
>>
>> I can't add anything more useful than Markus has already provided with
>> his stack trace and logs.
>>
>> [I did find that if I configure --with-slurm and use srun instead of
>> mpiexec, then all works, as expected, but I need mpiexec to pass env
>> vars to processes using mpmd syntax]
>>
>> JB
>>
>> -----Original Message-----
>> From: discuss-bounces at mpich.org [mailto:discuss-bounces at mpich.org] On
>> Behalf Of Markus Geimer
>> Sent: 03 June 2013 16:36
>> To: Pavan Balaji
>> Cc: discuss at mpich.org
>> Subject: Re: [mpich-discuss] Problems running MPICH jobs under SLURM
>>
>> Pavan,
>>
>>> 1. Can you run your application processes using "ddd" or some other
>>> debugger to see where the double free is coming from? You might have
>>> to build mpich with --enable-g=dbg to get the debug symbols in.
>>
>> Here is the full stack backtrace:
>>
>> ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< ----- 8< -----
>>
>> #0 0x00007ffff6deb475 in *__GI_raise (sig=)
>>    at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
>> #1 0x00007ffff6dee6f0 in *__GI_abort () at abort.c:92
>> #2 0x00007ffff6e2652b in __libc_message (do_abort=,
>>    fmt=) at ../sysdeps/unix/sysv/linux/libc_fatal.c:189
>> #3 0x00007ffff6e2fd76 in malloc_printerr (action=3,
>>    str=0x7ffff6f081e0 "double free or corruption (fasttop)",
>>    ptr=) at malloc.c:6283
>> #4 0x00007ffff6e34aac in *__GI___libc_free (mem=)
>>    at malloc.c:3738
>> #5 0x00007ffff7a1d5d9 in populate_ids_from_mapping (
>>    did_map=, num_nodes=,
>>    mapping=, pg=)
>>    at src/mpid/ch3/src/mpid_vc.c:1063
>> #6 MPIDI_Populate_vc_node_ids (pg=pg at entry=0x604910,
>>    our_pg_rank=our_pg_rank at entry=0) at src/mpid/ch3/src/mpid_vc.c:1193
>> #7 0x00007ffff7a17dd6 in MPID_Init (argc=argc at entry=0x7fffffffd97c,
>>    argv=argv at entry=0x7fffffffd970, requested=requested at entry=0,
>>    provided=provided at entry=0x7fffffffd8e8,
>>    has_args=has_args at entry=0x7fffffffd8e0,
>>    has_env=has_env at entry=0x7fffffffd8e4) at src/mpid/ch3/src/mpid_init.c:156
>> #8 0x00007ffff7acdf7f in MPIR_Init_thread (argc=argc at entry=0x7fffffffd97c,
>>    argv=argv at entry=0x7fffffffd970, required=required at entry=0,
>>    provided=provided at entry=0x7fffffffd944) at src/mpi/init/initthread.c:431
>> #9 0x00007ffff7acd90e in PMPI_Init (argc=0x7fffffffd97c, argv=0x7fffffffd970)
>>    at src/mpi/init/init.c:136
>> #10 0x000000000040086d in main ()
>>
>> ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 ----- >8 -----
>>
>>> 2. Can you send me the output with the ssh launcher as well?
>>
>> See mail sent off-list.
>>
>> Thanks,
>> Markus
>>
>> --
>> Dr. Markus Geimer
>> Juelich Supercomputing Centre
>> Institute for Advanced Simulation
>> Forschungszentrum Juelich GmbH
>> 52425 Juelich, Germany
>>
>> Phone: +49-2461-61-1773
>> Fax:   +49-2461-61-6656
>> E-mail: m.geimer at fz-juelich.de
>> WWW:   http://www.fz-juelich.de/jsc/
>> _______________________________________________
>> discuss mailing list     discuss at mpich.org
>> To manage subscription options or unsubscribe:
>> https://lists.mpich.org/mailman/listinfo/discuss
>

-- 
Dr. Markus Geimer
Juelich Supercomputing Centre
Institute for Advanced Simulation
Forschungszentrum Juelich GmbH
52425 Juelich, Germany

Phone: +49-2461-61-1773
Fax:   +49-2461-61-6656
E-mail: m.geimer at fz-juelich.de
WWW:   http://www.fz-juelich.de/jsc/

From balaji at mcs.anl.gov Fri Jun 7 12:31:18 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Fri, 07 Jun 2013 12:31:18 -0500
Subject: [mpich-discuss] Problems running MPICH jobs under SLURM
In-Reply-To: <51B1E7F1.906@fz-juelich.de>
References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov> <51AB10F6.9070109@fz-juelich.de> <51AB8251.6030007@mcs.anl.gov> <51ABA104.1070502@fz-juelich.de> <51ABFAE1.8020808@mcs.anl.gov> <51AC3E6E.6090305@fz-juelich.de> <51AC9A77.5040609@mcs.anl.gov> <51ACA9C4.4080203@fz-juelich.de> <50320452A334BD42A5EC72BAD21450990862847A@MBX10.d.ethz.ch> <51B18052.2080603@mcs.anl.gov> <51B1E7F1.906@fz-juelich.de>
Message-ID: <51B218E6.3050308@mcs.anl.gov>

On 06/07/2013 09:02 AM, Markus Geimer wrote:
> The nightly snapshot fixes the issue for me. Many thanks!
>
> @John: Did you also recompile your sample application? From what
> I understand, the issue is not in hydra but in the MPI library
> (please correct me if I'm wrong, Pavan).

No, my initial diagnosis was incorrect. The problem was completely inside hydra (mpiexec), so the application doesn't need to be recompiled.

 -- Pavan

-- 
Pavan Balaji
http://www.mcs.anl.gov/~balaji

From fernando_luz at tpn.usp.br Fri Jun 7 14:07:53 2013
From: fernando_luz at tpn.usp.br (fernando_luz)
Date: Fri, 07 Jun 2013 16:07:53 -0300
Subject: [mpich-discuss] MPE is available in mpich-3.0.4 ? rlog doesn't work in configure process.
In-Reply-To: 
References: <51AE0EC6.7040903@tpn.usp.br>
Message-ID: <51B22F89.3050107@tpn.usp.br>

Hi Rajeev,

Thanks for the answers.

I got the source code from the repository, but I didn't succeed in the compile process. I ran autogen.sh, and after that I tried to configure my installation and received the following error message.

fernando_luz at TPN000300:~/git/mpe$ ./configure --prefix=/home/fernando_luz/usr/mpe --with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc --with-mpilibs=/home/fernando_luz/usr/mpich/lib/ --with-mpiinc=/home/fernando_luz/usr/mpich/include/
Configuring MPE Profiling System with '--prefix=/home/fernando_luz/usr/mpe' '--with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc' '--with-mpilibs=/home/fernando_luz/usr/mpich/lib/' '--with-mpiinc=/home/fernando_luz/usr/mpich/include/' 'MPI_CC=/home/fernando_luz/usr/mpich/bin/mpicc' 'MPI_INC=/home/fernando_luz/usr/mpich/include' 'MPI_LIBS=/home/fernando_luz/usr/mpich/lib'
checking for current directory name... /home/fernando_luz/git/mpe
checking gnumake... yes using --no-print-directory
checking BSD 4.4 make... no - whew
checking OSF V3 make... no
checking for virtual path format... VPATH
User supplied MPI implmentation (Good Luck!)
checking for leftover Makefiles in subpackages ... none
checking for gcc... cc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether cc accepts -g... yes
checking for cc option to accept ISO C89... none needed
checking whether MPI_CC has been set ... /home/fernando_luz/usr/mpich/bin/mpicc
checking whether we are using the GNU Fortran 77 compiler... no
checking whether f77 accepts -g... no
checking whether MPI_F77 has been set ... f77
checking for the linkage of the supplied MPI C definitions ... no
configure: error: Cannot link with basic MPI C program!
    Check your MPI include paths, MPI libraries and MPI CC compiler

Where /home/fernando_luz/usr/mpich/ is my MPI installation (MPICH-3.0.4).

I prefer to use the MPE from the git repository because the version on the site is dated 2010, while the last commit in the repository was in 2012.

Regards

Fernando

On 06/04/2013 03:05 PM, Rajeev Thakur wrote:
> It can be downloaded from http://www.mcs.anl.gov/research/projects/perfvis/download/index.htm.
>
> The source repository is at http://git.mpich.org/mpe.git/
>
> Rajeev
>
> On Jun 4, 2013, at 12:48 PM, Jeff Hammond wrote:
>
>> MPE isn't actively developed and should sit strictly on top of any MPI
>> implementation so you can just grab MPE from an older release of
>> MPICH.
>>
>> My guess is that MPE will be a standalone download at some point in the future.
>>
>> Jeff
>>
>> On Tue, Jun 4, 2013 at 10:59 AM, fernando_luz wrote:
>>> Hi,
>>>
>>> I didn't find the MPE source in the mpich-3.0.4 package. Where can I
>>> download the source? Is it still compatible with mpich?
>>>
>>> And I tried to install the logging support available in this release,
>>> but my attempt wasn't successful. I received the following error:
>>>
>>> /home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure:
>>> line 3694: PAC_CC_SUBDIR_SHLIBS: command not found
>>> configure: creating ./config.status
>>> config.status: error: cannot find input file: `Makefile.in'
>>> configure: error: src/util/logging/rlog configure failed
>>>
>>> I attached the c.txt file used in the configuration.
>>>
>>> Regards
>>>
>>> Fernando
>>>
>>> _______________________________________________
>>> discuss mailing list     discuss at mpich.org
>>> To manage subscription options or unsubscribe:
>>> https://lists.mpich.org/mailman/listinfo/discuss
>>
>> --
>> Jeff Hammond
>> Argonne Leadership Computing Facility
>> University of Chicago Computation Institute
>> jhammond at alcf.anl.gov / (630) 252-5381
>> http://www.linkedin.com/in/jeffhammond
>> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
>> ALCF docs: http://www.alcf.anl.gov/user-guides

From jhammond at alcf.anl.gov Fri Jun 7 14:56:10 2013
From: jhammond at alcf.anl.gov (Jeff Hammond)
Date: Fri, 7 Jun 2013 13:56:10 -0600
Subject: [mpich-discuss] MPE is available in mpich-3.0.4 ? rlog doesn't work in configure process.
In-Reply-To: <51B22F89.3050107@tpn.usp.br>
References: <51AE0EC6.7040903@tpn.usp.br> <51B22F89.3050107@tpn.usp.br>
Message-ID: 

please attach config.log.
jeff On Fri, Jun 7, 2013 at 1:07 PM, fernando_luz wrote: > Hi Rajeev, > > Thanks for the answers. > > I get the source code in repository, but I didn't succeed in the compile > process. > I ran the autogen.sh and after this I tried to configure my installation and > I received the following error message. > > > fernando_luz at TPN000300:~/git/mpe$ ./configure > --prefix=/home/fernando_luz/usr/mpe > --with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc > --with-mpilibs=/home/fernando_luz/usr/mpich/lib/ > --with-mpiinc=/home/fernando_luz/usr/mpich/include/ > Configuring MPE Profiling System with '--prefix=/home/fernando_luz/usr/mpe' > '--with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc' > '--with-mpilibs=/home/fernando_luz/usr/mpich/lib/' > '--with-mpiinc=/home/fernando_luz/usr/mpich/include/' > 'MPI_CC=/home/fernando_luz/usr/mpich/bin/mpicc' > 'MPI_INC=/home/fernando_luz/usr/mpich/include' > 'MPI_LIBS=/home/fernando_luz/usr/mpich/lib' > checking for current directory name... /home/fernando_luz/git/mpe > checking gnumake... yes using --no-print-directory > checking BSD 4.4 make... no - whew > checking OSF V3 make... no > checking for virtual path format... VPATH > User supplied MPI implmentation (Good Luck!) > checking for leftover Makefiles in subpackages ... none > checking for gcc... cc > checking whether the C compiler works... yes > checking for C compiler default output file name... a.out > checking for suffix of executables... > checking whether we are cross compiling... no > checking for suffix of object files... o > checking whether we are using the GNU C compiler... yes > checking whether cc accepts -g... yes > checking for cc option to accept ISO C89... none needed > checking whether MPI_CC has been set ... > /home/fernando_luz/usr/mpich/bin/mpicc > checking whether we are using the GNU Fortran 77 compiler... no > checking whether f77 accepts -g... no > checking whether MPI_F77 has been set ... f77 > checking for the linkage of the supplied MPI C definitions ... no > configure: error: Cannot link with basic MPI C program! > Check your MPI include paths, MPI libraries and MPI CC compiler > > Where /home/fernando_luz/usr/mpich/ is my mpi installation (MPICH-3.0.4). > > I prefer to use the mpe in repository because in the site, the last version > was dated in 2010 and in the git repository the last commit was in 2012. > > Regards > > Fernando > > > > On 06/04/2013 03:05 PM, Rajeev Thakur wrote: >> >> It can be downloaded from >> http://www.mcs.anl.gov/research/projects/perfvis/download/index.htm. >> >> The source repository is at http://git.mpich.org/mpe.git/ >> >> Rajeev >> >> On Jun 4, 2013, at 12:48 PM, Jeff Hammond wrote: >> >>> MPE isn't actively developed and should sit strictly on top of any MPI >>> implementation so you can just grab MPE from an older release of >>> MPICH. >>> >>> My guess is that MPE will be a standalone download at some point in the >>> future. >>> >>> Jeff >>> >>> On Tue, Jun 4, 2013 at 10:59 AM, fernando_luz >>> wrote: >>>> >>>> Hi, >>>> >>>> I didn't find the MPE source in mpich-3.0.4 package. Where I can >>>> download >>>> the source? It is still compatible with mpich? >>>> >>>> And I tried to install the logging support available in this release, >>>> but my >>>> try didn't was successful. 
I received the follow error: >>>> >>>> /home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure: >>>> line 3694: PAC_CC_SUBDIR_SHLIBS: command not found >>>> configure: creating ./config.status >>>> config.status: error: cannot find input file: `Makefile.in' >>>> configure: error: src/util/logging/rlog configure failed >>>> >>>> I attached the c.txt file used in the configuration. >>>> >>>> Regards >>>> >>>> Fernando >>>> >>>> _______________________________________________ >>>> discuss mailing list discuss at mpich.org >>>> To manage subscription options or unsubscribe: >>>> https://lists.mpich.org/mailman/listinfo/discuss >>> >>> >>> >>> -- >>> Jeff Hammond >>> Argonne Leadership Computing Facility >>> University of Chicago Computation Institute >>> jhammond at alcf.anl.gov / (630) 252-5381 >>> http://www.linkedin.com/in/jeffhammond >>> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond >>> ALCF docs: http://www.alcf.anl.gov/user-guides >>> _______________________________________________ >>> discuss mailing list discuss at mpich.org >>> To manage subscription options or unsubscribe: >>> https://lists.mpich.org/mailman/listinfo/discuss >> >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss > > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss -- Jeff Hammond Argonne Leadership Computing Facility University of Chicago Computation Institute jhammond at alcf.anl.gov / (630) 252-5381 http://www.linkedin.com/in/jeffhammond https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond ALCF docs: http://www.alcf.anl.gov/user-guides From fernando_luz at tpn.usp.br Fri Jun 7 15:04:42 2013 From: fernando_luz at tpn.usp.br (fernando_luz) Date: Fri, 07 Jun 2013 17:04:42 -0300 Subject: [mpich-discuss] MPE is available in mpich-3.0.4 ? rlog doesn't work in configure process. In-Reply-To: References: <51AE0EC6.7040903@tpn.usp.br> <51B22F89.3050107@tpn.usp.br> Message-ID: <51B23CDA.9070502@tpn.usp.br> In attachment. Fernando On 06/07/2013 04:56 PM, Jeff Hammond wrote: > please attach config.log. > > jeff > > On Fri, Jun 7, 2013 at 1:07 PM, fernando_luz wrote: >> Hi Rajeev, >> >> Thanks for the answers. >> >> I get the source code in repository, but I didn't succeed in the compile >> process. >> I ran the autogen.sh and after this I tried to configure my installation and >> I received the following error message. >> >> >> fernando_luz at TPN000300:~/git/mpe$ ./configure >> --prefix=/home/fernando_luz/usr/mpe >> --with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc >> --with-mpilibs=/home/fernando_luz/usr/mpich/lib/ >> --with-mpiinc=/home/fernando_luz/usr/mpich/include/ >> Configuring MPE Profiling System with '--prefix=/home/fernando_luz/usr/mpe' >> '--with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc' >> '--with-mpilibs=/home/fernando_luz/usr/mpich/lib/' >> '--with-mpiinc=/home/fernando_luz/usr/mpich/include/' >> 'MPI_CC=/home/fernando_luz/usr/mpich/bin/mpicc' >> 'MPI_INC=/home/fernando_luz/usr/mpich/include' >> 'MPI_LIBS=/home/fernando_luz/usr/mpich/lib' >> checking for current directory name... /home/fernando_luz/git/mpe >> checking gnumake... yes using --no-print-directory >> checking BSD 4.4 make... no - whew >> checking OSF V3 make... no >> checking for virtual path format... 
VPATH >> User supplied MPI implmentation (Good Luck!) >> checking for leftover Makefiles in subpackages ... none >> checking for gcc... cc >> checking whether the C compiler works... yes >> checking for C compiler default output file name... a.out >> checking for suffix of executables... >> checking whether we are cross compiling... no >> checking for suffix of object files... o >> checking whether we are using the GNU C compiler... yes >> checking whether cc accepts -g... yes >> checking for cc option to accept ISO C89... none needed >> checking whether MPI_CC has been set ... >> /home/fernando_luz/usr/mpich/bin/mpicc >> checking whether we are using the GNU Fortran 77 compiler... no >> checking whether f77 accepts -g... no >> checking whether MPI_F77 has been set ... f77 >> checking for the linkage of the supplied MPI C definitions ... no >> configure: error: Cannot link with basic MPI C program! >> Check your MPI include paths, MPI libraries and MPI CC compiler >> >> Where /home/fernando_luz/usr/mpich/ is my mpi installation (MPICH-3.0.4). >> >> I prefer to use the mpe in repository because in the site, the last version >> was dated in 2010 and in the git repository the last commit was in 2012. >> >> Regards >> >> Fernando >> >> >> >> On 06/04/2013 03:05 PM, Rajeev Thakur wrote: >>> It can be downloaded from >>> http://www.mcs.anl.gov/research/projects/perfvis/download/index.htm. >>> >>> The source repository is at http://git.mpich.org/mpe.git/ >>> >>> Rajeev >>> >>> On Jun 4, 2013, at 12:48 PM, Jeff Hammond wrote: >>> >>>> MPE isn't actively developed and should sit strictly on top of any MPI >>>> implementation so you can just grab MPE from an older release of >>>> MPICH. >>>> >>>> My guess is that MPE will be a standalone download at some point in the >>>> future. >>>> >>>> Jeff >>>> >>>> On Tue, Jun 4, 2013 at 10:59 AM, fernando_luz >>>> wrote: >>>>> Hi, >>>>> >>>>> I didn't find the MPE source in mpich-3.0.4 package. Where I can >>>>> download >>>>> the source? It is still compatible with mpich? >>>>> >>>>> And I tried to install the logging support available in this release, >>>>> but my >>>>> try didn't was successful. I received the follow error: >>>>> >>>>> /home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure: >>>>> line 3694: PAC_CC_SUBDIR_SHLIBS: command not found >>>>> configure: creating ./config.status >>>>> config.status: error: cannot find input file: `Makefile.in' >>>>> configure: error: src/util/logging/rlog configure failed >>>>> >>>>> I attached the c.txt file used in the configuration. 
>>>>> Regards
>>>>>
>>>>> Fernando
>>>>>
>>>>> _______________________________________________
>>>>> discuss mailing list     discuss at mpich.org
>>>>> To manage subscription options or unsubscribe:
>>>>> https://lists.mpich.org/mailman/listinfo/discuss
>>>>
>>>> --
>>>> Jeff Hammond
>>>> Argonne Leadership Computing Facility
>>>> University of Chicago Computation Institute
>>>> jhammond at alcf.anl.gov / (630) 252-5381
>>>> http://www.linkedin.com/in/jeffhammond
>>>> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
>>>> ALCF docs: http://www.alcf.anl.gov/user-guides

-------------- next part --------------
A non-text attachment was scrubbed...
Name: config.log
Type: text/x-log
Size: 11749 bytes
Desc: not available
URL: 

From balaji at mcs.anl.gov Fri Jun 7 15:08:05 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Fri, 07 Jun 2013 15:08:05 -0500
Subject: [mpich-discuss] Problems running MPICH jobs under SLURM
In-Reply-To: <50320452A334BD42A5EC72BAD21450990862C3FE@MBX10.d.ethz.ch>
References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov> <51AB10F6.9070109@fz-juelich.de> <51AB8251.6030007@mcs.anl.gov> <51ABA104.1070502@fz-juelich.de> <51ABFAE1.8020808@mcs.anl.gov> <51AC3E6E.6090305@fz-juelich.de> <51AC9A77.5040609@mcs.anl.gov> <51ACA9C4.4080203@fz-juelich.de> <50320452A334BD42A5EC72BAD21450990862847A@MBX10.d.ethz.ch> <51B18052.2080603@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862C3FE@MBX10.d.ethz.ch>
Message-ID: <51B23DA5.3090805@mcs.anl.gov>

Are you using the correct mpiexec? Your submission script is using the mpiexec from this directory:

/home/biddisco/apps/mpich-3.0.4/bin/mpiexec

 -- Pavan

On 06/07/2013 03:40 AM, Biddiscombe, John A. wrote:
> I downloaded the nightly tarball and recompiled/installed mpich (used
> mpich-master-v3.0.4-259-gf322ce79)
>
> I still get this (output below) with a simple hello world program.
>
> Now you must understand that I have no idea what I'm doing (really). I
> wanted to test some debugging features under slurm so installed slurm
> myself on a workstation with just 2 cores and have the bare minimum
> setup. I'm doing the following
>
> sudo munged &
>
> sudo slurmd &
>
> sudo slurmctld -D
>
> and then I can run jobs on the local machine and it seems to be ok,
> except that mpi jobs always give the double free error as below when run
> under slurm, but are just fine when run from the command line.
>
> My suspicion is that slurm is not actually using the hydra pm that I
> just compiled. I installed slurm from rpms. Should I recompile slurm
> myself and somehow tell it which mpi to use?
> My job script looks as follows
>
> ######################
>
> #!/bin/bash
>
> #
>
> # Create the job script from the supplied parameters
>
> #
>
> #SBATCH --job-name=pvserver
>
> #SBATCH --time=00:04:00
>
> #SBATCH --nodes=1
>
> #SBATCH --partition=normal
>
> #SBATCH --output=/home/biddisco/slurm.out
>
> #SBATCH --error=/home/biddisco/slurm.err
>
> #SBATCH --mem=2048MB
>
> #export
>
> # echo "Path is $PATH"
>
> # echo "LD_LIBRARY_PATH is " $LD_LIBRARY_PATH
>
> # cd /home/biddisco/build/pv-38/bin/
>
> #export PMI_DEBUG=9
>
> #ulimit -s unlimited
>
> #ulimit -c 0
>
> /home/biddisco/apps/mpich-3.0.4/bin/mpiexec -rmk slurm -n 2
> /home/biddisco/build/hello/hello
>
> ######################
>
> It gives the same result with or without the -rmk slurm and the #ulimit
> settings.
>
> Apologies for wasting your time, I'm certain I'm doing something wrong -
> I just don't know what.
>
> JB
>
> biddisco at breno2 ~ $ more ~/slurm.err
>
> *** glibc detected *** /home/biddisco/build/hello/hello: double free or
> corruption (fasttop): 0x0000000001896340 ***
>
> ======= Backtrace: =========
>
> /lib/x86_64-linux-gnu/libc.so.6(+0x76d76)[0x7f4ee51fed76]
>
> /home/biddisco/build/hello/hello(MPIDI_Populate_vc_node_ids+0x3f9)[0x427c89]
>
> /home/biddisco/build/hello/hello(MPID_Init+0x136)[0x4253f6]
>
> /home/biddisco/build/hello/hello(MPIR_Init_thread+0x22f)[0x414cbf]
>
> /home/biddisco/build/hello/hello(MPI_Init+0xae)[0x4146ee]
>
> /home/biddisco/build/hello/hello(main+0x22)[0x413f2e]
>
> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7f9a168ff76d]
>
> /home/biddisco/build/hello/hello[0x413e31]
>
> ======= Memory map: ========
>
> 00400000-0051a000 r-xp 00000000 08:01 8661191
> /home/biddisco/build/hello/hello
>
> 0071a000-00727000 r--p 0011a000 08:01 8661191
> /home/biddisco/build/hello/hello
>
> 00727000-00729000 rw-p 00127000 08:01 8661191
> /home/biddisco/build/hello/hello
>
> 00729000-00751000 rw-p 00000000 00:00 0
>
> 01895000-018b6000 rw-p 00000000 00:00 0
> [heap]
>
> 7f9a166c8000-7f9a166dd000 r-xp 00000000 08:01 9047556
> /lib/x86_64-linux-gnu/libgcc_s.so.1
>
> 7f9a166dd000-7f9a168dc000 ---p 00015000 08:01 9047556
> /lib/x86_64-linux-gnu/libgcc_s.so.1
>
> 7f9a168dc000-7f9a168dd000 r--p 00014000 08:01 9047556
> /lib/x86_64-linux-gnu/libgcc_s.so.1
>
> 7f9a168dd000-7f9a168de000 rw-p 00015000 08:01 9047556
> /lib/x86_64-linux-gnu/libgcc_s.so.1
>
> 7f9a168de000-7f9a16a93000 r-xp 00000000 08:01 9050358
> /lib/x86_64-linux-gnu/libc-2.15.so
>
> 7f9a16a93000-7f9a16c92000 ---p 001b5000 08:01 9050358
> /lib/x86_64-linux-gnu/libc-2.15.so
>
> 7f9a16c92000-7f9a16c96000 r--p 001b4000 08:01 9050358
> /lib/x86_64-linux-gnu/libc-2.15.so
>
> 7f9a16c96000-7f9a16c98000 rw-p 001b8000 08:01 9050358
> /lib/x86_64-linux-gnu/libc-2.15.so
>
> 7f9a16c98000-7f9a16c9d000 rw-p 00000000 00:00 0
>
> 7f9a16c9d000-7f9a16cb5000 r-xp 00000000 08:01 9050338
> /lib/x86_64-linux-gnu/libpthread-2.15.so
>
> 7f9a16cb5000-7f9a16eb4000 ---p 00018000 08:01 9050338
> /lib/x86_64-linux-gnu/libpthread-2.15.so
>
> 7f9a16eb4000-7f9a16eb5000 r--p 00017000 08:01 9050338
> /lib/x86_64-linux-gnu/libpthread-2.15.so
>
> 7f9a16eb5000-7f9a16eb6000 rw-p 00018000 08:01 9050338
> /lib/x86_64-linux-gnu/libpthread-2.15.so
>
> 7f9a16eb6000-7f9a16eba000 rw-p 00000000 00:00 0
>
> 7f9a16eba000-7f9a16edc000 r-xp 00000000 08:01 9050344
> /lib/x86_64-linux-gnu/ld-2.15.so
>
> 7f9a170c1000-7f9a170c4000 rw-p 00000000 00:00 0
>
> 7f9a170d9000-7f9a170dc000 rw-p 00000000 00:00 0
>
> 7f9a170dc000-7f9a170dd000 r--p 00022000 08:01 9050344
>
/lib/x86_64-linux-gnu/ld-2.15.so > > 7f9a170dd000-7f9a170df000 rw-p 00023000 08:01 9050344 > /lib/x86_64-linux-gnu/ld-2.15.so > > 7fff52f27000-7fff52f48000 rw-p 00000000 00:00 0 > [stack] > > 7fff52fff000-7fff53000000 r-xp 00000000 00:00 0 > [vdso] > > ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 > [vsyscall] > -- Pavan Balaji http://www.mcs.anl.gov/~balaji From biddisco at cscs.ch Fri Jun 7 15:50:21 2013 From: biddisco at cscs.ch (Biddiscombe, John A.) Date: Fri, 7 Jun 2013 20:50:21 +0000 Subject: [mpich-discuss] Problems running MPICH jobs under SLURM In-Reply-To: <51B23DA5.3090805@mcs.anl.gov> References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov> <51AB10F6.9070109@fz-juelich.de> <51AB8251.6030007@mcs.anl.gov> <51ABA104.1070502@fz-juelich.de> <51ABFAE1.8020808@mcs.anl.gov> <51AC3E6E.6090305@fz-juelich.de> <51AC9A77.5040609@mcs.anl.gov> <51ACA9C4.4080203@fz-juelich.de> <50320452A334BD42A5EC72BAD21450990862847A@MBX10.d.ethz.ch> <51B18052.2080603@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862C3FE@MBX10.d.ethz.ch> <51B23DA5.3090805@mcs.anl.gov> Message-ID: <50320452A334BD42A5EC72BAD21450990862DAD0@MBX10.d.ethz.ch> Pavan Yes, that's where I compiled and installed the nightly snapshot to. (I did a make clean and recompile of the app, though I see that only hydra needed changing ) - I assumed that the mpich snapshot did include the updated hydra - if not, I should redo it with the fixed hydra. I'll wipe everything and rebuild one more time just in case I messed up. Does it matter that I'm installing to a non standard location? Does it need to be on any system paths or have special privileges? JB -----Original Message----- From: Pavan Balaji [mailto:balaji at mcs.anl.gov] Sent: 07 June 2013 22:08 To: Biddiscombe, John A. Cc: discuss at mpich.org Subject: Re: [mpich-discuss] Problems running MPICH jobs under SLURM Are you using the correct mpiexec? Your submission script is using the mpiexec from this directory: /home/biddisco/apps/mpich-3.0.4/bin/mpiexec -- Pavan On 06/07/2013 03:40 AM, Biddiscombe, John A. wrote: > I downloaded the nightly tarball and recompiled/installed mpich (used > mpich-master-v3.0.4-259-gf322ce79) > > I still get this (output below) with a simple hello world program. > > Now you must understand that I have no idea what I'm doing (really). I > wanted to test some debugging features under slurm so installed slurm > myself on a workstation with just 2 cores and have the bare minimum > setup. I'm doing the following > > sudo munged & > > sudo slurmd & > > sudo slurmctld -D > > and then I can run jobs on the local machine and it seems to be ok, > except that mpi jobs always give the double free error as below when > run under slurm, but are just fine when run from the command line. > > My suspicion is that slurm is not actually using the hydra pm that I > just compiled. I installed slurm from rpms. Should I recompile slurm > myself and somehow tell it which mpi to use? 
> > My job script looks as follows > > ###################### > > #!/bin/bash > > # > > # Create the job script from the supplied parameters > > # > > #SBATCH --job-name=pvserver > > #SBATCH --time=00:04:00 > > #SBATCH --nodes=1 > > #SBATCH --partition=normal > > #SBATCH --output=/home/biddisco/slurm.out > > #SBATCH --error=/home/biddisco/slurm.err > > #SBATCH --mem=2048MB > > #export > > # echo "Path is $PATH" > > # echo "LD_LIBRARY_PATH is " $LD_LIBRARY_PATH > > # cd /home/biddisco/build/pv-38/bin/ > > #export PMI_DEBUG=9 > > #ulimit -s unlimited > > #ulimit -c 0 > > /home/biddisco/apps/mpich-3.0.4/bin/mpiexec -rmk slurm -n 2 > /home/biddisco/build/hello/hello > > ###################### > > It gives the same result with or without the -rmk slurm and the > #ulimit settings. > > Apologies for wasting your time, I'm certain I'm doing something wrong > - I just don't know what. > > JB > > biddisco at breno2 ~ $ more ~/slurm.err > > *** glibc detected *** /home/biddisco/build/hello/hello: double free > or corruption (fasttop): 0x0000000001896340 *** > > ======= Backtrace: ========= > > /lib/x86_64-linux-gnu/libc.so.6(+0x7eb96)[0x7f9a1695cb96] > > /home/biddisco/build/hello/hello(MPIDI_Populate_vc_node_ids+0x3f9)[0x4 > 27c89] > > /home/biddisco/build/hello/hello(MPID_Init+0x136)[0x4253f6] > > /home/biddisco/build/hello/hello(MPIR_Init_thread+0x22f)[0x414cbf] > > /home/biddisco/build/hello/hello(MPI_Init+0xae)[0x4146ee] > > /home/biddisco/build/hello/hello(main+0x22)[0x413f2e] > > /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7f9a168ff76d > ] > > /home/biddisco/build/hello/hello[0x413e31] > > ======= Memory map: ======== > > 00400000-0051a000 r-xp 00000000 08:01 8661191 > /home/biddisco/build/hello/hello > > 0071a000-00727000 r--p 0011a000 08:01 8661191 > /home/biddisco/build/hello/hello > > 00727000-00729000 rw-p 00127000 08:01 8661191 > /home/biddisco/build/hello/hello > > 00729000-00751000 rw-p 00000000 00:00 0 > > 01895000-018b6000 rw-p 00000000 00:00 0 > [heap] > > 7f9a166c8000-7f9a166dd000 r-xp 00000000 08:01 9047556 > /lib/x86_64-linux-gnu/libgcc_s.so.1 > > 7f9a166dd000-7f9a168dc000 ---p 00015000 08:01 9047556 > /lib/x86_64-linux-gnu/libgcc_s.so.1 > > 7f9a168dc000-7f9a168dd000 r--p 00014000 08:01 9047556 > /lib/x86_64-linux-gnu/libgcc_s.so.1 > > 7f9a168dd000-7f9a168de000 rw-p 00015000 08:01 9047556 > /lib/x86_64-linux-gnu/libgcc_s.so.1 > > 7f9a168de000-7f9a16a93000 r-xp 00000000 08:01 9050358 > /lib/x86_64-linux-gnu/libc-2.15.so > > 7f9a16a93000-7f9a16c92000 ---p 001b5000 08:01 9050358 > /lib/x86_64-linux-gnu/libc-2.15.so > > 7f9a16c92000-7f9a16c96000 r--p 001b4000 08:01 9050358 > /lib/x86_64-linux-gnu/libc-2.15.so > > 7f9a16c96000-7f9a16c98000 rw-p 001b8000 08:01 9050358 > /lib/x86_64-linux-gnu/libc-2.15.so > > 7f9a16c98000-7f9a16c9d000 rw-p 00000000 00:00 0 > > 7f9a16c9d000-7f9a16cb5000 r-xp 00000000 08:01 9050338 > /lib/x86_64-linux-gnu/libpthread-2.15.so > > 7f9a16cb5000-7f9a16eb4000 ---p 00018000 08:01 9050338 > /lib/x86_64-linux-gnu/libpthread-2.15.so > > 7f9a16eb4000-7f9a16eb5000 r--p 00017000 08:01 9050338 > /lib/x86_64-linux-gnu/libpthread-2.15.so > > 7f9a16eb5000-7f9a16eb6000 rw-p 00018000 08:01 9050338 > /lib/x86_64-linux-gnu/libpthread-2.15.so > > 7f9a16eb6000-7f9a16eba000 rw-p 00000000 00:00 0 > > 7f9a16eba000-7f9a16edc000 r-xp 00000000 08:01 9050344 > /lib/x86_64-linux-gnu/ld-2.15.so > > 7f9a170c1000-7f9a170c4000 rw-p 00000000 00:00 0 > > 7f9a170d9000-7f9a170dc000 rw-p 00000000 00:00 0 > > 7f9a170dc000-7f9a170dd000 r--p 00022000 08:01 9050344 > 
/lib/x86_64-linux-gnu/ld-2.15.so
>
> 7f9a170dd000-7f9a170df000 rw-p 00023000 08:01 9050344
> /lib/x86_64-linux-gnu/ld-2.15.so
>
> 7fff52f27000-7fff52f48000 rw-p 00000000 00:00 0 [stack]
>
> 7fff52fff000-7fff53000000 r-xp 00000000 00:00 0 [vdso]
>
> ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]
>

-- 
Pavan Balaji
http://www.mcs.anl.gov/~balaji

From balaji at mcs.anl.gov Fri Jun 7 16:00:22 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Fri, 07 Jun 2013 16:00:22 -0500
Subject: [mpich-discuss] Problems running MPICH jobs under SLURM
In-Reply-To: <50320452A334BD42A5EC72BAD21450990862DAD0@MBX10.d.ethz.ch>
References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov> <51AB10F6.9070109@fz-juelich.de> <51AB8251.6030007@mcs.anl.gov> <51ABA104.1070502@fz-juelich.de> <51ABFAE1.8020808@mcs.anl.gov> <51AC3E6E.6090305@fz-juelich.de> <51AC9A77.5040609@mcs.anl.gov> <51ACA9C4.4080203@fz-juelich.de> <50320452A334BD42A5EC72BAD21450990862847A@MBX10.d.ethz.ch> <51B18052.2080603@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862C3FE@MBX10.d.ethz.ch> <51B23DA5.3090805@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862DAD0@MBX10.d.ethz.ch>
Message-ID: <51B249E6.9070102@mcs.anl.gov>

On 06/07/2013 03:50 PM, Biddiscombe, John A. wrote:
> Yes, that's where I compiled and installed the nightly snapshot to. (I did
> a make clean and recompile of the app, though I see that only hydra needed
> changing) -
> I assumed that the mpich snapshot did include the updated hydra - if not,
> I should redo it with the fixed hydra.
>
> I'll wipe everything and rebuild one more time just in case I messed up.
> Does it matter that I'm installing to a non-standard location? Does it
> need to be on any system paths or have special privileges?

Non-standard locations are fine, as long as you use full paths to the executables.

Just untar the nightly tarball and build from scratch. Technically you only need to get a new mpiexec, not the entire mpich, but it only takes a couple of minutes to build, so it's not a big deal either way.

 -- Pavan

-- 
Pavan Balaji
http://www.mcs.anl.gov/~balaji

From apenya at mcs.anl.gov Fri Jun 7 16:36:58 2013
From: apenya at mcs.anl.gov (Antonio J. Peña)
Date: Fri, 07 Jun 2013 14:36:58 -0700
Subject: [mpich-discuss] MPE is available in mpich-3.0.4 ? rlog doesn't work in configure process.
In-Reply-To: <51B23CDA.9070502@tpn.usp.br>
References: <51AE0EC6.7040903@tpn.usp.br> <51B23CDA.9070502@tpn.usp.br>
Message-ID: <19584840.D69VZFJ8CV@localhost.localdomain>

Hi Fernando,

Your log file seems to indicate that configure is not finding the directories you indicated in the following options:

--with-mpilibs=/home/fernando_luz/usr/mpich/lib/
--with-mpiinc=/home/fernando_luz/usr/mpich/include/

Could you please check these paths are correct and contain the corresponding libraries and include files?

Antonio

On Friday, June 07, 2013 05:04:42 PM fernando_luz wrote:
> In attachment.
>
> Fernando
>
> On 06/07/2013 04:56 PM, Jeff Hammond wrote:
> > please attach config.log.
> >
> > jeff
> >
> > On Fri, Jun 7, 2013 at 1:07 PM, fernando_luz wrote:
> >> Hi Rajeev,
> >>
> >> Thanks for the answers.
> >>
> >> I got the source code from the repository, but I didn't succeed in the compile
> >> process.
> >> I ran autogen.sh, and after that I tried to configure my installation
> >> and I received the following error message.
> >> > >> > >> fernando_luz at TPN000300:~/git/mpe$ ./configure > >> --prefix=/home/fernando_luz/usr/mpe > >> --with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc > >> --with-mpilibs=/home/fernando_luz/usr/mpich/lib/ > >> --with-mpiinc=/home/fernando_luz/usr/mpich/include/ > >> Configuring MPE Profiling System with > >> '--prefix=/home/fernando_luz/usr/mpe' > >> '--with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc' > >> '--with-mpilibs=/home/fernando_luz/usr/mpich/lib/' > >> '--with-mpiinc=/home/fernando_luz/usr/mpich/include/' > >> 'MPI_CC=/home/fernando_luz/usr/mpich/bin/mpicc' > >> 'MPI_INC=/home/fernando_luz/usr/mpich/include' > >> 'MPI_LIBS=/home/fernando_luz/usr/mpich/lib' > >> checking for current directory name... /home/fernando_luz/git/mpe > >> checking gnumake... yes using --no-print-directory > >> checking BSD 4.4 make... no - whew > >> checking OSF V3 make... no > >> checking for virtual path format... VPATH > >> User supplied MPI implmentation (Good Luck!) > >> checking for leftover Makefiles in subpackages ... none > >> checking for gcc... cc > >> checking whether the C compiler works... yes > >> checking for C compiler default output file name... a.out > >> checking for suffix of executables... > >> checking whether we are cross compiling... no > >> checking for suffix of object files... o > >> checking whether we are using the GNU C compiler... yes > >> checking whether cc accepts -g... yes > >> checking for cc option to accept ISO C89... none needed > >> checking whether MPI_CC has been set ... > >> /home/fernando_luz/usr/mpich/bin/mpicc > >> checking whether we are using the GNU Fortran 77 compiler... no > >> checking whether f77 accepts -g... no > >> checking whether MPI_F77 has been set ... f77 > >> checking for the linkage of the supplied MPI C definitions ... no > >> configure: error: Cannot link with basic MPI C program! > >> > >> Check your MPI include paths, MPI libraries and MPI CC compiler > >> > >> Where /home/fernando_luz/usr/mpich/ is my mpi installation (MPICH-3.0.4). > >> > >> I prefer to use the mpe in repository because in the site, the last > >> version > >> was dated in 2010 and in the git repository the last commit was in 2012. > >> > >> Regards > >> > >> Fernando > >> > >> On 06/04/2013 03:05 PM, Rajeev Thakur wrote: > >>> It can be downloaded from > >>> http://www.mcs.anl.gov/research/projects/perfvis/download/index.htm. > >>> > >>> The source repository is at http://git.mpich.org/mpe.git/ > >>> > >>> Rajeev > >>> > >>> On Jun 4, 2013, at 12:48 PM, Jeff Hammond wrote: > >>>> MPE isn't actively developed and should sit strictly on top of any MPI > >>>> implementation so you can just grab MPE from an older release of > >>>> MPICH. > >>>> > >>>> My guess is that MPE will be a standalone download at some point in the > >>>> future. > >>>> > >>>> Jeff > >>>> > >>>> On Tue, Jun 4, 2013 at 10:59 AM, fernando_luz > >>>> > >>>> wrote: > >>>>> Hi, > >>>>> > >>>>> I didn't find the MPE source in mpich-3.0.4 package. Where I can > >>>>> download > >>>>> the source? It is still compatible with mpich? > >>>>> > >>>>> And I tried to install the logging support available in this release, > >>>>> but my > >>>>> try didn't was successful. 
I received the following error:
> >>>>>
> >>>>> /home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure:
> >>>>> line 3694: PAC_CC_SUBDIR_SHLIBS: command not found
> >>>>> configure: creating ./config.status
> >>>>> config.status: error: cannot find input file: `Makefile.in'
> >>>>> configure: error: src/util/logging/rlog configure failed
> >>>>>
> >>>>> I attached the c.txt file used in the configuration.
> >>>>>
> >>>>> Regards
> >>>>>
> >>>>> Fernando
> >>>>>
> >>>>> _______________________________________________
> >>>>> discuss mailing list     discuss at mpich.org
> >>>>> To manage subscription options or unsubscribe:
> >>>>> https://lists.mpich.org/mailman/listinfo/discuss
> >>>>
> >>>> --
> >>>> Jeff Hammond
> >>>> Argonne Leadership Computing Facility
> >>>> University of Chicago Computation Institute
> >>>> jhammond at alcf.anl.gov / (630) 252-5381
> >>>> http://www.linkedin.com/in/jeffhammond
> >>>> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
> >>>> ALCF docs: http://www.alcf.anl.gov/user-guides

-- 
Antonio J. Peña
Postdoctoral Appointee
Mathematics and Computer Science Division
Argonne National Laboratory
9700 South Cass Avenue, Bldg. 240, Of. 3148
Argonne, IL 60439-4847
(+1) 630-252-7928
apenya at mcs.anl.gov

From biddisco at cscs.ch Fri Jun 7 16:37:23 2013
From: biddisco at cscs.ch (Biddiscombe, John A.)
Date: Fri, 7 Jun 2013 21:37:23 +0000
Subject: [mpich-discuss] Problems running MPICH jobs under SLURM
In-Reply-To: <51B249E6.9070102@mcs.anl.gov>
References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov> <51AB10F6.9070109@fz-juelich.de> <51AB8251.6030007@mcs.anl.gov> <51ABA104.1070502@fz-juelich.de> <51ABFAE1.8020808@mcs.anl.gov> <51AC3E6E.6090305@fz-juelich.de> <51AC9A77.5040609@mcs.anl.gov> <51ACA9C4.4080203@fz-juelich.de> <50320452A334BD42A5EC72BAD21450990862847A@MBX10.d.ethz.ch> <51B18052.2080603@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862C3FE@MBX10.d.ethz.ch> <51B23DA5.3090805@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862DAD0@MBX10.d.ethz.ch> <51B249E6.9070102@mcs.anl.gov>
Message-ID: <50320452A334BD42A5EC72BAD21450990862DB53@MBX10.d.ethz.ch>
I still think I've done something wrong For reference, my mpich compile is (no slurm options present - I bet that's what's wrong) ./configure --prefix=/home/biddisco/apps/mpich-3.0.4 --enable-static --with-pic JB breno2 biddisco # more slurm.err *** glibc detected *** /home/biddisco/build/hello/hello: double free or corruption (fasttop): 0x0000000001cf8340 *** *** glibc detected *** /home/biddisco/build/hello/hello: double free or corruption (fasttop): 0x00000000010b7340 *** ======= Backtrace: ========= /lib/x86_64-linux-gnu/libc.so.6(+0x7eb96)[0x7f11c18f0b96] /home/biddisco/build/hello/hello(MPIDI_Populate_vc_node_ids+0x3f9)[0x427c89] /home/biddisco/build/hello/hello(MPID_Init+0x136)[0x4253f6] /home/biddisco/build/hello/hello(MPIR_Init_thread+0x22f)[0x414cbf] /home/biddisco/build/hello/hello(MPI_Init+0xae)[0x4146ee] /home/biddisco/build/hello/hello(main+0x22)[0x413f2e] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7f11c189376d] /home/biddisco/build/hello/hello[0x413e31] ======= Memory map: ======== 00400000-0051a000 r-xp 00000000 08:01 8661128 /home/biddisco/build/hello/hello 0071a000-00727000 r--p 0011a000 08:01 8661128 /home/biddisco/build/hello/hello 00727000-00729000 rw-p 00127000 08:01 8661128 /home/biddisco/build/hello/hello 00729000-00751000 rw-p 00000000 00:00 0 01cf7000-01d18000 rw-p 00000000 00:00 0 [heap] 7f11c165c000-7f11c1671000 r-xp 00000000 08:01 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 7f11c1671000-7f11c1870000 ---p 00015000 08:01 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 7f11c1870000-7f11c1871000 r--p 00014000 08:01 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 7f11c1871000-7f11c1872000 rw-p 00015000 08:01 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 7f11c1872000-7f11c1a27000 r-xp 00000000 08:01 9050358 /lib/x86_64-linux-gnu/libc-2.15.so 7f11c1a27000-7f11c1c26000 ---p 001b5000 08:01 9050358 /lib/x86_64-linux-gnu/libc-2.15.so 7f11c1c26000-7f11c1c2a000 r--p 001b4000 08:01 9050358 /lib/x86_64-linux-gnu/libc-2.15.so 7f11c1c2a000-7f11c1c2c000 rw-p 001b8000 08:01 9050358 /lib/x86_64-linux-gnu/libc-2.15.so 7f11c1c2c000-7f11c1c31000 rw-p 00000000 00:00 0 7f11c1c31000-7f11c1c49000 r-xp 00000000 08:01 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so 7f11c1c49000-7f11c1e48000 ---p 00018000 08:01 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so 7f11c1e48000-7f11c1e49000 r--p 00017000 08:01 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so 7f11c1e49000-7f11c1e4a000 rw-p 00018000 08:01 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so 7f11c1e4a000-7f11c1e4e000 rw-p 00000000 00:00 0 7f11c1e4e000-7f11c1e70000 r-xp 00000000 08:01 9050344 /lib/x86_64-linux-gnu/ld-2.15.so 7f11c2053000-7f11c2056000 rw-p 00000000 00:00 0 7f11c206d000-7f11c2070000 rw-p 00000000 00:00 0 7f11c2070000-7f11c2071000 r--p 00022000 08:01 9050344 /lib/x86_64-linux-gnu/ld-2.15.so 7f11c2071000-7f11c2073000 rw-p 00023000 08:01 9050344 /lib/x86_64-linux-gnu/ld-2.15.so 7fff0da4c000-7fff0da6d000 rw-p 00000000 00:00 0 [stack] 7fff0db6a000-7fff0db6b000 r-xp 00000000 00:00 0 [vdso] ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall] ======= Backtrace: ========= /lib/x86_64-linux-gnu/libc.so.6(+0x7eb96)[0x7f27e69f2b96] /home/biddisco/build/hello/hello(MPIDI_Populate_vc_node_ids+0x3f9)[0x427c89] /home/biddisco/build/hello/hello(MPID_Init+0x136)[0x4253f6] /home/biddisco/build/hello/hello(MPIR_Init_thread+0x22f)[0x414cbf] /home/biddisco/build/hello/hello(MPI_Init+0xae)[0x4146ee] /home/biddisco/build/hello/hello(main+0x22)[0x413f2e] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7f27e699576d] 
/home/biddisco/build/hello/hello[0x413e31] ======= Memory map: ======== 00400000-0051a000 r-xp 00000000 08:01 8661128 /home/biddisco/build/hello/hello 0071a000-00727000 r--p 0011a000 08:01 8661128 /home/biddisco/build/hello/hello 00727000-00729000 rw-p 00127000 08:01 8661128 /home/biddisco/build/hello/hello 00729000-00751000 rw-p 00000000 00:00 0 010b6000-010d7000 rw-p 00000000 00:00 0 [heap] 7f27e675e000-7f27e6773000 r-xp 00000000 08:01 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 7f27e6773000-7f27e6972000 ---p 00015000 08:01 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 7f27e6972000-7f27e6973000 r--p 00014000 08:01 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 7f27e6973000-7f27e6974000 rw-p 00015000 08:01 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 7f27e6974000-7f27e6b29000 r-xp 00000000 08:01 9050358 /lib/x86_64-linux-gnu/libc-2.15.so 7f27e6b29000-7f27e6d28000 ---p 001b5000 08:01 9050358 /lib/x86_64-linux-gnu/libc-2.15.so 7f27e6d28000-7f27e6d2c000 r--p 001b4000 08:01 9050358 /lib/x86_64-linux-gnu/libc-2.15.so 7f27e6d2c000-7f27e6d2e000 rw-p 001b8000 08:01 9050358 /lib/x86_64-linux-gnu/libc-2.15.so 7f27e6d2e000-7f27e6d33000 rw-p 00000000 00:00 0 7f27e6d33000-7f27e6d4b000 r-xp 00000000 08:01 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so 7f27e6d4b000-7f27e6f4a000 ---p 00018000 08:01 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so 7f27e6f4a000-7f27e6f4b000 r--p 00017000 08:01 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so 7f27e6f4b000-7f27e6f4c000 rw-p 00018000 08:01 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so 7f27e6f4c000-7f27e6f50000 rw-p 00000000 00:00 0 7f27e6f50000-7f27e6f72000 r-xp 00000000 08:01 9050344 /lib/x86_64-linux-gnu/ld-2.15.so 7f27e7155000-7f27e7158000 rw-p 00000000 00:00 0 7f27e716f000-7f27e7172000 rw-p 00000000 00:00 0 7f27e7172000-7f27e7173000 r--p 00022000 08:01 9050344 /lib/x86_64-linux-gnu/ld-2.15.so 7f27e7173000-7f27e7175000 rw-p 00023000 08:01 9050344 /lib/x86_64-linux-gnu/ld-2.15.so 7fff08d40000-7fff08d61000 rw-p 00000000 00:00 0 [stack] 7fff08dff000-7fff08e00000 r-xp 00000000 00:00 0 [vdso] ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall] From balaji at mcs.anl.gov Fri Jun 7 16:58:06 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Fri, 07 Jun 2013 16:58:06 -0500 Subject: [mpich-discuss] Problems running MPICH jobs under SLURM In-Reply-To: <50320452A334BD42A5EC72BAD21450990862DB53@MBX10.d.ethz.ch> References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov> <51AB10F6.9070109@fz-juelich.de> <51AB8251.6030007@mcs.anl.gov> <51ABA104.1070502@fz-juelich.de> <51ABFAE1.8020808@mcs.anl.gov> <51AC3E6E.6090305@fz-juelich.de> <51AC9A77.5040609@mcs.anl.gov> <51ACA9C4.4080203@fz-juelich.de> <50320452A334BD42A5EC72BAD21450990862847A@MBX10.d.ethz.ch> <51B18052.2080603@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862C3FE@MBX10.d.ethz.ch> <51B23DA5.3090805@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862DAD0@MBX10.d.ethz.ch> <51B249E6.9070102@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862DB53@MBX10.d.ethz.ch> Message-ID: <51B2576E.1010604@mcs.anl.gov> John, On 06/07/2013 04:37 PM, Biddiscombe, John A. wrote: > I did a complete rebuild and sometimes error appears twice, apart from that, no change > > I'll play some more next week, but for now it isn't urgent. 
> I still think I've done something wrong
>
> For reference, my mpich compile is (no slurm options present - I bet that's what's wrong)
> ./configure --prefix=/home/biddisco/apps/mpich-3.0.4 --enable-static --with-pic

Your naming is pretty confusing.  You are using "mpich-3.0.4" for a
build that is not really mpich-3.0.4.

Try this:

% wget http://www.mpich.org/static/tarballs/nightly/master/mpich/mpich-master-v3.0.4-259-gf322ce79.tar.gz
% tar -xzvf mpich-master-v3.0.4-259-gf322ce79.tar.gz
% cd mpich-master-v3.0.4-259-gf322ce79
% ./configure --prefix=`pwd`/install CC=gcc CXX=g++ F77=gfortran FC=gfortran && make && make install
% salloc -N 2 -n 4
% ./install/bin/mpicc hello.c -o hello
% ./install/bin/mpiexec -n 4 ./hello

 -- Pavan

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji
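[NB: the 'hello' test program itself was never posted in this thread.
A minimal version, consistent with the "Hello world from process 0 of 2"
output quoted further down, might look like the sketch below; the file
name hello.c matches Pavan's commands above, everything else is an
assumption:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    /* MPI_Init is the call that aborts under SLURM in the reports above */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello world from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
]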
From jeff.science at gmail.com Fri Jun 7 17:38:05 2013
From: jeff.science at gmail.com (Jeff Hammond)
Date: Fri, 7 Jun 2013 17:38:05 -0500
Subject: [mpich-discuss] MPE is available in mpich-3.0.4 ? rlog doesn't work in configure process.
In-Reply-To: <51B23CDA.9070502@tpn.usp.br>
References: <51AE0EC6.7040903@tpn.usp.br> <51B22F89.3050107@tpn.usp.br> <51B23CDA.9070502@tpn.usp.br>
Message-ID: <-4466127925988670465@unknownmsgid>

Did you read it?  It's got the reason that configure is failing pretty
clearly stated.  You need to debug your configure invocation.  Maybe
use CC=mpicc etc instead...

Jeff

Sent from my iPhone

On Jun 7, 2013, at 3:04 PM, fernando_luz wrote:

> In attachment.
>
> Fernando
>
> On 06/07/2013 04:56 PM, Jeff Hammond wrote:
>> please attach config.log.
>>
>> jeff
>>
>> On Fri, Jun 7, 2013 at 1:07 PM, fernando_luz wrote:
>>> Hi Rajeev,
>>>
>>> Thanks for the answers.
>>>
>>> I got the source code from the repository, but I didn't succeed in
>>> the compile process.
>>> I ran autogen.sh, and after that I tried to configure my installation
>>> and received the following error message.
>>>
>>> fernando_luz at TPN000300:~/git/mpe$ ./configure
>>> --prefix=/home/fernando_luz/usr/mpe
>>> --with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc
>>> --with-mpilibs=/home/fernando_luz/usr/mpich/lib/
>>> --with-mpiinc=/home/fernando_luz/usr/mpich/include/
>>> Configuring MPE Profiling System with '--prefix=/home/fernando_luz/usr/mpe'
>>> '--with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc'
>>> '--with-mpilibs=/home/fernando_luz/usr/mpich/lib/'
>>> '--with-mpiinc=/home/fernando_luz/usr/mpich/include/'
>>> 'MPI_CC=/home/fernando_luz/usr/mpich/bin/mpicc'
>>> 'MPI_INC=/home/fernando_luz/usr/mpich/include'
>>> 'MPI_LIBS=/home/fernando_luz/usr/mpich/lib'
>>> checking for current directory name... /home/fernando_luz/git/mpe
>>> checking gnumake... yes using --no-print-directory
>>> checking BSD 4.4 make... no - whew
>>> checking OSF V3 make... no
>>> checking for virtual path format... VPATH
>>> User supplied MPI implmentation (Good Luck!)
>>> checking for leftover Makefiles in subpackages ... none
>>> checking for gcc... cc
>>> checking whether the C compiler works... yes
>>> checking for C compiler default output file name... a.out
>>> checking for suffix of executables...
>>> checking whether we are cross compiling... no
>>> checking for suffix of object files... o
>>> checking whether we are using the GNU C compiler... yes
>>> checking whether cc accepts -g... yes
>>> checking for cc option to accept ISO C89... none needed
>>> checking whether MPI_CC has been set ... /home/fernando_luz/usr/mpich/bin/mpicc
>>> checking whether we are using the GNU Fortran 77 compiler... no
>>> checking whether f77 accepts -g... no
>>> checking whether MPI_F77 has been set ... f77
>>> checking for the linkage of the supplied MPI C definitions ... no
>>> configure: error: Cannot link with basic MPI C program!
>>> Check your MPI include paths, MPI libraries and MPI CC compiler
>>>
>>> Where /home/fernando_luz/usr/mpich/ is my MPI installation (MPICH-3.0.4).
>>>
>>> I prefer to use the MPE from the git repository because the version on
>>> the site is dated 2010, while the last commit in the repository was in
>>> 2012.
>>>
>>> Regards
>>>
>>> Fernando
>>>
>>> On 06/04/2013 03:05 PM, Rajeev Thakur wrote:
>>>> It can be downloaded from
>>>> http://www.mcs.anl.gov/research/projects/perfvis/download/index.htm.
>>>>
>>>> The source repository is at http://git.mpich.org/mpe.git/
>>>>
>>>> Rajeev
>>>>
>>>> On Jun 4, 2013, at 12:48 PM, Jeff Hammond wrote:
>>>>
>>>>> MPE isn't actively developed and should sit strictly on top of any MPI
>>>>> implementation, so you can just grab MPE from an older release of
>>>>> MPICH.
>>>>>
>>>>> My guess is that MPE will be a standalone download at some point in the
>>>>> future.
>>>>>
>>>>> Jeff
>>>>>
>>>>> On Tue, Jun 4, 2013 at 10:59 AM, fernando_luz wrote:
>>>>>> Hi,
>>>>>>
>>>>>> I didn't find the MPE source in the mpich-3.0.4 package.  Where can I
>>>>>> download the source?  Is it still compatible with MPICH?
>>>>>>
>>>>>> And I tried to install the logging support available in this release,
>>>>>> but my attempt wasn't successful.  I received the following error:
>>>>>>
>>>>>> /home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure:
>>>>>> line 3694: PAC_CC_SUBDIR_SHLIBS: command not found
>>>>>> configure: creating ./config.status
>>>>>> config.status: error: cannot find input file: `Makefile.in'
>>>>>> configure: error: src/util/logging/rlog configure failed
>>>>>>
>>>>>> I attached the c.txt file used in the configuration.
>>>>>>
>>>>>> Regards
>>>>>>
>>>>>> Fernando
>>>>>>
>>>>>> _______________________________________________
>>>>>> discuss mailing list     discuss at mpich.org
>>>>>> To manage subscription options or unsubscribe:
>>>>>> https://lists.mpich.org/mailman/listinfo/discuss
>>>>>
>>>>> --
>>>>> Jeff Hammond
>>>>> Argonne Leadership Computing Facility
>>>>> University of Chicago Computation Institute
>>>>> jhammond at alcf.anl.gov / (630) 252-5381
>>>>> http://www.linkedin.com/in/jeffhammond
>>>>> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
>>>>> ALCF docs: http://www.alcf.anl.gov/user-guides
>>>>> _______________________________________________
>>>>> discuss mailing list     discuss at mpich.org
>>>>> To manage subscription options or unsubscribe:
>>>>> https://lists.mpich.org/mailman/listinfo/discuss
>>>> _______________________________________________
>>>> discuss mailing list     discuss at mpich.org
>>>> To manage subscription options or unsubscribe:
>>>> https://lists.mpich.org/mailman/listinfo/discuss
>>>
>>> _______________________________________________
>>> discuss mailing list     discuss at mpich.org
>>> To manage subscription options or unsubscribe:
>>> https://lists.mpich.org/mailman/listinfo/discuss
>
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss
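[NB: Jeff's suggestion above -- passing CC=mpicc instead of (or in
addition to) the --with-mpicc option -- would make the configure line
look roughly like the following.  This is a sketch, not a command from
the thread; the quoted log shows MPI_F77 falling back to plain 'f77'
and the MPI C linkage test failing, so pointing both compilers at the
MPICH wrappers is the first thing to try:

  ./configure --prefix=/home/fernando_luz/usr/mpe \
              CC=/home/fernando_luz/usr/mpich/bin/mpicc \
              F77=/home/fernando_luz/usr/mpich/bin/mpif77
]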
From jhammond at alcf.anl.gov Fri Jun 7 22:10:30 2013
From: jhammond at alcf.anl.gov (Jeff Hammond)
Date: Fri, 7 Jun 2013 21:10:30 -0600
Subject: [mpich-discuss] Porting MPICH
In-Reply-To:
References: <519922D1.2010808@mcs.anl.gov> <519929A9.6080509@mcs.anl.gov> <51992CD2.4040305@mcs.anl.gov>
Message-ID:

I looked into this more thoroughly.  The simple solution is to use
"--with-thread-package=no --enable-threads=single" together.  The
latter will disable the use of all thread-related functions that lead
to missing symbols due to the former.

In short, a no-op implementation is not necessary, as I found out
_after_ I had created one.

Jeff
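[NB: spelled out as a full configure invocation, the combination Jeff
describes would look something like this -- a sketch; only the two
thread-related flags come from his message, the prefix is arbitrary:

  ./configure --prefix=/opt/mpich-single \
              --with-thread-package=no --enable-threads=single
]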
On Mon, May 20, 2013 at 6:46 AM, jhonatan alves wrote:
> Hello,
> Today we will try the strategies proposed by you and see how it goes.
>
> Thank you for helping
>
> 2013/5/19 Jeff Hammond
>> Yeah, I would use the abort strategy to detect if MPICH is using
>> thread-related functions in MPI_THREAD_SINGLE (ignoring
>> MPICH_ASYNC_PROGRESS=1) and the noop implementation for practical
>> purposes.
>>
>> Jeff
>>
>> On Sun, May 19, 2013 at 2:49 PM, Pavan Balaji wrote:
>> >
>> > Thanks.  Yes, they can be mostly no-ops (except TLS checks which can be
>> > any static variable since there's only one thread by definition).
>> > Implementing them as "not-implemented" aborts might be OK too.
>> >
>> >  -- Pavan
>> >
>> > On 05/19/2013 02:43 PM US Central Time, Jeff Hammond wrote:
>> >> Presumably those functions can be noops if no threads are going to be
>> >> used.  Am I wrong?
>> >>
>> >> It might be worth implementing those functions as stubs that abort
>> >> with UNIMPL error and see how far that goes.  I'll try to get to this
>> >> later today.
>> >>
>> >> Jeff
>> >>
>> >> On Sun, May 19, 2013 at 2:36 PM, Pavan Balaji wrote:
>> >>>
>> >>> On 05/19/2013 02:28 PM US Central Time, jhonatan alves wrote:
>> >>>> But I believe that the missing POSIX threads may be the most
>> >>>> troublesome part.  So we need to port to the thread
>> >>>> implementation in EPOS.
>> >>>
>> >>> Yup, I too believe that'll be the blocker.  MPICH supports multiple
>> >>> threading packages, but currently requires at least one to function
>> >>> correctly:
>> >>>
>> >>> https://trac.mpich.org/projects/mpich/ticket/231
>> >>>
>> >>> It's been a while since I looked into this issue, but I could look
>> >>> into it if you are running into it for your platform.
>> >>>
>> >>>  -- Pavan
>> >>>
>> >>> --
>> >>> Pavan Balaji
>> >>> http://www.mcs.anl.gov/~balaji
>> >>> _______________________________________________
>> >>> discuss mailing list     discuss at mpich.org
>> >>> To manage subscription options or unsubscribe:
>> >>> https://lists.mpich.org/mailman/listinfo/discuss
>> >
>> > --
>> > Pavan Balaji
>> > http://www.mcs.anl.gov/~balaji
>>
>> --
>> Jeff Hammond
>> Argonne Leadership Computing Facility
>> University of Chicago Computation Institute
>> jhammond at alcf.anl.gov / (630) 252-5381
>> http://www.linkedin.com/in/jeffhammond
>> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
>> ALCF docs: http://www.alcf.anl.gov/user-guides
>

--
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
ALCF docs: http://www.alcf.anl.gov/user-guides

From balaji at mcs.anl.gov Fri Jun 7 23:58:25 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Fri, 07 Jun 2013 23:58:25 -0500
Subject: [mpich-discuss] Porting MPICH
In-Reply-To:
References: <519922D1.2010808@mcs.anl.gov> <519929A9.6080509@mcs.anl.gov> <51992CD2.4040305@mcs.anl.gov>
Message-ID: <51B2B9F1.2080105@mcs.anl.gov>

On 06/07/2013 10:10 PM, Jeff Hammond wrote:
> I looked into this more thoroughly.  The simple solution is to use
> "--with-thread-package=no --enable-threads=single" together.  The
> latter will disable the use of all thread-related functions that lead
> to missing symbols due to the former.
>
> In short, a no-op implementation is not necessary, as I found out
> _after_ I had created one.

I've committed a patch into mpich/master to throw an error if this
happens.

 -- Pavan

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji

From Steffen.Weise at iec.tu-freiberg.de Sat Jun 8 03:02:07 2013
From: Steffen.Weise at iec.tu-freiberg.de (Weise Steffen)
Date: Sat, 8 Jun 2013 08:02:07 +0000
Subject: [mpich-discuss] crash on 2^31 size in MPI_Win_allocate_shared(...)
Message-ID: <4A73417D-9023-4385-8ACB-D63C65407428@iec.tu-freiberg.de>

Dear Jeff,

thanks for taking care of the issue.  Sure you're right about the
signed vs. unsigned int stuff, but I also tried it with long int: same
result.  Every integer in my code (memory manager) is long unsigned
int, no reason to deal with negative sizes (at least from my
perspective).

I have an extended version of this example which actually uses the
window, sets it and checks it afterwards (but didn't post it to keep
the source file small; also, the issue is not with that part of the
code).  All sizes not exactly equal to 2^31 (or to some multiples of
that) work very well, and "df -h" on /dev/shm clearly shows that it is
used correctly.  Memory sizes returned by MPI_Win_shared_query also
match the requested amount.

My installation is in /opt/mpi/ (which would mean I consider MPICH to
be THE MPI) ;) I still have to rename my source repo ~/git/openmpi
though ;)

regards,
Steffen Weise

First, "long unsigned int window_size=2147483648" is not correct.  The
type you need to use there is MPI_Aint.  The syntax of this function is

int MPI_Win_allocate_shared(MPI_Aint size, int disp_unit, MPI_Info info,
                            MPI_Comm comm, void *baseptr, MPI_Win *win)
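[NB: for illustration, a minimal correctly-typed call -- a sketch, not
code from the thread.  It assumes all ranks run on a single node, lets
rank 0 own the whole 2^31-byte segment, and adds the memset sanity
check Jeff suggests below:

#include <string.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    char *mem;
    MPI_Win win;
    MPI_Aint size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* MPI_Aint, not long unsigned int; 2^31 bytes, allocated on rank 0 */
    size = (rank == 0) ? ((MPI_Aint)1 << 31) : 0;
    MPI_Win_allocate_shared(size, 1, MPI_INFO_NULL, MPI_COMM_WORLD,
                            &mem, &win);

    /* touch every byte to verify the allocation really is that big */
    if (rank == 0)
        memset(mem, 0xff, (size_t)size);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
]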
It may be true that "long unsigned int" is safely cast to MPI_Aint, but
that's a very dangerous way to write code, and it may be broken on some
platforms.  In any case, everything above 2^31 is probably not okay.
Unless absolutely every integer type used in the code paths you are
hitting is size_t (or equivalent) and not int, you're going to hit
overflow somewhere.

Maybe I'm wrong, but you should verify (as a debugging mechanism, not
in general) that MPI_Win_allocate_shared is behaving as desired by
memset-ing the resulting data (mem) to verify that you're actually
getting e.g. 2^34 bytes back.  If /dev/shm is 4G, I'm not sure how
that's possible, but maybe the implementation doesn't use that.

I'm going to be on a plane today, but I'll run your code on my machine
and try to figure out more about how "count-safe"
MPI_Win_allocate_shared is.

Jeff

PS Installing MPICH in ~/git/openmpi is just dirty :-)

On Fri, Jun 7, 2013 at 5:50 AM, Weise Steffen wrote:

Dear mailing-list,

this is my first time posting here.  I found that with version 3.0.4,
using MPI_Win_allocate_shared, I get an error when using a size of
exactly 2^31; everything below and above is OK.  Though I also had the
same issue with 2^34.  Some kind of division or type conversion seems
to be off.  (/dev/shm has 4G, so it is not a size issue.. I know what
those errors look like.)

I attach my code and the output I get on a Linux (Debian 6.0) 64-bit
machine (same issue on a Mac though).  I'll be happy to provide more
machine details or anything else you guys need to analyse what's going
on.

with kind regards,
Steffen Weise

_______________________________________________
discuss mailing list     discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss

From biddisco at cscs.ch Sat Jun 8 15:02:34 2013
From: biddisco at cscs.ch (Biddiscombe, John A.)
Date: Sat, 8 Jun 2013 20:02:34 +0000
Subject: [mpich-discuss] Problems running MPICH jobs under SLURM
In-Reply-To: <51B2576E.1010604@mcs.anl.gov>
References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov> <51AB10F6.9070109@fz-juelich.de> <51AB8251.6030007@mcs.anl.gov> <51ABA104.1070502@fz-juelich.de> <51ABFAE1.8020808@mcs.anl.gov> <51AC3E6E.6090305@fz-juelich.de> <51AC9A77.5040609@mcs.anl.gov> <51ACA9C4.4080203@fz-juelich.de> <50320452A334BD42A5EC72BAD21450990862847A@MBX10.d.ethz.ch> <51B18052.2080603@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862C3FE@MBX10.d.ethz.ch> <51B23DA5.3090805@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862DAD0@MBX10.d.ethz.ch> <51B249E6.9070102@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862DB53@MBX10.d.ethz.ch> <51B2576E.1010604@mcs.anl.gov>
Message-ID: <50320452A334BD42A5EC72BAD21450990863277A@MBX10.d.ethz.ch>

Following your instructions (I only have 1 node, so changed -N 2 -n 4
to -N 1 -n 2), same error, listed below...

[NB.
Only one interesting thing is that I cannot do a make as user biddisco and have to sudo make as it gives me some permission error otherwise mkdir -p '/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc' /usr/bin/install -c -m 644 src/env/mpicc.conf src/env/mpif77.conf src/env/mpif90.conf src/env/mpicxx.conf '/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc' /usr/bin/install: cannot remove `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc/mpicc.conf': Permission denied /usr/bin/install: cannot remove `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc/mpif77.conf': Permission denied /usr/bin/install: cannot remove `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc/mpif90.conf': Permission denied /usr/bin/install: cannot remove `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc/mpicxx.conf': Permission denied make[3]: *** [install-sysconfDATA] Error 1 make[3]: Leaving directory `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79' make[2]: *** [install-am] Error 2 make[2]: Leaving directory `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79' make[1]: *** [install-recursive] Error 1 make[1]: Leaving directory `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79' make: *** [install] Error 2 ] JB biddisco at breno2 ~/build/mpich-master-v3.0.4-259-gf322ce79 $ ./install/bin/mpiexec -n 2 ./hello *** glibc detected *** ./hello: double free or corruption (fasttop): 0x00000000017b2340 *** ======= Backtrace: ========= /lib/x86_64-linux-gnu/libc.so.6(+0x7eb96)[0x7fcaeb4a2b96] *** glibc detected *** ./hello: double free or corruption (fasttop): 0x00000000011e7340 *** ======= Backtrace: ========= /lib/x86_64-linux-gnu/libc.so.6(+0x7eb96)[0x7fd1a77ddb96] /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPIDI_Populate_vc_node_ids+0x3f9)[0x7fd1a7bc75a9] /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPID_Init+0x136)[0x7fd1a7bc1da6] /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPIR_Init_thread+0x22f)[0x7fd1a7c78f1f] /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPI_Init+0xae)[0x7fd1a7c788be] ./hello[0x40081e] /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPIDI_Populate_vc_node_ids+0x3f9)[0x7fcaeb88c5a9] /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPID_Init+0x136)[0x7fcaeb886da6] /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPIR_Init_thread+0x22f)[0x7fcaeb93df1f] /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPI_Init+0xae)[0x7fcaeb93d8be] ./hello[0x40081e] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7fcaeb44576d] ./hello[0x400719] ======= Memory map: ======== 00400000-00401000 r-xp 00000000 08:01 10625669 /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/hello 00600000-00601000 r--p 00000000 08:01 10625669 /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/hello 00601000-00602000 rw-p 00001000 08:01 10625669 /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/hello 017b1000-017d2000 rw-p 00000000 00:00 0 [heap] 7fcaeabe1000-7fcaeabe4000 rw-p 00000000 00:00 0 7fcaeabe4000-7fcaeabf9000 r-xp 00000000 08:01 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 7fcaeabf9000-7fcaeadf8000 ---p 00015000 08:01 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 7fcaeadf8000-7fcaeadf9000 r--p 00014000 08:01 9047556 
/lib/x86_64-linux-gnu/libgcc_s.so.1 7fcaeadf9000-7fcaeadfa000 rw-p 00015000 08:01 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 7fcaeadfa000-7fcaeae12000 r-xp 00000000 08:01 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so 7fcaeae12000-7fcaeb011000 ---p 00018000 08:01 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so 7fcaeb011000-7fcaeb012000 r--p 00017000 08:01 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so 7fcaeb012000-7fcaeb013000 rw-p 00018000 08:01 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so 7fcaeb013000-7fcaeb017000 rw-p 00000000 00:00 0 7fcaeb017000-7fcaeb01e000 r-xp 00000000 08:01 9050343 /lib/x86_64-linux-gnu/librt-2.15.so 7fcaeb01e000-7fcaeb21d000 ---p 00007000 08:01 9050343 /lib/x86_64-linux-gnu/librt-2.15.so 7fcaeb21d000-7fcaeb21e000 r--p 00006000 08:01 9050343 /lib/x86_64-linux-gnu/librt-2.15.so 7fcaeb21e000-7fcaeb21f000 rw-p 00007000 08:01 9050343 /lib/x86_64-linux-gnu/librt-2.15.so 7fcaeb21f000-7fcaeb223000 r-xp 00000000 08:01 8661134 /home/biddisco/apps/mpich-3.0.4/lib/libmpl.so.1.0.0 7fcaeb223000-7fcaeb422000 ---p 00004000 08:01 8661134 /home/biddisco/apps/mpich-3.0.4/lib/libmpl.so.1.0.0 7fcaeb422000-7fcaeb423000 r--p 00003000 08:01 8661134 /home/biddisco/apps/mpich-3.0.4/lib/libmpl.so.1.0.0 7fcaeb423000-7fcaeb424000 rw-p 00004000 08:01 8661134 /home/biddisco/apps/mpich-3.0.4/lib/libmpl.so.1.0.0 7fcaeb424000-7fcaeb5d9000 r-xp 00000000 08:01 9050358 /lib/x86_64-linux-gnu/libc-2.15.so 7fcaeb5d9000-7fcaeb7d8000 ---p 001b5000 08:01 9050358 /lib/x86_64-linux-gnu/libc-2.15.so 7fcaeb7d8000-7fcaeb7dc000 r--p 001b4000 08:01 9050358 /lib/x86_64-linux-gnu/libc-2.15.so 7fcaeb7dc000-7fcaeb7de000 rw-p 001b8000 08:01 9050358 /lib/x86_64-linux-gnu/libc-2.15.so 7fcaeb7de000-7fcaeb7e3000 rw-p 00000000 00:00 0 7fcaeb7e3000-7fcaeba03000 r-xp 00000000 08:01 11675463 /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10.0.4 7fcaeba03000-7fcaebc03000 ---p 00220000 08:01 11675463 /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10.0.4 7fcaebc03000-7fcaebc10000 r--p 00220000 08:01 11675463 /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10.0.4 7fcaebc10000-7fcaebc16000 rw-p 0022d000 08:01 11675463 /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10.0.4 7fcaebc16000-7fcaebc4e000 rw-p 00000000 00:00 0 7fcaebc4e000-7fcaebc70000 r-xp 00000000 08:01 9050344 /lib/x86_64-linux-gnu/ld-2.15.so 7fcaebe54000-7fcaebe56000 rw-p 00000000 00:00 0 7fcaebe6d000-7fcaebe70000 rw-p 00000000 00:00 0 7fcaebe70000-7fcaebe71000 r--p 00022000 08:01 9050344 /lib/x86_64-linux-gnu/ld-2.15.so 7fcaebe71000-7fcaebe73000 rw-p 00023000 08:01 9050344 /lib/x86_64-linux-gnu/ld-2.15.so 7fff671e5000-7fff67206000 rw-p 00000000 00:00 0 [stack] 7fff673b4000-7fff673b5000 r-xp 00000000 00:00 0 [vdso] ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall] =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = EXIT CODE: 6 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Aborted (signal 6) This typically refers to a problem with your application. 
Please see the FAQ page for debugging suggestions From balaji at mcs.anl.gov Sat Jun 8 15:23:38 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Sat, 08 Jun 2013 15:23:38 -0500 Subject: [mpich-discuss] Problems running MPICH jobs under SLURM In-Reply-To: <50320452A334BD42A5EC72BAD21450990863277A@MBX10.d.ethz.ch> References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov> <51AB10F6.9070109@fz-juelich.de> <51AB8251.6030007@mcs.anl.gov> <51ABA104.1070502@fz-juelich.de> <51ABFAE1.8020808@mcs.anl.gov> <51AC3E6E.6090305@fz-juelich.de> <51AC9A77.5040609@mcs.anl.gov> <51ACA9C4.4080203@fz-juelich.de> <50320452A334BD42A5EC72BAD21450990862847A@MBX10.d.ethz.ch> <51B18052.2080603@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862C3FE@MBX10.d.ethz.ch> <51B23DA5.3090805@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862DAD0@MBX10.d.ethz.ch> <51B249E6.9070102@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862DB53@MBX10.d.ethz.ch> <51B2576E.1010604@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990863277A@MBX10.d.ethz.ch> Message-ID: <51B392CA.6000700@mcs.anl.gov> Thanks, John. I'll look into it and get back to you if I need any more information. Btw, you should not need sudo at all. You might have some previously left over files with root permissions that might have caused the issue. If you delete the entire directory and start from scratch, this issue should not be there. -- Pavan On 06/08/2013 03:02 PM, Biddiscombe, John A. wrote: > Following your instructions (I only have 1 node, so changed N2 n4 to N 1 n2), same error, listed below... > > [NB . Only one interesting thing is that I cannot do a make as user biddisco and have to sudo make as it gives me some permission error otherwise > mkdir -p '/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc' > /usr/bin/install -c -m 644 src/env/mpicc.conf src/env/mpif77.conf src/env/mpif90.conf src/env/mpicxx.conf '/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc' > /usr/bin/install: cannot remove `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc/mpicc.conf': Permission denied > /usr/bin/install: cannot remove `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc/mpif77.conf': Permission denied > /usr/bin/install: cannot remove `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc/mpif90.conf': Permission denied > /usr/bin/install: cannot remove `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc/mpicxx.conf': Permission denied > make[3]: *** [install-sysconfDATA] Error 1 > make[3]: Leaving directory `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79' > make[2]: *** [install-am] Error 2 > make[2]: Leaving directory `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79' > make[1]: *** [install-recursive] Error 1 > make[1]: Leaving directory `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79' > make: *** [install] Error 2 > ] > > JB > > biddisco at breno2 ~/build/mpich-master-v3.0.4-259-gf322ce79 $ ./install/bin/mpiexec -n 2 ./hello > *** glibc detected *** ./hello: double free or corruption (fasttop): 0x00000000017b2340 *** > ======= Backtrace: ========= > /lib/x86_64-linux-gnu/libc.so.6(+0x7eb96)[0x7fcaeb4a2b96] > *** glibc detected *** ./hello: double free or corruption (fasttop): 0x00000000011e7340 *** > ======= Backtrace: ========= > /lib/x86_64-linux-gnu/libc.so.6(+0x7eb96)[0x7fd1a77ddb96] > 
/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPIDI_Populate_vc_node_ids+0x3f9)[0x7fd1a7bc75a9] > /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPID_Init+0x136)[0x7fd1a7bc1da6] > /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPIR_Init_thread+0x22f)[0x7fd1a7c78f1f] > /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPI_Init+0xae)[0x7fd1a7c788be] > ./hello[0x40081e] > /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPIDI_Populate_vc_node_ids+0x3f9)[0x7fcaeb88c5a9] > /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPID_Init+0x136)[0x7fcaeb886da6] > /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPIR_Init_thread+0x22f)[0x7fcaeb93df1f] > /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPI_Init+0xae)[0x7fcaeb93d8be] > ./hello[0x40081e] > /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7fcaeb44576d] > ./hello[0x400719] > ======= Memory map: ======== > 00400000-00401000 r-xp 00000000 08:01 10625669 /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/hello > 00600000-00601000 r--p 00000000 08:01 10625669 /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/hello > 00601000-00602000 rw-p 00001000 08:01 10625669 /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/hello > 017b1000-017d2000 rw-p 00000000 00:00 0 [heap] > 7fcaeabe1000-7fcaeabe4000 rw-p 00000000 00:00 0 > 7fcaeabe4000-7fcaeabf9000 r-xp 00000000 08:01 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 > 7fcaeabf9000-7fcaeadf8000 ---p 00015000 08:01 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 > 7fcaeadf8000-7fcaeadf9000 r--p 00014000 08:01 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 > 7fcaeadf9000-7fcaeadfa000 rw-p 00015000 08:01 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 > 7fcaeadfa000-7fcaeae12000 r-xp 00000000 08:01 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so > 7fcaeae12000-7fcaeb011000 ---p 00018000 08:01 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so > 7fcaeb011000-7fcaeb012000 r--p 00017000 08:01 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so > 7fcaeb012000-7fcaeb013000 rw-p 00018000 08:01 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so > 7fcaeb013000-7fcaeb017000 rw-p 00000000 00:00 0 > 7fcaeb017000-7fcaeb01e000 r-xp 00000000 08:01 9050343 /lib/x86_64-linux-gnu/librt-2.15.so > 7fcaeb01e000-7fcaeb21d000 ---p 00007000 08:01 9050343 /lib/x86_64-linux-gnu/librt-2.15.so > 7fcaeb21d000-7fcaeb21e000 r--p 00006000 08:01 9050343 /lib/x86_64-linux-gnu/librt-2.15.so > 7fcaeb21e000-7fcaeb21f000 rw-p 00007000 08:01 9050343 /lib/x86_64-linux-gnu/librt-2.15.so > 7fcaeb21f000-7fcaeb223000 r-xp 00000000 08:01 8661134 /home/biddisco/apps/mpich-3.0.4/lib/libmpl.so.1.0.0 > 7fcaeb223000-7fcaeb422000 ---p 00004000 08:01 8661134 /home/biddisco/apps/mpich-3.0.4/lib/libmpl.so.1.0.0 > 7fcaeb422000-7fcaeb423000 r--p 00003000 08:01 8661134 /home/biddisco/apps/mpich-3.0.4/lib/libmpl.so.1.0.0 > 7fcaeb423000-7fcaeb424000 rw-p 00004000 08:01 8661134 /home/biddisco/apps/mpich-3.0.4/lib/libmpl.so.1.0.0 > 7fcaeb424000-7fcaeb5d9000 r-xp 00000000 08:01 9050358 /lib/x86_64-linux-gnu/libc-2.15.so > 7fcaeb5d9000-7fcaeb7d8000 ---p 001b5000 08:01 9050358 /lib/x86_64-linux-gnu/libc-2.15.so > 7fcaeb7d8000-7fcaeb7dc000 r--p 001b4000 08:01 9050358 /lib/x86_64-linux-gnu/libc-2.15.so > 7fcaeb7dc000-7fcaeb7de000 rw-p 001b8000 08:01 9050358 /lib/x86_64-linux-gnu/libc-2.15.so > 
7fcaeb7de000-7fcaeb7e3000 rw-p 00000000 00:00 0 > 7fcaeb7e3000-7fcaeba03000 r-xp 00000000 08:01 11675463 /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10.0.4 > 7fcaeba03000-7fcaebc03000 ---p 00220000 08:01 11675463 /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10.0.4 > 7fcaebc03000-7fcaebc10000 r--p 00220000 08:01 11675463 /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10.0.4 > 7fcaebc10000-7fcaebc16000 rw-p 0022d000 08:01 11675463 /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10.0.4 > 7fcaebc16000-7fcaebc4e000 rw-p 00000000 00:00 0 > 7fcaebc4e000-7fcaebc70000 r-xp 00000000 08:01 9050344 /lib/x86_64-linux-gnu/ld-2.15.so > 7fcaebe54000-7fcaebe56000 rw-p 00000000 00:00 0 > 7fcaebe6d000-7fcaebe70000 rw-p 00000000 00:00 0 > 7fcaebe70000-7fcaebe71000 r--p 00022000 08:01 9050344 /lib/x86_64-linux-gnu/ld-2.15.so > 7fcaebe71000-7fcaebe73000 rw-p 00023000 08:01 9050344 /lib/x86_64-linux-gnu/ld-2.15.so > 7fff671e5000-7fff67206000 rw-p 00000000 00:00 0 [stack] > 7fff673b4000-7fff673b5000 r-xp 00000000 00:00 0 [vdso] > ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall] > > =================================================================================== > = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES > = EXIT CODE: 6 > = CLEANING UP REMAINING PROCESSES > = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES > =================================================================================== > YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Aborted (signal 6) > This typically refers to a problem with your application. > Please see the FAQ page for debugging suggestions > -- Pavan Balaji http://www.mcs.anl.gov/~balaji From balaji at mcs.anl.gov Sun Jun 9 00:13:09 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Sun, 09 Jun 2013 00:13:09 -0500 Subject: [mpich-discuss] Problems running MPICH jobs under SLURM In-Reply-To: <51B392CA.6000700@mcs.anl.gov> References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov> <51AB10F6.9070109@fz-juelich.de> <51AB8251.6030007@mcs.anl.gov> <51ABA104.1070502@fz-juelich.de> <51ABFAE1.8020808@mcs.anl.gov> <51AC3E6E.6090305@fz-juelich.de> <51AC9A77.5040609@mcs.anl.gov> <51ACA9C4.4080203@fz-juelich.de> <50320452A334BD42A5EC72BAD21450990862847A@MBX10.d.ethz.ch> <51B18052.2080603@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862C3FE@MBX10.d.ethz.ch> <51B23DA5.3090805@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862DAD0@MBX10.d.ethz.ch> <51B249E6.9070102@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862DB53@MBX10.d.ethz.ch> <51B2576E.1010604@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990863277A@MBX10.d.ethz.ch> <51B392CA.6000700@mcs.anl.gov> Message-ID: <51B40EE5.7020306@mcs.anl.gov> John, Can you try the latest nightly snapshot? http://www.mpich.org/static/tarballs/nightly/master/mpich/ -- Pavan On 06/08/2013 03:23 PM, Pavan Balaji wrote: > > Thanks, John. I'll look into it and get back to you if I need any more > information. > > Btw, you should not need sudo at all. You might have some previously > left over files with root permissions that might have caused the issue. > If you delete the entire directory and start from scratch, this issue > should not be there. > > -- Pavan > > On 06/08/2013 03:02 PM, Biddiscombe, John A. 
wrote: >> Following your instructions (I only have 1 node, so changed N2 n4 to N >> 1 n2), same error, listed below... >> >> [NB . Only one interesting thing is that I cannot do a make as user >> biddisco and have to sudo make as it gives me some permission error >> otherwise >> mkdir -p >> '/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc' >> /usr/bin/install -c -m 644 src/env/mpicc.conf src/env/mpif77.conf >> src/env/mpif90.conf src/env/mpicxx.conf >> '/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc' >> /usr/bin/install: cannot remove >> `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc/mpicc.conf': >> Permission denied >> /usr/bin/install: cannot remove >> `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc/mpif77.conf': >> Permission denied >> /usr/bin/install: cannot remove >> `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc/mpif90.conf': >> Permission denied >> /usr/bin/install: cannot remove >> `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc/mpicxx.conf': >> Permission denied >> make[3]: *** [install-sysconfDATA] Error 1 >> make[3]: Leaving directory >> `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79' >> make[2]: *** [install-am] Error 2 >> make[2]: Leaving directory >> `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79' >> make[1]: *** [install-recursive] Error 1 >> make[1]: Leaving directory >> `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79' >> make: *** [install] Error 2 >> ] >> >> JB >> >> biddisco at breno2 ~/build/mpich-master-v3.0.4-259-gf322ce79 $ >> ./install/bin/mpiexec -n 2 ./hello >> *** glibc detected *** ./hello: double free or corruption (fasttop): >> 0x00000000017b2340 *** >> ======= Backtrace: ========= >> /lib/x86_64-linux-gnu/libc.so.6(+0x7eb96)[0x7fcaeb4a2b96] >> *** glibc detected *** ./hello: double free or corruption (fasttop): >> 0x00000000011e7340 *** >> ======= Backtrace: ========= >> /lib/x86_64-linux-gnu/libc.so.6(+0x7eb96)[0x7fd1a77ddb96] >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPIDI_Populate_vc_node_ids+0x3f9)[0x7fd1a7bc75a9] >> >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPID_Init+0x136)[0x7fd1a7bc1da6] >> >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPIR_Init_thread+0x22f)[0x7fd1a7c78f1f] >> >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPI_Init+0xae)[0x7fd1a7c788be] >> >> ./hello[0x40081e] >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPIDI_Populate_vc_node_ids+0x3f9)[0x7fcaeb88c5a9] >> >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPID_Init+0x136)[0x7fcaeb886da6] >> >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPIR_Init_thread+0x22f)[0x7fcaeb93df1f] >> >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10(MPI_Init+0xae)[0x7fcaeb93d8be] >> >> ./hello[0x40081e] >> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7fcaeb44576d] >> ./hello[0x400719] >> ======= Memory map: ======== >> 00400000-00401000 r-xp 00000000 08:01 >> 10625669 >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/hello >> 00600000-00601000 r--p 00000000 08:01 >> 10625669 >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/hello >> 00601000-00602000 rw-p 00001000 08:01 >> 10625669 >> 
/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/hello >> 017b1000-017d2000 rw-p 00000000 00:00 >> 0 [heap] >> 7fcaeabe1000-7fcaeabe4000 rw-p 00000000 00:00 0 >> 7fcaeabe4000-7fcaeabf9000 r-xp 00000000 08:01 >> 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 >> 7fcaeabf9000-7fcaeadf8000 ---p 00015000 08:01 >> 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 >> 7fcaeadf8000-7fcaeadf9000 r--p 00014000 08:01 >> 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 >> 7fcaeadf9000-7fcaeadfa000 rw-p 00015000 08:01 >> 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 >> 7fcaeadfa000-7fcaeae12000 r-xp 00000000 08:01 >> 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so >> 7fcaeae12000-7fcaeb011000 ---p 00018000 08:01 >> 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so >> 7fcaeb011000-7fcaeb012000 r--p 00017000 08:01 >> 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so >> 7fcaeb012000-7fcaeb013000 rw-p 00018000 08:01 >> 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so >> 7fcaeb013000-7fcaeb017000 rw-p 00000000 00:00 0 >> 7fcaeb017000-7fcaeb01e000 r-xp 00000000 08:01 >> 9050343 /lib/x86_64-linux-gnu/librt-2.15.so >> 7fcaeb01e000-7fcaeb21d000 ---p 00007000 08:01 >> 9050343 /lib/x86_64-linux-gnu/librt-2.15.so >> 7fcaeb21d000-7fcaeb21e000 r--p 00006000 08:01 >> 9050343 /lib/x86_64-linux-gnu/librt-2.15.so >> 7fcaeb21e000-7fcaeb21f000 rw-p 00007000 08:01 >> 9050343 /lib/x86_64-linux-gnu/librt-2.15.so >> 7fcaeb21f000-7fcaeb223000 r-xp 00000000 08:01 >> 8661134 >> /home/biddisco/apps/mpich-3.0.4/lib/libmpl.so.1.0.0 >> 7fcaeb223000-7fcaeb422000 ---p 00004000 08:01 >> 8661134 >> /home/biddisco/apps/mpich-3.0.4/lib/libmpl.so.1.0.0 >> 7fcaeb422000-7fcaeb423000 r--p 00003000 08:01 >> 8661134 >> /home/biddisco/apps/mpich-3.0.4/lib/libmpl.so.1.0.0 >> 7fcaeb423000-7fcaeb424000 rw-p 00004000 08:01 >> 8661134 >> /home/biddisco/apps/mpich-3.0.4/lib/libmpl.so.1.0.0 >> 7fcaeb424000-7fcaeb5d9000 r-xp 00000000 08:01 >> 9050358 /lib/x86_64-linux-gnu/libc-2.15.so >> 7fcaeb5d9000-7fcaeb7d8000 ---p 001b5000 08:01 >> 9050358 /lib/x86_64-linux-gnu/libc-2.15.so >> 7fcaeb7d8000-7fcaeb7dc000 r--p 001b4000 08:01 >> 9050358 /lib/x86_64-linux-gnu/libc-2.15.so >> 7fcaeb7dc000-7fcaeb7de000 rw-p 001b8000 08:01 >> 9050358 /lib/x86_64-linux-gnu/libc-2.15.so >> 7fcaeb7de000-7fcaeb7e3000 rw-p 00000000 00:00 0 >> 7fcaeb7e3000-7fcaeba03000 r-xp 00000000 08:01 >> 11675463 >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10.0.4 >> >> 7fcaeba03000-7fcaebc03000 ---p 00220000 08:01 >> 11675463 >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10.0.4 >> >> 7fcaebc03000-7fcaebc10000 r--p 00220000 08:01 >> 11675463 >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10.0.4 >> >> 7fcaebc10000-7fcaebc16000 rw-p 0022d000 08:01 >> 11675463 >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/libmpich.so.10.0.4 >> >> 7fcaebc16000-7fcaebc4e000 rw-p 00000000 00:00 0 >> 7fcaebc4e000-7fcaebc70000 r-xp 00000000 08:01 >> 9050344 /lib/x86_64-linux-gnu/ld-2.15.so >> 7fcaebe54000-7fcaebe56000 rw-p 00000000 00:00 0 >> 7fcaebe6d000-7fcaebe70000 rw-p 00000000 00:00 0 >> 7fcaebe70000-7fcaebe71000 r--p 00022000 08:01 >> 9050344 /lib/x86_64-linux-gnu/ld-2.15.so >> 7fcaebe71000-7fcaebe73000 rw-p 00023000 08:01 >> 9050344 /lib/x86_64-linux-gnu/ld-2.15.so >> 7fff671e5000-7fff67206000 rw-p 00000000 00:00 >> 0 [stack] >> 7fff673b4000-7fff673b5000 r-xp 00000000 00:00 >> 0 [vdso] >> ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 >> 0 [vsyscall] >> >> 
=================================================================================== >> >> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >> = EXIT CODE: 6 >> = CLEANING UP REMAINING PROCESSES >> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES >> =================================================================================== >> >> YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Aborted (signal 6) >> This typically refers to a problem with your application. >> Please see the FAQ page for debugging suggestions >> > -- Pavan Balaji http://www.mcs.anl.gov/~balaji From biddisco at cscs.ch Sun Jun 9 01:03:42 2013 From: biddisco at cscs.ch (Biddiscombe, John A.) Date: Sun, 9 Jun 2013 06:03:42 +0000 Subject: [mpich-discuss] Problems running MPICH jobs under SLURM In-Reply-To: <51B40EE5.7020306@mcs.anl.gov> References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov> <51AB10F6.9070109@fz-juelich.de> <51AB8251.6030007@mcs.anl.gov> <51ABA104.1070502@fz-juelich.de> <51ABFAE1.8020808@mcs.anl.gov> <51AC3E6E.6090305@fz-juelich.de> <51AC9A77.5040609@mcs.anl.gov> <51ACA9C4.4080203@fz-juelich.de> <50320452A334BD42A5EC72BAD21450990862847A@MBX10.d.ethz.ch> <51B18052.2080603@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862C3FE@MBX10.d.ethz.ch> <51B23DA5.3090805@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862DAD0@MBX10.d.ethz.ch> <51B249E6.9070102@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862DB53@MBX10.d.ethz.ch> <51B2576E.1010604@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990863277A@MBX10.d.ethz.ch> <51B392CA.6000700@mcs.anl.gov> <51B40EE5.7020306@mcs.anl.gov> Message-ID: <50320452A334BD42A5EC72BAD214509908632B8F@MBX10.d.ethz.ch> For reasons unclear to me, With this tarball, I get no mpiexec compiled. I'm doing a diff between the latest tarballs and trying to find the problem ... JB ll install/bin/ total 68 lrwxrwxrwx 1 biddisco biddisco 6 Jun 9 07:55 mpic++ -> mpicxx -rwxr-xr-x 1 biddisco biddisco 9905 Jun 9 07:55 mpicc -rwxr-xr-x 1 biddisco biddisco 9300 Jun 9 07:55 mpichversion -rwxr-xr-x 1 biddisco biddisco 9458 Jun 9 07:55 mpicxx -rwxr-xr-x 1 biddisco biddisco 11551 Jun 9 07:55 mpif77 -rwxr-xr-x 1 biddisco biddisco 13375 Jun 9 07:55 mpif90 -rwxr-xr-x 1 biddisco biddisco 3430 Jun 9 07:55 parkill -----Original Message----- From: Pavan Balaji [mailto:balaji at mcs.anl.gov] Sent: 09 June 2013 07:13 To: discuss at mpich.org Cc: Biddiscombe, John A. Subject: Re: [mpich-discuss] Problems running MPICH jobs under SLURM John, Can you try the latest nightly snapshot? http://www.mpich.org/static/tarballs/nightly/master/mpich/ -- Pavan On 06/08/2013 03:23 PM, Pavan Balaji wrote: > > Thanks, John. I'll look into it and get back to you if I need any > more information. > > Btw, you should not need sudo at all. You might have some previously > left over files with root permissions that might have caused the issue. > If you delete the entire directory and start from scratch, this > issue should not be there. > > -- Pavan > > On 06/08/2013 03:02 PM, Biddiscombe, John A. wrote: >> Following your instructions (I only have 1 node, so changed N2 n4 to >> N >> 1 n2), same error, listed below... >> >> [NB . 
Only one interesting thing is that I cannot do a make as user >> biddisco and have to sudo make as it gives me some permission error >> otherwise mkdir -p >> '/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc' >> /usr/bin/install -c -m 644 src/env/mpicc.conf src/env/mpif77.conf >> src/env/mpif90.conf src/env/mpicxx.conf >> '/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc' >> /usr/bin/install: cannot remove >> `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc/mpicc.conf': >> Permission denied >> /usr/bin/install: cannot remove >> `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc/mpif77.conf': >> Permission denied >> /usr/bin/install: cannot remove >> `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc/mpif90.conf': >> Permission denied >> /usr/bin/install: cannot remove >> `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/etc/mpicxx.conf': >> Permission denied >> make[3]: *** [install-sysconfDATA] Error 1 >> make[3]: Leaving directory >> `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79' >> make[2]: *** [install-am] Error 2 >> make[2]: Leaving directory >> `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79' >> make[1]: *** [install-recursive] Error 1 >> make[1]: Leaving directory >> `/home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79' >> make: *** [install] Error 2 >> ] >> >> JB >> >> biddisco at breno2 ~/build/mpich-master-v3.0.4-259-gf322ce79 $ >> ./install/bin/mpiexec -n 2 ./hello >> *** glibc detected *** ./hello: double free or corruption (fasttop): >> 0x00000000017b2340 *** >> ======= Backtrace: ========= >> /lib/x86_64-linux-gnu/libc.so.6(+0x7eb96)[0x7fcaeb4a2b96] >> *** glibc detected *** ./hello: double free or corruption (fasttop): >> 0x00000000011e7340 *** >> ======= Backtrace: ========= >> /lib/x86_64-linux-gnu/libc.so.6(+0x7eb96)[0x7fd1a77ddb96] >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/li >> bmpich.so.10(MPIDI_Populate_vc_node_ids+0x3f9)[0x7fd1a7bc75a9] >> >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/li >> bmpich.so.10(MPID_Init+0x136)[0x7fd1a7bc1da6] >> >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/li >> bmpich.so.10(MPIR_Init_thread+0x22f)[0x7fd1a7c78f1f] >> >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/li >> bmpich.so.10(MPI_Init+0xae)[0x7fd1a7c788be] >> >> ./hello[0x40081e] >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/li >> bmpich.so.10(MPIDI_Populate_vc_node_ids+0x3f9)[0x7fcaeb88c5a9] >> >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/li >> bmpich.so.10(MPID_Init+0x136)[0x7fcaeb886da6] >> >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/li >> bmpich.so.10(MPIR_Init_thread+0x22f)[0x7fcaeb93df1f] >> >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/li >> bmpich.so.10(MPI_Init+0xae)[0x7fcaeb93d8be] >> >> ./hello[0x40081e] >> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7fcaeb44576 >> d] >> ./hello[0x400719] >> ======= Memory map: ======== >> 00400000-00401000 r-xp 00000000 08:01 >> 10625669 >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/hello >> 00600000-00601000 r--p 00000000 08:01 >> 10625669 >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/hello >> 00601000-00602000 rw-p 00001000 08:01 >> 10625669 >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/hello >> 017b1000-017d2000 rw-p 00000000 00:00 >> 0 [heap] >> 
7fcaeabe1000-7fcaeabe4000 rw-p 00000000 00:00 0 >> 7fcaeabe4000-7fcaeabf9000 r-xp 00000000 08:01 >> 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 >> 7fcaeabf9000-7fcaeadf8000 ---p 00015000 08:01 >> 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 >> 7fcaeadf8000-7fcaeadf9000 r--p 00014000 08:01 >> 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 >> 7fcaeadf9000-7fcaeadfa000 rw-p 00015000 08:01 >> 9047556 /lib/x86_64-linux-gnu/libgcc_s.so.1 >> 7fcaeadfa000-7fcaeae12000 r-xp 00000000 08:01 >> 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so >> 7fcaeae12000-7fcaeb011000 ---p 00018000 08:01 >> 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so >> 7fcaeb011000-7fcaeb012000 r--p 00017000 08:01 >> 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so >> 7fcaeb012000-7fcaeb013000 rw-p 00018000 08:01 >> 9050338 /lib/x86_64-linux-gnu/libpthread-2.15.so >> 7fcaeb013000-7fcaeb017000 rw-p 00000000 00:00 0 >> 7fcaeb017000-7fcaeb01e000 r-xp 00000000 08:01 >> 9050343 /lib/x86_64-linux-gnu/librt-2.15.so >> 7fcaeb01e000-7fcaeb21d000 ---p 00007000 08:01 >> 9050343 /lib/x86_64-linux-gnu/librt-2.15.so >> 7fcaeb21d000-7fcaeb21e000 r--p 00006000 08:01 >> 9050343 /lib/x86_64-linux-gnu/librt-2.15.so >> 7fcaeb21e000-7fcaeb21f000 rw-p 00007000 08:01 >> 9050343 /lib/x86_64-linux-gnu/librt-2.15.so >> 7fcaeb21f000-7fcaeb223000 r-xp 00000000 08:01 >> 8661134 >> /home/biddisco/apps/mpich-3.0.4/lib/libmpl.so.1.0.0 >> 7fcaeb223000-7fcaeb422000 ---p 00004000 08:01 >> 8661134 >> /home/biddisco/apps/mpich-3.0.4/lib/libmpl.so.1.0.0 >> 7fcaeb422000-7fcaeb423000 r--p 00003000 08:01 >> 8661134 >> /home/biddisco/apps/mpich-3.0.4/lib/libmpl.so.1.0.0 >> 7fcaeb423000-7fcaeb424000 rw-p 00004000 08:01 >> 8661134 >> /home/biddisco/apps/mpich-3.0.4/lib/libmpl.so.1.0.0 >> 7fcaeb424000-7fcaeb5d9000 r-xp 00000000 08:01 >> 9050358 /lib/x86_64-linux-gnu/libc-2.15.so >> 7fcaeb5d9000-7fcaeb7d8000 ---p 001b5000 08:01 >> 9050358 /lib/x86_64-linux-gnu/libc-2.15.so >> 7fcaeb7d8000-7fcaeb7dc000 r--p 001b4000 08:01 >> 9050358 /lib/x86_64-linux-gnu/libc-2.15.so >> 7fcaeb7dc000-7fcaeb7de000 rw-p 001b8000 08:01 >> 9050358 /lib/x86_64-linux-gnu/libc-2.15.so >> 7fcaeb7de000-7fcaeb7e3000 rw-p 00000000 00:00 0 >> 7fcaeb7e3000-7fcaeba03000 r-xp 00000000 08:01 >> 11675463 >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/li >> bmpich.so.10.0.4 >> >> 7fcaeba03000-7fcaebc03000 ---p 00220000 08:01 >> 11675463 >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/li >> bmpich.so.10.0.4 >> >> 7fcaebc03000-7fcaebc10000 r--p 00220000 08:01 >> 11675463 >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/li >> bmpich.so.10.0.4 >> >> 7fcaebc10000-7fcaebc16000 rw-p 0022d000 08:01 >> 11675463 >> /home/biddisco/build/mpich-master-v3.0.4-259-gf322ce79/install/lib/li >> bmpich.so.10.0.4 >> >> 7fcaebc16000-7fcaebc4e000 rw-p 00000000 00:00 0 >> 7fcaebc4e000-7fcaebc70000 r-xp 00000000 08:01 >> 9050344 /lib/x86_64-linux-gnu/ld-2.15.so >> 7fcaebe54000-7fcaebe56000 rw-p 00000000 00:00 0 >> 7fcaebe6d000-7fcaebe70000 rw-p 00000000 00:00 0 >> 7fcaebe70000-7fcaebe71000 r--p 00022000 08:01 >> 9050344 /lib/x86_64-linux-gnu/ld-2.15.so >> 7fcaebe71000-7fcaebe73000 rw-p 00023000 08:01 >> 9050344 /lib/x86_64-linux-gnu/ld-2.15.so >> 7fff671e5000-7fff67206000 rw-p 00000000 00:00 >> 0 [stack] >> 7fff673b4000-7fff673b5000 r-xp 00000000 00:00 >> 0 [vdso] >> ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 >> 0 [vsyscall] >> >> ===================================================================== >> ============== >> >> = BAD TERMINATION OF ONE OF YOUR 
APPLICATION PROCESSES >> = EXIT CODE: 6 >> = CLEANING UP REMAINING PROCESSES >> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES >> ===================================================================== >> ============== >> >> YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Aborted (signal 6) >> This typically refers to a problem with your application. >> Please see the FAQ page for debugging suggestions >> > -- Pavan Balaji http://www.mcs.anl.gov/~balaji From balaji at mcs.anl.gov Sun Jun 9 10:17:22 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Sun, 09 Jun 2013 10:17:22 -0500 Subject: [mpich-discuss] Problems running MPICH jobs under SLURM In-Reply-To: <50320452A334BD42A5EC72BAD214509908632B8F@MBX10.d.ethz.ch> References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov> <51AB10F6.9070109@fz-juelich.de> <51AB8251.6030007@mcs.anl.gov> <51ABA104.1070502@fz-juelich.de> <51ABFAE1.8020808@mcs.anl.gov> <51AC3E6E.6090305@fz-juelich.de> <51AC9A77.5040609@mcs.anl.gov> <51ACA9C4.4080203@fz-juelich.de> <50320452A334BD42A5EC72BAD21450990862847A@MBX10.d.ethz.ch> <51B18052.2080603@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862C3FE@MBX10.d.ethz.ch> <51B23DA5.3090805@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862DAD0@MBX10.d.ethz.ch> <51B249E6.9070102@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862DB53@MBX10.d.ethz.ch> <51B2576E.1010604@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990863277A@MBX10.d.ethz.ch> <51B392CA.6000700@mcs.anl.gov> <51B40EE5.7020306@mcs.anl.gov> <50320452A334BD42A5EC72BAD214509908632B8F@MBX10.d.ethz.ch> Message-ID: <51B49C82.8030205@mcs.anl.gov> On 06/09/2013 01:03 AM, Biddiscombe, John A. wrote: > For reasons unclear to me, With this tarball, I get no mpiexec compiled. > > I'm doing a diff between the latest tarballs and trying to find the problem ... > > JB > > ll install/bin/ > total 68 > lrwxrwxrwx 1 biddisco biddisco 6 Jun 9 07:55 mpic++ -> mpicxx > -rwxr-xr-x 1 biddisco biddisco 9905 Jun 9 07:55 mpicc > -rwxr-xr-x 1 biddisco biddisco 9300 Jun 9 07:55 mpichversion > -rwxr-xr-x 1 biddisco biddisco 9458 Jun 9 07:55 mpicxx > -rwxr-xr-x 1 biddisco biddisco 11551 Jun 9 07:55 mpif77 > -rwxr-xr-x 1 biddisco biddisco 13375 Jun 9 07:55 mpif90 > -rwxr-xr-x 1 biddisco biddisco 3430 Jun 9 07:55 parkill Gah. Sorry about that. Can you try once more? http://www.mpich.org/static/tarballs/nightly/master/mpich/ -- Pavan -- Pavan Balaji http://www.mcs.anl.gov/~balaji From biddisco at cscs.ch Sun Jun 9 15:09:14 2013 From: biddisco at cscs.ch (Biddiscombe, John A.) 
Date: Sun, 9 Jun 2013 20:09:14 +0000 Subject: [mpich-discuss] Problems running MPICH jobs under SLURM In-Reply-To: <51B49C82.8030205@mcs.anl.gov> References: <51AA036F.2030204@fz-juelich.de> <-2104451799775482129@unknownmsgid> <51AA134C.2030407@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov> <51AB10F6.9070109@fz-juelich.de> <51AB8251.6030007@mcs.anl.gov> <51ABA104.1070502@fz-juelich.de> <51ABFAE1.8020808@mcs.anl.gov> <51AC3E6E.6090305@fz-juelich.de> <51AC9A77.5040609@mcs.anl.gov> <51ACA9C4.4080203@fz-juelich.de> <50320452A334BD42A5EC72BAD21450990862847A@MBX10.d.ethz.ch> <51B18052.2080603@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862C3FE@MBX10.d.ethz.ch> <51B23DA5.3090805@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862DAD0@MBX10.d.ethz.ch> <51B249E6.9070102@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862DB53@MBX10.d.ethz.ch> <51B2576E.1010604@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990863277A@MBX10.d.ethz.ch> <51B392CA.6000700@mcs.anl.gov> <51B40EE5.7020306@mcs.anl.gov> <50320452A334BD42A5EC72BAD214509908632B8F@MBX10.d.ethz.ch> <51B49C82.8030205@mcs.anl.gov> Message-ID: <50320452A334BD42A5EC72BAD214509908635147@MBX10.d.ethz.ch> Pavan The words you wanted to see ... biddisco at breno2 ~/build/mpich-master-v3.0.4-294-ga5719ca8 $ salloc -N 1 -n2 salloc: Granted job allocation 25 biddisco at breno2 ~/build/mpich-master-v3.0.4-294-ga5719ca8 $ ./install/bin/mpiexec -n 2 ./hello Hello world from process 0 of 2 Hello world from process 1 of 2 Thank you very much indeed. Not only a very welcome fix, but done at the weekend too! Great work. Thanks again. JB -----Original Message----- From: Pavan Balaji [mailto:balaji at mcs.anl.gov] Sent: 09 June 2013 17:17 To: Biddiscombe, John A. Cc: discuss at mpich.org Subject: Re: [mpich-discuss] Problems running MPICH jobs under SLURM On 06/09/2013 01:03 AM, Biddiscombe, John A. wrote: > For reasons unclear to me, With this tarball, I get no mpiexec compiled. > > I'm doing a diff between the latest tarballs and trying to find the problem ... > > JB > > ll install/bin/ > total 68 > lrwxrwxrwx 1 biddisco biddisco 6 Jun 9 07:55 mpic++ -> mpicxx > -rwxr-xr-x 1 biddisco biddisco 9905 Jun 9 07:55 mpicc -rwxr-xr-x 1 > biddisco biddisco 9300 Jun 9 07:55 mpichversion -rwxr-xr-x 1 > biddisco biddisco 9458 Jun 9 07:55 mpicxx -rwxr-xr-x 1 biddisco > biddisco 11551 Jun 9 07:55 mpif77 -rwxr-xr-x 1 biddisco biddisco > 13375 Jun 9 07:55 mpif90 -rwxr-xr-x 1 biddisco biddisco 3430 Jun 9 > 07:55 parkill Gah. Sorry about that. Can you try once more? 
http://www.mpich.org/static/tarballs/nightly/master/mpich/ -- Pavan -- Pavan Balaji http://www.mcs.anl.gov/~balaji From balaji at mcs.anl.gov Sun Jun 9 15:20:51 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Sun, 09 Jun 2013 15:20:51 -0500 Subject: [mpich-discuss] Problems running MPICH jobs under SLURM In-Reply-To: <50320452A334BD42A5EC72BAD214509908635147@MBX10.d.ethz.ch> References: <51AA036F.2030204@fz-juelich.de> <51AA705F.3090500@mcs.anl.gov> <51AB10F6.9070109@fz-juelich.de> <51AB8251.6030007@mcs.anl.gov> <51ABA104.1070502@fz-juelich.de> <51ABFAE1.8020808@mcs.anl.gov> <51AC3E6E.6090305@fz-juelich.de> <51AC9A77.5040609@mcs.anl.gov> <51ACA9C4.4080203@fz-juelich.de> <50320452A334BD42A5EC72BAD21450990862847A@MBX10.d.ethz.ch> <51B18052.2080603@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862C3FE@MBX10.d.ethz.ch> <51B23DA5.3090805@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862DAD0@MBX10.d.ethz.ch> <51B249E6.9070102@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990862DB53@MBX10.d.ethz.ch> <51B2576E.1010604@mcs.anl.gov> <50320452A334BD42A5EC72BAD21450990863277A@MBX10.d.ethz.ch> <51B392CA.6000700@mcs.anl.gov> <51B40EE5.7020306@mcs.anl.gov> <50320452A334BD42A5EC72BAD214509908632B8F@MBX10.d.ethz.ch> <51B49C82.8030205@mcs.anl.gov> <50320452A334BD42A5EC72BAD214509908635147@MBX10.d.ethz.ch> Message-ID: <51B4E3A3.5020802@mcs.anl.gov> Excellent! Thanks for helping debug this. This fix will be available in the upcoming mpich-3.1b1 release. -- Pavan On 06/09/2013 03:09 PM, Biddiscombe, John A. wrote: > Pavan > > The words you wanted to see ... > > biddisco at breno2 ~/build/mpich-master-v3.0.4-294-ga5719ca8 $ salloc -N 1 -n2 > salloc: Granted job allocation 25 > biddisco at breno2 ~/build/mpich-master-v3.0.4-294-ga5719ca8 $ ./install/bin/mpiexec -n 2 ./hello > Hello world from process 0 of 2 > Hello world from process 1 of 2 > > Thank you very much indeed. Not only a very welcome fix, but done at the weekend too! > > Great work. Thanks again. > > JB > > > > > > -----Original Message----- > From: Pavan Balaji [mailto:balaji at mcs.anl.gov] > Sent: 09 June 2013 17:17 > To: Biddiscombe, John A. > Cc: discuss at mpich.org > Subject: Re: [mpich-discuss] Problems running MPICH jobs under SLURM > > > On 06/09/2013 01:03 AM, Biddiscombe, John A. wrote: >> For reasons unclear to me, With this tarball, I get no mpiexec compiled. >> >> I'm doing a diff between the latest tarballs and trying to find the problem ... >> >> JB >> >> ll install/bin/ >> total 68 >> lrwxrwxrwx 1 biddisco biddisco 6 Jun 9 07:55 mpic++ -> mpicxx >> -rwxr-xr-x 1 biddisco biddisco 9905 Jun 9 07:55 mpicc -rwxr-xr-x 1 >> biddisco biddisco 9300 Jun 9 07:55 mpichversion -rwxr-xr-x 1 >> biddisco biddisco 9458 Jun 9 07:55 mpicxx -rwxr-xr-x 1 biddisco >> biddisco 11551 Jun 9 07:55 mpif77 -rwxr-xr-x 1 biddisco biddisco >> 13375 Jun 9 07:55 mpif90 -rwxr-xr-x 1 biddisco biddisco 3430 Jun 9 >> 07:55 parkill > > Gah. Sorry about that. Can you try once more? > > http://www.mpich.org/static/tarballs/nightly/master/mpich/ > > -- Pavan > > -- > Pavan Balaji > http://www.mcs.anl.gov/~balaji > -- Pavan Balaji http://www.mcs.anl.gov/~balaji From geo at spatiogis.fr Mon Jun 10 01:04:03 2013 From: geo at spatiogis.fr (spatiogis) Date: Mon, 10 Jun 2013 08:04:03 +0200 Subject: [mpich-discuss] install + config on windows In-Reply-To: <1145414969.1150509.1367600905908.JavaMail.root@mcs.anl.gov> References: <1145414969.1150509.1367600905908.JavaMail.root@mcs.anl.gov> Message-ID: Hello, actually, it seems that Mpich must be compiled to work. 
The point is that the "Readme" file gives an explanation to compile the programs with Visual studio 2003. Anyway this last software is very difficult to make work on windows 7. Is there finally a way to compile Mpich on windows 7 with Visual Studio ? best regards, Benoit V?ler > Hi, > From the log output it looks like credentials (password) for > Utilisateur was not correct. > Is Utilisateur a valid Windows user on your machine? Have you > registered the username/password correctly (Try re-registering the > username+password by typing "mpiexec -register" at the command prompt)? > > Regards, > Jayesh > > ----- Original Message ----- > From: "spatiogis" > To: discuss at mpich.org > Sent: Friday, May 3, 2013 11:58:00 AM > Subject: Re: [mpich-discuss] install + config on windows > > Hello, > > for this command : > > # C:\Progra~1\MPICH2\bin\mpiexec -verbose -n 2 > C:\Progra~1\MPICH2\examples\cpi.exe > > result : > > ....../SMPDU_Sock_post_readv > ...../SMPDU_Sock_post_read > ..../smpd_handle_op_connect > ....sock_waiting for the next event. > ....\SMPDU_Sock_wait > ..../SMPDU_Sock_wait > ....SOCK_OP_READ event.error = 0, result = 0, context=left > ....\smpd_handle_op_read > .....\smpd_state_reading_challenge_string > ......read challenge string: '1.4.1p1 18467' > ......\smpd_verify_version > ....../smpd_verify_version > ......Verification of smpd version succeeded > ......\smpd_hash > ....../smpd_hash > ......\SMPDU_Sock_post_write > .......\SMPDU_Sock_post_writev > ......./SMPDU_Sock_post_writev > ....../SMPDU_Sock_post_write > ...../smpd_state_reading_challenge_string > ..../smpd_handle_op_read > ....sock_waiting for the next event. > ....\SMPDU_Sock_wait > ..../SMPDU_Sock_wait > ....SOCK_OP_WRITE event.error = 0, result = 0, context=left > ....\smpd_handle_op_write > .....\smpd_state_writing_challenge_response > ......wrote challenge response: 'dafd1d07c1e6e9cb5fae968403d0d933' > ......\SMPDU_Sock_post_read > .......\SMPDU_Sock_post_readv > ......./SMPDU_Sock_post_readv > ....../SMPDU_Sock_post_read > ...../smpd_state_writing_challenge_response > ..../smpd_handle_op_write > ....sock_waiting for the next event. > ....\SMPDU_Sock_wait > ..../SMPDU_Sock_wait > ....SOCK_OP_READ event.error = 0, result = 0, context=left > ....\smpd_handle_op_read > .....\smpd_state_reading_connect_result > ......read connect result: 'SUCCESS' > ......\SMPDU_Sock_post_write > .......\SMPDU_Sock_post_writev > ......./SMPDU_Sock_post_writev > ....../SMPDU_Sock_post_write > ...../smpd_state_reading_connect_result > ..../smpd_handle_op_read > ....sock_waiting for the next event. > ....\SMPDU_Sock_wait > ..../SMPDU_Sock_wait > ....SOCK_OP_WRITE event.error = 0, result = 0, context=left > ....\smpd_handle_op_write > .....\smpd_state_writing_process_session_request > ......wrote process session request: 'process' > ......\SMPDU_Sock_post_read > .......\SMPDU_Sock_post_readv > ......./SMPDU_Sock_post_readv > ....../SMPDU_Sock_post_read > ...../smpd_state_writing_process_session_request > ..../smpd_handle_op_write > ....sock_waiting for the next event. 
> ....\SMPDU_Sock_wait > ..../SMPDU_Sock_wait > ....SOCK_OP_READ event.error = 0, result = 0, context=left > ....\smpd_handle_op_read > .....\smpd_state_reading_cred_request > ......read cred request: 'credentials' > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > .......\smpd_option_on > ........\smpd_get_smpd_data > .........\smpd_get_smpd_data_from_environment > ........./smpd_get_smpd_data_from_environment > .........\smpd_get_smpd_data_default > ........./smpd_get_smpd_data_default > .........Unable to get the data for the key 'nocache' > ......../smpd_get_smpd_data > ......./smpd_option_on > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > ......\SMPDU_Sock_post_write > .......\SMPDU_Sock_post_writev > ......./SMPDU_Sock_post_writev > ....../SMPDU_Sock_post_write > ...../smpd_handle_op_read > .....sock_waiting for the next event. > .....\SMPDU_Sock_wait > ...../SMPDU_Sock_wait > .....SOCK_OP_WRITE event.error = 0, result = 0, context=left > .....\smpd_handle_op_write > ......\smpd_state_writing_cred_ack_yes > .......wrote cred request yes ack. > .......\SMPDU_Sock_post_write > ........\SMPDU_Sock_post_writev > ......../SMPDU_Sock_post_writev > ......./SMPDU_Sock_post_write > ....../smpd_state_writing_cred_ack_yes > ...../smpd_handle_op_write > .....sock_waiting for the next event. 
> .....\SMPDU_Sock_wait > ...../SMPDU_Sock_wait > .....SOCK_OP_WRITE event.error = 0, result = 0, context=left > .....\smpd_handle_op_write > ......\smpd_state_writing_account > .......wrote account: 'Utilisateur' > .......\smpd_encrypt_data > ......./smpd_encrypt_data > .......\SMPDU_Sock_post_write > ........\SMPDU_Sock_post_writev > ......../SMPDU_Sock_post_writev > ......./SMPDU_Sock_post_write > ....../smpd_state_writing_account > ...../smpd_handle_op_write > .....sock_waiting for the next event. > .....\SMPDU_Sock_wait > ...../SMPDU_Sock_wait > .....SOCK_OP_WRITE event.error = 0, result = 0, context=left > .....\smpd_handle_op_write > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > .......\smpd_hide_string_arg > ........\first_token > ......../first_token > ........\compare_token > ......../compare_token > ........\next_token > .........\first_token > ........./first_token > .........\first_token > ........./first_token > ......../next_token > ......./smpd_hide_string_arg > ......./smpd_hide_string_arg > .......\SMPDU_Sock_post_read > ........\SMPDU_Sock_post_readv > ......../SMPDU_Sock_post_readv > ......./SMPDU_Sock_post_read > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > ...../smpd_handle_op_write > .....sock_waiting for the next event. 
> .....\SMPDU_Sock_wait > ...../SMPDU_Sock_wait > .....SOCK_OP_READ event.error = 0, result = 0, context=left > .....\smpd_handle_op_read > ......\smpd_state_reading_process_result > .......read process session result: 'FAIL' > .......\smpd_hide_string_arg > ........\first_token > ......../first_token > ........\compare_token > ......../compare_token > ........\next_token > .........\first_token > ........./first_token > .........\first_token > ........./first_token > ......../next_token > ......./smpd_hide_string_arg > ......./smpd_hide_string_arg > .......\smpd_hide_string_arg > ........\first_token > ......../first_token > ........\compare_token > ......../compare_token > ........\next_token > .........\first_token > ........./first_token > .........\first_token > ........./first_token > ......../next_token > ......./smpd_hide_string_arg > ......./smpd_hide_string_arg > Credentials for Utilisateur rejected connecting to Benoit > .......process session rejected > .......\SMPDU_Sock_post_close > ........\SMPDU_Sock_post_read > .........\SMPDU_Sock_post_readv > ........./SMPDU_Sock_post_readv > ......../SMPDU_Sock_post_read > ......./SMPDU_Sock_post_close > .......\smpd_post_abort_command > ........\smpd_create_command > .........\smpd_init_command > ........./smpd_init_command > ......../smpd_create_command > ........\smpd_add_command_arg > ......../smpd_add_command_arg > ........\smpd_command_destination > .........0 -> 0 : returning NULL context > ......../smpd_command_destination > Aborting: Unable to connect to Benoit > ......./smpd_post_abort_command > .......\smpd_exit > ........\smpd_kill_all_processes > ......../smpd_kill_all_processes > ........\smpd_finalize_drive_maps > ......../smpd_finalize_drive_maps > ........\smpd_dbs_finalize > ......../smpd_dbs_finalize > ........\SMPDU_Sock_finalize > ......../SMPDU_Sock_finalize > > C:\Users\Utilisateur> >> Hi, >> Looks like you missed the "-" before the status ("smpd -status" not >> "smpd status") argument. >> It also looks like you have multiple MPI libraries installed in your >> system. Try running this command (full path to mpiexec and smpd), >> >> # C:\Progra~1\MPICH2\bin\smpd -status >> >> # C:\Progra~1\MPICH2\bin\mpiexec -verbose -n 2 >> C:\Progra~1\MPICH2\examples\cpi.exe >> >> >> Regards, >> Jayesh >> >> ----- Original Message ----- >> From: "spatiogis" >> To: "Jayesh Krishna" >> Sent: Friday, May 3, 2013 11:05:34 AM >> Subject: Re: [mpich-discuss] install + config on windows >> >> Hello, >> >> C:\Users\Utilisateur>smpd status >> Unexpected parameters: status >> >> C:\Users\Utilisateur>mpiexec -verbose -n 2 >> C:\Progra~1\MPICH2\examples\cpi.exe >> Unknown option: -verbose >> >> ----------------------------------------------------------------------------- >> C:\Program Files\MPICH2\examples>mpiexec -verbose -n 2 cpi.exe >> Unknown option: -verbose >> >> C:\Program Files\MPICH2\examples>smpd status >> Unexpected parameters: status >> ----------------------------------------------------------------------------- >> >> regards, Ben >> >>> Hi, >>> Ok. Please send us the output of the following commands, >>> >>> # smpd -status >>> # mpiexec -verbose -n 2 C:\Progra~1\MPICH2\examples\cpi.exe >>> >>> Please copy-paste the command and the complete output in your email. 
>>> >>> Regards, >>> Jayesh >>> >>> >>> ----- Original Message ----- >>> From: "spatiogis" >>> To: discuss at mpich.org >>> Sent: Friday, May 3, 2013 1:46:53 AM >>> Subject: Re: [mpich-discuss] install + config on windows >>> >>> Hello >>> >>> >>>> (PS: I am assuming from your reply in the previous email that you can >>>> run a command like "mpiexec -n 2 C:\Progra~1\MPICH2\examples\cpi.exe" >>>> correctly) >>> >>> In fact this command doesn't run. >>> >>> The message is this one >>> >>> [01:11728]....ERROR:unable to read the cmd header on the pmi context, >>> Error = -1 >>> >>> Ben >>> >>> >>>> ----- Original Message ----- >>>> From: "spatiogis" >>>> To: "Jayesh Krishna" >>>> Sent: Thursday, May 2, 2013 10:48:56 AM >>>> Subject: Re: [mpich-discuss] install + config on windows >>>> >>>> Hello, >>>> >>>>> Hi, >>>>> Are you able to run any other MPI programs? Try running the example >>>>> program, cpi.exe (C:\Program Files\MPICH2\examples\cpi.exe), to make >>>>> sure that your MPICH2 installation works. >>>> >>>> yes it does work >>>> >>>>> Installing MPICH2 on Windows 7 typically requires you to uninstall >>>>> any >>>>> previous versions of MPICH2, launch an administrative command promt >>>>> and >>>>> run "msiexec /i mpich2-installer.msi" to install MPICH2. >>>> >>>> yes it 's been installed like this... >>>> >>>> In wmpiconfig, the message is the following in the 'Get settings' >>>> line. >>>> >>>> Credentials for Utilisateur rejected connecting to host >>>> Aborting: Unable to connect to host >>>> >>>> The software I try to use is Taudem, which is intergrated inside >>>> Qgis. >>>> Launching a taudem process inside Qgis gives the same message. >>>> >>>> >>>>> Regards, >>>>> Jayesh >>>> >>>> Sincerely, Ben >>>> >>>>> >>>>> ----- Original Message ----- >>>>> From: "spatiogis" >>>>> To: discuss at mpich.org >>>>> Sent: Thursday, May 2, 2013 10:08:23 AM >>>>> Subject: Re: [mpich-discuss] install + config on windows >>>>> >>>>> Hello, >>>>> >>>>> in my case Mpich is normally used to run .exe programs. I guess that >>>>> they >>>>> are already compiled... >>>>> The .exe files are integrated into a software, and accessed from >>>>> menus >>>>> inside it. When I run one of the programs, the answer is actually >>>>> "unable >>>>> to query host". >>>>> At the end, the process is not realised. It seems that this 'host' >>>>> question is a problem to the software... >>>>> >>>>> Sincerely, >>>>> >>>>> Ben. >>>>> >>>>> >>>>>> Hi, >>>>>> You can download MPICH2 binaries for Windows at >>>>>> http://www.mpich.org/downloads/ . >>>>>> You need to compile your MPI programs with MPICH2 to make it work. >>>>>> I >>>>>> would recommend recompiling your code after you install MPICH2 (If >>>>>> you >>>>>> have MPI program binaries pre-built with MPICH2 - instead of >>>>>> compiling >>>>>> them on your own - make sure that you install the same version of >>>>>> MPICH2 >>>>>> that was used to build the binaries). >>>>>> The wmpiregister program has a bug and you can ignore this error >>>>>> message ("...unable to query host"). Can you run your MPI program >>>>>> using >>>>>> mpiexec from a command prompt? >>>>>> >>>>>> Regards, >>>>>> Jayesh >>>>>> >>>>>> ----- Original Message ----- >>>>>> From: "spatiogis" >>>>>> To: discuss at mpich.org >>>>>> Sent: Tuesday, April 30, 2013 9:26:35 AM >>>>>> Subject: [mpich-discuss] install + config on windows >>>>>> >>>>>> Hello, >>>>>> >>>>>> I'm not very good at computing, but I would like to install Mpich2 >>>>>> on >>>>>> windows 7 - 64 bits. 
There is only one pc, with one user plus the
>>>>>> admin, and a simple core processor.
>>>>>>
>>>>>> I would like to know if it's mandatory to have compiling software
>>>>>> with it to make it work, whereas in this case it is only needed to
>>>>>> run another software, and not for compiling (that would maybe save
>>>>>> some disk space and simplify the installation)?
>>>>>>
>>>>>> My second issue is that I must be missing something about the
>>>>>> server configuration. I have installed Mpich from the .msi file,
>>>>>> then configured the wmpiregister program with the Domain/user
>>>>>> information.
>>>>>>
>>>>>> There is this message displayed when trying to connect in the
>>>>>> 'configurable settings' window: 'MPICH2 not installed or unable to
>>>>>> query the host'.
>>>>>>
>>>>>> What is the host, actually?
>>>>>>
>>>>>> I know I am starting from very far, and I am sorry for these very
>>>>>> simple questions. Thanks if you can reply to me, that would
>>>>>> certainly save me some long hours of reading and testing ;)
>>>>>>
>>>>>> sincerely,
>>>>>>
>>>>>> Ben
>>>>>> _______________________________________________
>>>>>> discuss mailing list discuss at mpich.org
>>>>>> To manage subscription options or unsubscribe:
>>>>>> https://lists.mpich.org/mailman/listinfo/discuss

--
Benoit V?ler
Adhérent au groupe JAM Ingénierie
180 Avenue du Genevois, Parc d'Activité de Croix Rousse
73000 Chambéry
http://www.spatiogis.fr
06-46-13-40-94

From fernando_luz at tpn.usp.br Mon Jun 10 09:16:20 2013
From: fernando_luz at tpn.usp.br (fernando_luz)
Date: Mon, 10 Jun 2013 11:16:20 -0300
Subject: [mpich-discuss] MPE is available in mpich-3.0.4 ? rlog doesn't
	work in configure process.
In-Reply-To: <-4466127925988670465@unknownmsgid>
References: <51AE0EC6.7040903@tpn.usp.br> <51B22F89.3050107@tpn.usp.br>
	<51B23CDA.9070502@tpn.usp.br> <-4466127925988670465@unknownmsgid>
Message-ID: <51B5DFB4.7070309@tpn.usp.br>

Hi Jeff and Antonio,

I checked my mpich installation and everything is ok.

I used the variables CC=mpicc and MPI_CC=mpicc and I don't have any success.

Looking the log, my guess is the command to link the conftest.c wasn't
built in a correct way, because the Include flag '-I' and the library
path '-L' was not inserted in the command line.

configure:4099: /home/fernando_luz/usr/mpich/bin/mpicc -o conftest
/home/fernando_luz/usr/mpich/include/ conftest.c
/home/fernando_luz/usr/mpich/lib/ >&5

Running this command in the terminal (with -I and -L) I have success.

It's my first time using the autoconf, and I don't have an idea where
I can modify this.

Regards

On 06/07/2013 07:38 PM, Jeff Hammond wrote:
> Did you read it? It's got the reason that configure is failing pretty
> clearly stated. You need to debug your configure invocation. Maybe use
> CC=mpicc etc instead...
>
> Jeff
>
> Sent from my iPhone
>
> On Jun 7, 2013, at 3:04 PM, fernando_luz wrote:
>
>> In attachment.
>>
>> Fernando
>>
>> On 06/07/2013 04:56 PM, Jeff Hammond wrote:
>>> please attach config.log.
>>>
>>> jeff
>>>
>>> On Fri, Jun 7, 2013 at 1:07 PM, fernando_luz
>>> wrote:
>>>> Hi Rajeev,
>>>>
>>>> Thanks for the answers.
>>>>
>>>> I get the source code in repository, but I didn't succeed in the compile
>>>> process.
>>>> I ran the autogen.sh and after this I tried to configure my installation and
>>>> I received the following error message.
>>>> >>>> >>>> fernando_luz at TPN000300:~/git/mpe$ ./configure >>>> --prefix=/home/fernando_luz/usr/mpe >>>> --with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc >>>> --with-mpilibs=/home/fernando_luz/usr/mpich/lib/ >>>> --with-mpiinc=/home/fernando_luz/usr/mpich/include/ >>>> Configuring MPE Profiling System with '--prefix=/home/fernando_luz/usr/mpe' >>>> '--with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc' >>>> '--with-mpilibs=/home/fernando_luz/usr/mpich/lib/' >>>> '--with-mpiinc=/home/fernando_luz/usr/mpich/include/' >>>> 'MPI_CC=/home/fernando_luz/usr/mpich/bin/mpicc' >>>> 'MPI_INC=/home/fernando_luz/usr/mpich/include' >>>> 'MPI_LIBS=/home/fernando_luz/usr/mpich/lib' >>>> checking for current directory name... /home/fernando_luz/git/mpe >>>> checking gnumake... yes using --no-print-directory >>>> checking BSD 4.4 make... no - whew >>>> checking OSF V3 make... no >>>> checking for virtual path format... VPATH >>>> User supplied MPI implmentation (Good Luck!) >>>> checking for leftover Makefiles in subpackages ... none >>>> checking for gcc... cc >>>> checking whether the C compiler works... yes >>>> checking for C compiler default output file name... a.out >>>> checking for suffix of executables... >>>> checking whether we are cross compiling... no >>>> checking for suffix of object files... o >>>> checking whether we are using the GNU C compiler... yes >>>> checking whether cc accepts -g... yes >>>> checking for cc option to accept ISO C89... none needed >>>> checking whether MPI_CC has been set ... >>>> /home/fernando_luz/usr/mpich/bin/mpicc >>>> checking whether we are using the GNU Fortran 77 compiler... no >>>> checking whether f77 accepts -g... no >>>> checking whether MPI_F77 has been set ... f77 >>>> checking for the linkage of the supplied MPI C definitions ... no >>>> configure: error: Cannot link with basic MPI C program! >>>> Check your MPI include paths, MPI libraries and MPI CC compiler >>>> >>>> Where /home/fernando_luz/usr/mpich/ is my mpi installation (MPICH-3.0.4). >>>> >>>> I prefer to use the mpe in repository because in the site, the last version >>>> was dated in 2010 and in the git repository the last commit was in 2012. >>>> >>>> Regards >>>> >>>> Fernando >>>> >>>> >>>> >>>> On 06/04/2013 03:05 PM, Rajeev Thakur wrote: >>>>> It can be downloaded from >>>>> http://www.mcs.anl.gov/research/projects/perfvis/download/index.htm. >>>>> >>>>> The source repository is at http://git.mpich.org/mpe.git/ >>>>> >>>>> Rajeev >>>>> >>>>> On Jun 4, 2013, at 12:48 PM, Jeff Hammond wrote: >>>>> >>>>>> MPE isn't actively developed and should sit strictly on top of any MPI >>>>>> implementation so you can just grab MPE from an older release of >>>>>> MPICH. >>>>>> >>>>>> My guess is that MPE will be a standalone download at some point in the >>>>>> future. >>>>>> >>>>>> Jeff >>>>>> >>>>>> On Tue, Jun 4, 2013 at 10:59 AM, fernando_luz >>>>>> wrote: >>>>>>> Hi, >>>>>>> >>>>>>> I didn't find the MPE source in mpich-3.0.4 package. Where I can >>>>>>> download >>>>>>> the source? It is still compatible with mpich? >>>>>>> >>>>>>> And I tried to install the logging support available in this release, >>>>>>> but my >>>>>>> try didn't was successful. 
I received the follow error: >>>>>>> >>>>>>> /home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure: >>>>>>> line 3694: PAC_CC_SUBDIR_SHLIBS: command not found >>>>>>> configure: creating ./config.status >>>>>>> config.status: error: cannot find input file: `Makefile.in' >>>>>>> configure: error: src/util/logging/rlog configure failed >>>>>>> >>>>>>> I attached the c.txt file used in the configuration. >>>>>>> >>>>>>> Regards >>>>>>> >>>>>>> Fernando >>>>>>> >>>>>>> _______________________________________________ >>>>>>> discuss mailing list discuss at mpich.org >>>>>>> To manage subscription options or unsubscribe: >>>>>>> https://lists.mpich.org/mailman/listinfo/discuss >>>>>> >>>>>> -- >>>>>> Jeff Hammond >>>>>> Argonne Leadership Computing Facility >>>>>> University of Chicago Computation Institute >>>>>> jhammond at alcf.anl.gov / (630) 252-5381 >>>>>> http://www.linkedin.com/in/jeffhammond >>>>>> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond >>>>>> ALCF docs: http://www.alcf.anl.gov/user-guides >>>>>> _______________________________________________ >>>>>> discuss mailing list discuss at mpich.org >>>>>> To manage subscription options or unsubscribe: >>>>>> https://lists.mpich.org/mailman/listinfo/discuss >>>>> _______________________________________________ >>>>> discuss mailing list discuss at mpich.org >>>>> To manage subscription options or unsubscribe: >>>>> https://lists.mpich.org/mailman/listinfo/discuss >>>> _______________________________________________ >>>> discuss mailing list discuss at mpich.org >>>> To manage subscription options or unsubscribe: >>>> https://lists.mpich.org/mailman/listinfo/discuss >>> >> >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss From fernando_luz at tpn.usp.br Mon Jun 10 09:26:47 2013 From: fernando_luz at tpn.usp.br (fernando_luz) Date: Mon, 10 Jun 2013 11:26:47 -0300 Subject: [mpich-discuss] MPE is available in mpich-3.0.4 ? rlog doesn't work in configure process. In-Reply-To: <51B5DFB4.7070309@tpn.usp.br> References: <51AE0EC6.7040903@tpn.usp.br> <51B22F89.3050107@tpn.usp.br> <51B23CDA.9070502@tpn.usp.br> <-4466127925988670465@unknownmsgid> <51B5DFB4.7070309@tpn.usp.br> Message-ID: <51B5E227.1010107@tpn.usp.br> I made this modification in configure and I have success ./configure --prefix=/home/fernando_luz/usr/mpe --with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc --with-mpilibs=-L/home/fernando_luz/usr/mpich/lib/ --with-mpiinc=-I/home/fernando_luz/usr/mpich/include/ --disable-f77 but it's the correct way to execute the configure? Regards On 06/10/2013 11:16 AM, fernando_luz wrote: > Hi Jeff and Antonio, > > > I checked my mpich installation and everything is ok. > > I used the variables CC=mpicc and MPI_CC=mpicc and I don't have any > success. > > Looking the log, my guess is the command to link the conftest.c wasn't > built in a correct way, because the Include flag '-I' and the library > path '-L' was not inserted in the command line. > > configure:4099: /home/fernando_luz/usr/mpich/bin/mpicc -o conftest > /home/fernando_luz/usr/mpich/include/ conftest.c > /home/fernando_luz/usr/mpich/lib/ >&5 > > Running this command in the terminal (with -I and -L) I have success. 
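Spelled out, the failing link line from config.log and a hand-corrected one differ only in the missing flags. The second command below is an assumed reconstruction of what succeeded in the terminal (an MPICH mpicc wrapper would normally add its own MPI flags on top of these):

    # link line as configure emitted it: bare directories, no -I/-L (fails)
    /home/fernando_luz/usr/mpich/bin/mpicc -o conftest \
        /home/fernando_luz/usr/mpich/include/ conftest.c \
        /home/fernando_luz/usr/mpich/lib/

    # the same test with the flags spelled out by hand (links successfully)
    /home/fernando_luz/usr/mpich/bin/mpicc -o conftest \
        -I/home/fernando_luz/usr/mpich/include conftest.c \
        -L/home/fernando_luz/usr/mpich/lib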
> > It's my first time using the autoconf, and I don't have an idea where > I can modify this. > > Regards > > On 06/07/2013 07:38 PM, Jeff Hammond wrote: >> Did you read it? It's got the reason that configure is failing pretty >> clearly stated. You need to debug your configure invocation. Maybe use >> CC=mpicc etc instead... >> >> Jeff >> >> Sent from my iPhone >> >> On Jun 7, 2013, at 3:04 PM, fernando_luz >> wrote: >> >>> In attachment. >>> >>> Fernando >>> >>> On 06/07/2013 04:56 PM, Jeff Hammond wrote: >>>> please attach config.log. >>>> >>>> jeff >>>> >>>> On Fri, Jun 7, 2013 at 1:07 PM, fernando_luz >>>> wrote: >>>>> Hi Rajeev, >>>>> >>>>> Thanks for the answers. >>>>> >>>>> I get the source code in repository, but I didn't succeed in the >>>>> compile >>>>> process. >>>>> I ran the autogen.sh and after this I tried to configure my >>>>> installation and >>>>> I received the following error message. >>>>> >>>>> >>>>> fernando_luz at TPN000300:~/git/mpe$ ./configure >>>>> --prefix=/home/fernando_luz/usr/mpe >>>>> --with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc >>>>> --with-mpilibs=/home/fernando_luz/usr/mpich/lib/ >>>>> --with-mpiinc=/home/fernando_luz/usr/mpich/include/ >>>>> Configuring MPE Profiling System with >>>>> '--prefix=/home/fernando_luz/usr/mpe' >>>>> '--with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc' >>>>> '--with-mpilibs=/home/fernando_luz/usr/mpich/lib/' >>>>> '--with-mpiinc=/home/fernando_luz/usr/mpich/include/' >>>>> 'MPI_CC=/home/fernando_luz/usr/mpich/bin/mpicc' >>>>> 'MPI_INC=/home/fernando_luz/usr/mpich/include' >>>>> 'MPI_LIBS=/home/fernando_luz/usr/mpich/lib' >>>>> checking for current directory name... /home/fernando_luz/git/mpe >>>>> checking gnumake... yes using --no-print-directory >>>>> checking BSD 4.4 make... no - whew >>>>> checking OSF V3 make... no >>>>> checking for virtual path format... VPATH >>>>> User supplied MPI implmentation (Good Luck!) >>>>> checking for leftover Makefiles in subpackages ... none >>>>> checking for gcc... cc >>>>> checking whether the C compiler works... yes >>>>> checking for C compiler default output file name... a.out >>>>> checking for suffix of executables... >>>>> checking whether we are cross compiling... no >>>>> checking for suffix of object files... o >>>>> checking whether we are using the GNU C compiler... yes >>>>> checking whether cc accepts -g... yes >>>>> checking for cc option to accept ISO C89... none needed >>>>> checking whether MPI_CC has been set ... >>>>> /home/fernando_luz/usr/mpich/bin/mpicc >>>>> checking whether we are using the GNU Fortran 77 compiler... no >>>>> checking whether f77 accepts -g... no >>>>> checking whether MPI_F77 has been set ... f77 >>>>> checking for the linkage of the supplied MPI C definitions ... no >>>>> configure: error: Cannot link with basic MPI C program! >>>>> Check your MPI include paths, MPI libraries and MPI CC compiler >>>>> >>>>> Where /home/fernando_luz/usr/mpich/ is my mpi installation >>>>> (MPICH-3.0.4). >>>>> >>>>> I prefer to use the mpe in repository because in the site, the >>>>> last version >>>>> was dated in 2010 and in the git repository the last commit was in >>>>> 2012. >>>>> >>>>> Regards >>>>> >>>>> Fernando >>>>> >>>>> >>>>> >>>>> On 06/04/2013 03:05 PM, Rajeev Thakur wrote: >>>>>> It can be downloaded from >>>>>> http://www.mcs.anl.gov/research/projects/perfvis/download/index.htm. 
>>>>>> >>>>>> The source repository is at http://git.mpich.org/mpe.git/ >>>>>> >>>>>> Rajeev >>>>>> >>>>>> On Jun 4, 2013, at 12:48 PM, Jeff Hammond wrote: >>>>>> >>>>>>> MPE isn't actively developed and should sit strictly on top of >>>>>>> any MPI >>>>>>> implementation so you can just grab MPE from an older release of >>>>>>> MPICH. >>>>>>> >>>>>>> My guess is that MPE will be a standalone download at some point >>>>>>> in the >>>>>>> future. >>>>>>> >>>>>>> Jeff >>>>>>> >>>>>>> On Tue, Jun 4, 2013 at 10:59 AM, fernando_luz >>>>>>> >>>>>>> wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> I didn't find the MPE source in mpich-3.0.4 package. Where I can >>>>>>>> download >>>>>>>> the source? It is still compatible with mpich? >>>>>>>> >>>>>>>> And I tried to install the logging support available in this >>>>>>>> release, >>>>>>>> but my >>>>>>>> try didn't was successful. I received the follow error: >>>>>>>> >>>>>>>> /home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure: >>>>>>>> >>>>>>>> line 3694: PAC_CC_SUBDIR_SHLIBS: command not found >>>>>>>> configure: creating ./config.status >>>>>>>> config.status: error: cannot find input file: `Makefile.in' >>>>>>>> configure: error: src/util/logging/rlog configure failed >>>>>>>> >>>>>>>> I attached the c.txt file used in the configuration. >>>>>>>> >>>>>>>> Regards >>>>>>>> >>>>>>>> Fernando >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> discuss mailing list discuss at mpich.org >>>>>>>> To manage subscription options or unsubscribe: >>>>>>>> https://lists.mpich.org/mailman/listinfo/discuss >>>>>>> >>>>>>> -- >>>>>>> Jeff Hammond >>>>>>> Argonne Leadership Computing Facility >>>>>>> University of Chicago Computation Institute >>>>>>> jhammond at alcf.anl.gov / (630) 252-5381 >>>>>>> http://www.linkedin.com/in/jeffhammond >>>>>>> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond >>>>>>> ALCF docs: http://www.alcf.anl.gov/user-guides >>>>>>> _______________________________________________ >>>>>>> discuss mailing list discuss at mpich.org >>>>>>> To manage subscription options or unsubscribe: >>>>>>> https://lists.mpich.org/mailman/listinfo/discuss >>>>>> _______________________________________________ >>>>>> discuss mailing list discuss at mpich.org >>>>>> To manage subscription options or unsubscribe: >>>>>> https://lists.mpich.org/mailman/listinfo/discuss >>>>> _______________________________________________ >>>>> discuss mailing list discuss at mpich.org >>>>> To manage subscription options or unsubscribe: >>>>> https://lists.mpich.org/mailman/listinfo/discuss >>>> >>> >>> _______________________________________________ >>> discuss mailing list discuss at mpich.org >>> To manage subscription options or unsubscribe: >>> https://lists.mpich.org/mailman/listinfo/discuss >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eibhlin.lee10 at imperial.ac.uk Mon Jun 10 09:42:36 2013 From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin) Date: Mon, 10 Jun 2013 14:42:36 +0000 Subject: [mpich-discuss] Process exited without calling finalize Message-ID: <2D283C3861654E41AEB39AE4B6767663173A1B68@icexch-m3.ic.ac.uk> Hello, I am getting an error similar to that of some of the first MPICH versions when testing whether MPICH-3.0.4 has downloaded properly on my RaspberryPi. I downloaded and (thought) I installed mpich-3.0.4 correctly using the configure add ons --with-pm=smpd and --with-pmi=smpd I set the smpd phrase using smpd -getphrase. Did smpd -s. Then copied the image to a second machine. The test file cpi runs as expected on the master (only) and also on the slave (only). However, when I try to run cpi: mpiexec -phrase abc -machinefile file -n 2 /home/pi/examples/cpi Where abc is the phrase for both machines and file contains two IP addresses. I get the error: Fatal error in MPI_Init: Other MPI error, error stack: MPIR_Init_thread(433).................: MPID_Init(176)........................: channel initialization failed MPIDI_CH3_Init(70)....................: MPID_nem_init(238)....................: MPIDI_CH3I_Seg_commit(366)............: MPIU_SHMW_Hnd_deserialize(324)........: MPIU_SHMW_Seg_open(865)...............: MPIU_SHMW_Seg_create_attach_templ(637): open failed - No such file or directory job aborted: rank: node: exit code[: error message] 0: 129.32.139.gfd: -2 1: 129.32.139.plm: 1: process 1 exited without calling finalize where gfd and plm are the endings of the IP address. Please help me figure out what I've done wrong. Regards, Eibhlin -------------- next part -------------- An HTML attachment was scrubbed... URL: From eibhlin.lee10 at imperial.ac.uk Mon Jun 10 10:26:12 2013 From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin) Date: Mon, 10 Jun 2013 15:26:12 +0000 Subject: [mpich-discuss] Process exited without calling finalize In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A1B68@icexch-m3.ic.ac.uk> References: <2D283C3861654E41AEB39AE4B6767663173A1B68@icexch-m3.ic.ac.uk> Message-ID: <2D283C3861654E41AEB39AE4B6767663173A1B88@icexch-m3.ic.ac.uk> Please disregard. I had forgotten to name the hosts differently. Thank you Eibhlin ________________________________ From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Lee, Eibhlin [eibhlin.lee10 at imperial.ac.uk] Sent: 10 June 2013 15:42 To: discuss at mpich.org Subject: [mpich-discuss] Process exited without calling finalize Hello, I am getting an error similar to that of some of the first MPICH versions when testing whether MPICH-3.0.4 has downloaded properly on my RaspberryPi. I downloaded and (thought) I installed mpich-3.0.4 correctly using the configure add ons --with-pm=smpd and --with-pmi=smpd I set the smpd phrase using smpd -getphrase. Did smpd -s. Then copied the image to a second machine. The test file cpi runs as expected on the master (only) and also on the slave (only). However, when I try to run cpi: mpiexec -phrase abc -machinefile file -n 2 /home/pi/examples/cpi Where abc is the phrase for both machines and file contains two IP addresses. 
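For reference, the machine file for a two-board smpd setup like this is simply one address per line; the addresses below are placeholders rather than the poster's (partially elided) ones. A sketch of the file and the launch:

    $ cat file
    192.168.0.10
    192.168.0.11

    $ mpiexec -phrase abc -machinefile file -n 2 /home/pi/examples/cpi

(As the follow-up below shows, the two machines must also carry distinct hostnames.)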
I get the error: Fatal error in MPI_Init: Other MPI error, error stack: MPIR_Init_thread(433).................: MPID_Init(176)........................: channel initialization failed MPIDI_CH3_Init(70)....................: MPID_nem_init(238)....................: MPIDI_CH3I_Seg_commit(366)............: MPIU_SHMW_Hnd_deserialize(324)........: MPIU_SHMW_Seg_open(865)...............: MPIU_SHMW_Seg_create_attach_templ(637): open failed - No such file or directory job aborted: rank: node: exit code[: error message] 0: 129.32.139.gfd: -2 1: 129.32.139.plm: 1: process 1 exited without calling finalize where gfd and plm are the endings of the IP address. Please help me figure out what I've done wrong. Regards, Eibhlin -------------- next part -------------- An HTML attachment was scrubbed... URL: From jhammond at alcf.anl.gov Mon Jun 10 12:56:53 2013 From: jhammond at alcf.anl.gov (Jeff Hammond) Date: Mon, 10 Jun 2013 11:56:53 -0600 Subject: [mpich-discuss] MPE is available in mpich-3.0.4 ? rlog doesn't work in configure process. In-Reply-To: <51B5E227.1010107@tpn.usp.br> References: <51AE0EC6.7040903@tpn.usp.br> <51B22F89.3050107@tpn.usp.br> <51B23CDA.9070502@tpn.usp.br> <-4466127925988670465@unknownmsgid> <51B5DFB4.7070309@tpn.usp.br> <51B5E227.1010107@tpn.usp.br> Message-ID: I don't think there's anything wrong with this, but can you confirm that the following does not work? ./configure --prefix=/home/fernando_luz/usr/mpe --with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc --disable-f77 I would expect that --with-mpicc is sufficient and that mpilibs and mpiinc can be populated automatically using this information. Jeff On Mon, Jun 10, 2013 at 8:26 AM, fernando_luz wrote: > I made this modification in configure and I have success > > ./configure --prefix=/home/fernando_luz/usr/mpe > --with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc > --with-mpilibs=-L/home/fernando_luz/usr/mpich/lib/ > --with-mpiinc=-I/home/fernando_luz/usr/mpich/include/ --disable-f77 > > but it's the correct way to execute the configure? > > Regards > > > On 06/10/2013 11:16 AM, fernando_luz wrote: > > Hi Jeff and Antonio, > > > I checked my mpich installation and everything is ok. > > I used the variables CC=mpicc and MPI_CC=mpicc and I don't have any success. > > Looking the log, my guess is the command to link the conftest.c wasn't built > in a correct way, because the Include flag '-I' and the library path '-L' > was not inserted in the command line. > > configure:4099: /home/fernando_luz/usr/mpich/bin/mpicc -o conftest > /home/fernando_luz/usr/mpich/include/ conftest.c > /home/fernando_luz/usr/mpich/lib/ >&5 > > Running this command in the terminal (with -I and -L) I have success. > > It's my first time using the autoconf, and I don't have an idea where I can > modify this. > > Regards > > On 06/07/2013 07:38 PM, Jeff Hammond wrote: > > Did you read it? It's got the reason that configure is failing pretty > clearly stated. You need to debug your configure invocation. Maybe use > CC=mpicc etc instead... > > Jeff > > Sent from my iPhone > > On Jun 7, 2013, at 3:04 PM, fernando_luz wrote: > > In attachment. > > Fernando > > On 06/07/2013 04:56 PM, Jeff Hammond wrote: > > please attach config.log. > > jeff > > On Fri, Jun 7, 2013 at 1:07 PM, fernando_luz > wrote: > > Hi Rajeev, > > Thanks for the answers. > > I get the source code in repository, but I didn't succeed in the compile > process. > I ran the autogen.sh and after this I tried to configure my installation and > I received the following error message. 
> > > fernando_luz at TPN000300:~/git/mpe$ ./configure > --prefix=/home/fernando_luz/usr/mpe > --with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc > --with-mpilibs=/home/fernando_luz/usr/mpich/lib/ > --with-mpiinc=/home/fernando_luz/usr/mpich/include/ > Configuring MPE Profiling System with '--prefix=/home/fernando_luz/usr/mpe' > '--with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc' > '--with-mpilibs=/home/fernando_luz/usr/mpich/lib/' > '--with-mpiinc=/home/fernando_luz/usr/mpich/include/' > 'MPI_CC=/home/fernando_luz/usr/mpich/bin/mpicc' > 'MPI_INC=/home/fernando_luz/usr/mpich/include' > 'MPI_LIBS=/home/fernando_luz/usr/mpich/lib' > checking for current directory name... /home/fernando_luz/git/mpe > checking gnumake... yes using --no-print-directory > checking BSD 4.4 make... no - whew > checking OSF V3 make... no > checking for virtual path format... VPATH > User supplied MPI implmentation (Good Luck!) > checking for leftover Makefiles in subpackages ... none > checking for gcc... cc > checking whether the C compiler works... yes > checking for C compiler default output file name... a.out > checking for suffix of executables... > checking whether we are cross compiling... no > checking for suffix of object files... o > checking whether we are using the GNU C compiler... yes > checking whether cc accepts -g... yes > checking for cc option to accept ISO C89... none needed > checking whether MPI_CC has been set ... > /home/fernando_luz/usr/mpich/bin/mpicc > checking whether we are using the GNU Fortran 77 compiler... no > checking whether f77 accepts -g... no > checking whether MPI_F77 has been set ... f77 > checking for the linkage of the supplied MPI C definitions ... no > configure: error: Cannot link with basic MPI C program! > Check your MPI include paths, MPI libraries and MPI CC compiler > > Where /home/fernando_luz/usr/mpich/ is my mpi installation (MPICH-3.0.4). > > I prefer to use the mpe in repository because in the site, the last version > was dated in 2010 and in the git repository the last commit was in 2012. > > Regards > > Fernando > > > > On 06/04/2013 03:05 PM, Rajeev Thakur wrote: > > It can be downloaded from > http://www.mcs.anl.gov/research/projects/perfvis/download/index.htm. > > The source repository is at http://git.mpich.org/mpe.git/ > > Rajeev > > On Jun 4, 2013, at 12:48 PM, Jeff Hammond wrote: > > MPE isn't actively developed and should sit strictly on top of any MPI > implementation so you can just grab MPE from an older release of > MPICH. > > My guess is that MPE will be a standalone download at some point in the > future. > > Jeff > > On Tue, Jun 4, 2013 at 10:59 AM, fernando_luz > wrote: > > Hi, > > I didn't find the MPE source in mpich-3.0.4 package. Where I can > download > the source? It is still compatible with mpich? > > And I tried to install the logging support available in this release, > but my > try didn't was successful. I received the follow error: > > /home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure: > line 3694: PAC_CC_SUBDIR_SHLIBS: command not found > configure: creating ./config.status > config.status: error: cannot find input file: `Makefile.in' > configure: error: src/util/logging/rlog configure failed > > I attached the c.txt file used in the configuration. 
> > Regards > > Fernando > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > > > -- > Jeff Hammond > Argonne Leadership Computing Facility > University of Chicago Computation Institute > jhammond at alcf.anl.gov / (630) 252-5381 > http://www.linkedin.com/in/jeffhammond > https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond > ALCF docs: http://www.alcf.anl.gov/user-guides > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > > > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > > > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss -- Jeff Hammond Argonne Leadership Computing Facility University of Chicago Computation Institute jhammond at alcf.anl.gov / (630) 252-5381 http://www.linkedin.com/in/jeffhammond https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond ALCF docs: http://www.alcf.anl.gov/user-guides From fernando_luz at tpn.usp.br Mon Jun 10 14:27:48 2013 From: fernando_luz at tpn.usp.br (fernando_luz) Date: Mon, 10 Jun 2013 16:27:48 -0300 Subject: [mpich-discuss] MPE is available in mpich-3.0.4 ? rlog doesn't work in configure process. In-Reply-To: References: <51AE0EC6.7040903@tpn.usp.br> <51B22F89.3050107@tpn.usp.br> <51B23CDA.9070502@tpn.usp.br> <-4466127925988670465@unknownmsgid> <51B5DFB4.7070309@tpn.usp.br> <51B5E227.1010107@tpn.usp.br> Message-ID: <51B628B4.10403@tpn.usp.br> Jeff, I ran your configuration and I obtained the same error (I attached config_01.log) I think the main problem is in the composition of compile line (config_01.log:145), the include path and the library path was found, but not inserted in a correct way (without -I and -L). When I specified the -I for --with-mpiinc and -L for --with-mpilibs, the configuration works (config_02.log) I use the autoconf version 2.68, and to generate the configure, I ran autogen.sh, but some warnings was shown (warning_autogen.log) Fernando On 06/10/2013 02:56 PM, Jeff Hammond wrote: > I don't think there's anything wrong with this, but can you confirm > that the following does not work? 
> > ./configure --prefix=/home/fernando_luz/usr/mpe > --with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc --disable-f77 > > I would expect that --with-mpicc is sufficient and that mpilibs and > mpiinc can be populated automatically using this information. > > Jeff > > On Mon, Jun 10, 2013 at 8:26 AM, fernando_luz wrote: >> I made this modification in configure and I have success >> >> ./configure --prefix=/home/fernando_luz/usr/mpe >> --with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc >> --with-mpilibs=-L/home/fernando_luz/usr/mpich/lib/ >> --with-mpiinc=-I/home/fernando_luz/usr/mpich/include/ --disable-f77 >> >> but it's the correct way to execute the configure? >> >> Regards >> >> >> On 06/10/2013 11:16 AM, fernando_luz wrote: >> >> Hi Jeff and Antonio, >> >> >> I checked my mpich installation and everything is ok. >> >> I used the variables CC=mpicc and MPI_CC=mpicc and I don't have any success. >> >> Looking the log, my guess is the command to link the conftest.c wasn't built >> in a correct way, because the Include flag '-I' and the library path '-L' >> was not inserted in the command line. >> >> configure:4099: /home/fernando_luz/usr/mpich/bin/mpicc -o conftest >> /home/fernando_luz/usr/mpich/include/ conftest.c >> /home/fernando_luz/usr/mpich/lib/ >&5 >> >> Running this command in the terminal (with -I and -L) I have success. >> >> It's my first time using the autoconf, and I don't have an idea where I can >> modify this. >> >> Regards >> >> On 06/07/2013 07:38 PM, Jeff Hammond wrote: >> >> Did you read it? It's got the reason that configure is failing pretty >> clearly stated. You need to debug your configure invocation. Maybe use >> CC=mpicc etc instead... >> >> Jeff >> >> Sent from my iPhone >> >> On Jun 7, 2013, at 3:04 PM, fernando_luz wrote: >> >> In attachment. >> >> Fernando >> >> On 06/07/2013 04:56 PM, Jeff Hammond wrote: >> >> please attach config.log. >> >> jeff >> >> On Fri, Jun 7, 2013 at 1:07 PM, fernando_luz >> wrote: >> >> Hi Rajeev, >> >> Thanks for the answers. >> >> I get the source code in repository, but I didn't succeed in the compile >> process. >> I ran the autogen.sh and after this I tried to configure my installation and >> I received the following error message. >> >> >> fernando_luz at TPN000300:~/git/mpe$ ./configure >> --prefix=/home/fernando_luz/usr/mpe >> --with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc >> --with-mpilibs=/home/fernando_luz/usr/mpich/lib/ >> --with-mpiinc=/home/fernando_luz/usr/mpich/include/ >> Configuring MPE Profiling System with '--prefix=/home/fernando_luz/usr/mpe' >> '--with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc' >> '--with-mpilibs=/home/fernando_luz/usr/mpich/lib/' >> '--with-mpiinc=/home/fernando_luz/usr/mpich/include/' >> 'MPI_CC=/home/fernando_luz/usr/mpich/bin/mpicc' >> 'MPI_INC=/home/fernando_luz/usr/mpich/include' >> 'MPI_LIBS=/home/fernando_luz/usr/mpich/lib' >> checking for current directory name... /home/fernando_luz/git/mpe >> checking gnumake... yes using --no-print-directory >> checking BSD 4.4 make... no - whew >> checking OSF V3 make... no >> checking for virtual path format... VPATH >> User supplied MPI implmentation (Good Luck!) >> checking for leftover Makefiles in subpackages ... none >> checking for gcc... cc >> checking whether the C compiler works... yes >> checking for C compiler default output file name... a.out >> checking for suffix of executables... >> checking whether we are cross compiling... no >> checking for suffix of object files... 
o >> checking whether we are using the GNU C compiler... yes >> checking whether cc accepts -g... yes >> checking for cc option to accept ISO C89... none needed >> checking whether MPI_CC has been set ... >> /home/fernando_luz/usr/mpich/bin/mpicc >> checking whether we are using the GNU Fortran 77 compiler... no >> checking whether f77 accepts -g... no >> checking whether MPI_F77 has been set ... f77 >> checking for the linkage of the supplied MPI C definitions ... no >> configure: error: Cannot link with basic MPI C program! >> Check your MPI include paths, MPI libraries and MPI CC compiler >> >> Where /home/fernando_luz/usr/mpich/ is my mpi installation (MPICH-3.0.4). >> >> I prefer to use the mpe in repository because in the site, the last version >> was dated in 2010 and in the git repository the last commit was in 2012. >> >> Regards >> >> Fernando >> >> >> >> On 06/04/2013 03:05 PM, Rajeev Thakur wrote: >> >> It can be downloaded from >> http://www.mcs.anl.gov/research/projects/perfvis/download/index.htm. >> >> The source repository is at http://git.mpich.org/mpe.git/ >> >> Rajeev >> >> On Jun 4, 2013, at 12:48 PM, Jeff Hammond wrote: >> >> MPE isn't actively developed and should sit strictly on top of any MPI >> implementation so you can just grab MPE from an older release of >> MPICH. >> >> My guess is that MPE will be a standalone download at some point in the >> future. >> >> Jeff >> >> On Tue, Jun 4, 2013 at 10:59 AM, fernando_luz >> wrote: >> >> Hi, >> >> I didn't find the MPE source in mpich-3.0.4 package. Where I can >> download >> the source? It is still compatible with mpich? >> >> And I tried to install the logging support available in this release, >> but my >> try didn't was successful. I received the follow error: >> >> /home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure: >> line 3694: PAC_CC_SUBDIR_SHLIBS: command not found >> configure: creating ./config.status >> config.status: error: cannot find input file: `Makefile.in' >> configure: error: src/util/logging/rlog configure failed >> >> I attached the c.txt file used in the configuration. 
>> >> Regards >> >> Fernando >> >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss >> >> >> -- >> Jeff Hammond >> Argonne Leadership Computing Facility >> University of Chicago Computation Institute >> jhammond at alcf.anl.gov / (630) 252-5381 >> http://www.linkedin.com/in/jeffhammond >> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond >> ALCF docs: http://www.alcf.anl.gov/user-guides >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss >> >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss >> >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss >> >> >> >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss >> >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss >> >> >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss >> >> >> >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss > > -------------- next part -------------- A non-text attachment was scrubbed... Name: config_01.log Type: text/x-log Size: 10471 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: config_02.log Type: text/x-log Size: 16230 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: warning_autogen.log Type: text/x-log Size: 3348 bytes Desc: not available URL: From jhammond at alcf.anl.gov Mon Jun 10 16:50:05 2013 From: jhammond at alcf.anl.gov (Jeff Hammond) Date: Mon, 10 Jun 2013 15:50:05 -0600 Subject: [mpich-discuss] MPE is available in mpich-3.0.4 ? rlog doesn't work in configure process. In-Reply-To: <51B628B4.10403@tpn.usp.br> References: <51AE0EC6.7040903@tpn.usp.br> <51B22F89.3050107@tpn.usp.br> <51B23CDA.9070502@tpn.usp.br> <-4466127925988670465@unknownmsgid> <51B5DFB4.7070309@tpn.usp.br> <51B5E227.1010107@tpn.usp.br> <51B628B4.10403@tpn.usp.br> Message-ID: Okay, I guess this is a bug in the autoconf. I'm glad you have identified the workaround. I'll put this in Trac so that it doesn't get lost. Jeff On Mon, Jun 10, 2013 at 1:27 PM, fernando_luz wrote: > Jeff, > > I ran your configuration and I obtained the same error (I attached > config_01.log) > > I think the main problem is in the composition of compile line > (config_01.log:145), the include path and the library path was found, but > not inserted in a correct way (without -I and -L). 
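A quick way to see what the wrapper would have done on its own -- and presumably why --with-mpicc alone was expected to suffice -- is to ask it to print its underlying compile/link line without running it:

    # -show prints the command mpicc would execute, including the
    # include and library flags the wrapper injects by itself
    /home/fernando_luz/usr/mpich/bin/mpicc -show conftest.c -o conftest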
> > When I specified the -I for --with-mpiinc and -L for --with-mpilibs, the > configuration works (config_02.log) > > I use the autoconf version 2.68, and to generate the configure, I ran > autogen.sh, but some warnings was shown (warning_autogen.log) > > Fernando > > > > On 06/10/2013 02:56 PM, Jeff Hammond wrote: >> >> I don't think there's anything wrong with this, but can you confirm >> that the following does not work? >> >> ./configure --prefix=/home/fernando_luz/usr/mpe >> --with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc --disable-f77 >> >> I would expect that --with-mpicc is sufficient and that mpilibs and >> mpiinc can be populated automatically using this information. >> >> Jeff >> >> On Mon, Jun 10, 2013 at 8:26 AM, fernando_luz >> wrote: >>> >>> I made this modification in configure and I have success >>> >>> ./configure --prefix=/home/fernando_luz/usr/mpe >>> --with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc >>> --with-mpilibs=-L/home/fernando_luz/usr/mpich/lib/ >>> --with-mpiinc=-I/home/fernando_luz/usr/mpich/include/ --disable-f77 >>> >>> but it's the correct way to execute the configure? >>> >>> Regards >>> >>> >>> On 06/10/2013 11:16 AM, fernando_luz wrote: >>> >>> Hi Jeff and Antonio, >>> >>> >>> I checked my mpich installation and everything is ok. >>> >>> I used the variables CC=mpicc and MPI_CC=mpicc and I don't have any >>> success. >>> >>> Looking the log, my guess is the command to link the conftest.c wasn't >>> built >>> in a correct way, because the Include flag '-I' and the library path '-L' >>> was not inserted in the command line. >>> >>> configure:4099: /home/fernando_luz/usr/mpich/bin/mpicc -o conftest >>> /home/fernando_luz/usr/mpich/include/ conftest.c >>> /home/fernando_luz/usr/mpich/lib/ >&5 >>> >>> Running this command in the terminal (with -I and -L) I have success. >>> >>> It's my first time using the autoconf, and I don't have an idea where I >>> can >>> modify this. >>> >>> Regards >>> >>> On 06/07/2013 07:38 PM, Jeff Hammond wrote: >>> >>> Did you read it? It's got the reason that configure is failing pretty >>> clearly stated. You need to debug your configure invocation. Maybe use >>> CC=mpicc etc instead... >>> >>> Jeff >>> >>> Sent from my iPhone >>> >>> On Jun 7, 2013, at 3:04 PM, fernando_luz wrote: >>> >>> In attachment. >>> >>> Fernando >>> >>> On 06/07/2013 04:56 PM, Jeff Hammond wrote: >>> >>> please attach config.log. >>> >>> jeff >>> >>> On Fri, Jun 7, 2013 at 1:07 PM, fernando_luz >>> wrote: >>> >>> Hi Rajeev, >>> >>> Thanks for the answers. >>> >>> I get the source code in repository, but I didn't succeed in the compile >>> process. >>> I ran the autogen.sh and after this I tried to configure my installation >>> and >>> I received the following error message. >>> >>> >>> fernando_luz at TPN000300:~/git/mpe$ ./configure >>> --prefix=/home/fernando_luz/usr/mpe >>> --with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc >>> --with-mpilibs=/home/fernando_luz/usr/mpich/lib/ >>> --with-mpiinc=/home/fernando_luz/usr/mpich/include/ >>> Configuring MPE Profiling System with >>> '--prefix=/home/fernando_luz/usr/mpe' >>> '--with-mpicc=/home/fernando_luz/usr/mpich/bin/mpicc' >>> '--with-mpilibs=/home/fernando_luz/usr/mpich/lib/' >>> '--with-mpiinc=/home/fernando_luz/usr/mpich/include/' >>> 'MPI_CC=/home/fernando_luz/usr/mpich/bin/mpicc' >>> 'MPI_INC=/home/fernando_luz/usr/mpich/include' >>> 'MPI_LIBS=/home/fernando_luz/usr/mpich/lib' >>> checking for current directory name... /home/fernando_luz/git/mpe >>> checking gnumake... 
>>> checking gnumake... yes    using --no-print-directory
>>> checking BSD 4.4 make... no - whew
>>> checking OSF V3 make... no
>>> checking for virtual path format... VPATH
>>> User supplied MPI implmentation (Good Luck!)
>>> checking for leftover Makefiles in subpackages ... none
>>> checking for gcc... cc
>>> checking whether the C compiler works... yes
>>> checking for C compiler default output file name... a.out
>>> checking for suffix of executables...
>>> checking whether we are cross compiling... no
>>> checking for suffix of object files... o
>>> checking whether we are using the GNU C compiler... yes
>>> checking whether cc accepts -g... yes
>>> checking for cc option to accept ISO C89... none needed
>>> checking whether MPI_CC has been set ... /home/fernando_luz/usr/mpich/bin/mpicc
>>> checking whether we are using the GNU Fortran 77 compiler... no
>>> checking whether f77 accepts -g... no
>>> checking whether MPI_F77 has been set ... f77
>>> checking for the linkage of the supplied MPI C definitions ... no
>>> configure: error: Cannot link with basic MPI C program!
>>> Check your MPI include paths, MPI libraries and MPI CC compiler
>>>
>>> where /home/fernando_luz/usr/mpich/ is my MPI installation (MPICH-3.0.4).
>>>
>>> I prefer to use the MPE from the repository because the version on the
>>> site is dated 2010, while the last commit in the git repository was in
>>> 2012.
>>>
>>> Regards
>>>
>>> Fernando
>>>
>>> On 06/04/2013 03:05 PM, Rajeev Thakur wrote:
>>>
>>> It can be downloaded from
>>> http://www.mcs.anl.gov/research/projects/perfvis/download/index.htm.
>>>
>>> The source repository is at http://git.mpich.org/mpe.git/
>>>
>>> Rajeev
>>>
>>> On Jun 4, 2013, at 12:48 PM, Jeff Hammond wrote:
>>>
>>> MPE isn't actively developed and should sit strictly on top of any MPI
>>> implementation so you can just grab MPE from an older release of
>>> MPICH.
>>>
>>> My guess is that MPE will be a standalone download at some point in the
>>> future.
>>>
>>> Jeff
>>>
>>> On Tue, Jun 4, 2013 at 10:59 AM, fernando_luz
>>> wrote:
>>>
>>> Hi,
>>>
>>> I didn't find the MPE source in the mpich-3.0.4 package. Where can I
>>> download the source? Is it still compatible with mpich?
>>>
>>> And I tried to install the logging support available in this release,
>>> but my attempt was not successful. I received the following error:
>>>
>>> /home/fernando_luz/software/mpich-3.0.4/src/util/logging/rlog/configure:
>>> line 3694: PAC_CC_SUBDIR_SHLIBS: command not found
>>> configure: creating ./config.status
>>> config.status: error: cannot find input file: `Makefile.in'
>>> configure: error: src/util/logging/rlog configure failed
>>>
>>> I attached the c.txt file used in the configuration.
>>> Regards
>>>
>>> Fernando
>>>
>>> _______________________________________________
>>> discuss mailing list     discuss at mpich.org
>>> To manage subscription options or unsubscribe:
>>> https://lists.mpich.org/mailman/listinfo/discuss
>>>
>>> --
>>> Jeff Hammond
>>> Argonne Leadership Computing Facility
>>> University of Chicago Computation Institute
>>> jhammond at alcf.anl.gov / (630) 252-5381
>>> http://www.linkedin.com/in/jeffhammond
>>> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
>>> ALCF docs: http://www.alcf.anl.gov/user-guides
>>>
>>> _______________________________________________
>>> discuss mailing list     discuss at mpich.org
>>> To manage subscription options or unsubscribe:
>>> https://lists.mpich.org/mailman/listinfo/discuss
>
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

--
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
ALCF docs: http://www.alcf.anl.gov/user-guides

From johnd9886 at gmail.com  Mon Jun 10 17:17:09 2013
From: johnd9886 at gmail.com (john donald)
Date: Tue, 11 Jun 2013 00:17:09 +0200
Subject: [mpich-discuss] Fwd: ckpoint-num error
In-Reply-To: <089CF900-3487-42BF-91EF-57984AE3943D@mcs.anl.gov>
References: <88F47ECA-32D7-4A78-85AD-E6E69D74CC06@mcs.anl.gov> <089CF900-3487-42BF-91EF-57984AE3943D@mcs.anl.gov>
Message-ID: 

I raised it to 20 sec, but got the same results. Sorry, I am new to
checkpoint/restart. I am trying this initially on one multicore PC.
How should it look if the restart succeeds? Should it work in the same
terminal in which I am running the restart command? My test app has 5000
iterations, and a checkpoint is taken at, for example, iteration no. 300.
If I choose to restart from this checkpoint file, should it restart near
iteration no. 300?

2013/6/6 Wesley Bland

> Is there actually anything in those checkpoints? With a checkpoint
> happening every 4 seconds you may be overdoing it.
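>
> A quick way to check is to look at the checkpoint file sizes; a minimal
> sketch using the prefix directory from your command (adjust the path as
> needed):
>
>   ls -l /home/john/ckpts/
>
> If the context-num* files are zero bytes, no usable checkpoint was
> written.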
> > Wesley > > On Jun 5, 2013, at 2:14 PM, Rajeev Thakur wrote: > > > I don't know, but see if anything on this page helps: > > http://wiki.mpich.org/mpich/index.php/Checkpointing > > > > On Jun 5, 2013, at 4:09 PM, john donald wrote: > > > >> > >> > >> ---------- Forwarded message ---------- > >> From: john donald > >> Date: 2013/6/3 > >> Subject: ckpoint-num error > >> To: mpich-discuss at mcs.anl.gov > >> > >> > >> i used mpiexec with checkpoint and created two checkpoint files: > >> > >> mpiexec -ckpointlib blcr -ckpoint-prefix /home/john/ckpts/app.ckpoint > -ckpoint-interval 4 -n 4 /home/john/app/md > >> > >> context-num0-0-0 > >> context-num1-0-0 > >> > >> > >> i am trying to make a restart > >> mpiexec -ckpointlib blcr -ckpoint-prefix /home/john/ckpts/app.ckpoint > -n 4 -ckpoint-num 1 > >> > >> but nothing happened it just hangs > >> i also tried: > >> mpiexec -ckpointlib blcr -ckpoint-prefix /home/john/ckpts/app.ckpoint > -n 4 -ckpoint-num 0-0-0 > >> also hangs > >> > >> _______________________________________________ > >> discuss mailing list discuss at mpich.org > >> To manage subscription options or unsubscribe: > >> https://lists.mpich.org/mailman/listinfo/discuss > > > > _______________________________________________ > > discuss mailing list discuss at mpich.org > > To manage subscription options or unsubscribe: > > https://lists.mpich.org/mailman/listinfo/discuss > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wbland at mcs.anl.gov Tue Jun 11 08:20:58 2013 From: wbland at mcs.anl.gov (Wesley Bland) Date: Tue, 11 Jun 2013 08:20:58 -0500 Subject: [mpich-discuss] ckpoint-num error In-Reply-To: References: <88F47ECA-32D7-4A78-85AD-E6E69D74CC06@mcs.anl.gov> <089CF900-3487-42BF-91EF-57984AE3943D@mcs.anl.gov> Message-ID: <10C1BBC0-A308-4B46-9BFC-AAF411F0BAD1@mcs.anl.gov> Did you check if there's actually anything in the checkpoint files? If they're empty, that probably means that you're checkpointing too frequently. On Jun 10, 2013, at 5:17 PM, john donald wrote: > i raised it to 20 sec but same results > sorry i am new to checkpoint restart > i am trying this initially on one multicore pc > how should it look like if the restart succeed? should it work in the same terminal in which i am running restart command > my test app has 5000 iterations , checkpoint is taken at iteration no 300 for example , if i choose to restart from this checkpoint file should it restart near this iteration no 300 > > > 2013/6/6 Wesley Bland > Is there actually anything in those checkpoints? With a checkpoint happening every 4 seconds you may be overdoing it. 
> Wesley
>
> On Jun 5, 2013, at 2:14 PM, Rajeev Thakur wrote:
>
> > I don't know, but see if anything on this page helps:
> > http://wiki.mpich.org/mpich/index.php/Checkpointing
> >
> > On Jun 5, 2013, at 4:09 PM, john donald wrote:
> >
> >> ---------- Forwarded message ----------
> >> From: john donald
> >> Date: 2013/6/3
> >> Subject: ckpoint-num error
> >> To: mpich-discuss at mcs.anl.gov
> >>
> >> i used mpiexec with checkpoint and created two checkpoint files:
> >>
> >> mpiexec -ckpointlib blcr -ckpoint-prefix /home/john/ckpts/app.ckpoint -ckpoint-interval 4 -n 4 /home/john/app/md
> >>
> >> context-num0-0-0
> >> context-num1-0-0
> >>
> >> i am trying to make a restart
> >> mpiexec -ckpointlib blcr -ckpoint-prefix /home/john/ckpts/app.ckpoint -n 4 -ckpoint-num 1
> >>
> >> but nothing happened it just hangs
> >> i also tried:
> >> mpiexec -ckpointlib blcr -ckpoint-prefix /home/john/ckpts/app.ckpoint -n 4 -ckpoint-num 0-0-0
> >> also hangs
> >>
> >> _______________________________________________
> >> discuss mailing list     discuss at mpich.org
> >> To manage subscription options or unsubscribe:
> >> https://lists.mpich.org/mailman/listinfo/discuss
> >
> > _______________________________________________
> > discuss mailing list     discuss at mpich.org
> > To manage subscription options or unsubscribe:
> > https://lists.mpich.org/mailman/listinfo/discuss
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

From apenya at mcs.anl.gov  Tue Jun 11 17:17:20 2013
From: apenya at mcs.anl.gov (Antonio J. Peña)
Date: Tue, 11 Jun 2013 17:17:20 -0500
Subject: [mpich-discuss] install + config on windows
In-Reply-To: 
References: <1145414969.1150509.1367600905908.JavaMail.root@mcs.anl.gov>
Message-ID: <1820123.9Zsec46ICq@localhost.localdomain>

Hi Benoit,

As our support for Windows platforms has been discontinued, we cannot
guarantee that MPICH will compile with newer versions of the Windows
compilers and/or the operating system itself. Unless Jayesh has any
comments in this regard, I'm not aware of any experience of MPICH on
Windows 7. I apologize for the inconvenience.

Thanks,
  Antonio


On Monday, June 10, 2013 08:04:03 AM spatiogis wrote:
> Hello,
>
> Actually, it seems that MPICH must be compiled to work. The point is that
> the "Readme" file explains how to compile the programs with Visual
> Studio 2003, but that software is very difficult to get working on
> Windows 7.
>
> Is there finally a way to compile MPICH on Windows 7 with Visual Studio?
>
> best regards,
>
> Benoit V?ler
>
> > Hi,
> >
> > From the log output it looks like the credentials (password) for
> > Utilisateur were not correct.
> >
> > Is Utilisateur a valid Windows user on your machine? Have you
> > registered the username/password correctly (Try re-registering the
> > username+password by typing "mpiexec -register" at the command prompt)?
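> >
> > For example (a sketch; the exact prompts may differ between versions):
> >
> >   C:\Progra~1\MPICH2\bin\mpiexec -register
> >   account (domain\user): Utilisateur
> >   password:
> >
> > You can then sanity-check the stored credentials with "mpiexec -validate".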
> > > > Regards, > > Jayesh > > > > ----- Original Message ----- > > From: "spatiogis" > > To: discuss at mpich.org > > Sent: Friday, May 3, 2013 11:58:00 AM > > Subject: Re: [mpich-discuss] install + config on windows > > > > Hello, > > > > for this command : > > > > # C:\Progra~1\MPICH2\bin\mpiexec -verbose -n 2 > > > > C:\Progra~1\MPICH2\examples\cpi.exe > > > > result : > > > > ....../SMPDU_Sock_post_readv > > > > ...../SMPDU_Sock_post_read > > ..../smpd_handle_op_connect > > ....sock_waiting for the next event. > > ....\SMPDU_Sock_wait > > ..../SMPDU_Sock_wait > > ....SOCK_OP_READ event.error = 0, result = 0, context=left > > ....\smpd_handle_op_read > > .....\smpd_state_reading_challenge_string > > ......read challenge string: '1.4.1p1 18467' > > ......\smpd_verify_version > > ....../smpd_verify_version > > ......Verification of smpd version succeeded > > ......\smpd_hash > > ....../smpd_hash > > ......\SMPDU_Sock_post_write > > .......\SMPDU_Sock_post_writev > > ......./SMPDU_Sock_post_writev > > ....../SMPDU_Sock_post_write > > ...../smpd_state_reading_challenge_string > > ..../smpd_handle_op_read > > ....sock_waiting for the next event. > > ....\SMPDU_Sock_wait > > ..../SMPDU_Sock_wait > > ....SOCK_OP_WRITE event.error = 0, result = 0, context=left > > ....\smpd_handle_op_write > > .....\smpd_state_writing_challenge_response > > ......wrote challenge response: 'dafd1d07c1e6e9cb5fae968403d0d933' > > ......\SMPDU_Sock_post_read > > .......\SMPDU_Sock_post_readv > > ......./SMPDU_Sock_post_readv > > ....../SMPDU_Sock_post_read > > ...../smpd_state_writing_challenge_response > > ..../smpd_handle_op_write > > ....sock_waiting for the next event. > > ....\SMPDU_Sock_wait > > ..../SMPDU_Sock_wait > > ....SOCK_OP_READ event.error = 0, result = 0, context=left > > ....\smpd_handle_op_read > > .....\smpd_state_reading_connect_result > > ......read connect result: 'SUCCESS' > > ......\SMPDU_Sock_post_write > > .......\SMPDU_Sock_post_writev > > ......./SMPDU_Sock_post_writev > > ....../SMPDU_Sock_post_write > > ...../smpd_state_reading_connect_result > > ..../smpd_handle_op_read > > ....sock_waiting for the next event. > > ....\SMPDU_Sock_wait > > ..../SMPDU_Sock_wait > > ....SOCK_OP_WRITE event.error = 0, result = 0, context=left > > ....\smpd_handle_op_write > > .....\smpd_state_writing_process_session_request > > ......wrote process session request: 'process' > > ......\SMPDU_Sock_post_read > > .......\SMPDU_Sock_post_readv > > ......./SMPDU_Sock_post_readv > > ....../SMPDU_Sock_post_read > > ...../smpd_state_writing_process_session_request > > ..../smpd_handle_op_write > > ....sock_waiting for the next event. 
> > ....\SMPDU_Sock_wait > > ..../SMPDU_Sock_wait > > ....SOCK_OP_READ event.error = 0, result = 0, context=left > > ....\smpd_handle_op_read > > .....\smpd_state_reading_cred_request > > ......read cred request: 'credentials' > > ......\smpd_hide_string_arg > > .......\first_token > > ......./first_token > > .......\compare_token > > ......./compare_token > > .......\next_token > > ........\first_token > > ......../first_token > > ........\first_token > > ......../first_token > > ......./next_token > > ....../smpd_hide_string_arg > > ....../smpd_hide_string_arg > > ......\smpd_hide_string_arg > > .......\first_token > > ......./first_token > > .......\compare_token > > ......./compare_token > > .......\next_token > > ........\first_token > > ......../first_token > > ........\first_token > > ......../first_token > > ......./next_token > > ....../smpd_hide_string_arg > > ....../smpd_hide_string_arg > > ......\smpd_hide_string_arg > > .......\first_token > > ......./first_token > > .......\compare_token > > ......./compare_token > > .......\next_token > > ........\first_token > > ......../first_token > > ........\first_token > > ......../first_token > > ......./next_token > > ....../smpd_hide_string_arg > > ....../smpd_hide_string_arg > > ......\smpd_hide_string_arg > > .......\first_token > > ......./first_token > > .......\compare_token > > ......./compare_token > > .......\next_token > > ........\first_token > > ......../first_token > > ........\first_token > > ......../first_token > > ......./next_token > > ....../smpd_hide_string_arg > > ....../smpd_hide_string_arg > > ......\smpd_hide_string_arg > > .......\first_token > > ......./first_token > > .......\compare_token > > ......./compare_token > > .......\next_token > > ........\first_token > > ......../first_token > > ........\first_token > > ......../first_token > > ......./next_token > > ....../smpd_hide_string_arg > > ....../smpd_hide_string_arg > > .......\smpd_option_on > > ........\smpd_get_smpd_data > > .........\smpd_get_smpd_data_from_environment > > ........./smpd_get_smpd_data_from_environment > > .........\smpd_get_smpd_data_default > > ........./smpd_get_smpd_data_default > > .........Unable to get the data for the key 'nocache' > > ......../smpd_get_smpd_data > > ......./smpd_option_on > > ......\smpd_hide_string_arg > > .......\first_token > > ......./first_token > > .......\compare_token > > ......./compare_token > > .......\next_token > > ........\first_token > > ......../first_token > > ........\first_token > > ......../first_token > > ......./next_token > > ....../smpd_hide_string_arg > > ....../smpd_hide_string_arg > > ......\SMPDU_Sock_post_write > > .......\SMPDU_Sock_post_writev > > ......./SMPDU_Sock_post_writev > > ....../SMPDU_Sock_post_write > > ...../smpd_handle_op_read > > .....sock_waiting for the next event. > > .....\SMPDU_Sock_wait > > ...../SMPDU_Sock_wait > > .....SOCK_OP_WRITE event.error = 0, result = 0, context=left > > .....\smpd_handle_op_write > > ......\smpd_state_writing_cred_ack_yes > > .......wrote cred request yes ack. > > .......\SMPDU_Sock_post_write > > ........\SMPDU_Sock_post_writev > > ......../SMPDU_Sock_post_writev > > ......./SMPDU_Sock_post_write > > ....../smpd_state_writing_cred_ack_yes > > ...../smpd_handle_op_write > > .....sock_waiting for the next event. 
> > .....\SMPDU_Sock_wait > > ...../SMPDU_Sock_wait > > .....SOCK_OP_WRITE event.error = 0, result = 0, context=left > > .....\smpd_handle_op_write > > ......\smpd_state_writing_account > > .......wrote account: 'Utilisateur' > > .......\smpd_encrypt_data > > ......./smpd_encrypt_data > > .......\SMPDU_Sock_post_write > > ........\SMPDU_Sock_post_writev > > ......../SMPDU_Sock_post_writev > > ......./SMPDU_Sock_post_write > > ....../smpd_state_writing_account > > ...../smpd_handle_op_write > > .....sock_waiting for the next event. > > .....\SMPDU_Sock_wait > > ...../SMPDU_Sock_wait > > .....SOCK_OP_WRITE event.error = 0, result = 0, context=left > > .....\smpd_handle_op_write > > ......\smpd_hide_string_arg > > .......\first_token > > ......./first_token > > .......\compare_token > > ......./compare_token > > .......\next_token > > ........\first_token > > ......../first_token > > ........\first_token > > ......../first_token > > ......./next_token > > ....../smpd_hide_string_arg > > ....../smpd_hide_string_arg > > .......\smpd_hide_string_arg > > ........\first_token > > ......../first_token > > ........\compare_token > > ......../compare_token > > ........\next_token > > .........\first_token > > ........./first_token > > .........\first_token > > ........./first_token > > ......../next_token > > ......./smpd_hide_string_arg > > ......./smpd_hide_string_arg > > .......\SMPDU_Sock_post_read > > ........\SMPDU_Sock_post_readv > > ......../SMPDU_Sock_post_readv > > ......./SMPDU_Sock_post_read > > ......\smpd_hide_string_arg > > .......\first_token > > ......./first_token > > .......\compare_token > > ......./compare_token > > .......\next_token > > ........\first_token > > ......../first_token > > ........\first_token > > ......../first_token > > ......./next_token > > ....../smpd_hide_string_arg > > ....../smpd_hide_string_arg > > ...../smpd_handle_op_write > > .....sock_waiting for the next event. 
> > .....\SMPDU_Sock_wait > > ...../SMPDU_Sock_wait > > .....SOCK_OP_READ event.error = 0, result = 0, context=left > > .....\smpd_handle_op_read > > ......\smpd_state_reading_process_result > > .......read process session result: 'FAIL' > > .......\smpd_hide_string_arg > > ........\first_token > > ......../first_token > > ........\compare_token > > ......../compare_token > > ........\next_token > > .........\first_token > > ........./first_token > > .........\first_token > > ........./first_token > > ......../next_token > > ......./smpd_hide_string_arg > > ......./smpd_hide_string_arg > > .......\smpd_hide_string_arg > > ........\first_token > > ......../first_token > > ........\compare_token > > ......../compare_token > > ........\next_token > > .........\first_token > > ........./first_token > > .........\first_token > > ........./first_token > > ......../next_token > > ......./smpd_hide_string_arg > > ......./smpd_hide_string_arg > > Credentials for Utilisateur rejected connecting to Benoit > > .......process session rejected > > .......\SMPDU_Sock_post_close > > ........\SMPDU_Sock_post_read > > .........\SMPDU_Sock_post_readv > > ........./SMPDU_Sock_post_readv > > ......../SMPDU_Sock_post_read > > ......./SMPDU_Sock_post_close > > .......\smpd_post_abort_command > > ........\smpd_create_command > > .........\smpd_init_command > > ........./smpd_init_command > > ......../smpd_create_command > > ........\smpd_add_command_arg > > ......../smpd_add_command_arg > > ........\smpd_command_destination > > .........0 -> 0 : returning NULL context > > ......../smpd_command_destination > > Aborting: Unable to connect to Benoit > > ......./smpd_post_abort_command > > .......\smpd_exit > > ........\smpd_kill_all_processes > > ......../smpd_kill_all_processes > > ........\smpd_finalize_drive_maps > > ......../smpd_finalize_drive_maps > > ........\smpd_dbs_finalize > > ......../smpd_dbs_finalize > > ........\SMPDU_Sock_finalize > > ......../SMPDU_Sock_finalize > > > > C:\Users\Utilisateur> > > > >> Hi, > >> > >> Looks like you missed the "-" before the status ("smpd -status" not > >> > >> "smpd status") argument. > >> > >> It also looks like you have multiple MPI libraries installed in your > >> > >> system. Try running this command (full path to mpiexec and smpd), > >> > >> # C:\Progra~1\MPICH2\bin\smpd -status > >> > >> # C:\Progra~1\MPICH2\bin\mpiexec -verbose -n 2 > >> C:\Progra~1\MPICH2\examples\cpi.exe > >> > >> > >> Regards, > >> Jayesh > >> > >> ----- Original Message ----- > >> From: "spatiogis" > >> To: "Jayesh Krishna" > >> Sent: Friday, May 3, 2013 11:05:34 AM > >> Subject: Re: [mpich-discuss] install + config on windows > >> > >> Hello, > >> > >> C:\Users\Utilisateur>smpd status > >> Unexpected parameters: status > >> > >> C:\Users\Utilisateur>mpiexec -verbose -n 2 > >> C:\Progra~1\MPICH2\examples\cpi.exe > >> Unknown option: -verbose > >> > >> ------------------------------------------------------------------------- > >> ---- C:\Program Files\MPICH2\examples>mpiexec -verbose -n 2 cpi.exe > >> Unknown option: -verbose > >> > >> C:\Program Files\MPICH2\examples>smpd status > >> Unexpected parameters: status > >> ------------------------------------------------------------------------- > >> ---- > >> > >> regards, Ben > >> > >>> Hi, > >>> > >>> Ok. Please send us the output of the following commands, > >>> > >>> # smpd -status > >>> # mpiexec -verbose -n 2 C:\Progra~1\MPICH2\examples\cpi.exe > >>> > >>> Please copy-paste the command and the complete output in your email. 
> >>> > >>> Regards, > >>> Jayesh > >>> > >>> > >>> ----- Original Message ----- > >>> From: "spatiogis" > >>> To: discuss at mpich.org > >>> Sent: Friday, May 3, 2013 1:46:53 AM > >>> Subject: Re: [mpich-discuss] install + config on windows > >>> > >>> Hello > >>> > >>>> (PS: I am assuming from your reply in the previous email that you can > >>>> run a command like "mpiexec -n 2 C:\Progra~1\MPICH2\examples\cpi.exe" > >>>> correctly) > >>>> > >>> In fact this command doesn't run. > >>> > >>> The message is this one > >>> > >>> [01:11728]....ERROR:unable to read the cmd header on the pmi context, > >>> > >>> Error = -1 > >>> > >>> Ben > >>> > >>>> ----- Original Message ----- > >>>> From: "spatiogis" > >>>> To: "Jayesh Krishna" > >>>> Sent: Thursday, May 2, 2013 10:48:56 AM > >>>> Subject: Re: [mpich-discuss] install + config on windows > >>>> > >>>> Hello, > >>>> > >>>>> Hi, > >>>>> > >>>>> Are you able to run any other MPI programs? Try running the example > >>>>> > >>>>> program, cpi.exe (C:\Program Files\MPICH2\examples\cpi.exe), to make > >>>>> sure that your MPICH2 installation works. > >>>>> > >>>> yes it does work > >>>> > >>>>> Installing MPICH2 on Windows 7 typically requires you to uninstall > >>>>> > >>>>> any > >>>>> previous versions of MPICH2, launch an administrative command promt > >>>>> and > >>>>> run "msiexec /i mpich2-installer.msi" to install MPICH2. > >>>>> > >>>> yes it 's been installed like this... > >>>> > >>>> In wmpiconfig, the message is the following in the 'Get settings' > >>>> > >>>> line. > >>>> > >>>> Credentials for Utilisateur rejected connecting to host > >>>> Aborting: Unable to connect to host > >>>> > >>>> The software I try to use is Taudem, which is intergrated inside > >>>> > >>>> Qgis. > >>>> Launching a taudem process inside Qgis gives the same message. > >>>> > >>>>> Regards, > >>>>> Jayesh > >>>>> > >>>> Sincerely, Ben > >>>> > >>>>> ----- Original Message ----- > >>>>> From: "spatiogis" > >>>>> To: discuss at mpich.org > >>>>> Sent: Thursday, May 2, 2013 10:08:23 AM > >>>>> Subject: Re: [mpich-discuss] install + config on windows > >>>>> > >>>>> Hello, > >>>>> > >>>>> in my case Mpich is normally used to run .exe programs. I guess that > >>>>> > >>>>> they > >>>>> are already compiled... > >>>>> > >>>>> The .exe files are integrated into a software, and accessed from > >>>>> > >>>>> menus > >>>>> inside it. When I run one of the programs, the answer is actually > >>>>> "unable > >>>>> to query host". > >>>>> > >>>>> At the end, the process is not realised. It seems that this 'host' > >>>>> > >>>>> question is a problem to the software... > >>>>> > >>>>> Sincerely, > >>>>> > >>>>> Ben. > >>>>> > >>>>>> Hi, > >>>>>> > >>>>>> You can download MPICH2 binaries for Windows at > >>>>>> > >>>>>> http://www.mpich.org/downloads/ . > >>>>>> > >>>>>> You need to compile your MPI programs with MPICH2 to make it work. > >>>>>> > >>>>>> I > >>>>>> would recommend recompiling your code after you install MPICH2 (If > >>>>>> you > >>>>>> have MPI program binaries pre-built with MPICH2 - instead of > >>>>>> compiling > >>>>>> them on your own - make sure that you install the same version of > >>>>>> MPICH2 > >>>>>> that was used to build the binaries). > >>>>>> > >>>>>> The wmpiregister program has a bug and you can ignore this error > >>>>>> > >>>>>> message ("...unable to query host"). Can you run your MPI program > >>>>>> using > >>>>>> mpiexec from a command prompt? 
> >>>>>> Regards,
> >>>>>> Jayesh
> >>>>>>
> >>>>>> ----- Original Message -----
> >>>>>> From: "spatiogis"
> >>>>>> To: discuss at mpich.org
> >>>>>> Sent: Tuesday, April 30, 2013 9:26:35 AM
> >>>>>> Subject: [mpich-discuss] install + config on windows
> >>>>>>
> >>>>>> Hello,
> >>>>>>
> >>>>>> I'm not very good at computing, but I would like to install MPICH2
> >>>>>> on Windows 7 - 64 bits. There is only one PC, with one user plus
> >>>>>> the admin, and a single-core processor.
> >>>>>>
> >>>>>> I would like to know whether it is mandatory to have compilation
> >>>>>> software installed to make it work, given that in this case it is
> >>>>>> only needed to run another software package, not to compile
> >>>>>> anything (that would maybe save some disk space and simplify the
> >>>>>> installation)?
> >>>>>>
> >>>>>> My second issue is that I must be missing something about the
> >>>>>> server configuration. I have installed MPICH2 from the .msi file,
> >>>>>> then configured the wmpiregister program with the domain/user
> >>>>>> information.
> >>>>>>
> >>>>>> This message is displayed when trying to connect in the
> >>>>>> 'configurable settings' window: 'MPICH2 not installed or unable to
> >>>>>> query the host'.
> >>>>>>
> >>>>>> What is the host, actually?
> >>>>>>
> >>>>>> I know I am starting from very far, and I am sorry for these very
> >>>>>> simple questions. Thanks if you can reply; that would certainly
> >>>>>> save me some long hours of reading and testing ;)
> >>>>>>
> >>>>>> sincerely,
> >>>>>>
> >>>>>> Ben
> >>>>>>
> >>>>>> _______________________________________________
> >>>>>> discuss mailing list     discuss at mpich.org
> >>>>>> To manage subscription options or unsubscribe:
> >>>>>> https://lists.mpich.org/mailman/listinfo/discuss

From seraphzl at gmail.com  Wed Jun 12 01:15:51 2013
From: seraphzl at gmail.com (Zheng Li)
Date: Wed, 12 Jun 2013 14:15:51 +0800
Subject: [mpich-discuss] Issue on compiling MPICH 3.0.4 with intel compiler on scientific linux
Message-ID: 

Dear All,

I tried to compile MPICH 3.0.4 with the Intel C/C++/Fortran compiler 13.1.1
on a Scientific Linux 6.4 x64 platform. The building steps were as follows:

export CC=icc CXX=icpc CPP='icc -E' CXXCPP='icpc -E' F77=ifort FC=ifort CFLAGS='-O3 -xHost -ip -no-prec-div -static-intel' CXXFLAGS='-O3 -xHost -ip -no-prec-div -static-intel' FFLAGS='-O3 -xHost -ip -no-prec-div -static-intel'
configure --prefix=/home/sl/mpich-3.0.4 --disable-fast 2>&1 | tee c.log
make 2>&1 | tee m.log
make install 2>&1 | tee mi.log

The terminal reported many pragma warnings during 'make', such as the
following:

../src/binding/f77/fdebug.c(22): warning #20: identifier "MPIR_IS_BOTTOM" is undefined
  #pragma weak MPIR_IS_BOTTOM = mpir_is_bottom_
          ^

The log files are attached. How can I resolve this issue? Thanks in advance.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: c.log
Type: application/octet-stream
Size: 80447 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: config.log
Type: application/octet-stream
Size: 502954 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: m.log Type: application/octet-stream Size: 92638 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: mi.log Type: application/octet-stream Size: 20629 bytes Desc: not available URL: From thakur at mcs.anl.gov Wed Jun 12 01:50:52 2013 From: thakur at mcs.anl.gov (Rajeev Thakur) Date: Wed, 12 Jun 2013 01:50:52 -0500 Subject: [mpich-discuss] Issue on compiling MPICH 3.0.4 with intel compiler on scientific linux In-Reply-To: References: Message-ID: <5D094F46-89C4-460C-9E9D-9C94F38E0CB1@mcs.anl.gov> The build seems to have completed. Can you run the example programs? Try running the cpi example in the examples directory. On Jun 12, 2013, at 1:15 AM, Zheng Li wrote: > Dear All, > > I tried to compile the MPICH 3.0.4 with Intel C/C++/Fortran compiler 13.1.1 on scientific linux 6.4 x64 platform. The building steps were as follows: > > export CC=icc CXX=icpc CPP='icc -E' CXXCPP='icpc -E' F77=ifort FC=ifort CFLAGS='-O3 -xHost -ip -no-prec-div -static-intel' CXXFLAGS='-O3 -xHost -ip -no-prec-div -static-intel' FFLAGS='-O3 -xHost -ip -no-prec-div -static-intel' > configure --prefix=/home/sl/mpich-3.0.4 --disable-fast 2>&1 | tee c.log > make 2>&1 | tee m.log > make install 2>&1 | tee mi.log > > The terminal reported many warning of pragma during 'make', such like the following: > > ../src/binding/f77/fdebug.c(22): warning #20: identifier "MPIR_IS_BOTTOM" is undefined > #pragma weak MPIR_IS_BOTTOM = mpir_is_bottom_ > ^ > > The log files were attached. How can I resolve this issue? Thanks in advance. > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss From seraphzl at gmail.com Wed Jun 12 09:20:49 2013 From: seraphzl at gmail.com (Zheng Li) Date: Wed, 12 Jun 2013 22:20:49 +0800 Subject: [mpich-discuss] Issue on compiling MPICH 3.0.4 with intel compiler on scientific linux In-Reply-To: <5D094F46-89C4-460C-9E9D-9C94F38E0CB1@mcs.anl.gov> References: <5D094F46-89C4-460C-9E9D-9C94F38E0CB1@mcs.anl.gov> Message-ID: I did the 'make testing' according to the README in the mpich-3.0.4.tar.gz file. All 797 tests passed, and the log files were attached. Yes, it seems that the build was ok in spite of many warnings in the 'make' step. I will run the cpi example further to test it. Thank you for your help. 2013/6/12 Rajeev Thakur > The build seems to have completed. Can you run the example programs? Try > running the cpi example in the examples directory. > > > On Jun 12, 2013, at 1:15 AM, Zheng Li wrote: > > > Dear All, > > > > I tried to compile the MPICH 3.0.4 with Intel C/C++/Fortran compiler > 13.1.1 on scientific linux 6.4 x64 platform. The building steps were as > follows: > > > > export CC=icc CXX=icpc CPP='icc -E' CXXCPP='icpc -E' F77=ifort FC=ifort > CFLAGS='-O3 -xHost -ip -no-prec-div -static-intel' CXXFLAGS='-O3 -xHost -ip > -no-prec-div -static-intel' FFLAGS='-O3 -xHost -ip -no-prec-div > -static-intel' > > configure --prefix=/home/sl/mpich-3.0.4 --disable-fast 2>&1 | tee c.log > > make 2>&1 | tee m.log > > make install 2>&1 | tee mi.log > > > > The terminal reported many warning of pragma during 'make', such like > the following: > > > > ../src/binding/f77/fdebug.c(22): warning #20: identifier > "MPIR_IS_BOTTOM" is undefined > > #pragma weak MPIR_IS_BOTTOM = mpir_is_bottom_ > > ^ > > > > The log files were attached. How can I resolve this issue? Thanks in > advance. 
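PS: For concreteness, the check I mean is something like this, assuming
you are in the top-level build directory (a sketch; paths may differ):

  cd examples
  mpiexec -n 4 ./cpi

It should print a value of pi computed by 4 processes.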
> > > > > _______________________________________________ > > discuss mailing list discuss at mpich.org > > To manage subscription options or unsubscribe: > > https://lists.mpich.org/mailman/listinfo/discuss > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > -- Dr. Zheng Li Research Assistant State Key Laboratory of Environmental Aquatic Chemistry Research Center for Eco-Environmental Sciences Chinese Academy of Sciences P. O. Box 2871, Beijing, 100085 China seraphzl at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test.log Type: application/octet-stream Size: 5742 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: summary.tap Type: application/octet-stream Size: 25231 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: summary.xml Type: text/xml Size: 89917 bytes Desc: not available URL: From jhammond at alcf.anl.gov Wed Jun 12 09:52:46 2013 From: jhammond at alcf.anl.gov (Jeff Hammond) Date: Wed, 12 Jun 2013 09:52:46 -0500 Subject: [mpich-discuss] Issue on compiling MPICH 3.0.4 with intel compiler on scientific linux In-Reply-To: References: <5D094F46-89C4-460C-9E9D-9C94F38E0CB1@mcs.anl.gov> Message-ID: A large fraction of compile warnings are for innocuous aspects of the code. It's appropriate for developers to look at warnings carefully to ensure that they aren't indicative of problems, but users are better off just running the test suite to determine if the build is successful or not. In your case, 797/797 is a very good indication of a proper build of MPICH. Best, Jeff On Wed, Jun 12, 2013 at 9:20 AM, Zheng Li wrote: > I did the 'make testing' according to the README in the mpich-3.0.4.tar.gz > file. All 797 tests passed, and the log files were attached. Yes, it seems > that the build was ok in spite of many warnings in the 'make' step. I will > run the cpi example further to test it. Thank you for your help. > > 2013/6/12 Rajeev Thakur >> >> The build seems to have completed. Can you run the example programs? Try >> running the cpi example in the examples directory. >> >> >> On Jun 12, 2013, at 1:15 AM, Zheng Li wrote: >> >> > Dear All, >> > >> > I tried to compile the MPICH 3.0.4 with Intel C/C++/Fortran compiler >> > 13.1.1 on scientific linux 6.4 x64 platform. The building steps were as >> > follows: >> > >> > export CC=icc CXX=icpc CPP='icc -E' CXXCPP='icpc -E' F77=ifort FC=ifort >> > CFLAGS='-O3 -xHost -ip -no-prec-div -static-intel' CXXFLAGS='-O3 -xHost -ip >> > -no-prec-div -static-intel' FFLAGS='-O3 -xHost -ip -no-prec-div >> > -static-intel' >> > configure --prefix=/home/sl/mpich-3.0.4 --disable-fast 2>&1 | tee c.log >> > make 2>&1 | tee m.log >> > make install 2>&1 | tee mi.log >> > >> > The terminal reported many warning of pragma during 'make', such like >> > the following: >> > >> > ../src/binding/f77/fdebug.c(22): warning #20: identifier >> > "MPIR_IS_BOTTOM" is undefined >> > #pragma weak MPIR_IS_BOTTOM = mpir_is_bottom_ >> > ^ >> > >> > The log files were attached. How can I resolve this issue? Thanks in >> > advance. 
>> > >> > >> > _______________________________________________ >> > discuss mailing list discuss at mpich.org >> > To manage subscription options or unsubscribe: >> > https://lists.mpich.org/mailman/listinfo/discuss >> >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss > > > > > -- > Dr. Zheng Li > Research Assistant > State Key Laboratory of Environmental Aquatic Chemistry > Research Center for Eco-Environmental Sciences > Chinese Academy of Sciences > P. O. Box 2871, Beijing, 100085 China > seraphzl at gmail.com > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss -- Jeff Hammond Argonne Leadership Computing Facility University of Chicago Computation Institute jhammond at alcf.anl.gov / (630) 252-5381 http://www.linkedin.com/in/jeffhammond https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond ALCF docs: http://www.alcf.anl.gov/user-guides From seraphzl at gmail.com Wed Jun 12 10:45:23 2013 From: seraphzl at gmail.com (Zheng Li) Date: Wed, 12 Jun 2013 23:45:23 +0800 Subject: [mpich-discuss] Issue on compiling MPICH 3.0.4 with intel compiler on scientific linux In-Reply-To: References: <5D094F46-89C4-460C-9E9D-9C94F38E0CB1@mcs.anl.gov> Message-ID: Thank you for comment. Regards, Lee 2013/6/12 Jeff Hammond > A large fraction of compile warnings are for innocuous aspects of the > code. It's appropriate for developers to look at warnings carefully > to ensure that they aren't indicative of problems, but users are > better off just running the test suite to determine if the build is > successful or not. In your case, 797/797 is a very good indication of > a proper build of MPICH. > > Best, > > Jeff > > On Wed, Jun 12, 2013 at 9:20 AM, Zheng Li wrote: > > I did the 'make testing' according to the README in the > mpich-3.0.4.tar.gz > > file. All 797 tests passed, and the log files were attached. Yes, it > seems > > that the build was ok in spite of many warnings in the 'make' step. I > will > > run the cpi example further to test it. Thank you for your help. > > > > 2013/6/12 Rajeev Thakur > >> > >> The build seems to have completed. Can you run the example programs? Try > >> running the cpi example in the examples directory. > >> > >> > >> On Jun 12, 2013, at 1:15 AM, Zheng Li wrote: > >> > >> > Dear All, > >> > > >> > I tried to compile the MPICH 3.0.4 with Intel C/C++/Fortran compiler > >> > 13.1.1 on scientific linux 6.4 x64 platform. The building steps were > as > >> > follows: > >> > > >> > export CC=icc CXX=icpc CPP='icc -E' CXXCPP='icpc -E' F77=ifort > FC=ifort > >> > CFLAGS='-O3 -xHost -ip -no-prec-div -static-intel' CXXFLAGS='-O3 > -xHost -ip > >> > -no-prec-div -static-intel' FFLAGS='-O3 -xHost -ip -no-prec-div > >> > -static-intel' > >> > configure --prefix=/home/sl/mpich-3.0.4 --disable-fast 2>&1 | tee > c.log > >> > make 2>&1 | tee m.log > >> > make install 2>&1 | tee mi.log > >> > > >> > The terminal reported many warning of pragma during 'make', such like > >> > the following: > >> > > >> > ../src/binding/f77/fdebug.c(22): warning #20: identifier > >> > "MPIR_IS_BOTTOM" is undefined > >> > #pragma weak MPIR_IS_BOTTOM = mpir_is_bottom_ > >> > ^ > >> > > >> > The log files were attached. How can I resolve this issue? Thanks in > >> > advance. 
> >> > > >> > > >> > > _______________________________________________ > >> > discuss mailing list discuss at mpich.org > >> > To manage subscription options or unsubscribe: > >> > https://lists.mpich.org/mailman/listinfo/discuss > >> > >> _______________________________________________ > >> discuss mailing list discuss at mpich.org > >> To manage subscription options or unsubscribe: > >> https://lists.mpich.org/mailman/listinfo/discuss > > > > > > > > > > _______________________________________________ > > discuss mailing list discuss at mpich.org > > To manage subscription options or unsubscribe: > > https://lists.mpich.org/mailman/listinfo/discuss > > > > -- > Jeff Hammond > Argonne Leadership Computing Facility > University of Chicago Computation Institute > jhammond at alcf.anl.gov / (630) 252-5381 > http://www.linkedin.com/in/jeffhammond > https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond > ALCF docs: http://www.alcf.anl.gov/user-guides > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eibhlin.lee10 at imperial.ac.uk Thu Jun 13 06:56:34 2013 From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin) Date: Thu, 13 Jun 2013 11:56:34 +0000 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem Message-ID: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> Hello all, I am trying to use two raspberry-pi to sample and then process some data. The first process samples while the second processes and vice versa. To do this I use gpio and also mpich-3.0.4 with the process manager smpd. I have successfully run cpi on both machines (from the master machine). I have also managed to run a similar program but without the MPI, this involved compiling with gcc and when running putting sudo in front of the binary file. When I combine these two processes I get various error messages. For input: mpiexec -phrase cat -machinefile machinefile -n 2 ~/main the error is: Can't open /dev/mem Did you forget to use 'sudo .. ?' For input: sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main the error is: sudo: mpiexec: Command not found I therefore put mpiexec into /usr/bin now for input: sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main the error is: Can't open /dev/mem Did you forget to use 'sudo .. ?' Does anyone know how I can work around this? Thanks, Eibhlin -------------- next part -------------- An HTML attachment was scrubbed... URL: From jhammond at alcf.anl.gov Thu Jun 13 07:58:04 2013 From: jhammond at alcf.anl.gov (Jeff Hammond) Date: Thu, 13 Jun 2013 07:58:04 -0500 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> Message-ID: Just su to root instead of using sudo. Jeff On Thu, Jun 13, 2013 at 6:56 AM, Lee, Eibhlin wrote: > Hello all, > > I am trying to use two raspberry-pi to sample and then process some data. > The first process samples while the second processes and vice versa. To do > this I use gpio and also mpich-3.0.4 with the process manager smpd. I have > successfully run cpi on both machines (from the master machine). 
I have also > managed to run a similar program but without the MPI, this involved > compiling with gcc and when running putting sudo in front of the binary > file. > > When I combine these two processes I get various error messages. > For input: > mpiexec -phrase cat -machinefile machinefile -n 2 ~/main > the error is: > Can't open /dev/mem > Did you forget to use 'sudo .. ?' > > For input: > sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main > the error is: > sudo: mpiexec: Command not found > > I therefore put mpiexec into /usr/bin > > now for input: > sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main > the error is: > Can't open /dev/mem > Did you forget to use 'sudo .. ?' > > Does anyone know how I can work around this? > Thanks, > Eibhlin > > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss -- Jeff Hammond Argonne Leadership Computing Facility University of Chicago Computation Institute jhammond at alcf.anl.gov / (630) 252-5381 http://www.linkedin.com/in/jeffhammond https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond ALCF docs: http://www.alcf.anl.gov/user-guides From balaji at mcs.anl.gov Thu Jun 13 08:05:20 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Thu, 13 Jun 2013 08:05:20 -0500 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> Message-ID: <51B9C390.8050705@mcs.anl.gov> What's "-phrase"? That's not a recognized option. I'm not sure where the /dev/mem check is coming from. Try running ~/main without mpiexec first. -- Pavan On 06/13/2013 06:56 AM, Lee, Eibhlin wrote: > Hello all, > > I am trying to use two raspberry-pi to sample and then process some > data. The first process samples while the second processes and vice > versa. To do this I use gpio and also mpich-3.0.4 with the process > manager smpd. I have successfully run cpi on both machines (from the > master machine). I have also managed to run a similar program but > without the MPI, this involved compiling with gcc and when running > putting sudo in front of the binary file. > > When I combine these two processes I get various error messages. > For input: > mpiexec -phrase cat -machinefile machinefile -n 2 ~/main > the error is: > Can't open /dev/mem > Did you forget to use 'sudo .. ?' > > For input: > sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main > the error is: > sudo: mpiexec: Command not found > > I therefore put mpiexec into /usr/bin > > now for input: > sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main > the error is: > Can't open /dev/mem > Did you forget to use 'sudo .. ?' > > Does anyone know how I can work around this? 
> Thanks, > Eibhlin > > > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > -- Pavan Balaji http://www.mcs.anl.gov/~balaji From fxiao at mymail.mines.edu Thu Jun 13 08:31:09 2013 From: fxiao at mymail.mines.edu (Feng Xiao) Date: Thu, 13 Jun 2013 07:31:09 -0600 Subject: [mpich-discuss] How to keep the VS console window open on exceptions during debug of a parallel code Message-ID: Hello, I am writing about how to keep the console window open on exceptions during debug of a parallel code (either F5 or Ctrl+F5). I am running MPICH2 FORTRAN program in Visual Studio 2010, everything works fine, expect that I could not see the exception/error messages on the console window, it closes right after the program exits on exceptions. I do know how to keep the console window open when program meets the end, or put a read statement to pause the program during the execution. I did some google search, the only relevant solution I could find is http://www.boost.org/doc/libs/1_36_0/libs/test/doc/html/utf/usage-recommendations/dot-net-specific.html, which is about making debugger break at the point the failure by adding extra command line argument and seeing the runtime error in the output window. However, it looks like something for a serial code without mpiexec.exe in the command line, and I am not a advanced VS user, so I don't know how to do it for a parallel code, or if there is any other way out. Thanks in advance! -- Feng Xiao Doctoral Student Petroleum Engineering Colorado School of Mines Cell 918-814-2644 -------------- next part -------------- An HTML attachment was scrubbed... URL: From balaji at mcs.anl.gov Thu Jun 13 08:34:27 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Thu, 13 Jun 2013 08:34:27 -0500 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: <51B9C390.8050705@mcs.anl.gov> References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov> Message-ID: <51B9CA63.6010100@mcs.anl.gov> I just saw your older email. Why are you using smpd instead of the default process manager (hydra)? -- Pavan On 06/13/2013 08:05 AM, Pavan Balaji wrote: > > What's "-phrase"? That's not a recognized option. I'm not sure where > the /dev/mem check is coming from. Try running ~/main without mpiexec > first. > > -- Pavan > > On 06/13/2013 06:56 AM, Lee, Eibhlin wrote: >> Hello all, >> >> I am trying to use two raspberry-pi to sample and then process some >> data. The first process samples while the second processes and vice >> versa. To do this I use gpio and also mpich-3.0.4 with the process >> manager smpd. I have successfully run cpi on both machines (from the >> master machine). I have also managed to run a similar program but >> without the MPI, this involved compiling with gcc and when running >> putting sudo in front of the binary file. >> >> When I combine these two processes I get various error messages. >> For input: >> mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >> the error is: >> Can't open /dev/mem >> Did you forget to use 'sudo .. ?' 
>
> For input:
> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main
> the error is:
> sudo: mpiexec: Command not found
>
> I therefore put mpiexec into /usr/bin
>
> now for input:
> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main
> the error is:
> Can't open /dev/mem
> Did you forget to use 'sudo .. ?'
>
> Does anyone know how I can work around this?
> Thanks,
> Eibhlin
>
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji

From fxiao at mymail.mines.edu  Thu Jun 13 08:31:09 2013
From: fxiao at mymail.mines.edu (Feng Xiao)
Date: Thu, 13 Jun 2013 07:31:09 -0600
Subject: [mpich-discuss] How to keep the VS console window open on exceptions during debug of a parallel code
Message-ID: 

Hello,

I am writing to ask how to keep the console window open on exceptions
during debugging of a parallel code (either F5 or Ctrl+F5). I am running an
MPICH2 FORTRAN program in Visual Studio 2010. Everything works fine, except
that I cannot see the exception/error messages on the console window; it
closes right after the program exits on an exception. I do know how to keep
the console window open when the program reaches its end, or how to put a
read statement in to pause the program during execution.

I did some google searching, and the only relevant solution I could find is
http://www.boost.org/doc/libs/1_36_0/libs/test/doc/html/utf/usage-recommendations/dot-net-specific.html,
which is about making the debugger break at the point of failure by adding
an extra command line argument and seeing the runtime error in the output
window. However, it looks like something for a serial code without
mpiexec.exe in the command line, and I am not an advanced VS user, so I
don't know how to do it for a parallel code, or if there is any other way
out.

Thanks in advance!

--
Feng Xiao

Doctoral Student
Petroleum Engineering
Colorado School of Mines

Cell 918-814-2644

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From balaji at mcs.anl.gov  Thu Jun 13 08:34:27 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Thu, 13 Jun 2013 08:34:27 -0500
Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem
In-Reply-To: <51B9C390.8050705@mcs.anl.gov>
References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov>
Message-ID: <51B9CA63.6010100@mcs.anl.gov>

I just saw your older email. Why are you using smpd instead of the
default process manager (hydra)?

 -- Pavan

On 06/13/2013 08:05 AM, Pavan Balaji wrote:
>
> What's "-phrase"? That's not a recognized option. I'm not sure where
> the /dev/mem check is coming from. Try running ~/main without mpiexec
> first.
>
>  -- Pavan
>
> On 06/13/2013 06:56 AM, Lee, Eibhlin wrote:
>> Hello all,
>>
>> I am trying to use two raspberry-pi to sample and then process some
>> data. The first process samples while the second processes and vice
>> versa. To do this I use gpio and also mpich-3.0.4 with the process
>> manager smpd. I have successfully run cpi on both machines (from the
>> master machine). I have also managed to run a similar program but
>> without the MPI, this involved compiling with gcc and when running
>> putting sudo in front of the binary file.
>>
>> When I combine these two processes I get various error messages.
>> For input:
>> mpiexec -phrase cat -machinefile machinefile -n 2 ~/main
>> the error is:
>> Can't open /dev/mem
>> Did you forget to use 'sudo .. ?'
> I did some google search, the only relevant solution I could find is > http://www.boost.org/doc/libs/1_36_0/libs/test/doc/html/utf/usage-recommendations/dot-net-specific.html, which is about making debugger break at the point the failure by adding extra command line argument and seeing the runtime error in the output window. However, it looks like something for a serial code without mpiexec.exe in the command line, and I am not a advanced VS user, so I don't know how to do it for a parallel code, or if there is any other way out. > > Thanks in advance! > > -- > Feng Xiao > > Doctoral Student > Petroleum Engineering > Colorado School of Mines > > Cell 918-814-2644 > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From eibhlin.lee10 at imperial.ac.uk Thu Jun 13 09:09:52 2013 From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin) Date: Thu, 13 Jun 2013 14:09:52 +0000 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: <51B9CA63.6010100@mcs.anl.gov> References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov>,<51B9CA63.6010100@mcs.anl.gov> Message-ID: <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk> Pavan, I had a lot of trouble getting hydra to work without having to enter a password/passphrase. I saw the option to pass a phrase in the mpich installers guide. I eventually found that for that command you needed to use the smpd process manager. That's the only reason I chose smpd over hydra. As to your other suggestion. I ran ./main and the same error (Can't open /dev/mem...) appeared. sudo ./main works but of course without multiple processes. Eibhlin ________________________________________ From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Pavan Balaji [balaji at mcs.anl.gov] Sent: 13 June 2013 14:34 To: discuss at mpich.org Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem I just saw your older email. Why are you using smpd instead of the default process manager (hydra)? -- Pavan On 06/13/2013 08:05 AM, Pavan Balaji wrote: > > What's "-phrase"? That's not a recognized option. I'm not sure where > the /dev/mem check is coming from. Try running ~/main without mpiexec > first. > > -- Pavan > > On 06/13/2013 06:56 AM, Lee, Eibhlin wrote: >> Hello all, >> >> I am trying to use two raspberry-pi to sample and then process some >> data. The first process samples while the second processes and vice >> versa. To do this I use gpio and also mpich-3.0.4 with the process >> manager smpd. I have successfully run cpi on both machines (from the >> master machine). I have also managed to run a similar program but >> without the MPI, this involved compiling with gcc and when running >> putting sudo in front of the binary file. >> >> When I combine these two processes I get various error messages. >> For input: >> mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >> the error is: >> Can't open /dev/mem >> Did you forget to use 'sudo .. ?' 
>> >> For input: >> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >> the error is: >> sudo: mpiexec: Command not found >> >> I therefore put mpiexec into /usr/bin >> >> now for input: >> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >> the error is: >> Can't open /dev/mem >> Did you forget to use 'sudo .. ?' >> >> Does anyone know how I can work around this? >> Thanks, >> Eibhlin >> >> >> >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss >> > -- Pavan Balaji http://www.mcs.anl.gov/~balaji _______________________________________________ discuss mailing list discuss at mpich.org To manage subscription options or unsubscribe: https://lists.mpich.org/mailman/listinfo/discuss From eibhlin.lee10 at imperial.ac.uk Thu Jun 13 09:15:16 2013 From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin) Date: Thu, 13 Jun 2013 14:15:16 +0000 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk>, Message-ID: <2D283C3861654E41AEB39AE4B6767663173A1F32@icexch-m3.ic.ac.uk> Jeff, Does that put you into the same environment as sudo bash? Because I have already tried that with no success. Eibhlin ________________________________________ From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Jeff Hammond [jhammond at alcf.anl.gov] Sent: 13 June 2013 13:58 To: discuss at mpich.org Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem Just su to root instead of using sudo. Jeff On Thu, Jun 13, 2013 at 6:56 AM, Lee, Eibhlin wrote: > Hello all, > > I am trying to use two raspberry-pi to sample and then process some data. > The first process samples while the second processes and vice versa. To do > this I use gpio and also mpich-3.0.4 with the process manager smpd. I have > successfully run cpi on both machines (from the master machine). I have also > managed to run a similar program but without the MPI, this involved > compiling with gcc and when running putting sudo in front of the binary > file. > > When I combine these two processes I get various error messages. > For input: > mpiexec -phrase cat -machinefile machinefile -n 2 ~/main > the error is: > Can't open /dev/mem > Did you forget to use 'sudo .. ?' > > For input: > sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main > the error is: > sudo: mpiexec: Command not found > > I therefore put mpiexec into /usr/bin > > now for input: > sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main > the error is: > Can't open /dev/mem > Did you forget to use 'sudo .. ?' > > Does anyone know how I can work around this? 
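The distinction at issue here, as a sketch (nothing Raspberry Pi specific): a plain su, or sudo bash, starts a root shell but keeps most of the caller's environment, while a login shell rebuilds the environment from root's own dotfiles:

su -        # login shell: root's environment, from root's dotfiles
sudo -i     # roughly the same effect, via sudo
sudo bash   # root shell, but largely the invoking user's environment

Which of these counts as "the same environment as sudo bash" is exactly the point the thread keeps tripping over.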
From balaji at mcs.anl.gov  Thu Jun 13 09:21:59 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Thu, 13 Jun 2013 09:21:59 -0500
Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem
In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk>
Message-ID: <51B9D587.8070104@mcs.anl.gov>

smpd was mainly meant for Windows support.  It is now deprecated, so we
don't officially support it.  Someone on the mailing list might help,
but no promises.

With respect to hydra, problems with it are what you should have
reported on this mailing list, and we would have tried to help. :-)

If ./main itself is not working, there's no hope of getting it to work
with mpiexec.  You'll need to debug that first.

 -- Pavan

On 06/13/2013 09:09 AM, Lee, Eibhlin wrote:
> [...]

-- 
Pavan Balaji
http://www.mcs.anl.gov/~balaji

From jhammond at alcf.anl.gov  Thu Jun 13 09:27:15 2013
From: jhammond at alcf.anl.gov (Jeff Hammond)
Date: Thu, 13 Jun 2013 09:27:15 -0500
Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem
In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A1F32@icexch-m3.ic.ac.uk>
Message-ID: 

No, of course the root environment is not the same as your user
environment.  There are plenty of "intro to Linux" resources online if
you need this kind of information.

If you want your user environment as root, which I don't think is
particularly safe or a good idea, you can set it with
"source ~${yourusername}/.${yourshell}rc", where yourusername is
$USERNAME when you are not root and yourshell is $SHELL when you are
not root.

The better option is to figure out which environment variables you need
to set in the mpiexec environment that are currently defined in your
user environment, and set those manually.

Jeff

On Thu, Jun 13, 2013 at 9:15 AM, Lee, Eibhlin wrote:
> [...]

-- 
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
ALCF docs: http://www.alcf.anl.gov/user-guides
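To pin down which variables those are, a minimal sketch (assuming bash on the Pi; MY_VAR is a made-up name): diff the two environments, then pass the relevant ones through explicitly. sudo -E preserves the caller's environment, and Hydra's mpiexec accepts -env to set a single variable for the launched processes:

diff <(env | sort) <(sudo env | sort)
mpiexec -n 2 -env MY_VAR somevalue ~/main

Anything that appears only on the user side of the diff is a candidate to forward.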
From fxiao at mymail.mines.edu  Thu Jun 13 09:28:37 2013
From: fxiao at mymail.mines.edu (Feng Xiao)
Date: Thu, 13 Jun 2013 08:28:37 -0600
Subject: [mpich-discuss] How to keep the VS console window open on exceptions during debug of a parallel code
Message-ID: 

Since most people here probably don't use Visual Studio very often (me
neither), I tried several ways and found a solution. It might be useful
for someone, so I am writing to you all again to close my own question.

I added the following line to the Command Line of the Post-Build Event,
under Build Events on the property pages:

"D:\Program Files\MPICH2\bin\mpiexec.exe" -n 2 "$(TargetDir)\$(TargetName).exe" --result_code=no --report_level=no

The code executes after the build, and all messages are written in the
VS output window.
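A lower-tech variant of the same idea, as a sketch (app.exe stands in for the real target name): launch from an already-open command prompt and capture both output streams, so the messages survive even when the program aborts and the window would otherwise close:

"D:\Program Files\MPICH2\bin\mpiexec.exe" -n 2 app.exe > run.log 2>&1

Since the prompt itself stays open, run.log and the console can both be inspected after a crash.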
On Thu, Jun 13, 2013 at 7:38 AM, Feng Xiao wrote:
> [...]

-- 
Feng Xiao

Doctoral Student
Petroleum Engineering
Colorado School of Mines

Cell 918-814-2644

From gus at ldeo.columbia.edu  Thu Jun 13 09:37:45 2013
From: gus at ldeo.columbia.edu (Gus Correa)
Date: Thu, 13 Jun 2013 10:37:45 -0400
Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem
In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk>
Message-ID: <51B9D939.5040203@ldeo.columbia.edu>

Hi Lee

How about replacing "~/main" in the mpiexec command line with a
one-liner script? Say, "sudo_main.sh", something like this:

#! /bin/bash
sudo ~/main

After all, it is "main" that accesses /dev/mem and needs "sudo"
permissions, not mpiexec, right?
[Or do the mpiexec-launched processes inherit the "sudo" stuff from
mpiexec?]

Not related, but: instead of putting mpiexec in /usr/bin, can't you just
use the full path to it?

I hope this helps,
Gus Correa

On 06/13/2013 10:09 AM, Lee, Eibhlin wrote:
> [...]
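One caveat on the wrapper approach, sketched here only: sudo must be able to run without a terminal on every node, or the remote ranks will hang or die waiting for a password. That suggests a sudoers rule (edited with visudo) along these lines, assuming the user is pi and the binary lives at /home/pi/main:

pi ALL=(root) NOPASSWD: /home/pi/main

Restricting the rule to the one binary keeps the hole as small as possible.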
From eibhlin.lee10 at imperial.ac.uk  Thu Jun 13 09:36:55 2013
From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin)
Date: Thu, 13 Jun 2013 14:36:55 +0000
Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem
Message-ID: <2D283C3861654E41AEB39AE4B6767663173A1F67@icexch-m3.ic.ac.uk>

In that case there are definitely misleading guides out there.
http://www.cyberciti.biz/faq/ubuntu-linux-root-password-default-password/
implies that sudo bash does log you in as the root user.
Either my second question confused you, it wasn't the right question to
ask, or the guide I linked to is completely wrong.
Eibhlin
________________________________________
From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Jeff Hammond [jhammond at alcf.anl.gov]
Sent: 13 June 2013 15:27
To: discuss at mpich.org
Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem

[...]
From jhammond at alcf.anl.gov  Thu Jun 13 09:46:48 2013
From: jhammond at alcf.anl.gov (Jeff Hammond)
Date: Thu, 13 Jun 2013 09:46:48 -0500
Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem
In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A1F67@icexch-m3.ic.ac.uk>
Message-ID: 

Sorry, I misinterpreted/misread your question or otherwise acted
stupidly. I think http://www.linfo.org/su.html and
http://www.howtogeek.com/111479/htg-explains-whats-the-difference-between-sudo-su/
answer everything in detail, but "sudo" and "su" give you the same
environment.

Jeff

On Thu, Jun 13, 2013 at 9:36 AM, Lee, Eibhlin wrote:
> [...]

-- 
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
ALCF docs: http://www.alcf.anl.gov/user-guides
From balaji at mcs.anl.gov  Thu Jun 13 10:02:48 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Thu, 13 Jun 2013 10:02:48 -0500
Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem
In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A1F56@icexch-m3.ic.ac.uk>
Message-ID: <51B9DF18.20104@mcs.anl.gov>

Please don't drop the mailing list from the cc.

You can do an strace to see where /dev/mem is being used.
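For example, as a sketch (assuming the strace package is installed on the Pi and the binary is ~/main):

strace -f -e trace=open ~/main 2>&1 | grep /dev/mem

The failing call should show up as something like open("/dev/mem", ...) = -1 EACCES (Permission denied), which points at the code path doing the access.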
If you are using multiple nodes, hydra only relies on passwordless ssh.
It doesn't really care what the user id is (root or non-root).

 -- Pavan

On 06/13/2013 09:27 AM, Lee, Eibhlin wrote:
> Unfortunately I didn't come across this mailing list until recently. Sod's law.
> ./main does not work, but sudo ./main does work.
> Could hydra compile and run a program where this is the case?
> I ask because the only way I'll be able to get around using sudo is to change the ownership of /dev/mem. It goes without saying that that's dangerous!
> Eibhlin
> [...]

-- 
Pavan Balaji
http://www.mcs.anl.gov/~balaji

From eibhlin.lee10 at imperial.ac.uk  Thu Jun 13 10:08:21 2013
From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin)
Date: Thu, 13 Jun 2013 15:08:21 +0000
Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem
In-Reply-To: <51B9DF18.20104@mcs.anl.gov>
Message-ID: <2D283C3861654E41AEB39AE4B6767663173A1F97@icexch-m3.ic.ac.uk>

My apologies.
So hydra is in effect a super user?
Eibhlin
________________________________________
From: Pavan Balaji [balaji at mcs.anl.gov]
Sent: 13 June 2013 16:02
To: Lee, Eibhlin
Cc: discuss at mpich.org
Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem

[...]
From balaji at mcs.anl.gov  Thu Jun 13 10:17:15 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Thu, 13 Jun 2013 10:17:15 -0500
Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem
In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A1F97@icexch-m3.ic.ac.uk>
Message-ID: <51B9E27B.5070805@mcs.anl.gov>

No.  Think of hydra as an ssh program.
I have also managed to run a similar program but >>>>> without the MPI, this involved compiling with gcc and when running >>>>> putting sudo in front of the binary file. >>>>> >>>>> When I combine these two processes I get various error messages. >>>>> For input: >>>>> mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>>>> the error is: >>>>> Can't open /dev/mem >>>>> Did you forget to use 'sudo .. ?' >>>>> >>>>> For input: >>>>> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>>>> the error is: >>>>> sudo: mpiexec: Command not found >>>>> >>>>> I therefore put mpiexec into /usr/bin >>>>> >>>>> now for input: >>>>> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>>>> the error is: >>>>> Can't open /dev/mem >>>>> Did you forget to use 'sudo .. ?' >>>>> >>>>> Does anyone know how I can work around this? >>>>> Thanks, >>>>> Eibhlin >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> discuss mailing list discuss at mpich.org >>>>> To manage subscription options or unsubscribe: >>>>> https://lists.mpich.org/mailman/listinfo/discuss >>>>> >>>> >>> >>> -- >>> Pavan Balaji >>> http://www.mcs.anl.gov/~balaji >>> _______________________________________________ >>> discuss mailing list discuss at mpich.org >>> To manage subscription options or unsubscribe: >>> https://lists.mpich.org/mailman/listinfo/discuss >>> _______________________________________________ >>> discuss mailing list discuss at mpich.org >>> To manage subscription options or unsubscribe: >>> https://lists.mpich.org/mailman/listinfo/discuss >>> >> >> -- >> Pavan Balaji >> http://www.mcs.anl.gov/~balaji >> > > -- > Pavan Balaji > http://www.mcs.anl.gov/~balaji > -- Pavan Balaji http://www.mcs.anl.gov/~balaji From apenya at mcs.anl.gov Thu Jun 13 11:06:58 2013 From: apenya at mcs.anl.gov (Antonio =?ISO-8859-1?Q?J=2E_Pe=F1a?=) Date: Thu, 13 Jun 2013 11:06:58 -0500 Subject: [mpich-discuss] How to keep the VS console window open on exceptions during debug of a parallel code In-Reply-To: References: Message-ID: <1762103.xcWpRZDbIR@localhost.localdomain> Dear Feng Xiao, Unfortunately we discontinued support for Windows some time ago. In addition, our Windows expert is no longer working in our team. If you need support on Windows, I'd suggest considering using some other implementation. I hope some other user subscribed to this list can help you. Best, Antonio On Thursday, June 13, 2013 07:31:09 AM Feng Xiao wrote: Hello, I am writing about how to keep the console window open on exceptions during debug of a parallel code (either F5 or Ctrl+F5). I am running MPICH2 FORTRAN program in Visual Studio 2010, everything works fine, expect that I could not see the exception/error messages on the console window, it closes right after the program exits on exceptions. I do know how to keep the console window open when program meets the end, or put a read statement to pause the program during the execution. I did some google search, the only relevant solution I could find is http://www.boost.org/doc/libs/1_36_0/libs/test/doc/html/utf/usage-recommendations/dot-net-specific.html[1], which is about making debugger break at the point the failure by adding extra command line argument and seeing the runtime error in the output window. However, it looks like something for a serial code without mpiexec.exe in the command line, and I am not a advanced VS user, so I don't know how to do it for a parallel code, or if there is any other way out. Thanks in advance! 
-- Feng Xiao Doctoral Student Petroleum Engineering Colorado School of Mines Cell 918-814-2644 -------- [1] http://www.boost.org/doc/libs/1_36_0/libs/test/doc/html/utf/usage-recommendations/dot-net-specific.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From jayesh at mcs.anl.gov Thu Jun 13 11:34:49 2013 From: jayesh at mcs.anl.gov (Jayesh Krishna) Date: Thu, 13 Jun 2013 11:34:49 -0500 (CDT) Subject: [mpich-discuss] install + config on windows In-Reply-To: <1820123.9Zsec46ICq@localhost.localdomain> Message-ID: <979782457.4816323.1371141289107.JavaMail.root@mcs.anl.gov> Hi, >> ...actually, it seems that Mpich must be compiled to work... You shouldn't have to build MPICH2 on Windows to run your MPI programs. You should be able to install MPICH2 on your system using the msi files provided in the MPICH website (However, if you are using a 3rd party software compiled using MPICH2 you need to install the same version of MPICH on your system - ask the developers of the software for the version of MPICH). Since you are using Windows 7, you need to make sure that you install MPICH2 from an administrator command prompt (Uninstall any versions of MPICH2 installed in your system and follow guidelines in the installer's guide to install MPICH2). As Antonio mentioned we just don't have the developer bandwidth to keep the Windows part of code in MPICH up to date. So if you need to use MPICH2 on Windows please use the older versions of MPICH2 available on the MPICH website. Regards, Jayesh ----- Original Message ----- From: "Antonio J. Pe?a" To: geo at spatiogis.fr, discuss at mpich.org Sent: Tuesday, June 11, 2013 5:17:20 PM Subject: Re: [mpich-discuss] install + config on windows Hi Benoit, As our support for Windows platforms has been discontinued, we cannot guarantee MPICH is going to be able to be compiled in newer versions of the Windows compilers and/or the operating system itself. Unless Jayesh has any comments on this regard, I'm not aware of any experience of MPICH + Windows 7. I apologize for the inconvenience. Thanks, Antonio On Monday, June 10, 2013 08:04:03 AM spatiogis wrote: > Hello, > > actually, it seems that Mpich must be compiled to work. The point is that > the "Readme" file gives an explanation to compile the programs with Visual > studio 2003. Anyway this last software is very difficult to make work on > windows 7. > > Is there finally a way to compile Mpich on windows 7 with Visual Studio ? > > best regards, > > Benoit V?ler > > > Hi, > > > > From the log output it looks like credentials (password) for > > > > Utilisateur was not correct. > > > > Is Utilisateur a valid Windows user on your machine? Have you > > > > registered the username/password correctly (Try re-registering the > > username+password by typing "mpiexec -register" at the command prompt)? > > > > Regards, > > Jayesh > > > > ----- Original Message ----- > > From: "spatiogis" > > To: discuss at mpich.org > > Sent: Friday, May 3, 2013 11:58:00 AM > > Subject: Re: [mpich-discuss] install + config on windows > > > > Hello, > > > > for this command : > > > > # C:\Progra~1\MPICH2\bin\mpiexec -verbose -n 2 > > > > C:\Progra~1\MPICH2\examples\cpi.exe > > > > result : > > > > ....../SMPDU_Sock_post_readv > > > > ...../SMPDU_Sock_post_read > > ..../smpd_handle_op_connect > > ....sock_waiting for the next event. 
> > ....\SMPDU_Sock_wait > > ..../SMPDU_Sock_wait > > ....SOCK_OP_READ event.error = 0, result = 0, context=left > > ....\smpd_handle_op_read > > .....\smpd_state_reading_challenge_string > > ......read challenge string: '1.4.1p1 18467' > > ......\smpd_verify_version > > ....../smpd_verify_version > > ......Verification of smpd version succeeded > > ......\smpd_hash > > ....../smpd_hash > > ......\SMPDU_Sock_post_write > > .......\SMPDU_Sock_post_writev > > ......./SMPDU_Sock_post_writev > > ....../SMPDU_Sock_post_write > > ...../smpd_state_reading_challenge_string > > ..../smpd_handle_op_read > > ....sock_waiting for the next event. > > ....\SMPDU_Sock_wait > > ..../SMPDU_Sock_wait > > ....SOCK_OP_WRITE event.error = 0, result = 0, context=left > > ....\smpd_handle_op_write > > .....\smpd_state_writing_challenge_response > > ......wrote challenge response: 'dafd1d07c1e6e9cb5fae968403d0d933' > > ......\SMPDU_Sock_post_read > > .......\SMPDU_Sock_post_readv > > ......./SMPDU_Sock_post_readv > > ....../SMPDU_Sock_post_read > > ...../smpd_state_writing_challenge_response > > ..../smpd_handle_op_write > > ....sock_waiting for the next event. > > ....\SMPDU_Sock_wait > > ..../SMPDU_Sock_wait > > ....SOCK_OP_READ event.error = 0, result = 0, context=left > > ....\smpd_handle_op_read > > .....\smpd_state_reading_connect_result > > ......read connect result: 'SUCCESS' > > ......\SMPDU_Sock_post_write > > .......\SMPDU_Sock_post_writev > > ......./SMPDU_Sock_post_writev > > ....../SMPDU_Sock_post_write > > ...../smpd_state_reading_connect_result > > ..../smpd_handle_op_read > > ....sock_waiting for the next event. > > ....\SMPDU_Sock_wait > > ..../SMPDU_Sock_wait > > ....SOCK_OP_WRITE event.error = 0, result = 0, context=left > > ....\smpd_handle_op_write > > .....\smpd_state_writing_process_session_request > > ......wrote process session request: 'process' > > ......\SMPDU_Sock_post_read > > .......\SMPDU_Sock_post_readv > > ......./SMPDU_Sock_post_readv > > ....../SMPDU_Sock_post_read > > ...../smpd_state_writing_process_session_request > > ..../smpd_handle_op_write > > ....sock_waiting for the next event. 
> > ....\SMPDU_Sock_wait
> > ..../SMPDU_Sock_wait
> > ....SOCK_OP_READ event.error = 0, result = 0, context=left
> > ....\smpd_handle_op_read
> > .....\smpd_state_reading_cred_request
> > ......read cred request: 'credentials'
> > ......[five identical \smpd_hide_string_arg/first_token/compare_token/next_token sequences elided]
> > .......\smpd_option_on
> > ........\smpd_get_smpd_data
> > .........\smpd_get_smpd_data_from_environment
> > ........./smpd_get_smpd_data_from_environment
> > .........\smpd_get_smpd_data_default
> > ........./smpd_get_smpd_data_default
> > .........Unable to get the data for the key 'nocache'
> > ......../smpd_get_smpd_data
> > ......./smpd_option_on
> > ......[one more \smpd_hide_string_arg sequence elided]
> > ......\SMPDU_Sock_post_write
> > .......\SMPDU_Sock_post_writev
> > ......./SMPDU_Sock_post_writev
> > ....../SMPDU_Sock_post_write
> > ...../smpd_handle_op_read
> > .....sock_waiting for the next event.
> > .....\SMPDU_Sock_wait
> > ...../SMPDU_Sock_wait
> > .....SOCK_OP_WRITE event.error = 0, result = 0, context=left
> > .....\smpd_handle_op_write
> > ......\smpd_state_writing_cred_ack_yes
> > .......wrote cred request yes ack.
> > .......\SMPDU_Sock_post_write
> > ........\SMPDU_Sock_post_writev
> > ......../SMPDU_Sock_post_writev
> > ......./SMPDU_Sock_post_write
> > ....../smpd_state_writing_cred_ack_yes
> > ...../smpd_handle_op_write
> > .....sock_waiting for the next event.
> > .....\SMPDU_Sock_wait
> > ...../SMPDU_Sock_wait
> > .....SOCK_OP_WRITE event.error = 0, result = 0, context=left
> > .....\smpd_handle_op_write
> > ......\smpd_state_writing_account
> > .......wrote account: 'Utilisateur'
> > .......\smpd_encrypt_data
> > ......./smpd_encrypt_data
> > .......\SMPDU_Sock_post_write
> > ........\SMPDU_Sock_post_writev
> > ......../SMPDU_Sock_post_writev
> > ......./SMPDU_Sock_post_write
> > ....../smpd_state_writing_account
> > ...../smpd_handle_op_write
> > .....sock_waiting for the next event.
> > .....\SMPDU_Sock_wait
> > ...../SMPDU_Sock_wait
> > .....SOCK_OP_WRITE event.error = 0, result = 0, context=left
> > .....\smpd_handle_op_write
> > ......[two \smpd_hide_string_arg sequences elided]
> > .......\SMPDU_Sock_post_read
> > ........\SMPDU_Sock_post_readv
> > ......../SMPDU_Sock_post_readv
> > ......./SMPDU_Sock_post_read
> > ......[one \smpd_hide_string_arg sequence elided]
> > ...../smpd_handle_op_write
> > .....sock_waiting for the next event.
> > .....\SMPDU_Sock_wait
> > ...../SMPDU_Sock_wait
> > .....SOCK_OP_READ event.error = 0, result = 0, context=left
> > .....\smpd_handle_op_read
> > ......\smpd_state_reading_process_result
> > .......read process session result: 'FAIL'
> > ......[two \smpd_hide_string_arg sequences elided]
> > Credentials for Utilisateur rejected connecting to Benoit
> > .......process session rejected
> > .......\SMPDU_Sock_post_close
> > ........\SMPDU_Sock_post_read
> > .........\SMPDU_Sock_post_readv
> > ........./SMPDU_Sock_post_readv
> > ......../SMPDU_Sock_post_read
> > ......./SMPDU_Sock_post_close
> > .......\smpd_post_abort_command
> > ........\smpd_create_command
> > .........\smpd_init_command
> > ........./smpd_init_command
> > ......../smpd_create_command
> > ........\smpd_add_command_arg
> > ......../smpd_add_command_arg
> > ........\smpd_command_destination
> > .........0 -> 0 : returning NULL context
> > ......../smpd_command_destination
> > Aborting: Unable to connect to Benoit
> > ......./smpd_post_abort_command
> > .......\smpd_exit
> > ........\smpd_kill_all_processes
> > ......../smpd_kill_all_processes
> > ........\smpd_finalize_drive_maps
> > ......../smpd_finalize_drive_maps
> > ........\smpd_dbs_finalize
> > ......../smpd_dbs_finalize
> > ........\SMPDU_Sock_finalize
> > ......../SMPDU_Sock_finalize
> >
> > C:\Users\Utilisateur>
> >
> >> Hi,
> >>
> >> Looks like you missed the "-" before the status ("smpd -status" not
> >> "smpd status") argument.
> >>
> >> It also looks like you have multiple MPI libraries installed in your
> >> system. Try running this command (full path to mpiexec and smpd),
> >>
> >> # C:\Progra~1\MPICH2\bin\smpd -status
> >> # C:\Progra~1\MPICH2\bin\mpiexec -verbose -n 2 C:\Progra~1\MPICH2\examples\cpi.exe
> >>
> >> Regards,
> >> Jayesh
> >>
> >> ----- Original Message -----
> >> From: "spatiogis"
> >> To: "Jayesh Krishna"
> >> Sent: Friday, May 3, 2013 11:05:34 AM
> >> Subject: Re: [mpich-discuss] install + config on windows
> >>
> >> Hello,
> >>
> >> C:\Users\Utilisateur>smpd status
> >> Unexpected parameters: status
> >>
> >> C:\Users\Utilisateur>mpiexec -verbose -n 2 C:\Progra~1\MPICH2\examples\cpi.exe
> >> Unknown option: -verbose
> >>
> >> -----------------------------------------------------------------------------
> >> C:\Program Files\MPICH2\examples>mpiexec -verbose -n 2 cpi.exe
> >> Unknown option: -verbose
> >>
> >> C:\Program Files\MPICH2\examples>smpd status
> >> Unexpected parameters: status
> >> -----------------------------------------------------------------------------
> >>
> >> regards, Ben
> >>
> >>> Hi,
> >>>
> >>> Ok. Please send us the output of the following commands,
> >>>
> >>> # smpd -status
> >>> # mpiexec -verbose -n 2 C:\Progra~1\MPICH2\examples\cpi.exe
> >>>
> >>> Please copy-paste the command and the complete output in your email.
> >>> > >>> Regards, > >>> Jayesh > >>> > >>> > >>> ----- Original Message ----- > >>> From: "spatiogis" > >>> To: discuss at mpich.org > >>> Sent: Friday, May 3, 2013 1:46:53 AM > >>> Subject: Re: [mpich-discuss] install + config on windows > >>> > >>> Hello > >>> > >>>> (PS: I am assuming from your reply in the previous email that you can > >>>> run a command like "mpiexec -n 2 C:\Progra~1\MPICH2\examples\cpi.exe" > >>>> correctly) > >>>> > >>> In fact this command doesn't run. > >>> > >>> The message is this one > >>> > >>> [01:11728]....ERROR:unable to read the cmd header on the pmi context, > >>> > >>> Error = -1 > >>> > >>> Ben > >>> > >>>> ----- Original Message ----- > >>>> From: "spatiogis" > >>>> To: "Jayesh Krishna" > >>>> Sent: Thursday, May 2, 2013 10:48:56 AM > >>>> Subject: Re: [mpich-discuss] install + config on windows > >>>> > >>>> Hello, > >>>> > >>>>> Hi, > >>>>> > >>>>> Are you able to run any other MPI programs? Try running the example > >>>>> > >>>>> program, cpi.exe (C:\Program Files\MPICH2\examples\cpi.exe), to make > >>>>> sure that your MPICH2 installation works. > >>>>> > >>>> yes it does work > >>>> > >>>>> Installing MPICH2 on Windows 7 typically requires you to uninstall > >>>>> > >>>>> any > >>>>> previous versions of MPICH2, launch an administrative command promt > >>>>> and > >>>>> run "msiexec /i mpich2-installer.msi" to install MPICH2. > >>>>> > >>>> yes it 's been installed like this... > >>>> > >>>> In wmpiconfig, the message is the following in the 'Get settings' > >>>> > >>>> line. > >>>> > >>>> Credentials for Utilisateur rejected connecting to host > >>>> Aborting: Unable to connect to host > >>>> > >>>> The software I try to use is Taudem, which is intergrated inside > >>>> > >>>> Qgis. > >>>> Launching a taudem process inside Qgis gives the same message. > >>>> > >>>>> Regards, > >>>>> Jayesh > >>>>> > >>>> Sincerely, Ben > >>>> > >>>>> ----- Original Message ----- > >>>>> From: "spatiogis" > >>>>> To: discuss at mpich.org > >>>>> Sent: Thursday, May 2, 2013 10:08:23 AM > >>>>> Subject: Re: [mpich-discuss] install + config on windows > >>>>> > >>>>> Hello, > >>>>> > >>>>> in my case Mpich is normally used to run .exe programs. I guess that > >>>>> > >>>>> they > >>>>> are already compiled... > >>>>> > >>>>> The .exe files are integrated into a software, and accessed from > >>>>> > >>>>> menus > >>>>> inside it. When I run one of the programs, the answer is actually > >>>>> "unable > >>>>> to query host". > >>>>> > >>>>> At the end, the process is not realised. It seems that this 'host' > >>>>> > >>>>> question is a problem to the software... > >>>>> > >>>>> Sincerely, > >>>>> > >>>>> Ben. > >>>>> > >>>>>> Hi, > >>>>>> > >>>>>> You can download MPICH2 binaries for Windows at > >>>>>> > >>>>>> http://www.mpich.org/downloads/ . > >>>>>> > >>>>>> You need to compile your MPI programs with MPICH2 to make it work. > >>>>>> > >>>>>> I > >>>>>> would recommend recompiling your code after you install MPICH2 (If > >>>>>> you > >>>>>> have MPI program binaries pre-built with MPICH2 - instead of > >>>>>> compiling > >>>>>> them on your own - make sure that you install the same version of > >>>>>> MPICH2 > >>>>>> that was used to build the binaries). > >>>>>> > >>>>>> The wmpiregister program has a bug and you can ignore this error > >>>>>> > >>>>>> message ("...unable to query host"). Can you run your MPI program > >>>>>> using > >>>>>> mpiexec from a command prompt? 
> >>>>>> > >>>>>> Regards, > >>>>>> Jayesh > >>>>>> > >>>>>> ----- Original Message ----- > >>>>>> From: "spatiogis" > >>>>>> To: discuss at mpich.org > >>>>>> Sent: Tuesday, April 30, 2013 9:26:35 AM > >>>>>> Subject: [mpich-discuss] install + config on windows > >>>>>> > >>>>>> Hello, > >>>>>> > >>>>>> I'm not very good at computing, but I would like to install Mpich2 > >>>>>> > >>>>>> on > >>>>>> windows 7 - 64 bits. There is only one pc, with one user plus the > >>>>>> admin, > >>>>>> and a simple core processor. > >>>>>> > >>>>>> I would like to know if it's mandatory to have compiling softwares > >>>>>> > >>>>>> with > >>>>>> it to make it work, whereas it is asked in this case only to make > >>>>>> run > >>>>>> another software, and not for compiling (that would maybe save some > >>>>>> disk > >>>>>> space and simplify the installation) ? > >>>>>> > >>>>>> My second issue is that I must be missing something about the > >>>>>> > >>>>>> server > >>>>>> configuration. I have installed Mpich from the .msi file, then > >>>>>> configured > >>>>>> the wmpiregister program with the Domain/user informations. > >>>>>> > >>>>>> There is this message displayed when trying to connect in the > >>>>>> > >>>>>> 'configurable settings' window : 'MPICH2 not installed or unable to > >>>>>> query > >>>>>> the host'. > >>>>>> > >>>>>> What is the host actually ? > >>>>>> > >>>>>> I know I am starting from very far, I am sorry for these very > >>>>>> > >>>>>> simple > >>>>>> questions. Thanks if you can reply me, that would certainly save me > >>>>>> some > >>>>>> long hours of reading and testing ;) > >>>>>> > >>>>>> sincerely, > >>>>>> > >>>>>> Ben > >>>>>> > >>>>>> _______________________________________________ > >>>>>> discuss mailing list discuss at mpich.org > >>>>>> To manage subscription options or unsubscribe: > >>>>>> https://lists.mpich.org/mailman/listinfo/discuss _______________________________________________ discuss mailing list discuss at mpich.org To manage subscription options or unsubscribe: https://lists.mpich.org/mailman/listinfo/discuss From eibhlin.lee10 at imperial.ac.uk Thu Jun 13 11:46:03 2013 From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin) Date: Thu, 13 Jun 2013 16:46:03 +0000 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: <51B9E27B.5070805@mcs.anl.gov> References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov>,<51B9CA63.6010100@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk>, <51B9D587.8070104@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F56@icexch-m3.ic.ac.uk>, <51B9DF18.20104@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F97@icexch-m3.ic.ac.uk>, <51B9E27B.5070805@mcs.anl.gov> Message-ID: <2D283C3861654E41AEB39AE4B6767663173A1FEF@icexch-m3.ic.ac.uk> It sounds like that won't solve the problem I have. I have run mpiexec as the root user and still get the same problem. Eibhlin ________________________________________ From: Pavan Balaji [balaji at mcs.anl.gov] Sent: 13 June 2013 16:17 To: Lee, Eibhlin Cc: discuss at mpich.org Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem No. Think of hydra as an ssh program. If you run mpiexec as root, it'll try to ssh as root. You'll need to make sure ssh as root is passwordless. -- Pavan On 06/13/2013 10:08 AM, Lee, Eibhlin wrote: > My apologies, > So hydra is in effect a super user? 
> Eibhlin > ________________________________________ > From: Pavan Balaji [balaji at mcs.anl.gov] > Sent: 13 June 2013 16:02 > To: Lee, Eibhlin > Cc: discuss at mpich.org > Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem > > Please don't drop the mailing list from the cc. > > You can do an strace to see where /dev/mem is being used. > > If you are using multiple nodes, hydra only relies on passwordless ssh. > It doesn't really care what the user id is (root or non-root). > > -- Pavan > > On 06/13/2013 09:27 AM, Lee, Eibhlin wrote: >> Unfortunately I didn't come across this mailing list until recently. Sods law. >> ./main does not work but sudo ./main does work. >> Could hydra compile and run a program where this is the case? >> I ask because the only way I'll be able to get around using sudo is to change ownership of /dev/mem. It goes without saying that that's dangerous! >> Eibhlin >> ________________________________________ >> From: Pavan Balaji [balaji at mcs.anl.gov] >> Sent: 13 June 2013 15:21 >> To: discuss at mpich.org >> Cc: Lee, Eibhlin >> Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem >> >> smpd was mainly meant for windows support. It is now deprecated, so we >> don't officially support that. Someone on the mailing list might help, >> but no promises. >> >> With respect to hydra, problems with that is what you should have >> reported on this mailing list and we would have tried to help. :-) >> >> If ./main itself is not working, there's no hope of getting it to work >> with mpiexec. You'll need to debug that first. >> >> -- Pavan >> >> On 06/13/2013 09:09 AM, Lee, Eibhlin wrote: >>> Pavan, >>> I had a lot of trouble getting hydra to work without having to enter a password/passphrase. I saw the option to pass a phrase in the mpich installers guide. I eventually found that for that command you needed to use the smpd process manager. That's the only reason I chose smpd over hydra. >>> As to your other suggestion. I ran ./main and the same error (Can't open /dev/mem...) appeared. sudo ./main works but of course without multiple processes. >>> Eibhlin >>> ________________________________________ >>> From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Pavan Balaji [balaji at mcs.anl.gov] >>> Sent: 13 June 2013 14:34 >>> To: discuss at mpich.org >>> Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem >>> >>> I just saw your older email. Why are you using smpd instead of the >>> default process manager (hydra)? >>> >>> -- Pavan >>> >>> On 06/13/2013 08:05 AM, Pavan Balaji wrote: >>>> >>>> What's "-phrase"? That's not a recognized option. I'm not sure where >>>> the /dev/mem check is coming from. Try running ~/main without mpiexec >>>> first. >>>> >>>> -- Pavan >>>> >>>> On 06/13/2013 06:56 AM, Lee, Eibhlin wrote: >>>>> Hello all, >>>>> >>>>> I am trying to use two raspberry-pi to sample and then process some >>>>> data. The first process samples while the second processes and vice >>>>> versa. To do this I use gpio and also mpich-3.0.4 with the process >>>>> manager smpd. I have successfully run cpi on both machines (from the >>>>> master machine). I have also managed to run a similar program but >>>>> without the MPI, this involved compiling with gcc and when running >>>>> putting sudo in front of the binary file. >>>>> >>>>> When I combine these two processes I get various error messages. 
>>>>> For input: >>>>> mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>>>> the error is: >>>>> Can't open /dev/mem >>>>> Did you forget to use 'sudo .. ?' >>>>> >>>>> For input: >>>>> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>>>> the error is: >>>>> sudo: mpiexec: Command not found >>>>> >>>>> I therefore put mpiexec into /usr/bin >>>>> >>>>> now for input: >>>>> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>>>> the error is: >>>>> Can't open /dev/mem >>>>> Did you forget to use 'sudo .. ?' >>>>> >>>>> Does anyone know how I can work around this? >>>>> Thanks, >>>>> Eibhlin >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> discuss mailing list discuss at mpich.org >>>>> To manage subscription options or unsubscribe: >>>>> https://lists.mpich.org/mailman/listinfo/discuss >>>>> >>>> >>> >>> -- >>> Pavan Balaji >>> http://www.mcs.anl.gov/~balaji >>> _______________________________________________ >>> discuss mailing list discuss at mpich.org >>> To manage subscription options or unsubscribe: >>> https://lists.mpich.org/mailman/listinfo/discuss >>> _______________________________________________ >>> discuss mailing list discuss at mpich.org >>> To manage subscription options or unsubscribe: >>> https://lists.mpich.org/mailman/listinfo/discuss >>> >> >> -- >> Pavan Balaji >> http://www.mcs.anl.gov/~balaji >> > > -- > Pavan Balaji > http://www.mcs.anl.gov/~balaji > -- Pavan Balaji http://www.mcs.anl.gov/~balaji From balaji at mcs.anl.gov Thu Jun 13 11:48:35 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Thu, 13 Jun 2013 11:48:35 -0500 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A1FEF@icexch-m3.ic.ac.uk> References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov>, <51B9CA63.6010100@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk>, <51B9D587.8070104@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F56@icexch-m3.ic.ac.uk>, <51B9DF18.20104@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F97@icexch-m3.ic.ac.uk>, <51B9E27B.5070805@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1FEF@icexch-m3.ic.ac.uk> Message-ID: <51B9F7E3.5070505@mcs.anl.gov> Try running: % mpiexec [whatever-argument] id This will give you what user id you are running your program as. -- Pavan On 06/13/2013 11:46 AM, Lee, Eibhlin wrote: > It sounds like that won't solve the problem I have. I have run mpiexec as the root user and still get the same problem. > Eibhlin > ________________________________________ > From: Pavan Balaji [balaji at mcs.anl.gov] > Sent: 13 June 2013 16:17 > To: Lee, Eibhlin > Cc: discuss at mpich.org > Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem > > No. Think of hydra as an ssh program. If you run mpiexec as root, > it'll try to ssh as root. You'll need to make sure ssh as root is > passwordless. > > -- Pavan > > On 06/13/2013 10:08 AM, Lee, Eibhlin wrote: >> My apologies, >> So hydra is in effect a super user? >> Eibhlin >> ________________________________________ >> From: Pavan Balaji [balaji at mcs.anl.gov] >> Sent: 13 June 2013 16:02 >> To: Lee, Eibhlin >> Cc: discuss at mpich.org >> Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem >> >> Please don't drop the mailing list from the cc. >> >> You can do an strace to see where /dev/mem is being used. 
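[A concrete form of the strace suggestion quoted above, as a minimal sketch; the log file name main.trace is our choice, and ./main stands in for the user's binary. Run the program on its own under strace, follow any forked children with -f, then search the log for the open call on /dev/mem:

% strace -f -o main.trace ./main
% grep '/dev/mem' main.trace

The matching line in the log, together with the calls around it, shows which library routine is asking for the privileged access.]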
>> >> If you are using multiple nodes, hydra only relies on passwordless ssh. >> It doesn't really care what the user id is (root or non-root). >> >> -- Pavan >> >> On 06/13/2013 09:27 AM, Lee, Eibhlin wrote: >>> Unfortunately I didn't come across this mailing list until recently. Sods law. >>> ./main does not work but sudo ./main does work. >>> Could hydra compile and run a program where this is the case? >>> I ask because the only way I'll be able to get around using sudo is to change ownership of /dev/mem. It goes without saying that that's dangerous! >>> Eibhlin >>> ________________________________________ >>> From: Pavan Balaji [balaji at mcs.anl.gov] >>> Sent: 13 June 2013 15:21 >>> To: discuss at mpich.org >>> Cc: Lee, Eibhlin >>> Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem >>> >>> smpd was mainly meant for windows support. It is now deprecated, so we >>> don't officially support that. Someone on the mailing list might help, >>> but no promises. >>> >>> With respect to hydra, problems with that is what you should have >>> reported on this mailing list and we would have tried to help. :-) >>> >>> If ./main itself is not working, there's no hope of getting it to work >>> with mpiexec. You'll need to debug that first. >>> >>> -- Pavan >>> >>> On 06/13/2013 09:09 AM, Lee, Eibhlin wrote: >>>> Pavan, >>>> I had a lot of trouble getting hydra to work without having to enter a password/passphrase. I saw the option to pass a phrase in the mpich installers guide. I eventually found that for that command you needed to use the smpd process manager. That's the only reason I chose smpd over hydra. >>>> As to your other suggestion. I ran ./main and the same error (Can't open /dev/mem...) appeared. sudo ./main works but of course without multiple processes. >>>> Eibhlin >>>> ________________________________________ >>>> From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Pavan Balaji [balaji at mcs.anl.gov] >>>> Sent: 13 June 2013 14:34 >>>> To: discuss at mpich.org >>>> Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem >>>> >>>> I just saw your older email. Why are you using smpd instead of the >>>> default process manager (hydra)? >>>> >>>> -- Pavan >>>> >>>> On 06/13/2013 08:05 AM, Pavan Balaji wrote: >>>>> >>>>> What's "-phrase"? That's not a recognized option. I'm not sure where >>>>> the /dev/mem check is coming from. Try running ~/main without mpiexec >>>>> first. >>>>> >>>>> -- Pavan >>>>> >>>>> On 06/13/2013 06:56 AM, Lee, Eibhlin wrote: >>>>>> Hello all, >>>>>> >>>>>> I am trying to use two raspberry-pi to sample and then process some >>>>>> data. The first process samples while the second processes and vice >>>>>> versa. To do this I use gpio and also mpich-3.0.4 with the process >>>>>> manager smpd. I have successfully run cpi on both machines (from the >>>>>> master machine). I have also managed to run a similar program but >>>>>> without the MPI, this involved compiling with gcc and when running >>>>>> putting sudo in front of the binary file. >>>>>> >>>>>> When I combine these two processes I get various error messages. >>>>>> For input: >>>>>> mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>>>>> the error is: >>>>>> Can't open /dev/mem >>>>>> Did you forget to use 'sudo .. ?' 
>>>>>> >>>>>> For input: >>>>>> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>>>>> the error is: >>>>>> sudo: mpiexec: Command not found >>>>>> >>>>>> I therefore put mpiexec into /usr/bin >>>>>> >>>>>> now for input: >>>>>> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>>>>> the error is: >>>>>> Can't open /dev/mem >>>>>> Did you forget to use 'sudo .. ?' >>>>>> >>>>>> Does anyone know how I can work around this? >>>>>> Thanks, >>>>>> Eibhlin >>>>>> >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> discuss mailing list discuss at mpich.org >>>>>> To manage subscription options or unsubscribe: >>>>>> https://lists.mpich.org/mailman/listinfo/discuss >>>>>> >>>>> >>>> >>>> -- >>>> Pavan Balaji >>>> http://www.mcs.anl.gov/~balaji >>>> _______________________________________________ >>>> discuss mailing list discuss at mpich.org >>>> To manage subscription options or unsubscribe: >>>> https://lists.mpich.org/mailman/listinfo/discuss >>>> _______________________________________________ >>>> discuss mailing list discuss at mpich.org >>>> To manage subscription options or unsubscribe: >>>> https://lists.mpich.org/mailman/listinfo/discuss >>>> >>> >>> -- >>> Pavan Balaji >>> http://www.mcs.anl.gov/~balaji >>> >> >> -- >> Pavan Balaji >> http://www.mcs.anl.gov/~balaji >> > > -- > Pavan Balaji > http://www.mcs.anl.gov/~balaji > -- Pavan Balaji http://www.mcs.anl.gov/~balaji From eibhlin.lee10 at imperial.ac.uk Thu Jun 13 11:59:14 2013 From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin) Date: Thu, 13 Jun 2013 16:59:14 +0000 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: <51B9D939.5040203@ldeo.columbia.edu> References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov>, <51B9CA63.6010100@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk>, <51B9D939.5040203@ldeo.columbia.edu> Message-ID: <2D283C3861654E41AEB39AE4B6767663173A2004@icexch-m3.ic.ac.uk> Gus, I believe your first assumption is correct. Unfortunately it just seemed to hang. I think this might be because each one is being made to have the same rank... It may already be obvious but this is the first time I am using Linux. I had tried sudo $(which mpiexec ....) and sudo $(which mpiexec) ... both without success. Is putting the full path to it similar to/is a symlink? (This still doesn't make main have super user privileges though.) Eibhlin ________________________________________ From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Gus Correa [gus at ldeo.columbia.edu] Sent: 13 June 2013 15:37 To: Discuss Mpich Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem Hi Lee How about replacing "~/main" in the mpiexec command line by one-liner script? Say, "sudo_main.sh", something like this: #! /bin/bash sudo ~/main After all, it is "main" that accesses /dev/mem, and needs "sudo" permissions, not mpiexec, right? [Or do the mpiexec-launched processes inherit the "sudo" stuff from mpiexec?] Not related, but, instead of putting mpiexec in /usr/bin, can't you just use the full path to it? I hope this helps, Gus Correa On 06/13/2013 10:09 AM, Lee, Eibhlin wrote: > Pavan, > I had a lot of trouble getting hydra to work without having to enter a password/passphrase. I saw the option to pass a phrase in the mpich installers guide. I eventually found that for that command you needed to use the smpd process manager. 
That's the only reason I chose smpd over hydra. > As to your other suggestion. I ran ./main and the same error (Can't open /dev/mem...) appeared. sudo ./main works but of course without multiple processes. > Eibhlin > ________________________________________ > From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Pavan Balaji [balaji at mcs.anl.gov] > Sent: 13 June 2013 14:34 > To: discuss at mpich.org > Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem > > I just saw your older email. Why are you using smpd instead of the > default process manager (hydra)? > > -- Pavan > > On 06/13/2013 08:05 AM, Pavan Balaji wrote: >> >> What's "-phrase"? That's not a recognized option. I'm not sure where >> the /dev/mem check is coming from. Try running ~/main without mpiexec >> first. >> >> -- Pavan >> >> On 06/13/2013 06:56 AM, Lee, Eibhlin wrote: >>> Hello all, >>> >>> I am trying to use two raspberry-pi to sample and then process some >>> data. The first process samples while the second processes and vice >>> versa. To do this I use gpio and also mpich-3.0.4 with the process >>> manager smpd. I have successfully run cpi on both machines (from the >>> master machine). I have also managed to run a similar program but >>> without the MPI, this involved compiling with gcc and when running >>> putting sudo in front of the binary file. >>> >>> When I combine these two processes I get various error messages. >>> For input: >>> mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>> the error is: >>> Can't open /dev/mem >>> Did you forget to use 'sudo .. ?' >>> >>> For input: >>> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>> the error is: >>> sudo: mpiexec: Command not found >>> >>> I therefore put mpiexec into /usr/bin >>> >>> now for input: >>> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>> the error is: >>> Can't open /dev/mem >>> Did you forget to use 'sudo .. ?' >>> >>> Does anyone know how I can work around this? 
>>> Thanks, >>> Eibhlin >>> >>> >>> >>> _______________________________________________ >>> discuss mailing list discuss at mpich.org >>> To manage subscription options or unsubscribe: >>> https://lists.mpich.org/mailman/listinfo/discuss >>> >> > > -- > Pavan Balaji > http://www.mcs.anl.gov/~balaji > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss _______________________________________________ discuss mailing list discuss at mpich.org To manage subscription options or unsubscribe: https://lists.mpich.org/mailman/listinfo/discuss From gus at ldeo.columbia.edu Thu Jun 13 15:11:06 2013 From: gus at ldeo.columbia.edu (Gus Correa) Date: Thu, 13 Jun 2013 16:11:06 -0400 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A2004@icexch-m3.ic.ac.uk> References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov>, <51B9CA63.6010100@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk>, <51B9D939.5040203@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A2004@icexch-m3.ic.ac.uk> Message-ID: <51BA275A.90608@ldeo.columbia.edu> Hi Eibhlin On 06/13/2013 12:59 PM, Lee, Eibhlin wrote: > Gus, > I believe your first assumption is correct. Unfortunately it just seemed to hang. I think this might be because each one is being made to have the same rank... Darn! I was afraid that it might give only rank 0 to all MPI processes. So, with the script wrapper the process being launched by mpiexec may indeed be sudo, not the actual mpi executable (main) :( Then it may actually launch a bunch of separate rank 0 replicas of your program, instead of assigning to them different ranks. However, without any output or error message, it is hard to tell. No output at all? No error message, just hangs? Have you tried a verbose flag (-v) to mpiexec? (Not sure if it exists in MPICH mpiexec, you'd need to check.) Would you care to try it with another mpi program, one that doesn't deal with /dev/mem (a risky business), say cpi.c (in the examples directory), or an mpi version of Hello, world, just to see if the mpiexec+sudo_script_wrapper works as expected or if everybody gets rank 0? > It may already be obvious but this is the first time I am using Linux. I had tried sudo $(which mpiexec ....) and sudo $(which mpiexec) ... both without success. "which mpiexec" will return the path to mpiexec, but won't execute it. You could try this (with backquotes): `which mpiexec` -n 2 ~/main On a side note, make sure the mpiexec you're using matches the mpicc/mpif90/MPI library from the MPICH that you used to compile the program. Often times computers have several flavors of MPI installed, and mixing them just doesn't work. > Is putting the full path to it similar to/is a symlink? (This still doesn't make main have super user privileges though.) No, nothing to do with sudo privileges. This suggestion was just to avoid messing up your /usr/bin, which is a directory that despite the somewhat misleading name (/usr, for historical reasons I think), is supposed to hold system (Linux) programs (that users can use), but not user-installed programs. 
Normally things are that are installed in /usr get there via some Linux package manager program (yum, rpm, apt-get, etc), to keep consistency with libraries, etc. I belive MPICH would install by default in /usr/local/ (and put mpiexec in /usr/local/bin), which is kind of a default location for non-system applications. The full path suggestion would be something like: /path/to/where/you/installed/mpiexec -n 2 ~/main However, this won't solve the other problem w.r.t. sudo and /dev/mem. You must know what you are doing, but it made me wonder, even if your program were sequential, why would you want to mess with /dev/mem directly? Just curious about it. Gus Correa > Eibhlin > ________________________________________ > From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Gus Correa [gus at ldeo.columbia.edu] > Sent: 13 June 2013 15:37 > To: Discuss Mpich > Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem > > Hi Lee > > How about replacing "~/main" in the mpiexec command line > by one-liner script? > Say, "sudo_main.sh", something like this: > > #! /bin/bash > sudo ~/main > > After all, it is "main" that accesses /dev/mem, > and needs "sudo" permissions, not mpiexec, right? > [Or do the mpiexec-launched processes inherit > the "sudo" stuff from mpiexec?] > > Not related, but, instead of putting mpiexec in /usr/bin, > can't you just use the full path to it? > > I hope this helps, > Gus Correa > > On 06/13/2013 10:09 AM, Lee, Eibhlin wrote: >> Pavan, >> I had a lot of trouble getting hydra to work without having to enter a password/passphrase. I saw the option to pass a phrase in the mpich installers guide. I eventually found that for that command you needed to use the smpd process manager. That's the only reason I chose smpd over hydra. >> As to your other suggestion. I ran ./main and the same error (Can't open /dev/mem...) appeared. sudo ./main works but of course without multiple processes. >> Eibhlin >> ________________________________________ >> From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Pavan Balaji [balaji at mcs.anl.gov] >> Sent: 13 June 2013 14:34 >> To: discuss at mpich.org >> Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem >> >> I just saw your older email. Why are you using smpd instead of the >> default process manager (hydra)? >> >> -- Pavan >> >> On 06/13/2013 08:05 AM, Pavan Balaji wrote: >>> What's "-phrase"? That's not a recognized option. I'm not sure where >>> the /dev/mem check is coming from. Try running ~/main without mpiexec >>> first. >>> >>> -- Pavan >>> >>> On 06/13/2013 06:56 AM, Lee, Eibhlin wrote: >>>> Hello all, >>>> >>>> I am trying to use two raspberry-pi to sample and then process some >>>> data. The first process samples while the second processes and vice >>>> versa. To do this I use gpio and also mpich-3.0.4 with the process >>>> manager smpd. I have successfully run cpi on both machines (from the >>>> master machine). I have also managed to run a similar program but >>>> without the MPI, this involved compiling with gcc and when running >>>> putting sudo in front of the binary file. >>>> >>>> When I combine these two processes I get various error messages. >>>> For input: >>>> mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>>> the error is: >>>> Can't open /dev/mem >>>> Did you forget to use 'sudo .. ?' 
>>>> >>>> For input: >>>> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>>> the error is: >>>> sudo: mpiexec: Command not found >>>> >>>> I therefore put mpiexec into /usr/bin >>>> >>>> now for input: >>>> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>>> the error is: >>>> Can't open /dev/mem >>>> Did you forget to use 'sudo .. ?' >>>> >>>> Does anyone know how I can work around this? >>>> Thanks, >>>> Eibhlin >>>> >>>> >>>> >>>> _______________________________________________ >>>> discuss mailing list discuss at mpich.org >>>> To manage subscription options or unsubscribe: >>>> https://lists.mpich.org/mailman/listinfo/discuss >>>> >> -- >> Pavan Balaji >> http://www.mcs.anl.gov/~balaji >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss From eibhlin.lee10 at imperial.ac.uk Fri Jun 14 05:20:18 2013 From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin) Date: Fri, 14 Jun 2013 10:20:18 +0000 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: <51BA275A.90608@ldeo.columbia.edu> References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov>, <51B9CA63.6010100@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk>, <51B9D939.5040203@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A2004@icexch-m3.ic.ac.uk>, <51BA275A.90608@ldeo.columbia.edu> Message-ID: <2D283C3861654E41AEB39AE4B6767663173A207D@icexch-m3.ic.ac.uk> Gus, I tried running cpi, as is included in the installation of MPI, on two machines with two processes. The output message confirmed that it had started only 1 process instead of 2. Process 0 of 1 is on raspi pi is approximately... Then it just hung. I think this is because the other machine didn't know where to output the data? When I tried running two processes on the one machine using the wrapper you suggested the output was the same but doubled. It didn't hang. This confirms that every process was started with rank 0. I'm not entirely sure why /dev/mem is needed. I'm working in a group and another member set up io and gpio and it seemed it needed access to /dev/mem I am going to do a strace as suggested by Pavan Balaji to see where it is used and see if I can somehow work around it. Thank you for your help. Eibhlin ________________________________________ From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Gus Correa [gus at ldeo.columbia.edu] Sent: 13 June 2013 21:11 To: Discuss Mpich Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem Hi Eibhlin On 06/13/2013 12:59 PM, Lee, Eibhlin wrote: > Gus, > I believe your first assumption is correct. Unfortunately it just seemed to hang. I think this might be because each one is being made to have the same rank... Darn! 
I was afraid that it might give only rank 0 to all MPI processes. So, with the script wrapper the process being launched by mpiexec may indeed be sudo, not the actual mpi executable (main) :( Then it may actually launch a bunch of separate rank 0 replicas of your program, instead of assigning to them different ranks. However, without any output or error message, it is hard to tell. No output at all? No error message, just hangs? Have you tried a verbose flag (-v) to mpiexec? (Not sure if it exists in MPICH mpiexec, you'd need to check.) Would you care to try it with another mpi program, one that doesn't deal with /dev/mem (a risky business), say cpi.c (in the examples directory), or an mpi version of Hello, world, just to see if the mpiexec+sudo_script_wrapper works as expected or if everybody gets rank 0? > It may already be obvious but this is the first time I am using Linux. I had tried sudo $(which mpiexec ....) and sudo $(which mpiexec) ... both without success. "which mpiexec" will return the path to mpiexec, but won't execute it. You could try this (with backquotes): `which mpiexec` -n 2 ~/main On a side note, make sure the mpiexec you're using matches the mpicc/mpif90/MPI library from the MPICH that you used to compile the program. Often times computers have several flavors of MPI installed, and mixing them just doesn't work. > Is putting the full path to it similar to/is a symlink? (This still doesn't make main have super user privileges though.) No, nothing to do with sudo privileges. This suggestion was just to avoid messing up your /usr/bin, which is a directory that despite the somewhat misleading name (/usr, for historical reasons I think), is supposed to hold system (Linux) programs (that users can use), but not user-installed programs. Normally things are that are installed in /usr get there via some Linux package manager program (yum, rpm, apt-get, etc), to keep consistency with libraries, etc. I belive MPICH would install by default in /usr/local/ (and put mpiexec in /usr/local/bin), which is kind of a default location for non-system applications. The full path suggestion would be something like: /path/to/where/you/installed/mpiexec -n 2 ~/main However, this won't solve the other problem w.r.t. sudo and /dev/mem. You must know what you are doing, but it made me wonder, even if your program were sequential, why would you want to mess with /dev/mem directly? Just curious about it. Gus Correa > Eibhlin > ________________________________________ > From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Gus Correa [gus at ldeo.columbia.edu] > Sent: 13 June 2013 15:37 > To: Discuss Mpich > Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem > > Hi Lee > > How about replacing "~/main" in the mpiexec command line > by one-liner script? > Say, "sudo_main.sh", something like this: > > #! /bin/bash > sudo ~/main > > After all, it is "main" that accesses /dev/mem, > and needs "sudo" permissions, not mpiexec, right? > [Or do the mpiexec-launched processes inherit > the "sudo" stuff from mpiexec?] > > Not related, but, instead of putting mpiexec in /usr/bin, > can't you just use the full path to it? > > I hope this helps, > Gus Correa > > On 06/13/2013 10:09 AM, Lee, Eibhlin wrote: >> Pavan, >> I had a lot of trouble getting hydra to work without having to enter a password/passphrase. I saw the option to pass a phrase in the mpich installers guide. I eventually found that for that command you needed to use the smpd process manager. 
That's the only reason I chose smpd over hydra. >> As to your other suggestion. I ran ./main and the same error (Can't open /dev/mem...) appeared. sudo ./main works but of course without multiple processes. >> Eibhlin >> ________________________________________ >> From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Pavan Balaji [balaji at mcs.anl.gov] >> Sent: 13 June 2013 14:34 >> To: discuss at mpich.org >> Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem >> >> I just saw your older email. Why are you using smpd instead of the >> default process manager (hydra)? >> >> -- Pavan >> >> On 06/13/2013 08:05 AM, Pavan Balaji wrote: >>> What's "-phrase"? That's not a recognized option. I'm not sure where >>> the /dev/mem check is coming from. Try running ~/main without mpiexec >>> first. >>> >>> -- Pavan >>> >>> On 06/13/2013 06:56 AM, Lee, Eibhlin wrote: >>>> Hello all, >>>> >>>> I am trying to use two raspberry-pi to sample and then process some >>>> data. The first process samples while the second processes and vice >>>> versa. To do this I use gpio and also mpich-3.0.4 with the process >>>> manager smpd. I have successfully run cpi on both machines (from the >>>> master machine). I have also managed to run a similar program but >>>> without the MPI, this involved compiling with gcc and when running >>>> putting sudo in front of the binary file. >>>> >>>> When I combine these two processes I get various error messages. >>>> For input: >>>> mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>>> the error is: >>>> Can't open /dev/mem >>>> Did you forget to use 'sudo .. ?' >>>> >>>> For input: >>>> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>>> the error is: >>>> sudo: mpiexec: Command not found >>>> >>>> I therefore put mpiexec into /usr/bin >>>> >>>> now for input: >>>> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>>> the error is: >>>> Can't open /dev/mem >>>> Did you forget to use 'sudo .. ?' >>>> >>>> Does anyone know how I can work around this? 
>>>> Thanks, >>>> Eibhlin >>>> >>>> >>>> >>>> _______________________________________________ >>>> discuss mailing list discuss at mpich.org >>>> To manage subscription options or unsubscribe: >>>> https://lists.mpich.org/mailman/listinfo/discuss >>>> >> -- >> Pavan Balaji >> http://www.mcs.anl.gov/~balaji >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss _______________________________________________ discuss mailing list discuss at mpich.org To manage subscription options or unsubscribe: https://lists.mpich.org/mailman/listinfo/discuss From eibhlin.lee10 at imperial.ac.uk Fri Jun 14 06:27:25 2013 From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin) Date: Fri, 14 Jun 2013 11:27:25 +0000 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A207D@icexch-m3.ic.ac.uk> References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov>, <51B9CA63.6010100@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk>, <51B9D939.5040203@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A2004@icexch-m3.ic.ac.uk>, <51BA275A.90608@ldeo.columbia.edu>, <2D283C3861654E41AEB39AE4B6767663173A207D@icexch-m3.ic.ac.uk> Message-ID: <2D283C3861654E41AEB39AE4B6767663173A20B8@icexch-m3.ic.ac.uk> I found that the reason we want to access /dev/mem is to setup memory regions to access the peripherals. (We are trying to read the output of an ADC). At this point it becomes more a linux/raspberry-pi specific problem than an MPICH problem. Although the fact that you can't run a program that needs access to memory mapping (even as the root user) seems something that MPICH could improve on for future versions. I know I am using smpd instead of hydra so this problem may already be solved. But if someone could confirm that, it would be really helpful. ________________________________________ From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Lee, Eibhlin [eibhlin.lee10 at imperial.ac.uk] Sent: 14 June 2013 11:20 To: discuss at mpich.org Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem Gus, I tried running cpi, as is included in the installation of MPI, on two machines with two processes. The output message confirmed that it had started only 1 process instead of 2. Process 0 of 1 is on raspi pi is approximately... Then it just hung. I think this is because the other machine didn't know where to output the data? When I tried running two processes on the one machine using the wrapper you suggested the output was the same but doubled. It didn't hang. This confirms that every process was started with rank 0. I'm not entirely sure why /dev/mem is needed. 
I'm working in a group and another member set up io and gpio and it seemed it needed access to /dev/mem I am going to do a strace as suggested by Pavan Balaji to see where it is used and see if I can somehow work around it. Thank you for your help. Eibhlin ________________________________________ From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Gus Correa [gus at ldeo.columbia.edu] Sent: 13 June 2013 21:11 To: Discuss Mpich Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem Hi Eibhlin On 06/13/2013 12:59 PM, Lee, Eibhlin wrote: > Gus, > I believe your first assumption is correct. Unfortunately it just seemed to hang. I think this might be because each one is being made to have the same rank... Darn! I was afraid that it might give only rank 0 to all MPI processes. So, with the script wrapper the process being launched by mpiexec may indeed be sudo, not the actual mpi executable (main) :( Then it may actually launch a bunch of separate rank 0 replicas of your program, instead of assigning to them different ranks. However, without any output or error message, it is hard to tell. No output at all? No error message, just hangs? Have you tried a verbose flag (-v) to mpiexec? (Not sure if it exists in MPICH mpiexec, you'd need to check.) Would you care to try it with another mpi program, one that doesn't deal with /dev/mem (a risky business), say cpi.c (in the examples directory), or an mpi version of Hello, world, just to see if the mpiexec+sudo_script_wrapper works as expected or if everybody gets rank 0? > It may already be obvious but this is the first time I am using Linux. I had tried sudo $(which mpiexec ....) and sudo $(which mpiexec) ... both without success. "which mpiexec" will return the path to mpiexec, but won't execute it. You could try this (with backquotes): `which mpiexec` -n 2 ~/main On a side note, make sure the mpiexec you're using matches the mpicc/mpif90/MPI library from the MPICH that you used to compile the program. Often times computers have several flavors of MPI installed, and mixing them just doesn't work. > Is putting the full path to it similar to/is a symlink? (This still doesn't make main have super user privileges though.) No, nothing to do with sudo privileges. This suggestion was just to avoid messing up your /usr/bin, which is a directory that despite the somewhat misleading name (/usr, for historical reasons I think), is supposed to hold system (Linux) programs (that users can use), but not user-installed programs. Normally things are that are installed in /usr get there via some Linux package manager program (yum, rpm, apt-get, etc), to keep consistency with libraries, etc. I belive MPICH would install by default in /usr/local/ (and put mpiexec in /usr/local/bin), which is kind of a default location for non-system applications. The full path suggestion would be something like: /path/to/where/you/installed/mpiexec -n 2 ~/main However, this won't solve the other problem w.r.t. sudo and /dev/mem. You must know what you are doing, but it made me wonder, even if your program were sequential, why would you want to mess with /dev/mem directly? Just curious about it. 
Gus Correa > Eibhlin > ________________________________________ > From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Gus Correa [gus at ldeo.columbia.edu] > Sent: 13 June 2013 15:37 > To: Discuss Mpich > Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem > > Hi Lee > > How about replacing "~/main" in the mpiexec command line > by one-liner script? > Say, "sudo_main.sh", something like this: > > #! /bin/bash > sudo ~/main > > After all, it is "main" that accesses /dev/mem, > and needs "sudo" permissions, not mpiexec, right? > [Or do the mpiexec-launched processes inherit > the "sudo" stuff from mpiexec?] > > Not related, but, instead of putting mpiexec in /usr/bin, > can't you just use the full path to it? > > I hope this helps, > Gus Correa > > On 06/13/2013 10:09 AM, Lee, Eibhlin wrote: >> Pavan, >> I had a lot of trouble getting hydra to work without having to enter a password/passphrase. I saw the option to pass a phrase in the mpich installers guide. I eventually found that for that command you needed to use the smpd process manager. That's the only reason I chose smpd over hydra. >> As to your other suggestion. I ran ./main and the same error (Can't open /dev/mem...) appeared. sudo ./main works but of course without multiple processes. >> Eibhlin >> ________________________________________ >> From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Pavan Balaji [balaji at mcs.anl.gov] >> Sent: 13 June 2013 14:34 >> To: discuss at mpich.org >> Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem >> >> I just saw your older email. Why are you using smpd instead of the >> default process manager (hydra)? >> >> -- Pavan >> >> On 06/13/2013 08:05 AM, Pavan Balaji wrote: >>> What's "-phrase"? That's not a recognized option. I'm not sure where >>> the /dev/mem check is coming from. Try running ~/main without mpiexec >>> first. >>> >>> -- Pavan >>> >>> On 06/13/2013 06:56 AM, Lee, Eibhlin wrote: >>>> Hello all, >>>> >>>> I am trying to use two raspberry-pi to sample and then process some >>>> data. The first process samples while the second processes and vice >>>> versa. To do this I use gpio and also mpich-3.0.4 with the process >>>> manager smpd. I have successfully run cpi on both machines (from the >>>> master machine). I have also managed to run a similar program but >>>> without the MPI, this involved compiling with gcc and when running >>>> putting sudo in front of the binary file. >>>> >>>> When I combine these two processes I get various error messages. >>>> For input: >>>> mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>>> the error is: >>>> Can't open /dev/mem >>>> Did you forget to use 'sudo .. ?' >>>> >>>> For input: >>>> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>>> the error is: >>>> sudo: mpiexec: Command not found >>>> >>>> I therefore put mpiexec into /usr/bin >>>> >>>> now for input: >>>> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main >>>> the error is: >>>> Can't open /dev/mem >>>> Did you forget to use 'sudo .. ?' >>>> >>>> Does anyone know how I can work around this? 
>>>> Thanks, >>>> Eibhlin >>>> >>>> >>>> >>>> _______________________________________________ >>>> discuss mailing list discuss at mpich.org >>>> To manage subscription options or unsubscribe: >>>> https://lists.mpich.org/mailman/listinfo/discuss >>>> >> -- >> Pavan Balaji >> http://www.mcs.anl.gov/~balaji >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss _______________________________________________ discuss mailing list discuss at mpich.org To manage subscription options or unsubscribe: https://lists.mpich.org/mailman/listinfo/discuss _______________________________________________ discuss mailing list discuss at mpich.org To manage subscription options or unsubscribe: https://lists.mpich.org/mailman/listinfo/discuss From apenya at mcs.anl.gov Fri Jun 14 10:10:38 2013 From: apenya at mcs.anl.gov (Antonio =?ISO-8859-1?Q?J=2E_Pe=F1a?=) Date: Fri, 14 Jun 2013 10:10:38 -0500 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A20B8@icexch-m3.ic.ac.uk> References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <2D283C3861654E41AEB39AE4B6767663173A207D@icexch-m3.ic.ac.uk> <2D283C3861654E41AEB39AE4B6767663173A20B8@icexch-m3.ic.ac.uk> Message-ID: <6323101.aPdelFVpQ1@localhost.localdomain> Hi Eibhlin, If you share a piece of code to test your issue, I can run it for you using hydra to check if that solves it. Antonio On Friday, June 14, 2013 11:27:25 AM Lee, Eibhlin wrote: > I found that the reason we want to access /dev/mem is to setup memory > regions to access the peripherals. (We are trying to read the output of an > ADC). At this point it becomes more a linux/raspberry-pi specific problem > than an MPICH problem. Although the fact that you can't run a program that > needs access to memory mapping (even as the root user) seems something that > MPICH could improve on for future versions. I know I am using smpd instead > of hydra so this problem may already be solved. But if someone could > confirm that, it would be really helpful. > ________________________________________ > From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of > Lee, Eibhlin [eibhlin.lee10 at imperial.ac.uk] Sent: 14 June 2013 11:20 > To: discuss at mpich.org > Subject: Re: [mpich-discuss] Running an mpi program that needs to > access /dev/mem > > Gus, > I tried running cpi, as is included in the installation of MPI, on two > machines with two processes. The output message confirmed that it had > started only 1 process instead of 2. Process 0 of 1 is on raspi > pi is approximately... > > Then it just hung. I think this is because the other machine didn't know > where to output the data? 
>
> When I tried running two processes on the one machine using the wrapper you
> suggested the output was the same but doubled. It didn't hang. This
> confirms that every process was started with rank 0.
>
> I'm not entirely sure why /dev/mem is needed. I'm working in a group and
> another member set up io and gpio and it seemed it needed access to
> /dev/mem. I am going to do a strace as suggested by Pavan Balaji to see
> where it is used and see if I can somehow work around it.
>
> Thank you for your help.
> Eibhlin
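One plausible explanation for the rank-0 symptom quoted above: mpiexec hands
each process its rank through PMI_* environment variables (the exact names
vary by process manager), and a plain sudo wrapper strips the environment
before starting the real executable, so every rank initializes as a
standalone rank-0 process. A sketch of a wrapper that preserves the
environment, assuming the sudoers policy permits -E and that the process
manager does not also need an open file descriptor to survive the sudo call:

#! /bin/bash
# preserve the PMI_* variables that mpiexec set for this process
exec sudo -E ~/main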
_______________________________________________
discuss mailing list discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss

From eibhlin.lee10 at imperial.ac.uk Fri Jun 14 11:18:06 2013
From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin)
Date: Fri, 14 Jun 2013 16:18:06 +0000
Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem
In-Reply-To: <6323101.aPdelFVpQ1@localhost.localdomain>
References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <2D283C3861654E41AEB39AE4B6767663173A207D@icexch-m3.ic.ac.uk> <2D283C3861654E41AEB39AE4B6767663173A20B8@icexch-m3.ic.ac.uk>, <6323101.aPdelFVpQ1@localhost.localdomain>
Message-ID: <2D283C3861654E41AEB39AE4B6767663173A213D@icexch-m3.ic.ac.uk>

Thank you Antonio.

http://www.skpang.co.uk/blog/archives/615 If you follow the steps to
download adc, you will get 4 files that need to be included in the
compilation: gb_common.h, gb_spi.h, gb_common.c and gb_spi.c. I have
included the files here as well.

I compile them using
mpicc example_for_MPICH.c gb_common.c gb_spi.c examp -lm -Lbbcm2835-1.25/src -lbcm2835
and execute in the standard manner for smpd, but this will be different in
hydra. I normally run on 2 machines and have 2 processes started.

If you could please reply with the output in full, that would help me.
(I do NEED /dev/mem; it is directly accessed in gb_common.h.)

Thanks,
Eibhlin
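The compile line above looks garbled: it is presumably missing the -o flag,
and -Lbbcm2835-1.25/src looks like a typo for the bcm2835 library directory.
The intended command was probably something like:

mpicc example_for_MPICH.c gb_common.c gb_spi.c -o examp -lm -Lbcm2835-1.25/src -lbcm2835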
________________________________________
From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Antonio J. Peña [apenya at mcs.anl.gov]
Sent: 14 June 2013 16:10
To: discuss at mpich.org
Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem

Hi Eibhlin,

If you share a piece of code to test your issue, I can run it for you using
hydra to check if that solves it.

Antonio

_______________________________________________
discuss mailing list discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: gb_common.c
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: gb_common.h
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: gb_spi.c
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: gb_spi.h
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: example_for_MPICH.c
URL:

From apenya at mcs.anl.gov Fri Jun 14 11:57:30 2013
From: apenya at mcs.anl.gov (Antonio =?ISO-8859-1?Q?J=2E_Pe=F1a?=)
Date: Fri, 14 Jun 2013 11:57:30 -0500
Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem
In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A213D@icexch-m3.ic.ac.uk>
References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <6323101.aPdelFVpQ1@localhost.localdomain> <2D283C3861654E41AEB39AE4B6767663173A213D@icexch-m3.ic.ac.uk>
Message-ID: <5577395.gWWW778rnd@localhost.localdomain>

I've created ticket #1885 to track this issue. Here is the URL:

http://trac.mpich.org/projects/mpich/ticket/1885

Antonio

On Friday, June 14, 2013 04:18:06 PM Lee, Eibhlin wrote:
> Thank you Antonio.
>
> http://www.skpang.co.uk/blog/archives/615 If you follow the steps to
> download adc you will get 4 files that need to be included in compilation:
> gb_common.h gb_spi.h gb_common.c and gb_spi.c I have included the files
> here as well.
>
> I compile them using
> mpicc example_for_MPICH.c gb_common.c gb_spi.c examp -lm -Lbbcm2835-1.25/src
> -lbcm2835 and execute in the standard manner for smpd but this will be
> different in hydra. I normally run on 2 machines and have 2 processes
> started.
_______________________________________________
discuss mailing list discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss

From balaji at mcs.anl.gov Fri Jun 14 13:24:39 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Fri, 14 Jun 2013 13:24:39 -0500
Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem
In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A20B8@icexch-m3.ic.ac.uk>
References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov> <51B9CA63.6010100@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk> <51B9D939.5040203@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A2004@icexch-m3.ic.ac.uk> <51BA275A.90608@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A207D@icexch-m3.ic.ac.uk> <2D283C3861654E41AEB39AE4B6767663173A20B8@icexch-m3.ic.ac.uk>
Message-ID: <51BB5FE7.2030107@mcs.anl.gov>

You can run mpich as root. There's no restriction on that. You still
haven't tried out my suggestion of running "id" to check what user ID you
are running your processes as. My guess is that you are not setting your
user ID correctly.

 -- Pavan
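A minimal way to run the check Pavan suggests, assuming Hydra's mpiexec and
the machinefile used earlier in the thread (Hydra also exports PMI_RANK to
each launched process, which lets you see the rank assignment at the same
time):

mpiexec -f machinefile -n 2 id
mpiexec -f machinefile -n 2 sh -c 'echo "rank $PMI_RANK: $(id)"'

If the ranks are meant to run as root but id reports uid=1000, the sudo
step is not reaching the launched processes.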
On 06/14/2013 06:27 AM, Lee, Eibhlin wrote:
> I found that the reason we want to access /dev/mem is to setup memory regions to access the peripherals. (We are trying to read the output of an ADC). At this point it becomes more a linux/raspberry-pi specific problem than an MPICH problem.

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji

_______________________________________________
discuss mailing list discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss

From james.dinan at gmail.com Fri Jun 14 15:31:52 2013
From: james.dinan at gmail.com (Jim Dinan)
Date: Fri, 14 Jun 2013 15:31:52 -0500
Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem
In-Reply-To: <51BB5FE7.2030107@mcs.anl.gov>
References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov> <51B9CA63.6010100@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk> <51B9D939.5040203@ldeo.columbia.edu>
<2D283C3861654E41AEB39AE4B6767663173A2004@icexch-m3.ic.ac.uk> <51BA275A.90608@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A207D@icexch-m3.ic.ac.uk> <2D283C3861654E41AEB39AE4B6767663173A20B8@icexch-m3.ic.ac.uk> <51BB5FE7.2030107@mcs.anl.gov>
Message-ID:

I don't know if this has been suggested, but you could also add your user
to the kmem group and chmod /dev/mem so that you have the access you need.

 ~Jim.

On Fri, Jun 14, 2013 at 1:24 PM, Pavan Balaji wrote:

> You can run mpich as root. There's no restriction on that. You still
> haven't tried out my suggestion of running "id" to check what user ID you
> are running your processes as. My guess is that you are not setting your
> user ID correctly.
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji
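A sketch of the change Jim suggests, assuming the stock Debian/Raspbian
setup where /dev/mem is owned by root:kmem and the user is pi:

sudo usermod -a -G kmem pi    # add the user to the kmem group
sudo chmod g+rw /dev/mem      # the group needs write access, not just
                              # read, if the program mmaps registers
                              # for read-write access
# group membership only takes effect in new login sessions, so log out
# and back in on every node before retrying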
_______________________________________________
discuss mailing list discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss

From johnd9886 at gmail.com Fri Jun 14 15:58:24 2013
From: johnd9886 at gmail.com (john donald)
Date: Fri, 14 Jun 2013 22:58:24 +0200
Subject: [mpich-discuss] ckpoint-num error
In-Reply-To: <10C1BBC0-A308-4B46-9BFC-AAF411F0BAD1@mcs.anl.gov>
References: <88F47ECA-32D7-4A78-85AD-E6E69D74CC06@mcs.anl.gov> <089CF900-3487-42BF-91EF-57984AE3943D@mcs.anl.gov> <10C1BBC0-A308-4B46-9BFC-AAF411F0BAD1@mcs.anl.gov>
Message-ID:

The file size is 121.6 MB after I raised the interval to 20 sec. My test
application is an MPI/C integer sort program with 5000 iterations. Sorry
for the trivial question, but how do I know whether the checkpoint file is
empty or not? How can I open it?
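A quick way to check that, assuming the checkpoint prefix
/home/john/ckpts/app.ckpoint and the context-num file names from the
original report:

ls -lh /home/john/ckpts/                  # non-trivial sizes mean data was written
file /home/john/ckpts/context-num0-0-0    # identifies the file type

BLCR context files are binary process images, so they are not meant to be
opened in an editor; ls -lh (or du -h) is enough to tell whether they are
empty, and a 121.6 MB file is clearly not.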
2013/6/11 Wesley Bland

> Did you check if there's actually anything in the checkpoint files? If
> they're empty, that probably means that you're checkpointing too
> frequently.

_______________________________________________
discuss mailing list discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss

From sniu at hawk.iit.edu Fri Jun 14 16:35:39 2013
From: sniu at hawk.iit.edu (Sufeng Niu)
Date: Fri, 14 Jun 2013 16:35:39 -0500
Subject: [mpich-discuss] MPI server setup issue
Message-ID:

Hello,

I am a beginner at MPI programming, and right now I am working on an MPI
project. I have a few questions about implementation issues:

1. When I run a simple MPI hello world on multiple nodes (I already
installed the mpich3 library on the master node, mounted the NFS, shared
the executable file and MPI library, and set the slave node up for keyless
ssh), the program stops immediately with:

bash: /mnt/mpi/mpich-install/bin/hydra_pmi_proxy: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory

I have not been able to get rid of this for a long time, even after
resetting everything (I already added PATH=/mnt/mpi/mpich-install/bin:$PATH
in .bash_profile). Do you have any clues about this problem?

2. Each of the servers has a 10G ethernet card; for example, one card's
address is eth5: 10.0.5.55. If I want to launch MPI communication through
the 10G network card, should I set the hostfile as
10.0.5.55:$(PROCESS_NUM), or use iface eth5?

Thanks a lot!

--
Best Regards,
Sufeng Niu
ECASP lab, ECE department, Illinois Institute of Technology
Tel: 312-731-7219
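Two quick checks for the questions above, a sketch assuming Hydra's mpiexec
(host names and process counts are placeholders):

# 1. "bad ELF interpreter: /lib/ld-linux.so.2" usually means a 32-bit
#    binary is being started on a 64-bit node that lacks the 32-bit
#    loader; this shows which kind of build is on the shared mount:
file /mnt/mpi/mpich-install/bin/hydra_pmi_proxy

# 2. Hydra selects the network interface with -iface, so the host file
#    can keep the usual name:process-count form:
#      node01:2
#      node02:2
mpiexec -f hostfile -iface eth5 -n 4 ./hello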
From sniu at hawk.iit.edu  Fri Jun 14 16:35:39 2013
From: sniu at hawk.iit.edu (Sufeng Niu)
Date: Fri, 14 Jun 2013 16:35:39 -0500
Subject: [mpich-discuss] MPI server setup issue
Message-ID:

Hello,

I am a beginner in MPI programming, and right now I am working on an MPI
project. I have a few implementation questions:

1. When I run a simple MPI hello world on multiple nodes (I have installed
the MPICH 3 library on the master node, mounted the NFS share, shared the
executable and the MPI library, and set up keyless ssh to the slave node),
the program stops with:

  bash: /mnt/mpi/mpich-install/bin/hydra_pmi_proxy: /lib/ld-linux.so.2:
  bad ELF interpreter: No such file or directory.

I have not been able to get rid of this for a long time, even after
resetting everything (I already added PATH=/mnt/mpi/mpich-install/bin:$PATH
in .bash_profile). Do you have any clues about this problem?

2. My servers each have a 10G Ethernet card; for example, one card's
address is eth5: 10.0.5.55. If I want MPI communication to go through the
10G card, should I set the hostfile as 10.0.5.55:$(PROCESS_NUM), or use
-iface eth5?

Thanks a lot!

-- 
Best Regards,
Sufeng Niu
ECASP lab, ECE department, Illinois Institute of Technology
Tel: 312-731-7219
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
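The "bad ELF interpreter" message usually means the binary asks for a
loader that isn't installed. /lib/ld-linux.so.2 is the 32-bit loader, so a
common cause is a 32-bit hydra_pmi_proxy on a 64-bit-only node. A quick
diagnosis sketch (not confirmed for this system; the path is taken from the
error message above):

    # Check the proxy's architecture against the node's:
    file /mnt/mpi/mpich-install/bin/hydra_pmi_proxy   # ELF 32-bit vs. ELF 64-bit
    uname -m                                          # x86_64, i686, ...
    # The 32-bit loader named in the error; often absent on 64-bit-only installs:
    ls -l /lib/ld-linux.so.2 /lib64/ld-linux-x86-64.so.2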
From eibhlin.lee10 at imperial.ac.uk  Fri Jun 14 16:43:02 2013
From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin)
Date: Fri, 14 Jun 2013 21:43:02 +0000
Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem
In-Reply-To:
Message-ID: <2D283C3861654E41AEB39AE4B6767663173A21B1@icexch-m3.ic.ac.uk>

Pavan,
Sorry, when I do run mpiexec id the output is

  uid=1000(pi) gid=1000(pi)
  groups=1000(pi),4(adm),20(dialout),24(cdrom),27(sudo),29(audio),44(video),46(plugdev),60(games),100(users),105(netdev),999(input)

regardless of whether I'm root or my usual user (root at raspi or
pi at raspi). Is this output what you would expect?

Jim,
I have tried changing the permissions of /dev/mem with

  chmod 755 /dev/mem

so that the output of ls -l /dev/mem is

  crwxr-xr-x 1 root kmem 1, 1 Jan  1  1970 /dev/mem

but I still can't open /dev/mem inside my program. I also tried mode 777.

I tried adding my user to the kmem group with

  usermod -a -G kmem pi

but this doesn't fix the problem either.

Have I gotten totally confused, and pi isn't my user?

Thank you in advance,
Eibhlin
________________________________
From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Jim Dinan [james.dinan at gmail.com]
Sent: 14 June 2013 21:31
To: discuss at mpich.org
Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem

I don't know if this has been suggested, but you could also add your user
to the kmem group and chmod /dev/mem so that you have the access you need.

~Jim.

On Fri, Jun 14, 2013 at 1:24 PM, Pavan Balaji wrote:

You can run mpich as root. There's no restriction on that. You still
haven't tried out my suggestion of running "id" to check what user ID you
are running your processes as. My guess is that you are not setting your
user ID correctly.

 -- Pavan

On 06/14/2013 06:27 AM, Lee, Eibhlin wrote:

I found that the reason we want to access /dev/mem is to set up memory
regions to access the peripherals. (We are trying to read the output of an
ADC.) At this point it becomes more of a Linux/Raspberry Pi specific
problem than an MPICH problem, although the fact that you can't run a
program that needs access to memory mapping (even as the root user) seems
something that MPICH could improve on in future versions. I know I am
using smpd instead of hydra, so this problem may already be solved; if
someone could confirm that, it would be really helpful.
________________________________________
From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Lee, Eibhlin [eibhlin.lee10 at imperial.ac.uk]
Sent: 14 June 2013 11:20
To: discuss at mpich.org
Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem

Gus,
I tried running cpi, as included in the MPI installation, on two machines
with two processes. The output confirmed that it had started only 1
process instead of 2:

  Process 0 of 1 is on raspi
  pi is approximately...

Then it just hung. I think this is because the other machine didn't know
where to output the data?

When I tried running two processes on the one machine using the wrapper
you suggested, the output was the same but doubled. It didn't hang. This
confirms that every process was started with rank 0.

I'm not entirely sure why /dev/mem is needed. I'm working in a group, and
another member set up io and gpio, and it seemed to need access to
/dev/mem. I am going to do an strace, as suggested by Pavan Balaji, to see
where it is used and whether I can somehow work around it.

Thank you for your help.
Eibhlin
________________________________________
From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Gus Correa [gus at ldeo.columbia.edu]
Sent: 13 June 2013 21:11
To: Discuss Mpich
Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem

Hi Eibhlin

On 06/13/2013 12:59 PM, Lee, Eibhlin wrote:

Gus,
I believe your first assumption is correct. Unfortunately it just seemed
to hang. I think this might be because each one is being made to have the
same rank...

Darn! I was afraid that it might give only rank 0 to all MPI processes.
So, with the script wrapper, the process being launched by mpiexec may
indeed be sudo, not the actual mpi executable (main) :(
Then it may actually launch a bunch of separate rank 0 replicas of your
program, instead of assigning different ranks to them.
However, without any output or error message, it is hard to tell.

No output at all? No error message, it just hangs?
Have you tried a verbose flag (-v) to mpiexec?
(Not sure if it exists in MPICH mpiexec, you'd need to check.)

Would you care to try it with another mpi program, one that doesn't deal
with /dev/mem (a risky business), say cpi.c (in the examples directory) or
an mpi version of "Hello, world", just to see if the
mpiexec+sudo_script_wrapper works as expected or if everybody gets rank 0?

It may already be obvious, but this is the first time I am using Linux. I
had tried sudo $(which mpiexec ....) and sudo $(which mpiexec) ... both
without success.

"which mpiexec" will return the path to mpiexec, but won't execute it.

You could try this (with backquotes):

  `which mpiexec` -n 2 ~/main

On a side note, make sure the mpiexec you're using matches the
mpicc/mpif90/MPI library from the MPICH that you used to compile the
program. Oftentimes computers have several flavors of MPI installed, and
mixing them just doesn't work.

Is putting the full path to it similar to/is a symlink? (This still
doesn't make main have super user privileges though.)

No, nothing to do with sudo privileges.

This suggestion was just to avoid messing up your /usr/bin, which is a
directory that, despite the somewhat misleading name (/usr, for historical
reasons I think), is supposed to hold system (Linux) programs (that users
can use), but not user-installed programs. Normally, things that are
installed in /usr get there via some Linux package manager (yum, rpm,
apt-get, etc.), to keep consistency with libraries, etc.

I believe MPICH would install by default in /usr/local/ (and put mpiexec
in /usr/local/bin), which is kind of a default location for non-system
applications.

The full path suggestion would be something like:

  /path/to/where/you/installed/mpiexec -n 2 ~/main

However, this won't solve the other problem w.r.t. sudo and /dev/mem.

You must know what you are doing, but it made me wonder: even if your
program were sequential, why would you want to mess with /dev/mem
directly? Just curious about it.
Gus Correa

Eibhlin
________________________________________
From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Gus Correa [gus at ldeo.columbia.edu]
Sent: 13 June 2013 15:37
To: Discuss Mpich
Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem

Hi Lee

How about replacing "~/main" in the mpiexec command line with a one-liner
script? Say, "sudo_main.sh", something like this:

  #! /bin/bash
  sudo ~/main

After all, it is "main" that accesses /dev/mem and needs "sudo"
permissions, not mpiexec, right? [Or do the mpiexec-launched processes
inherit the "sudo" stuff from mpiexec?]

Not related, but instead of putting mpiexec in /usr/bin, can't you just
use the full path to it?

I hope this helps,
Gus Correa

On 06/13/2013 10:09 AM, Lee, Eibhlin wrote:

Pavan,
I had a lot of trouble getting hydra to work without having to enter a
password/passphrase. I saw the option to pass a phrase in the mpich
installer's guide. I eventually found that for that command you needed to
use the smpd process manager. That's the only reason I chose smpd over
hydra.
As to your other suggestion: I ran ./main and the same error ("Can't open
/dev/mem...") appeared. sudo ./main works, but of course without multiple
processes.
Eibhlin
________________________________________
From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Pavan Balaji [balaji at mcs.anl.gov]
Sent: 13 June 2013 14:34
To: discuss at mpich.org
Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem

I just saw your older email. Why are you using smpd instead of the default
process manager (hydra)?

 -- Pavan

On 06/13/2013 08:05 AM, Pavan Balaji wrote:

What's "-phrase"? That's not a recognized option. I'm not sure where the
/dev/mem check is coming from. Try running ~/main without mpiexec first.

 -- Pavan

On 06/13/2013 06:56 AM, Lee, Eibhlin wrote:

Hello all,

I am trying to use two Raspberry Pis to sample and then process some data.
The first process samples while the second processes, and vice versa. To
do this I use gpio and also mpich-3.0.4 with the process manager smpd. I
have successfully run cpi on both machines (from the master machine). I
have also managed to run a similar program but without the MPI; this
involved compiling with gcc and, when running, putting sudo in front of
the binary file.

When I combine these two processes I get various error messages.
For input:
  mpiexec -phrase cat -machinefile machinefile -n 2 ~/main
the error is:
  Can't open /dev/mem
  Did you forget to use 'sudo .. ?'

For input:
  sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main
the error is:
  sudo: mpiexec: Command not found

I therefore put mpiexec into /usr/bin.

Now, for input:
  sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main
the error is:
  Can't open /dev/mem
  Did you forget to use 'sudo .. ?'

Does anyone know how I can work around this?
Thanks,
Eibhlin

_______________________________________________
discuss mailing list discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
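One hedged guess about why the sudo wrapper in the thread above gave every
process rank 0: sudo resets the environment by default, so the PMI
variables the process manager places in each process's environment may
never reach ~/main, and each copy falls back to a singleton MPI_Init. This
is an assumption, not something confirmed in the thread; if it is the
cause, a wrapper that preserves the environment would be one experiment to
try:

    #! /bin/bash
    # sudo_main.sh -- variant of Gus's wrapper. ASSUMPTION: the rank-0
    # symptom comes from sudo scrubbing the launcher's PMI_* variables.
    # -E asks sudo to keep the caller's environment; the sudoers policy
    # must permit this (e.g. a SETENV tag on the command).
    exec sudo -E ~/main "$@"

A hypothetical sudoers line such as
"pi ALL=(root) NOPASSWD:SETENV: /home/pi/main" would also remove the
password prompt; both the path and the user here are illustrative.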
From apenya at mcs.anl.gov  Fri Jun 14 16:46:20 2013
From: apenya at mcs.anl.gov (Antonio J. Peña)
Date: Fri, 14 Jun 2013 16:46:20 -0500
Subject: [mpich-discuss] MPI server setup issue
In-Reply-To:
Message-ID: <3198378.OIJ6uL42Ef@localhost.localdomain>

Hi Sufeng,

> 1. when I run a simple MPI hello world on multiple nodes [...]
> bash: /mnt/mpi/mpich-install/bin/hydra_pmi_proxy: /lib/ld-linux.so.2:
> bad ELF interpreter: No such file or directory.

This issue may be related to a mismatch between 32- and 64-bit libraries.
Are you running 64- or 32-bit operating systems on all of your nodes
consistently?

> 2. for multiple servers, each of them has 10G ethernet card. [...]
> Should I set the hostfile as: 10.0.5.55:$(PROCESS_NUM)? Or using iface
> eth5

You can address those nodes by either IP or DNS name in the hostfile,
depending on how your system is configured. Using IP addresses is
completely OK.

Best,
Antonio
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
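To make the two options in Sufeng's second question concrete, here is a
sketch (the node addresses and process counts are made up; -iface is
Hydra's option for naming the interface its traffic should use):

    # hostfile: one node per line; ":N" caps the processes placed there.
    #   10.0.5.55:8
    #   10.0.5.56:8
    # Listing the 10G-side IPs already routes traffic over that fabric.

    # Alternatively, keep hostnames in the hostfile and name the
    # interface explicitly:
    mpiexec -f hostfile -iface eth5 -n 16 ./hello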
From james.dinan at gmail.com  Sat Jun 15 10:03:02 2013
From: james.dinan at gmail.com (Jim Dinan)
Date: Sat, 15 Jun 2013 10:03:02 -0500
Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem
In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A21B1@icexch-m3.ic.ac.uk>
Message-ID:

Eibhlin,

Did you make those permissions changes on every node where your program
runs? What happens if you run "mpiexec touch /dev/mem"?

~Jim.

On Fri, Jun 14, 2013 at 4:43 PM, Lee, Eibhlin wrote:

> Pavan,
> sorry when I do run mpiexec id the output is
> uid=1000(pi) gid=1000(pi) groups=1000(pi),... [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From balaji at mcs.anl.gov  Sat Jun 15 11:02:37 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Sat, 15 Jun 2013 11:02:37 -0500
Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem
In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A21B1@icexch-m3.ic.ac.uk>
Message-ID: <51BC901D.9030107@mcs.anl.gov>

On 06/14/2013 04:43 PM, Lee, Eibhlin wrote:
> sorry when I do run mpiexec id the output is
> uid=1000(pi) gid=1000(pi)
> groups=1000(pi),4(adm),20(dialout),24(cdrom),27(sudo),29(audio),44(video),46(plugdev),60(games),100(users),105(netdev),999(input)

This is the crux of the problem. Your application processes are being
launched as the regular user (uid=1000) instead of as root (uid=0). I
assume this is how you ran the program; can you confirm?

/* Login as root */
% su
% mpiexec [whatever_options] id

 -- Pavan

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji
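A concrete transcript of the check Pavan asks for (a sketch; the process
count and program are arbitrary). Every line of id output should report
uid=0(root) before ~/main has any chance of opening /dev/mem:

    # Become root, restart the process manager from that shell, then:
    su -
    mpiexec -n 2 id        # expect "uid=0(root) ..." printed once per rank
    mpiexec -n 2 ~/main    # only then retry the program that maps /dev/mem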
From eibhlin.lee10 at imperial.ac.uk  Sat Jun 15 12:18:51 2013
From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin)
Date: Sat, 15 Jun 2013 17:18:51 +0000
Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem
In-Reply-To: <51BC901D.9030107@mcs.anl.gov>
Message-ID: <2D283C3861654E41AEB39AE4B6767663173A24B1@icexch-m3.ic.ac.uk>

Pavan,

Normally I start smpd as the normal user. This time I did it as root. It
then prompted me for an smpd phrase. (This had happened the first time I
ever started it as the normal user, but it was the first time it had
appeared in root.) The uid is 0 now.

However, when I try to execute the program now, as root, it just hangs,
with no messages at all! I checked, and the cpi example does work when run
as root.

Eibhlin
________________________________________
From: Pavan Balaji [balaji at mcs.anl.gov]
Sent: 15 June 2013 17:02
> This is the crux of the problem. Your application processes are being
> launched as the regular user (uid=1000) instead of as root (uid=0). [...]

From eibhlin.lee10 at imperial.ac.uk  Sat Jun 15 12:19:48 2013
From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin)
Date: Sat, 15 Jun 2013 17:19:48 +0000
Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem
Message-ID: <2D283C3861654E41AEB39AE4B6767663173A24C4@icexch-m3.ic.ac.uk>

Jim,

At the time I was checking that it would run on one machine, so I only
changed the permissions on the one machine I was running the program on.

Eibhlin
________________________________
From: Jim Dinan [james.dinan at gmail.com]
Sent: 15 June 2013 16:03
> Did you make those permissions changes on every node where your program
> runs? What happens if you run "mpiexec touch /dev/mem"? [...]
Re: Running an mpi program that needs to access /dev/mem > (Jim Dinan) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Fri, 14 Jun 2013 16:46:20 -0500 > From: Antonio J. Pe?a > To: discuss at mpich.org > Subject: Re: [mpich-discuss] MPI server setup issue > Message-ID: <3198378.OIJ6uL42Ef at localhost.localdomain> > Content-Type: text/plain; charset="iso-8859-1" > > > Hi Sufeng, > > > > On Friday, June 14, 2013 04:35:39 PM Sufeng Niu wrote: > > > > Hello, > > > > > > I am a beginner on MPI programming, and right now I am working on an > MPI project. I got a few questions related to implementation issues: > > > > > > 1. when I run a simple MPI hello world on multiple nodes, (I already > installed mpich3 library on master node, mount the nfs, shared the > executable file and mpi library, set slave node to be keyless ssh), my > program was stoped there say: > > bash: /mnt/mpi/mpich-install/bin/hydra_pmi_proxy: /lib/ld-linux.so.2: bad > ELF interpreter: No such file or directory. > > I can not get rid of it for a long times. even though I reset everything > (I > already add PATH=/mnt/mpi/mpich-install/bin:$PATH in .bash_profile). Do > you have any clues on this problems? > > > > > This issue may be related to mismatch between 32 and 64 bit libraries. Are > you running 64 or 32 bit operating systems in all of your nodes > consistently? > > > 2. for multiple servers, each of them has 10G ethernet card. for > example, one network card address is eth5: 10.0.5.55. So if I want to > launch MPI communication through 10G network card. Should I set the > hostfile as: 10.0.5.55:$(PROCESS_NUM)? Or using iface eth5 > > > You can address those nodes by either IP or DNS name in the hostfile, > depending on how your system is configured. Using IP addresses is > completely OK. > > > Best, > Antonio > > > > > > > Thanks a lot! > > > > > > -- Best Regards, > > Sufeng Niu > > ECASP lab, ECE department, Illinois Institute of Technology > > Tel: 312-731-7219 > > > > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://lists.mpich.org/pipermail/discuss/attachments/20130614/67207b83/attachment-0001.html > > > > ------------------------------ > > Message: 2 > Date: Sat, 15 Jun 2013 10:03:02 -0500 > From: Jim Dinan > To: discuss at mpich.org > Subject: Re: [mpich-discuss] Running an mpi program that needs to > access /dev/mem > Message-ID: > T0q4Kq2a7kDJHV2q54WT34nBg at mail.gmail.com> > Content-Type: text/plain; charset="iso-8859-1" > > Eibhlin, > > Did you make those permissions changes on every node where your program > runs? What happens if you run "mpiexec touch /dev/mem"? > > ~Jim. > > > On Fri, Jun 14, 2013 at 4:43 PM, Lee, Eibhlin > wrote: > > > Pavan, > > sorry when I do run mpiexec id the output is > > uid=1000(pi) gid=1000(pi) > > > groups=1000(pi),4(adm),20(dialout),24(cdrom),27(sudo),29(audio),44(video),46(plugdev),60(games),100(users),105(netdev),999(input) > > > > regardless of whether I'm in root or my usual user. root at raspi or > > pi at raspi. Is this output what you would expect? > > > > Jim, > > I have tried changing the ownership of /dev/mem by > > chmod 755 /dev/mem so that the output of ls -l /dev/mem is > > crwxr-xr-x 1 root kmem 1, 1 Jan 1 1970 /dev/mem > > but I still can't open /dev/mem inside my program. I also tried with the > > code 777. > > > > I tried adding my user to the kmem group by doing > > usermod -a -G kmem pi > > but this doesn't fix the problem. 
> > > > > > Have I gotten totally confused and pi isn't my user? > > > > Thank you in advance, > > Eibhlin > > ------------------------------ > > *From:* discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf > > of Jim Dinan [james.dinan at gmail.com] > > *Sent:* 14 June 2013 21:31 > > > > *To:* discuss at mpich.org > > *Subject:* Re: [mpich-discuss] Running an mpi program that needs to > > access /dev/mem > > > > I don't know if this has been suggested, but you could also add your > > user to the kmem group and chmod /dev/mem so that you have the access you > > need. > > > > ~Jim. > > > > > > On Fri, Jun 14, 2013 at 1:24 PM, Pavan Balaji > wrote: > > > >> > >> You can run mpich as root. There's no restriction on that. You still > >> haven't tried out my suggestion of running "id" to check what user ID > you > >> are running your processes as. My guess is that you are not setting > your > >> user ID correctly. > >> > >> -- Pavan > >> > >> > >> On 06/14/2013 06:27 AM, Lee, Eibhlin wrote: > >> > >>> I found that the reason we want to access /dev/mem is to setup memory > >>> regions to access the peripherals. (We are trying to read the output > of an > >>> ADC). At this point it becomes more a linux/raspberry-pi specific > problem > >>> than an MPICH problem. Although the fact that you can't run a program > that > >>> needs access to memory mapping (even as the root user) seems something > that > >>> MPICH could improve on for future versions. I know I am using smpd > instead > >>> of hydra so this problem may already be solved. But if someone could > >>> confirm that, it would be really helpful. > >>> ______________________________**__________ > >>> From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf > >>> of Lee, Eibhlin [eibhlin.lee10 at imperial.ac.uk] > >>> Sent: 14 June 2013 11:20 > >>> To: discuss at mpich.org > >>> Subject: Re: [mpich-discuss] Running an mpi program that needs > >>> to access /dev/mem > >>> > >>> Gus, > >>> I tried running cpi, as is included in the installation of MPI, on two > >>> machines with two processes. The output message confirmed that it had > >>> started only 1 process instead of 2. > >>> Process 0 of 1 is on raspi > >>> pi is approximately... > >>> > >>> Then it just hung. I think this is because the other machine didn't > know > >>> where to output the data? > >>> > >>> When I tried running two processes on the one machine using the wrapper > >>> you suggested the output was the same but doubled. It didn't hang. This > >>> confirms that every process was started with rank 0. > >>> > >>> I'm not entirely sure why /dev/mem is needed. I'm working in a group > and > >>> another member set up io and gpio and it seemed it needed access to > >>> /dev/mem I am going to do a strace as suggested by Pavan Balaji to see > >>> where it is used and see if I can somehow work around it. > >>> > >>> Thank you for your help. > >>> Eibhlin > >>> ______________________________**__________ > >>> From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf > >>> of Gus Correa [gus at ldeo.columbia.edu] > >>> Sent: 13 June 2013 21:11 > >>> To: Discuss Mpich > >>> Subject: Re: [mpich-discuss] Running an mpi program that needs to > >>> access /dev/mem > >>> > >>> Hi Eibhlin > >>> > >>> On 06/13/2013 12:59 PM, Lee, Eibhlin wrote: > >>> > >>>> Gus, > >>>> I believe your first assumption is correct. Unfortunately it just > >>>> seemed to hang. I think this might be because each one is being made > to > >>>> have the same rank... 
> >>>> > >>> > >>> Darn! I was afraid that it might give only rank 0 to all MPI > processes. > >>> So, with the script wrapper the process being launched by mpiexec may > >>> indeed be sudo, > >>> not the actual mpi executable (main) :( > >>> Then it may actually launch a bunch of separate rank 0 replicas of your > >>> program, > >>> instead of assigning to them different ranks. > >>> However, without any output or error message, it is hard to tell. > >>> > >>> No output at all? > >>> No error message, just hangs? > >>> Have you tried a verbose flag (-v) to mpiexec? > >>> (Not sure if it exists in MPICH mpiexec, you'd need to check.) > >>> > >>> Would you care to try it with another mpi program, > >>> one that doesn't deal with /dev/mem (a risky business), > >>> say cpi.c (in the examples directory), or an mpi version of Hello, > world, > >>> just to see if the mpiexec+sudo_script_wrapper works as expected or > >>> if everybody gets rank 0? > >>> > >>> > >>> It may already be obvious but this is the first time I am using Linux. > >>>> I had tried sudo $(which mpiexec ....) and sudo $(which mpiexec) ... > both > >>>> without success. > >>>> > >>> > >>> "which mpiexec" will return the path to mpiexec, but won't execute it. > >>> > >>> You could try this (with backquotes): > >>> > >>> `which mpiexec` -n 2 ~/main > >>> > >>> On a side note, make sure the mpiexec you're using matches the > >>> mpicc/mpif90/MPI library from the MPICH that > >>> you used to compile the program. > >>> Often times computers have several flavors of MPI installed, and mixing > >>> them just doesn't work. > >>> > >>> Is putting the full path to it similar to/is a symlink? (This still > >>>> doesn't make main have super user privileges though.) > >>>> > >>> > >>> No, nothing to do with sudo privileges. > >>> > >>> This suggestion was just to avoid messing up your /usr/bin, > >>> which is a directory that despite the somewhat misleading name (/usr, > >>> for historical reasons I think), > >>> is supposed to hold system (Linux) programs (that users can use), but > >>> not user-installed programs. > >>> Normally things are that are installed in /usr get there via some Linux > >>> package manager program > >>> (yum, rpm, apt-get, etc), to keep consistency with libraries, etc. > >>> > >>> I belive MPICH would install by default in /usr/local/ (and put mpiexec > >>> in /usr/local/bin), > >>> which is kind of a default location for non-system applications. > >>> > >>> The full path suggestion would be something like: > >>> /path/to/where/you/installed/**mpiexec -n 2 ~/main > >>> > >>> However, this won't solve the other problem w.r.t. sudo and /dev/mem. > >>> > >>> You must know what you are doing, but it made me wonder, > >>> even if your program were sequential, why would you want to mess with > >>> /dev/mem directly? > >>> Just curious about it. > >>> > >>> Gus Correa > >>> > >>> > >>> > >>> Eibhlin > >>>> ______________________________**__________ > >>>> From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf > >>>> of Gus Correa [gus at ldeo.columbia.edu] > >>>> Sent: 13 June 2013 15:37 > >>>> To: Discuss Mpich > >>>> Subject: Re: [mpich-discuss] Running an mpi program that needs to > >>>> access /dev/mem > >>>> > >>>> Hi Lee > >>>> > >>>> How about replacing "~/main" in the mpiexec command line > >>>> by one-liner script? > >>>> Say, "sudo_main.sh", something like this: > >>>> > >>>> #! 
/bin/bash > >>>> sudo ~/main > >>>> > >>>> After all, it is "main" that accesses /dev/mem, > >>>> and needs "sudo" permissions, not mpiexec, right? > >>>> [Or do the mpiexec-launched processes inherit > >>>> the "sudo" stuff from mpiexec?] > >>>> > >>>> Not related, but, instead of putting mpiexec in /usr/bin, > >>>> can't you just use the full path to it? > >>>> > >>>> I hope this helps, > >>>> Gus Correa > >>>> > >>>> On 06/13/2013 10:09 AM, Lee, Eibhlin wrote: > >>>> > >>>>> Pavan, > >>>>> I had a lot of trouble getting hydra to work without having to enter > a > >>>>> password/passphrase. I saw the option to pass a phrase in the mpich > >>>>> installers guide. I eventually found that for that command you > needed to > >>>>> use the smpd process manager. That's the only reason I chose smpd > over > >>>>> hydra. > >>>>> As to your other suggestion. I ran ./main and the same error (Can't > >>>>> open /dev/mem...) appeared. sudo ./main works but of course without > >>>>> multiple processes. > >>>>> Eibhlin > >>>>> ______________________________**__________ > >>>>> From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on > behalf > >>>>> of Pavan Balaji [balaji at mcs.anl.gov] > >>>>> Sent: 13 June 2013 14:34 > >>>>> To: discuss at mpich.org > >>>>> Subject: Re: [mpich-discuss] Running an mpi program that needs to > >>>>> access /dev/mem > >>>>> > >>>>> I just saw your older email. Why are you using smpd instead of the > >>>>> default process manager (hydra)? > >>>>> > >>>>> -- Pavan > >>>>> > >>>>> On 06/13/2013 08:05 AM, Pavan Balaji wrote: > >>>>> > >>>>>> What's "-phrase"? That's not a recognized option. I'm not sure > where > >>>>>> the /dev/mem check is coming from. Try running ~/main without > mpiexec > >>>>>> first. > >>>>>> > >>>>>> -- Pavan > >>>>>> > >>>>>> On 06/13/2013 06:56 AM, Lee, Eibhlin wrote: > >>>>>> > >>>>>>> Hello all, > >>>>>>> > >>>>>>> I am trying to use two raspberry-pi to sample and then process some > >>>>>>> data. The first process samples while the second processes and vice > >>>>>>> versa. To do this I use gpio and also mpich-3.0.4 with the process > >>>>>>> manager smpd. I have successfully run cpi on both machines (from > the > >>>>>>> master machine). I have also managed to run a similar program but > >>>>>>> without the MPI, this involved compiling with gcc and when running > >>>>>>> putting sudo in front of the binary file. > >>>>>>> > >>>>>>> When I combine these two processes I get various error messages. > >>>>>>> For input: > >>>>>>> mpiexec -phrase cat -machinefile machinefile -n 2 ~/main > >>>>>>> the error is: > >>>>>>> Can't open /dev/mem > >>>>>>> Did you forget to use 'sudo .. ?' > >>>>>>> > >>>>>>> For input: > >>>>>>> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main > >>>>>>> the error is: > >>>>>>> sudo: mpiexec: Command not found > >>>>>>> > >>>>>>> I therefore put mpiexec into /usr/bin > >>>>>>> > >>>>>>> now for input: > >>>>>>> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main > >>>>>>> the error is: > >>>>>>> Can't open /dev/mem > >>>>>>> Did you forget to use 'sudo .. ?' > >>>>>>> > >>>>>>> Does anyone know how I can work around this? 
> >>>>>>> Thanks,
> >>>>>>> Eibhlin
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> _______________________________________________
> >>>>>>> discuss mailing list discuss at mpich.org
> >>>>>>> To manage subscription options or unsubscribe:
> >>>>>>> https://lists.mpich.org/mailman/listinfo/discuss
> >>>>>>>
> >>>>> --
> >>>>> Pavan Balaji
> >>>>> http://www.mcs.anl.gov/~balaji
> >>>>> _______________________________________________
> >>>>> discuss mailing list discuss at mpich.org
> >>>>> To manage subscription options or unsubscribe:
> >>>>> https://lists.mpich.org/mailman/listinfo/discuss
> >>>>> _______________________________________________
> >>>>> discuss mailing list discuss at mpich.org
> >>>>> To manage subscription options or unsubscribe:
> >>>>> https://lists.mpich.org/mailman/listinfo/discuss
> >>>>>
> >>>> _______________________________________________
> >>>> discuss mailing list discuss at mpich.org
> >>>> To manage subscription options or unsubscribe:
> >>>> https://lists.mpich.org/mailman/listinfo/discuss
> >>>> _______________________________________________
> >>>> discuss mailing list discuss at mpich.org
> >>>> To manage subscription options or unsubscribe:
> >>>> https://lists.mpich.org/mailman/listinfo/discuss
> >>>>
> >>>
> >>> _______________________________________________
> >>> discuss mailing list discuss at mpich.org
> >>> To manage subscription options or unsubscribe:
> >>> https://lists.mpich.org/mailman/listinfo/discuss
> >>> _______________________________________________
> >>> discuss mailing list discuss at mpich.org
> >>> To manage subscription options or unsubscribe:
> >>> https://lists.mpich.org/mailman/listinfo/discuss
> >>> _______________________________________________
> >>> discuss mailing list discuss at mpich.org
> >>> To manage subscription options or unsubscribe:
> >>> https://lists.mpich.org/mailman/listinfo/discuss
> >>>
> >>>
> >> --
> >> Pavan Balaji
> >> http://www.mcs.anl.gov/~balaji
> >> _______________________________________________
> >> discuss mailing list discuss at mpich.org
> >> To manage subscription options or unsubscribe:
> >> https://lists.mpich.org/mailman/listinfo/discuss
> >>
> >
> >
> > _______________________________________________
> > discuss mailing list discuss at mpich.org
> > To manage subscription options or unsubscribe:
> > https://lists.mpich.org/mailman/listinfo/discuss
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.mpich.org/pipermail/discuss/attachments/20130615/d29dd202/attachment.html>
>
> ------------------------------
>
> _______________________________________________
> discuss mailing list
> discuss at mpich.org
> https://lists.mpich.org/mailman/listinfo/discuss
>
> End of discuss Digest, Vol 8, Issue 29
> **************************************
>

--
Best Regards,
Sufeng Niu
ECASP lab, ECE department, Illinois Institute of Technology
Tel: 312-731-7219
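For reference, a minimal sketch of the wrapper approach Gus describes
above, assuming ~/main exists on every node and that a passwordless
(NOPASSWD) sudoers entry covers it; the file name sudo_main.sh is
illustrative:

#!/bin/bash
# sudo_main.sh -- launched by mpiexec in place of the MPI binary; each
# process then runs ~/main under sudo so it can open /dev/mem.
# -E asks sudo to keep the environment: MPICH's process managers hand
# rank/bootstrap information to each process through environment
# variables, and if sudo scrubs them, every process can come up as
# rank 0 -- the failure mode Gus warns about above. Whether -E is
# honored depends on the env_reset/env_keep policy in /etc/sudoers.
sudo -E ~/main "$@"

Usage would then be, e.g.:

chmod +x sudo_main.sh
mpiexec -machinefile machinefile -n 2 ./sudo_main.sh

-------------- next part --------------
An HTML attachment was scrubbed...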
URL: From balaji at mcs.anl.gov Sat Jun 15 20:25:08 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Sat, 15 Jun 2013 20:25:08 -0500 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A24B1@icexch-m3.ic.ac.uk> References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov> <51B9CA63.6010100@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk> <51B9D939.5040203@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A2004@icexch-m3.ic.ac.uk> <51BA275A.90608@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A207D@icexch-m3.ic.ac.uk> <2D283C3861654E41AEB39AE4B6767663173A20B8@icexch-m3.ic.ac.uk> <51BB5FE7.2030107@mcs.anl.gov>, <2D283C3861654E41AEB39AE4B6767663173A21B1@icexch-m3.ic.ac.uk>, <51BC901D.9030107@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A24B1@icexch-m3.ic.ac.uk> Message-ID: <51BD13F4.7030708@mcs.anl.gov> I thought you moved to Hydra when we mentioned that smpd was not supported. With smpd, you are on your own. Sorry. -- Pavan On 06/15/2013 12:18 PM, Lee, Eibhlin wrote: > Pavan, > > Normally I start smpd when I'm the normal user. This time I did it in root. It then prompted me for a smpd phrase (This happened the first time I ever started it in normal user but the first time it had appeared in root.) The uid=0 now. > > However, when I try to execute the program now, in root, it just hangs. With no messages at all! I checked and the cpi example does work when in root. > > Eibhlin > ________________________________________ > From: Pavan Balaji [balaji at mcs.anl.gov] > Sent: 15 June 2013 17:02 > To: discuss at mpich.org > Cc: Lee, Eibhlin > Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem > > On 06/14/2013 04:43 PM, Lee, Eibhlin wrote: >> sorry when I do run mpiexec id the output is >> uid=1000(pi) gid=1000(pi) >> groups=1000(pi),4(adm),20(dialout),24(cdrom),27(sudo),29(audio),44(video),46(plugdev),60(games),100(users),105(netdev),999(input) > > This is the crux of the problem. Your application processes are being > launched as the regular user (uid=1000) instead of as root (uid=0). I > assume this is how you ran the program; can you confirm? 
> > /* Login as root */ > % su > > % mpiexec [whatever_options] id > > -- Pavan > > -- > Pavan Balaji > http://www.mcs.anl.gov/~balaji > -- Pavan Balaji http://www.mcs.anl.gov/~balaji From eibhlin.lee10 at imperial.ac.uk Sun Jun 16 04:53:21 2013 From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin) Date: Sun, 16 Jun 2013 09:53:21 +0000 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: <51BD13F4.7030708@mcs.anl.gov> References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov> <51B9CA63.6010100@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk> <51B9D939.5040203@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A2004@icexch-m3.ic.ac.uk> <51BA275A.90608@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A207D@icexch-m3.ic.ac.uk> <2D283C3861654E41AEB39AE4B6767663173A20B8@icexch-m3.ic.ac.uk> <51BB5FE7.2030107@mcs.anl.gov>, <2D283C3861654E41AEB39AE4B6767663173A21B1@icexch-m3.ic.ac.uk>, <51BC901D.9030107@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A24B1@icexch-m3.ic.ac.uk>, <51BD13F4.7030708@mcs.anl.gov> Message-ID: <2D283C3861654E41AEB39AE4B6767663173A2520@icexch-m3.ic.ac.uk> Pavan, Antonio set up a ticket http://trac.mpich.org/projects/mpich/ticket/1885 to check whether the problem would still occur in hydra. Could you please run the test provided there with hydra and see if it works? Thanks, Eibhlin ________________________________________ From: Pavan Balaji [balaji at mcs.anl.gov] Sent: 16 June 2013 02:25 To: Lee, Eibhlin Cc: discuss at mpich.org Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem I thought you moved to Hydra when we mentioned that smpd was not supported. With smpd, you are on your own. Sorry. -- Pavan On 06/15/2013 12:18 PM, Lee, Eibhlin wrote: > Pavan, > > Normally I start smpd when I'm the normal user. This time I did it in root. It then prompted me for a smpd phrase (This happened the first time I ever started it in normal user but the first time it had appeared in root.) The uid=0 now. > > However, when I try to execute the program now, in root, it just hangs. With no messages at all! I checked and the cpi example does work when in root. > > Eibhlin > ________________________________________ > From: Pavan Balaji [balaji at mcs.anl.gov] > Sent: 15 June 2013 17:02 > To: discuss at mpich.org > Cc: Lee, Eibhlin > Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem > > On 06/14/2013 04:43 PM, Lee, Eibhlin wrote: >> sorry when I do run mpiexec id the output is >> uid=1000(pi) gid=1000(pi) >> groups=1000(pi),4(adm),20(dialout),24(cdrom),27(sudo),29(audio),44(video),46(plugdev),60(games),100(users),105(netdev),999(input) > > This is the crux of the problem. Your application processes are being > launched as the regular user (uid=1000) instead of as root (uid=0). I > assume this is how you ran the program; can you confirm? 
> > /* Login as root */ > % su > > % mpiexec [whatever_options] id > > -- Pavan > > -- > Pavan Balaji > http://www.mcs.anl.gov/~balaji > -- Pavan Balaji http://www.mcs.anl.gov/~balaji From balaji at mcs.anl.gov Sun Jun 16 07:14:02 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Sun, 16 Jun 2013 07:14:02 -0500 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A2520@icexch-m3.ic.ac.uk> References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov> <51B9CA63.6010100@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk> <51B9D939.5040203@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A2004@icexch-m3.ic.ac.uk> <51BA275A.90608@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A207D@icexch-m3.ic.ac.uk> <2D283C3861654E41AEB39AE4B6767663173A20B8@icexch-m3.ic.ac.uk> <51BB5FE7.2030107@mcs.anl.gov>, <2D283C3861654E41AEB39AE4B6767663173A21B1@icexch-m3.ic.ac.uk>, <51BC901D.9030107@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A24B1@icexch-m3.ic.ac.uk>, <51BD13F4.7030708@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A2520@icexch-m3.ic.ac.uk> Message-ID: <51BDAC0A.4070300@mcs.anl.gov> On 06/16/2013 04:53 AM, Lee, Eibhlin wrote: > Antonio set up a ticket > http://trac.mpich.org/projects/mpich/ticket/1885 to check whether the > problem would still occur in hydra. Could you please run the test > provided there with hydra and see if it works? I've verified that it works correctly and resolved the ticket. Please only use hydra. -- Pavan -- Pavan Balaji http://www.mcs.anl.gov/~balaji From eibhlin.lee10 at imperial.ac.uk Sun Jun 16 07:37:40 2013 From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin) Date: Sun, 16 Jun 2013 12:37:40 +0000 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: <51BDAC0A.4070300@mcs.anl.gov> References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov> <51B9CA63.6010100@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk> <51B9D939.5040203@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A2004@icexch-m3.ic.ac.uk> <51BA275A.90608@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A207D@icexch-m3.ic.ac.uk> <2D283C3861654E41AEB39AE4B6767663173A20B8@icexch-m3.ic.ac.uk> <51BB5FE7.2030107@mcs.anl.gov>, <2D283C3861654E41AEB39AE4B6767663173A21B1@icexch-m3.ic.ac.uk>, <51BC901D.9030107@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A24B1@icexch-m3.ic.ac.uk>, <51BD13F4.7030708@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A2520@icexch-m3.ic.ac.uk>, <51BDAC0A.4070300@mcs.anl.gov> Message-ID: <2D283C3861654E41AEB39AE4B6767663173A2541@icexch-m3.ic.ac.uk> Thank you Pavan, Just to clarify: you were able to run the program that accesses /dev/mem not just mpiexec id? I did not want to spend a whole day installing hydra instead of smpd if it would not work. Thank you everyone for your help! Eibhlin ________________________________________ From: Pavan Balaji [balaji at mcs.anl.gov] Sent: 16 June 2013 13:14 To: Lee, Eibhlin Cc: discuss at mpich.org Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem On 06/16/2013 04:53 AM, Lee, Eibhlin wrote: > Antonio set up a ticket > http://trac.mpich.org/projects/mpich/ticket/1885 to check whether the > problem would still occur in hydra. Could you please run the test > provided there with hydra and see if it works? 
I've verified that it works correctly and resolved the ticket.  Please
only use hydra.

  -- Pavan

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji

From eibhlin.lee10 at imperial.ac.uk  Sun Jun 16 07:37:40 2013
From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin)
Date: Sun, 16 Jun 2013 12:37:40 +0000
Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem
In-Reply-To: <51BDAC0A.4070300@mcs.anl.gov>
References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov> <51B9CA63.6010100@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk> <51B9D939.5040203@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A2004@icexch-m3.ic.ac.uk> <51BA275A.90608@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A207D@icexch-m3.ic.ac.uk> <2D283C3861654E41AEB39AE4B6767663173A20B8@icexch-m3.ic.ac.uk> <51BB5FE7.2030107@mcs.anl.gov>, <2D283C3861654E41AEB39AE4B6767663173A21B1@icexch-m3.ic.ac.uk>, <51BC901D.9030107@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A24B1@icexch-m3.ic.ac.uk>, <51BD13F4.7030708@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A2520@icexch-m3.ic.ac.uk>, <51BDAC0A.4070300@mcs.anl.gov>
Message-ID: <2D283C3861654E41AEB39AE4B6767663173A2541@icexch-m3.ic.ac.uk>

Thank you Pavan,
Just to clarify: you were able to run the program that accesses
/dev/mem, not just mpiexec id? I did not want to spend a whole day
installing hydra instead of smpd if it would not work.
Thank you everyone for your help!
Eibhlin
________________________________________
From: Pavan Balaji [balaji at mcs.anl.gov]
Sent: 16 June 2013 13:14
To: Lee, Eibhlin
Cc: discuss at mpich.org
Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem

On 06/16/2013 04:53 AM, Lee, Eibhlin wrote:
> Antonio set up a ticket
> http://trac.mpich.org/projects/mpich/ticket/1885 to check whether the
> problem would still occur in hydra. Could you please run the test
> provided there with hydra and see if it works?

I've verified that it works correctly and resolved the ticket.  Please
only use hydra.

  -- Pavan

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji

From balaji at mcs.anl.gov  Sun Jun 16 08:15:35 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Sun, 16 Jun 2013 08:15:35 -0500
Subject: [mpich-discuss] MPI server setup issue
In-Reply-To:
References:
Message-ID: <51BDBA77.2000704@mcs.anl.gov>

Hi Sufeng,

On 06/14/2013 04:35 PM, Sufeng Niu wrote:
> 1. when I run a simple MPI hello world on multiple nodes (I already
> installed the mpich3 library on the master node, mounted the NFS
> share, shared the executable and the MPI library, and set up keyless
> ssh to the slave node), my program stopped there, saying:
> bash: /mnt/mpi/mpich-install/bin/hydra_pmi_proxy: /lib/ld-linux.so.2:
> bad ELF interpreter: No such file or directory.

1. Did you make sure /mnt/mpi/mpich-install/bin/hydra_pmi_proxy is
available on each node?

2. Did you also make sure all libraries it is linked to are available
on each node?
You can check these libraries using "ldd /mnt/mpi/mpich-install/bin/hydra_pmi_proxy" -- Pavan -- Pavan Balaji http://www.mcs.anl.gov/~balaji From eibhlin.lee10 at imperial.ac.uk Sun Jun 16 08:19:20 2013 From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin) Date: Sun, 16 Jun 2013 13:19:20 +0000 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: <51BDB9ED.8050404@mcs.anl.gov> References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov> <51B9CA63.6010100@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk> <51B9D939.5040203@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A2004@icexch-m3.ic.ac.uk> <51BA275A.90608@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A207D@icexch-m3.ic.ac.uk> <2D283C3861654E41AEB39AE4B6767663173A20B8@icexch-m3.ic.ac.uk> <51BB5FE7.2030107@mcs.anl.gov>, <2D283C3861654E41AEB39AE4B6767663173A21B1@icexch-m3.ic.ac.uk>, <51BC901D.9030107@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A24B1@icexch-m3.ic.ac.uk>, <51BD13F4.7030708@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A2520@icexch-m3.ic.ac.uk>, <51BDAC0A.4070300@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A2541@icexch-m3.ic.ac.uk>, <51BDB9ED.8050404@mcs.anl.gov> Message-ID: <2D283C3861654E41AEB39AE4B6767663173A2562@icexch-m3.ic.ac.uk> Actually, I can run mpiexec id AS ROOT. It is only when I try to run the program that contains /dev/mem in root that there is an issue. Sorry that I wasn't so clear in my earlier post. Please re open the ticket to see if the program that contains /dev/mem will work properly using hydra. Eibhlin ________________________________________ From: Pavan Balaji [balaji at mcs.anl.gov] Sent: 16 June 2013 14:13 To: Lee, Eibhlin Cc: discuss at mpich.org Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem On 06/16/2013 07:37 AM, Lee, Eibhlin wrote: > Just to clarify: you were able to run the program that accesses > /dev/mem not just mpiexec id? I didn't run your application, but it doesn't matter. As I pointed out the primary problem you have is that your application processes are not running as root. If you don't fix that, you can't expect your application to work. I confirmed that you can run Hydra as root correctly. Till you get to that stage, I don't think we can do anything to help you. > I did not want to spend a whole day installing hydra instead of smpd > if it would not work. Well, all I can do is discourage you from using it. It's finally your call. But we can't spend more time debugging this for you unless you follow our recommendations. 
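Returning to the hydra_pmi_proxy failure above: here is a sketch of
both of Pavan's checks run from the master node, assuming the worker
hostnames sit one per line in ./machinefile (the script name is
illustrative; the proxy path is the one from the error message):

#!/bin/bash
# check_proxy.sh -- verify that hydra_pmi_proxy exists and that all of
# its shared libraries resolve on every node in the machinefile.
PROXY=/mnt/mpi/mpich-install/bin/hydra_pmi_proxy
while read -r host; do
    echo "=== $host ==="
    # -n stops ssh from swallowing the rest of the machinefile
    ssh -n "$host" "test -x $PROXY && ldd $PROXY || echo MISSING: $PROXY"
done < machinefile
# Any library that ldd reports as 'not found' has to be installed (or
# NFS-exported) on that node.

As a side note, /lib/ld-linux.so.2 in the error above is the 32-bit
ELF loader, so a "bad ELF interpreter" complaint about it usually
means a 32-bit hydra_pmi_proxy is being started on a 64-bit node that
has no 32-bit runtime installed.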
-- Pavan -- Pavan Balaji http://www.mcs.anl.gov/~balaji From balaji at mcs.anl.gov Sun Jun 16 08:23:18 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Sun, 16 Jun 2013 08:23:18 -0500 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A2562@icexch-m3.ic.ac.uk> References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov> <51B9CA63.6010100@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk> <51B9D939.5040203@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A2004@icexch-m3.ic.ac.uk> <51BA275A.90608@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A207D@icexch-m3.ic.ac.uk> <2D283C3861654E41AEB39AE4B6767663173A20B8@icexch-m3.ic.ac.uk> <51BB5FE7.2030107@mcs.anl.gov>, <2D283C3861654E41AEB39AE4B6767663173A21B1@icexch-m3.ic.ac.uk>, <51BC901D.9030107@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A24B1@icexch-m3.ic.ac.uk>, <51BD13F4.7030708@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A2520@icexch-m3.ic.ac.uk>, <51BDAC0A.4070300@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A2541@icexch-m3.ic.ac.uk>, <51BDB9ED.8050404@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A2562@icexch-m3.ic.ac.uk> Message-ID: <51BDBC46.5010906@mcs.anl.gov> On 06/16/2013 08:19 AM, Lee, Eibhlin wrote: > Actually, I can run mpiexec id AS ROOT. It is only when I try to run > the program that contains /dev/mem in root that there is an issue. > Sorry that I wasn't so clear in my earlier post. Are you sure? Here's what you wrote in a previous email, which indicates that your processes are not running as root: ================================================== sorry when I do run mpiexec id the output is uid=1000(pi) gid=1000(pi) groups=1000(pi),4(adm),20(dialout),24(cdrom),27(sudo),29(audio),44(video),46(plugdev),60(games),100(users),105(netdev),999(input) regardless of whether I'm in root or my usual user. root at raspi or pi at raspi. Is this output what you would expect? ================================================== You further added that you are running this with smpd. 
-- Pavan -- Pavan Balaji http://www.mcs.anl.gov/~balaji From eibhlin.lee10 at imperial.ac.uk Sun Jun 16 08:31:17 2013 From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin) Date: Sun, 16 Jun 2013 13:31:17 +0000 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: <51BDBC46.5010906@mcs.anl.gov> References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov> <51B9CA63.6010100@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk> <51B9D939.5040203@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A2004@icexch-m3.ic.ac.uk> <51BA275A.90608@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A207D@icexch-m3.ic.ac.uk> <2D283C3861654E41AEB39AE4B6767663173A20B8@icexch-m3.ic.ac.uk> <51BB5FE7.2030107@mcs.anl.gov>, <2D283C3861654E41AEB39AE4B6767663173A21B1@icexch-m3.ic.ac.uk>, <51BC901D.9030107@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A24B1@icexch-m3.ic.ac.uk>, <51BD13F4.7030708@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A2520@icexch-m3.ic.ac.uk>, <51BDAC0A.4070300@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A2541@icexch-m3.ic.ac.uk>, <51BDB9ED.8050404@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A2562@icexch-m3.ic.ac.uk>, <51BDBC46.5010906@mcs.anl.gov> Message-ID: <2D283C3861654E41AEB39AE4B6767663173A257B@icexch-m3.ic.ac.uk> First time I tried that was the case, I then started smpd in root instead of in user. This made the uid=0 as you said it should. So I can confirm that I can run mpiexec id as root and get the expected answers. Yes I am still running smpd but if someone can verify that the program I provided works with hydra I will configure MPICH with hydra and never look at smpd again. Eibhlin ________________________________________ From: Pavan Balaji [balaji at mcs.anl.gov] Sent: 16 June 2013 14:23 To: Lee, Eibhlin Cc: discuss at mpich.org Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem On 06/16/2013 08:19 AM, Lee, Eibhlin wrote: > Actually, I can run mpiexec id AS ROOT. It is only when I try to run > the program that contains /dev/mem in root that there is an issue. > Sorry that I wasn't so clear in my earlier post. Are you sure? Here's what you wrote in a previous email, which indicates that your processes are not running as root: ================================================== sorry when I do run mpiexec id the output is uid=1000(pi) gid=1000(pi) groups=1000(pi),4(adm),20(dialout),24(cdrom),27(sudo),29(audio),44(video),46(plugdev),60(games),100(users),105(netdev),999(input) regardless of whether I'm in root or my usual user. root at raspi or pi at raspi. Is this output what you would expect? ================================================== You further added that you are running this with smpd. 
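A quick way to verify that, as a sketch (assuming hydra's mpiexec is
the one on root's PATH and the nodes accept root ssh logins; the
machinefile name is illustrative):

# become root on the master node, then launch 'id' on every rank
su -
mpiexec -machinefile machinefile -n 2 id
# every rank should print uid=0(root); a rank that still reports
# uid=1000(pi) is not running as root on its node, and opening
# /dev/mem will fail there no matter how the program is launched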
-- Pavan -- Pavan Balaji http://www.mcs.anl.gov/~balaji From jhammond at alcf.anl.gov Sun Jun 16 12:19:48 2013 From: jhammond at alcf.anl.gov (Jeff Hammond) Date: Sun, 16 Jun 2013 12:19:48 -0500 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A2541@icexch-m3.ic.ac.uk> References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov> <51B9CA63.6010100@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk> <51B9D939.5040203@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A2004@icexch-m3.ic.ac.uk> <51BA275A.90608@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A207D@icexch-m3.ic.ac.uk> <2D283C3861654E41AEB39AE4B6767663173A20B8@icexch-m3.ic.ac.uk> <51BB5FE7.2030107@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A21B1@icexch-m3.ic.ac.uk> <51BC901D.9030107@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A24B1@icexch-m3.ic.ac.uk> <51BD13F4.7030708@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A2520@icexch-m3.ic.ac.uk> <51BDAC0A.4070300@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A2541@icexch-m3.ic.ac.uk> Message-ID: > I did not want to spend a whole day installing hydra instead of smpd if it would not work. It took me 41 seconds to install Hydra ex nihilo and I didn't even use "make -j". The argument that installation time is a bottleneck is invalid. smpd is not supported and you shouldn't use it. End of discussion. Jeff ALL DONE Sun Jun 16 12:16:12 CDT 2013 real 0m40.912s user 0m17.405s sys 0m11.028s Jeffs-MacBook-Pro:tmp jhammond$ cat auto #!/bin/bash date wget http://www.mpich.org/static/downloads/3.0.4/hydra-3.0.4.tar.gz && \ tar -xzf hydra-3.0.4.tar.gz && \ cd hydra-3.0.4 && \ mkdir build && \ cd build && \ ../configure CC=gcc --prefix=/tmp/hydra-install && \ make && \ make install && \ make check echo "ALL DONE" date -- Jeff Hammond Argonne Leadership Computing Facility University of Chicago Computation Institute jhammond at alcf.anl.gov / (630) 252-5381 http://www.linkedin.com/in/jeffhammond https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond ALCF docs: http://www.alcf.anl.gov/user-guides From eibhlin.lee10 at imperial.ac.uk Sun Jun 16 12:26:59 2013 From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin) Date: Sun, 16 Jun 2013 17:26:59 +0000 Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem In-Reply-To: References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov> <51B9CA63.6010100@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk> <51B9D939.5040203@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A2004@icexch-m3.ic.ac.uk> <51BA275A.90608@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A207D@icexch-m3.ic.ac.uk> <2D283C3861654E41AEB39AE4B6767663173A20B8@icexch-m3.ic.ac.uk> <51BB5FE7.2030107@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A21B1@icexch-m3.ic.ac.uk> <51BC901D.9030107@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A24B1@icexch-m3.ic.ac.uk> <51BD13F4.7030708@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A2520@icexch-m3.ic.ac.uk> <51BDAC0A.4070300@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A2541@icexch-m3.ic.ac.uk>, Message-ID: <2D283C3861654E41AEB39AE4B6767663173A25DF@icexch-m3.ic.ac.uk> Unfortunately my past experience with hydra is VERY different. I'm baffled as to how you managed to do it so quickly. 
Surely you have to download the file from mpich.org, then unzip it
(which takes at least 5 minutes), then configure (up to an hour), make
(a long time as well), and finally make install (another chance to go
and have a cuppa). And then get the image and put it onto another
device, which first needs formatting; for me that takes anywhere
between 10 minutes and an hour.
What did you do so that it took so little time?
Eibhlin
________________________________________
From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Jeff Hammond [jhammond at alcf.anl.gov]
Sent: 16 June 2013 18:19
To: discuss at mpich.org
Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem

> I did not want to spend a whole day installing hydra instead of smpd if it would not work.

It took me 41 seconds to install Hydra ex nihilo, and I didn't even use
"make -j". The argument that installation time is a bottleneck is
invalid. smpd is not supported and you shouldn't use it. End of
discussion.

Jeff

ALL DONE
Sun Jun 16 12:16:12 CDT 2013

real    0m40.912s
user    0m17.405s
sys     0m11.028s

Jeffs-MacBook-Pro:tmp jhammond$ cat auto
#!/bin/bash
date
wget http://www.mpich.org/static/downloads/3.0.4/hydra-3.0.4.tar.gz && \
tar -xzf hydra-3.0.4.tar.gz && \
cd hydra-3.0.4 && \
mkdir build && \
cd build && \
../configure CC=gcc --prefix=/tmp/hydra-install && \
make && \
make install && \
make check
echo "ALL DONE"
date

--
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
ALCF docs: http://www.alcf.anl.gov/user-guides
_______________________________________________
discuss mailing list discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss

From jhammond at alcf.anl.gov  Sun Jun 16 13:43:57 2013
From: jhammond at alcf.anl.gov (Jeff Hammond)
Date: Sun, 16 Jun 2013 13:43:57 -0500
Subject: [mpich-discuss] Running an mpi program that needs to access /dev/mem
In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173A25DF@icexch-m3.ic.ac.uk>
References: <2D283C3861654E41AEB39AE4B6767663173A1ECD@icexch-m3.ic.ac.uk> <51B9C390.8050705@mcs.anl.gov> <51B9CA63.6010100@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A1F0E@icexch-m3.ic.ac.uk> <51B9D939.5040203@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A2004@icexch-m3.ic.ac.uk> <51BA275A.90608@ldeo.columbia.edu> <2D283C3861654E41AEB39AE4B6767663173A207D@icexch-m3.ic.ac.uk> <2D283C3861654E41AEB39AE4B6767663173A20B8@icexch-m3.ic.ac.uk> <51BB5FE7.2030107@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A21B1@icexch-m3.ic.ac.uk> <51BC901D.9030107@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A24B1@icexch-m3.ic.ac.uk> <51BD13F4.7030708@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A2520@icexch-m3.ic.ac.uk> <51BDAC0A.4070300@mcs.anl.gov> <2D283C3861654E41AEB39AE4B6767663173A2541@icexch-m3.ic.ac.uk> <2D283C3861654E41AEB39AE4B6767663173A25DF@icexch-m3.ic.ac.uk>
Message-ID:

I do not own a Raspberry Pi. Is this thing self-hosted, i.e. are you
driving the Hydra download+build from a wimpy ARM core? Or is your Pi
attached to a real computer?

In case it wasn't obvious from the shell prompt ("Jeffs-MacBook-Pro:tmp
jhammond"), I was timing on a MacBook Pro, which happens to have a
2.5 GHz Intel Core i7 quad-core processor.
When I time on the oldest processor to which I have access (PPC970MP 2.5 GHz circa 2007), the whole process takes twice as long (77 seconds). In both cases, I built in /tmp, which on Linux is a ramdisk (http://en.wikipedia.org/wiki/Tmpfs). If building Hydra is a bottleneck because the Pi is too slow for basic operations like download a file, unpack it and run gcc on its contents, then you should go back to your very first statement ("I am trying to use two raspberry-pi to sample and then process some data") and consider whether or not what you're doing makes sense. If a computer can't compile very simple software like Hydra in a reasonable amount of time, I fail to see how it can be used to process data quickly. While configure isn't necessarily a good model for data analysis, I imagine that your time is best spent implementing and running your data analysis on a computer with more substantial processing power. I guess I should also point out that you should be able to cross compile Hydra if you have the appropriate cross-compiler toolchain, hence you can download and build Hydra on an x86 laptop and then merely copy the resulting binary over to your toy processor. Best, Jeff On Sun, Jun 16, 2013 at 12:26 PM, Lee, Eibhlin wrote: > Unfortunately my past experience with hydra is VERY different. > I'm baffled as to how you managed to do it so quickly. Surely you have to download the file from mpich.org then unzip (which takes at least 5 mins) then configure (up to an hour) make (a long time as well) and finally make install (another chance in which you can go have a cuppa). And then get the image and put it onto another device which first needs formatting; for me that takes anywhere between 10 minutes and an hour. > What did you do so that it took so little time? > Eibhlin > ________________________________________ > From: discuss-bounces at mpich.org [discuss-bounces at mpich.org] on behalf of Jeff Hammond [jhammond at alcf.anl.gov] > Sent: 16 June 2013 18:19 > To: discuss at mpich.org > Subject: Re: [mpich-discuss] Running an mpi program that needs to access /dev/mem > >> I did not want to spend a whole day installing hydra instead of smpd if it would not work. > > It took me 41 seconds to install Hydra ex nihilo and I didn't even use > "make -j". The argument that installation time is a bottleneck is > invalid. smpd is not supported and you shouldn't use it. End of > discussion. 
> > Jeff > > > ALL DONE > Sun Jun 16 12:16:12 CDT 2013 > > real 0m40.912s > user 0m17.405s > sys 0m11.028s > > Jeffs-MacBook-Pro:tmp jhammond$ cat auto > #!/bin/bash > date > wget http://www.mpich.org/static/downloads/3.0.4/hydra-3.0.4.tar.gz && \ > tar -xzf hydra-3.0.4.tar.gz && \ > cd hydra-3.0.4 && \ > mkdir build && \ > cd build && \ > ../configure CC=gcc --prefix=/tmp/hydra-install && \ > make && \ > make install && \ > make check > echo "ALL DONE" > date > > -- > Jeff Hammond > Argonne Leadership Computing Facility > University of Chicago Computation Institute > jhammond at alcf.anl.gov / (630) 252-5381 > http://www.linkedin.com/in/jeffhammond > https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond > ALCF docs: http://www.alcf.anl.gov/user-guides > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss -- Jeff Hammond Argonne Leadership Computing Facility University of Chicago Computation Institute jhammond at alcf.anl.gov / (630) 252-5381 http://www.linkedin.com/in/jeffhammond https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond ALCF docs: http://www.alcf.anl.gov/user-guides From panigrahi.pinak at gmail.com Sun Jun 16 21:59:53 2013 From: panigrahi.pinak at gmail.com (pinak panigrahi) Date: Mon, 17 Jun 2013 08:29:53 +0530 Subject: [mpich-discuss] MPI_Barrier Message-ID: Hi, I would like to know what algorithms are used for MPI_Barrier implementation in MPICH ... specifically, for intra-node MPI calls ! -- Pinak Panigrahi pursuing Masters in Computer Science at Sri Sathya Sai Institute Of Higher Learning, Puttaparti, India. "Thank God for what you have, Trust Him for what you need !" -------------- next part -------------- An HTML attachment was scrubbed... URL: From thakur at mcs.anl.gov Sun Jun 16 22:01:23 2013 From: thakur at mcs.anl.gov (Rajeev Thakur) Date: Sun, 16 Jun 2013 22:01:23 -0500 Subject: [mpich-discuss] MPI_Barrier In-Reply-To: References: Message-ID: There are comments in the source code. See src/mpi/coll/barrier.c Rajeev On Jun 16, 2013, at 9:59 PM, pinak panigrahi wrote: > Hi, > I would like to know what algorithms are used for MPI_Barrier implementation in MPICH ... specifically, for intra-node MPI calls ! > > -- > Pinak Panigrahi > pursuing Masters in Computer Science > at Sri Sathya Sai Institute Of Higher Learning, > Puttaparti, India. > > "Thank God for what you have, Trust Him for what you need !" > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss From panigrahi.pinak at gmail.com Mon Jun 17 09:41:59 2013 From: panigrahi.pinak at gmail.com (pinak panigrahi) Date: Mon, 17 Jun 2013 20:11:59 +0530 Subject: [mpich-discuss] MPI_Barrier In-Reply-To: References: Message-ID: Thank you Sir. On Mon, Jun 17, 2013 at 8:31 AM, Rajeev Thakur wrote: > There are comments in the source code. See src/mpi/coll/barrier.c > > Rajeev > > On Jun 16, 2013, at 9:59 PM, pinak panigrahi wrote: > > > Hi, > > I would like to know what algorithms are used for MPI_Barrier > implementation in MPICH ... specifically, for intra-node MPI calls ! 
> > > > -- > > Pinak Panigrahi > > pursuing Masters in Computer Science > > at Sri Sathya Sai Institute Of Higher Learning, > > Puttaparti, India. > > > > "Thank God for what you have, Trust Him for what you need !" > > _______________________________________________ > > discuss mailing list discuss at mpich.org > > To manage subscription options or unsubscribe: > > https://lists.mpich.org/mailman/listinfo/discuss > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > -- Pinak Panigrahi pursuing Masters in Computer Science at Sri Sathya Sai Institute Of Higher Learning, Puttaparti, India. "Thank God for what you have, Trust Him for what you need !" -------------- next part -------------- An HTML attachment was scrubbed... URL: From haroogan at gmail.com Mon Jun 17 13:07:55 2013 From: haroogan at gmail.com (Haroogan) Date: Mon, 17 Jun 2013 20:07:55 +0200 Subject: [mpich-discuss] Troubles Building MPICH on MinGW-w64 (GCC 4.8.0) In-Reply-To: <51BF4E96.2050002@gmail.com> References: <51BF4E96.2050002@gmail.com> Message-ID: <51BF507B.4030502@gmail.com> Hello, I'm trying to build MPICH under MinGW-w64 based on GCC 4.8.0 (POSIX Threads), and here is what I get: configure: error: Unable to determine the size of MPI_BSEND_OVERHEAD" Any ideas? -------------- next part -------------- An HTML attachment was scrubbed... URL: From wbland at mcs.anl.gov Mon Jun 17 13:12:18 2013 From: wbland at mcs.anl.gov (Wesley Bland) Date: Mon, 17 Jun 2013 13:12:18 -0500 Subject: [mpich-discuss] Troubles Building MPICH on MinGW-w64 (GCC 4.8.0) In-Reply-To: <51BF507B.4030502@gmail.com> References: <51BF4E96.2050002@gmail.com> <51BF507B.4030502@gmail.com> Message-ID: <240663FA-F8B5-45F6-8DDB-52C46F887A18@mcs.anl.gov> Which version of MPICH are you trying to build? Can you send us the config.log? On Jun 17, 2013, at 1:07 PM, Haroogan wrote: > Hello, > > I'm trying to build MPICH under MinGW-w64 based on GCC 4.8.0 (POSIX Threads), and here is what I get: > > configure: error: Unable to determine the size of MPI_BSEND_OVERHEAD" > > Any ideas? _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From haroogan at gmail.com Mon Jun 17 13:17:12 2013 From: haroogan at gmail.com (Haroogan) Date: Mon, 17 Jun 2013 20:17:12 +0200 Subject: [mpich-discuss] Troubles Building MPICH on MinGW-w64 (GCC 4.8.0) In-Reply-To: <240663FA-F8B5-45F6-8DDB-52C46F887A18@mcs.anl.gov> References: <51BF4E96.2050002@gmail.com> <51BF507B.4030502@gmail.com> <240663FA-F8B5-45F6-8DDB-52C46F887A18@mcs.anl.gov> Message-ID: <51BF52A8.2020209@gmail.com> > Which version of MPICH are you trying to build? mpich-3.0.4(stable release) > Can you send us the config.log? I'm not sure what do you want to see there, but here is the attachment. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. It was created by MPICH configure 3.0.4, which was generated by GNU Autoconf 2.69. Invocation command line was $ ../configure -prefix=D:/Libraries/x64/MinGW-w64/4.8.0/MPICH --enable-fast=all,O3 ## --------- ## ## Platform. 
## ## --------- ## hostname = Haroogan-PC uname -m = i686 uname -r = 1.0.17(0.48/3/2) uname -s = MINGW32_NT-6.1 uname -v = 2011-04-24 23:39 /usr/bin/uname -p = unknown /bin/uname -X = unknown /bin/arch = unknown /usr/bin/arch -k = unknown /usr/convex/getsysinfo = unknown /usr/bin/hostinfo = unknown /bin/machine = unknown /usr/bin/oslevel = unknown /bin/universe = unknown PATH: . PATH: /usr/local/bin PATH: /mingw/bin PATH: /bin PATH: /c/Windows PATH: /c/Windows/System32 PATH: /c/Windows/System32/Wbem PATH: /c/Windows/System32/sysprep PATH: /c/Windows/System32/WindowsPowerShell/v1.0 PATH: /d/ProgramFiles/x64/Windows Imaging PATH: /d/ProgramFiles/x64/Intel/Intel(R) Management Engine Components/DAL PATH: /d/ProgramFiles/x64/Intel/Intel(R) Management Engine Components/IPT PATH: /d/ProgramFiles/x86/Intel/Intel(R) Management Engine Components/DAL PATH: /d/ProgramFiles/x86/Intel/Intel(R) Management Engine Components/IPT PATH: /d/ProgramFiles/x86/Intel/iCLS Client/ PATH: /d/ProgramFiles/x64/Intel/iCLS Client/ PATH: /d/ProgramFiles/x86/NVIDIA Corporation/PhysX/Common PATH: /d/Users/Haroogan/Environment PATH: /usr/bin PATH: /d/Toolchains/x64/MinGW-w64/4.8.0/bin PATH: /d/Toolchains/x64/LLVM/3.3/bin PATH: /d/Tools/Ninja/bin PATH: /d/Applications/Vim PATH: /d/Applications/Python 2.7.3 PATH: /d/Applications/Python 2.7.3/Scripts PATH: /d/Applications/ConTeXt/tex/texmf-mswin/bin PATH: /d/Applications/Microsoft Visual Studio 2012/VC/bin/x86_amd64 PATH: /d/Applications/Microsoft Visual Studio 2012/Common7/IDE PATH: /d/Libraries/x64/MinGW-w64/4.7.2/GCF/2.6.2/bin PATH: /d/Applications/Emacs/bin PATH: /d/Tools/PuTTY/bin ## ----------- ## ## Core tests. ## ## ----------- ## configure:5052: checking for icc configure:5082: result: no configure:5052: checking for pgcc configure:5082: result: no configure:5052: checking for xlc configure:5082: result: no configure:5052: checking for xlC configure:5082: result: no configure:5052: checking for pathcc configure:5082: result: no configure:5052: checking for cc configure:5082: result: no configure:5052: checking for gcc configure:5068: found /d/Toolchains/x64/MinGW-w64/4.8.0/bin/gcc configure:5079: result: gcc configure:5110: checking for C compiler version configure:5119: gcc --version >&5 gcc.exe (rev2, Built by MinGW-builds project) 4.8.0 Copyright (C) 2013 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. configure:5130: $? = 0 configure:5119: gcc -v >&5 Using built-in specs. 
COLLECT_GCC=d:\Toolchains\x64\MinGW-w64\4.8.0\bin\gcc.exe COLLECT_LTO_WRAPPER=d:/toolchains/x64/mingw-w64/4.8.0/bin/../libexec/gcc/x86_64-w64-mingw32/4.8.0/lto-wrapper.exe Target: x86_64-w64-mingw32 Configured with: ../../../src/gcc-4.8.0/configure --host=x86_64-w64-mingw32 --build=x86_64-w64-mingw32 --target=x86_64-w64-mingw32 --prefix=/mingw64 --with-sysroot=/temp/x64-480-posix-seh-r2/mingw64 --enable-shared --enable-static --disable-multilib --enable-languages=c,c++,fortran,lto --enable-libstdcxx-time=yes --enable-threads=posix --enable-libgomp --enable-lto --enable-graphite --enable-checking=release --enable-fully-dynamic-string --enable-version-specific-runtime-libs --disable-isl-version-check --disable-cloog-version-check --disable-libstdcxx-pch --disable-libstdcxx-debug --disable-bootstrap --disable-rpath --disable-win32-registry --disable-nls --disable-werror --disable-symvers --with-gnu-as --with-gnu-ld --with-arch=nocona --with-tune=core2 --with-host-libstdcxx='-static -lstdc++' --with-libiconv --with-system-zlib --with-gmp=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpfr=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpc=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-isl=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-cloog=/temp/mingw-prereq/x86_64-w64-mingw32-static --enable-cloog-backend=isl --with-pkgversion='rev2, Built by MinGW-builds project' --with-bugurl=http://sourceforge.net/projects/mingwbuilds/ CFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CXXFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CPPFLAGS= LDFLAGS='-pipe -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' Thread model: posix gcc version 4.8.0 (rev2, Built by MinGW-builds project) configure:5130: $? = 0 configure:5119: gcc -V >&5 gcc.exe: error: unrecognized command line option '-V' gcc.exe: fatal error: no input files compilation terminated. configure:5130: $? = 1 configure:5119: gcc -qversion >&5 gcc.exe: error: unrecognized command line option '-qversion' gcc.exe: fatal error: no input files compilation terminated. configure:5130: $? = 1 configure:5150: checking whether the C compiler works configure:5172: gcc conftest.c >&5 configure:5176: $? = 0 configure:5224: result: yes configure:5227: checking for C compiler default output file name configure:5229: result: a.exe configure:5235: checking for suffix of executables configure:5242: gcc -o conftest.exe conftest.c >&5 configure:5246: $? = 0 configure:5268: result: .exe configure:5290: checking whether we are cross compiling configure:5298: gcc -o conftest.exe conftest.c >&5 configure:5302: $? = 0 configure:5309: ./conftest.exe configure:5313: $? = 0 configure:5328: result: no configure:5333: checking for suffix of object files configure:5355: gcc -c conftest.c >&5 configure:5359: $? = 0 configure:5380: result: o configure:5384: checking whether we are using the GNU C compiler configure:5403: gcc -c conftest.c >&5 configure:5403: $? = 0 configure:5412: result: yes configure:5421: checking whether gcc accepts -g configure:5441: gcc -c -g conftest.c >&5 configure:5441: $? 
= 0 configure:5482: result: yes configure:5499: checking for gcc option to accept ISO C89 configure:5562: gcc -c conftest.c >&5 configure:5562: $? = 0 configure:5575: result: none needed configure:5602: checking whether gcc and cc understand -c and -o together configure:5633: gcc -c conftest.c -o conftest2.o >&5 configure:5637: $? = 0 configure:5643: gcc -c conftest.c -o conftest2.o >&5 configure:5647: $? = 0 configure:5658: cc -c conftest.c >&5 ../configure: line 5660: cc: command not found configure:5662: $? = 127 configure:5702: result: yes configure:5741: checking how to run the C preprocessor configure:5772: gcc -E conftest.c configure:5772: $? = 0 configure:5786: gcc -E conftest.c conftest.c:10:28: fatal error: ac_nonexistent.h: No such file or directory #include ^ compilation terminated. configure:5786: $? = 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | /* end confdefs.h. */ | #include configure:5811: result: gcc -E configure:5831: gcc -E conftest.c configure:5831: $? = 0 configure:5845: gcc -E conftest.c conftest.c:10:28: fatal error: ac_nonexistent.h: No such file or directory #include ^ compilation terminated. configure:5845: $? = 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | /* end confdefs.h. */ | #include configure:5905: checking for a BSD-compatible install configure:5973: result: /bin/install -c configure:5984: checking whether build environment is sane configure:6039: result: yes configure:6187: checking for a thread-safe mkdir -p configure:6226: result: /bin/mkdir -p configure:6233: checking for gawk configure:6249: found /bin/gawk configure:6260: result: gawk configure:6271: checking whether make sets $(MAKE) configure:6293: result: yes configure:6323: checking for style of include used by make configure:6351: result: GNU configure:6385: checking whether make supports nested variables configure:6402: result: yes configure:6482: checking dependency style of gcc configure:6593: result: gcc3 configure:6610: checking whether to enable maintainer-specific portions of Makefiles configure:6619: result: yes configure:6683: checking for ar configure:6699: found /d/Toolchains/x64/MinGW-w64/4.8.0/bin/ar configure:6710: result: ar configure:6736: checking the archiver (ar) interface configure:6746: gcc -c conftest.c >&5 configure:6746: $? = 0 configure:6748: ar cru libconftest.a conftest.o >&5 configure:6751: $? 
= 0 configure:6774: result: ar configure:6824: checking build system type configure:6838: result: i686-pc-mingw32 configure:6858: checking host system type configure:6871: result: i686-pc-mingw32 configure:6912: checking how to print strings configure:6939: result: printf configure:6960: checking for a sed that does not truncate output configure:7024: result: /bin/sed configure:7042: checking for grep that handles long lines and -e configure:7100: result: /bin/grep configure:7105: checking for egrep configure:7167: result: /bin/grep -E configure:7172: checking for fgrep configure:7234: result: /bin/grep -F configure:7269: checking for ld used by gcc configure:7336: result: d:/toolchains/x64/mingw-w64/4.8.0/x86_64-w64-mingw32/bin/ld.exe configure:7343: checking if the linker (d:/toolchains/x64/mingw-w64/4.8.0/x86_64-w64-mingw32/bin/ld.exe) is GNU ld configure:7358: result: yes configure:7370: checking for BSD- or MS-compatible name lister (nm) configure:7419: result: /d/Toolchains/x64/MinGW-w64/4.8.0/bin/nm configure:7549: checking the name lister (/d/Toolchains/x64/MinGW-w64/4.8.0/bin/nm) interface configure:7556: gcc -c conftest.c >&5 configure:7559: /d/Toolchains/x64/MinGW-w64/4.8.0/bin/nm "conftest.o" configure:7562: output 0000000000000000 b .bss 0000000000000000 d .data 0000000000000000 r .rdata$zzz 0000000000000000 t .text 0000000000000000 B some_variable configure:7569: result: BSD nm configure:7572: checking whether ln -s works configure:7579: result: no, using cp -pR configure:7584: checking the maximum length of command line arguments configure:7714: result: 8192 configure:7731: checking whether the shell understands some XSI constructs configure:7741: result: yes configure:7745: checking whether the shell understands "+=" configure:7751: result: yes configure:7786: checking how to convert i686-pc-mingw32 file names to i686-pc-mingw32 format configure:7826: result: func_convert_file_msys_to_w32 configure:7833: checking how to convert i686-pc-mingw32 file names to toolchain format configure:7853: result: func_convert_file_msys_to_w32 configure:7860: checking for d:/toolchains/x64/mingw-w64/4.8.0/x86_64-w64-mingw32/bin/ld.exe option to reload object files configure:7867: result: -r configure:7941: checking for objdump configure:7957: found /d/Toolchains/x64/MinGW-w64/4.8.0/bin/objdump configure:7968: result: objdump configure:8000: checking how to recognize dependent libraries configure:8202: result: file_magic ^x86 archive import|^x86 DLL configure:8287: checking for dlltool configure:8303: found /d/Toolchains/x64/MinGW-w64/4.8.0/bin/dlltool configure:8314: result: dlltool configure:8347: checking how to associate runtime and link libraries configure:8374: result: func_cygming_dll_for_implib configure:8498: checking for archiver @FILE support configure:8515: gcc -c conftest.c >&5 configure:8515: $? = 0 configure:8518: ar cru libconftest.a @conftest.lst >&5 configure:8521: $? = 0 configure:8526: ar cru libconftest.a @conftest.lst >&5 d:\Toolchains\x64\MinGW-w64\4.8.0\bin\ar.exe: conftest.o: No such file or directory configure:8529: $? 
= 1 configure:8541: result: @ configure:8599: checking for strip configure:8615: found /d/Toolchains/x64/MinGW-w64/4.8.0/bin/strip configure:8626: result: strip configure:8698: checking for ranlib configure:8714: found /d/Toolchains/x64/MinGW-w64/4.8.0/bin/ranlib configure:8725: result: ranlib configure:8827: checking command to parse /d/Toolchains/x64/MinGW-w64/4.8.0/bin/nm output from gcc object configure:8947: gcc -c conftest.c >&5 configure:8950: $? = 0 configure:8954: /d/Toolchains/x64/MinGW-w64/4.8.0/bin/nm conftest.o \| sed -n -e 's/^.*[ ]\([ABCDGIRSTW][ABCDGIRSTW]*\)[ ][ ]*\([_A-Za-z][_A-Za-z0-9]*\)\{0,1\}$/\1 \2 \2/p' | sed '/ __gnu_lto/d' \> conftest.nm configure:8957: $? = 0 configure:9023: gcc -o conftest.exe conftest.c conftstm.o >&5 configure:9026: $? = 0 configure:9064: result: ok configure:9101: checking for sysroot configure:9131: result: no configure:9387: checking for mt configure:9417: result: no configure:9437: checking if : is a manifest tool configure:9443: : '-?' configure:9451: result: no configure:10089: checking for ANSI C header files configure:10109: gcc -c conftest.c >&5 configure:10109: $? = 0 configure:10182: gcc -o conftest.exe conftest.c >&5 configure:10182: $? = 0 configure:10182: ./conftest.exe configure:10182: $? = 0 configure:10193: result: yes configure:10206: checking for sys/types.h configure:10206: gcc -c conftest.c >&5 configure:10206: $? = 0 configure:10206: result: yes configure:10206: checking for sys/stat.h configure:10206: gcc -c conftest.c >&5 configure:10206: $? = 0 configure:10206: result: yes configure:10206: checking for stdlib.h configure:10206: gcc -c conftest.c >&5 configure:10206: $? = 0 configure:10206: result: yes configure:10206: checking for string.h configure:10206: gcc -c conftest.c >&5 configure:10206: $? = 0 configure:10206: result: yes configure:10206: checking for memory.h configure:10206: gcc -c conftest.c >&5 configure:10206: $? = 0 configure:10206: result: yes configure:10206: checking for strings.h configure:10206: gcc -c conftest.c >&5 configure:10206: $? = 0 configure:10206: result: yes configure:10206: checking for inttypes.h configure:10206: gcc -c conftest.c >&5 configure:10206: $? = 0 configure:10206: result: yes configure:10206: checking for stdint.h configure:10206: gcc -c conftest.c >&5 configure:10206: $? = 0 configure:10206: result: yes configure:10206: checking for unistd.h configure:10206: gcc -c conftest.c >&5 configure:10206: $? = 0 configure:10206: result: yes configure:10220: checking for dlfcn.h configure:10220: gcc -c conftest.c >&5 conftest.c:56:19: fatal error: dlfcn.h: No such file or directory #include ^ compilation terminated. configure:10220: $? = 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | /* end confdefs.h. 
*/ | #include | #ifdef HAVE_SYS_TYPES_H | # include | #endif | #ifdef HAVE_SYS_STAT_H | # include | #endif | #ifdef STDC_HEADERS | # include | # include | #else | # ifdef HAVE_STDLIB_H | # include | # endif | #endif | #ifdef HAVE_STRING_H | # if !defined STDC_HEADERS && defined HAVE_MEMORY_H | # include | # endif | # include | #endif | #ifdef HAVE_STRINGS_H | # include | #endif | #ifdef HAVE_INTTYPES_H | # include | #endif | #ifdef HAVE_STDINT_H | # include | #endif | #ifdef HAVE_UNISTD_H | # include | #endif | | #include configure:10220: result: no configure:10425: checking for objdir configure:10440: result: .libs configure:10711: checking if gcc supports -fno-rtti -fno-exceptions configure:10729: gcc -c -fno-rtti -fno-exceptions conftest.c >&5 cc1.exe: warning: command line option '-fno-rtti' is valid for C++/ObjC++ but not for C [enabled by default] configure:10733: $? = 0 configure:10746: result: no configure:11073: checking for gcc option to produce PIC configure:11080: result: -DDLL_EXPORT -DPIC configure:11088: checking if gcc PIC flag -DDLL_EXPORT -DPIC works configure:11106: gcc -c -DDLL_EXPORT -DPIC -DPIC conftest.c >&5 configure:11110: $? = 0 configure:11123: result: yes configure:11152: checking if gcc static flag -static works configure:11180: result: yes configure:11195: checking if gcc supports -c -o file.o configure:11216: gcc -c -o out/conftest2.o conftest.c >&5 configure:11220: $? = 0 configure:11242: result: yes configure:11250: checking if gcc supports -c -o file.o configure:11297: result: yes configure:11330: checking whether the gcc linker (d:/toolchains/x64/mingw-w64/4.8.0/x86_64-w64-mingw32/bin/ld.exe) supports shared libraries configure:12483: result: yes configure:12723: checking dynamic linker characteristics configure:13456: result: Win32 ld.exe configure:13563: checking how to hardcode library paths into programs configure:13588: result: immediate configure:14128: checking whether stripping libraries is possible configure:14133: result: yes configure:14168: checking if libtool supports shared libraries configure:14170: result: yes configure:14173: checking whether to build shared libraries configure:14194: result: no configure:14197: checking whether to build static libraries configure:14201: result: yes configure:14250: checking whether make supports nested variables configure:14267: result: yes configure:14361: checking for icpc configure:14391: result: no configure:14361: checking for pgCC configure:14391: result: no configure:14361: checking for xlC configure:14391: result: no configure:14361: checking for pathCC configure:14391: result: no configure:14361: checking for c++ configure:14377: found /d/Toolchains/x64/MinGW-w64/4.8.0/bin/c++ configure:14388: result: c++ configure:14415: checking for C++ compiler version configure:14424: c++ --version >&5 c++.exe (rev2, Built by MinGW-builds project) 4.8.0 Copyright (C) 2013 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. configure:14435: $? = 0 configure:14424: c++ -v >&5 Using built-in specs. 
COLLECT_GCC=d:\Toolchains\x64\MinGW-w64\4.8.0\bin\c++.exe COLLECT_LTO_WRAPPER=d:/toolchains/x64/mingw-w64/4.8.0/bin/../libexec/gcc/x86_64-w64-mingw32/4.8.0/lto-wrapper.exe Target: x86_64-w64-mingw32 Configured with: ../../../src/gcc-4.8.0/configure --host=x86_64-w64-mingw32 --build=x86_64-w64-mingw32 --target=x86_64-w64-mingw32 --prefix=/mingw64 --with-sysroot=/temp/x64-480-posix-seh-r2/mingw64 --enable-shared --enable-static --disable-multilib --enable-languages=c,c++,fortran,lto --enable-libstdcxx-time=yes --enable-threads=posix --enable-libgomp --enable-lto --enable-graphite --enable-checking=release --enable-fully-dynamic-string --enable-version-specific-runtime-libs --disable-isl-version-check --disable-cloog-version-check --disable-libstdcxx-pch --disable-libstdcxx-debug --disable-bootstrap --disable-rpath --disable-win32-registry --disable-nls --disable-werror --disable-symvers --with-gnu-as --with-gnu-ld --with-arch=nocona --with-tune=core2 --with-host-libstdcxx='-static -lstdc++' --with-libiconv --with-system-zlib --with-gmp=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpfr=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpc=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-isl=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-cloog=/temp/mingw-prereq/x86_64-w64-mingw32-static --enable-cloog-backend=isl --with-pkgversion='rev2, Built by MinGW-builds project' --with-bugurl=http://sourceforge.net/projects/mingwbuilds/ CFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CXXFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CPPFLAGS= LDFLAGS='-pipe -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' Thread model: posix gcc version 4.8.0 (rev2, Built by MinGW-builds project) configure:14435: $? = 0 configure:14424: c++ -V >&5 c++.exe: error: unrecognized command line option '-V' c++.exe: fatal error: no input files compilation terminated. configure:14435: $? = 1 configure:14424: c++ -qversion >&5 c++.exe: error: unrecognized command line option '-qversion' c++.exe: fatal error: no input files compilation terminated. configure:14435: $? = 1 configure:14439: checking whether we are using the GNU C++ compiler configure:14458: c++ -c conftest.cpp >&5 configure:14458: $? = 0 configure:14467: result: yes configure:14476: checking whether c++ accepts -g configure:14496: c++ -c -g conftest.cpp >&5 configure:14496: $? = 0 configure:14537: result: yes configure:14562: checking dependency style of c++ configure:14673: result: gcc3 configure:14706: checking how to run the C++ preprocessor configure:14733: c++ -E conftest.cpp configure:14733: $? = 0 configure:14747: c++ -E conftest.cpp conftest.cpp:23:28: fatal error: ac_nonexistent.h: No such file or directory #include <ac_nonexistent.h> ^ compilation terminated. configure:14747: $? 
= 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | /* end confdefs.h. */ | #include <ac_nonexistent.h> configure:14772: result: c++ -E configure:14792: c++ -E conftest.cpp configure:14792: $? = 0 configure:14806: c++ -E conftest.cpp conftest.cpp:23:28: fatal error: ac_nonexistent.h: No such file or directory #include <ac_nonexistent.h> ^ compilation terminated. configure:14806: $? = 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | /* end confdefs.h. */ | #include <ac_nonexistent.h> configure:14975: checking for ld used by c++ configure:15042: result: d:/toolchains/x64/mingw-w64/4.8.0/x86_64-w64-mingw32/bin/ld.exe configure:15049: checking if the linker (d:/toolchains/x64/mingw-w64/4.8.0/x86_64-w64-mingw32/bin/ld.exe) is GNU ld configure:15064: result: yes configure:15119: checking whether the c++ linker (d:/toolchains/x64/mingw-w64/4.8.0/x86_64-w64-mingw32/bin/ld.exe) supports shared libraries configure:16124: result: yes configure:16160: c++ -c conftest.cpp >&5 configure:16163: $? = 0 configure:16683: checking for c++ option to produce PIC configure:16690: result: -DDLL_EXPORT -DPIC configure:16698: checking if c++ PIC flag -DDLL_EXPORT -DPIC works configure:16716: c++ -c -DDLL_EXPORT -DPIC -DPIC conftest.cpp >&5 configure:16720: $? = 0 configure:16733: result: yes configure:16756: checking if c++ static flag -static works configure:16784: result: yes configure:16796: checking if c++ supports -c -o file.o configure:16817: c++ -c -o out/conftest2.o conftest.cpp >&5 configure:16821: $? 
= 0 configure:16843: result: yes configure:16848: checking if c++ supports -c -o file.o configure:16895: result: yes configure:16925: checking whether the c++ linker (d:/toolchains/x64/mingw-w64/4.8.0/x86_64-w64-mingw32/bin/ld.exe) supports shared libraries configure:16961: result: yes configure:17102: checking dynamic linker characteristics configure:17769: result: Win32 ld.exe configure:17822: checking how to hardcode library paths into programs configure:17847: result: immediate configure:17943: checking for ifort configure:17973: result: no configure:17943: checking for pgf77 configure:17973: result: no configure:17943: checking for af77 configure:17973: result: no configure:17943: checking for xlf configure:17973: result: no configure:17943: checking for frt configure:17973: result: no configure:17943: checking for cf77 configure:17973: result: no configure:17943: checking for fort77 configure:17973: result: no configure:17943: checking for fl32 configure:17973: result: no configure:17943: checking for fort configure:17973: result: no configure:17943: checking for ifc configure:17973: result: no configure:17943: checking for efc configure:17973: result: no configure:17943: checking for ftn configure:17973: result: no configure:17943: checking for gfortran configure:17959: found /d/Toolchains/x64/MinGW-w64/4.8.0/bin/gfortran configure:17970: result: gfortran configure:17996: checking for Fortran 77 compiler version configure:18005: gfortran --version >&5 GNU Fortran (rev2, Built by MinGW-builds project) 4.8.0 Copyright (C) 2013 Free Software Foundation, Inc. GNU Fortran comes with NO WARRANTY, to the extent permitted by law. You may redistribute copies of GNU Fortran under the terms of the GNU General Public License. For more information about these matters, see the file named COPYING configure:18016: $? = 0 configure:18005: gfortran -v >&5 Using built-in specs. 
COLLECT_GCC=d:\Toolchains\x64\MinGW-w64\4.8.0\bin\gfortran.exe COLLECT_LTO_WRAPPER=d:/toolchains/x64/mingw-w64/4.8.0/bin/../libexec/gcc/x86_64-w64-mingw32/4.8.0/lto-wrapper.exe Target: x86_64-w64-mingw32 Configured with: ../../../src/gcc-4.8.0/configure --host=x86_64-w64-mingw32 --build=x86_64-w64-mingw32 --target=x86_64-w64-mingw32 --prefix=/mingw64 --with-sysroot=/temp/x64-480-posix-seh-r2/mingw64 --enable-shared --enable-static --disable-multilib --enable-languages=c,c++,fortran,lto --enable-libstdcxx-time=yes --enable-threads=posix --enable-libgomp --enable-lto --enable-graphite --enable-checking=release --enable-fully-dynamic-string --enable-version-specific-runtime-libs --disable-isl-version-check --disable-cloog-version-check --disable-libstdcxx-pch --disable-libstdcxx-debug --disable-bootstrap --disable-rpath --disable-win32-registry --disable-nls --disable-werror --disable-symvers --with-gnu-as --with-gnu-ld --with-arch=nocona --with-tune=core2 --with-host-libstdcxx='-static -lstdc++' --with-libiconv --with-system-zlib --with-gmp=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpfr=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpc=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-isl=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-cloog=/temp/mingw-prereq/x86_64-w64-mingw32-static --enable-cloog-backend=isl --with-pkgversion='rev2, Built by MinGW-builds project' --with-bugurl=http://sourceforge.net/projects/mingwbuilds/ CFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CXXFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CPPFLAGS= LDFLAGS='-pipe -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' Thread model: posix gcc version 4.8.0 (rev2, Built by MinGW-builds project) configure:18016: $? = 0 configure:18005: gfortran -V >&5 gfortran.exe: error: unrecognized command line option '-V' gfortran.exe: fatal error: no input files compilation terminated. configure:18016: $? = 1 configure:18005: gfortran -qversion >&5 gfortran.exe: error: unrecognized command line option '-qversion' gfortran.exe: fatal error: no input files compilation terminated. configure:18016: $? = 1 configure:18025: checking whether we are using the GNU Fortran 77 compiler configure:18038: gfortran -c conftest.F >&5 configure:18038: $? = 0 configure:18047: result: yes configure:18053: checking whether gfortran accepts -g configure:18064: gfortran -c -g conftest.f >&5 configure:18064: $? = 0 configure:18072: result: yes configure:18208: checking if libtool supports shared libraries configure:18210: result: yes configure:18213: checking whether to build shared libraries configure:18233: result: no configure:18236: checking whether to build static libraries configure:18240: result: yes configure:18561: checking for gfortran option to produce PIC configure:18568: result: -DDLL_EXPORT configure:18576: checking if gfortran PIC flag -DDLL_EXPORT works configure:18594: gfortran -c -DDLL_EXPORT conftest.f >&5 configure:18598: $? 
= 0 configure:18611: result: yes configure:18634: checking if gfortran static flag -static works configure:18662: result: yes configure:18674: checking if gfortran supports -c -o file.o configure:18695: gfortran -c -o out/conftest2.o conftest.f >&5 configure:18699: $? = 0 configure:18721: result: yes configure:18726: checking if gfortran supports -c -o file.o configure:18773: result: yes configure:18803: checking whether the gfortran linker (d:/toolchains/x64/mingw-w64/4.8.0/x86_64-w64-mingw32/bin/ld.exe) supports shared libraries configure:19906: result: yes configure:20047: checking dynamic linker characteristics configure:20708: result: Win32 ld.exe configure:20761: checking how to hardcode library paths into programs configure:20786: result: immediate configure:20875: checking for ifort configure:20905: result: no configure:20875: checking for pgf90 configure:20905: result: no configure:20875: checking for pathf90 configure:20905: result: no configure:20875: checking for pathf95 configure:20905: result: no configure:20875: checking for xlf90 configure:20905: result: no configure:20875: checking for xlf95 configure:20905: result: no configure:20875: checking for xlf2003 configure:20905: result: no configure:20875: checking for gfortran configure:20891: found /d/Toolchains/x64/MinGW-w64/4.8.0/bin/gfortran configure:20902: result: gfortran configure:20928: checking for Fortran compiler version configure:20937: gfortran --version >&5 GNU Fortran (rev2, Built by MinGW-builds project) 4.8.0 Copyright (C) 2013 Free Software Foundation, Inc. GNU Fortran comes with NO WARRANTY, to the extent permitted by law. You may redistribute copies of GNU Fortran under the terms of the GNU General Public License. For more information about these matters, see the file named COPYING configure:20948: $? = 0 configure:20937: gfortran -v >&5 Using built-in specs. 
COLLECT_GCC=d:\Toolchains\x64\MinGW-w64\4.8.0\bin\gfortran.exe COLLECT_LTO_WRAPPER=d:/toolchains/x64/mingw-w64/4.8.0/bin/../libexec/gcc/x86_64-w64-mingw32/4.8.0/lto-wrapper.exe Target: x86_64-w64-mingw32 Configured with: ../../../src/gcc-4.8.0/configure --host=x86_64-w64-mingw32 --build=x86_64-w64-mingw32 --target=x86_64-w64-mingw32 --prefix=/mingw64 --with-sysroot=/temp/x64-480-posix-seh-r2/mingw64 --enable-shared --enable-static --disable-multilib --enable-languages=c,c++,fortran,lto --enable-libstdcxx-time=yes --enable-threads=posix --enable-libgomp --enable-lto --enable-graphite --enable-checking=release --enable-fully-dynamic-string --enable-version-specific-runtime-libs --disable-isl-version-check --disable-cloog-version-check --disable-libstdcxx-pch --disable-libstdcxx-debug --disable-bootstrap --disable-rpath --disable-win32-registry --disable-nls --disable-werror --disable-symvers --with-gnu-as --with-gnu-ld --with-arch=nocona --with-tune=core2 --with-host-libstdcxx='-static -lstdc++' --with-libiconv --with-system-zlib --with-gmp=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpfr=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpc=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-isl=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-cloog=/temp/mingw-prereq/x86_64-w64-mingw32-static --enable-cloog-backend=isl --with-pkgversion='rev2, Built by MinGW-builds project' --with-bugurl=http://sourceforge.net/projects/mingwbuilds/ CFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CXXFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CPPFLAGS= LDFLAGS='-pipe -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' Thread model: posix gcc version 4.8.0 (rev2, Built by MinGW-builds project) configure:20948: $? = 0 configure:20937: gfortran -V >&5 gfortran.exe: error: unrecognized command line option '-V' gfortran.exe: fatal error: no input files compilation terminated. configure:20948: $? = 1 configure:20937: gfortran -qversion >&5 gfortran.exe: error: unrecognized command line option '-qversion' gfortran.exe: fatal error: no input files compilation terminated. configure:20948: $? = 1 configure:20957: checking whether we are using the GNU Fortran compiler configure:20970: gfortran -c conftest.F >&5 configure:20970: $? = 0 configure:20979: result: yes configure:20985: checking whether gfortran accepts -g configure:20996: gfortran -c -g conftest.f >&5 configure:20996: $? = 0 configure:21004: result: yes configure:21143: checking if libtool supports shared libraries configure:21145: result: yes configure:21148: checking whether to build shared libraries configure:21168: result: no configure:21171: checking whether to build static libraries configure:21175: result: yes configure:21209: gfortran -c conftest.f >&5 configure:21212: $? = 0 configure:21641: checking for gfortran option to produce PIC configure:21648: result: -DDLL_EXPORT configure:21656: checking if gfortran PIC flag -DDLL_EXPORT works configure:21674: gfortran -c -DDLL_EXPORT conftest.f >&5 configure:21678: $? 
= 0 configure:21691: result: yes configure:21714: checking if gfortran static flag -static works configure:21742: result: yes configure:21754: checking if gfortran supports -c -o file.o configure:21775: gfortran -c -o out/conftest2.o conftest.f >&5 configure:21779: $? = 0 configure:21801: result: yes configure:21806: checking if gfortran supports -c -o file.o configure:21853: result: yes configure:21883: checking whether the gfortran linker (d:/toolchains/x64/mingw-w64/4.8.0/x86_64-w64-mingw32/bin/ld.exe) supports shared libraries configure:22986: result: yes configure:23127: checking dynamic linker characteristics configure:23788: result: Win32 ld.exe configure:23841: checking how to hardcode library paths into programs configure:23866: result: immediate configure:25066: RUNNING PREREQ FOR ch3:nemesis configure:25150: checking for getpagesize configure:25150: gcc -o conftest.exe conftest.c >&5 configure:25150: $? = 0 configure:25150: result: yes configure:25779: ===== configuring src/mpl ===== configure:25944: running /bin/sh ../../../src/mpl/configure --disable-option-checking '--prefix=D:/Libraries/x64/MinGW-w64/4.8.0/MPICH' '--enable-fast=all,O3' '--disable-checkerrors' --cache-file=/dev/null --srcdir=../../../src/mpl configure:25964: ===== done with src/mpl configure ===== configure:25970: sourcing src/mpl/localdefs WRAPPER_LIBS(='') does not contain '-lmpl', prepending CPPFLAGS(=' ') does not contain '-I/d/Distributions/mpich-3.0.4/build/src/mpl/include', appending CPPFLAGS(=' -I/d/Distributions/mpich-3.0.4/build/src/mpl/include') does not contain '-I/d/Distributions/mpich-3.0.4/src/mpl/include', appending LIBS(=' ') does not contain '-lopa', prepending configure:26050: gcc -o conftest.exe -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include conftest.c -lopa >&5 conftest.c:26:28: fatal error: opa_primitives.h: No such file or directory #include "opa_primitives.h" ^ compilation terminated. configure:26050: $? = 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | /* end confdefs.h. 
*/ | #include "opa_primitives.h" | | int | main () | { | | OPA_int_t i; | OPA_store_int(i,10); | OPA_fetch_and_incr_int(&i,5); | | ; | return 0; | } CPPFLAGS(=' -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include') does not contain '-I/d/Distributions/mpich-3.0.4/src/openpa/src', appending CPPFLAGS(=' -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src') does not contain '-I/d/Distributions/mpich-3.0.4/build/src/openpa/src', appending configure:26105: ===== configuring src/openpa ===== configure:26270: running /bin/sh ../../../src/openpa/configure --disable-option-checking '--prefix=D:/Libraries/x64/MinGW-w64/4.8.0/MPICH' --with-atomic-primitives=auto_allow_emulation '--enable-fast=all,O3' '--disable-checkerrors' --cache-file=/dev/null --srcdir=../../../src/openpa configure:26290: ===== done with src/openpa configure ===== WRAPPER_LIBS(='-lmpl ') does not contain '-lopa', prepending CPPFLAGS(=' -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src') does not contain '-I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include', appending configure:26982: checking whether the compiler defines __func__ configure:27009: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:27009: $? = 0 configure:27009: ./conftest.exe configure:27009: $? = 0 configure:27046: result: yes configure:27055: checking whether the compiler defines __FUNC__ configure:27082: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 conftest.c: In function 'foo': conftest.c:39:20: error: '__FUNC__' undeclared (first use in this function) return (strcmp(__FUNC__, "foo") == 0); ^ conftest.c:39:20: note: each undeclared identifier is reported only once for each function it appears in configure:27082: $? 
= 1 configure: program exited with status 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | /* end confdefs.h. */ | | | #include <string.h> | int foo(void); | int foo(void) | { | return (strcmp(__FUNC__, "foo") == 0); | } | int main(int argc, char ** argv) | { | return (foo() ? 0 : 1); | } | | configure:27119: result: no configure:27128: checking whether the compiler sets __FUNCTION__ configure:27155: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:27155: $? = 0 configure:27155: ./conftest.exe configure:27155: $? = 0 configure:27192: result: yes configure:27208: checking whether C compiler accepts option -O3 configure:27264: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c > pac_test1.log 2>&1 configure:27264: $? = 0 configure:27299: gcc -o conftest.exe -O3 -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c > pac_test2.log 2>&1 configure:27299: $? = 0 configure:27307: diff -b pac_test1.log pac_test2.log > pac_test.log configure:27310: $? = 0 configure:27414: result: yes configure:27430: checking whether C compiler option -O3 works with an invalid prototype program configure:27438: gcc -o conftest.exe -O3 -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:27438: $? 
= 0 configure:27445: result: yes configure:27450: checking whether routines compiled with -O3 can be linked with ones compiled without -O3 configure:27494: gcc -c -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c > pac_test3.log 2>&1 configure:27494: $? = 0 configure:27498: mv conftest.o pac_conftest.o configure:27501: $? = 0 configure:27548: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c pac_conftest.o > pac_test4.log 2>&1 configure:27548: $? = 0 configure:27594: gcc -o conftest.exe -O3 -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c pac_conftest.o > pac_test5.log 2>&1 configure:27594: $? = 0 configure:27602: diff -b pac_test4.log pac_test5.log > pac_test.log configure:27605: $? = 0 configure:27750: result: yes configure:27783: checking for type of weak symbol alias support configure:27807: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:27807: $? = 0 configure:27831: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:27831: $? = 0 configure:27834: mv conftest.o pac_conftest.o configure:27837: $? = 0 configure:27878: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c pac_conftest.o >&5 D:\Users\Haroogan\AppData\Local\Temp\ccuN7W93.o:conftest.c:(.text.startup+0x10): undefined reference to `PFoo' collect2.exe: error: ld returned 1 exit status configure:27878: $? 
= 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | /* end confdefs.h. */ | | | extern int PFoo(int); | int main(int argc, char **argv) { | return PFoo(0);} | | configure:28066: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 D:\Users\Haroogan\AppData\Local\Temp\cc6jJDvR.o:conftest.c:(.text.startup+0x13): undefined reference to `PFoo' collect2.exe: error: ld returned 1 exit status configure:28066: $? = 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | /* end confdefs.h. 
*/ | | extern int PFoo(int); | #pragma _HP_SECONDARY_DEF Foo PFoo | int Foo(int a) { return a; } | | int | main () | { | return PFoo(1); | ; | return 0; | } configure:28088: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 D:\Users\Haroogan\AppData\Local\Temp\ccIXxJsI.o:conftest.c:(.text.startup+0x13): undefined reference to `PFoo' collect2.exe: error: ld returned 1 exit status configure:28088: $? = 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | /* end confdefs.h. */ | | extern int PFoo(int); | #pragma _CRI duplicate PFoo as Foo | int Foo(int a) { return a; } | | int | main () | { | return PFoo(1); | ; | return 0; | } configure:28102: result: no configure:28123: checking whether __attribute__ ((weak)) allowed configure:28140: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:28140: $? = 0 configure:28147: result: yes configure:28151: checking whether __attribute__ ((weak_import)) allowed configure:28168: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 conftest.c:34:1: warning: 'weak_import' attribute directive ignored [-Wattributes] int foo(int) __attribute__ ((weak_import)); ^ configure:28168: $? 
= 0 configure:28175: result: yes configure:28178: checking whether __attribute__((weak,alias(...))) allowed configure:28195: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 conftest.c:34:5: error: 'foo' aliased to undefined symbol '__foo' int foo(int) __attribute__((weak,alias("__foo"))); ^ configure:28195: $? = 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | /* end confdefs.h. */ | int foo(int) __attribute__((weak,alias("__foo"))); | int | main () | { | int a; | ; | return 0; | } configure:28202: result: no configure:28380: checking for shared library (esp. rpath) characteristics of CC configure:28479: result: done (results in src/env/cc_shlib.conf) configure:28583: checking whether Fortran 77 compiler accepts option -O3 configure:28632: gfortran -o conftest.exe conftest.f > pac_test1.log 2>&1 configure:28632: $? = 0 configure:28667: gfortran -o conftest.exe -O3 conftest.f > pac_test2.log 2>&1 configure:28667: $? = 0 configure:28675: diff -b pac_test1.log pac_test2.log > pac_test.log configure:28678: $? = 0 configure:28782: result: yes configure:28787: checking whether routines compiled with -O3 can be linked with ones compiled without -O3 configure:28830: gfortran -c conftest.f > pac_test3.log 2>&1 configure:28830: $? = 0 configure:28834: mv conftest.o pac_conftest.o configure:28837: $? = 0 configure:28879: gfortran -o conftest.exe -O3 conftest.f pac_conftest.o > pac_test4.log 2>&1 configure:28879: $? = 0 configure:28887: diff -b pac_test2.log pac_test4.log > pac_test.log configure:28890: $? = 0 configure:28995: result: yes configure:29033: checking how to get verbose linking output from gfortran configure:29043: gfortran -c -O3 conftest.f >&5 configure:29043: $? = 0 configure:29061: gfortran -o conftest.exe -O3 -v conftest.f Using built-in specs. 
Target: x86_64-w64-mingw32 Thread model: posix gcc version 4.8.0 (rev2, Built by MinGW-builds project) d:/toolchains/x64/mingw-w64/4.8.0/bin/../libexec/gcc/x86_64-w64-mingw32/4.8.0/f951.exe conftest.f -ffixed-form -quiet -dumpbase conftest.f -mtune=core2 -march=nocona -auxbase conftest -O3 -version -fintrinsic-modules-path d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/finclude -o D:\Users\Haroogan\AppData\Local\Temp\ccBu4lgq.s GNU Fortran (rev2, Built by MinGW-builds project) version 4.8.0 (x86_64-w64-mingw32) compiled by GNU C version 4.7.2, GMP version 5.1.1, MPFR version 3.1.2, MPC version 1.0.1 GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072 GNU Fortran (rev2, Built by MinGW-builds project) version 4.8.0 (x86_64-w64-mingw32) compiled by GNU C version 4.7.2, GMP version 5.1.1, MPFR version 3.1.2, MPC version 1.0.1 GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072 d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/bin/as.exe -v -o D:\Users\Haroogan\AppData\Local\Temp\cceXV10K.o D:\Users\Haroogan\AppData\Local\Temp\ccBu4lgq.s GNU assembler version 2.23.2 (x86_64-w64-mingw32) using BFD version (GNU Binutils) 2.23.2 Reading specs from d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/libgfortran.spec rename spec lib to liborig d:/toolchains/x64/mingw-w64/4.8.0/bin/../libexec/gcc/x86_64-w64-mingw32/4.8.0/collect2.exe --sysroot=C:/gccbuild/msys/temp/x64-480-posix-seh-r2/mingw64 -m i386pep -Bdynamic -o conftest.exe d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib/crt2.o d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib/crtbegin.o -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. D:\Users\Haroogan\AppData\Local\Temp\cceXV10K.o -lgfortran -lmingw32 -lgcc_s -lgcc -lmoldname -lmingwex -lmsvcrt -lquadmath -lm -lmingw32 -lgcc_s -lgcc -lmoldname -lmingwex -lmsvcrt -lpthread -ladvapi32 -lshell32 -luser32 -lkernel32 -liconv -lmingw32 -lgcc_s -lgcc -lmoldname -lmingwex -lmsvcrt d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib/crtend.o configure:29144: result: -v configure:29146: checking for Fortran 77 libraries of gfortran configure:29169: gfortran -o conftest.exe -O3 -v conftest.f Using built-in specs. 
Target: x86_64-w64-mingw32 Thread model: posix gcc version 4.8.0 (rev2, Built by MinGW-builds project) d:/toolchains/x64/mingw-w64/4.8.0/bin/../libexec/gcc/x86_64-w64-mingw32/4.8.0/f951.exe conftest.f -ffixed-form -quiet -dumpbase conftest.f -mtune=core2 -march=nocona -auxbase conftest -O3 -version -fintrinsic-modules-path d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/finclude -o D:\Users\Haroogan\AppData\Local\Temp\ccZEY1dQ.s GNU Fortran (rev2, Built by MinGW-builds project) version 4.8.0 (x86_64-w64-mingw32) compiled by GNU C version 4.7.2, GMP version 5.1.1, MPFR version 3.1.2, MPC version 1.0.1 GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072 GNU Fortran (rev2, Built by MinGW-builds project) version 4.8.0 (x86_64-w64-mingw32) compiled by GNU C version 4.7.2, GMP version 5.1.1, MPFR version 3.1.2, MPC version 1.0.1 GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072 d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/bin/as.exe -v -o D:\Users\Haroogan\AppData\Local\Temp\cc2s59Ix.o D:\Users\Haroogan\AppData\Local\Temp\ccZEY1dQ.s GNU assembler version 2.23.2 (x86_64-w64-mingw32) using BFD version (GNU Binutils) 2.23.2 Reading specs from d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/libgfortran.spec rename spec lib to liborig d:/toolchains/x64/mingw-w64/4.8.0/bin/../libexec/gcc/x86_64-w64-mingw32/4.8.0/collect2.exe --sysroot=C:/gccbuild/msys/temp/x64-480-posix-seh-r2/mingw64 -m i386pep -Bdynamic -o conftest.exe d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib/crt2.o d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib/crtbegin.o -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. D:\Users\Haroogan\AppData\Local\Temp\cc2s59Ix.o -lgfortran -lmingw32 -lgcc_s -lgcc -lmoldname -lmingwex -lmsvcrt -lquadmath -lm -lmingw32 -lgcc_s -lgcc -lmoldname -lmingwex -lmsvcrt -lpthread -ladvapi32 -lshell32 -luser32 -lkernel32 -liconv -lmingw32 -lgcc_s -lgcc -lmoldname -lmingwex -lmsvcrt d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib/crtend.o configure:29365: result: -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. 
-lgfortran -lmingw32 -lmoldname -lmingwex -lmsvcrt -lquadmath -lm -lpthread -ladvapi32 -lshell32 -luser32 -lkernel32 -liconv configure:29381: checking whether gfortran accepts the FLIBS found by autoconf configure:29397: gfortran -o conftest.exe -O3 conftest.f >&5 configure:29397: $? = 0 configure:29399: result: yes configure:29437: checking whether gcc links with FLIBS found by autoconf configure:29462: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -lgfortran -lmingw32 -lmoldname -lmingwex -lmsvcrt -lquadmath -lm -lpthread -ladvapi32 -lshell32 -luser32 -lkernel32 -liconv >&5 configure:29462: $? = 0 configure:29464: result: yes configure:29514: checking whether Fortran 77 and C objects are compatible configure:29593: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:29593: $? = 0 configure:29596: mv conftest.o pac_conftest.o configure:29599: $? = 0 configure:29608: gfortran -o conftest.exe -O3 conftest.f pac_conftest.o >&5 configure:29608: $? = 0 configure:29611: result: yes configure:29770: checking for linker for Fortran main program configure:29791: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:29791: $? = 0 configure:29850: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:29850: $? = 0 configure:29853: mv conftest.o pac_conftest.o configure:29856: $? = 0 configure:29865: gfortran -o conftest.exe -O3 conftest.f pac_conftest.o >&5 configure:29865: $? = 0 configure:29867: result: Use Fortran to link programs configure:29978: checking for Fortran 77 name mangling configure:30000: gfortran -c -O3 conftest.f >&5 configure:30000: $? = 0 configure:30003: mv conftest.o f77conftest.o configure:30006: $? 
= 0 configure:30031: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c f77conftest.o -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -lgfortran -lmingw32 -lmoldname -lmingwex -lmsvcrt -lquadmath -lm -lpthread -ladvapi32 -lshell32 -luser32 -lkernel32 -liconv >&5 configure:30031: $? = 0 configure:30154: result: lower uscore configure:30220: checking for libraries to link Fortran main with C stdio routines configure:30246: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:30246: $? = 0 configure:30249: mv conftest.o pac_conftest.o configure:30252: $? = 0 configure:30271: gfortran -o conftest.exe -O3 conftest.f pac_conftest.o >&5 configure:30271: $? = 0 configure:30302: result: none configure:30353: checking whether Fortran init will work with C configure:30375: gfortran -c -O3 conftest.f >&5 configure:30375: $? = 0 configure:30378: mv conftest.o pac_f77conftest.o configure:30381: $? = 0 configure:30425: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c pac_f77conftest.o >&5 configure:30425: $? = 0 configure:30448: result: yes configure:30547: checking for extension for Fortran 90 programs configure:30562: gfortran -c conftest.f90 >&5 configure:30562: $? = 0 configure:30564: result: f90 configure:30606: checking whether the Fortran 90 compiler (gfortran ) works configure:30617: gfortran -o conftest.exe conftest.f90 >&5 configure:30617: $? = 0 configure:30620: result: yes configure:30622: checking whether the Fortran 90 compiler (gfortran ) is a cross-compiler configure:30628: gfortran -o conftest.exe conftest.f90 >&5 configure:30628: $? = 0 configure:30628: ./conftest.exe configure:30628: $? = 0 configure:30637: result: no configure:30660: checking whether Fortran 90 compiler works with Fortran 77 compiler configure:30688: gfortran -c -O3 conftest.f >&5 configure:30688: $? = 0 configure:30692: mv conftest.o pac_f77conftest.o configure:30695: $? = 0 configure:30716: gfortran -o conftest.exe conftest.f90 pac_f77conftest.o >&5 configure:30716: $? = 0 configure:30758: result: yes configure:30818: checking for shared library (esp. rpath) characteristics of F77 configure:30919: result: done (results in src/env/f77_shlib.conf) configure:30930: checking whether Fortran 77 accepts ! 
for comments configure:30948: gfortran -c -O3 conftest.f >&5 configure:30948: $? = 0 configure:30965: result: yes configure:30975: checking for include directory flag for Fortran configure:31002: gfortran -c -I src -O3 conftest.f >&5 configure:31002: $? = 0 configure:31020: result: -I configure:31028: checking for Fortran 77 flag for library directories configure:31047: gfortran -c -O3 conftest.f >&5 configure:31047: $? = 0 configure:31051: mv conftest.o pac_f77conftest.o configure:31054: $? = 0 configure:31057: test -d conftestdir || mkdir conftestdir configure:31060: $? = 0 configure:31063: ar cru conftestdir/libf77conftest.a pac_f77conftest.o configure:31066: $? = 0 configure:31069: ranlib conftestdir/libf77conftest.a configure:31072: $? = 0 configure:31090: gfortran -o conftest.exe -O3 -Lconftestdir conftest.f -lf77conftest >&5 configure:31090: $? = 0 configure:31110: result: -L configure:31186: checking whether Fortran 77 compiler processes .F files with C preprocessor configure:31207: gfortran -c -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.F >&5 Warning: Nonexistent include directory "d:/Distributions/mpich-3.0.4/build/src/mpi/romio/include" configure:31207: $? = 0 configure:31275: result: yes configure:31283: checking whether gfortran allows mismatched arguments configure:31306: gfortran -c -O3 conftest.f >&5 configure:31306: $? = 0 configure:31350: result: yes configure:31419: checking for shared library (esp. rpath) characteristics of FC configure:31520: result: done (results in src/env/fc_shlib.conf) configure:31536: checking whether the Fortran 90 compiler (gfortran ) works configure:31547: gfortran -o conftest.exe conftest.f90 >&5 configure:31547: $? = 0 configure:31550: result: yes configure:31552: checking whether the Fortran 90 compiler (gfortran ) is a cross-compiler configure:31558: gfortran -o conftest.exe conftest.f90 >&5 configure:31558: $? = 0 configure:31558: ./conftest.exe configure:31558: $? = 0 configure:31567: result: no configure:31613: checking for Fortran 90 module extension configure:31635: gfortran -c conftest.f90 >&5 configure:31635: $? = 0 configure:31687: result: mod configure:31698: checking for Fortran 90 module include flag configure:31726: gfortran -c conftest.f90 >&5 configure:31726: $? = 0 configure:31763: gfortran -c -Iconftestdir conftest.f90 >&5 configure:31763: $? = 0 configure:31822: result: -I configure:31830: checking for Fortran 90 module output directory flag configure:31861: gfortran -c conftest.f90 >&5 configure:31861: $? = 0 configure:31911: gfortran -c -Jconftestdir conftest.f90 >&5 configure:31911: $? = 0 configure:31951: result: -J configure:31989: checking whether Fortran 90 compiler accepts option -O3 configure:32038: gfortran -o conftest.exe conftest.f90 > pac_test1.log 2>&1 configure:32038: $? = 0 configure:32073: gfortran -o conftest.exe -O3 conftest.f90 > pac_test2.log 2>&1 configure:32073: $? = 0 configure:32081: diff -b pac_test1.log pac_test2.log > pac_test.log configure:32084: $? = 0 configure:32188: result: yes configure:32193: checking whether routines compiled with -O3 can be linked with ones compiled without -O3 configure:32236: gfortran -c conftest.f90 > pac_test3.log 2>&1 configure:32236: $? = 0 configure:32240: mv conftest.o pac_conftest.o configure:32243: $? 
= 0 configure:32285: gfortran -o conftest.exe -O3 conftest.f90 pac_conftest.o > pac_test4.log 2>&1 configure:32285: $? = 0 configure:32293: diff -b pac_test2.log pac_test4.log > pac_test.log configure:32296: $? = 0 configure:32401: result: yes configure:32433: checking whether Fortran 90 compiler processes .F90 files with C preprocessor configure:32454: gfortran -c -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.F90 >&5 Warning: Nonexistent include directory "d:/Distributions/mpich-3.0.4/build/src/mpi/romio/include" configure:32454: $? = 0 configure:32522: result: yes configure:32543: checking what libraries are needed to link Fortran90 programs with C routines that use stdio configure:32569: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:32569: $? = 0 configure:32573: mv conftest.o pac_conftest.o configure:32576: $? = 0 configure:32593: gfortran -o conftest.exe -O3 conftest.f90 pac_conftest.o >&5 configure:32593: $? = 0 configure:32634: result: none configure:32658: checking whether the C++ compiler c++ can build an executable configure:32687: c++ -o conftest.exe -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.cpp >&5 configure:32687: $? = 0 configure:32701: result: yes configure:32708: checking whether C++ compiler works with string configure:32731: c++ -c -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.cpp >&5 configure:32731: $? = 0 configure:32744: result: yes configure:32757: checking whether the compiler supports exceptions configure:32780: c++ -c -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.cpp >&5 configure:32780: $? = 0 configure:32794: result: yes configure:32802: checking whether the compiler recognizes bool as a built-in type configure:32829: c++ -c -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.cpp >&5 configure:32829: $? 
= 0 configure:32843: result: yes configure:32851: checking whether the compiler implements namespaces configure:32874: c++ -c -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.cpp >&5 configure:32874: $? = 0 configure:32888: result: yes configure:32905: checking whether <iostream> available configure:32924: c++ -c -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.cpp >&5 configure:32924: $? = 0 configure:32931: result: yes configure:32934: checking whether the compiler implements the namespace std configure:32961: c++ -c -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.cpp >&5 configure:32961: $? = 0 configure:32976: result: yes configure:32985: checking whether <math> available configure:33004: c++ -c -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.cpp >&5 conftest.cpp:45:16: fatal error: math: No such file or directory #include <math> ^ compilation terminated. configure:33004: $? = 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | #define ENABLE_NEM_STATISTICS 1 | #define ENABLE_RECVQ_STATISTICS 1 | #define HAVE_LONG_LONG 1 | #define STDCALL | #define F77_NAME_LOWER_USCORE 1 | #define HAVE_MPI_F_INIT_WORKS_WITH_C 1 | #define HAVE_FORTRAN_BINDING 1 | #define HAVE_CXX_EXCEPTIONS /**/ | #define HAVE_NAMESPACES /**/ | #define HAVE_NAMESPACE_STD /**/ | /* end confdefs.h. 
*/ | | #include <math> | | int | main () | { | using namespace std; | ; | return 0; | } configure:33011: result: no configure:33036: checking for GNU g++ version configure:33061: c++ -o conftest.exe -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.cpp >&5 configure:33061: $? = 0 configure:33061: ./conftest.exe configure:33061: $? = 0 configure:33071: result: 4 . 8 configure:33127: checking for shared library (esp. rpath) characteristics of CXX configure:33228: result: done (results in src/env/cxx_shlib.conf) configure:33240: checking whether C++ compiler accepts option -O3 configure:33296: c++ -o conftest.exe -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.cpp > pac_test1.log 2>&1 configure:33296: $? = 0 configure:33331: c++ -o conftest.exe -O3 -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.cpp > pac_test2.log 2>&1 configure:33331: $? = 0 configure:33339: diff -b pac_test1.log pac_test2.log > pac_test.log configure:33342: $? = 0 configure:33446: result: yes configure:33451: checking whether routines compiled with -O3 can be linked with ones compiled without -O3 configure:33495: c++ -c -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.cpp > pac_test3.log 2>&1 configure:33495: $? = 0 configure:33499: mv conftest.o pac_conftest.o configure:33502: $? = 0 configure:33550: c++ -o conftest.exe -O3 -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.cpp pac_conftest.o > pac_test4.log 2>&1 configure:33550: $? = 0 configure:33558: diff -b pac_test2.log pac_test4.log > pac_test.log configure:33561: $?
= 0 configure:33666: result: yes configure:33719: checking for perl configure:33737: found /bin/perl configure:33749: result: /bin/perl configure:33762: checking for ar configure:33789: result: ar configure:33825: checking for ranlib configure:33852: result: ranlib configure:33871: checking for killall configure:33901: result: no configure:33941: checking whether install works configure:33949: result: yes configure:33964: checking whether mkdir -p works configure:33980: result: yes configure:33998: checking for make configure:34014: found /bin/make configure:34025: result: make configure:34039: checking whether clock skew breaks make configure:34064: result: no configure:34074: checking whether make supports include configure:34102: result: yes configure:34111: checking whether make allows comments in actions configure:34138: result: yes configure:34152: checking for virtual path format configure:34195: result: VPATH configure:34205: checking whether make sets CFLAGS configure:34231: result: yes configure:34280: checking for bash configure:34298: found /bin/bash configure:34310: result: /bin/bash configure:34333: checking whether /bin/bash supports arrays configure:34342: result: yes configure:34577: checking for doctext configure:34608: result: false configure:34621: checking for an ANSI C-conforming const configure:34687: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:34687: $? = 0 configure:34694: result: yes configure:34702: checking for working volatile configure:34721: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:34721: $? = 0 configure:34728: result: yes configure:34736: checking for C/C++ restrict keyword configure:34761: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:34761: $? = 0 configure:34769: result: __restrict configure:34782: checking for inline configure:34798: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:34798: $? = 0 configure:34806: result: inline configure:34829: checking whether __attribute__ allowed configure:34846: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:34846: $? 
= 0 configure:34853: result: yes configure:34855: checking whether __attribute__((format)) allowed configure:34872: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:34872: $? = 0 configure:34879: result: yes configure:34902: checking whether byte ordering is bigendian configure:34917: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 conftest.c:51:9: error: unknown type name 'not' not a universal capable compiler ^ conftest.c:51:15: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'universal' not a universal capable compiler ^ conftest.c:51:15: error: unknown type name 'universal' configure:34917: $? = 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | #define ENABLE_NEM_STATISTICS 1 | #define ENABLE_RECVQ_STATISTICS 1 | #define HAVE_LONG_LONG 1 | #define STDCALL | #define F77_NAME_LOWER_USCORE 1 | #define HAVE_MPI_F_INIT_WORKS_WITH_C 1 | #define HAVE_FORTRAN_BINDING 1 | #define HAVE_CXX_EXCEPTIONS /**/ | #define HAVE_NAMESPACES /**/ | #define HAVE_NAMESPACE_STD /**/ | #define HAVE_CXX_BINDING 1 | #define FILE_NAMEPUB_BASEDIR "." | #define USE_FILE_FOR_NAMEPUB 1 | #define HAVE_NAMEPUB_SERVICE 1 | #define restrict __restrict | #define HAVE_GCC_ATTRIBUTE 1 | /* end confdefs.h. */ | #ifndef __APPLE_CC__ | not a universal capable compiler | #endif | typedef int dummy; | configure:34962: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:34962: $? 
= 0 configure:34980: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 conftest.c: In function 'main': conftest.c:57:4: error: unknown type name 'not' not big endian ^ conftest.c:57:12: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'endian' not big endian ^ configure:34980: $? = 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | #define ENABLE_NEM_STATISTICS 1 | #define ENABLE_RECVQ_STATISTICS 1 | #define HAVE_LONG_LONG 1 | #define STDCALL | #define F77_NAME_LOWER_USCORE 1 | #define HAVE_MPI_F_INIT_WORKS_WITH_C 1 | #define HAVE_FORTRAN_BINDING 1 | #define HAVE_CXX_EXCEPTIONS /**/ | #define HAVE_NAMESPACES /**/ | #define HAVE_NAMESPACE_STD /**/ | #define HAVE_CXX_BINDING 1 | #define FILE_NAMEPUB_BASEDIR "." | #define USE_FILE_FOR_NAMEPUB 1 | #define HAVE_NAMEPUB_SERVICE 1 | #define restrict __restrict | #define HAVE_GCC_ATTRIBUTE 1 | /* end confdefs.h. */ | #include <sys/types.h> | #include <sys/param.h> | | int | main () | { | #if BYTE_ORDER != BIG_ENDIAN | not big endian | #endif | | ; | return 0; | } configure:35108: result: no configure:35146: checking whether C compiler allows unaligned doubles configure:35179: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:35179: $? = 0 configure:35179: ./conftest.exe configure:35179: $? = 0 configure:35189: result: yes configure:35208: checking whether gcc supports __func__ configure:35224: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:35224: $?
= 0 configure:35231: result: yes configure:35268: checking whether long double is supported configure:35285: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:35285: $? = 0 configure:35292: result: yes configure:35301: checking whether long long is supported configure:35318: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:35318: $? = 0 configure:35325: result: yes configure:35336: checking for max C struct integer alignment configure:35454: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:35454: $? = 0 configure:35454: ./conftest.exe configure:35454: $? = 0 configure:35466: result: eight configure:35497: checking for max C struct floating point alignment configure:35599: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:35599: $? = 0 configure:35599: ./conftest.exe configure:35599: $? = 0 configure:35611: result: sixteen configure:35644: checking for max C struct alignment of structs with doubles configure:35715: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:35715: $? = 0 configure:35715: ./conftest.exe configure:35715: $? = 0 configure:35727: result: eight configure:35734: checking for max C struct floating point alignment with long doubles configure:35806: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:35806: $? = 0 configure:35806: ./conftest.exe configure:35806: $? = 0 configure:35818: result: sixteen configure:35828: WARNING: Structures containing long doubles may be aligned differently from structures with floats or longs. MPICH does not handle this case automatically and you should avoid assumed extents for structures containing float types. 
configure:35863: checking if alignment of structs with doubles is based on position configure:35897: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:35897: $? = 0 configure:35897: ./conftest.exe configure:35897: $? = 0 configure:35909: result: no configure:35925: checking if alignment of structs with long long ints is based on position configure:35961: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:35961: $? = 0 configure:35961: ./conftest.exe configure:35961: $? = 0 configure:35973: result: no configure:35989: checking if double alignment breaks rules, find actual alignment configure:36036: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36036: $? = 0 configure:36036: ./conftest.exe configure:36036: $? = 0 configure:36048: result: no configure:36064: checking for alignment restrictions on pointers configure:36084: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36084: $? = 0 configure:36084: ./conftest.exe configure:36084: $? = 0 configure:36101: result: int or better configure:36113: checking size of char configure:36118: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36118: $? = 0 configure:36118: ./conftest.exe configure:36118: $? = 0 configure:36132: result: 1 configure:36146: checking size of unsigned char configure:36151: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36151: $? = 0 configure:36151: ./conftest.exe configure:36151: $? = 0 configure:36165: result: 1 configure:36179: checking size of short configure:36184: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36184: $? = 0 configure:36184: ./conftest.exe configure:36184: $? 
= 0 configure:36198: result: 2 configure:36212: checking size of unsigned short configure:36217: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36217: $? = 0 configure:36217: ./conftest.exe configure:36217: $? = 0 configure:36231: result: 2 configure:36245: checking size of int configure:36250: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36250: $? = 0 configure:36250: ./conftest.exe configure:36250: $? = 0 configure:36264: result: 4 configure:36278: checking size of unsigned int configure:36283: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36283: $? = 0 configure:36283: ./conftest.exe configure:36283: $? = 0 configure:36297: result: 4 configure:36311: checking size of long configure:36316: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36316: $? = 0 configure:36316: ./conftest.exe configure:36316: $? = 0 configure:36330: result: 4 configure:36344: checking size of unsigned long configure:36349: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36349: $? = 0 configure:36349: ./conftest.exe configure:36349: $? = 0 configure:36363: result: 4 configure:36377: checking size of long long configure:36382: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36382: $? = 0 configure:36382: ./conftest.exe configure:36382: $? = 0 configure:36396: result: 8 configure:36410: checking size of unsigned long long configure:36415: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36415: $? = 0 configure:36415: ./conftest.exe configure:36415: $? 
= 0 configure:36429: result: 8 configure:36443: checking size of float configure:36448: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36448: $? = 0 configure:36448: ./conftest.exe configure:36448: $? = 0 configure:36462: result: 4 configure:36476: checking size of double configure:36481: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36481: $? = 0 configure:36481: ./conftest.exe configure:36481: $? = 0 configure:36495: result: 8 configure:36509: checking size of long double configure:36514: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36514: $? = 0 configure:36514: ./conftest.exe configure:36514: $? = 0 configure:36528: result: 16 configure:36542: checking size of void * configure:36547: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36547: $? = 0 configure:36547: ./conftest.exe configure:36547: $? = 0 configure:36561: result: 8 configure:36572: checking for ANSI C header files configure:36676: result: yes configure:36686: checking stddef.h usability configure:36686: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36686: $? = 0 configure:36686: result: yes configure:36686: checking stddef.h presence configure:36686: gcc -E -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c configure:36686: $? = 0 configure:36686: result: yes configure:36686: checking for stddef.h configure:36686: result: yes configure:36700: checking size of wchar_t configure:36705: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36705: $? = 0 configure:36705: ./conftest.exe configure:36705: $? 
= 0 configure:36724: result: 2 configure:36739: checking size of float_int configure:36744: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36744: $? = 0 configure:36744: ./conftest.exe configure:36744: $? = 0 configure:36759: result: 8 configure:36773: checking size of double_int configure:36778: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36778: $? = 0 configure:36778: ./conftest.exe configure:36778: $? = 0 configure:36793: result: 16 configure:36807: checking size of long_int configure:36812: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36812: $? = 0 configure:36812: ./conftest.exe configure:36812: $? = 0 configure:36827: result: 8 configure:36841: checking size of short_int configure:36846: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36846: $? = 0 configure:36846: ./conftest.exe configure:36846: $? = 0 configure:36861: result: 8 configure:36875: checking size of two_int configure:36880: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36880: $? = 0 configure:36880: ./conftest.exe configure:36880: $? = 0 configure:36895: result: 8 configure:36909: checking size of long_double_int configure:36914: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36914: $? = 0 configure:36914: ./conftest.exe configure:36914: $? = 0 configure:36929: result: 32 configure:36942: checking sys/bitypes.h usability configure:36942: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 conftest.c:114:25: fatal error: sys/bitypes.h: No such file or directory #include ^ compilation terminated. configure:36942: $? 
= 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | #define ENABLE_NEM_STATISTICS 1 | #define ENABLE_RECVQ_STATISTICS 1 | #define HAVE_LONG_LONG 1 | #define STDCALL | #define F77_NAME_LOWER_USCORE 1 | #define HAVE_MPI_F_INIT_WORKS_WITH_C 1 | #define HAVE_FORTRAN_BINDING 1 | #define HAVE_CXX_EXCEPTIONS /**/ | #define HAVE_NAMESPACES /**/ | #define HAVE_NAMESPACE_STD /**/ | #define HAVE_CXX_BINDING 1 | #define FILE_NAMEPUB_BASEDIR "." | #define USE_FILE_FOR_NAMEPUB 1 | #define HAVE_NAMEPUB_SERVICE 1 | #define restrict __restrict | #define HAVE_GCC_ATTRIBUTE 1 | #define WORDS_LITTLEENDIAN 1 | #define HAVE_LONG_DOUBLE 1 | #define HAVE_LONG_LONG_INT 1 | #define HAVE_MAX_INTEGER_ALIGNMENT 8 | #define HAVE_MAX_STRUCT_ALIGNMENT 8 | #define HAVE_MAX_FP_ALIGNMENT 16 | #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8 | #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16 | #define SIZEOF_CHAR 1 | #define SIZEOF_UNSIGNED_CHAR 1 | #define SIZEOF_SHORT 2 | #define SIZEOF_UNSIGNED_SHORT 2 | #define SIZEOF_INT 4 | #define SIZEOF_UNSIGNED_INT 4 | #define SIZEOF_LONG 4 | #define SIZEOF_UNSIGNED_LONG 4 | #define SIZEOF_LONG_LONG 8 | #define SIZEOF_UNSIGNED_LONG_LONG 8 | #define SIZEOF_FLOAT 4 | #define SIZEOF_DOUBLE 8 | #define SIZEOF_LONG_DOUBLE 16 | #define SIZEOF_VOID_P 8 | #define STDC_HEADERS 1 | #define HAVE_STDDEF_H 1 | #define SIZEOF_WCHAR_T 2 | #define SIZEOF_FLOAT_INT 8 | #define SIZEOF_DOUBLE_INT 16 | #define SIZEOF_LONG_INT 8 | #define SIZEOF_SHORT_INT 8 | #define SIZEOF_TWO_INT 8 | #define SIZEOF_LONG_DOUBLE_INT 32 | /* end confdefs.h. 
*/ | #include <stdio.h> | #ifdef HAVE_SYS_TYPES_H | # include <sys/types.h> | #endif | #ifdef HAVE_SYS_STAT_H | # include <sys/stat.h> | #endif | #ifdef STDC_HEADERS | # include <stdlib.h> | # include <stddef.h> | #else | # ifdef HAVE_STDLIB_H | # include <stdlib.h> | # endif | #endif | #ifdef HAVE_STRING_H | # if !defined STDC_HEADERS && defined HAVE_MEMORY_H | # include <memory.h> | # endif | # include <string.h> | #endif | #ifdef HAVE_STRINGS_H | # include <strings.h> | #endif | #ifdef HAVE_INTTYPES_H | # include <inttypes.h> | #endif | #ifdef HAVE_STDINT_H | # include <stdint.h> | #endif | #ifdef HAVE_UNISTD_H | # include <unistd.h> | #endif | #include <sys/bitypes.h> configure:36942: result: no configure:36942: checking sys/bitypes.h presence configure:36942: gcc -E -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c conftest.c:81:25: fatal error: sys/bitypes.h: No such file or directory #include <sys/bitypes.h> ^ compilation terminated. configure:36942: $? = 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | #define ENABLE_NEM_STATISTICS 1 | #define ENABLE_RECVQ_STATISTICS 1 | #define HAVE_LONG_LONG 1 | #define STDCALL | #define F77_NAME_LOWER_USCORE 1 | #define HAVE_MPI_F_INIT_WORKS_WITH_C 1 | #define HAVE_FORTRAN_BINDING 1 | #define HAVE_CXX_EXCEPTIONS /**/ | #define HAVE_NAMESPACES /**/ | #define HAVE_NAMESPACE_STD /**/ | #define HAVE_CXX_BINDING 1 | #define FILE_NAMEPUB_BASEDIR "."
| #define USE_FILE_FOR_NAMEPUB 1 | #define HAVE_NAMEPUB_SERVICE 1 | #define restrict __restrict | #define HAVE_GCC_ATTRIBUTE 1 | #define WORDS_LITTLEENDIAN 1 | #define HAVE_LONG_DOUBLE 1 | #define HAVE_LONG_LONG_INT 1 | #define HAVE_MAX_INTEGER_ALIGNMENT 8 | #define HAVE_MAX_STRUCT_ALIGNMENT 8 | #define HAVE_MAX_FP_ALIGNMENT 16 | #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8 | #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16 | #define SIZEOF_CHAR 1 | #define SIZEOF_UNSIGNED_CHAR 1 | #define SIZEOF_SHORT 2 | #define SIZEOF_UNSIGNED_SHORT 2 | #define SIZEOF_INT 4 | #define SIZEOF_UNSIGNED_INT 4 | #define SIZEOF_LONG 4 | #define SIZEOF_UNSIGNED_LONG 4 | #define SIZEOF_LONG_LONG 8 | #define SIZEOF_UNSIGNED_LONG_LONG 8 | #define SIZEOF_FLOAT 4 | #define SIZEOF_DOUBLE 8 | #define SIZEOF_LONG_DOUBLE 16 | #define SIZEOF_VOID_P 8 | #define STDC_HEADERS 1 | #define HAVE_STDDEF_H 1 | #define SIZEOF_WCHAR_T 2 | #define SIZEOF_FLOAT_INT 8 | #define SIZEOF_DOUBLE_INT 16 | #define SIZEOF_LONG_INT 8 | #define SIZEOF_SHORT_INT 8 | #define SIZEOF_TWO_INT 8 | #define SIZEOF_LONG_DOUBLE_INT 32 | /* end confdefs.h. */ | #include <sys/bitypes.h> configure:36942: result: no configure:36942: checking for sys/bitypes.h configure:36942: result: no configure:36955: checking for inttypes.h configure:36955: result: yes configure:36955: checking for stdint.h configure:36955: result: yes configure:36967: checking for int8_t configure:36967: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36967: $? = 0 configure:36967: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 conftest.c: In function 'main': conftest.c:120:12: error: size of array 'test_array' is negative static int test_array [1 - 2 * !((int8_t) (((((int8_t) 1 << N) << N) - 1) * 2 + 1) ^ configure:36967: $?
= 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | #define ENABLE_NEM_STATISTICS 1 | #define ENABLE_RECVQ_STATISTICS 1 | #define HAVE_LONG_LONG 1 | #define STDCALL | #define F77_NAME_LOWER_USCORE 1 | #define HAVE_MPI_F_INIT_WORKS_WITH_C 1 | #define HAVE_FORTRAN_BINDING 1 | #define HAVE_CXX_EXCEPTIONS /**/ | #define HAVE_NAMESPACES /**/ | #define HAVE_NAMESPACE_STD /**/ | #define HAVE_CXX_BINDING 1 | #define FILE_NAMEPUB_BASEDIR "." | #define USE_FILE_FOR_NAMEPUB 1 | #define HAVE_NAMEPUB_SERVICE 1 | #define restrict __restrict | #define HAVE_GCC_ATTRIBUTE 1 | #define WORDS_LITTLEENDIAN 1 | #define HAVE_LONG_DOUBLE 1 | #define HAVE_LONG_LONG_INT 1 | #define HAVE_MAX_INTEGER_ALIGNMENT 8 | #define HAVE_MAX_STRUCT_ALIGNMENT 8 | #define HAVE_MAX_FP_ALIGNMENT 16 | #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8 | #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16 | #define SIZEOF_CHAR 1 | #define SIZEOF_UNSIGNED_CHAR 1 | #define SIZEOF_SHORT 2 | #define SIZEOF_UNSIGNED_SHORT 2 | #define SIZEOF_INT 4 | #define SIZEOF_UNSIGNED_INT 4 | #define SIZEOF_LONG 4 | #define SIZEOF_UNSIGNED_LONG 4 | #define SIZEOF_LONG_LONG 8 | #define SIZEOF_UNSIGNED_LONG_LONG 8 | #define SIZEOF_FLOAT 4 | #define SIZEOF_DOUBLE 8 | #define SIZEOF_LONG_DOUBLE 16 | #define SIZEOF_VOID_P 8 | #define STDC_HEADERS 1 | #define HAVE_STDDEF_H 1 | #define SIZEOF_WCHAR_T 2 | #define SIZEOF_FLOAT_INT 8 | #define SIZEOF_DOUBLE_INT 16 | #define SIZEOF_LONG_INT 8 | #define SIZEOF_SHORT_INT 8 | #define SIZEOF_TWO_INT 8 | #define SIZEOF_LONG_DOUBLE_INT 32 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | /* end confdefs.h. 
*/ | #include | #ifdef HAVE_SYS_TYPES_H | # include | #endif | #ifdef HAVE_SYS_STAT_H | # include | #endif | #ifdef STDC_HEADERS | # include | # include | #else | # ifdef HAVE_STDLIB_H | # include | # endif | #endif | #ifdef HAVE_STRING_H | # if !defined STDC_HEADERS && defined HAVE_MEMORY_H | # include | # endif | # include | #endif | #ifdef HAVE_STRINGS_H | # include | #endif | #ifdef HAVE_INTTYPES_H | # include | #endif | #ifdef HAVE_STDINT_H | # include | #endif | #ifdef HAVE_UNISTD_H | # include | #endif | enum { N = 8 / 2 - 1 }; | int | main () | { | static int test_array [1 - 2 * !((int8_t) (((((int8_t) 1 << N) << N) - 1) * 2 + 1) | < (int8_t) (((((int8_t) 1 << N) << N) - 1) * 2 + 2))]; | test_array [0] = 0; | return test_array [0]; | | ; | return 0; | } configure:36967: result: yes configure:36978: checking for int16_t configure:36978: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36978: $? = 0 configure:36978: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 conftest.c: In function 'main': conftest.c:120:12: error: size of array 'test_array' is negative static int test_array [1 - 2 * !((int16_t) (((((int16_t) 1 << N) << N) - 1) * 2 + 1) ^ configure:36978: $? = 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | #define ENABLE_NEM_STATISTICS 1 | #define ENABLE_RECVQ_STATISTICS 1 | #define HAVE_LONG_LONG 1 | #define STDCALL | #define F77_NAME_LOWER_USCORE 1 | #define HAVE_MPI_F_INIT_WORKS_WITH_C 1 | #define HAVE_FORTRAN_BINDING 1 | #define HAVE_CXX_EXCEPTIONS /**/ | #define HAVE_NAMESPACES /**/ | #define HAVE_NAMESPACE_STD /**/ | #define HAVE_CXX_BINDING 1 | #define FILE_NAMEPUB_BASEDIR "." 
| #define USE_FILE_FOR_NAMEPUB 1 | #define HAVE_NAMEPUB_SERVICE 1 | #define restrict __restrict | #define HAVE_GCC_ATTRIBUTE 1 | #define WORDS_LITTLEENDIAN 1 | #define HAVE_LONG_DOUBLE 1 | #define HAVE_LONG_LONG_INT 1 | #define HAVE_MAX_INTEGER_ALIGNMENT 8 | #define HAVE_MAX_STRUCT_ALIGNMENT 8 | #define HAVE_MAX_FP_ALIGNMENT 16 | #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8 | #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16 | #define SIZEOF_CHAR 1 | #define SIZEOF_UNSIGNED_CHAR 1 | #define SIZEOF_SHORT 2 | #define SIZEOF_UNSIGNED_SHORT 2 | #define SIZEOF_INT 4 | #define SIZEOF_UNSIGNED_INT 4 | #define SIZEOF_LONG 4 | #define SIZEOF_UNSIGNED_LONG 4 | #define SIZEOF_LONG_LONG 8 | #define SIZEOF_UNSIGNED_LONG_LONG 8 | #define SIZEOF_FLOAT 4 | #define SIZEOF_DOUBLE 8 | #define SIZEOF_LONG_DOUBLE 16 | #define SIZEOF_VOID_P 8 | #define STDC_HEADERS 1 | #define HAVE_STDDEF_H 1 | #define SIZEOF_WCHAR_T 2 | #define SIZEOF_FLOAT_INT 8 | #define SIZEOF_DOUBLE_INT 16 | #define SIZEOF_LONG_INT 8 | #define SIZEOF_SHORT_INT 8 | #define SIZEOF_TWO_INT 8 | #define SIZEOF_LONG_DOUBLE_INT 32 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | /* end confdefs.h. */ | #include | #ifdef HAVE_SYS_TYPES_H | # include | #endif | #ifdef HAVE_SYS_STAT_H | # include | #endif | #ifdef STDC_HEADERS | # include | # include | #else | # ifdef HAVE_STDLIB_H | # include | # endif | #endif | #ifdef HAVE_STRING_H | # if !defined STDC_HEADERS && defined HAVE_MEMORY_H | # include | # endif | # include | #endif | #ifdef HAVE_STRINGS_H | # include | #endif | #ifdef HAVE_INTTYPES_H | # include | #endif | #ifdef HAVE_STDINT_H | # include | #endif | #ifdef HAVE_UNISTD_H | # include | #endif | enum { N = 16 / 2 - 1 }; | int | main () | { | static int test_array [1 - 2 * !((int16_t) (((((int16_t) 1 << N) << N) - 1) * 2 + 1) | < (int16_t) (((((int16_t) 1 << N) << N) - 1) * 2 + 2))]; | test_array [0] = 0; | return test_array [0]; | | ; | return 0; | } configure:36978: result: yes configure:36989: checking for int32_t configure:36989: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:36989: $? = 0 configure:36989: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 conftest.c: In function 'main': conftest.c:121:53: warning: integer overflow in expression [-Woverflow] < (int32_t) (((((int32_t) 1 << N) << N) - 1) * 2 + 2))]; ^ conftest.c:120:12: error: storage size of 'test_array' isn't constant static int test_array [1 - 2 * !((int32_t) (((((int32_t) 1 << N) << N) - 1) * 2 + 1) ^ configure:36989: $? 
= 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | #define ENABLE_NEM_STATISTICS 1 | #define ENABLE_RECVQ_STATISTICS 1 | #define HAVE_LONG_LONG 1 | #define STDCALL | #define F77_NAME_LOWER_USCORE 1 | #define HAVE_MPI_F_INIT_WORKS_WITH_C 1 | #define HAVE_FORTRAN_BINDING 1 | #define HAVE_CXX_EXCEPTIONS /**/ | #define HAVE_NAMESPACES /**/ | #define HAVE_NAMESPACE_STD /**/ | #define HAVE_CXX_BINDING 1 | #define FILE_NAMEPUB_BASEDIR "." | #define USE_FILE_FOR_NAMEPUB 1 | #define HAVE_NAMEPUB_SERVICE 1 | #define restrict __restrict | #define HAVE_GCC_ATTRIBUTE 1 | #define WORDS_LITTLEENDIAN 1 | #define HAVE_LONG_DOUBLE 1 | #define HAVE_LONG_LONG_INT 1 | #define HAVE_MAX_INTEGER_ALIGNMENT 8 | #define HAVE_MAX_STRUCT_ALIGNMENT 8 | #define HAVE_MAX_FP_ALIGNMENT 16 | #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8 | #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16 | #define SIZEOF_CHAR 1 | #define SIZEOF_UNSIGNED_CHAR 1 | #define SIZEOF_SHORT 2 | #define SIZEOF_UNSIGNED_SHORT 2 | #define SIZEOF_INT 4 | #define SIZEOF_UNSIGNED_INT 4 | #define SIZEOF_LONG 4 | #define SIZEOF_UNSIGNED_LONG 4 | #define SIZEOF_LONG_LONG 8 | #define SIZEOF_UNSIGNED_LONG_LONG 8 | #define SIZEOF_FLOAT 4 | #define SIZEOF_DOUBLE 8 | #define SIZEOF_LONG_DOUBLE 16 | #define SIZEOF_VOID_P 8 | #define STDC_HEADERS 1 | #define HAVE_STDDEF_H 1 | #define SIZEOF_WCHAR_T 2 | #define SIZEOF_FLOAT_INT 8 | #define SIZEOF_DOUBLE_INT 16 | #define SIZEOF_LONG_INT 8 | #define SIZEOF_SHORT_INT 8 | #define SIZEOF_TWO_INT 8 | #define SIZEOF_LONG_DOUBLE_INT 32 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | /* end confdefs.h. 
*/ | #include | #ifdef HAVE_SYS_TYPES_H | # include | #endif | #ifdef HAVE_SYS_STAT_H | # include | #endif | #ifdef STDC_HEADERS | # include | # include | #else | # ifdef HAVE_STDLIB_H | # include | # endif | #endif | #ifdef HAVE_STRING_H | # if !defined STDC_HEADERS && defined HAVE_MEMORY_H | # include | # endif | # include | #endif | #ifdef HAVE_STRINGS_H | # include | #endif | #ifdef HAVE_INTTYPES_H | # include | #endif | #ifdef HAVE_STDINT_H | # include | #endif | #ifdef HAVE_UNISTD_H | # include | #endif | enum { N = 32 / 2 - 1 }; | int | main () | { | static int test_array [1 - 2 * !((int32_t) (((((int32_t) 1 << N) << N) - 1) * 2 + 1) | < (int32_t) (((((int32_t) 1 << N) << N) - 1) * 2 + 2))]; | test_array [0] = 0; | return test_array [0]; | | ; | return 0; | } configure:36989: result: yes configure:37000: checking for int64_t configure:37000: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:37000: $? = 0 configure:37000: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 conftest.c: In function 'main': conftest.c:121:53: warning: integer overflow in expression [-Woverflow] < (int64_t) (((((int64_t) 1 << N) << N) - 1) * 2 + 2))]; ^ conftest.c:120:12: error: storage size of 'test_array' isn't constant static int test_array [1 - 2 * !((int64_t) (((((int64_t) 1 << N) << N) - 1) * 2 + 1) ^ configure:37000: $? = 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | #define ENABLE_NEM_STATISTICS 1 | #define ENABLE_RECVQ_STATISTICS 1 | #define HAVE_LONG_LONG 1 | #define STDCALL | #define F77_NAME_LOWER_USCORE 1 | #define HAVE_MPI_F_INIT_WORKS_WITH_C 1 | #define HAVE_FORTRAN_BINDING 1 | #define HAVE_CXX_EXCEPTIONS /**/ | #define HAVE_NAMESPACES /**/ | #define HAVE_NAMESPACE_STD /**/ | #define HAVE_CXX_BINDING 1 | #define FILE_NAMEPUB_BASEDIR "." 
| #define USE_FILE_FOR_NAMEPUB 1 | #define HAVE_NAMEPUB_SERVICE 1 | #define restrict __restrict | #define HAVE_GCC_ATTRIBUTE 1 | #define WORDS_LITTLEENDIAN 1 | #define HAVE_LONG_DOUBLE 1 | #define HAVE_LONG_LONG_INT 1 | #define HAVE_MAX_INTEGER_ALIGNMENT 8 | #define HAVE_MAX_STRUCT_ALIGNMENT 8 | #define HAVE_MAX_FP_ALIGNMENT 16 | #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8 | #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16 | #define SIZEOF_CHAR 1 | #define SIZEOF_UNSIGNED_CHAR 1 | #define SIZEOF_SHORT 2 | #define SIZEOF_UNSIGNED_SHORT 2 | #define SIZEOF_INT 4 | #define SIZEOF_UNSIGNED_INT 4 | #define SIZEOF_LONG 4 | #define SIZEOF_UNSIGNED_LONG 4 | #define SIZEOF_LONG_LONG 8 | #define SIZEOF_UNSIGNED_LONG_LONG 8 | #define SIZEOF_FLOAT 4 | #define SIZEOF_DOUBLE 8 | #define SIZEOF_LONG_DOUBLE 16 | #define SIZEOF_VOID_P 8 | #define STDC_HEADERS 1 | #define HAVE_STDDEF_H 1 | #define SIZEOF_WCHAR_T 2 | #define SIZEOF_FLOAT_INT 8 | #define SIZEOF_DOUBLE_INT 16 | #define SIZEOF_LONG_INT 8 | #define SIZEOF_SHORT_INT 8 | #define SIZEOF_TWO_INT 8 | #define SIZEOF_LONG_DOUBLE_INT 32 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | /* end confdefs.h. */ | #include | #ifdef HAVE_SYS_TYPES_H | # include | #endif | #ifdef HAVE_SYS_STAT_H | # include | #endif | #ifdef STDC_HEADERS | # include | # include | #else | # ifdef HAVE_STDLIB_H | # include | # endif | #endif | #ifdef HAVE_STRING_H | # if !defined STDC_HEADERS && defined HAVE_MEMORY_H | # include | # endif | # include | #endif | #ifdef HAVE_STRINGS_H | # include | #endif | #ifdef HAVE_INTTYPES_H | # include | #endif | #ifdef HAVE_STDINT_H | # include | #endif | #ifdef HAVE_UNISTD_H | # include | #endif | enum { N = 64 / 2 - 1 }; | int | main () | { | static int test_array [1 - 2 * !((int64_t) (((((int64_t) 1 << N) << N) - 1) * 2 + 1) | < (int64_t) (((((int64_t) 1 << N) << N) - 1) * 2 + 2))]; | test_array [0] = 0; | return test_array [0]; | | ; | return 0; | } configure:37000: result: yes configure:37041: checking for uint8_t configure:37041: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:37041: $? = 0 configure:37041: result: yes configure:37055: checking for uint16_t configure:37055: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:37055: $? = 0 configure:37055: result: yes configure:37067: checking for uint32_t configure:37067: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:37067: $? 
= 0 configure:37067: result: yes configure:37081: checking for uint64_t configure:37081: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:37081: $? = 0 configure:37081: result: yes configure:37123: checking stdbool.h usability configure:37123: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:37123: $? = 0 configure:37123: result: yes configure:37123: checking stdbool.h presence configure:37123: gcc -E -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c configure:37123: $? = 0 configure:37123: result: yes configure:37123: checking for stdbool.h configure:37123: result: yes configure:37123: checking complex.h usability configure:37123: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:37123: $? = 0 configure:37123: result: yes configure:37123: checking complex.h presence configure:37123: gcc -E -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c configure:37123: $? = 0 configure:37123: result: yes configure:37123: checking for complex.h configure:37123: result: yes configure:37137: checking size of _Bool configure:37142: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:37142: $? = 0 configure:37142: ./conftest.exe configure:37142: $? = 0 configure:37161: result: 1 configure:37175: checking size of float _Complex configure:37180: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:37180: $? = 0 configure:37180: ./conftest.exe configure:37180: $? 
= 0 configure:37199: result: 8 configure:37213: checking size of double _Complex configure:37218: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:37218: $? = 0 configure:37218: ./conftest.exe configure:37218: $? = 0 configure:37237: result: 16 configure:37253: checking size of long double _Complex configure:37258: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:37258: $? = 0 configure:37258: ./conftest.exe configure:37258: $? = 0 configure:37277: result: 32 configure:37292: checking for _Bool configure:37292: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:37292: $? = 0 configure:37292: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 conftest.c: In function 'main': conftest.c:133:20: error: expected expression before ')' token if (sizeof ((_Bool))) ^ configure:37292: $? = 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | #define ENABLE_NEM_STATISTICS 1 | #define ENABLE_RECVQ_STATISTICS 1 | #define HAVE_LONG_LONG 1 | #define STDCALL | #define F77_NAME_LOWER_USCORE 1 | #define HAVE_MPI_F_INIT_WORKS_WITH_C 1 | #define HAVE_FORTRAN_BINDING 1 | #define HAVE_CXX_EXCEPTIONS /**/ | #define HAVE_NAMESPACES /**/ | #define HAVE_NAMESPACE_STD /**/ | #define HAVE_CXX_BINDING 1 | #define FILE_NAMEPUB_BASEDIR "." 
| #define USE_FILE_FOR_NAMEPUB 1 | #define HAVE_NAMEPUB_SERVICE 1 | #define restrict __restrict | #define HAVE_GCC_ATTRIBUTE 1 | #define WORDS_LITTLEENDIAN 1 | #define HAVE_LONG_DOUBLE 1 | #define HAVE_LONG_LONG_INT 1 | #define HAVE_MAX_INTEGER_ALIGNMENT 8 | #define HAVE_MAX_STRUCT_ALIGNMENT 8 | #define HAVE_MAX_FP_ALIGNMENT 16 | #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8 | #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16 | #define SIZEOF_CHAR 1 | #define SIZEOF_UNSIGNED_CHAR 1 | #define SIZEOF_SHORT 2 | #define SIZEOF_UNSIGNED_SHORT 2 | #define SIZEOF_INT 4 | #define SIZEOF_UNSIGNED_INT 4 | #define SIZEOF_LONG 4 | #define SIZEOF_UNSIGNED_LONG 4 | #define SIZEOF_LONG_LONG 8 | #define SIZEOF_UNSIGNED_LONG_LONG 8 | #define SIZEOF_FLOAT 4 | #define SIZEOF_DOUBLE 8 | #define SIZEOF_LONG_DOUBLE 16 | #define SIZEOF_VOID_P 8 | #define STDC_HEADERS 1 | #define HAVE_STDDEF_H 1 | #define SIZEOF_WCHAR_T 2 | #define SIZEOF_FLOAT_INT 8 | #define SIZEOF_DOUBLE_INT 16 | #define SIZEOF_LONG_INT 8 | #define SIZEOF_SHORT_INT 8 | #define SIZEOF_TWO_INT 8 | #define SIZEOF_LONG_DOUBLE_INT 32 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_INT8_T 1 | #define HAVE_INT16_T 1 | #define HAVE_INT32_T 1 | #define HAVE_INT64_T 1 | #define HAVE_UINT8_T 1 | #define HAVE_UINT16_T 1 | #define HAVE_UINT32_T 1 | #define HAVE_UINT64_T 1 | #define HAVE_STDBOOL_H 1 | #define HAVE_COMPLEX_H 1 | #define SIZEOF__BOOL 1 | #define SIZEOF_FLOAT__COMPLEX 8 | #define SIZEOF_DOUBLE__COMPLEX 16 | #define SIZEOF_LONG_DOUBLE__COMPLEX 32 | /* end confdefs.h. */ | #include | #ifdef HAVE_SYS_TYPES_H | # include | #endif | #ifdef HAVE_SYS_STAT_H | # include | #endif | #ifdef STDC_HEADERS | # include | # include | #else | # ifdef HAVE_STDLIB_H | # include | # endif | #endif | #ifdef HAVE_STRING_H | # if !defined STDC_HEADERS && defined HAVE_MEMORY_H | # include | # endif | # include | #endif | #ifdef HAVE_STRINGS_H | # include | #endif | #ifdef HAVE_INTTYPES_H | # include | #endif | #ifdef HAVE_STDINT_H | # include | #endif | #ifdef HAVE_UNISTD_H | # include | #endif | int | main () | { | if (sizeof ((_Bool))) | return 0; | ; | return 0; | } configure:37292: result: yes configure:37301: checking for float _Complex configure:37301: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:37301: $? = 0 configure:37301: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 conftest.c: In function 'main': conftest.c:134:29: error: expected expression before ')' token if (sizeof ((float _Complex))) ^ configure:37301: $? 
= 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | #define ENABLE_NEM_STATISTICS 1 | #define ENABLE_RECVQ_STATISTICS 1 | #define HAVE_LONG_LONG 1 | #define STDCALL | #define F77_NAME_LOWER_USCORE 1 | #define HAVE_MPI_F_INIT_WORKS_WITH_C 1 | #define HAVE_FORTRAN_BINDING 1 | #define HAVE_CXX_EXCEPTIONS /**/ | #define HAVE_NAMESPACES /**/ | #define HAVE_NAMESPACE_STD /**/ | #define HAVE_CXX_BINDING 1 | #define FILE_NAMEPUB_BASEDIR "." | #define USE_FILE_FOR_NAMEPUB 1 | #define HAVE_NAMEPUB_SERVICE 1 | #define restrict __restrict | #define HAVE_GCC_ATTRIBUTE 1 | #define WORDS_LITTLEENDIAN 1 | #define HAVE_LONG_DOUBLE 1 | #define HAVE_LONG_LONG_INT 1 | #define HAVE_MAX_INTEGER_ALIGNMENT 8 | #define HAVE_MAX_STRUCT_ALIGNMENT 8 | #define HAVE_MAX_FP_ALIGNMENT 16 | #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8 | #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16 | #define SIZEOF_CHAR 1 | #define SIZEOF_UNSIGNED_CHAR 1 | #define SIZEOF_SHORT 2 | #define SIZEOF_UNSIGNED_SHORT 2 | #define SIZEOF_INT 4 | #define SIZEOF_UNSIGNED_INT 4 | #define SIZEOF_LONG 4 | #define SIZEOF_UNSIGNED_LONG 4 | #define SIZEOF_LONG_LONG 8 | #define SIZEOF_UNSIGNED_LONG_LONG 8 | #define SIZEOF_FLOAT 4 | #define SIZEOF_DOUBLE 8 | #define SIZEOF_LONG_DOUBLE 16 | #define SIZEOF_VOID_P 8 | #define STDC_HEADERS 1 | #define HAVE_STDDEF_H 1 | #define SIZEOF_WCHAR_T 2 | #define SIZEOF_FLOAT_INT 8 | #define SIZEOF_DOUBLE_INT 16 | #define SIZEOF_LONG_INT 8 | #define SIZEOF_SHORT_INT 8 | #define SIZEOF_TWO_INT 8 | #define SIZEOF_LONG_DOUBLE_INT 32 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_INT8_T 1 | #define HAVE_INT16_T 1 | #define HAVE_INT32_T 1 | #define HAVE_INT64_T 1 | #define HAVE_UINT8_T 1 | #define HAVE_UINT16_T 1 | #define HAVE_UINT32_T 1 | #define HAVE_UINT64_T 1 | #define HAVE_STDBOOL_H 1 | #define HAVE_COMPLEX_H 1 | #define SIZEOF__BOOL 1 | #define SIZEOF_FLOAT__COMPLEX 8 | #define SIZEOF_DOUBLE__COMPLEX 16 | #define SIZEOF_LONG_DOUBLE__COMPLEX 32 | #define HAVE__BOOL 1 | /* end confdefs.h. 
*/ | #include | #ifdef HAVE_SYS_TYPES_H | # include | #endif | #ifdef HAVE_SYS_STAT_H | # include | #endif | #ifdef STDC_HEADERS | # include | # include | #else | # ifdef HAVE_STDLIB_H | # include | # endif | #endif | #ifdef HAVE_STRING_H | # if !defined STDC_HEADERS && defined HAVE_MEMORY_H | # include | # endif | # include | #endif | #ifdef HAVE_STRINGS_H | # include | #endif | #ifdef HAVE_INTTYPES_H | # include | #endif | #ifdef HAVE_STDINT_H | # include | #endif | #ifdef HAVE_UNISTD_H | # include | #endif | int | main () | { | if (sizeof ((float _Complex))) | return 0; | ; | return 0; | } configure:37301: result: yes configure:37310: checking for double _Complex configure:37310: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:37310: $? = 0 configure:37310: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 conftest.c: In function 'main': conftest.c:135:30: error: expected expression before ')' token if (sizeof ((double _Complex))) ^ configure:37310: $? = 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | #define ENABLE_NEM_STATISTICS 1 | #define ENABLE_RECVQ_STATISTICS 1 | #define HAVE_LONG_LONG 1 | #define STDCALL | #define F77_NAME_LOWER_USCORE 1 | #define HAVE_MPI_F_INIT_WORKS_WITH_C 1 | #define HAVE_FORTRAN_BINDING 1 | #define HAVE_CXX_EXCEPTIONS /**/ | #define HAVE_NAMESPACES /**/ | #define HAVE_NAMESPACE_STD /**/ | #define HAVE_CXX_BINDING 1 | #define FILE_NAMEPUB_BASEDIR "." 
| #define USE_FILE_FOR_NAMEPUB 1 | #define HAVE_NAMEPUB_SERVICE 1 | #define restrict __restrict | #define HAVE_GCC_ATTRIBUTE 1 | #define WORDS_LITTLEENDIAN 1 | #define HAVE_LONG_DOUBLE 1 | #define HAVE_LONG_LONG_INT 1 | #define HAVE_MAX_INTEGER_ALIGNMENT 8 | #define HAVE_MAX_STRUCT_ALIGNMENT 8 | #define HAVE_MAX_FP_ALIGNMENT 16 | #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8 | #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16 | #define SIZEOF_CHAR 1 | #define SIZEOF_UNSIGNED_CHAR 1 | #define SIZEOF_SHORT 2 | #define SIZEOF_UNSIGNED_SHORT 2 | #define SIZEOF_INT 4 | #define SIZEOF_UNSIGNED_INT 4 | #define SIZEOF_LONG 4 | #define SIZEOF_UNSIGNED_LONG 4 | #define SIZEOF_LONG_LONG 8 | #define SIZEOF_UNSIGNED_LONG_LONG 8 | #define SIZEOF_FLOAT 4 | #define SIZEOF_DOUBLE 8 | #define SIZEOF_LONG_DOUBLE 16 | #define SIZEOF_VOID_P 8 | #define STDC_HEADERS 1 | #define HAVE_STDDEF_H 1 | #define SIZEOF_WCHAR_T 2 | #define SIZEOF_FLOAT_INT 8 | #define SIZEOF_DOUBLE_INT 16 | #define SIZEOF_LONG_INT 8 | #define SIZEOF_SHORT_INT 8 | #define SIZEOF_TWO_INT 8 | #define SIZEOF_LONG_DOUBLE_INT 32 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_INT8_T 1 | #define HAVE_INT16_T 1 | #define HAVE_INT32_T 1 | #define HAVE_INT64_T 1 | #define HAVE_UINT8_T 1 | #define HAVE_UINT16_T 1 | #define HAVE_UINT32_T 1 | #define HAVE_UINT64_T 1 | #define HAVE_STDBOOL_H 1 | #define HAVE_COMPLEX_H 1 | #define SIZEOF__BOOL 1 | #define SIZEOF_FLOAT__COMPLEX 8 | #define SIZEOF_DOUBLE__COMPLEX 16 | #define SIZEOF_LONG_DOUBLE__COMPLEX 32 | #define HAVE__BOOL 1 | #define HAVE_FLOAT__COMPLEX 1 | /* end confdefs.h. */ | #include | #ifdef HAVE_SYS_TYPES_H | # include | #endif | #ifdef HAVE_SYS_STAT_H | # include | #endif | #ifdef STDC_HEADERS | # include | # include | #else | # ifdef HAVE_STDLIB_H | # include | # endif | #endif | #ifdef HAVE_STRING_H | # if !defined STDC_HEADERS && defined HAVE_MEMORY_H | # include | # endif | # include | #endif | #ifdef HAVE_STRINGS_H | # include | #endif | #ifdef HAVE_INTTYPES_H | # include | #endif | #ifdef HAVE_STDINT_H | # include | #endif | #ifdef HAVE_UNISTD_H | # include | #endif | int | main () | { | if (sizeof ((double _Complex))) | return 0; | ; | return 0; | } configure:37310: result: yes configure:37323: checking for long double _Complex configure:37323: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:37323: $? = 0 configure:37323: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 conftest.c: In function 'main': conftest.c:136:35: error: expected expression before ')' token if (sizeof ((long double _Complex))) ^ configure:37323: $? 
= 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | #define ENABLE_NEM_STATISTICS 1 | #define ENABLE_RECVQ_STATISTICS 1 | #define HAVE_LONG_LONG 1 | #define STDCALL | #define F77_NAME_LOWER_USCORE 1 | #define HAVE_MPI_F_INIT_WORKS_WITH_C 1 | #define HAVE_FORTRAN_BINDING 1 | #define HAVE_CXX_EXCEPTIONS /**/ | #define HAVE_NAMESPACES /**/ | #define HAVE_NAMESPACE_STD /**/ | #define HAVE_CXX_BINDING 1 | #define FILE_NAMEPUB_BASEDIR "." | #define USE_FILE_FOR_NAMEPUB 1 | #define HAVE_NAMEPUB_SERVICE 1 | #define restrict __restrict | #define HAVE_GCC_ATTRIBUTE 1 | #define WORDS_LITTLEENDIAN 1 | #define HAVE_LONG_DOUBLE 1 | #define HAVE_LONG_LONG_INT 1 | #define HAVE_MAX_INTEGER_ALIGNMENT 8 | #define HAVE_MAX_STRUCT_ALIGNMENT 8 | #define HAVE_MAX_FP_ALIGNMENT 16 | #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8 | #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16 | #define SIZEOF_CHAR 1 | #define SIZEOF_UNSIGNED_CHAR 1 | #define SIZEOF_SHORT 2 | #define SIZEOF_UNSIGNED_SHORT 2 | #define SIZEOF_INT 4 | #define SIZEOF_UNSIGNED_INT 4 | #define SIZEOF_LONG 4 | #define SIZEOF_UNSIGNED_LONG 4 | #define SIZEOF_LONG_LONG 8 | #define SIZEOF_UNSIGNED_LONG_LONG 8 | #define SIZEOF_FLOAT 4 | #define SIZEOF_DOUBLE 8 | #define SIZEOF_LONG_DOUBLE 16 | #define SIZEOF_VOID_P 8 | #define STDC_HEADERS 1 | #define HAVE_STDDEF_H 1 | #define SIZEOF_WCHAR_T 2 | #define SIZEOF_FLOAT_INT 8 | #define SIZEOF_DOUBLE_INT 16 | #define SIZEOF_LONG_INT 8 | #define SIZEOF_SHORT_INT 8 | #define SIZEOF_TWO_INT 8 | #define SIZEOF_LONG_DOUBLE_INT 32 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_INT8_T 1 | #define HAVE_INT16_T 1 | #define HAVE_INT32_T 1 | #define HAVE_INT64_T 1 | #define HAVE_UINT8_T 1 | #define HAVE_UINT16_T 1 | #define HAVE_UINT32_T 1 | #define HAVE_UINT64_T 1 | #define HAVE_STDBOOL_H 1 | #define HAVE_COMPLEX_H 1 | #define SIZEOF__BOOL 1 | #define SIZEOF_FLOAT__COMPLEX 8 | #define SIZEOF_DOUBLE__COMPLEX 16 | #define SIZEOF_LONG_DOUBLE__COMPLEX 32 | #define HAVE__BOOL 1 | #define HAVE_FLOAT__COMPLEX 1 | #define HAVE_DOUBLE__COMPLEX 1 | /* end confdefs.h. 
*/ | #include | #ifdef HAVE_SYS_TYPES_H | # include | #endif | #ifdef HAVE_SYS_STAT_H | # include | #endif | #ifdef STDC_HEADERS | # include | # include | #else | # ifdef HAVE_STDLIB_H | # include | # endif | #endif | #ifdef HAVE_STRING_H | # if !defined STDC_HEADERS && defined HAVE_MEMORY_H | # include | # endif | # include | #endif | #ifdef HAVE_STRINGS_H | # include | #endif | #ifdef HAVE_INTTYPES_H | # include | #endif | #ifdef HAVE_STDINT_H | # include | #endif | #ifdef HAVE_UNISTD_H | # include | #endif | int | main () | { | if (sizeof ((long double _Complex))) | return 0; | ; | return 0; | } configure:37323: result: yes configure:37743: checking for size of Fortran type integer configure:37784: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:37784: $? = 0 configure:37788: mv conftest.o pac_conftest.o configure:37791: $? = 0 configure:37818: gfortran -o conftest.exe -O3 conftest.f pac_conftest.o >&5 configure:37818: $? = 0 configure:37818: ./conftest.exe configure:37818: $? = 0 configure:37856: result: 4 configure:37871: checking for size of Fortran type real configure:37912: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:37912: $? = 0 configure:37916: mv conftest.o pac_conftest.o configure:37919: $? = 0 configure:37946: gfortran -o conftest.exe -O3 conftest.f pac_conftest.o >&5 configure:37946: $? = 0 configure:37946: ./conftest.exe configure:37946: $? = 0 configure:37984: result: 4 configure:37999: checking for size of Fortran type double precision configure:38040: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:38040: $? = 0 configure:38044: mv conftest.o pac_conftest.o configure:38047: $? = 0 configure:38074: gfortran -o conftest.exe -O3 conftest.f pac_conftest.o >&5 configure:38074: $? = 0 configure:38074: ./conftest.exe configure:38074: $? = 0 configure:38112: result: 8 configure:38135: checking whether integer*1 is supported configure:38146: gfortran -c -O3 conftest.f >&5 configure:38146: $? = 0 configure:38153: result: yes configure:38155: checking whether integer*2 is supported configure:38166: gfortran -c -O3 conftest.f >&5 configure:38166: $? = 0 configure:38173: result: yes configure:38175: checking whether integer*4 is supported configure:38186: gfortran -c -O3 conftest.f >&5 configure:38186: $? = 0 configure:38193: result: yes configure:38195: checking whether integer*8 is supported configure:38206: gfortran -c -O3 conftest.f >&5 configure:38206: $? = 0 configure:38213: result: yes configure:38215: checking whether integer*16 is supported configure:38226: gfortran -c -O3 conftest.f >&5 configure:38226: $? = 0 configure:38233: result: yes configure:38235: checking whether real*4 is supported configure:38246: gfortran -c -O3 conftest.f >&5 configure:38246: $? 
= 0 configure:38253: result: yes configure:38255: checking whether real*8 is supported configure:38266: gfortran -c -O3 conftest.f >&5 configure:38266: $? = 0 configure:38273: result: yes configure:38275: checking whether real*16 is supported configure:38286: gfortran -c -O3 conftest.f >&5 configure:38286: $? = 0 configure:38293: result: yes configure:38391: checking for C type matching Fortran real configure:38399: result: float configure:38414: checking for C type matching Fortran double configure:38421: result: double configure:38750: checking for C type matching Fortran integer configure:38757: result: int configure:38910: checking for values of Fortran logicals configure:38949: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 configure:38949: $? = 0 configure:38952: mv conftest.o pac_conftest.o configure:38955: $? = 0 configure:38987: gfortran -o conftest.exe -O3 conftest.f pac_conftest.o >&5 configure:38987: $? = 0 configure:38987: ./conftest.exe configure:38987: $? = 0 configure:39025: result: True is 1 and False is 0 configure:39136: checking for Fortran 90 integer kind for 8-byte integers configure:39176: gfortran -o conftest.exe -O3 conftest.f90 >&5 configure:39176: $? = 0 configure:39176: ./conftest.exe configure:39176: $? = 0 configure:39193: result: 8 configure:39305: checking for Fortran 90 integer kind for 4-byte integers configure:39345: gfortran -o conftest.exe -O3 conftest.f90 >&5 configure:39345: $? = 0 configure:39345: ./conftest.exe configure:39345: $? = 0 configure:39362: result: 4 configure:39395: checking if real*8 is supported in Fortran 90 configure:39408: gfortran -c -O3 conftest.f90 >&5 configure:39408: $? = 0 configure:39420: result: yes configure:39576: checking size of bool configure:39581: c++ -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.cpp >&5 configure:39581: $? = 0 configure:39581: ./conftest.exe configure:39581: $? = 0 configure:39595: result: 1 configure:39637: checking complex usability configure:39637: c++ -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.cpp >&5 configure:39637: $? = 0 configure:39637: result: yes configure:39637: checking complex presence configure:39637: c++ -E -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.cpp configure:39637: $? 
= 0
configure:39637: result: yes
configure:39637: checking for complex
configure:39637: result: yes
configure:39650: checking size of Complex
configure:39655: c++ -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.cpp >&5
configure:39655: $? = 0
configure:39655: ./conftest.exe
configure:39655: $? = 0
configure:39674: result: 8
configure:39688: checking size of DoubleComplex
configure:39693: c++ -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.cpp >&5
configure:39693: $? = 0
configure:39693: ./conftest.exe
configure:39693: $? = 0
configure:39712: result: 16
configure:39727: checking size of LongDoubleComplex
configure:39732: c++ -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.cpp >&5
configure:39732: $? = 0
configure:39732: ./conftest.exe
configure:39732: $? = 0
configure:39751: result: 32
configure:39867: checking for alignment restrictions on int64_t
configure:39901: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5
conftest.c: In function 'main':
conftest.c:134:5: error: unknown type name 'int64_t'
     int64_t *p1, v;
     ^
conftest.c:138:12: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
     if (!( (long)bp & 0x7 ) ) bp += 4;
            ^
conftest.c:139:11: error: 'int64_t' undeclared (first use in this function)
     p1 = (int64_t *)bp;
           ^
conftest.c:139:11: note: each undeclared identifier is reported only once for each function it appears in
conftest.c:139:20: error: expected expression before ')' token
     p1 = (int64_t *)bp;
                    ^
conftest.c:142:12: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
     if (!( (long)bp & 0x3 ) ) bp += 2;
            ^
conftest.c:143:20: error: expected expression before ')' token
     p1 = (int64_t *)bp;
                    ^
conftest.c:145:12: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
     if (!( (long)bp & 0x1 ) ) bp += 1;
            ^
conftest.c:146:20: error: expected expression before ')' token
     p1 = (int64_t *)bp;
                    ^
configure:39901: $?
= 1 configure: program exited with status 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | #define ENABLE_NEM_STATISTICS 1 | #define ENABLE_RECVQ_STATISTICS 1 | #define HAVE_LONG_LONG 1 | #define STDCALL | #define F77_NAME_LOWER_USCORE 1 | #define HAVE_MPI_F_INIT_WORKS_WITH_C 1 | #define HAVE_FORTRAN_BINDING 1 | #define HAVE_CXX_EXCEPTIONS /**/ | #define HAVE_NAMESPACES /**/ | #define HAVE_NAMESPACE_STD /**/ | #define HAVE_CXX_BINDING 1 | #define FILE_NAMEPUB_BASEDIR "." | #define USE_FILE_FOR_NAMEPUB 1 | #define HAVE_NAMEPUB_SERVICE 1 | #define restrict __restrict | #define HAVE_GCC_ATTRIBUTE 1 | #define WORDS_LITTLEENDIAN 1 | #define HAVE_LONG_DOUBLE 1 | #define HAVE_LONG_LONG_INT 1 | #define HAVE_MAX_INTEGER_ALIGNMENT 8 | #define HAVE_MAX_STRUCT_ALIGNMENT 8 | #define HAVE_MAX_FP_ALIGNMENT 16 | #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8 | #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16 | #define SIZEOF_CHAR 1 | #define SIZEOF_UNSIGNED_CHAR 1 | #define SIZEOF_SHORT 2 | #define SIZEOF_UNSIGNED_SHORT 2 | #define SIZEOF_INT 4 | #define SIZEOF_UNSIGNED_INT 4 | #define SIZEOF_LONG 4 | #define SIZEOF_UNSIGNED_LONG 4 | #define SIZEOF_LONG_LONG 8 | #define SIZEOF_UNSIGNED_LONG_LONG 8 | #define SIZEOF_FLOAT 4 | #define SIZEOF_DOUBLE 8 | #define SIZEOF_LONG_DOUBLE 16 | #define SIZEOF_VOID_P 8 | #define STDC_HEADERS 1 | #define HAVE_STDDEF_H 1 | #define SIZEOF_WCHAR_T 2 | #define SIZEOF_FLOAT_INT 8 | #define SIZEOF_DOUBLE_INT 16 | #define SIZEOF_LONG_INT 8 | #define SIZEOF_SHORT_INT 8 | #define SIZEOF_TWO_INT 8 | #define SIZEOF_LONG_DOUBLE_INT 32 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_INT8_T 1 | #define HAVE_INT16_T 1 | #define HAVE_INT32_T 1 | #define HAVE_INT64_T 1 | #define HAVE_UINT8_T 1 | #define HAVE_UINT16_T 1 | #define HAVE_UINT32_T 1 | #define HAVE_UINT64_T 1 | #define HAVE_STDBOOL_H 1 | #define HAVE_COMPLEX_H 1 | #define SIZEOF__BOOL 1 | #define SIZEOF_FLOAT__COMPLEX 8 | #define SIZEOF_DOUBLE__COMPLEX 16 | #define SIZEOF_LONG_DOUBLE__COMPLEX 32 | #define HAVE__BOOL 1 | #define HAVE_FLOAT__COMPLEX 1 | #define HAVE_DOUBLE__COMPLEX 1 | #define HAVE_LONG_DOUBLE__COMPLEX 1 | #define MPIR_REAL4_CTYPE float | #define MPIR_REAL8_CTYPE double | #define MPIR_REAL16_CTYPE long double | #define MPIR_INTEGER1_CTYPE char | #define MPIR_INTEGER2_CTYPE short | #define MPIR_INTEGER4_CTYPE int | #define MPIR_INTEGER8_CTYPE long long | #define SIZEOF_F77_INTEGER 4 | #define SIZEOF_F77_REAL 4 | #define 
SIZEOF_F77_DOUBLE_PRECISION 8 | #define MPIR_FC_REAL_CTYPE float | #define MPIR_FC_DOUBLE_CTYPE double | #define HAVE_AINT_LARGER_THAN_FINT 1 | #define HAVE_AINT_DIFFERENT_THAN_FINT 1 | #define HAVE_FINT_IS_INT 1 | #define F77_TRUE_VALUE_SET 1 | #define F77_TRUE_VALUE 1 | #define F77_FALSE_VALUE 0 | #define SIZEOF_BOOL 1 | #define MPIR_CXX_BOOL_CTYPE _Bool | #define SIZEOF_COMPLEX 8 | #define SIZEOF_DOUBLECOMPLEX 16 | #define SIZEOF_LONGDOUBLECOMPLEX 32 | #define HAVE_CXX_COMPLEX 1 | #define MPIR_CXX_BOOL_VALUE 0x4c000133 | #define MPIR_CXX_COMPLEX_VALUE 0x4c000834 | #define MPIR_CXX_DOUBLE_COMPLEX_VALUE 0x4c001035 | #define MPIR_CXX_LONG_DOUBLE_COMPLEX_VALUE 0x4c002036 | /* end confdefs.h. */ | | #include | #include | int main(int argc, char **argv ) | { | int64_t *p1, v; | char *buf_p = (char *)malloc( 64 ), *bp; | bp = buf_p; | /* Make bp aligned on 4, not 8 bytes */ | if (!( (long)bp & 0x7 ) ) bp += 4; | p1 = (int64_t *)bp; | v = -1; | *p1 = v; | if (!( (long)bp & 0x3 ) ) bp += 2; | p1 = (int64_t *)bp; | *p1 = 1; | if (!( (long)bp & 0x1 ) ) bp += 1; | p1 = (int64_t *)bp; | *p1 = 1; | return 0; | } | configure:39912: result: yes configure:39931: checking for alignment restrictions on int32_t configure:39965: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5 conftest.c: In function 'main': conftest.c:134:5: error: unknown type name 'int32_t' int32_t *p1, v; ^ conftest.c:138:12: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] if (!( (long)bp & 0x7 ) ) bp += 4; ^ conftest.c:139:11: error: 'int32_t' undeclared (first use in this function) p1 = (int32_t *)bp; ^ conftest.c:139:11: note: each undeclared identifier is reported only once for each function it appears in conftest.c:139:20: error: expected expression before ')' token p1 = (int32_t *)bp; ^ conftest.c:142:12: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] if (!( (long)bp & 0x3 ) ) bp += 2; ^ conftest.c:143:20: error: expected expression before ')' token p1 = (int32_t *)bp; ^ conftest.c:145:12: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] if (!( (long)bp & 0x1 ) ) bp += 1; ^ conftest.c:146:20: error: expected expression before ')' token p1 = (int32_t *)bp; ^ configure:39965: $? 
= 1 configure: program exited with status 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "MPICH" | #define PACKAGE_TARNAME "mpich" | #define PACKAGE_VERSION "3.0.4" | #define PACKAGE_STRING "MPICH 3.0.4" | #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" | #define PACKAGE_URL "http://www.mpich.org/" | #define USE_SMP_COLLECTIVES 1 | #define PACKAGE "mpich" | #define VERSION "3.0.4" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_GETPAGESIZE 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | #define ENABLE_NEM_STATISTICS 1 | #define ENABLE_RECVQ_STATISTICS 1 | #define HAVE_LONG_LONG 1 | #define STDCALL | #define F77_NAME_LOWER_USCORE 1 | #define HAVE_MPI_F_INIT_WORKS_WITH_C 1 | #define HAVE_FORTRAN_BINDING 1 | #define HAVE_CXX_EXCEPTIONS /**/ | #define HAVE_NAMESPACES /**/ | #define HAVE_NAMESPACE_STD /**/ | #define HAVE_CXX_BINDING 1 | #define FILE_NAMEPUB_BASEDIR "." | #define USE_FILE_FOR_NAMEPUB 1 | #define HAVE_NAMEPUB_SERVICE 1 | #define restrict __restrict | #define HAVE_GCC_ATTRIBUTE 1 | #define WORDS_LITTLEENDIAN 1 | #define HAVE_LONG_DOUBLE 1 | #define HAVE_LONG_LONG_INT 1 | #define HAVE_MAX_INTEGER_ALIGNMENT 8 | #define HAVE_MAX_STRUCT_ALIGNMENT 8 | #define HAVE_MAX_FP_ALIGNMENT 16 | #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8 | #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16 | #define SIZEOF_CHAR 1 | #define SIZEOF_UNSIGNED_CHAR 1 | #define SIZEOF_SHORT 2 | #define SIZEOF_UNSIGNED_SHORT 2 | #define SIZEOF_INT 4 | #define SIZEOF_UNSIGNED_INT 4 | #define SIZEOF_LONG 4 | #define SIZEOF_UNSIGNED_LONG 4 | #define SIZEOF_LONG_LONG 8 | #define SIZEOF_UNSIGNED_LONG_LONG 8 | #define SIZEOF_FLOAT 4 | #define SIZEOF_DOUBLE 8 | #define SIZEOF_LONG_DOUBLE 16 | #define SIZEOF_VOID_P 8 | #define STDC_HEADERS 1 | #define HAVE_STDDEF_H 1 | #define SIZEOF_WCHAR_T 2 | #define SIZEOF_FLOAT_INT 8 | #define SIZEOF_DOUBLE_INT 16 | #define SIZEOF_LONG_INT 8 | #define SIZEOF_SHORT_INT 8 | #define SIZEOF_TWO_INT 8 | #define SIZEOF_LONG_DOUBLE_INT 32 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_INT8_T 1 | #define HAVE_INT16_T 1 | #define HAVE_INT32_T 1 | #define HAVE_INT64_T 1 | #define HAVE_UINT8_T 1 | #define HAVE_UINT16_T 1 | #define HAVE_UINT32_T 1 | #define HAVE_UINT64_T 1 | #define HAVE_STDBOOL_H 1 | #define HAVE_COMPLEX_H 1 | #define SIZEOF__BOOL 1 | #define SIZEOF_FLOAT__COMPLEX 8 | #define SIZEOF_DOUBLE__COMPLEX 16 | #define SIZEOF_LONG_DOUBLE__COMPLEX 32 | #define HAVE__BOOL 1 | #define HAVE_FLOAT__COMPLEX 1 | #define HAVE_DOUBLE__COMPLEX 1 | #define HAVE_LONG_DOUBLE__COMPLEX 1 | #define MPIR_REAL4_CTYPE float | #define MPIR_REAL8_CTYPE double | #define MPIR_REAL16_CTYPE long double | #define MPIR_INTEGER1_CTYPE char | #define MPIR_INTEGER2_CTYPE short | #define MPIR_INTEGER4_CTYPE int | #define MPIR_INTEGER8_CTYPE long long | #define SIZEOF_F77_INTEGER 4 | #define SIZEOF_F77_REAL 4 | #define 
SIZEOF_F77_DOUBLE_PRECISION 8 | #define MPIR_FC_REAL_CTYPE float | #define MPIR_FC_DOUBLE_CTYPE double | #define HAVE_AINT_LARGER_THAN_FINT 1 | #define HAVE_AINT_DIFFERENT_THAN_FINT 1 | #define HAVE_FINT_IS_INT 1 | #define F77_TRUE_VALUE_SET 1 | #define F77_TRUE_VALUE 1 | #define F77_FALSE_VALUE 0 | #define SIZEOF_BOOL 1 | #define MPIR_CXX_BOOL_CTYPE _Bool | #define SIZEOF_COMPLEX 8 | #define SIZEOF_DOUBLECOMPLEX 16 | #define SIZEOF_LONGDOUBLECOMPLEX 32 | #define HAVE_CXX_COMPLEX 1 | #define MPIR_CXX_BOOL_VALUE 0x4c000133 | #define MPIR_CXX_COMPLEX_VALUE 0x4c000834 | #define MPIR_CXX_DOUBLE_COMPLEX_VALUE 0x4c001035 | #define MPIR_CXX_LONG_DOUBLE_COMPLEX_VALUE 0x4c002036 | /* end confdefs.h. */
|
| #include <stdio.h>
| #include <stdlib.h>
| int main(int argc, char **argv )
| {
|     int32_t *p1, v;
|     char *buf_p = (char *)malloc( 64 ), *bp;
|     bp = buf_p;
|     /* Make bp aligned on 4, not 8 bytes */
|     if (!( (long)bp & 0x7 ) ) bp += 4;
|     p1 = (int32_t *)bp;
|     v = -1;
|     *p1 = v;
|     if (!( (long)bp & 0x3 ) ) bp += 2;
|     p1 = (int32_t *)bp;
|     *p1 = 1;
|     if (!( (long)bp & 0x1 ) ) bp += 1;
|     p1 = (int32_t *)bp;
|     *p1 = 1;
|     return 0;
| }
|
configure:39976: result: yes
configure:39990: checking size of MPIR_Bsend_data_t
configure:39995: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include conftest.c >&5
conftest.c:137:63: fatal error: /d/Distributions/mpich-3.0.4/src/include/mpibsend.h: No such file or directory
 #include "/d/Distributions/mpich-3.0.4/src/include/mpibsend.h"
                                                               ^
compilation terminated.
configure:39995: $? = 1
configure: program exited with status 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "MPICH"
| #define PACKAGE_TARNAME "mpich"
| #define PACKAGE_VERSION "3.0.4"
| #define PACKAGE_STRING "MPICH 3.0.4"
| #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov"
| #define PACKAGE_URL "http://www.mpich.org/"
| #define USE_SMP_COLLECTIVES 1
| #define PACKAGE "mpich"
| #define VERSION "3.0.4"
| #define STDC_HEADERS 1
| #define HAVE_SYS_TYPES_H 1
| #define HAVE_SYS_STAT_H 1
| #define HAVE_STDLIB_H 1
| #define HAVE_STRING_H 1
| #define HAVE_MEMORY_H 1
| #define HAVE_STRINGS_H 1
| #define HAVE_INTTYPES_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_UNISTD_H 1
| #define LT_OBJDIR ".libs/"
| #define HAVE_GETPAGESIZE 1
| #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL
| #define USE_LOGGING MPID_LOGGING_NONE
| #define HAVE_RUNTIME_THREADCHECK 1
| #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE
| #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL
| #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX
| #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE
| #define HAVE_ROMIO 1
| #define HAVE__FUNC__ /**/
| #define HAVE__FUNCTION__ /**/
| #define ENABLE_NEM_STATISTICS 1
| #define ENABLE_RECVQ_STATISTICS 1
| #define HAVE_LONG_LONG 1
| #define STDCALL
| #define F77_NAME_LOWER_USCORE 1
| #define HAVE_MPI_F_INIT_WORKS_WITH_C 1
| #define HAVE_FORTRAN_BINDING 1
| #define HAVE_CXX_EXCEPTIONS /**/
| #define HAVE_NAMESPACES /**/
| #define HAVE_NAMESPACE_STD /**/
| #define HAVE_CXX_BINDING 1
| #define FILE_NAMEPUB_BASEDIR "."
| #define USE_FILE_FOR_NAMEPUB 1
| #define HAVE_NAMEPUB_SERVICE 1
| #define restrict __restrict
| #define HAVE_GCC_ATTRIBUTE 1
| #define WORDS_LITTLEENDIAN 1
| #define HAVE_LONG_DOUBLE 1
| #define HAVE_LONG_LONG_INT 1
| #define HAVE_MAX_INTEGER_ALIGNMENT 8
| #define HAVE_MAX_STRUCT_ALIGNMENT 8
| #define HAVE_MAX_FP_ALIGNMENT 16
| #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8
| #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16
| #define SIZEOF_CHAR 1
| #define SIZEOF_UNSIGNED_CHAR 1
| #define SIZEOF_SHORT 2
| #define SIZEOF_UNSIGNED_SHORT 2
| #define SIZEOF_INT 4
| #define SIZEOF_UNSIGNED_INT 4
| #define SIZEOF_LONG 4
| #define SIZEOF_UNSIGNED_LONG 4
| #define SIZEOF_LONG_LONG 8
| #define SIZEOF_UNSIGNED_LONG_LONG 8
| #define SIZEOF_FLOAT 4
| #define SIZEOF_DOUBLE 8
| #define SIZEOF_LONG_DOUBLE 16
| #define SIZEOF_VOID_P 8
| #define STDC_HEADERS 1
| #define HAVE_STDDEF_H 1
| #define SIZEOF_WCHAR_T 2
| #define SIZEOF_FLOAT_INT 8
| #define SIZEOF_DOUBLE_INT 16
| #define SIZEOF_LONG_INT 8
| #define SIZEOF_SHORT_INT 8
| #define SIZEOF_TWO_INT 8
| #define SIZEOF_LONG_DOUBLE_INT 32
| #define HAVE_INTTYPES_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_INT8_T 1
| #define HAVE_INT16_T 1
| #define HAVE_INT32_T 1
| #define HAVE_INT64_T 1
| #define HAVE_UINT8_T 1
| #define HAVE_UINT16_T 1
| #define HAVE_UINT32_T 1
| #define HAVE_UINT64_T 1
| #define HAVE_STDBOOL_H 1
| #define HAVE_COMPLEX_H 1
| #define SIZEOF__BOOL 1
| #define SIZEOF_FLOAT__COMPLEX 8
| #define SIZEOF_DOUBLE__COMPLEX 16
| #define SIZEOF_LONG_DOUBLE__COMPLEX 32
| #define HAVE__BOOL 1
| #define HAVE_FLOAT__COMPLEX 1
| #define HAVE_DOUBLE__COMPLEX 1
| #define HAVE_LONG_DOUBLE__COMPLEX 1
| #define MPIR_REAL4_CTYPE float
| #define MPIR_REAL8_CTYPE double
| #define MPIR_REAL16_CTYPE long double
| #define MPIR_INTEGER1_CTYPE char
| #define MPIR_INTEGER2_CTYPE short
| #define MPIR_INTEGER4_CTYPE int
| #define MPIR_INTEGER8_CTYPE long long
| #define SIZEOF_F77_INTEGER 4
| #define SIZEOF_F77_REAL 4
| #define SIZEOF_F77_DOUBLE_PRECISION 8
| #define MPIR_FC_REAL_CTYPE float
| #define MPIR_FC_DOUBLE_CTYPE double
| #define HAVE_AINT_LARGER_THAN_FINT 1
| #define HAVE_AINT_DIFFERENT_THAN_FINT 1
| #define HAVE_FINT_IS_INT 1
| #define F77_TRUE_VALUE_SET 1
| #define F77_TRUE_VALUE 1
| #define F77_FALSE_VALUE 0
| #define SIZEOF_BOOL 1
| #define MPIR_CXX_BOOL_CTYPE _Bool
| #define SIZEOF_COMPLEX 8
| #define SIZEOF_DOUBLECOMPLEX 16
| #define SIZEOF_LONGDOUBLECOMPLEX 32
| #define HAVE_CXX_COMPLEX 1
| #define MPIR_CXX_BOOL_VALUE 0x4c000133
| #define MPIR_CXX_COMPLEX_VALUE 0x4c000834
| #define MPIR_CXX_DOUBLE_COMPLEX_VALUE 0x4c001035
| #define MPIR_CXX_LONG_DOUBLE_COMPLEX_VALUE 0x4c002036
| /* end confdefs.h. */
|
| #define MPI_Datatype int
| #ifdef HAVE_STDLIB_H
| #include <stdlib.h>
| #endif
| #ifdef HAVE_STDINT_H
| #include <stdint.h>
| #endif
| #include "/d/Distributions/mpich-3.0.4/src/include/mpibsend.h"
|
|
| static long int longval () { return (long int) (sizeof (MPIR_Bsend_data_t)); }
| static unsigned long int ulongval () { return (long int) (sizeof (MPIR_Bsend_data_t)); }
| #include <stdio.h>
| #include <stdlib.h>
| int
| main ()
| {
|
|   FILE *f = fopen ("conftest.val", "w");
|   if (! f)
|     return 1;
|   if (((long int) (sizeof (MPIR_Bsend_data_t))) < 0)
|     {
|       long int i = longval ();
|       if (i != ((long int) (sizeof (MPIR_Bsend_data_t))))
|         return 1;
|       fprintf (f, "%ld", i);
|     }
|   else
|     {
|       unsigned long int i = ulongval ();
|       if (i != ((long int) (sizeof (MPIR_Bsend_data_t))))
|         return 1;
|       fprintf (f, "%lu", i);
|     }
|   /* Do not output a trailing newline, as this causes \r\n confusion
|      on some platforms.  */
|   return ferror (f) || fclose (f) != 0;
|
|   ;
|   return 0;
| }
configure:40019: result: 0
configure:40030: error: Unable to determine the size of MPI_BSEND_OVERHEAD"

## ---------------- ##
## Cache variables. ##
## ---------------- ##

ac_cv_build=i686-pc-mingw32
ac_cv_c_bigendian=no
ac_cv_c_compiler_gnu=yes
ac_cv_c_const=yes
ac_cv_c_inline=inline
ac_cv_c_int16_t=yes
ac_cv_c_int32_t=yes
ac_cv_c_int64_t=yes
ac_cv_c_int8_t=yes
ac_cv_c_restrict=__restrict
ac_cv_c_uint16_t=yes
ac_cv_c_uint32_t=yes
ac_cv_c_uint64_t=yes
ac_cv_c_uint8_t=yes
ac_cv_c_volatile=yes
ac_cv_cxx_bool=yes
ac_cv_cxx_compiler_gnu=yes
ac_cv_cxx_exceptions=yes
ac_cv_cxx_namespace_std=yes
ac_cv_cxx_namespaces=yes
ac_cv_env_AR_FLAGS_set=
ac_cv_env_AR_FLAGS_value=
ac_cv_env_CCC_set=
ac_cv_env_CCC_value=
ac_cv_env_CC_set=
ac_cv_env_CC_value=
ac_cv_env_CFLAGS_set=
ac_cv_env_CFLAGS_value=
ac_cv_env_CPPFLAGS_set=
ac_cv_env_CPPFLAGS_value=
ac_cv_env_CPP_set=
ac_cv_env_CPP_value=
ac_cv_env_CXXCPP_set=
ac_cv_env_CXXCPP_value=
ac_cv_env_CXXFLAGS_set=
ac_cv_env_CXXFLAGS_value=
ac_cv_env_CXX_set=
ac_cv_env_CXX_value=
ac_cv_env_F77_set=
ac_cv_env_F77_value=
ac_cv_env_FCFLAGS_set=
ac_cv_env_FCFLAGS_value=
ac_cv_env_FC_set=
ac_cv_env_FC_value=
ac_cv_env_FFLAGS_set=
ac_cv_env_FFLAGS_value=
ac_cv_env_GCOV_set=
ac_cv_env_GCOV_value=
ac_cv_env_LDFLAGS_set=
ac_cv_env_LDFLAGS_value=
ac_cv_env_LIBS_set=
ac_cv_env_LIBS_value=
ac_cv_env_MPICHLIB_CFLAGS_set=
ac_cv_env_MPICHLIB_CFLAGS_value=
ac_cv_env_MPICHLIB_CPPFLAGS_set=
ac_cv_env_MPICHLIB_CPPFLAGS_value=
ac_cv_env_MPICHLIB_CXXFLAGS_set=
ac_cv_env_MPICHLIB_CXXFLAGS_value=
ac_cv_env_MPICHLIB_FCFLAGS_set=
ac_cv_env_MPICHLIB_FCFLAGS_value=
ac_cv_env_MPICHLIB_FFLAGS_set=
ac_cv_env_MPICHLIB_FFLAGS_value=
ac_cv_env_MPICHLIB_LDFLAGS_set=
ac_cv_env_MPICHLIB_LDFLAGS_value=
ac_cv_env_MPICHLIB_LIBS_set=
ac_cv_env_MPICHLIB_LIBS_value=
ac_cv_env_MPICXXLIBNAME_set=
ac_cv_env_MPICXXLIBNAME_value=
ac_cv_env_MPILIBNAME_set=
ac_cv_env_MPILIBNAME_value=
ac_cv_env_PMPILIBNAME_set=
ac_cv_env_PMPILIBNAME_value=
ac_cv_env_TCP_LIBS_set=
ac_cv_env_TCP_LIBS_value=
ac_cv_env_build_alias_set=
ac_cv_env_build_alias_value=
ac_cv_env_host_alias_set=
ac_cv_env_host_alias_value=
ac_cv_env_target_alias_set=
ac_cv_env_target_alias_value=
ac_cv_exeext=.exe
ac_cv_f77_compiler_gnu=yes
ac_cv_f77_libs=' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../..
-lgfortran -lmingw32 -lmoldname -lmingwex -lmsvcrt -lquadmath -lm -lpthread -ladvapi32 -lshell32 -luser32 -lkernel32 -liconv' ac_cv_fc_compiler_gnu=yes ac_cv_func_getpagesize=yes ac_cv_header_complex=yes ac_cv_header_complex_h=yes ac_cv_header_dlfcn_h=no ac_cv_header_inttypes_h=yes ac_cv_header_memory_h=yes ac_cv_header_stdbool_h=yes ac_cv_header_stdc=yes ac_cv_header_stddef_h=yes ac_cv_header_stdint_h=yes ac_cv_header_stdlib_h=yes ac_cv_header_string_h=yes ac_cv_header_strings_h=yes ac_cv_header_sys_bitypes_h=no ac_cv_header_sys_stat_h=yes ac_cv_header_sys_types_h=yes ac_cv_header_unistd_h=yes ac_cv_host=i686-pc-mingw32 ac_cv_objext=o ac_cv_path_BASH_SHELL=/bin/bash ac_cv_path_DOCTEXT=false ac_cv_path_EGREP='/bin/grep -E' ac_cv_path_FGREP='/bin/grep -F' ac_cv_path_GREP=/bin/grep ac_cv_path_PERL=/bin/perl ac_cv_path_SED=/bin/sed ac_cv_path_install='/bin/install -c' ac_cv_path_mkdir=/bin/mkdir ac_cv_prog_AR=ar ac_cv_prog_AWK=gawk ac_cv_prog_CPP='gcc -E' ac_cv_prog_CXXCPP='c++ -E' ac_cv_prog_MAKE=make ac_cv_prog_RANLIB=ranlib ac_cv_prog_ac_ct_AR=ar ac_cv_prog_ac_ct_CC=gcc ac_cv_prog_ac_ct_CXX=c++ ac_cv_prog_ac_ct_DLLTOOL=dlltool ac_cv_prog_ac_ct_F77=gfortran ac_cv_prog_ac_ct_FC=gfortran ac_cv_prog_ac_ct_OBJDUMP=objdump ac_cv_prog_ac_ct_RANLIB=ranlib ac_cv_prog_ac_ct_STRIP=strip ac_cv_prog_cc_c89= ac_cv_prog_cc_g=yes ac_cv_prog_cc_gcc_c_o=yes ac_cv_prog_cxx_g=yes ac_cv_prog_f77_g=yes ac_cv_prog_f77_v=-v ac_cv_prog_fc_g=yes ac_cv_prog_make_make_set=yes ac_cv_sizeof_Complex=8 ac_cv_sizeof_DoubleComplex=16 ac_cv_sizeof_LongDoubleComplex=32 ac_cv_sizeof_MPIR_Bsend_data_t=0 ac_cv_sizeof__Bool=1 ac_cv_sizeof_bool=1 ac_cv_sizeof_char=1 ac_cv_sizeof_double=8 ac_cv_sizeof_double__Complex=16 ac_cv_sizeof_double_int=16 ac_cv_sizeof_float=4 ac_cv_sizeof_float__Complex=8 ac_cv_sizeof_float_int=8 ac_cv_sizeof_int=4 ac_cv_sizeof_long=4 ac_cv_sizeof_long_double=16 ac_cv_sizeof_long_double__Complex=32 ac_cv_sizeof_long_double_int=32 ac_cv_sizeof_long_int=8 ac_cv_sizeof_long_long=8 ac_cv_sizeof_short=2 ac_cv_sizeof_short_int=8 ac_cv_sizeof_two_int=8 ac_cv_sizeof_unsigned_char=1 ac_cv_sizeof_unsigned_int=4 ac_cv_sizeof_unsigned_long=4 ac_cv_sizeof_unsigned_long_long=8 ac_cv_sizeof_unsigned_short=2 ac_cv_sizeof_void_p=8 ac_cv_sizeof_wchar_t=2 ac_cv_type__Bool=yes ac_cv_type_double__Complex=yes ac_cv_type_float__Complex=yes ac_cv_type_long_double__Complex=yes am_cv_CC_dependencies_compiler_type=gcc3 am_cv_CXX_dependencies_compiler_type=gcc3 am_cv_ar_interface=ar am_cv_make_support_nested_variables=yes lt_cv_ar_at_file=@ lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL' lt_cv_file_magic_cmd=func_win32_libid lt_cv_file_magic_test_file= lt_cv_ld_reload_flag=-r lt_cv_nm_interface='BSD nm' lt_cv_objdir=.libs lt_cv_path_LD=d:/toolchains/x64/mingw-w64/4.8.0/x86_64-w64-mingw32/bin/ld.exe lt_cv_path_LDCXX=d:/toolchains/x64/mingw-w64/4.8.0/x86_64-w64-mingw32/bin/ld.exe lt_cv_path_NM=/d/Toolchains/x64/MinGW-w64/4.8.0/bin/nm lt_cv_path_mainfest_tool=no lt_cv_prog_compiler_c_o=yes lt_cv_prog_compiler_c_o_CXX=yes lt_cv_prog_compiler_c_o_F77=yes lt_cv_prog_compiler_c_o_FC=yes lt_cv_prog_compiler_pic='-DDLL_EXPORT -DPIC' lt_cv_prog_compiler_pic_CXX='-DDLL_EXPORT -DPIC' lt_cv_prog_compiler_pic_F77=-DDLL_EXPORT lt_cv_prog_compiler_pic_FC=-DDLL_EXPORT lt_cv_prog_compiler_pic_works=yes lt_cv_prog_compiler_pic_works_CXX=yes lt_cv_prog_compiler_pic_works_F77=yes lt_cv_prog_compiler_pic_works_FC=yes lt_cv_prog_compiler_rtti_exceptions=no lt_cv_prog_compiler_static_works=yes lt_cv_prog_compiler_static_works_CXX=yes 
lt_cv_prog_compiler_static_works_F77=yes lt_cv_prog_compiler_static_works_FC=yes lt_cv_prog_gnu_ld=yes lt_cv_prog_gnu_ldcxx=yes lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib lt_cv_sys_global_symbol_pipe='sed -n -e '\''s/^.*[ ]\([ABCDGIRSTW][ABCDGIRSTW]*\)[ ][ ]*\([_A-Za-z][_A-Za-z0-9]*\) \{0,1\}$/\1 \2 \2/p'\'' | sed '\''/ __gnu_lto/d'\''' lt_cv_sys_global_symbol_to_c_name_address='sed -n -e '\''s/^: \([^ ]*\)[ ]*$/ {\"\1\", (void *) 0},/p'\'' -e '\''s/^[ABCDGIRSTW]* \([^ ]*\) \([^ ]*\)$/ {"\2", (void *) \&\2},/p'\''' lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='sed -n -e '\''s/^: \([^ ]*\)[ ]*$/ {\"\1\", (void *) 0},/p'\'' -e '\''s/^[ABCDGIRSTW]* \([^ ]*\) \(lib[^ ]*\)$/ {"\2", (void *) \&\2},/p'\'' -e '\''s/^[ABCDGIRSTW]* \([^ ]*\) \([^ ]*\)$/ {"lib\2", (void *) \&\2},/p'\''' lt_cv_sys_global_symbol_to_cdecl='sed -n -e '\''s/^T .* \(.*\)$/extern int \1();/p'\'' -e '\''s/^[ABCDGIRSTW]* .* \(.*\)$/extern char \1;/p'\''' lt_cv_sys_max_cmd_len=8192 lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32 lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32 pac_cv_attr_weak=yes pac_cv_attr_weak_alias=no pac_cv_attr_weak_import=yes pac_cv_c_double_alignment_exception=no pac_cv_c_double_pos_align=no pac_cv_c_fp_align_nr=16 pac_cv_c_llint_pos_align=no pac_cv_c_max_double_fp_align=eight pac_cv_c_max_fp_align=sixteen pac_cv_c_max_integer_align=eight pac_cv_c_max_longdouble_fp_align=sixteen pac_cv_c_struct_align_nr=8 pac_cv_cc_has___func__=yes pac_cv_cxx_builds_exe=yes pac_cv_cxx_compiles_string=yes pac_cv_cxx_has_iostream=yes pac_cv_cxx_has_math=no pac_cv_f77_accepts_F=yes pac_cv_f77_flibs_valid=unknown pac_cv_f77_sizeof_double_precision=8 pac_cv_f77_sizeof_integer=4 pac_cv_f77_sizeof_real=4 pac_cv_fc_accepts_F90=yes pac_cv_fc_and_f77=yes pac_cv_fc_module_case=lower pac_cv_fc_module_ext=mod pac_cv_fc_module_incflag=-I pac_cv_fc_module_outflag=-J pac_cv_fort90_real8=yes pac_cv_fort_integer16=yes pac_cv_fort_integer1=yes pac_cv_fort_integer2=yes pac_cv_fort_integer4=yes pac_cv_fort_integer8=yes pac_cv_fort_real16=yes pac_cv_fort_real4=yes pac_cv_fort_real8=yes pac_cv_gnu_attr_format=yes pac_cv_gnu_attr_pure=yes pac_cv_have__func__=yes pac_cv_have__function__=yes pac_cv_have_cap__func__=no pac_cv_have_long_double=yes pac_cv_have_long_long=yes pac_cv_int32_t_alignment=yes pac_cv_int64_t_alignment=yes pac_cv_mkdir_p=yes pac_cv_my_conf_dir=/d/Distributions/mpich-3.0.4/build pac_cv_pointers_have_int_alignment=yes pac_cv_prog_c_unaligned_doubles=yes pac_cv_prog_c_weak_symbols=no pac_cv_prog_f77_and_c_stdio_libs=none pac_cv_prog_f77_exclaim_comments=yes pac_cv_prog_f77_has_incdir=-I pac_cv_prog_f77_library_dir_flag=-L pac_cv_prog_f77_mismatched_args=yes pac_cv_prog_f77_mismatched_args_parm= pac_cv_prog_f77_name_mangle='lower uscore' pac_cv_prog_f77_true_false_value='1 0' pac_cv_prog_fc_and_c_stdio_libs=none pac_cv_prog_fc_cross=no pac_cv_prog_fc_int_kind_16=8 pac_cv_prog_fc_int_kind_8=4 pac_cv_prog_fc_works=yes pac_cv_prog_make_allows_comments=yes pac_cv_prog_make_found_clock_skew=no pac_cv_prog_make_include=yes pac_cv_prog_make_set_cflags=yes pac_cv_prog_make_vpath=VPATH ## ----------------- ## ## Output variables. 
## ## ----------------- ## ABIVERSION='10:4:0' ABIVERSIONFLAGS='-version-info $(ABIVERSION)' ACLOCAL='${SHELL} /d/Distributions/mpich-3.0.4/confdb/missing --run aclocal-1.12' ADDRESS_KIND='8' ALLOCA='' AMDEPBACKSLASH='\' AMDEP_FALSE='#' AMDEP_TRUE='' AMTAR='$${TAR-tar}' AM_BACKSLASH='\' AM_DEFAULT_V='$(AM_DEFAULT_VERBOSITY)' AM_DEFAULT_VERBOSITY='0' AM_V='$(V)' AR='ar' AR_FLAGS='cr' AS='' ASSERT_LEVEL='' AUTOCONF='${SHELL} /d/Distributions/mpich-3.0.4/confdb/missing --run autoconf' AUTOHEADER='${SHELL} /d/Distributions/mpich-3.0.4/confdb/missing --run autoheader' AUTOMAKE='${SHELL} /d/Distributions/mpich-3.0.4/confdb/missing --run automake-1.12' AWK='gawk' BASH_SHELL='/bin/bash' BGQ_INSTALL_DIR='' BSEND_OVERHEAD='' BUILD_BASH_SCRIPTS_FALSE='#' BUILD_BASH_SCRIPTS_TRUE='' BUILD_CH3_FALSE='#' BUILD_CH3_NEMESIS_FALSE='#' BUILD_CH3_NEMESIS_TRUE='' BUILD_CH3_SOCK_FALSE='' BUILD_CH3_SOCK_TRUE='#' BUILD_CH3_TRUE='' BUILD_CH3_UTIL_FTB_FALSE='' BUILD_CH3_UTIL_FTB_TRUE='#' BUILD_CH3_UTIL_SOCK_FALSE='' BUILD_CH3_UTIL_SOCK_TRUE='#' BUILD_COVERAGE_FALSE='' BUILD_COVERAGE_TRUE='' BUILD_CXX_LIB_FALSE='' BUILD_CXX_LIB_TRUE='' BUILD_DEBUGGER_DLL_FALSE='' BUILD_DEBUGGER_DLL_TRUE='#' BUILD_F77_BINDING_FALSE='' BUILD_F77_BINDING_TRUE='' BUILD_F90_LIB_FALSE='#' BUILD_F90_LIB_TRUE='' BUILD_LOGGING_RLOG_FALSE='' BUILD_LOGGING_RLOG_TRUE='#' BUILD_MPID_COMMON_DATATYPE_FALSE='#' BUILD_MPID_COMMON_DATATYPE_TRUE='' BUILD_MPID_COMMON_SCHED_FALSE='#' BUILD_MPID_COMMON_SCHED_TRUE='' BUILD_MPID_COMMON_SOCK_FALSE='' BUILD_MPID_COMMON_SOCK_POLL_FALSE='' BUILD_MPID_COMMON_SOCK_POLL_TRUE='#' BUILD_MPID_COMMON_SOCK_TRUE='#' BUILD_MPID_COMMON_THREAD_FALSE='#' BUILD_MPID_COMMON_THREAD_TRUE='' BUILD_NAMEPUB_FILE_FALSE='#' BUILD_NAMEPUB_FILE_TRUE='' BUILD_NAMEPUB_MPD_FALSE='' BUILD_NAMEPUB_MPD_TRUE='#' BUILD_NAMEPUB_PMI_FALSE='' BUILD_NAMEPUB_PMI_TRUE='#' BUILD_NEMESIS_NETMOD_MX_FALSE='' BUILD_NEMESIS_NETMOD_MX_TRUE='#' BUILD_NEMESIS_NETMOD_NEWMAD_FALSE='' BUILD_NEMESIS_NETMOD_NEWMAD_TRUE='#' BUILD_NEMESIS_NETMOD_PORTALS4_FALSE='' BUILD_NEMESIS_NETMOD_PORTALS4_TRUE='#' BUILD_NEMESIS_NETMOD_SCIF_FALSE='' BUILD_NEMESIS_NETMOD_SCIF_TRUE='#' BUILD_NEMESIS_NETMOD_TCP_FALSE='#' BUILD_NEMESIS_NETMOD_TCP_TRUE='' BUILD_PAMID_FALSE='' BUILD_PAMID_TRUE='#' BUILD_PMI_PMI2_FALSE='' BUILD_PMI_PMI2_TRUE='' BUILD_PMI_SIMPLE_FALSE='' BUILD_PMI_SIMPLE_TRUE='' BUILD_PMI_SLURM_FALSE='' BUILD_PMI_SLURM_TRUE='' BUILD_PMI_SMPD_FALSE='' BUILD_PMI_SMPD_TRUE='' BUILD_PM_GFORKER_FALSE='' BUILD_PM_GFORKER_TRUE='' BUILD_PM_HYDRA_FALSE='' BUILD_PM_HYDRA_TRUE='' BUILD_PM_MPD_FALSE='' BUILD_PM_MPD_TRUE='' BUILD_PM_REMSHELL_FALSE='' BUILD_PM_REMSHELL_TRUE='' BUILD_PM_SMPD_FALSE='' BUILD_PM_SMPD_TRUE='' BUILD_PM_UTIL_FALSE='' BUILD_PM_UTIL_TRUE='' BUILD_PROFILING_LIB_FALSE='#' BUILD_PROFILING_LIB_TRUE='' BUILD_ROMIO_FALSE='#' BUILD_ROMIO_TRUE='' CC='gcc' CCDEPMODE='depmode=gcc3' CFLAGS=' -DNDEBUG -DNVALGRIND -O3' CMB_1INT_ALIGNMENT='' CMB_STATUS_ALIGNMENT='' CONFIGURE_ARGS_CLEAN='-prefix=D:/Libraries/x64/MinGW-w64/4.8.0/MPICH --enable-fast=all,O3' CONFIGURE_ARGUMENTS=' '\''-prefix=D:/Libraries/x64/MinGW-w64/4.8.0/MPICH'\'' '\''--enable-fast=all,O3'\''' COUNT_KIND='' CPP='gcc -E' CPPFLAGS=' -I/d/Distributions/mpich-3.0.4/build/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/mpl/include -I/d/Distributions/mpich-3.0.4/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/openpa/src -I/d/Distributions/mpich-3.0.4/build/src/mpi/romio/include' CXX='c++' CXXCPP='c++ -E' CXXDEPMODE='depmode=gcc3' CXXFLAGS=' -DNDEBUG -DNVALGRIND -O3' CYGPATH_W='echo' C_LINKPATH_SHL='' 
DEFS='' DEPDIR='.deps' DEVICE='ch3:nemesis' DLLIMPORT='' DLLTOOL='dlltool' DOCTEXT='false' DSYMUTIL='' DUMPBIN='' ECHO_C='' ECHO_N='-n' ECHO_T='' EGREP='/bin/grep -E' EXEEXT='.exe' EXTRA_STATUS_DECL='' F77='gfortran' F77CPP='' F77_COMPLEX16='1275072554' F77_COMPLEX32='1275076652' F77_COMPLEX8='1275070504' F77_INCDIR='-I' F77_INTEGER16='MPI_DATATYPE_NULL' F77_INTEGER1='1275068717' F77_INTEGER2='1275068975' F77_INTEGER4='1275069488' F77_INTEGER8='1275070513' F77_LIBDIR_LEADER='-L' F77_NAME_MANGLE='F77_NAME_LOWER_USCORE' F77_OTHER_LIBS='' F77_REAL16='1275072555' F77_REAL4='1275069479' F77_REAL8='1275070505' FC='gfortran' FCCPP='' FCEXT='f90' FCFLAGS=' -O3' FCINC='-I' FCINCFLAG='-I' FCMODEXT='mod' FCMODINCFLAG='-I' FCMODINCSPEC='' FCMODOUTFLAG='-J' FC_ALL_INTEGER_MODELS='' FC_DOUBLE_MODEL='' FC_INTEGER_MODEL='' FC_INTEGER_MODEL_MAP='' FC_OTHER_LIBS='' FC_REAL_MODEL='' FC_WORK_FILES_ARG='' FFLAGS=' -O3' FGREP='/bin/grep -F' FILE='' FLIBS=' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -lgfortran -lmingw32 -lmoldname -lmingwex -lmsvcrt -lquadmath -lm -lpthread -ladvapi32 -lshell32 -luser32 -lkernel32 -liconv' FORTRAN_BINDING='1' FORTRAN_MPI_OFFSET='' FWRAPNAME='fmpich' GCOV='' GNUCXX_MINORVERSION='8' GNUCXX_VERSION='4' GREP='/bin/grep' HAVE_CXX_EXCEPTIONS='1' HAVE_ERROR_CHECKING='0' HAVE_ROMIO='#include "mpio.h"' INCLUDE_MPICXX_H='#include "mpicxx.h"' INSTALL_DATA='${INSTALL} -m 644' INSTALL_PROGRAM='${INSTALL}' INSTALL_SCRIPT='${INSTALL}' INSTALL_STRIP_PROGRAM='$(install_sh) -c -s' INTEGER_KIND='4' KILLALL='true' LD='d:/toolchains/x64/mingw-w64/4.8.0/x86_64-w64-mingw32/bin/ld.exe' LDFLAGS=' ' LIBOBJS='' LIBS=' ' LIBTOOL='$(SHELL) $(top_builddir)/libtool' LIPO='' LN_S='cp -pR' LPMPILIBNAME='' LTLIBOBJS='' MAINT='' MAINTAINER_MODE_FALSE='#' MAINTAINER_MODE_TRUE='' MAKE='make' MAKEINFO='${SHELL} /d/Distributions/mpich-3.0.4/confdb/missing --run makeinfo' MANIFEST_TOOL=':' MKDIR_P='mkdir -p' MPIBASEMODNAME='mpi_base' MPICHLIB_CFLAGS='' MPICHLIB_CPPFLAGS='' MPICHLIB_CXXFLAGS='' MPICHLIB_FCFLAGS='' MPICHLIB_FFLAGS='' MPICHLIB_LDFLAGS='' MPICHLIB_LIBS='' MPICH_NUMVERSION='30004300' MPICH_RELEASE_DATE='Wed Apr 24 10:08:10 CDT 2013' MPICH_TIMER_KIND='' MPICH_VERSION='3.0.4' MPICONSTMODNAME='mpi_constants' MPICXXLIBNAME='mpichcxx' MPID_TIMER_TYPE='' MPIFLIBNAME='mpich' MPIFPMPI=',PMPI_WTIME,PMPI_WTICK' MPILIBNAME='mpich' MPIMODNAME='mpi' MPIR_CXX_BOOL='0x4c000133' MPIR_CXX_COMPLEX='0x4c000834' MPIR_CXX_DOUBLE_COMPLEX='0x4c001035' MPIR_CXX_LONG_DOUBLE_COMPLEX='0x4c002036' MPIR_PINT='' MPISIZEOFMODNAME='mpi_sizeofs' MPIU_DLL_SPEC_DEF='' MPI_2COMPLEX='1275072548' MPI_2DOUBLE_COMPLEX='1275076645' MPI_2DOUBLE_PRECISION='1275072547' MPI_2INT='0x4c000816' MPI_2INTEGER='1275070496' MPI_2REAL='1275070497' MPI_AINT='' MPI_AINT_DATATYPE='' MPI_AINT_FMT_DEC_SPEC='' MPI_AINT_FMT_HEX_SPEC='' MPI_BYTE='0x4c00010d' MPI_CHAR='0x4c000101' MPI_CHARACTER='1275068698' MPI_COMPLEX16='0x4c00102a' MPI_COMPLEX32='0x4c00202c' MPI_COMPLEX8='0x4c000828' MPI_COMPLEX='1275070494' MPI_COUNT='' MPI_COUNT_DATATYPE='' MPI_C_BOOL='0x4c00013f' 
MPI_C_DOUBLE_COMPLEX='0x4c001041' MPI_C_FLOAT_COMPLEX='0x4c000840' MPI_C_LONG_DOUBLE_COMPLEX='0x4c002042' MPI_DOUBLE='0x4c00080b' MPI_DOUBLE_COMPLEX='1275072546' MPI_DOUBLE_INT='0x8c000001' MPI_DOUBLE_PRECISION='1275070495' MPI_F77_2INT='1275070486' MPI_F77_AINT='MPI_DATATYPE_NULL' MPI_F77_BYTE='1275068685' MPI_F77_CHAR='1275068673' MPI_F77_COUNT='' MPI_F77_CXX_BOOL='1275068723' MPI_F77_CXX_DOUBLE_COMPLEX='1275072565' MPI_F77_CXX_FLOAT_COMPLEX='1275070516' MPI_F77_CXX_LONG_DOUBLE_COMPLEX='1275076662' MPI_F77_C_BOOL='1275068735' MPI_F77_C_COMPLEX='1275070528' MPI_F77_C_DOUBLE_COMPLEX='1275072577' MPI_F77_C_FLOAT_COMPLEX='1275070528' MPI_F77_C_LONG_DOUBLE_COMPLEX='1275076674' MPI_F77_DOUBLE='1275070475' MPI_F77_DOUBLE_INT='-1946157055' MPI_F77_FLOAT='1275069450' MPI_F77_FLOAT_INT='-1946157056' MPI_F77_INT16_T='1275068984' MPI_F77_INT32_T='1275069497' MPI_F77_INT64_T='1275070522' MPI_F77_INT8_T='1275068727' MPI_F77_INT='1275069445' MPI_F77_LB='1275068432' MPI_F77_LONG='1275069447' MPI_F77_LONG_DOUBLE='1275072524' MPI_F77_LONG_DOUBLE_INT='-1946157052' MPI_F77_LONG_INT='-1946157054' MPI_F77_LONG_LONG='1275070473' MPI_F77_LONG_LONG_INT='1275070473' MPI_F77_OFFSET='MPI_DATATYPE_NULL' MPI_F77_PACKED='1275068687' MPI_F77_SHORT='1275068931' MPI_F77_SHORT_INT='-1946157053' MPI_F77_SIGNED_CHAR='1275068696' MPI_F77_UB='1275068433' MPI_F77_UINT16_T='1275068988' MPI_F77_UINT32_T='1275069501' MPI_F77_UINT64_T='1275070526' MPI_F77_UINT8_T='1275068731' MPI_F77_UNSIGNED='1275069446' MPI_F77_UNSIGNED_CHAR='1275068674' MPI_F77_UNSIGNED_LONG='1275069448' MPI_F77_UNSIGNED_LONG_LONG='1275070489' MPI_F77_UNSIGNED_SHORT='1275068932' MPI_F77_WCHAR='1275068942' MPI_FINT='int' MPI_FLOAT='0x4c00040a' MPI_FLOAT_INT='0x8c000000' MPI_INT16_T='0x4c000238' MPI_INT32_T='0x4c000439' MPI_INT64_T='0x4c00083a' MPI_INT8_T='0x4c000137' MPI_INT='0x4c000405' MPI_INTEGER16='MPI_DATATYPE_NULL' MPI_INTEGER1='0x4c00012d' MPI_INTEGER2='0x4c00022f' MPI_INTEGER4='0x4c000430' MPI_INTEGER8='0x4c000831' MPI_INTEGER='1275069467' MPI_LB='0x4c000010' MPI_LOGICAL='1275069469' MPI_LONG='0x4c000407' MPI_LONG_DOUBLE='0x4c00100c' MPI_LONG_DOUBLE_INT='0x8c000004' MPI_LONG_INT='0x8c000002' MPI_LONG_LONG='0x4c000809' MPI_MAX_ERROR_STRING='' MPI_MAX_LIBRARY_VERSION_STRING='' MPI_MAX_PROCESSOR_NAME='' MPI_OFFSET='' MPI_OFFSET_DATATYPE='' MPI_OFFSET_TYPEDEF='' MPI_PACKED='0x4c00010f' MPI_REAL16='0x4c00102b' MPI_REAL4='0x4c000427' MPI_REAL8='0x4c000829' MPI_REAL='1275069468' MPI_SHORT='0x4c000203' MPI_SHORT_INT='0x8c000003' MPI_SIGNED_CHAR='0x4c000118' MPI_STATUS_SIZE='' MPI_UB='0x4c000011' MPI_UINT16_T='0x4c00023c' MPI_UINT32_T='0x4c00043d' MPI_UINT64_T='0x4c00083e' MPI_UINT8_T='0x4c00013b' MPI_UNSIGNED_CHAR='0x4c000102' MPI_UNSIGNED_INT='0x4c000406' MPI_UNSIGNED_LONG='0x4c000408' MPI_UNSIGNED_LONG_LONG='0x4c000819' MPI_UNSIGNED_SHORT='0x4c000204' MPI_WCHAR='0x4c00020e' NM='/d/Toolchains/x64/MinGW-w64/4.8.0/bin/nm' NMEDIT='' OBJDUMP='objdump' OBJEXT='o' OFFSET_KIND='8' OTOOL64='' OTOOL='' PACKAGE='mpich' PACKAGE_BUGREPORT='mpich-discuss at mcs.anl.gov' PACKAGE_NAME='MPICH' PACKAGE_STRING='MPICH 3.0.4' PACKAGE_TARNAME='mpich' PACKAGE_URL='http://www.mpich.org/' PACKAGE_VERSION='3.0.4' PAPI_INCLUDE='' PATH_SEPARATOR=':' PERL='/bin/perl' PMPIFLIBNAME='pmpich' PMPILIBNAME='pmpich' PRIMARY_PM_GFORKER_FALSE='' PRIMARY_PM_GFORKER_TRUE='' PRIMARY_PM_REMSHELL_FALSE='' PRIMARY_PM_REMSHELL_TRUE='' PRIMARY_PM_SMPD_FALSE='' PRIMARY_PM_SMPD_TRUE='' RANLIB='ranlib' REQD='' REQI1='' REQI2='' REQI8='' RSH='' SED='/bin/sed' SET_CFLAGS='CFLAGS=' SET_MAKE='MAKE=make' 
SHELL='/bin/sh' SIZEOF_FC_CHARACTER='1' SIZEOF_FC_DOUBLE_PRECISION='8' SIZEOF_FC_INTEGER='4' SIZEOF_FC_REAL='4' SIZEOF_MPI_STATUS='' SMPD_SOCK_IS_POLL_FALSE='' SMPD_SOCK_IS_POLL_TRUE='' SSH='' STRIP='strip' TCP_LIBS='' THREAD_SERIALIZED_OR_MULTIPLE_FALSE='' THREAD_SERIALIZED_OR_MULTIPLE_TRUE='' USER_CFLAGS='' USER_CPPFLAGS='' USER_CXXFLAGS='' USER_FCFLAGS='' USER_FFLAGS='' USER_LDFLAGS='' USER_LIBS='' USE_DBG_LOGGING='0' VERSION='3.0.4' VPATH='VPATH=.:${srcdir}' WRAPPER_CFLAGS=' ' WRAPPER_CPPFLAGS=' ' WRAPPER_CXXFLAGS=' ' WRAPPER_FCFLAGS=' ' WRAPPER_FFLAGS=' ' WRAPPER_LDFLAGS='' WRAPPER_LIBS='-lopa -lmpl ' WTIME_DOUBLE_TYPE='REAL*8' XARGS_NODATA_OPT='-r' ac_ct_AR='ar' ac_ct_CC='gcc' ac_ct_CXX='c++' ac_ct_DUMPBIN='' ac_ct_F77='gfortran' ac_ct_FC='gfortran' am__EXEEXT_FALSE='' am__EXEEXT_TRUE='' am__fastdepCC_FALSE='#' am__fastdepCC_TRUE='' am__fastdepCXX_FALSE='#' am__fastdepCXX_TRUE='' am__include='include' am__isrc=' -I$(srcdir)' am__leading_dot='.' am__nodep='_no' am__quote='' am__tar='$${TAR-tar} chof - "$$tardir"' am__untar='$${TAR-tar} xf -' bindings=' f77 f90 cxx' bindir='${exec_prefix}/bin' build='i686-pc-mingw32' build_alias='' build_cpu='i686' build_os='mingw32' build_vendor='pc' channel_name='nemesis' datadir='${datarootdir}' datarootdir='${prefix}/share' device_name='ch3' docdir='${datarootdir}/doc/${PACKAGE_TARNAME}' dvidir='${docdir}' enable_wrapper_rpath='no' exec_prefix='NONE' host='i686-pc-mingw32' host_alias='' host_cpu='i686' host_os='mingw32' host_vendor='pc' htmldir='${docdir}' includedir='${prefix}/include' infodir='${datarootdir}/info' install_sh='${SHELL} /d/Distributions/mpich-3.0.4/confdb/install-sh' libdir='${exec_prefix}/lib' libexecdir='${exec_prefix}/libexec' libmpich_so_version='10:4:0' localedir='${datarootdir}/locale' localstatedir='${prefix}/var' mandir='${datarootdir}/man' master_top_builddir='/d/Distributions/mpich-3.0.4/build' master_top_srcdir='/d/Distributions/mpich-3.0.4' mkdir_p='$(MKDIR_P)' mmx_copy_s='' mpich_libtool_static_flag='' nemesis_nets_array='' nemesis_nets_array_sz='' nemesis_nets_dirs='' nemesis_nets_func_array='' nemesis_nets_func_decl='' nemesis_nets_macro_defs='' nemesis_nets_strings='' nemesis_networks='tcp' oldincludedir='/usr/include' pdfdir='${docdir}' pm_name='hydra' prefix='D:/Libraries/x64/MinGW-w64/4.8.0/MPICH' program_transform_name='s,x,x,' psdir='${docdir}' sbindir='${exec_prefix}/sbin' sharedstatedir='${prefix}/com' subdirs='' sysconfdir='${prefix}/etc' target_alias='' ## ------------------- ## ## File substitutions. ## ## ------------------- ## cc_shlib_conf='src/env/cc_shlib.conf' cxx_shlib_conf='src/env/cxx_shlib.conf' f77_shlib_conf='src/env/f77_shlib.conf' fc_shlib_conf='src/env/fc_shlib.conf' ## ----------- ## ## confdefs.h. 
## ## ----------- ## /* confdefs.h */ #define PACKAGE_NAME "MPICH" #define PACKAGE_TARNAME "mpich" #define PACKAGE_VERSION "3.0.4" #define PACKAGE_STRING "MPICH 3.0.4" #define PACKAGE_BUGREPORT "mpich-discuss at mcs.anl.gov" #define PACKAGE_URL "http://www.mpich.org/" #define USE_SMP_COLLECTIVES 1 #define PACKAGE "mpich" #define VERSION "3.0.4" #define STDC_HEADERS 1 #define HAVE_SYS_TYPES_H 1 #define HAVE_SYS_STAT_H 1 #define HAVE_STDLIB_H 1 #define HAVE_STRING_H 1 #define HAVE_MEMORY_H 1 #define HAVE_STRINGS_H 1 #define HAVE_INTTYPES_H 1 #define HAVE_STDINT_H 1 #define HAVE_UNISTD_H 1 #define LT_OBJDIR ".libs/" #define HAVE_GETPAGESIZE 1 #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL #define USE_LOGGING MPID_LOGGING_NONE #define HAVE_RUNTIME_THREADCHECK 1 #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE #define HAVE_ROMIO 1 #define HAVE__FUNC__ /**/ #define HAVE__FUNCTION__ /**/ #define ENABLE_NEM_STATISTICS 1 #define ENABLE_RECVQ_STATISTICS 1 #define HAVE_LONG_LONG 1 #define STDCALL #define F77_NAME_LOWER_USCORE 1 #define HAVE_MPI_F_INIT_WORKS_WITH_C 1 #define HAVE_FORTRAN_BINDING 1 #define HAVE_CXX_EXCEPTIONS /**/ #define HAVE_NAMESPACES /**/ #define HAVE_NAMESPACE_STD /**/ #define HAVE_CXX_BINDING 1 #define FILE_NAMEPUB_BASEDIR "." #define USE_FILE_FOR_NAMEPUB 1 #define HAVE_NAMEPUB_SERVICE 1 #define restrict __restrict #define HAVE_GCC_ATTRIBUTE 1 #define WORDS_LITTLEENDIAN 1 #define HAVE_LONG_DOUBLE 1 #define HAVE_LONG_LONG_INT 1 #define HAVE_MAX_INTEGER_ALIGNMENT 8 #define HAVE_MAX_STRUCT_ALIGNMENT 8 #define HAVE_MAX_FP_ALIGNMENT 16 #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8 #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16 #define SIZEOF_CHAR 1 #define SIZEOF_UNSIGNED_CHAR 1 #define SIZEOF_SHORT 2 #define SIZEOF_UNSIGNED_SHORT 2 #define SIZEOF_INT 4 #define SIZEOF_UNSIGNED_INT 4 #define SIZEOF_LONG 4 #define SIZEOF_UNSIGNED_LONG 4 #define SIZEOF_LONG_LONG 8 #define SIZEOF_UNSIGNED_LONG_LONG 8 #define SIZEOF_FLOAT 4 #define SIZEOF_DOUBLE 8 #define SIZEOF_LONG_DOUBLE 16 #define SIZEOF_VOID_P 8 #define STDC_HEADERS 1 #define HAVE_STDDEF_H 1 #define SIZEOF_WCHAR_T 2 #define SIZEOF_FLOAT_INT 8 #define SIZEOF_DOUBLE_INT 16 #define SIZEOF_LONG_INT 8 #define SIZEOF_SHORT_INT 8 #define SIZEOF_TWO_INT 8 #define SIZEOF_LONG_DOUBLE_INT 32 #define HAVE_INTTYPES_H 1 #define HAVE_STDINT_H 1 #define HAVE_INT8_T 1 #define HAVE_INT16_T 1 #define HAVE_INT32_T 1 #define HAVE_INT64_T 1 #define HAVE_UINT8_T 1 #define HAVE_UINT16_T 1 #define HAVE_UINT32_T 1 #define HAVE_UINT64_T 1 #define HAVE_STDBOOL_H 1 #define HAVE_COMPLEX_H 1 #define SIZEOF__BOOL 1 #define SIZEOF_FLOAT__COMPLEX 8 #define SIZEOF_DOUBLE__COMPLEX 16 #define SIZEOF_LONG_DOUBLE__COMPLEX 32 #define HAVE__BOOL 1 #define HAVE_FLOAT__COMPLEX 1 #define HAVE_DOUBLE__COMPLEX 1 #define HAVE_LONG_DOUBLE__COMPLEX 1 #define MPIR_REAL4_CTYPE float #define MPIR_REAL8_CTYPE double #define MPIR_REAL16_CTYPE long double #define MPIR_INTEGER1_CTYPE char #define MPIR_INTEGER2_CTYPE short #define MPIR_INTEGER4_CTYPE int #define MPIR_INTEGER8_CTYPE long long #define SIZEOF_F77_INTEGER 4 #define SIZEOF_F77_REAL 4 #define SIZEOF_F77_DOUBLE_PRECISION 8 #define MPIR_FC_REAL_CTYPE float #define MPIR_FC_DOUBLE_CTYPE double #define HAVE_AINT_LARGER_THAN_FINT 1 #define HAVE_AINT_DIFFERENT_THAN_FINT 1 #define HAVE_FINT_IS_INT 1 #define F77_TRUE_VALUE_SET 1 #define F77_TRUE_VALUE 1 #define 
F77_FALSE_VALUE 0 #define SIZEOF_BOOL 1 #define MPIR_CXX_BOOL_CTYPE _Bool #define SIZEOF_COMPLEX 8 #define SIZEOF_DOUBLECOMPLEX 16 #define SIZEOF_LONGDOUBLECOMPLEX 32 #define HAVE_CXX_COMPLEX 1 #define MPIR_CXX_BOOL_VALUE 0x4c000133 #define MPIR_CXX_COMPLEX_VALUE 0x4c000834 #define MPIR_CXX_DOUBLE_COMPLEX_VALUE 0x4c001035 #define MPIR_CXX_LONG_DOUBLE_COMPLEX_VALUE 0x4c002036 #define SIZEOF_MPIR_BSEND_DATA_T 0

configure: exit 1

From wbland at mcs.anl.gov  Mon Jun 17 13:25:53 2013
From: wbland at mcs.anl.gov (Wesley Bland)
Date: Mon, 17 Jun 2013 13:25:53 -0500
Subject: [mpich-discuss] Troubles Building MPICH on MinGW-w64 (GCC 4.8.0)
In-Reply-To: <51BF52A8.2020209@gmail.com>
References: <51BF4E96.2050002@gmail.com> <51BF507B.4030502@gmail.com> <240663FA-F8B5-45F6-8DDB-52C46F887A18@mcs.anl.gov> <51BF52A8.2020209@gmail.com>
Message-ID: <8670FBFD-362E-4426-84F8-98CF5FCDCEBA@mcs.anl.gov>

Unfortunately, we don't support Windows installations anymore. You can try going back to the last version that was supported (MPICH2 1.4.1p1) and see if that will work for you, but there are undoubtedly things in the newer versions that will keep MPICH from building.

Wesley

On Jun 17, 2013, at 1:17 PM, Haroogan wrote:

>> Which version of MPICH are you trying to build?
> mpich-3.0.4 (stable release)
>
>> Can you send us the config.log?
> I'm not sure what you want to see there, but here is the attachment.
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

From haroogan at gmail.com  Mon Jun 17 13:29:51 2013
From: haroogan at gmail.com (Haroogan)
Date: Mon, 17 Jun 2013 20:29:51 +0200
Subject: [mpich-discuss] Troubles Building MPICH on MinGW-w64 (GCC 4.8.0)
In-Reply-To: <8670FBFD-362E-4426-84F8-98CF5FCDCEBA@mcs.anl.gov>
References: <51BF4E96.2050002@gmail.com> <51BF507B.4030502@gmail.com> <240663FA-F8B5-45F6-8DDB-52C46F887A18@mcs.anl.gov> <51BF52A8.2020209@gmail.com> <8670FBFD-362E-4426-84F8-98CF5FCDCEBA@mcs.anl.gov>
Message-ID: <51BF559F.6040704@gmail.com>

> Unfortunately, we don't support Windows installations anymore.
Oh my god, that's so sad. What are the reasons for dropping the whole platform like that? I mean, MinGW-w64 is a very mature toolchain, and it is gaining momentum pretty fast. Are there any other reasons besides the toolchain?

From balaji at mcs.anl.gov  Mon Jun 17 13:44:32 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Mon, 17 Jun 2013 13:44:32 -0500
Subject: [mpich-discuss] Troubles Building MPICH on MinGW-w64 (GCC 4.8.0)
In-Reply-To: <51BF559F.6040704@gmail.com>
References: <51BF4E96.2050002@gmail.com> <51BF507B.4030502@gmail.com> <240663FA-F8B5-45F6-8DDB-52C46F887A18@mcs.anl.gov> <51BF52A8.2020209@gmail.com> <8670FBFD-362E-4426-84F8-98CF5FCDCEBA@mcs.anl.gov> <51BF559F.6040704@gmail.com>
Message-ID: <51BF5910.9080801@mcs.anl.gov>

On 06/17/2013 01:29 PM, Haroogan wrote:
>> Unfortunately, we don't support Windows installations anymore.
> Oh my god, that's so sad. What are the reasons for dropping the whole
> platform like that? I mean, MinGW-w64 is a very mature toolchain, and
> it is gaining momentum pretty fast. Are there any other reasons besides
> the toolchain?

We don't have much Windows expertise in the group to support it. But UNIX emulations like MinGW might be OK (though they are not regularly tested).

FWIW, the error seems to be this:

/d/Distributions/mpich-3.0.4/src/include/mpibsend.h: No such file or directory

Did you download the entire tarball correctly?

  -- Pavan

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji
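A quick way to check Pavan's hypothesis is to look for the header configure complained about, both in the unpacked tree and in the downloaded archive itself. A minimal sketch, assuming the mpich-3.0.4.tar.gz tarball is still on hand next to the unpacked tree (the header path is taken verbatim from the error above):

    # Does the unpacked source tree contain the missing header?
    ls /d/Distributions/mpich-3.0.4/src/include/mpibsend.h
    # Does the downloaded archive contain it, i.e. was the download complete?
    tar -tzf mpich-3.0.4.tar.gz | grep 'src/include/mpibsend.h'

If the second command finds the header but the first does not, the unpacking was incomplete; if neither does, the download itself is the likely culprit.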
From haroogan at gmail.com  Mon Jun 17 13:45:43 2013
From: haroogan at gmail.com (Haroogan)
Date: Mon, 17 Jun 2013 20:45:43 +0200
Subject: [mpich-discuss] Troubles Building MPICH on MinGW-w64 (GCC 4.8.0)
In-Reply-To: <8670FBFD-362E-4426-84F8-98CF5FCDCEBA@mcs.anl.gov>
References: <51BF4E96.2050002@gmail.com> <51BF507B.4030502@gmail.com> <240663FA-F8B5-45F6-8DDB-52C46F887A18@mcs.anl.gov> <51BF52A8.2020209@gmail.com> <8670FBFD-362E-4426-84F8-98CF5FCDCEBA@mcs.anl.gov>
Message-ID: <51BF5957.2010808@gmail.com>

On 17-Jun-13 20:25, Wesley Bland wrote:
> Unfortunately, we don't support Windows installations anymore. You can
> try going back to the last version that was supported (MPICH2 1.4.1p1)
> and see if that will work for you, but there are undoubtedly things in
> the newer versions that will keep MPICH from building.

Following your suggestion, I get the following this time:

configure: error: cannot support shared memory: need either sysv shared memory functions or mmap in order to support shared memory
configure: error: channels/nemesis configure failed
configure: error: src/mpid/ch3 configure failed

See the attachment for config.log.

-------------- next part --------------
This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. It was created by configure, which was generated by GNU Autoconf 2.63. Invocation command line was $ ../configure -prefix=D:/Libraries/x64/MinGW-w64/4.8.0/MPICH/1.4.1 --enable-fast=all,O3 ## --------- ## ## Platform. ## ## --------- ## hostname = Haroogan-PC uname -m = i686 uname -r = 1.0.17(0.48/3/2) uname -s = MINGW32_NT-6.1 uname -v = 2011-04-24 23:39 /usr/bin/uname -p = unknown /bin/uname -X = unknown /bin/arch = unknown /usr/bin/arch -k = unknown /usr/convex/getsysinfo = unknown /usr/bin/hostinfo = unknown /bin/machine = unknown /usr/bin/oslevel = unknown /bin/universe = unknown PATH: . PATH: /usr/local/bin PATH: /mingw/bin PATH: /bin PATH: /c/Windows PATH: /c/Windows/System32 PATH: /c/Windows/System32/Wbem PATH: /c/Windows/System32/sysprep PATH: /c/Windows/System32/WindowsPowerShell/v1.0 PATH: /d/ProgramFiles/x64/Windows Imaging PATH: /d/ProgramFiles/x64/Intel/Intel(R) Management Engine Components/DAL PATH: /d/ProgramFiles/x64/Intel/Intel(R) Management Engine Components/IPT PATH: /d/ProgramFiles/x86/Intel/Intel(R) Management Engine Components/DAL PATH: /d/ProgramFiles/x86/Intel/Intel(R) Management Engine Components/IPT PATH: /d/ProgramFiles/x86/Intel/iCLS Client/ PATH: /d/ProgramFiles/x64/Intel/iCLS Client/ PATH: /d/ProgramFiles/x86/NVIDIA Corporation/PhysX/Common PATH: /d/Users/Haroogan/Environment PATH: /usr/bin PATH: /d/Toolchains/x64/MinGW-w64/4.8.0/bin PATH: /d/Toolchains/x64/LLVM/3.3/bin PATH: /d/Tools/Ninja/bin PATH: /d/Applications/Vim PATH: /d/Applications/Python 2.7.3 PATH: /d/Applications/Python 2.7.3/Scripts PATH: /d/Applications/ConTeXt/tex/texmf-mswin/bin PATH: /d/Applications/Microsoft Visual Studio 2012/VC/bin/x86_amd64 PATH: /d/Applications/Microsoft Visual Studio 2012/Common7/IDE PATH: /d/Libraries/x64/MinGW-w64/4.7.2/GCF/2.6.2/bin PATH: /d/Applications/Emacs/bin PATH: /d/Tools/PuTTY/bin ## ----------- ## ## Core tests.
## ## ----------- ## configure:3121: checking for gcc configure:3137: found /d/Toolchains/x64/MinGW-w64/4.8.0/bin/gcc configure:3148: result: gcc configure:3182: checking for C compiler version configure:3190: gcc --version >&5 gcc.exe (rev2, Built by MinGW-builds project) 4.8.0 Copyright (C) 2013 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. configure:3194: $? = 0 configure:3201: gcc -v >&5 Using built-in specs. COLLECT_GCC=d:\Toolchains\x64\MinGW-w64\4.8.0\bin\gcc.exe COLLECT_LTO_WRAPPER=d:/toolchains/x64/mingw-w64/4.8.0/bin/../libexec/gcc/x86_64-w64-mingw32/4.8.0/lto-wrapper.exe Target: x86_64-w64-mingw32 Configured with: ../../../src/gcc-4.8.0/configure --host=x86_64-w64-mingw32 --build=x86_64-w64-mingw32 --target=x86_64-w64-mingw32 --prefix=/mingw64 --with-sysroot=/temp/x64-480-posix-seh-r2/mingw64 --enable-shared --enable-static --disable-multilib --enable-languages=c,c++,fortran,lto --enable-libstdcxx-time=yes --enable-threads=posix --enable-libgomp --enable-lto --enable-graphite --enable-checking=release --enable-fully-dynamic-string --enable-version-specific-runtime-libs --disable-isl-version-check --disable-cloog-version-check --disable-libstdcxx-pch --disable-libstdcxx-debug --disable-bootstrap --disable-rpath --disable-win32-registry --disable-nls --disable-werror --disable-symvers --with-gnu-as --with-gnu-ld --with-arch=nocona --with-tune=core2 --with-host-libstdcxx='-static -lstdc++' --with-libiconv --with-system-zlib --with-gmp=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpfr=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpc=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-isl=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-cloog=/temp/mingw-prereq/x86_64-w64-mingw32-static --enable-cloog-backend=isl --with-pkgversion='rev2, Built by MinGW-builds project' --with-bugurl=http://sourceforge.net/projects/mingwbuilds/ CFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CXXFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CPPFLAGS= LDFLAGS='-pipe -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' Thread model: posix gcc version 4.8.0 (rev2, Built by MinGW-builds project) configure:3205: $? = 0 configure:3212: gcc -V >&5 gcc.exe: error: unrecognized command line option '-V' gcc.exe: fatal error: no input files compilation terminated. configure:3216: $? = 1 configure:3239: checking for C compiler default output file name configure:3261: gcc conftest.c >&5 configure:3265: $? = 0 configure:3303: result: a.exe configure:3322: checking whether the C compiler works configure:3332: ./a.exe configure:3336: $? = 0 configure:3355: result: yes configure:3362: checking whether we are cross compiling configure:3364: result: no configure:3367: checking for suffix of executables configure:3374: gcc -o conftest.exe conftest.c >&5 configure:3378: $? = 0 configure:3404: result: .exe configure:3410: checking for suffix of object files configure:3436: gcc -c conftest.c >&5 configure:3440: $? 
= 0 configure:3465: result: o configure:3469: checking whether we are using the GNU C compiler configure:3498: gcc -c conftest.c >&5 configure:3505: $? = 0 configure:3522: result: yes configure:3531: checking whether gcc accepts -g configure:3561: gcc -c -g conftest.c >&5 configure:3568: $? = 0 configure:3669: result: yes configure:3686: checking for gcc option to accept ISO C89 configure:3760: gcc -c conftest.c >&5 configure:3767: $? = 0 configure:3790: result: none needed configure:3826: checking how to run the C preprocessor configure:3866: gcc -E conftest.c configure:3873: $? = 0 configure:3904: gcc -E conftest.c conftest.c:9:28: fatal error: ac_nonexistent.h: No such file or directory #include ^ compilation terminated. configure:3911: $? = 1 configure: failed program was: | /* confdefs.h. */ | #define PACKAGE_NAME "" | #define PACKAGE_TARNAME "" | #define PACKAGE_VERSION "" | #define PACKAGE_STRING "" | #define PACKAGE_BUGREPORT "" | #define USE_SMP_COLLECTIVES 1 | /* end confdefs.h. */ | #include configure:3944: result: gcc -E configure:3973: gcc -E conftest.c configure:3980: $? = 0 configure:4011: gcc -E conftest.c conftest.c:9:28: fatal error: ac_nonexistent.h: No such file or directory #include ^ compilation terminated. configure:4018: $? = 1 configure: failed program was: | /* confdefs.h. */ | #define PACKAGE_NAME "" | #define PACKAGE_TARNAME "" | #define PACKAGE_VERSION "" | #define PACKAGE_STRING "" | #define PACKAGE_BUGREPORT "" | #define USE_SMP_COLLECTIVES 1 | /* end confdefs.h. */ | #include configure:4080: checking build system type configure:4098: result: i686-pc-mingw32 configure:4120: checking host system type configure:4135: result: i686-pc-mingw32 configure:4172: checking for grep that handles long lines and -e configure:4232: result: /bin/grep configure:4237: checking for fgrep configure:4301: result: /bin/grep -F configure:6845: ===== configuring src/mpl ===== configure:6952: executing: /d/Distributions/mpich2-1.4.1p1/src/mpl/configure '-prefix=D:/Libraries/x64/MinGW-w64/4.8.0/MPICH/1.4.1' '--enable-fast=all,O3' --disable-option-checking configure:6971: ===== done with src/mpl configure ===== WRAPPER_LIBS(='') does not contain '-lmpl', prepending CPPFLAGS(='') does not contain '-I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include', appending CPPFLAGS(=' -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include') does not contain '-I/d/Distributions/mpich2-1.4.1p1/src/mpl/include', appending LIBS(=' ') does not contain '-lopa', prepending configure:7069: gcc -o conftest.exe -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include conftest.c -lopa >&5 conftest.c:11:28: fatal error: opa_primitives.h: No such file or directory #include "opa_primitives.h" ^ compilation terminated. configure:7076: $? = 1 configure: failed program was: | /* confdefs.h. */ | #define PACKAGE_NAME "" | #define PACKAGE_TARNAME "" | #define PACKAGE_VERSION "" | #define PACKAGE_STRING "" | #define PACKAGE_BUGREPORT "" | #define USE_SMP_COLLECTIVES 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | /* end confdefs.h. 
*/ | #include "opa_primitives.h" | | int | main () | { | | OPA_int_t i; | OPA_store_int(i,10); | OPA_fetch_and_incr_int(&i,5); | | ; | return 0; | } CPPFLAGS(=' -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include') does not contain '-I/d/Distributions/mpich2-1.4.1p1/src/openpa/src', appending CPPFLAGS(=' -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src') does not contain '-I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src', appending configure:7150: ===== configuring src/openpa ===== configure:7257: executing: /d/Distributions/mpich2-1.4.1p1/src/openpa/configure --with-atomic-primitives=auto_allow_emulation '-prefix=D:/Libraries/x64/MinGW-w64/4.8.0/MPICH/1.4.1' '--enable-fast=all,O3' --disable-option-checking configure:7276: ===== done with src/openpa configure ===== WRAPPER_LIBS(='-lmpl ') does not contain '-lopa', prepending configure:8062: checking whether the compiler defines __func__ configure:8099: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5 configure:8103: $? = 0 configure:8109: ./conftest.exe configure:8113: $? = 0 configure:8186: result: yes configure:8197: checking whether the compiler defines __FUNC__ configure:8234: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5 conftest.c: In function 'foo': conftest.c:24:20: error: '__FUNC__' undeclared (first use in this function) return (strcmp(__FUNC__, "foo") == 0); ^ conftest.c:24:20: note: each undeclared identifier is reported only once for each function it appears in configure:8238: $? = 1 configure: program exited with status 1 configure: failed program was: | | /* confdefs.h. */ | #define PACKAGE_NAME "" | #define PACKAGE_TARNAME "" | #define PACKAGE_VERSION "" | #define PACKAGE_STRING "" | #define PACKAGE_BUGREPORT "" | #define USE_SMP_COLLECTIVES 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | /* end confdefs.h. */ | | #include | int foo(void); | int foo(void) | { | return (strcmp(__FUNC__, "foo") == 0); | } | int main(int argc, char ** argv) | { | return (foo() ? 0 : 1); | } | | configure:8321: result: no configure:8332: checking whether the compiler sets __FUNCTION__ configure:8369: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5 configure:8373: $? = 0 configure:8379: ./conftest.exe configure:8383: $? 
= 0 configure:8456: result: yes configure:8474: checking whether C compiler accepts option -O3 configure:8544: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c > pac_test1.log 2>&1 configure:8551: $? = 0 configure:8603: gcc -o conftest.exe -O3 -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c > pac_test2.log 2>&1 configure:8610: $? = 0 configure:8625: diff -b pac_test1.log pac_test2.log > pac_test.log configure:8628: $? = 0 configure:8749: result: yes configure:8765: checking whether C compiler option -O3 works with an invalid prototype program configure:8783: gcc -o conftest.exe -O3 -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5 conftest.c: In function 'main': conftest.c:20:14: warning: 'return' with a value, in function returning void [enabled by default] void main(){ return 0; } ^ configure:8790: $? = 0 configure:8809: result: yes configure:8814: checking whether routines compiled with -O3 can be linked with ones compiled without -O3 configure:8872: gcc -c -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c > pac_test3.log 2>&1 configure:8879: $? = 0 configure:8887: mv conftest.o pac_conftest.o configure:8890: $? = 0 configure:8951: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c pac_conftest.o > pac_test4.log 2>&1 configure:8958: $? = 0 configure:9025: gcc -o conftest.exe -O3 -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c pac_conftest.o > pac_test5.log 2>&1 configure:9032: $? = 0 configure:9047: diff -b pac_test4.log pac_test5.log > pac_test.log configure:9050: $? = 0 configure:9219: result: yes configure:9255: checking for type of weak symbol alias support configure:9289: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5 configure:9296: $? = 0 configure:9344: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5 configure:9351: $? = 0 configure:9358: mv conftest.o pac_conftest.o configure:9361: $? 
= 0 configure:9416: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c pac_conftest.o >&5 D:\Users\Haroogan\AppData\Local\Temp\ccsYRhiz.o:conftest.c:(.text.startup+0x10): undefined reference to `PFoo' collect2.exe: error: ld returned 1 exit status configure:9423: $? = 1 configure: failed program was: | | /* confdefs.h. */ | #define PACKAGE_NAME "" | #define PACKAGE_TARNAME "" | #define PACKAGE_VERSION "" | #define PACKAGE_STRING "" | #define PACKAGE_BUGREPORT "" | #define USE_SMP_COLLECTIVES 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | /* end confdefs.h. */ | | extern int PFoo(int); | int main(int argc, char **argv) { | return PFoo(0);} | | configure:9701: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5 D:\Users\Haroogan\AppData\Local\Temp\ccKc9sIw.o:conftest.c:(.text.startup+0x13): undefined reference to `PFoo' collect2.exe: error: ld returned 1 exit status configure:9708: $? = 1 configure: failed program was: | /* confdefs.h. */ | #define PACKAGE_NAME "" | #define PACKAGE_TARNAME "" | #define PACKAGE_VERSION "" | #define PACKAGE_STRING "" | #define PACKAGE_BUGREPORT "" | #define USE_SMP_COLLECTIVES 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | /* end confdefs.h. */ | | extern int PFoo(int); | #pragma _HP_SECONDARY_DEF Foo PFoo | int Foo(int a) { return a; } | | int | main () | { | return PFoo(1); | ; | return 0; | } configure:9754: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5 D:\Users\Haroogan\AppData\Local\Temp\ccmMv5ED.o:conftest.c:(.text.startup+0x13): undefined reference to `PFoo' collect2.exe: error: ld returned 1 exit status configure:9761: $? = 1 configure: failed program was: | /* confdefs.h. 
*/ | #define PACKAGE_NAME "" | #define PACKAGE_TARNAME "" | #define PACKAGE_VERSION "" | #define PACKAGE_STRING "" | #define PACKAGE_BUGREPORT "" | #define USE_SMP_COLLECTIVES 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | /* end confdefs.h. */ | | extern int PFoo(int); | #pragma _CRI duplicate PFoo as Foo | int Foo(int a) { return a; } | | int | main () | { | return PFoo(1); | ; | return 0; | } configure:9789: result: no configure:9816: checking whether __attribute__ ((weak)) allowed configure:9843: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5 configure:9850: $? = 0 configure:9865: result: yes configure:9869: checking whether __attribute__ ((weak_import)) allowed configure:9896: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5 conftest.c:19:1: warning: 'weak_import' attribute directive ignored [-Wattributes] int foo(int) __attribute__ ((weak_import)); ^ configure:9903: $? = 0 configure:9918: result: yes configure:9921: checking whether __attribute__((weak,alias(...))) allowed configure:9948: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5 conftest.c:19:5: error: 'foo' aliased to undefined symbol '__foo' int foo(int) __attribute__((weak,alias("__foo"))); ^ configure:9955: $? = 1 configure: failed program was: | /* confdefs.h. */ | #define PACKAGE_NAME "" | #define PACKAGE_TARNAME "" | #define PACKAGE_VERSION "" | #define PACKAGE_STRING "" | #define PACKAGE_BUGREPORT "" | #define USE_SMP_COLLECTIVES 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | /* end confdefs.h. 
*/ | int foo(int) __attribute__((weak,alias("__foo"))); | int | main () | { | int a; | ; | return 0; | } configure:9970: result: no configure:10289: checking for ifort configure:10319: result: no configure:10289: checking for pgf77 configure:10319: result: no configure:10289: checking for af77 configure:10319: result: no configure:10289: checking for xlf configure:10319: result: no configure:10289: checking for frt configure:10319: result: no configure:10289: checking for cf77 configure:10319: result: no configure:10289: checking for fort77 configure:10319: result: no configure:10289: checking for fl32 configure:10319: result: no configure:10289: checking for fort configure:10319: result: no configure:10289: checking for ifc configure:10319: result: no configure:10289: checking for efc configure:10319: result: no configure:10289: checking for ftn configure:10319: result: no configure:10289: checking for gfortran configure:10305: found /d/Toolchains/x64/MinGW-w64/4.8.0/bin/gfortran configure:10316: result: gfortran configure:10342: checking for Fortran 77 compiler version configure:10350: gfortran --version >&5 GNU Fortran (rev2, Built by MinGW-builds project) 4.8.0 Copyright (C) 2013 Free Software Foundation, Inc. GNU Fortran comes with NO WARRANTY, to the extent permitted by law. You may redistribute copies of GNU Fortran under the terms of the GNU General Public License. For more information about these matters, see the file named COPYING configure:10354: $? = 0 configure:10361: gfortran -v >&5 Using built-in specs. COLLECT_GCC=d:\Toolchains\x64\MinGW-w64\4.8.0\bin\gfortran.exe COLLECT_LTO_WRAPPER=d:/toolchains/x64/mingw-w64/4.8.0/bin/../libexec/gcc/x86_64-w64-mingw32/4.8.0/lto-wrapper.exe Target: x86_64-w64-mingw32 Configured with: ../../../src/gcc-4.8.0/configure --host=x86_64-w64-mingw32 --build=x86_64-w64-mingw32 --target=x86_64-w64-mingw32 --prefix=/mingw64 --with-sysroot=/temp/x64-480-posix-seh-r2/mingw64 --enable-shared --enable-static --disable-multilib --enable-languages=c,c++,fortran,lto --enable-libstdcxx-time=yes --enable-threads=posix --enable-libgomp --enable-lto --enable-graphite --enable-checking=release --enable-fully-dynamic-string --enable-version-specific-runtime-libs --disable-isl-version-check --disable-cloog-version-check --disable-libstdcxx-pch --disable-libstdcxx-debug --disable-bootstrap --disable-rpath --disable-win32-registry --disable-nls --disable-werror --disable-symvers --with-gnu-as --with-gnu-ld --with-arch=nocona --with-tune=core2 --with-host-libstdcxx='-static -lstdc++' --with-libiconv --with-system-zlib --with-gmp=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpfr=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpc=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-isl=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-cloog=/temp/mingw-prereq/x86_64-w64-mingw32-static --enable-cloog-backend=isl --with-pkgversion='rev2, Built by MinGW-builds project' --with-bugurl=http://sourceforge.net/projects/mingwbuilds/ CFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CXXFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CPPFLAGS= LDFLAGS='-pipe -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' Thread model: posix gcc version 4.8.0 
(rev2, Built by MinGW-builds project) configure:10365: $? = 0 configure:10372: gfortran -V >&5 gfortran.exe: error: unrecognized command line option '-V' gfortran.exe: fatal error: no input files compilation terminated. configure:10376: $? = 1 configure:10384: checking whether we are using the GNU Fortran 77 compiler configure:10403: gfortran -c conftest.F >&5 configure:10410: $? = 0 configure:10427: result: yes configure:10433: checking whether gfortran accepts -g configure:10450: gfortran -c -g conftest.f >&5 configure:10457: $? = 0 configure:10473: result: yes configure:10532: checking whether Fortran 77 compiler accepts option -O3 configure:10591: gfortran -o conftest.exe conftest.f > pac_test1.log 2>&1 configure:10598: $? = 0 configure:10650: gfortran -o conftest.exe -O3 conftest.f > pac_test2.log 2>&1 configure:10657: $? = 0 configure:10672: diff -b pac_test1.log pac_test2.log > pac_test.log configure:10675: $? = 0 configure:10796: result: yes configure:10801: checking whether routines compiled with -O3 can be linked with ones compiled without -O3 configure:10854: gfortran -c conftest.f > pac_test3.log 2>&1 configure:10861: $? = 0 configure:10869: mv conftest.o pac_conftest.o configure:10872: $? = 0 configure:10924: gfortran -o conftest.exe -O3 conftest.f pac_conftest.o > pac_test4.log 2>&1 configure:10931: $? = 0 configure:10946: diff -b pac_test2.log pac_test4.log > pac_test.log configure:10949: $? = 0 configure:11070: result: yes configure:11108: checking how to get verbose linking output from gfortran configure:11124: gfortran -c -O3 conftest.f >&5 configure:11131: $? = 0 configure:11153: gfortran -o conftest.exe -O3 -v conftest.f Using built-in specs. Target: x86_64-w64-mingw32 Configured with: ../../../src/gcc-4.8.0/configure --host=x86_64-w64-mingw32 --build=x86_64-w64-mingw32 --target=x86_64-w64-mingw32 --prefix=/mingw64 --with-sysroot=/temp/x64-480-posix-seh-r2/mingw64 --enable-shared --enable-static --disable-multilib --enable-languages=c,c++,fortran,lto --enable-libstdcxx-time=yes --enable-threads=posix --enable-libgomp --enable-lto --enable-graphite --enable-checking=release --enable-fully-dynamic-string --enable-version-specific-runtime-libs --disable-isl-version-check --disable-cloog-version-check --disable-libstdcxx-pch --disable-libstdcxx-debug --disable-bootstrap --disable-rpath --disable-win32-registry --disable-nls --disable-werror --disable-symvers --with-gnu-as --with-gnu-ld --with-arch=nocona --with-tune=core2 --with-host-libstdcxx='-static -lstdc++' --with-libiconv --with-system-zlib --with-gmp=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpfr=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpc=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-isl=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-cloog=/temp/mingw-prereq/x86_64-w64-mingw32-static --enable-cloog-backend=isl --with-pkgversion='rev2, Built by MinGW-builds project' --with-bugurl=http://sourceforge.net/projects/mingwbuilds/ CFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CXXFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CPPFLAGS= LDFLAGS='-pipe -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' Thread model: posix gcc version 4.8.0 (rev2, Built by MinGW-builds 
project) d:/toolchains/x64/mingw-w64/4.8.0/bin/../libexec/gcc/x86_64-w64-mingw32/4.8.0/f951.exe conftest.f -ffixed-form -quiet -dumpbase conftest.f -mtune=core2 -march=nocona -auxbase conftest -O3 -version -fintrinsic-modules-path d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/finclude -o D:\Users\Haroogan\AppData\Local\Temp\ccWGu1Vf.s GNU Fortran (rev2, Built by MinGW-builds project) version 4.8.0 (x86_64-w64-mingw32) compiled by GNU C version 4.7.2, GMP version 5.1.1, MPFR version 3.1.2, MPC version 1.0.1 GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072 GNU Fortran (rev2, Built by MinGW-builds project) version 4.8.0 (x86_64-w64-mingw32) compiled by GNU C version 4.7.2, GMP version 5.1.1, MPFR version 3.1.2, MPC version 1.0.1 GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072 d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/bin/as.exe -v -o D:\Users\Haroogan\AppData\Local\Temp\ccakdv4m.o D:\Users\Haroogan\AppData\Local\Temp\ccWGu1Vf.s GNU assembler version 2.23.2 (x86_64-w64-mingw32) using BFD version (GNU Binutils) 2.23.2 Reading specs from d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/libgfortran.spec rename spec lib to liborig d:/toolchains/x64/mingw-w64/4.8.0/bin/../libexec/gcc/x86_64-w64-mingw32/4.8.0/collect2.exe --sysroot=C:/gccbuild/msys/temp/x64-480-posix-seh-r2/mingw64 -m i386pep -Bdynamic -o conftest.exe d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib/crt2.o d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib/crtbegin.o -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. D:\Users\Haroogan\AppData\Local\Temp\ccakdv4m.o -lgfortran -lmingw32 -lgcc_s -lgcc -lmoldname -lmingwex -lmsvcrt -lquadmath -lm -lmingw32 -lgcc_s -lgcc -lmoldname -lmingwex -lmsvcrt -lpthread -ladvapi32 -lshell32 -luser32 -lkernel32 -liconv -lmingw32 -lgcc_s -lgcc -lmoldname -lmingwex -lmsvcrt d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib/crtend.o configure:11222: result: -v configure:11224: checking for Fortran 77 libraries of gfortran configure:11247: gfortran -o conftest.exe -O3 -v conftest.f Using built-in specs. 
Target: x86_64-w64-mingw32 Configured with: ../../../src/gcc-4.8.0/configure --host=x86_64-w64-mingw32 --build=x86_64-w64-mingw32 --target=x86_64-w64-mingw32 --prefix=/mingw64 --with-sysroot=/temp/x64-480-posix-seh-r2/mingw64 --enable-shared --enable-static --disable-multilib --enable-languages=c,c++,fortran,lto --enable-libstdcxx-time=yes --enable-threads=posix --enable-libgomp --enable-lto --enable-graphite --enable-checking=release --enable-fully-dynamic-string --enable-version-specific-runtime-libs --disable-isl-version-check --disable-cloog-version-check --disable-libstdcxx-pch --disable-libstdcxx-debug --disable-bootstrap --disable-rpath --disable-win32-registry --disable-nls --disable-werror --disable-symvers --with-gnu-as --with-gnu-ld --with-arch=nocona --with-tune=core2 --with-host-libstdcxx='-static -lstdc++' --with-libiconv --with-system-zlib --with-gmp=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpfr=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpc=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-isl=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-cloog=/temp/mingw-prereq/x86_64-w64-mingw32-static --enable-cloog-backend=isl --with-pkgversion='rev2, Built by MinGW-builds project' --with-bugurl=http://sourceforge.net/projects/mingwbuilds/ CFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CXXFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CPPFLAGS= LDFLAGS='-pipe -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' Thread model: posix gcc version 4.8.0 (rev2, Built by MinGW-builds project) d:/toolchains/x64/mingw-w64/4.8.0/bin/../libexec/gcc/x86_64-w64-mingw32/4.8.0/f951.exe conftest.f -ffixed-form -quiet -dumpbase conftest.f -mtune=core2 -march=nocona -auxbase conftest -O3 -version -fintrinsic-modules-path d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/finclude -o D:\Users\Haroogan\AppData\Local\Temp\cceEY3pE.s GNU Fortran (rev2, Built by MinGW-builds project) version 4.8.0 (x86_64-w64-mingw32) compiled by GNU C version 4.7.2, GMP version 5.1.1, MPFR version 3.1.2, MPC version 1.0.1 GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072 GNU Fortran (rev2, Built by MinGW-builds project) version 4.8.0 (x86_64-w64-mingw32) compiled by GNU C version 4.7.2, GMP version 5.1.1, MPFR version 3.1.2, MPC version 1.0.1 GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072 d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/bin/as.exe -v -o D:\Users\Haroogan\AppData\Local\Temp\ccYPyUCa.o D:\Users\Haroogan\AppData\Local\Temp\cceEY3pE.s GNU assembler version 2.23.2 (x86_64-w64-mingw32) using BFD version (GNU Binutils) 2.23.2 Reading specs from d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/libgfortran.spec rename spec lib to liborig d:/toolchains/x64/mingw-w64/4.8.0/bin/../libexec/gcc/x86_64-w64-mingw32/4.8.0/collect2.exe --sysroot=C:/gccbuild/msys/temp/x64-480-posix-seh-r2/mingw64 -m i386pep -Bdynamic -o conftest.exe d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib/crt2.o 
d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib/crtbegin.o -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. D:\Users\Haroogan\AppData\Local\Temp\ccYPyUCa.o -lgfortran -lmingw32 -lgcc_s -lgcc -lmoldname -lmingwex -lmsvcrt -lquadmath -lm -lmingw32 -lgcc_s -lgcc -lmoldname -lmingwex -lmsvcrt -lpthread -ladvapi32 -lshell32 -luser32 -lkernel32 -liconv -lmingw32 -lgcc_s -lgcc -lmoldname -lmingwex -lmsvcrt d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib/crtend.o configure:11424: result: -lstdc++' -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -lgfortran -lmingw32 -lmoldname -lmingwex -lmsvcrt -lquadmath -lm -lpthread -ladvapi32 -lshell32 -luser32 -lkernel32 -liconv configure:11440: checking whether gfortran accepts the FLIBS found by autoconf configure:11462: gfortran -o conftest.exe -O3 conftest.f >&5 configure:11469: $? = 0 configure:11478: result: yes configure:11548: checking whether gcc links with FLIBS found by autoconf configure:11583: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c -lstdc++' -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -lgfortran -lmingw32 -lmoldname -lmingwex -lmsvcrt -lquadmath -lm -lpthread -ladvapi32 -lshell32 -luser32 -lkernel32 -liconv >&5 d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/bin/ld.exe: cannot find -lstdc++' collect2.exe: error: ld returned 1 exit status configure:11590: $? 
= 1 configure: failed program was: | | /* confdefs.h. */ | #define PACKAGE_NAME "" | #define PACKAGE_TARNAME "" | #define PACKAGE_VERSION "" | #define PACKAGE_STRING "" | #define PACKAGE_BUGREPORT "" | #define USE_SMP_COLLECTIVES 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | /* end confdefs.h. */ | | int | main () | { | int a; | ; | return 0; | } | configure:11607: result: no configure:11609: checking for which libraries can be used configure:11630: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -lstdc++' >&5 d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/bin/ld.exe: cannot find -lstdc++' collect2.exe: error: ld returned 1 exit status configure:11637: $? = 1 configure: failed program was: | | /* confdefs.h. */ | #define PACKAGE_NAME "" | #define PACKAGE_TARNAME "" | #define PACKAGE_VERSION "" | #define PACKAGE_STRING "" | #define PACKAGE_BUGREPORT "" | #define USE_SMP_COLLECTIVES 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | /* end confdefs.h. 
configure:11609: checking for which libraries can be used
configure:11630: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -lstdc++' >&5
d:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/bin/ld.exe: cannot find -lstdc++'
collect2.exe: error: ld returned 1 exit status
configure:11637: $? = 1
configure: failed program was:
|
| /* confdefs.h. */
| #define PACKAGE_NAME ""
| #define PACKAGE_TARNAME ""
| #define PACKAGE_VERSION ""
| #define PACKAGE_STRING ""
| #define PACKAGE_BUGREPORT ""
| #define USE_SMP_COLLECTIVES 1
| #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL
| #define USE_LOGGING MPID_LOGGING_NONE
| #define HAVE_RUNTIME_THREADCHECK 1
| #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE
| #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL
| #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX
| #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE
| #define HAVE_ROMIO 1
| #define HAVE__FUNC__ /**/
| #define HAVE__FUNCTION__ /**/
| /* end confdefs.h. */
|
| int
| main ()
| {
| int a;
| ;
| return 0;
| }
|
configure:11630: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -lgfortran >&5
configure:11637: $? = 0
configure:11630: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -lmingw32 >&5
configure:11637: $? = 0
configure:11630: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -lmoldname >&5
configure:11637: $? = 0
configure:11630: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -lmingwex >&5
configure:11637: $? = 0
configure:11630: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -lmsvcrt >&5
configure:11637: $? = 0
configure:11630: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -lquadmath >&5
configure:11637: $? = 0
configure:11630: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -lm >&5
configure:11637: $? = 0
configure:11630: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -lpthread >&5
configure:11637: $? = 0
configure:11630: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -ladvapi32 >&5
configure:11637: $? = 0
configure:11630: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -lshell32 >&5
configure:11637: $? = 0
configure:11630: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -luser32 >&5
configure:11637: $? = 0
configure:11630: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -lkernel32 >&5
configure:11637: $? = 0
configure:11630: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -liconv >&5
configure:11637: $? = 0
configure:11659: result: -lgfortran -lmingw32 -lmoldname -lmingwex -lmsvcrt -lquadmath -lm -lpthread -ladvapi32 -lshell32 -luser32 -lkernel32 -liconv
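Note how configure recovers here: it re-tests each candidate from the broken FLIBS one at a time and keeps only those that link, so the two quote-damaged tokens get dropped and the final list is usable. The same probing technique in miniature (an illustrative shell sketch, not configure's actual code; conftest.c is any trivial C program):

    # sketch of the per-library probe configure performs
    good=
    for lib in "-lstdc++'" -lgfortran -lmingw32 -lmoldname -lmingwex -lmsvcrt \
               -lquadmath -lm -lpthread -ladvapi32 -lshell32 -luser32 -lkernel32 -liconv
    do
        if gcc -o conftest.exe conftest.c "$lib" >/dev/null 2>&1; then
            good="$good $lib"    # keep only candidates that actually link
        fi
    done
    echo "usable FLIBS:$good"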
configure:11681: checking whether Fortran 77 and C objects are compatible
configure:11770: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:11777: $? = 0
configure:11784: mv conftest.o pac_conftest.o
configure:11787: $? = 0
configure:11802: gfortran -o conftest.exe -O3 conftest.f pac_conftest.o >&5
configure:11809: $? = 0
configure:11819: result: yes
configure:12045: checking for linker for Fortran main program
configure:12076: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:12083: $? = 0
configure:12164: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:12171: $? = 0
configure:12178: mv conftest.o pac_conftest.o
configure:12181: $? = 0
configure:12196: gfortran -o conftest.exe -O3 conftest.f pac_conftest.o >&5
configure:12203: $? = 0
configure:12212: result: Use Fortran to link programs
configure:12388: checking for Fortran 77 name mangling
configure:12416: gfortran -c -O3 conftest.f >&5
configure:12423: $? = 0
configure:12430: mv conftest.o f77conftest.o
configure:12433: $? = 0
configure:12468: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c f77conftest.o -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -lgfortran -lmingw32 -lmoldname -lmingwex -lmsvcrt -lquadmath -lm -lpthread -ladvapi32 -lshell32 -luser32 -lkernel32 -liconv >&5
configure:12475: $? = 0
configure:12672: result: lower uscore
configure:12735: checking for egrep
configure:12799: result: /bin/grep -E
configure:12804: checking for ANSI C header files
configure:12834: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:12841: $? = 0
configure:12940: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:12944: $? = 0
configure:12950: ./conftest.exe
configure:12954: $? = 0
configure:12972: result: yes
configure:12998: checking for libraries to link Fortran main with C stdio routines
configure:13034: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:13041: $? = 0
configure:13048: mv conftest.o pac_conftest.o
configure:13051: $? = 0
configure:13076: gfortran -o conftest.exe -O3 conftest.f pac_conftest.o >&5
configure:13083: $? = 0
configure:13134: result: none
configure:13187: checking whether Fortran init will work with C
configure:13215: gfortran -c -O3 conftest.f >&5
configure:13222: $? = 0
configure:13229: mv conftest.o pac_f77conftest.o
configure:13232: $? = 0
configure:13286: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c pac_f77conftest.o >&5
configure:13293: $? = 0
configure:13334: result: yes
configure:13420: checking for ifort
configure:13450: result: no
configure:13420: checking for pgf90
configure:13450: result: no
configure:13420: checking for pathf90
configure:13450: result: no
configure:13420: checking for pathf95
configure:13450: result: no
configure:13420: checking for xlf90
configure:13450: result: no
configure:13420: checking for xlf95
configure:13450: result: no
configure:13420: checking for xlf2003
configure:13450: result: no
configure:13420: checking for f90
configure:13450: result: no
configure:13420: checking for epcf90
configure:13450: result: no
configure:13420: checking for f95
configure:13450: result: no
configure:13420: checking for fort
configure:13450: result: no
configure:13420: checking for lf95
configure:13450: result: no
configure:13420: checking for gfortran
configure:13436: found /d/Toolchains/x64/MinGW-w64/4.8.0/bin/gfortran
configure:13447: result: gfortran
configure:13473: checking for Fortran compiler version
configure:13481: gfortran --version >&5
GNU Fortran (rev2, Built by MinGW-builds project) 4.8.0
Copyright (C) 2013 Free Software Foundation, Inc.

GNU Fortran comes with NO WARRANTY, to the extent permitted by law.
You may redistribute copies of GNU Fortran
under the terms of the GNU General Public License.
For more information about these matters, see the file named COPYING
configure:13485: $? = 0
configure:13492: gfortran -v >&5
Using built-in specs.
COLLECT_GCC=d:\Toolchains\x64\MinGW-w64\4.8.0\bin\gfortran.exe
COLLECT_LTO_WRAPPER=d:/toolchains/x64/mingw-w64/4.8.0/bin/../libexec/gcc/x86_64-w64-mingw32/4.8.0/lto-wrapper.exe
Target: x86_64-w64-mingw32
Configured with: ../../../src/gcc-4.8.0/configure --host=x86_64-w64-mingw32 --build=x86_64-w64-mingw32 --target=x86_64-w64-mingw32 --prefix=/mingw64 --with-sysroot=/temp/x64-480-posix-seh-r2/mingw64 --enable-shared --enable-static --disable-multilib --enable-languages=c,c++,fortran,lto --enable-libstdcxx-time=yes --enable-threads=posix --enable-libgomp --enable-lto --enable-graphite --enable-checking=release --enable-fully-dynamic-string --enable-version-specific-runtime-libs --disable-isl-version-check --disable-cloog-version-check --disable-libstdcxx-pch --disable-libstdcxx-debug --disable-bootstrap --disable-rpath --disable-win32-registry --disable-nls --disable-werror --disable-symvers --with-gnu-as --with-gnu-ld --with-arch=nocona --with-tune=core2 --with-host-libstdcxx='-static -lstdc++' --with-libiconv --with-system-zlib --with-gmp=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpfr=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpc=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-isl=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-cloog=/temp/mingw-prereq/x86_64-w64-mingw32-static --enable-cloog-backend=isl --with-pkgversion='rev2, Built by MinGW-builds project' --with-bugurl=http://sourceforge.net/projects/mingwbuilds/ CFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CXXFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CPPFLAGS= LDFLAGS='-pipe -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib'
Thread model: posix
gcc version 4.8.0 (rev2, Built by MinGW-builds project)
configure:13496: $? = 0
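The "lower uscore" result above means gfortran lower-cases external names and appends a single underscore, which is what MPICH relies on to wire its C code to the Fortran bindings. This can be confirmed by hand (a sketch, any gfortran plus binutils nm; mangle.f and the name conftest are placeholders):

    $ printf '      subroutine conftest\n      end\n' > mangle.f
    $ gfortran -c mangle.f && nm mangle.o | grep -i conftest
    0000000000000000 T conftest_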
configure:13503: gfortran -V >&5
gfortran.exe: error: unrecognized command line option '-V'
gfortran.exe: fatal error: no input files
compilation terminated.
configure:13507: $? = 1
configure:13515: checking whether we are using the GNU Fortran compiler
configure:13534: gfortran -c conftest.F >&5
configure:13541: $? = 0
configure:13558: result: yes
configure:13564: checking whether gfortran accepts -g
configure:13581: gfortran -c -g conftest.f >&5
configure:13588: $? = 0
configure:13604: result: yes
configure:13936: checking for extension for Fortran 90 programs
configure:13957: gfortran -c conftest.f90 >&5
configure:13964: $? = 0
configure:13970: result: f90
configure:14037: checking whether the Fortran 90 compiler (gfortran ) works
configure:14054: gfortran -o conftest.exe conftest.f90 >&5
configure:14061: $? = 0
configure:14071: result: yes
configure:14073: checking whether the Fortran 90 compiler (gfortran ) is a cross-compiler
configure:14085: gfortran -o conftest.exe conftest.f90 >&5
configure:14089: $? = 0
configure:14095: ./conftest.exe
configure:14099: $? = 0
configure:14115: result: no
configure:14143: checking whether Fortran 90 compiler works with Fortran 77 compiler
configure:14177: gfortran -c -O3 conftest.f >&5
configure:14184: $? = 0
configure:14192: mv conftest.o pac_f77conftest.o
configure:14195: $? = 0
configure:14222: gfortran -o conftest.exe conftest.f90 pac_f77conftest.o >&5
configure:14229: $? = 0
configure:14287: result: yes
configure:14350: checking whether Fortran 77 accepts ! for comments
configure:14374: gfortran -c -O3 conftest.f >&5
configure:14381: $? = 0
configure:14406: result: yes
configure:14416: checking for include directory flag for Fortran
configure:14449: gfortran -c -I src -O3 conftest.f >&5
configure:14456: $? = 0
configure:14484: result: -I
configure:14501: checking for Fortran 77 flag for library directories
configure:14526: gfortran -c -O3 conftest.f >&5
configure:14533: $? = 0
configure:14541: mv conftest.o pac_f77conftest.o
configure:14544: $? = 0
configure:14547: test -d conftestdir || mkdir conftestdir
configure:14550: $? = 0
configure:14553: ar cr conftestdir/libf77conftest.a pac_f77conftest.o
configure:14556: $? = 0
configure:14559: ranlib conftestdir/libf77conftest.a
configure:14562: $? = 0
configure:14586: gfortran -o conftest.exe -O3 -Lconftestdir conftest.f -lf77conftest >&5
configure:14593: $? = 0
configure:14633: result: -L
configure:14858: checking whether Fortran 77 compiler processes .F files with C preprocessor
configure:14885: gfortran -c -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.F >&5
configure:14892: $? = 0
configure:14992: result: yes
configure:15152: checking for Fortran compiler version
configure:15160: gfortran --version >&5
GNU Fortran (rev2, Built by MinGW-builds project) 4.8.0
Copyright (C) 2013 Free Software Foundation, Inc.

GNU Fortran comes with NO WARRANTY, to the extent permitted by law.
You may redistribute copies of GNU Fortran
under the terms of the GNU General Public License.
For more information about these matters, see the file named COPYING
configure:15164: $? = 0
configure:15171: gfortran -v >&5
Using built-in specs.
COLLECT_GCC=d:\Toolchains\x64\MinGW-w64\4.8.0\bin\gfortran.exe
COLLECT_LTO_WRAPPER=d:/toolchains/x64/mingw-w64/4.8.0/bin/../libexec/gcc/x86_64-w64-mingw32/4.8.0/lto-wrapper.exe
Target: x86_64-w64-mingw32
Configured with: ../../../src/gcc-4.8.0/configure --host=x86_64-w64-mingw32 --build=x86_64-w64-mingw32 --target=x86_64-w64-mingw32 --prefix=/mingw64 --with-sysroot=/temp/x64-480-posix-seh-r2/mingw64 --enable-shared --enable-static --disable-multilib --enable-languages=c,c++,fortran,lto --enable-libstdcxx-time=yes --enable-threads=posix --enable-libgomp --enable-lto --enable-graphite --enable-checking=release --enable-fully-dynamic-string --enable-version-specific-runtime-libs --disable-isl-version-check --disable-cloog-version-check --disable-libstdcxx-pch --disable-libstdcxx-debug --disable-bootstrap --disable-rpath --disable-win32-registry --disable-nls --disable-werror --disable-symvers --with-gnu-as --with-gnu-ld --with-arch=nocona --with-tune=core2 --with-host-libstdcxx='-static -lstdc++' --with-libiconv --with-system-zlib --with-gmp=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpfr=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpc=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-isl=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-cloog=/temp/mingw-prereq/x86_64-w64-mingw32-static --enable-cloog-backend=isl --with-pkgversion='rev2, Built by MinGW-builds project' --with-bugurl=http://sourceforge.net/projects/mingwbuilds/ CFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CXXFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CPPFLAGS= LDFLAGS='-pipe -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib'
Thread model: posix
gcc version 4.8.0 (rev2, Built by MinGW-builds project)
configure:15175: $? = 0
configure:15182: gfortran -V >&5
gfortran.exe: error: unrecognized command line option '-V'
gfortran.exe: fatal error: no input files
compilation terminated.
configure:15186: $? = 1
configure:15194: checking whether we are using the GNU Fortran compiler
configure:15237: result: yes
configure:15243: checking whether gfortran accepts -g
configure:15283: result: yes
configure:15319: checking whether the Fortran 90 compiler (gfortran ) works
configure:15336: gfortran -o conftest.exe conftest.f90 >&5
configure:15343: $? = 0
configure:15353: result: yes
configure:15355: checking whether the Fortran 90 compiler (gfortran ) is a cross-compiler
configure:15367: gfortran -o conftest.exe conftest.f90 >&5
configure:15371: $? = 0
configure:15377: ./conftest.exe
configure:15381: $? = 0
configure:15397: result: no
configure:15452: checking for Fortran 90 module extension
configure:15480: gfortran -c conftest.f90 >&5
configure:15487: $? = 0
configure:15547: result: mod
configure:15557: checking for Fortran 90 module include flag
configure:15592: gfortran -c conftest.f90 >&5
configure:15599: $? = 0
configure:15652: gfortran -c -Iconftestdir conftest.f90 >&5
configure:15659: $? = 0
configure:15749: result: -I
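The module-extension probe just above compiles a tiny module and looks at which file the compiler leaves behind; here it is a .mod file, and -I is accepted for the module search path. Reproducible by hand (sketch; the file name mod.f90 and module name conftest are illustrative):

    $ cat > mod.f90 <<'EOF'
    module conftest
    integer :: n
    end module conftest
    EOF
    $ gfortran -c mod.f90 && ls conftest.*
    conftest.mod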
configure:15789: checking whether Fortran 90 compiler accepts option -O3
configure:15848: gfortran -o conftest.exe conftest.f90 > pac_test1.log 2>&1
configure:15855: $? = 0
configure:15907: gfortran -o conftest.exe -O3 conftest.f90 > pac_test2.log 2>&1
configure:15914: $? = 0
configure:15929: diff -b pac_test1.log pac_test2.log > pac_test.log
configure:15932: $? = 0
configure:16053: result: yes
configure:16058: checking whether routines compiled with -O3 can be linked with ones compiled without -O3
configure:16111: gfortran -c conftest.f90 > pac_test3.log 2>&1
configure:16118: $? = 0
configure:16126: mv conftest.o pac_conftest.o
configure:16129: $? = 0
configure:16181: gfortran -o conftest.exe -O3 conftest.f90 pac_conftest.o > pac_test4.log 2>&1
configure:16188: $? = 0
configure:16203: diff -b pac_test2.log pac_test4.log > pac_test.log
configure:16206: $? = 0
configure:16327: result: yes
configure:16359: checking whether Fortran 90 compiler processes .F90 files with C preprocessor
configure:16386: gfortran -c -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.F90 >&5
configure:16393: $? = 0
configure:16493: result: yes
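The -O3 checks above work by building the same program with and without the flag and diffing the captured compiler output: the option is accepted only if it introduces no new diagnostics, and mixing optimized with unoptimized objects is verified the same way. The technique in miniature (sketch, mirroring the commands in the log):

    # accept -O3 only if it produces no new compiler output
    gfortran -o conftest.exe conftest.f90     > pac_test1.log 2>&1
    gfortran -o conftest.exe -O3 conftest.f90 > pac_test2.log 2>&1
    if diff -b pac_test1.log pac_test2.log >/dev/null; then
        echo "-O3 accepted"
    fi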
configure:16514: checking what libraries are needed to link Fortran90 programs with C routines that use stdio
configure:16550: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:16557: $? = 0
configure:16565: mv conftest.o pac_conftest.o
configure:16568: $? = 0
configure:16591: gfortran -o conftest.exe -O3 conftest.f90 pac_conftest.o >&5
configure:16598: $? = 0
configure:16684: result: none
configure:16701: checking for Fortran 90 compiler vendor
configure:16710: gfortran --version > conftest.txt 2>&1
configure:16713: $? = 0
configure:16750: result: gnu
configure:16857: checking for c++
configure:16873: found /d/Toolchains/x64/MinGW-w64/4.8.0/bin/c++
configure:16884: result: c++
configure:17021: checking for C++ compiler version
configure:17029: c++ --version >&5
c++.exe (rev2, Built by MinGW-builds project) 4.8.0
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
configure:17033: $? = 0
configure:17040: c++ -v >&5
Using built-in specs.
COLLECT_GCC=d:\Toolchains\x64\MinGW-w64\4.8.0\bin\c++.exe
COLLECT_LTO_WRAPPER=d:/toolchains/x64/mingw-w64/4.8.0/bin/../libexec/gcc/x86_64-w64-mingw32/4.8.0/lto-wrapper.exe
Target: x86_64-w64-mingw32
Configured with: ../../../src/gcc-4.8.0/configure --host=x86_64-w64-mingw32 --build=x86_64-w64-mingw32 --target=x86_64-w64-mingw32 --prefix=/mingw64 --with-sysroot=/temp/x64-480-posix-seh-r2/mingw64 --enable-shared --enable-static --disable-multilib --enable-languages=c,c++,fortran,lto --enable-libstdcxx-time=yes --enable-threads=posix --enable-libgomp --enable-lto --enable-graphite --enable-checking=release --enable-fully-dynamic-string --enable-version-specific-runtime-libs --disable-isl-version-check --disable-cloog-version-check --disable-libstdcxx-pch --disable-libstdcxx-debug --disable-bootstrap --disable-rpath --disable-win32-registry --disable-nls --disable-werror --disable-symvers --with-gnu-as --with-gnu-ld --with-arch=nocona --with-tune=core2 --with-host-libstdcxx='-static -lstdc++' --with-libiconv --with-system-zlib --with-gmp=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpfr=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-mpc=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-isl=/temp/mingw-prereq/x86_64-w64-mingw32-static --with-cloog=/temp/mingw-prereq/x86_64-w64-mingw32-static --enable-cloog-backend=isl --with-pkgversion='rev2, Built by MinGW-builds project' --with-bugurl=http://sourceforge.net/projects/mingwbuilds/ CFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CXXFLAGS='-O2 -pipe -I/temp/x64-480-posix-seh-r2/libs/include -I/temp/mingw-prereq/x64-zlib/include -I/temp/mingw-prereq/x86_64-w64-mingw32-static/include' CPPFLAGS= LDFLAGS='-pipe -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib'
Thread model: posix
gcc version 4.8.0 (rev2, Built by MinGW-builds project)
configure:17044: $? = 0
configure:17051: c++ -V >&5
c++.exe: error: unrecognized command line option '-V'
c++.exe: fatal error: no input files
compilation terminated.
configure:17055: $? = 1
configure:17058: checking whether we are using the GNU C++ compiler
configure:17087: c++ -c -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp >&5
configure:17094: $? = 0
configure:17111: result: yes
configure:17120: checking whether c++ accepts -g
configure:17150: c++ -c -g -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp >&5
configure:17157: $? = 0
configure:17258: result: yes
configure:17294: checking whether the C++ compiler c++ can build an executable
configure:17334: c++ -o conftest.exe -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp >&5
configure:17341: $? = 0
configure:17367: result: yes
configure:17376: checking whether C++ compiler works with string
configure:17410: c++ -c -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp >&5
configure:17417: $? = 0
configure:17438: result: yes
configure:17451: checking whether the compiler supports exceptions
configure:17484: c++ -c -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp >&5
configure:17491: $? = 0
configure:17513: result: yes
configure:17523: checking whether the compiler recognizes bool as a built-in type
configure:17560: c++ -c -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp >&5
configure:17567: $? = 0
configure:17589: result: yes
configure:17599: checking whether the compiler implements namespaces
configure:17632: c++ -c -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp >&5
configure:17639: $? = 0
configure:17661: result: yes
configure:17682: checking whether <iostream> available
configure:17711: c++ -c -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp >&5
configure:17718: $? = 0
configure:17733: result: yes
configure:17739: checking whether the compiler implements the namespace std
configure:17776: c++ -c -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp >&5
configure:17783: $? = 0
configure:17806: result: yes
configure:17820: checking whether <math> available
configure:17849: c++ -c -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp >&5
conftest.cpp:29:16: fatal error: math: No such file or directory
 #include <math>
                ^
compilation terminated.
configure:17856: $? = 1
configure: failed program was:
| /* confdefs.h. */
| #define PACKAGE_NAME ""
| #define PACKAGE_TARNAME ""
| #define PACKAGE_VERSION ""
| #define PACKAGE_STRING ""
| #define PACKAGE_BUGREPORT ""
| #define USE_SMP_COLLECTIVES 1
| #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL
| #define USE_LOGGING MPID_LOGGING_NONE
| #define HAVE_RUNTIME_THREADCHECK 1
| #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE
| #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL
| #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX
| #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE
| #define HAVE_ROMIO 1
| #define HAVE__FUNC__ /**/
| #define HAVE__FUNCTION__ /**/
| #define HAVE_LONG_LONG 1
| #define STDCALL
| #define F77_NAME_LOWER_USCORE 1
| #define STDC_HEADERS 1
| #define HAVE_MPI_F_INIT_WORKS_WITH_C 1
| #define HAVE_FORTRAN_BINDING 1
| #define HAVE_CXX_EXCEPTIONS /**/
| #define HAVE_NAMESPACES /**/
| #define HAVE_NAMESPACE_STD /**/
| /* end confdefs.h. */
|
| #include <math>
|
| int
| main ()
| {
| using namespace std;
| ;
| return 0;
| }
configure:17871: result: no
configure:17936: checking for GNU g++ version
configure:17974: c++ -o conftest.exe -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp >&5
configure:17978: $? = 0
configure:17984: ./conftest.exe
configure:17988: $? = 0
configure:18005: result: 4 . 8
configure:18054: checking whether C++ compiler accepts option -O3
configure:18124: c++ -o conftest.exe -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp > pac_test1.log 2>&1
configure:18131: $? = 0
configure:18183: c++ -o conftest.exe -O3 -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp > pac_test2.log 2>&1
configure:18190: $? = 0
configure:18205: diff -b pac_test1.log pac_test2.log > pac_test.log
configure:18208: $? = 0
configure:18329: result: yes
configure:18334: checking whether routines compiled with -O3 can be linked with ones compiled without -O3
configure:18392: c++ -c -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp > pac_test3.log 2>&1
configure:18399: $? = 0
configure:18407: mv conftest.o pac_conftest.o
configure:18410: $? = 0
configure:18472: c++ -o conftest.exe -O3 -DNDEBUG -DNVALGRIND -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp pac_conftest.o > pac_test4.log 2>&1
configure:18479: $? = 0
configure:18494: diff -b pac_test2.log pac_test4.log > pac_test.log
configure:18497: $? = 0
configure:18618: result: yes
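The <math> failure just above is expected and harmless: there is no <math> header in standard C++ (the C math functions live in <cmath>, or in the pre-standard <math.h>); configure probes the old-style name and simply records "no". A quick check of the same distinction (sketch, any g++-compatible c++ driver):

    $ echo '#include <math>'  | c++ -x c++ -c -o /dev/null -   # fails: no such header
    $ echo '#include <cmath>' | c++ -x c++ -c -o /dev/null -   # compiles cleanly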
configure:18678: checking for perl
configure:18696: found /bin/perl
configure:18708: result: /bin/perl
configure:18721: checking for ar
configure:18737: found /d/Toolchains/x64/MinGW-w64/4.8.0/bin/ar
configure:18748: result: ar
configure:18782: checking for ranlib
configure:18798: found /d/Toolchains/x64/MinGW-w64/4.8.0/bin/ranlib
configure:18809: result: ranlib
configure:18828: checking for killall
configure:18858: result: no
configure:18890: checking for a BSD-compatible install
configure:18958: result: /bin/install -c
configure:18987: checking whether install works
configure:18995: result: yes
configure:19103: checking whether install breaks libraries
configure:19130: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:19137: $? = 0
configure:19172: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c libconftest1.a >&5
configure:19179: $? = 0
configure:19257: result: no
configure:19278: checking whether mkdir -p works
configure:19294: result: yes
configure:19312: checking for make
configure:19328: found /bin/make
configure:19339: result: make
configure:19353: checking whether clock skew breaks make
configure:19378: result: no
configure:19388: checking whether make supports include
configure:19416: result: yes
configure:19425: checking whether make allows comments in actions
configure:19452: result: yes
configure:19466: checking for virtual path format
configure:19509: result: VPATH
configure:19519: checking whether make sets CFLAGS
configure:19545: result: yes
configure:19594: checking for bash
configure:19612: found /bin/bash
configure:19624: result: /bin/bash
configure:19647: checking whether /bin/bash supports arrays
configure:19656: result: yes
configure:22125: checking for doctext
configure:22156: result: false
configure:22166: checking for location of doctext style files
configure:22183: result: unavailable
configure:22195: checking for an ANSI C-conforming const
configure:22270: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:22277: $? = 0
configure:22292: result: yes
configure:22302: checking for working volatile
configure:22331: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:22338: $? = 0
configure:22353: result: yes
configure:22363: checking for C/C++ restrict keyword
configure:22398: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:22405: $? = 0
configure:22423: result: __restrict
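The "__restrict" result means configure settled on GCC's __restrict extension rather than plain C99 restrict, and it therefore defines restrict to __restrict (that define shows up in the confdefs listings below). A one-line sanity check of the accepted spelling (sketch):

    $ echo 'int f(int *__restrict p) { return *p; }' | gcc -x c -c -o /dev/null -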
configure:22439: checking for inline
configure:22465: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:22472: $? = 0
configure:22490: result: inline
configure:22514: checking whether __attribute__ allowed
configure:22541: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:22548: $? = 0
configure:22563: result: yes
configure:22565: checking whether __attribute__((format)) allowed
configure:22592: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:22599: $? = 0
configure:22614: result: yes
configure:22640: checking whether byte ordering is bigendian
configure:22665: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
conftest.c:35:9: error: unknown type name 'not'
 not a universal capable compiler
 ^
conftest.c:35:15: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'universal'
 not a universal capable compiler
 ^
conftest.c:35:15: error: unknown type name 'universal'
configure:22672: $? = 1
configure: failed program was:
| /* confdefs.h. */
| #define PACKAGE_NAME ""
| #define PACKAGE_TARNAME ""
| #define PACKAGE_VERSION ""
| #define PACKAGE_STRING ""
| #define PACKAGE_BUGREPORT ""
| #define USE_SMP_COLLECTIVES 1
| #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL
| #define USE_LOGGING MPID_LOGGING_NONE
| #define HAVE_RUNTIME_THREADCHECK 1
| #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE
| #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL
| #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX
| #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE
| #define HAVE_ROMIO 1
| #define HAVE__FUNC__ /**/
| #define HAVE__FUNCTION__ /**/
| #define HAVE_LONG_LONG 1
| #define STDCALL
| #define F77_NAME_LOWER_USCORE 1
| #define STDC_HEADERS 1
| #define HAVE_MPI_F_INIT_WORKS_WITH_C 1
| #define HAVE_FORTRAN_BINDING 1
| #define HAVE_CXX_EXCEPTIONS /**/
| #define HAVE_NAMESPACES /**/
| #define HAVE_NAMESPACE_STD /**/
| #define HAVE_CXX_BINDING 1
| #define FILE_NAMEPUB_BASEDIR "."
| #define USE_FILE_FOR_NAMEPUB 1
| #define HAVE_NAMEPUB_SERVICE 1
| #define restrict __restrict
| #define HAVE_GCC_ATTRIBUTE 1
| /* end confdefs.h. */
| #ifndef __APPLE_CC__
| not a universal capable compiler
| #endif
| typedef int dummy;
|
configure:22722: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:22729: $? = 0
configure:22761: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
conftest.c: In function 'main':
conftest.c:41:4: error: unknown type name 'not'
 not big endian
 ^
conftest.c:41:12: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'endian'
 not big endian
 ^
configure:22768: $? = 1
configure: failed program was:
| /* confdefs.h. */
| #define PACKAGE_NAME ""
| #define PACKAGE_TARNAME ""
| #define PACKAGE_VERSION ""
| #define PACKAGE_STRING ""
| #define PACKAGE_BUGREPORT ""
| #define USE_SMP_COLLECTIVES 1
| #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL
| #define USE_LOGGING MPID_LOGGING_NONE
| #define HAVE_RUNTIME_THREADCHECK 1
| #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE
| #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL
| #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX
| #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE
| #define HAVE_ROMIO 1
| #define HAVE__FUNC__ /**/
| #define HAVE__FUNCTION__ /**/
| #define HAVE_LONG_LONG 1
| #define STDCALL
| #define F77_NAME_LOWER_USCORE 1
| #define STDC_HEADERS 1
| #define HAVE_MPI_F_INIT_WORKS_WITH_C 1
| #define HAVE_FORTRAN_BINDING 1
| #define HAVE_CXX_EXCEPTIONS /**/
| #define HAVE_NAMESPACES /**/
| #define HAVE_NAMESPACE_STD /**/
| #define HAVE_CXX_BINDING 1
| #define FILE_NAMEPUB_BASEDIR "."
| #define USE_FILE_FOR_NAMEPUB 1
| #define HAVE_NAMEPUB_SERVICE 1
| #define restrict __restrict
| #define HAVE_GCC_ATTRIBUTE 1
| /* end confdefs.h. */
| #include <sys/types.h>
| #include <sys/param.h>
|
| int
| main ()
| {
| #if BYTE_ORDER != BIG_ENDIAN
| not big endian
| #endif
|
| ;
| return 0;
| }
configure:23020: result: no
configure:23067: checking whether C compiler allows unaligned doubles
configure:23110: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:23114: $? = 0
configure:23120: ./conftest.exe
configure:23124: $? = 0
configure:23141: result: yes
configure:23160: checking whether gcc supports __func__
configure:23186: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:23193: $? = 0
configure:23208: result: yes
configure:23351: result: Using gcc to determine dependencies
configure:23383: checking whether long double is supported
configure:23410: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:23417: $? = 0
configure:23432: result: yes
configure:23443: checking whether long long is supported
configure:23470: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:23477: $? = 0
configure:23492: result: yes
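The two compile errors in the endianness section are the detection mechanism itself, not a problem: the probe makes compilation fail on whichever branch of the #if does not apply, so "error: unknown type name 'not'" on a little-endian x86 target is the expected outcome. The same probe by hand (sketch; endian.c is a placeholder name):

    cat > endian.c <<'EOF'
    #include <sys/types.h>
    #include <sys/param.h>
    int main(void)
    {
    #if BYTE_ORDER != BIG_ENDIAN
        not big endian          /* deliberate syntax error on little-endian */
    #endif
        return 0;
    }
    EOF
    gcc -c endian.c || echo "little endian (compile failed, as expected)"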
configure:23505: checking for max C struct integer alignment
configure:23633: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:23637: $? = 0
configure:23643: ./conftest.exe
configure:23647: $? = 0
configure:23666: result: eight
configure:23703: checking for max C struct floating point alignment
configure:23815: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:23819: $? = 0
configure:23825: ./conftest.exe
configure:23829: $? = 0
configure:23848: result: sixteen
configure:23883: checking for max C struct alignment of structs with doubles
configure:23964: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:23968: $? = 0
configure:23974: ./conftest.exe
configure:23978: $? = 0
configure:23997: result: eight
configure:24004: checking for max C struct floating point alignment with long doubles
configure:24086: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:24090: $? = 0
configure:24096: ./conftest.exe
configure:24100: $? = 0
configure:24119: result: sixteen
configure:24129: WARNING: Structures containing long doubles may be aligned differently from structures with floats or longs. MPICH2 does not handle this case automatically and you should avoid assumed extents for structures containing float types.
configure:24164: checking if alignment of structs with doubles is based on position
configure:24208: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:24212: $? = 0
configure:24218: ./conftest.exe
configure:24222: $? = 0
configure:24241: result: no
configure:24257: checking if alignment of structs with long long ints is based on position
configure:24303: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:24307: $? = 0
configure:24313: ./conftest.exe
configure:24317: $? = 0
configure:24336: result: no
configure:24352: checking if double alignment breaks rules, find actual alignment
configure:24409: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:24413: $? = 0
configure:24419: ./conftest.exe
configure:24423: $? = 0
configure:24442: result: no
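The "sixteen" for long-double structs against "eight" for double structs is exactly what triggers the WARNING above: on this target long double is a 16-byte type, so a struct containing one is aligned more strictly than other floating-point structs. Observable directly (sketch; uses GCC's __alignof__ extension, which is fine here since the toolchain is gcc):

    cat > align.c <<'EOF'
    #include <stdio.h>
    struct d  { char c; double      d; };   /* 8-byte alignment on x86-64 */
    struct ld { char c; long double d; };   /* 16-byte with MinGW-w64 gcc */
    int main(void)
    {
        printf("%u %u\n", (unsigned)__alignof__(struct d),
                          (unsigned)__alignof__(struct ld));
        return 0;
    }
    EOF
    gcc align.c -o align && ./align    # expected to print: 8 16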
configure:24458: checking for alignment restrictions on pointers
configure:24488: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:24492: $? = 0
configure:24498: ./conftest.exe
configure:24502: $? = 0
configure:24528: result: int or better
configure:24540: checking size of char
configure:24845: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:24849: $? = 0
configure:24855: ./conftest.exe
configure:24859: $? = 0
configure:24885: result: 1
configure:24899: checking size of unsigned char
configure:25204: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:25208: $? = 0
configure:25214: ./conftest.exe
configure:25218: $? = 0
configure:25244: result: 1
configure:25258: checking size of short
configure:25563: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:25567: $? = 0
configure:25573: ./conftest.exe
configure:25577: $? = 0
configure:25603: result: 2
configure:25617: checking size of unsigned short
configure:25922: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:25926: $? = 0
configure:25932: ./conftest.exe
configure:25936: $? = 0
configure:25962: result: 2
configure:25976: checking size of int
configure:26281: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:26285: $? = 0
configure:26291: ./conftest.exe
configure:26295: $? = 0
configure:26321: result: 4
configure:26335: checking size of unsigned int
configure:26640: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:26644: $? = 0
configure:26650: ./conftest.exe
configure:26654: $? = 0
configure:26680: result: 4
configure:26694: checking size of long
configure:26999: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:27003: $? = 0
configure:27009: ./conftest.exe
configure:27013: $? = 0
configure:27039: result: 4
configure:27053: checking size of unsigned long
configure:27358: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:27362: $? = 0
configure:27368: ./conftest.exe
configure:27372: $? = 0
configure:27398: result: 4
configure:27412: checking size of long long
configure:27717: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:27721: $? = 0
configure:27727: ./conftest.exe
configure:27731: $? = 0
configure:27757: result: 8
configure:27771: checking size of unsigned long long
configure:28076: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:28080: $? = 0
configure:28086: ./conftest.exe
configure:28090: $? = 0
configure:28116: result: 8
configure:28130: checking size of float
configure:28435: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:28439: $? = 0
configure:28445: ./conftest.exe
configure:28449: $? = 0
configure:28475: result: 4
configure:28489: checking size of double
configure:28794: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:28798: $? = 0
configure:28804: ./conftest.exe
configure:28808: $? = 0
configure:28834: result: 8
configure:28848: checking size of long double
configure:29153: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:29157: $? = 0
configure:29163: ./conftest.exe
configure:29167: $? = 0
configure:29193: result: 16
configure:29207: checking size of void *
configure:29512: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:29516: $? = 0
configure:29522: ./conftest.exe
configure:29526: $? = 0
configure:29552: result: 8
= 0 configure:29552: result: 8 configure:29563: checking for ANSI C header files configure:29731: result: yes configure:29757: checking stddef.h usability configure:29774: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5 configure:29781: $? = 0 configure:29795: result: yes configure:29799: checking stddef.h presence configure:29814: gcc -E -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c configure:29821: $? = 0 configure:29835: result: yes configure:29863: checking for stddef.h configure:29872: result: yes configure:29891: checking size of wchar_t configure:30226: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5 configure:30230: $? = 0 configure:30236: ./conftest.exe configure:30240: $? = 0 configure:30266: result: 2 configure:30281: checking size of float_int configure:30592: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5 configure:30596: $? = 0 configure:30602: ./conftest.exe configure:30606: $? = 0 configure:30632: result: 8 configure:30646: checking size of double_int configure:30957: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5 configure:30961: $? = 0 configure:30967: ./conftest.exe configure:30971: $? = 0 configure:30997: result: 16 configure:31011: checking size of long_int configure:31322: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5 configure:31326: $? = 0 configure:31332: ./conftest.exe configure:31336: $? = 0 configure:31362: result: 8 configure:31376: checking size of short_int configure:31687: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5 configure:31691: $? = 0 configure:31697: ./conftest.exe configure:31701: $? = 0 configure:31727: result: 8 configure:31741: checking size of two_int configure:32052: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5 configure:32056: $? = 0 configure:32062: ./conftest.exe configure:32066: $? 
configure:32092: result: 8
configure:32106: checking size of long_double_int
configure:32457: result: 32
configure:32480: checking sys/bitypes.h usability
configure:32497: gcc -c -DNDEBUG -DNVALGRIND -O3 [...] conftest.c >&5
conftest.c:98:25: fatal error: sys/bitypes.h: No such file or directory
 #include <sys/bitypes.h>
                         ^
compilation terminated.
configure:32504: $? = 1
configure: failed program was:
| /* confdefs.h. */
| [... PACKAGE_* and feature-test #defines snipped ...]
| /* end confdefs.h. */
| [... default includes (stdio.h, sys/types.h, sys/stat.h, stdlib.h, stddef.h, string.h, strings.h, inttypes.h, stdint.h, unistd.h) snipped ...]
| #include <sys/bitypes.h>
configure:32518: result: no
configure:32522: checking sys/bitypes.h presence
configure:32537: gcc -E [...] conftest.c
conftest.c:65:25: fatal error: sys/bitypes.h: No such file or directory
 #include <sys/bitypes.h>
                         ^
compilation terminated.
configure:32544: $? = 1
configure: failed program was:
| /* confdefs.h. */
| [... #defines snipped ...]
| /* end confdefs.h. */
| #include <sys/bitypes.h>
configure:32558: result: no
configure:32586: checking for sys/bitypes.h
configure:32593: result: no
configure:32625: checking inttypes.h usability
configure:32663: result: yes
configure:32667: checking inttypes.h presence
configure:32703: result: yes
configure:32731: checking for inttypes.h
configure:32740: result: yes
configure:32625: checking stdint.h usability
configure:32663: result: yes
configure:32667: checking stdint.h presence
configure:32703: result: yes
configure:32731: checking for stdint.h
configure:32740: result: yes
configure:32758: checking for int8_t
configure:32789: gcc -c -DNDEBUG -DNVALGRIND -O3 [...] conftest.c >&5
configure:32796: $? = 0
configure:32825: gcc -c -DNDEBUG -DNVALGRIND -O3 [...] conftest.c >&5
conftest.c: In function 'main':
conftest.c:103:12: error: size of array 'test_array' is negative
 static int test_array [1 - 2 * !((int8_t) ((((int8_t) 1 << (8 - 2)) - 1) * 2 + 1)
            ^
configure:32832: $? = 1
configure: failed program was:
| /* confdefs.h. */
| [... #defines snipped ...]
| /* end confdefs.h. */
| [... default includes snipped ...]
| int
| main ()
| {
| static int test_array [1 - 2 * !((int8_t) ((((int8_t) 1 << (8 - 2)) - 1) * 2 + 1)
|              < (int8_t) ((((int8_t) 1 << (8 - 2)) - 1) * 2 + 2))];
| test_array [0] = 0
|
| ;
| return 0;
| }
configure:32861: result: yes
configure:32874: checking for int16_t
[... analogous probe: first compile succeeds, second fails at conftest.c:103:12 with "size of array 'test_array' is negative" ...]
configure:32977: result: yes
configure:32990: checking for int32_t
[... analogous probe: second compile fails with a -Woverflow warning at conftest.c:104:53 and "storage size of 'test_array' isn't constant" at conftest.c:103:12 ...]
configure:33093: result: yes
configure:33106: checking for int64_t
[... analogous probe, failing the same way as the int32_t one ...]
configure:33209: result: yes
configure:33260: checking for uint8_t
configure:33319: result: yes
configure:33337: checking for uint16_t
configure:33396: result: yes
configure:33410: checking for uint32_t
configure:33469: result: yes
configure:33487: checking for uint64_t
configure:33546: result: yes
configure:33613: checking stdbool.h usability
configure:33651: result: yes
configure:33655: checking stdbool.h presence
configure:33691: result: yes
configure:33719: checking for stdbool.h
configure:33728: result: yes
configure:33613: checking complex.h usability
configure:33651: result: yes
configure:33655: checking complex.h presence
configure:33691: result: yes
configure:33719: checking for complex.h
configure:33728: result: yes
configure:33747: checking size of _Bool
configure:34122: result: 1
configure:34136: checking size of float _Complex
configure:34511: result: 8
configure:34525: checking size of double _Complex
configure:34900: result: 16
configure:34914: checking size of long double _Complex
configure:35289: result: 32
configure:35301: checking for _Bool
configure:35329: gcc -c -DNDEBUG -DNVALGRIND -O3 [...] conftest.c >&5
configure:35336: $? = 0
configure:35363: gcc -c -DNDEBUG -DNVALGRIND -O3 [...] conftest.c >&5
conftest.c: In function 'main':
conftest.c:117:20: error: expected expression before ')' token
 if (sizeof ((_Bool)))
                    ^
configure:35370: $? = 1
configure: failed program was:
| /* confdefs.h. */
| [... #defines snipped ...]
| /* end confdefs.h. */
| [... default includes snipped ...]
| int
| main ()
| {
| if (sizeof ((_Bool)))
| return 0;
| ;
| return 0;
| }
configure:35393: result: yes
configure:35403: checking for float _Complex
configure:35431: gcc -c -DNDEBUG -DNVALGRIND -O3 [...] conftest.c >&5
configure:35438: $? = 0
configure:35465: gcc -c -DNDEBUG -DNVALGRIND -O3 [...] conftest.c >&5
conftest.c: In function 'main':
conftest.c:118:29: error: expected expression before ')' token
 if (sizeof ((float _Complex)))
                             ^
configure:35472: $? = 1
configure: failed program was:
[... confdefs.h and test program snipped; same shape as the _Bool probe above ...]
configure:35495: result: yes
configure:35505: checking for double _Complex
configure:35533: gcc -c -DNDEBUG -DNVALGRIND -O3 [...] conftest.c >&5
configure:35540: $? = 0
configure:35567: gcc -c -DNDEBUG -DNVALGRIND -O3 [...] conftest.c >&5
conftest.c: In function 'main':
conftest.c:119:30: error: expected expression before ')' token
 if (sizeof ((double _Complex)))
                              ^
configure:35574: $? = 1
configure: failed program was:
[... confdefs.h and test program snipped ...]
configure:35597: result: yes
configure:35607: checking for long double _Complex
configure:35635: gcc -c -DNDEBUG -DNVALGRIND -O3 [...] conftest.c >&5
configure:35642: $? = 0
configure:35669: gcc -c -DNDEBUG -DNVALGRIND -O3 [...] conftest.c >&5
conftest.c: In function 'main':
conftest.c:120:35: error: expected expression before ')' token
 if (sizeof ((long double _Complex)))
                                   ^
configure:35676: $? = 1
configure: failed program was:
[... confdefs.h and test program snipped ...]
configure:35699: result: yes
configure:36127: checking for size of Fortran type integer
configure:36178: gcc -c -DNDEBUG -DNVALGRIND -O3 [...] conftest.c >&5
configure:36193: mv conftest.o pac_conftest.o
configure:36229: gfortran -o conftest.exe -O3 conftest.f pac_conftest.o >&5
configure:36239: ./conftest.exe
configure:36292: result: 4
configure:36307: checking for size of Fortran type real
configure:36472: result: 4
configure:36487: checking for size of Fortran type double precision
configure:36652: result: 8
configure:36675: checking whether integer*1 is supported
configure:36714: result: yes
configure:36716: checking whether integer*2 is supported
configure:36755: result: yes
configure:36757: checking whether integer*4 is supported
configure:36796: result: yes
configure:36798: checking whether integer*8 is supported
configure:36837: result: yes
configure:36839: checking whether integer*16 is supported
configure:36878: result: yes
configure:36880: checking whether real*4 is supported
configure:36919: result: yes
configure:36921: checking whether real*8 is supported
configure:36960: result: yes
configure:36962: checking whether real*16 is supported
configure:37001: result: yes
configure:37393: checking for C type matching Fortran integer
configure:37400: result: int
configure:37456: checking for size of MPI_Status
configure:37796: result: 20
configure:37927: checking for values of Fortran logicals
configure:38096: result: True is 1 and False is 0
configure:38125: checking for BSD/POSIX style global symbol lister
configure:38269: result: no
configure:38319: checking stdio.h usability
configure:38357: result: yes
configure:38361: checking stdio.h presence
configure:38397: result: yes
configure:38425: checking for stdio.h
configure:38434: result: yes
configure:38449: checking for multiple __attribute__((alias)) support
configure:38520: gcc -c -DNDEBUG -DNVALGRIND -O3 [...] conftest.c >&5
configure:38535: cp conftest.o pac_conftest_other.o
configure:38615: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 [...] conftest.c pac_conftest_other.o >&5
configure:38633: cp conftest.exe pac_conftest_main.exe
configure:38701: ./pac_conftest_main.exe
configure:38711: result: yes
configure:38739: checking the minimum alignment of Fortran common block of 1 integers
configure:38788: gcc -c -DNDEBUG -DNVALGRIND -O3 [...] conftest.c >&5
configure:38803: mv conftest.o pac_conftest.o
configure:38836: gfortran -o conftest.exe -O3 conftest.f90 pac_conftest.o > pac_align0.log 2>&1
configure:39000: gfortran -o conftest.exe -O3 conftest.f90 pac_conftest.o > pac_align1.log 2>&1
configure:39021: diff -b pac_align0.log pac_align1.log > pac_test.log
configure:39123: result: 4
configure:39134: checking the minimum alignment of Fortran common block of 5 integers
[... the same gcc/gfortran compile-link-diff cycle repeated several times, every step returning $? = 0 ...]
configure:39530: result: 4, too small! reset to 32
configure:39659: checking for Fortran 90 integer kind for 8-byte integers
configure:39708: gfortran -o conftest.exe -O3 conftest.f90 >&5
configure:39718: ./conftest.exe
configure:39746: result: 8
configure:40001: checking if real*8 is supported in Fortran 90
configure:40047: result: yes
configure:40191: checking size of bool
configure:40496: c++ -o conftest.exe -DNDEBUG -DNVALGRIND -O3 [...] conftest.cpp >&5
configure:40506: ./conftest.exe
configure:40536: result: 1
configure:40584: checking how to run the C++ preprocessor
configure:40620: c++ -E [...] conftest.cpp
configure:40627: $? = 0
configure:40658: c++ -E [...] conftest.cpp
conftest.cpp:105:28: fatal error: ac_nonexistent.h: No such file or directory
 #include <ac_nonexistent.h>
                            ^
compilation terminated.
configure:40665: $? = 1
configure: failed program was:
| /* confdefs.h. */
| [... #defines snipped ...]
| /* end confdefs.h. */
| #include <ac_nonexistent.h>
| #define USE_FILE_FOR_NAMEPUB 1 | #define HAVE_NAMEPUB_SERVICE 1 | #define restrict __restrict | #define HAVE_GCC_ATTRIBUTE 1 | #define WORDS_LITTLEENDIAN 1 | #define HAVE_LONG_DOUBLE 1 | #define HAVE_LONG_LONG_INT 1 | #define HAVE_MAX_INTEGER_ALIGNMENT 8 | #define HAVE_MAX_STRUCT_ALIGNMENT 8 | #define HAVE_MAX_FP_ALIGNMENT 16 | #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8 | #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16 | #define SIZEOF_CHAR 1 | #define SIZEOF_UNSIGNED_CHAR 1 | #define SIZEOF_SHORT 2 | #define SIZEOF_UNSIGNED_SHORT 2 | #define SIZEOF_INT 4 | #define SIZEOF_UNSIGNED_INT 4 | #define SIZEOF_LONG 4 | #define SIZEOF_UNSIGNED_LONG 4 | #define SIZEOF_LONG_LONG 8 | #define SIZEOF_UNSIGNED_LONG_LONG 8 | #define SIZEOF_FLOAT 4 | #define SIZEOF_DOUBLE 8 | #define SIZEOF_LONG_DOUBLE 16 | #define SIZEOF_VOID_P 8 | #define STDC_HEADERS 1 | #define HAVE_STDDEF_H 1 | #define SIZEOF_WCHAR_T 2 | #define SIZEOF_FLOAT_INT 8 | #define SIZEOF_DOUBLE_INT 16 | #define SIZEOF_LONG_INT 8 | #define SIZEOF_SHORT_INT 8 | #define SIZEOF_TWO_INT 8 | #define SIZEOF_LONG_DOUBLE_INT 32 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_INT8_T 1 | #define HAVE_INT16_T 1 | #define HAVE_INT32_T 1 | #define HAVE_INT64_T 1 | #define HAVE_UINT8_T 1 | #define HAVE_UINT16_T 1 | #define HAVE_UINT32_T 1 | #define HAVE_UINT64_T 1 | #define HAVE_STDBOOL_H 1 | #define HAVE_COMPLEX_H 1 | #define SIZEOF__BOOL 1 | #define SIZEOF_FLOAT__COMPLEX 8 | #define SIZEOF_DOUBLE__COMPLEX 16 | #define SIZEOF_LONG_DOUBLE__COMPLEX 32 | #define HAVE__BOOL 1 | #define HAVE_FLOAT__COMPLEX 1 | #define HAVE_DOUBLE__COMPLEX 1 | #define HAVE_LONG_DOUBLE__COMPLEX 1 | #define MPIR_REAL4_CTYPE float | #define MPIR_REAL8_CTYPE double | #define MPIR_REAL16_CTYPE long double | #define MPIR_INTEGER1_CTYPE char | #define MPIR_INTEGER2_CTYPE short | #define MPIR_INTEGER4_CTYPE int | #define MPIR_INTEGER8_CTYPE long long | #define SIZEOF_F77_INTEGER 4 | #define SIZEOF_F77_REAL 4 | #define SIZEOF_F77_DOUBLE_PRECISION 8 | #define HAVE_AINT_LARGER_THAN_FINT 1 | #define HAVE_AINT_DIFFERENT_THAN_FINT 1 | #define HAVE_FINT_IS_INT 1 | #define F77_TRUE_VALUE_SET 1 | #define F77_TRUE_VALUE 1 | #define F77_FALSE_VALUE 0 | #define HAVE_STDIO_H 1 | #define HAVE_C_MULTI_ATTR_ALIAS 1 | #define SIZEOF_BOOL 1 | #define MPIR_CXX_BOOL_CTYPE _Bool | /* end confdefs.h. */ | #include configure:40698: result: c++ -E configure:40727: c++ -E -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp configure:40734: $? = 0 configure:40765: c++ -E -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp conftest.cpp:105:28: fatal error: ac_nonexistent.h: No such file or directory #include ^ compilation terminated. configure:40772: $? = 1 configure: failed program was: | /* confdefs.h. 
*/ | #define PACKAGE_NAME "" | #define PACKAGE_TARNAME "" | #define PACKAGE_VERSION "" | #define PACKAGE_STRING "" | #define PACKAGE_BUGREPORT "" | #define USE_SMP_COLLECTIVES 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | #define HAVE_LONG_LONG 1 | #define STDCALL | #define F77_NAME_LOWER_USCORE 1 | #define STDC_HEADERS 1 | #define HAVE_MPI_F_INIT_WORKS_WITH_C 1 | #define HAVE_FORTRAN_BINDING 1 | #define HAVE_CXX_EXCEPTIONS /**/ | #define HAVE_NAMESPACES /**/ | #define HAVE_NAMESPACE_STD /**/ | #define HAVE_CXX_BINDING 1 | #define FILE_NAMEPUB_BASEDIR "." | #define USE_FILE_FOR_NAMEPUB 1 | #define HAVE_NAMEPUB_SERVICE 1 | #define restrict __restrict | #define HAVE_GCC_ATTRIBUTE 1 | #define WORDS_LITTLEENDIAN 1 | #define HAVE_LONG_DOUBLE 1 | #define HAVE_LONG_LONG_INT 1 | #define HAVE_MAX_INTEGER_ALIGNMENT 8 | #define HAVE_MAX_STRUCT_ALIGNMENT 8 | #define HAVE_MAX_FP_ALIGNMENT 16 | #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8 | #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16 | #define SIZEOF_CHAR 1 | #define SIZEOF_UNSIGNED_CHAR 1 | #define SIZEOF_SHORT 2 | #define SIZEOF_UNSIGNED_SHORT 2 | #define SIZEOF_INT 4 | #define SIZEOF_UNSIGNED_INT 4 | #define SIZEOF_LONG 4 | #define SIZEOF_UNSIGNED_LONG 4 | #define SIZEOF_LONG_LONG 8 | #define SIZEOF_UNSIGNED_LONG_LONG 8 | #define SIZEOF_FLOAT 4 | #define SIZEOF_DOUBLE 8 | #define SIZEOF_LONG_DOUBLE 16 | #define SIZEOF_VOID_P 8 | #define STDC_HEADERS 1 | #define HAVE_STDDEF_H 1 | #define SIZEOF_WCHAR_T 2 | #define SIZEOF_FLOAT_INT 8 | #define SIZEOF_DOUBLE_INT 16 | #define SIZEOF_LONG_INT 8 | #define SIZEOF_SHORT_INT 8 | #define SIZEOF_TWO_INT 8 | #define SIZEOF_LONG_DOUBLE_INT 32 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_INT8_T 1 | #define HAVE_INT16_T 1 | #define HAVE_INT32_T 1 | #define HAVE_INT64_T 1 | #define HAVE_UINT8_T 1 | #define HAVE_UINT16_T 1 | #define HAVE_UINT32_T 1 | #define HAVE_UINT64_T 1 | #define HAVE_STDBOOL_H 1 | #define HAVE_COMPLEX_H 1 | #define SIZEOF__BOOL 1 | #define SIZEOF_FLOAT__COMPLEX 8 | #define SIZEOF_DOUBLE__COMPLEX 16 | #define SIZEOF_LONG_DOUBLE__COMPLEX 32 | #define HAVE__BOOL 1 | #define HAVE_FLOAT__COMPLEX 1 | #define HAVE_DOUBLE__COMPLEX 1 | #define HAVE_LONG_DOUBLE__COMPLEX 1 | #define MPIR_REAL4_CTYPE float | #define MPIR_REAL8_CTYPE double | #define MPIR_REAL16_CTYPE long double | #define MPIR_INTEGER1_CTYPE char | #define MPIR_INTEGER2_CTYPE short | #define MPIR_INTEGER4_CTYPE int | #define MPIR_INTEGER8_CTYPE long long | #define SIZEOF_F77_INTEGER 4 | #define SIZEOF_F77_REAL 4 | #define SIZEOF_F77_DOUBLE_PRECISION 8 | #define HAVE_AINT_LARGER_THAN_FINT 1 | #define HAVE_AINT_DIFFERENT_THAN_FINT 1 | #define HAVE_FINT_IS_INT 1 | #define F77_TRUE_VALUE_SET 1 | #define F77_TRUE_VALUE 1 | #define F77_FALSE_VALUE 0 | #define HAVE_STDIO_H 1 | #define HAVE_C_MULTI_ATTR_ALIAS 1 | #define SIZEOF_BOOL 1 | #define MPIR_CXX_BOOL_CTYPE _Bool | /* end confdefs.h. 
*/ | #include configure:40822: checking complex usability configure:40839: c++ -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp >&5 configure:40846: $? = 0 configure:40860: result: yes configure:40864: checking complex presence configure:40879: c++ -E -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp configure:40886: $? = 0 configure:40900: result: yes configure:40928: checking for complex configure:40935: result: yes configure:40948: checking size of Complex configure:41283: c++ -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp >&5 configure:41287: $? = 0 configure:41293: ./conftest.exe configure:41297: $? = 0 configure:41323: result: 8 configure:41337: checking size of DoubleComplex configure:41672: c++ -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp >&5 configure:41676: $? = 0 configure:41682: ./conftest.exe configure:41686: $? = 0 configure:41712: result: 16 configure:41727: checking size of LongDoubleComplex configure:42062: c++ -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.cpp >&5 configure:42066: $? = 0 configure:42072: ./conftest.exe configure:42076: $? = 0 configure:42102: result: 32 configure:42189: checking if char * pointers use byte addresses configure:42217: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5 conftest.c: In function 'main': conftest.c:117:27: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] if ((long)(a-(char*)0) == (long)(a)) return 0; return 1; ^ configure:42221: $? = 0 configure:42227: ./conftest.exe configure:42231: $? 
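The -Wpointer-to-int-cast warning in the byte-address probe above is itself a clue about this toolchain: the confdefs.h dumps in this log record SIZEOF_LONG 4 together with SIZEOF_VOID_P 8, i.e. a 64-bit LLP64 (MinGW-w64/Windows) target, where casting a pointer to long drops the upper 32 bits. A minimal sketch of the portable spelling, using intptr_t rather than long (illustrative, not MPICH code):

----- 8< ----- 8< ----- 8< -----
/* On LLP64 targets (Win64), long is 32-bit but pointers are 64-bit,
 * so (long)ptr truncates; intptr_t is guaranteed to hold a pointer. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    char buf[16];
    char *a = buf;
    intptr_t addr = (intptr_t)a;              /* safe on LP64 and LLP64 */
    printf("pointer low nibble: %d\n", (int)(addr & 0xf));
    return 0;
}
----- >8 ----- >8 ----- >8 -----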
configure:42275: checking for alignment restrictions on int64_t
configure:42319: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I<...> conftest.c >&5
conftest.c: In function 'main':
conftest.c:118:5: error: unknown type name 'int64_t'
     int64_t *p1, v;
     ^
conftest.c:122:12: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
     if (!( (long)bp & 0x7 ) ) bp += 4;
            ^
conftest.c:123:11: error: 'int64_t' undeclared (first use in this function)
     p1 = (int64_t *)bp;
           ^
conftest.c:123:11: note: each undeclared identifier is reported only once for each function it appears in
conftest.c:123:20: error: expected expression before ')' token
     p1 = (int64_t *)bp;
                    ^
conftest.c:126:12: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
     if (!( (long)bp & 0x3 ) ) bp += 2;
            ^
conftest.c:127:20: error: expected expression before ')' token
     p1 = (int64_t *)bp;
                    ^
conftest.c:129:12: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
     if (!( (long)bp & 0x1 ) ) bp += 1;
            ^
conftest.c:130:20: error: expected expression before ')' token
     p1 = (int64_t *)bp;
                    ^
configure:42323: $? = 1
configure: program exited with status 1
configure: failed program was:
[confdefs.h preamble elided; since the previous dump it has grown by SIZEOF_COMPLEX 8, SIZEOF_DOUBLECOMPLEX 16, SIZEOF_LONGDOUBLECOMPLEX 32, HAVE_CXX_COMPLEX 1 and the MPIR_CXX_*_VALUE constants; the two #include lines below lost their header names in the archive]
| #include <...>
| #include <...>
| int main(int argc, char **argv )
| {
|     int64_t *p1, v;
|     char *buf_p = (char *)malloc( 64 ), *bp;
|     bp = buf_p;
|     /* Make bp aligned on 4, not 8 bytes */
|     if (!( (long)bp & 0x7 ) ) bp += 4;
|     p1 = (int64_t *)bp;
|     v = -1;
|     *p1 = v;
|     if (!( (long)bp & 0x3 ) ) bp += 2;
|     p1 = (int64_t *)bp;
|     *p1 = 1;
|     if (!( (long)bp & 0x1 ) ) bp += 1;
|     p1 = (int64_t *)bp;
|     *p1 = 1;
|     return 0;
| }
configure:42351: result: yes
configure:42372: checking for alignment restrictions on int32_t
configure:42416: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I<...> conftest.c >&5
[fails exactly as the int64_t probe above, with 'int32_t' in place of 'int64_t']
configure:42420: $? = 1
configure: program exited with status 1
configure: failed program was:
[the same probe as above with int32_t; confdefs.h preamble elided]
configure:42448: result: yes
configure:42464: checking size of MPIR_Bsend_data_t
configure:42793: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I<...> conftest.c >&5
conftest.c:115:66: fatal error: /d/Distributions/mpich2-1.4.1p1/src/include/mpibsend.h: No such file or directory
 #include "/d/Distributions/mpich2-1.4.1p1/src/include/mpibsend.h"
                                                                  ^
compilation terminated.
configure:42797: $? = 1
configure: program exited with status 1
configure: failed program was:
[confdefs.h preamble elided]
| #define MPI_Datatype int
| #include "/d/Distributions/mpich2-1.4.1p1/src/include/mpibsend.h"
|
| static long int longval () { return (long int) (sizeof (MPIR_Bsend_data_t)); }
| static unsigned long int ulongval () { return (long int) (sizeof (MPIR_Bsend_data_t)); }
| #include <stdio.h>
| #include <stdlib.h>
| int
| main ()
| {
|   FILE *f = fopen ("conftest.val", "w");
|   if (! f)
|     return 1;
|   if (((long int) (sizeof (MPIR_Bsend_data_t))) < 0)
|     {
|       long int i = longval ();
|       if (i != ((long int) (sizeof (MPIR_Bsend_data_t))))
|         return 1;
|       fprintf (f, "%ld", i);
|     }
|   else
|     {
|       unsigned long int i = ulongval ();
|       if (i != ((long int) (sizeof (MPIR_Bsend_data_t))))
|         return 1;
|       fprintf (f, "%lu", i);
|     }
|   /* Do not output a trailing newline, as this causes \r\n confusion
|      on some platforms. */
|   return ferror (f) || fclose (f) != 0;
|
|   ;
|   return 0;
| }
configure:42833: result: 0
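Note that both alignment probes above failed to compile at all (the archive lost the #include targets, but the "unknown type name 'int64_t'" errors show the conftest never saw a header defining the fixed-width types on this system), and configure then fell back to the conservative answer "yes". A self-contained sketch of the same kind of probe, under the assumption that <stdint.h> and <stdlib.h> supply what it needs (again illustrative, not the actual conftest):

----- 8< ----- 8< ----- 8< -----
/* Sketch of an alignment-restriction probe: store an int64_t through a
 * pointer that is deliberately 4-byte- but not 8-byte-aligned.  On a
 * strict-alignment target this traps and the program exits non-zero;
 * x86/x86-64 tolerates the misaligned store and exits 0. */
#include <stdint.h>
#include <stdlib.h>

int main(void)
{
    char *buf = malloc(64), *bp = buf;
    int64_t *p1;

    if (!((uintptr_t)bp & 0x7))    /* make bp 4-byte, not 8-byte, aligned */
        bp += 4;
    p1 = (int64_t *)bp;
    *p1 = -1;                      /* misaligned 8-byte store */

    free(buf);
    return 0;
}
----- >8 ----- >8 ----- >8 -----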
configure:42852: checking for gcc __asm__ and pentium cmpxchgl instruction
configure:42886: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I<...> conftest.c >&5
configure:42890: $? = 0
configure:42896: ./conftest.exe
configure:42900: $? = 0
configure:42902: result: yes
configure:42935: checking for gcc __asm__ and AMD x86_64 cmpxchgq instruction
configure:42969: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I<...> conftest.c >&5
D:\Users\Haroogan\AppData\Local\Temp\ccvn8hum.s: Assembler messages:
D:\Users\Haroogan\AppData\Local\Temp\ccvn8hum.s:18: Error: incorrect register `%edx' used with `q' suffix
configure:42973: $? = 1
configure: program exited with status 1
configure: failed program was:
[confdefs.h preamble elided; since the previous dump it has grown by SIZEOF_MPIR_BSEND_DATA_T 0, HAVE_GCC_AND_PENTIUM_ASM 1 and USE_ATOMIC_UPDATES]
| int main(int argc, char *argv[])
| {
|     long int compval = 10;
|     volatile long int *p = &compval;
|     long int oldval = 10;
|     long int newval = 20;
|     char ret;
|     long int readval;
|     __asm__ __volatile__ ("lock; cmpxchgq %3, %1; sete %0"
|                           : "=q" (ret), "=m" (*p), "=a" (readval)
|                           : "r" (newval), "m" (*p), "a" (oldval) : "memory");
|     return (compval == 20) ? 0 : -1;
| }
configure:42998: result: no
configure:43008: checking for gcc __asm__ and IA64 xchg4 instruction
configure:43043: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I<...> conftest.c >&5
D:\Users\Haroogan\AppData\Local\Temp\cch2mqJM.s: Assembler messages:
D:\Users\Haroogan\AppData\Local\Temp\cch2mqJM.s:11: Error: no such instruction: `xchg4 %eax=[%rcx],%edx'
D:\Users\Haroogan\AppData\Local\Temp\cch2mqJM.s:32: Error: no such instruction: `xchg4 %edx=[%rax],%edx'
configure:43047: $? = 1
configure: program exited with status 1
configure: failed program was:
[confdefs.h preamble elided]
| unsigned long _InterlockedExchange(volatile void *ptr, unsigned long x)
| {
|     unsigned long result;
|     __asm__ __volatile ("xchg4 %0=[%1],%2" : "=r" (result)
|                         : "r" (ptr), "r" (x) : "memory");
|     return result;
| }
| int main(int argc, char *argv[])
| {
|     long val = 1;
|     volatile long *p = &val;
|     long oldval = _InterlockedExchange(p, (unsigned long)2);
|     return (oldval == 1 && val == 2) ? 0 : -1;
| }
configure:43072: result: no
configure:43290: checking for ANSI C header files
configure:43458: result: yes
configure:43501: checking stdlib.h usability
configure:43518: gcc -c -DNDEBUG -DNVALGRIND -O3 -I<...> conftest.c >&5
configure:43525: $? = 0
configure:43539: result: yes
configure:43543: checking stdlib.h presence
configure:43558: gcc -E -I<...> conftest.c
configure:43565: $? = 0
configure:43579: result: yes
configure:43607: checking for stdlib.h
configure:43616: result: yes
[the same usability/presence/result sequence follows, all "yes", for stdarg.h and sys/types.h]
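The cmpxchgq failure above is not a missing-instruction problem but an operand-width one: the probe uses "long int" operands, and on this LLP64 toolchain long is 4 bytes, so gcc hands the assembler a 32-bit register (%edx) that is illegal with the 'q' suffix. The same instruction assembles cleanly with an explicitly 64-bit operand type; a sketch (illustrative, not the actual conftest):

----- 8< ----- 8< ----- 8< -----
/* The probe from the log, with long int replaced by int64_t so that
 * 64-bit registers are paired with the 64-bit cmpxchgq instruction. */
#include <stdint.h>

int main(void)
{
    int64_t compval = 10;
    volatile int64_t *p = &compval;
    int64_t oldval = 10, newval = 20, readval;
    char ret;

    __asm__ __volatile__ ("lock; cmpxchgq %3, %1; sete %0"
                          : "=q" (ret), "=m" (*p), "=a" (readval)
                          : "r" (newval), "m" (*p), "a" (oldval)
                          : "memory");

    return (compval == 20) ? 0 : -1;   /* 0 = exchange happened */
}
----- >8 ----- >8 ----- >8 -----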
[string.h, inttypes.h, limits.h, stddef.h and errno.h likewise all pass]
configure:43501: checking sys/socket.h usability
configure:43518: gcc -c -DNDEBUG -DNVALGRIND -O3 -I<...> conftest.c >&5
conftest.c:158:24: fatal error: sys/socket.h: No such file or directory
 #include <sys/socket.h>
                        ^
compilation terminated.
configure:43525: $? = 1
configure: failed program was:
[confdefs.h preamble elided; it now also records HAVE_STDLIB_H 1, HAVE_STDARG_H 1, HAVE_SYS_TYPES_H 1, HAVE_STRING_H 1, HAVE_INTTYPES_H 1, HAVE_LIMITS_H 1, HAVE_STDDEF_H 1 and HAVE_ERRNO_H 1]
| #include <stdio.h>
| #ifdef HAVE_SYS_TYPES_H
| # include <sys/types.h>
| #endif
| #ifdef HAVE_SYS_STAT_H
| # include <sys/stat.h>
| #endif
| #ifdef STDC_HEADERS
| # include <stdlib.h>
| # include <stddef.h>
| #else
| # ifdef HAVE_STDLIB_H
| #  include <stdlib.h>
| # endif
| #endif
| #ifdef HAVE_STRING_H
| # if !defined STDC_HEADERS && defined HAVE_MEMORY_H
| #  include <memory.h>
| # endif
| # include <string.h>
| #endif
| #ifdef HAVE_STRINGS_H
| # include <strings.h>
| #endif
| #ifdef HAVE_INTTYPES_H
| # include <inttypes.h>
| #endif
| #ifdef HAVE_STDINT_H
| # include <stdint.h>
| #endif
| #ifdef HAVE_UNISTD_H
| # include <unistd.h>
| #endif
| #include <sys/socket.h>
configure:43539: result: no
configure:43543: checking sys/socket.h presence
configure:43558: gcc -E -I<...> conftest.c
conftest.c:125:24: fatal error: sys/socket.h: No such file or directory
 #include <sys/socket.h>
                        ^
compilation terminated.
configure:43565: $? = 1
configure: failed program was:
[confdefs.h preamble plus a bare "#include <sys/socket.h>", elided]
configure:43579: result: no
configure:43607: checking for sys/socket.h
configure:43616: result: no
configure:43501: checking sys/time.h usability
configure:43518: gcc -c -DNDEBUG -DNVALGRIND -O3 -I<...> conftest.c >&5
configure:43525: $? = 0
= 0
configure:43539: result: yes
configure:43543: checking sys/time.h presence
configure:43558: gcc -E -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c
configure:43565: $? = 0
configure:43579: result: yes
configure:43607: checking for sys/time.h
configure:43616: result: yes
configure:43501: checking unistd.h usability
configure:43518: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:43525: $? = 0
configure:43539: result: yes
configure:43543: checking unistd.h presence
configure:43558: gcc -E -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c
configure:43565: $? = 0
configure:43579: result: yes
configure:43607: checking for unistd.h
configure:43616: result: yes
configure:43501: checking endian.h usability
configure:43518: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
conftest.c:160:20: fatal error: endian.h: No such file or directory
 #include <endian.h>
                    ^
compilation terminated.
configure:43525: $? = 1
configure: failed program was:
| /* confdefs.h.  */
| #define PACKAGE_NAME ""
| #define PACKAGE_TARNAME ""
| #define PACKAGE_VERSION ""
| #define PACKAGE_STRING ""
| #define PACKAGE_BUGREPORT ""
| #define USE_SMP_COLLECTIVES 1
| #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL
| #define USE_LOGGING MPID_LOGGING_NONE
| #define HAVE_RUNTIME_THREADCHECK 1
| #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE
| #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL
| #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX
| #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE
| #define HAVE_ROMIO 1
| #define HAVE__FUNC__ /**/
| #define HAVE__FUNCTION__ /**/
| #define HAVE_LONG_LONG 1
| #define STDCALL
| #define F77_NAME_LOWER_USCORE 1
| #define STDC_HEADERS 1
| #define HAVE_MPI_F_INIT_WORKS_WITH_C 1
| #define HAVE_FORTRAN_BINDING 1
| #define HAVE_CXX_EXCEPTIONS /**/
| #define HAVE_NAMESPACES /**/
| #define HAVE_NAMESPACE_STD /**/
| #define HAVE_CXX_BINDING 1
| #define FILE_NAMEPUB_BASEDIR "."
| #define USE_FILE_FOR_NAMEPUB 1
| #define HAVE_NAMEPUB_SERVICE 1
| #define restrict __restrict
| #define HAVE_GCC_ATTRIBUTE 1
| #define WORDS_LITTLEENDIAN 1
| #define HAVE_LONG_DOUBLE 1
| #define HAVE_LONG_LONG_INT 1
| #define HAVE_MAX_INTEGER_ALIGNMENT 8
| #define HAVE_MAX_STRUCT_ALIGNMENT 8
| #define HAVE_MAX_FP_ALIGNMENT 16
| #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8
| #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16
| #define SIZEOF_CHAR 1
| #define SIZEOF_UNSIGNED_CHAR 1
| #define SIZEOF_SHORT 2
| #define SIZEOF_UNSIGNED_SHORT 2
| #define SIZEOF_INT 4
| #define SIZEOF_UNSIGNED_INT 4
| #define SIZEOF_LONG 4
| #define SIZEOF_UNSIGNED_LONG 4
| #define SIZEOF_LONG_LONG 8
| #define SIZEOF_UNSIGNED_LONG_LONG 8
| #define SIZEOF_FLOAT 4
| #define SIZEOF_DOUBLE 8
| #define SIZEOF_LONG_DOUBLE 16
| #define SIZEOF_VOID_P 8
| #define STDC_HEADERS 1
| #define HAVE_STDDEF_H 1
| #define SIZEOF_WCHAR_T 2
| #define SIZEOF_FLOAT_INT 8
| #define SIZEOF_DOUBLE_INT 16
| #define SIZEOF_LONG_INT 8
| #define SIZEOF_SHORT_INT 8
| #define SIZEOF_TWO_INT 8
| #define SIZEOF_LONG_DOUBLE_INT 32
| #define HAVE_INTTYPES_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_INT8_T 1
| #define HAVE_INT16_T 1
| #define HAVE_INT32_T 1
| #define HAVE_INT64_T 1
| #define HAVE_UINT8_T 1
| #define HAVE_UINT16_T 1
| #define HAVE_UINT32_T 1
| #define HAVE_UINT64_T 1
| #define HAVE_STDBOOL_H 1
| #define HAVE_COMPLEX_H 1
| #define SIZEOF__BOOL 1
| #define SIZEOF_FLOAT__COMPLEX 8
| #define SIZEOF_DOUBLE__COMPLEX 16
| #define SIZEOF_LONG_DOUBLE__COMPLEX 32
| #define HAVE__BOOL 1
| #define HAVE_FLOAT__COMPLEX 1
| #define HAVE_DOUBLE__COMPLEX 1
| #define HAVE_LONG_DOUBLE__COMPLEX 1
| #define MPIR_REAL4_CTYPE float
| #define MPIR_REAL8_CTYPE double
| #define MPIR_REAL16_CTYPE long double
| #define MPIR_INTEGER1_CTYPE char
| #define MPIR_INTEGER2_CTYPE short
| #define MPIR_INTEGER4_CTYPE int
| #define MPIR_INTEGER8_CTYPE long long
| #define SIZEOF_F77_INTEGER 4
| #define SIZEOF_F77_REAL 4
| #define SIZEOF_F77_DOUBLE_PRECISION 8
| #define HAVE_AINT_LARGER_THAN_FINT 1
| #define HAVE_AINT_DIFFERENT_THAN_FINT 1
| #define HAVE_FINT_IS_INT 1
| #define F77_TRUE_VALUE_SET 1
| #define F77_TRUE_VALUE 1
| #define F77_FALSE_VALUE 0
| #define HAVE_STDIO_H 1
| #define HAVE_C_MULTI_ATTR_ALIAS 1
| #define SIZEOF_BOOL 1
| #define MPIR_CXX_BOOL_CTYPE _Bool
| #define SIZEOF_COMPLEX 8
| #define SIZEOF_DOUBLECOMPLEX 16
| #define SIZEOF_LONGDOUBLECOMPLEX 32
| #define HAVE_CXX_COMPLEX 1
| #define MPIR_CXX_BOOL_VALUE 0x4c000133
| #define MPIR_CXX_COMPLEX_VALUE 0x4c000834
| #define MPIR_CXX_DOUBLE_COMPLEX_VALUE 0x4c001035
| #define MPIR_CXX_LONG_DOUBLE_COMPLEX_VALUE 0x4c002036
| #define SIZEOF_MPIR_BSEND_DATA_T 0
| #define HAVE_GCC_AND_PENTIUM_ASM 1
| #define USE_ATOMIC_UPDATES /**/
| #define STDC_HEADERS 1
| #define HAVE_STDLIB_H 1
| #define HAVE_STDARG_H 1
| #define HAVE_SYS_TYPES_H 1
| #define HAVE_STRING_H 1
| #define HAVE_INTTYPES_H 1
| #define HAVE_LIMITS_H 1
| #define HAVE_STDDEF_H 1
| #define HAVE_ERRNO_H 1
| #define HAVE_SYS_TIME_H 1
| #define HAVE_UNISTD_H 1
| /* end confdefs.h.  */
| #include <stdio.h>
| #ifdef HAVE_SYS_TYPES_H
| # include <sys/types.h>
| #endif
| #ifdef HAVE_SYS_STAT_H
| # include <sys/stat.h>
| #endif
| #ifdef STDC_HEADERS
| # include <stdlib.h>
| # include <stddef.h>
| #else
| # ifdef HAVE_STDLIB_H
| #  include <stdlib.h>
| # endif
| #endif
| #ifdef HAVE_STRING_H
| # if !defined STDC_HEADERS && defined HAVE_MEMORY_H
| #  include <memory.h>
| # endif
| # include <string.h>
| #endif
| #ifdef HAVE_STRINGS_H
| # include <strings.h>
| #endif
| #ifdef HAVE_INTTYPES_H
| # include <inttypes.h>
| #endif
| #ifdef HAVE_STDINT_H
| # include <stdint.h>
| #endif
| #ifdef HAVE_UNISTD_H
| # include <unistd.h>
| #endif
| #include <endian.h>
configure:43539: result: no
configure:43543: checking endian.h presence
configure:43558: gcc -E -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c
conftest.c:127:20: fatal error: endian.h: No such file or directory
 #include <endian.h>
                    ^
compilation terminated.
configure:43565: $? = 1
configure: failed program was:
| /* confdefs.h.  */
| [same #defines as in the confdefs.h block above]
| /* end confdefs.h.  */
| #include <endian.h>
configure:43579: result: no
configure:43607: checking for endian.h
configure:43616: result: no
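
Note on the failed checks above: mingw-w64 ships no <endian.h>, so both the
usability and presence probes fail; byte order itself was settled by an
earlier probe, which is why confdefs.h already records WORDS_LITTLEENDIAN 1.
A minimal standalone check of the same property (a generic sketch, not code
taken from MPICH or autoconf) looks like:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Store a known 32-bit pattern and inspect its lowest-addressed byte. */
        uint32_t probe = 0x01020304;
        unsigned char first = *(unsigned char *) &probe;

        /* On a little-endian machine the least significant byte comes first. */
        printf("%s-endian\n", first == 0x04 ? "little" : "big");
        return 0;
    }
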
configure:43501: checking assert.h usability
configure:43518: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:43525: $? = 0
configure:43539: result: yes
configure:43543: checking assert.h presence
configure:43558: gcc -E -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c
configure:43565: $? = 0
configure:43579: result: yes
configure:43607: checking for assert.h
configure:43616: result: yes
configure:43501: checking sys/param.h usability
configure:43518: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:43525: $? = 0
configure:43539: result: yes
configure:43543: checking sys/param.h presence
configure:43558: gcc -E -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c
configure:43565: $? = 0
configure:43579: result: yes
configure:43607: checking for sys/param.h
configure:43616: result: yes
configure:43631: checking for sys/uio.h
configure:43661: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
conftest.c:131:21: fatal error: sys/uio.h: No such file or directory
 #include <sys/uio.h>
                     ^
compilation terminated.
configure:43668: $? = 1
configure: failed program was:
| /* confdefs.h.  */
| [same #defines as in the first confdefs.h block above, plus:]
| #define HAVE_ASSERT_H 1
| #define HAVE_SYS_PARAM_H 1
| /* end confdefs.h.  */
|
| #include <sys/types.h>
| #include <sys/uio.h>
|
| int
| main ()
| {
| int a;
|   ;
|   return 0;
| }
configure:43683: result: no
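
Two different probes run for each header above: "usability" actually compiles
a conftest that includes the header on top of the default includes (gcc -c),
while "presence" only preprocesses it (gcc -E). sys/uio.h fails the compile
outright, as expected on Windows, where the readv/writev interfaces it
declares do not exist. A stripped-down equivalent of such a conftest
(illustrative only; the generated file also carries the confdefs.h prologue):

    /* conftest.c: "gcc -c conftest.c" answers usability,
       "gcc -E conftest.c" answers presence. */
    #include <sys/types.h>
    #include <sys/uio.h>   /* no such header under mingw-w64 */

    int main(void)
    {
        int a;             /* the body only has to be syntactically valid */
        (void) a;
        return 0;
    }
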
configure:43694: checking for size_t
configure:43722: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:43729: $? = 0
configure:43756: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
conftest.c: In function 'main':
conftest.c:165:21: error: expected expression before ')' token
 if (sizeof ((size_t)))
                     ^
configure:43763: $? = 1
configure: failed program was:
| /* confdefs.h.  */
| [same #defines as in the first confdefs.h block above, plus:]
| #define HAVE_ASSERT_H 1
| #define HAVE_SYS_PARAM_H 1
| /* end confdefs.h.  */
| [the default autoconf includes, as in the endian.h test above]
| int
| main ()
| {
| if (sizeof ((size_t)))
|    return 0;
|   ;
|   return 0;
| }
configure:43786: result: yes
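
The "expected expression" error above is deliberate. The size_t probe
compiles two programs: "if (sizeof (size_t))" must succeed, and
"if (sizeof ((size_t)))" must then fail, because a doubly parenthesized type
name is not an expression. If the second program compiled as well, size_t
would have to be an ordinary identifier rather than a type, and the result
would be "no". In miniature (a sketch, not the generated conftest):

    #include <stddef.h>

    int main(void)
    {
        /* Probe 1 must compile: sizeof applied to a type name. */
        if (sizeof (size_t))
            return 0;

        /* Probe 2 must NOT compile, which is the error seen above:
           ((size_t)) cannot be parsed as an expression.

        if (sizeof ((size_t)))
            return 0;
        */
        return 0;
    }
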
configure:43805: checking for setitimer
configure:43861: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
D:\Users\Haroogan\AppData\Local\Temp\ccK5QF85.o:conftest.c:(.text.startup+0xa): undefined reference to `setitimer'
collect2.exe: error: ld returned 1 exit status
configure:43868: $? = 1
configure: failed program was:
| /* confdefs.h.  */
| [same #defines as in the first confdefs.h block above, plus:]
| #define HAVE_ASSERT_H 1
| #define HAVE_SYS_PARAM_H 1
| /* end confdefs.h.  */
| /* Define setitimer to an innocuous variant, in case <limits.h> declares setitimer.
|    For example, HP-UX 11i <limits.h> declares gettimeofday.  */
| #define setitimer innocuous_setitimer
|
| /* System header to define __stub macros and hopefully few prototypes,
|     which can conflict with char setitimer (); below.
|     Prefer <limits.h> to <assert.h> if __STDC__ is defined, since
|     <limits.h> exists even on freestanding compilers.  */
|
| #ifdef __STDC__
| # include <limits.h>
| #else
| # include <assert.h>
| #endif
|
| #undef setitimer
|
| /* Override any GCC internal prototype to avoid an error.
|    Use char because int might match the return type of a GCC
|    builtin and then its argument prototype would still apply.  */
| #ifdef __cplusplus
| extern "C"
| #endif
| char setitimer ();
| /* The GNU C library defines this for functions which it implements
|    to always fail with ENOSYS.  Some functions are actually named
|    something starting with __ and the normal name is an alias.  */
| #if defined __stub_setitimer || defined __stub___setitimer
| choke me
| #endif
|
| int
| main ()
| {
| return setitimer ();
|   ;
|   return 0;
| }
configure:43890: result: no
configure:43805: checking for alarm
configure:43861: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:43868: $? = 0
configure:43890: result: yes
configure:43908: checking for vsnprintf
configure:43964: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
conftest.c:153:6: warning: conflicting types for built-in function 'vsnprintf' [enabled by default]
 char vsnprintf ();
      ^
configure:43971: $? = 0
configure:43993: result: yes
configure:43908: checking for vsprintf
configure:43964: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
conftest.c:154:6: warning: conflicting types for built-in function 'vsprintf' [enabled by default]
 char vsprintf ();
      ^
configure:43971: $? = 0
configure:43993: result: yes
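
The four function checks above (setitimer, alarm, vsnprintf, vsprintf) are
pure link tests: configure declares the function with a deliberately
meaningless prototype, calls it, and asks only whether the linker can resolve
the symbol. That is also why gcc warns about "conflicting types for built-in
function": the bogus "char vsnprintf ();" clashes with the builtin, but
harmlessly. mingw-w64's runtime exports no POSIX setitimer, hence the
undefined reference. The essence of the probe, minus the __stub guards shown
above (a sketch, not the generated file):

    /* probe.c: build with "gcc -o probe.exe probe.c".
       Only the link step matters, so the wrong prototype is
       intentional; the program is never meant to run usefully. */
    char setitimer ();

    int main(void)
    {
        return setitimer ();   /* undefined reference on mingw-w64 */
    }
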
configure:44008: checking whether vsnprintf needs a declaration
configure:44037: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
conftest.c:134:5: error: conflicting types for 'vsnprintf'
 int vsnprintf(double, int, double, const char *);
     ^
In file included from conftest.c:132:0:
d:\toolchains\x64\mingw-w64\4.8.0\x86_64-w64-mingw32\include\stdio.h:556:7: note: previous definition of 'vsnprintf' was here
 int vsnprintf (char * __restrict__ __stream, size_t __n, const char * __restrict__ __format, va_list __local_argv)
     ^
configure:44044: $? = 1
configure: failed program was:
| /* confdefs.h.  */
| [same #defines as in the first confdefs.h block above, plus:]
| #define HAVE_ASSERT_H 1
| #define HAVE_SYS_PARAM_H 1
| #define HAVE_ALARM 1
| #define HAVE_VSNPRINTF 1
| #define HAVE_VSPRINTF 1
| /* end confdefs.h.  */
| #include <stdio.h>
| #include <stdarg.h>
| int vsnprintf(double, int, double, const char *);
| int
| main ()
| {
| int a=vsnprintf(1.0,27,1.0,"foo");
|   ;
|   return 0;
| }
configure:44059: result: no
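
The "needs a declaration" probes work by contradiction: after including the
real headers, the conftest redeclares the function with an obviously wrong
prototype. If that fails to compile, the headers already declare the function
and no extra declaration is needed, so the "no" above is the good outcome;
mingw-w64's stdio.h does declare vsnprintf. Reduced to its core (illustrative;
it is supposed to fail to compile wherever stdio.h declares vsnprintf):

    #include <stdio.h>

    /* Bogus on purpose: conflicts with any existing declaration. */
    int vsnprintf(double, int, double, const char *);

    int main(void)
    {
        int a = vsnprintf(1.0, 27, 1.0, "foo");
        (void) a;
        return 0;
    }
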
configure:44078: checking for strerror
configure:44134: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:44141: $? = 0
configure:44163: result: yes
configure:44078: checking for strncasecmp
configure:44134: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
conftest.c:156:6: warning: conflicting types for built-in function 'strncasecmp' [enabled by default]
 char strncasecmp ();
      ^
configure:44141: $? = 0
configure:44163: result: yes
configure:44175: checking whether strerror_r is declared
configure:44204: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
conftest.c: In function 'main':
conftest.c:171:10: error: 'strerror_r' undeclared (first use in this function)
   (void) strerror_r;
          ^
conftest.c:171:10: note: each undeclared identifier is reported only once for each function it appears in
configure:44211: $? = 1
configure: failed program was:
| /* confdefs.h.  */
| [same #defines as in the first confdefs.h block above, plus:]
| #define HAVE_ASSERT_H 1
| #define HAVE_SYS_PARAM_H 1
| #define HAVE_ALARM 1
| #define HAVE_VSNPRINTF 1
| #define HAVE_VSPRINTF 1
| #define HAVE_STRERROR 1
| #define HAVE_STRNCASECMP 1
| /* end confdefs.h.  */
| [the default autoconf includes, as in the endian.h test above]
| int
| main ()
| {
| #ifndef strerror_r
|   (void) strerror_r;
| #endif
|
|   ;
|   return 0;
| }
configure:44226: result: no
configure:44248: checking for strerror_r
configure:44304: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
D:\Users\Haroogan\AppData\Local\Temp\ccy2apgT.o:conftest.c:(.text.startup+0xa): undefined reference to `strerror_r'
collect2.exe: error: ld returned 1 exit status
configure:44311: $? = 1
configure: failed program was:
| /* confdefs.h.  */
| [same #defines as in the previous block, plus:]
| #define HAVE_DECL_STRERROR_R 0
| /* end confdefs.h.  */
| /* Define strerror_r to an innocuous variant, in case <limits.h> declares strerror_r.
|    For example, HP-UX 11i <limits.h> declares gettimeofday.  */
| #define strerror_r innocuous_strerror_r
|
| /* System header to define __stub macros and hopefully few prototypes,
|     which can conflict with char strerror_r (); below.
|     Prefer <limits.h> to <assert.h> if __STDC__ is defined, since
|     <limits.h> exists even on freestanding compilers.  */
|
| #ifdef __STDC__
| # include <limits.h>
| #else
| # include <assert.h>
| #endif
|
| #undef strerror_r
|
| /* Override any GCC internal prototype to avoid an error.
|    Use char because int might match the return type of a GCC
|    builtin and then its argument prototype would still apply.  */
| #ifdef __cplusplus
| extern "C"
| #endif
| char strerror_r ();
| /* The GNU C library defines this for functions which it implements
|    to always fail with ENOSYS.  Some functions are actually named
|    something starting with __ and the normal name is an alias.  */
| #if defined __stub_strerror_r || defined __stub___strerror_r
| choke me
| #endif
|
| int
| main ()
| {
| return strerror_r ();
|   ;
|   return 0;
| }
configure:44333: result: no
configure:44345: checking whether strerror_r returns char *
configure:44434: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
D:\Users\Haroogan\AppData\Local\Temp\ccqjIgDQ.o:conftest.c:(.text.startup+0x1a): undefined reference to `strerror_r'
collect2.exe: error: ld returned 1 exit status
configure:44438: $? = 1
configure: program exited with status 1
configure: failed program was:
| /* confdefs.h.  */
| [same #defines as in the previous block]
| /* end confdefs.h.  */
| [the default autoconf includes, as in the endian.h test above]
| extern char *strerror_r ();
| int
| main ()
| {
|   char buf[100];
|   char x = *strerror_r (0, buf, sizeof buf);
|   return ! isalpha (x);
|   ;
|   return 0;
| }
configure:44465: result: no
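
strerror_r is POSIX, and the Windows C runtime simply does not provide it, so
all three probes fail and HAVE_DECL_STRERROR_R is recorded as 0; on this
platform the build has to fall back on plain strerror. A portable wrapper in
that spirit (a hypothetical sketch; my_strerror is not an MPICH function)
could look like:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical helper: use strerror_r where it exists, fall back
       to strerror (not thread-safe, but present) on Windows. */
    static void my_strerror(int errnum, char *buf, size_t len)
    {
    #ifdef _WIN32
        snprintf(buf, len, "%s", strerror(errnum));
    #else
        strerror_r(errnum, buf, len);   /* XSI variant returning int */
    #endif
    }

    int main(void)
    {
        char buf[128];
        my_strerror(2, buf, sizeof buf);
        puts(buf);                      /* e.g. "No such file or directory" */
        return 0;
    }
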
configure:44545: checking for snprintf
configure:44601: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
conftest.c:158:6: warning: conflicting types for built-in function 'snprintf' [enabled by default]
 char snprintf ();
      ^
configure:44608: $? = 0
configure:44630: result: yes
configure:44644: checking whether snprintf needs a declaration
configure:44672: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
conftest.c:137:5: error: conflicting types for 'snprintf'
 int snprintf(double, int, double, const char *);
     ^
In file included from conftest.c:136:0:
d:\toolchains\x64\mingw-w64\4.8.0\x86_64-w64-mingw32\include\stdio.h:566:5: note: previous definition of 'snprintf' was here
 int snprintf (char * __restrict__ __stream, size_t __n, const char * __restrict__ __format, ...)
     ^
configure:44679: $? = 1
configure: failed program was:
| /* confdefs.h.  */
| [same #defines as in the first confdefs.h block above, plus the additions listed so far and:]
| #define HAVE_SNPRINTF 1
| /* end confdefs.h.  */
| #include <stdio.h>
| int snprintf(double, int, double, const char *);
| int
| main ()
| {
| int a=snprintf(1.0,27,1.0,"foo");
|   ;
|   return 0;
| }
configure:44694: result: no
configure:44712: checking for qsort
configure:44768: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:44775: $? = 0
configure:44797: result: yes
configure:44816: checking for va_copy
configure:44852: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:44859: $? = 0
configure:44879: result: yes
configure:44964: checking for variable argument list macro functionality
configure:44990: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src conftest.c >&5
configure:44997: $? = 0
configure:45010: result: yes
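
va_copy and variadic macros are C99 features, and mingw-w64's gcc 4.8
supports both, hence the three "yes" results. va_copy matters because a
va_list may only be traversed once; any code that needs two passes over the
same arguments (printf-style formatting that first measures, then writes,
for instance) must duplicate the list first. A small self-contained example
(generic, not taken from MPICH):

    #include <stdarg.h>
    #include <stdio.h>

    /* Sums its int arguments twice to show why va_copy is needed. */
    static int sum_twice(int count, ...)
    {
        va_list ap, ap2;
        int total = 0;

        va_start(ap, count);
        va_copy(ap2, ap);            /* the capability probed above */

        for (int i = 0; i < count; i++)
            total += va_arg(ap, int);
        for (int i = 0; i < count; i++)
            total += va_arg(ap2, int);

        va_end(ap2);
        va_end(ap);
        return total;
    }

    int main(void)
    {
        printf("%d\n", sum_twice(3, 1, 2, 3));   /* prints 12 */
        return 0;
    }
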
| #define USE_FILE_FOR_NAMEPUB 1 | #define HAVE_NAMEPUB_SERVICE 1 | #define restrict __restrict | #define HAVE_GCC_ATTRIBUTE 1 | #define WORDS_LITTLEENDIAN 1 | #define HAVE_LONG_DOUBLE 1 | #define HAVE_LONG_LONG_INT 1 | #define HAVE_MAX_INTEGER_ALIGNMENT 8 | #define HAVE_MAX_STRUCT_ALIGNMENT 8 | #define HAVE_MAX_FP_ALIGNMENT 16 | #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8 | #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16 | #define SIZEOF_CHAR 1 | #define SIZEOF_UNSIGNED_CHAR 1 | #define SIZEOF_SHORT 2 | #define SIZEOF_UNSIGNED_SHORT 2 | #define SIZEOF_INT 4 | #define SIZEOF_UNSIGNED_INT 4 | #define SIZEOF_LONG 4 | #define SIZEOF_UNSIGNED_LONG 4 | #define SIZEOF_LONG_LONG 8 | #define SIZEOF_UNSIGNED_LONG_LONG 8 | #define SIZEOF_FLOAT 4 | #define SIZEOF_DOUBLE 8 | #define SIZEOF_LONG_DOUBLE 16 | #define SIZEOF_VOID_P 8 | #define STDC_HEADERS 1 | #define HAVE_STDDEF_H 1 | #define SIZEOF_WCHAR_T 2 | #define SIZEOF_FLOAT_INT 8 | #define SIZEOF_DOUBLE_INT 16 | #define SIZEOF_LONG_INT 8 | #define SIZEOF_SHORT_INT 8 | #define SIZEOF_TWO_INT 8 | #define SIZEOF_LONG_DOUBLE_INT 32 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_INT8_T 1 | #define HAVE_INT16_T 1 | #define HAVE_INT32_T 1 | #define HAVE_INT64_T 1 | #define HAVE_UINT8_T 1 | #define HAVE_UINT16_T 1 | #define HAVE_UINT32_T 1 | #define HAVE_UINT64_T 1 | #define HAVE_STDBOOL_H 1 | #define HAVE_COMPLEX_H 1 | #define SIZEOF__BOOL 1 | #define SIZEOF_FLOAT__COMPLEX 8 | #define SIZEOF_DOUBLE__COMPLEX 16 | #define SIZEOF_LONG_DOUBLE__COMPLEX 32 | #define HAVE__BOOL 1 | #define HAVE_FLOAT__COMPLEX 1 | #define HAVE_DOUBLE__COMPLEX 1 | #define HAVE_LONG_DOUBLE__COMPLEX 1 | #define MPIR_REAL4_CTYPE float | #define MPIR_REAL8_CTYPE double | #define MPIR_REAL16_CTYPE long double | #define MPIR_INTEGER1_CTYPE char | #define MPIR_INTEGER2_CTYPE short | #define MPIR_INTEGER4_CTYPE int | #define MPIR_INTEGER8_CTYPE long long | #define SIZEOF_F77_INTEGER 4 | #define SIZEOF_F77_REAL 4 | #define SIZEOF_F77_DOUBLE_PRECISION 8 | #define HAVE_AINT_LARGER_THAN_FINT 1 | #define HAVE_AINT_DIFFERENT_THAN_FINT 1 | #define HAVE_FINT_IS_INT 1 | #define F77_TRUE_VALUE_SET 1 | #define F77_TRUE_VALUE 1 | #define F77_FALSE_VALUE 0 | #define HAVE_STDIO_H 1 | #define HAVE_C_MULTI_ATTR_ALIAS 1 | #define SIZEOF_BOOL 1 | #define MPIR_CXX_BOOL_CTYPE _Bool | #define SIZEOF_COMPLEX 8 | #define SIZEOF_DOUBLECOMPLEX 16 | #define SIZEOF_LONGDOUBLECOMPLEX 32 | #define HAVE_CXX_COMPLEX 1 | #define MPIR_CXX_BOOL_VALUE 0x4c000133 | #define MPIR_CXX_COMPLEX_VALUE 0x4c000834 | #define MPIR_CXX_DOUBLE_COMPLEX_VALUE 0x4c001035 | #define MPIR_CXX_LONG_DOUBLE_COMPLEX_VALUE 0x4c002036 | #define SIZEOF_MPIR_BSEND_DATA_T 0 | #define HAVE_GCC_AND_PENTIUM_ASM 1 | #define USE_ATOMIC_UPDATES /**/ | #define STDC_HEADERS 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STDARG_H 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_STRING_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_LIMITS_H 1 | #define HAVE_STDDEF_H 1 | #define HAVE_ERRNO_H 1 | #define HAVE_SYS_TIME_H 1 | #define HAVE_UNISTD_H 1 | #define HAVE_ASSERT_H 1 | #define HAVE_SYS_PARAM_H 1 | #define HAVE_ALARM 1 | #define HAVE_VSNPRINTF 1 | #define HAVE_VSPRINTF 1 | #define HAVE_STRERROR 1 | #define HAVE_STRNCASECMP 1 | #define HAVE_DECL_STRERROR_R 0 | #define HAVE_SNPRINTF 1 | #define HAVE_QSORT 1 | #define HAVE_VA_COPY 1 | #define HAVE_MACRO_VA_ARGS 1 | /* end confdefs.h. 
configure:45092: checking for alloca configure:45139: gcc -o conftest.exe [same flags as above] conftest.c >&5 configure:45146: $? = 0 configure:45166: result: yes configure:45425: checking for strdup configure:45481: gcc -o conftest.exe [same flags as above] conftest.c >&5 conftest.c:163:6: warning: conflicting types for built-in function 'strdup' [enabled by default] char strdup (); ^ configure:45488: $? = 0 configure:45510: result: yes configure:45525: checking whether strdup needs a declaration configure:45553: gcc -c [same flags as above] conftest.c >&5 conftest.c:142:5: error: conflicting types for 'strdup' int strdup(double, int, double, const char *); ^ configure:45560: $? = 1 configure: failed program was: | /* confdefs.h. */ | [... confdefs.h #define block elided, as above ...] | /* end confdefs.h. */ | #include <string.h> | int strdup(double, int, double, const char *); | int | main () | { | int a=strdup(1.0,27,1.0,"foo"); | ; | return 0; | } configure:45575: result: no
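The "needs a declaration" probes work by contradiction: the conftest deliberately declares the function with an impossible prototype. If that still compiles, no declaration was in scope and MPICH must supply its own; a "conflicting types" error, as here, means the system headers already declare it, so the answer is "no". Stripped to the idea (a sketch, not the literal configure macro):

    /* If this compiles, <string.h> did not declare strdup under the
       current flags; the "conflicting types" error above means it did. */
    #include <string.h>
    int strdup(double, int, double, const char *);  /* deliberately wrong */
    int main(void) { return 0; }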
configure:45914: checking for mkstemp configure:45970: gcc -o conftest.exe [same flags as above] conftest.c >&5 D:\Users\Haroogan\AppData\Local\Temp\ccugqO1u.o:conftest.c:(.text.startup+0xa): undefined reference to `mkstemp' collect2.exe: error: ld returned 1 exit status configure:45977: $? = 1 configure: failed program was: | /* confdefs.h. */ | [... confdefs.h #define block elided, as above ...] | /* end confdefs.h. */ | /* Define mkstemp to an innocuous variant, in case <limits.h> declares mkstemp. | For example, HP-UX 11i <limits.h> declares gettimeofday. */ | #define mkstemp innocuous_mkstemp | | /* System header to define __stub macros and hopefully few prototypes, | which can conflict with char mkstemp (); below. | Prefer <limits.h> to <assert.h> if __STDC__ is defined, since | <limits.h> exists even on freestanding compilers. */ | | #ifdef __STDC__ | # include <limits.h> | #else | # include <assert.h> | #endif | | #undef mkstemp | | /* Override any GCC internal prototype to avoid an error. | Use char because int might match the return type of a GCC | builtin and then its argument prototype would still apply. */ | #ifdef __cplusplus | extern "C" | #endif | char mkstemp (); | /* The GNU C library defines this for functions which it implements | to always fail with ENOSYS. Some functions are actually named | something starting with __ and the normal name is an alias. */ | #if defined __stub_mkstemp || defined __stub___mkstemp | choke me | #endif | | int | main () | { | return mkstemp (); | ; | return 0; | } configure:45999: result: no
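Unlike the declaration probes, this one is decided at link time. The AC_CHECK_FUNCS template declares the symbol with a deliberately vague char mkstemp (); prototype, so that neither headers nor gcc builtins can satisfy it, then tries to link a call; the "undefined reference" above is the "no" answer (this MinGW-w64 CRT has no mkstemp). Its core is just:

    /* Core of the AC_CHECK_FUNCS link probe: only the linker can
       resolve this call, so linking succeeds exactly when the C
       runtime exports mkstemp. */
    char mkstemp(void);
    int main(void) { return (int) mkstemp(); }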
configure:46080: checking for fdopen configure:46136: gcc -o conftest.exe [same flags as above] conftest.c >&5 configure:46143: $? = 0 configure:46165: result: yes configure:46179: checking whether fdopen needs a declaration configure:46207: gcc -c [same flags as above] conftest.c >&5 configure:46214: $? = 0 configure:46229: result: yes
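Note the asymmetry with strdup: here the bogus-prototype conftest compiled, so no fdopen declaration was visible under these flags, and configure records NEEDS_FDOPEN_DECL 1 (it appears in the later confdefs.h dumps). Code consuming that result would guard its own declaration, roughly like this (a sketch, not MPICH's exact source):

    #include <stdio.h>
    #ifdef NEEDS_FDOPEN_DECL
    extern FILE *fdopen(int fd, const char *mode);  /* declared only when the headers don't */
    #endif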
configure:46246: checking for putenv configure:46302: gcc -o conftest.exe [same flags as above] conftest.c >&5 configure:46309: $? = 0 configure:46331: result: yes configure:46345: checking whether putenv needs a declaration configure:46373: gcc -c [same flags as above] conftest.c >&5 conftest.c:145:5: error: conflicting types for 'putenv' int putenv(double, int, double, const char *); ^ In file included from conftest.c:144:0: d:\toolchains\x64\mingw-w64\4.8.0\x86_64-w64-mingw32\include\stdlib.h:612:15: note: previous declaration of 'putenv' was here int __cdecl putenv(const char *_EnvString) __MINGW_ATTRIB_DEPRECATED_MSVC2005; ^ configure:46380: $? = 1 configure: failed program was: | /* confdefs.h. */ | [... confdefs.h #define block elided, as above ...] | /* end confdefs.h. */ | #include <stdlib.h> | int putenv(double, int, double, const char *); | int | main () | { | int a=putenv(1.0,27,1.0,"foo"); | ; | return 0; | } configure:46395: result: no
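Here the conflict with MinGW-w64's stdlib.h, which already declares putenv (marked deprecated in favor of the underscore spelling), is what makes the answer "no": a declaration exists, so none is emitted. Code that wants to avoid the deprecation on Windows CRTs might alias the name, along these lines (portable_putenv is a hypothetical name for illustration):

    #include <stdlib.h>
    #if defined(_WIN32)
    # define portable_putenv _putenv   /* the CRT's preferred, non-deprecated spelling */
    #else
    # define portable_putenv putenv
    #endif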
configure:46442: checking for clock_gettime configure:46498: gcc -o conftest.exe [same flags as above] conftest.c >&5 configure:46505: $? = 0 configure:46527: result: yes configure:46442: checking for clock_getres configure:46498: gcc -o conftest.exe [same flags as above] conftest.c >&5 configure:46505: $? = 0 configure:46527: result: yes configure:46442: checking for gethrtime configure:46498: gcc -o conftest.exe [same flags as above] conftest.c >&5 D:\Users\Haroogan\AppData\Local\Temp\ccQpd4WS.o:conftest.c:(.text.startup+0xa): undefined reference to `gethrtime' collect2.exe: error: ld returned 1 exit status configure:46505: $? = 1 configure: failed program was: | /* confdefs.h. */ | [... confdefs.h #define block elided, as above ...] | /* end confdefs.h. */ | [... standard AC_CHECK_FUNCS stub program, as quoted above for mkstemp, with gethrtime in place of mkstemp ...] configure:46527: result: no
configure:46442: checking for mach_absolute_time configure:46498: gcc -o conftest.exe [same flags as above] conftest.c >&5 D:\Users\Haroogan\AppData\Local\Temp\cckAzGi1.o:conftest.c:(.text.startup+0xa): undefined reference to `mach_absolute_time' collect2.exe: error: ld returned 1 exit status configure:46505: $? = 1 configure: failed program was: | /* confdefs.h. */ | [... confdefs.h #define block elided, as above ...] | /* end confdefs.h. */ | [... standard stub program as above, with mach_absolute_time ...] configure:46527: result: no
configure:46442: checking for gettimeofday configure:46498: gcc -o conftest.exe [same flags as above] conftest.c >&5 configure:46505: $? = 0 configure:46527: result: yes configure:46725: checking for library containing clock_gettime configure:46766: gcc -o conftest.exe [same flags as above] conftest.c >&5 configure:46773: $? = 0 configure:46804: result: none required configure:46816: checking for library containing clock_getres configure:46857: gcc -o conftest.exe [same flags as above] conftest.c >&5 configure:46864: $? = 0 configure:46895: result: none required configure:46910: checking whether struct timespec is defined in time.h configure:46939: gcc -c [same flags as above] conftest.c >&5 configure:46946: $? = 0 configure:46962: result: yes configure:47035: checking for CLOCK_REALTIME defined in time.h configure:47064: gcc -c [same flags as above] conftest.c >&5 configure:47071: $? = 0 configure:47086: result: yes
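This block settles the timer selection: the Solaris (gethrtime) and Mach (mach_absolute_time) sources are absent, clock_gettime and clock_getres link without any extra library, and struct timespec plus CLOCK_REALTIME are usable from time.h. What a clock_gettime-backed wall clock boils down to (a sketch of the idea, not MPICH's actual timer code):

    #include <time.h>
    /* Wall-clock seconds from the POSIX realtime clock found above. */
    double wtime(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);
        return (double) ts.tv_sec + 1.0e-9 * (double) ts.tv_nsec;
    }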
configure:48277: checking pthread.h usability configure:48294: gcc -c -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/mpid/ch3/include -I/d/Distributions/mpich2-1.4.1p1/src/mpid/ch3/include -I/d/Distributions/mpich2-1.4.1p1/build/src/mpid/common/datatype -I/d/Distributions/mpich2-1.4.1p1/src/mpid/common/datatype -I/d/Distributions/mpich2-1.4.1p1/build/src/mpid/common/locks -I/d/Distributions/mpich2-1.4.1p1/src/mpid/common/locks -I/d/Distributions/mpich2-1.4.1p1/build/src/mpid/ch3/channels/nemesis/include -I/d/Distributions/mpich2-1.4.1p1/src/mpid/ch3/channels/nemesis/include -I/d/Distributions/mpich2-1.4.1p1/build/src/mpid/ch3/channels/nemesis/nemesis/include -I/d/Distributions/mpich2-1.4.1p1/src/mpid/ch3/channels/nemesis/nemesis/include -I/d/Distributions/mpich2-1.4.1p1/build/src/mpid/ch3/channels/nemesis/nemesis/utils/monitor -I/d/Distributions/mpich2-1.4.1p1/src/mpid/ch3/channels/nemesis/nemesis/utils/monitor -I/d/Distributions/mpich2-1.4.1p1/build/src/util/wrappers -I/d/Distributions/mpich2-1.4.1p1/src/util/wrappers conftest.c >&5 configure:48301: $? = 0 configure:48315: result: yes configure:48319: checking pthread.h presence configure:48334: gcc -E [same flags as above] conftest.c configure:48341: $? = 0 configure:48355: result: yes configure:48383: checking for pthread.h configure:48392: result: yes
configure:48415: checking for pthread_key_create in -lpthread configure:48450: gcc -o conftest.exe [same flags as above] conftest.c -lpthread >&5 configure:48457: $? = 0 configure:48478: result: yes LIBS(=' ') does not contain '-lpthread', prepending configure:48502: checking for pthread_yield configure:48558: gcc -o conftest.exe [same flags as above] conftest.c -lpthread >&5 D:\Users\Haroogan\AppData\Local\Temp\ccack9Iu.o:conftest.c:(.text.startup+0xa): undefined reference to `pthread_yield' collect2.exe: error: ld returned 1 exit status configure:48565: $? = 1 configure: failed program was: | /* confdefs.h. */ | [... confdefs.h #define block elided, as above ...] | /* end confdefs.h. */ | [... standard stub program as above, with pthread_yield ...] configure:48587: result: no
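pthread_yield is a nonstandard extension that this winpthreads build does not export; the portable spelling is sched_yield. Code that wants a yield on either kind of system typically does something like the following (a sketch; HAVE_PTHREAD_YIELD is assumed to be the usual macro recorded by the check above):

    #include <pthread.h>
    #include <sched.h>
    static void thread_yield(void)
    {
    #if defined(HAVE_PTHREAD_YIELD)
        pthread_yield();        /* nonstandard, found on older Linux/glibc */
    #else
        sched_yield();          /* POSIX */
    #endif
    }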
configure:48601: checking for pthread_key_create configure:48657: gcc -o conftest.exe [same flags as above] conftest.c -lpthread >&5 configure:48664: $? = 0 configure:48684: result: yes configure:48708: checking for pthread_cleanup_push configure:48764: gcc -o conftest.exe [same flags as above] conftest.c -lpthread >&5 D:\Users\Haroogan\AppData\Local\Temp\ccCv30ZG.o:conftest.c:(.text.startup+0xa): undefined reference to `pthread_cleanup_push' collect2.exe: error: ld returned 1 exit status configure:48771: $? = 1 configure: failed program was: | /* confdefs.h. */ | [... confdefs.h #define block elided, as above ...] | /* end confdefs.h. */ | [... standard stub program as above, with pthread_cleanup_push ...] configure:48793: result: no
configure:48806: checking whether pthread_cleanup_push is available (may be a macro in pthread.h) configure:48835: gcc -o conftest.exe [same flags as above] conftest.c -lpthread >&5 conftest.c: In function 'main': conftest.c:162:1: error: expected declaration or statement at end of input } ^ configure:48842: $? = 1 configure: failed program was: | /* confdefs.h. */ | [... confdefs.h #define block elided, as above ...] | /* end confdefs.h. */ | | #include <pthread.h> | void f1(void *a) { return; } | int | main () | { | pthread_cleanup_push( f1, (void *)0 ); | ; | return 0; | } configure:48862: result: no
| #define USE_FILE_FOR_NAMEPUB 1 | #define HAVE_NAMEPUB_SERVICE 1 | #define restrict __restrict | #define HAVE_GCC_ATTRIBUTE 1 | #define WORDS_LITTLEENDIAN 1 | #define HAVE_LONG_DOUBLE 1 | #define HAVE_LONG_LONG_INT 1 | #define HAVE_MAX_INTEGER_ALIGNMENT 8 | #define HAVE_MAX_STRUCT_ALIGNMENT 8 | #define HAVE_MAX_FP_ALIGNMENT 16 | #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8 | #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16 | #define SIZEOF_CHAR 1 | #define SIZEOF_UNSIGNED_CHAR 1 | #define SIZEOF_SHORT 2 | #define SIZEOF_UNSIGNED_SHORT 2 | #define SIZEOF_INT 4 | #define SIZEOF_UNSIGNED_INT 4 | #define SIZEOF_LONG 4 | #define SIZEOF_UNSIGNED_LONG 4 | #define SIZEOF_LONG_LONG 8 | #define SIZEOF_UNSIGNED_LONG_LONG 8 | #define SIZEOF_FLOAT 4 | #define SIZEOF_DOUBLE 8 | #define SIZEOF_LONG_DOUBLE 16 | #define SIZEOF_VOID_P 8 | #define STDC_HEADERS 1 | #define HAVE_STDDEF_H 1 | #define SIZEOF_WCHAR_T 2 | #define SIZEOF_FLOAT_INT 8 | #define SIZEOF_DOUBLE_INT 16 | #define SIZEOF_LONG_INT 8 | #define SIZEOF_SHORT_INT 8 | #define SIZEOF_TWO_INT 8 | #define SIZEOF_LONG_DOUBLE_INT 32 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_INT8_T 1 | #define HAVE_INT16_T 1 | #define HAVE_INT32_T 1 | #define HAVE_INT64_T 1 | #define HAVE_UINT8_T 1 | #define HAVE_UINT16_T 1 | #define HAVE_UINT32_T 1 | #define HAVE_UINT64_T 1 | #define HAVE_STDBOOL_H 1 | #define HAVE_COMPLEX_H 1 | #define SIZEOF__BOOL 1 | #define SIZEOF_FLOAT__COMPLEX 8 | #define SIZEOF_DOUBLE__COMPLEX 16 | #define SIZEOF_LONG_DOUBLE__COMPLEX 32 | #define HAVE__BOOL 1 | #define HAVE_FLOAT__COMPLEX 1 | #define HAVE_DOUBLE__COMPLEX 1 | #define HAVE_LONG_DOUBLE__COMPLEX 1 | #define MPIR_REAL4_CTYPE float | #define MPIR_REAL8_CTYPE double | #define MPIR_REAL16_CTYPE long double | #define MPIR_INTEGER1_CTYPE char | #define MPIR_INTEGER2_CTYPE short | #define MPIR_INTEGER4_CTYPE int | #define MPIR_INTEGER8_CTYPE long long | #define SIZEOF_F77_INTEGER 4 | #define SIZEOF_F77_REAL 4 | #define SIZEOF_F77_DOUBLE_PRECISION 8 | #define HAVE_AINT_LARGER_THAN_FINT 1 | #define HAVE_AINT_DIFFERENT_THAN_FINT 1 | #define HAVE_FINT_IS_INT 1 | #define F77_TRUE_VALUE_SET 1 | #define F77_TRUE_VALUE 1 | #define F77_FALSE_VALUE 0 | #define HAVE_STDIO_H 1 | #define HAVE_C_MULTI_ATTR_ALIAS 1 | #define SIZEOF_BOOL 1 | #define MPIR_CXX_BOOL_CTYPE _Bool | #define SIZEOF_COMPLEX 8 | #define SIZEOF_DOUBLECOMPLEX 16 | #define SIZEOF_LONGDOUBLECOMPLEX 32 | #define HAVE_CXX_COMPLEX 1 | #define MPIR_CXX_BOOL_VALUE 0x4c000133 | #define MPIR_CXX_COMPLEX_VALUE 0x4c000834 | #define MPIR_CXX_DOUBLE_COMPLEX_VALUE 0x4c001035 | #define MPIR_CXX_LONG_DOUBLE_COMPLEX_VALUE 0x4c002036 | #define SIZEOF_MPIR_BSEND_DATA_T 0 | #define HAVE_GCC_AND_PENTIUM_ASM 1 | #define USE_ATOMIC_UPDATES /**/ | #define STDC_HEADERS 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STDARG_H 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_STRING_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_LIMITS_H 1 | #define HAVE_STDDEF_H 1 | #define HAVE_ERRNO_H 1 | #define HAVE_SYS_TIME_H 1 | #define HAVE_UNISTD_H 1 | #define HAVE_ASSERT_H 1 | #define HAVE_SYS_PARAM_H 1 | #define HAVE_ALARM 1 | #define HAVE_VSNPRINTF 1 | #define HAVE_VSPRINTF 1 | #define HAVE_STRERROR 1 | #define HAVE_STRNCASECMP 1 | #define HAVE_DECL_STRERROR_R 0 | #define HAVE_SNPRINTF 1 | #define HAVE_QSORT 1 | #define HAVE_VA_COPY 1 | #define HAVE_MACRO_VA_ARGS 1 | #define HAVE_ALLOCA 1 | #define HAVE_STRDUP 1 | #define HAVE_FDOPEN 1 | #define NEEDS_FDOPEN_DECL 1 | #define HAVE_PUTENV 1 | #define HAVE_CLOCK_GETTIME 1 | #define HAVE_CLOCK_GETRES 1 | 
| /* end confdefs.h. */
|
| #include <pthread.h>
| void f1(void *a) { return; }
| int
| main ()
| {
| pthread_cleanup_push( f1, (void *)0 );
|   ;
|   return 0;
| }
configure:48862: result: no
configure:48874: checking whether pthread.h defines PTHREAD_MUTEX_RECURSIVE_NP
configure:48901: gcc -c -DNDEBUG -DNVALGRIND -O3 [same -I flags as above] conftest.c >&5
configure:48908: $? = 0
configure:48923: result: yes
configure:48925: checking whether pthread.h defines PTHREAD_MUTEX_RECURSIVE
configure:48952: gcc -c -DNDEBUG -DNVALGRIND -O3 [same -I flags as above] conftest.c >&5
configure:48959: $? = 0
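The compile error above is not a broken toolchain: on a platform where pthread_cleanup_push is (presumably) a block-opening macro, the conftest calls push without the matching pop, so the brace the macro opens is never closed and the compiler runs off the end of main() with exactly that "expected declaration or statement at end of input". A minimal sketch of the lexically paired usage the macro form requires (illustrative standalone program, not MPICH code):

#include <pthread.h>
#include <stdio.h>

static void cleanup(void *arg)
{
    printf("cleanup ran: %s\n", (const char *)arg);
}

static void *worker(void *arg)
{
    (void)arg;
    /* push/pop must be paired in the same scope: with the common macro
       definition, push opens a '{' that only pop closes */
    pthread_cleanup_push(cleanup, "worker exit");
    /* ... real work / cancellation points ... */
    pthread_cleanup_pop(1);   /* non-zero argument: run the handler now */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);
    return 0;
}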
configure:48974: result: yes
configure:48992: checking whether pthread.h defines PTHREAD_MUTEX_ERRORCHECK_NP
configure:49019: gcc -c -DNDEBUG -DNVALGRIND -O3 [same -I flags as above] conftest.c >&5
configure:49026: $? = 0
configure:49041: result: yes
configure:49043: checking whether pthread.h defines PTHREAD_MUTEX_ERRORCHECK
configure:49070: gcc -c -DNDEBUG -DNVALGRIND -O3 [same -I flags as above] conftest.c >&5
configure:49077: $? = 0
configure:49092: result: yes
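Both the _NP and the standard spellings of the recursive and error-checking mutex types are found here. For reference, a minimal sketch of what PTHREAD_MUTEX_RECURSIVE buys (illustrative only): the owning thread may relock the mutex without deadlocking.

#include <pthread.h>
#include <stdio.h>

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_mutex_t m;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&m, &attr);

    pthread_mutex_lock(&m);
    pthread_mutex_lock(&m);   /* re-entrant lock: would deadlock a default mutex */
    printf("locked twice from the same thread\n");
    pthread_mutex_unlock(&m);
    pthread_mutex_unlock(&m);

    pthread_mutex_destroy(&m);
    pthread_mutexattr_destroy(&attr);
    return 0;
}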
configure:49488: checking for thread local storage specifier
configure:49525: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 [same -I flags as above] conftest.c -lpthread >&5
configure:49532: $? = 0
configure:49567: result: __thread
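configure settles on GCC's __thread storage-class specifier for thread-local storage (the pre-C11 spelling of _Thread_local); this is what the MPIU_TLS_SPECIFIER define seen in the later dumps records. A minimal sketch (illustrative only): each thread gets its own copy of the variable.

#include <pthread.h>
#include <stdio.h>

static __thread int tls_counter = 0;   /* one independent copy per thread */

static void *bump(void *arg)
{
    int id = *(int *)arg;
    int i;
    for (i = 0; i < 3; i++)
        tls_counter++;                 /* touches only this thread's copy */
    printf("thread %d: tls_counter = %d\n", id, tls_counter);  /* always 3 */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    int ida = 1, idb = 2;
    pthread_create(&a, NULL, bump, &ida);
    pthread_create(&b, NULL, bump, &idb);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}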
configure:49587: checking sched.h usability
configure:49604: gcc -c -DNDEBUG -DNVALGRIND -O3 [same -I flags as above] conftest.c >&5
configure:49611: $? = 0
configure:49625: result: yes
configure:49629: checking sched.h presence
configure:49644: gcc -E [same -I flags as above] conftest.c
configure:49651: $? = 0
configure:49665: result: yes
configure:49693: checking for sched.h
configure:49702: result: yes
configure:49726: checking for sched_yield
configure:49782: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 [same -I flags as above] conftest.c -lpthread >&5
configure:49789: $? = 0
configure:49811: result: yes
configure:49726: checking for yield
configure:49782: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 [same -I flags as above] conftest.c -lpthread >&5
D:\Users\Haroogan\AppData\Local\Temp\ccgHl88x.o:conftest.c:(.text.startup+0xa): undefined reference to `yield'
collect2.exe: error: ld returned 1 exit status
configure:49789: $? = 1
configure: failed program was:
| [confdefs.h dump repeated, now also defining:]
| #define HAVE_PTHREAD_MUTEX_RECURSIVE_NP 1
| #define HAVE_PTHREAD_MUTEX_RECURSIVE 1
| #define PTHREAD_MUTEX_ERRORCHECK_VALUE PTHREAD_MUTEX_ERRORCHECK
| #define MPIU_THREAD_PACKAGE_NAME MPIU_THREAD_PACKAGE_POSIX
| #define MPIU_TLS_SPECIFIER __thread
| #define HAVE_SCHED_H 1
| #define HAVE_SCHED_YIELD 1
| /* end confdefs.h. */
| [AC_CHECK_FUNCS probe body repeated from the pthread_cleanup_push conftest above, with `yield' substituted]
configure:49811: result: no
configure:49726: checking for usleep
configure:49782: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 [same -I flags as above] conftest.c -lpthread >&5
configure:49789: $? = 0
configure:49811: result: yes
configure:49726: checking for sleep
configure:49782: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 [same -I flags as above] conftest.c -lpthread >&5
configure:49789: $? = 0
configure:49811: result: yes
configure:49726: checking for select
configure:49782: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 [same -I flags as above] conftest.c -lpthread >&5
D:\Users\Haroogan\AppData\Local\Temp\ccAP5z9y.o:conftest.c:(.text.startup+0xa): undefined reference to `select'
collect2.exe: error: ld returned 1 exit status
configure:49789: $? = 1
configure: failed program was:
| [confdefs.h dump repeated, now also defining:]
| #define HAVE_USLEEP 1
| #define HAVE_SLEEP 1
| /* end confdefs.h. */
| [AC_CHECK_FUNCS probe body repeated, with `select' substituted]
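The link failures from here on look alarming but are mostly expected on MinGW. select(), for instance, does exist on Windows, but it is provided by Winsock (winsock2.h, import library ws2_32) rather than by the default libraries the bare AC_CHECK_FUNCS link uses, so the probe cannot resolve the symbol. A minimal sketch of what linking it on MinGW takes (illustrative; build with: gcc test.c -lws2_32):

#include <winsock2.h>
#include <stdio.h>

int main(void)
{
    WSADATA wsa;
    struct timeval tv;
    fd_set readfds;
    int rc;

    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)  /* Winsock needs explicit init */
        return 1;

    FD_ZERO(&readfds);
    tv.tv_sec = 0;
    tv.tv_usec = 0;
    /* With no sockets in any set, Windows select() fails with WSAEINVAL;
       the point here is only that the symbol resolves via -lws2_32. */
    rc = select(0, &readfds, NULL, NULL, &tv);
    printf("select returned %d\n", rc);

    WSACleanup();
    return 0;
}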
configure:49811: result: no
configure:49726: checking for getpid
configure:49782: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 [same -I flags as above] conftest.c -lpthread >&5
configure:49789: $? = 0
configure:49811: result: yes
configure:49835: checking for sched_setaffinity
configure:49891: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 [same -I flags as above] conftest.c -lpthread >&5
D:\Users\Haroogan\AppData\Local\Temp\ccCHVvUe.o:conftest.c:(.text.startup+0xa): undefined reference to `sched_setaffinity'
collect2.exe: error: ld returned 1 exit status
configure:49898: $? = 1
configure: failed program was:
| [confdefs.h dump repeated, now also defining:]
| #define HAVE_GETPID 1
| /* end confdefs.h. */
| [AC_CHECK_FUNCS probe body repeated, with `sched_setaffinity' substituted]
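sched_setaffinity()/sched_getaffinity() are Linux-specific GNU extensions, so "no" should be the correct answer on MinGW and the build would simply proceed without that binding method. For comparison, a minimal Linux-only sketch of the call configure was probing for (illustrative): pin the calling process to CPU 0.

#define _GNU_SOURCE            /* exposes the affinity API in <sched.h> */
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                                  /* allow CPU 0 only */
    if (sched_setaffinity(0, sizeof(set), &set) != 0)  /* pid 0 = this process */
    {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPU 0\n");
    return 0;
}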
configure:49920: result: no
configure:49835: checking for sched_getaffinity
configure:49891: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 [same -I flags as above] conftest.c -lpthread >&5
D:\Users\Haroogan\AppData\Local\Temp\ccU5hKZg.o:conftest.c:(.text.startup+0xa): undefined reference to `sched_getaffinity'
collect2.exe: error: ld returned 1 exit status
configure:49898: $? = 1
configure: failed program was:
| [confdefs.h dump and AC_CHECK_FUNCS probe body repeated, with `sched_getaffinity' substituted]
configure:49920: result: no
configure:49835: checking for bindprocessor
configure:49891: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 [same -I flags as above] conftest.c -lpthread >&5
D:\Users\Haroogan\AppData\Local\Temp\ccaZgOgn.o:conftest.c:(.text.startup+0xa): undefined reference to `bindprocessor'
collect2.exe: error: ld returned 1 exit status
configure:49898: $? = 1
configure: failed program was:
| [confdefs.h dump and AC_CHECK_FUNCS probe body repeated, with `bindprocessor' substituted]
*/ | #if defined __stub_bindprocessor || defined __stub___bindprocessor | choke me | #endif | | int | main () | { | return bindprocessor (); | ; | return 0; | } configure:49920: result: no configure:49835: checking for thread_policy_set configure:49891: gcc -o conftest.exe -DNDEBUG -DNVALGRIND -O3 -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/mpid/ch3/include -I/d/Distributions/mpich2-1.4.1p1/src/mpid/ch3/include -I/d/Distributions/mpich2-1.4.1p1/build/src/mpid/common/datatype -I/d/Distributions/mpich2-1.4.1p1/src/mpid/common/datatype -I/d/Distributions/mpich2-1.4.1p1/build/src/mpid/common/locks -I/d/Distributions/mpich2-1.4.1p1/src/mpid/common/locks -I/d/Distributions/mpich2-1.4.1p1/build/src/mpid/ch3/channels/nemesis/include -I/d/Distributions/mpich2-1.4.1p1/src/mpid/ch3/channels/nemesis/include -I/d/Distributions/mpich2-1.4.1p1/build/src/mpid/ch3/channels/nemesis/nemesis/include -I/d/Distributions/mpich2-1.4.1p1/src/mpid/ch3/channels/nemesis/nemesis/include -I/d/Distributions/mpich2-1.4.1p1/build/src/mpid/ch3/channels/nemesis/nemesis/utils/monitor -I/d/Distributions/mpich2-1.4.1p1/src/mpid/ch3/channels/nemesis/nemesis/utils/monitor -I/d/Distributions/mpich2-1.4.1p1/build/src/util/wrappers -I/d/Distributions/mpich2-1.4.1p1/src/util/wrappers conftest.c -lpthread >&5 D:\Users\Haroogan\AppData\Local\Temp\ccKTaFTM.o:conftest.c:(.text.startup+0xa): undefined reference to `thread_policy_set' collect2.exe: error: ld returned 1 exit status configure:49898: $? = 1 configure: failed program was: | /* confdefs.h. */ | #define PACKAGE_NAME "" | #define PACKAGE_TARNAME "" | #define PACKAGE_VERSION "" | #define PACKAGE_STRING "" | #define PACKAGE_BUGREPORT "" | #define USE_SMP_COLLECTIVES 1 | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL | #define USE_LOGGING MPID_LOGGING_NONE | #define HAVE_RUNTIME_THREADCHECK 1 | #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE | #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL | #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX | #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE | #define HAVE_ROMIO 1 | #define HAVE__FUNC__ /**/ | #define HAVE__FUNCTION__ /**/ | #define HAVE_LONG_LONG 1 | #define STDCALL | #define F77_NAME_LOWER_USCORE 1 | #define STDC_HEADERS 1 | #define HAVE_MPI_F_INIT_WORKS_WITH_C 1 | #define HAVE_FORTRAN_BINDING 1 | #define HAVE_CXX_EXCEPTIONS /**/ | #define HAVE_NAMESPACES /**/ | #define HAVE_NAMESPACE_STD /**/ | #define HAVE_CXX_BINDING 1 | #define FILE_NAMEPUB_BASEDIR "." 
| #define USE_FILE_FOR_NAMEPUB 1 | #define HAVE_NAMEPUB_SERVICE 1 | #define restrict __restrict | #define HAVE_GCC_ATTRIBUTE 1 | #define WORDS_LITTLEENDIAN 1 | #define HAVE_LONG_DOUBLE 1 | #define HAVE_LONG_LONG_INT 1 | #define HAVE_MAX_INTEGER_ALIGNMENT 8 | #define HAVE_MAX_STRUCT_ALIGNMENT 8 | #define HAVE_MAX_FP_ALIGNMENT 16 | #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8 | #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16 | #define SIZEOF_CHAR 1 | #define SIZEOF_UNSIGNED_CHAR 1 | #define SIZEOF_SHORT 2 | #define SIZEOF_UNSIGNED_SHORT 2 | #define SIZEOF_INT 4 | #define SIZEOF_UNSIGNED_INT 4 | #define SIZEOF_LONG 4 | #define SIZEOF_UNSIGNED_LONG 4 | #define SIZEOF_LONG_LONG 8 | #define SIZEOF_UNSIGNED_LONG_LONG 8 | #define SIZEOF_FLOAT 4 | #define SIZEOF_DOUBLE 8 | #define SIZEOF_LONG_DOUBLE 16 | #define SIZEOF_VOID_P 8 | #define STDC_HEADERS 1 | #define HAVE_STDDEF_H 1 | #define SIZEOF_WCHAR_T 2 | #define SIZEOF_FLOAT_INT 8 | #define SIZEOF_DOUBLE_INT 16 | #define SIZEOF_LONG_INT 8 | #define SIZEOF_SHORT_INT 8 | #define SIZEOF_TWO_INT 8 | #define SIZEOF_LONG_DOUBLE_INT 32 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_INT8_T 1 | #define HAVE_INT16_T 1 | #define HAVE_INT32_T 1 | #define HAVE_INT64_T 1 | #define HAVE_UINT8_T 1 | #define HAVE_UINT16_T 1 | #define HAVE_UINT32_T 1 | #define HAVE_UINT64_T 1 | #define HAVE_STDBOOL_H 1 | #define HAVE_COMPLEX_H 1 | #define SIZEOF__BOOL 1 | #define SIZEOF_FLOAT__COMPLEX 8 | #define SIZEOF_DOUBLE__COMPLEX 16 | #define SIZEOF_LONG_DOUBLE__COMPLEX 32 | #define HAVE__BOOL 1 | #define HAVE_FLOAT__COMPLEX 1 | #define HAVE_DOUBLE__COMPLEX 1 | #define HAVE_LONG_DOUBLE__COMPLEX 1 | #define MPIR_REAL4_CTYPE float | #define MPIR_REAL8_CTYPE double | #define MPIR_REAL16_CTYPE long double | #define MPIR_INTEGER1_CTYPE char | #define MPIR_INTEGER2_CTYPE short | #define MPIR_INTEGER4_CTYPE int | #define MPIR_INTEGER8_CTYPE long long | #define SIZEOF_F77_INTEGER 4 | #define SIZEOF_F77_REAL 4 | #define SIZEOF_F77_DOUBLE_PRECISION 8 | #define HAVE_AINT_LARGER_THAN_FINT 1 | #define HAVE_AINT_DIFFERENT_THAN_FINT 1 | #define HAVE_FINT_IS_INT 1 | #define F77_TRUE_VALUE_SET 1 | #define F77_TRUE_VALUE 1 | #define F77_FALSE_VALUE 0 | #define HAVE_STDIO_H 1 | #define HAVE_C_MULTI_ATTR_ALIAS 1 | #define SIZEOF_BOOL 1 | #define MPIR_CXX_BOOL_CTYPE _Bool | #define SIZEOF_COMPLEX 8 | #define SIZEOF_DOUBLECOMPLEX 16 | #define SIZEOF_LONGDOUBLECOMPLEX 32 | #define HAVE_CXX_COMPLEX 1 | #define MPIR_CXX_BOOL_VALUE 0x4c000133 | #define MPIR_CXX_COMPLEX_VALUE 0x4c000834 | #define MPIR_CXX_DOUBLE_COMPLEX_VALUE 0x4c001035 | #define MPIR_CXX_LONG_DOUBLE_COMPLEX_VALUE 0x4c002036 | #define SIZEOF_MPIR_BSEND_DATA_T 0 | #define HAVE_GCC_AND_PENTIUM_ASM 1 | #define USE_ATOMIC_UPDATES /**/ | #define STDC_HEADERS 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STDARG_H 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_STRING_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_LIMITS_H 1 | #define HAVE_STDDEF_H 1 | #define HAVE_ERRNO_H 1 | #define HAVE_SYS_TIME_H 1 | #define HAVE_UNISTD_H 1 | #define HAVE_ASSERT_H 1 | #define HAVE_SYS_PARAM_H 1 | #define HAVE_ALARM 1 | #define HAVE_VSNPRINTF 1 | #define HAVE_VSPRINTF 1 | #define HAVE_STRERROR 1 | #define HAVE_STRNCASECMP 1 | #define HAVE_DECL_STRERROR_R 0 | #define HAVE_SNPRINTF 1 | #define HAVE_QSORT 1 | #define HAVE_VA_COPY 1 | #define HAVE_MACRO_VA_ARGS 1 | #define HAVE_ALLOCA 1 | #define HAVE_STRDUP 1 | #define HAVE_FDOPEN 1 | #define NEEDS_FDOPEN_DECL 1 | #define HAVE_PUTENV 1 | #define HAVE_CLOCK_GETTIME 1 | #define HAVE_CLOCK_GETRES 1 | 
#define HAVE_GETTIMEOFDAY 1 | #define MPIR_Pint long long | #define MPIR_PINT_FMT_DEC_SPEC "%lld" | #define MPIR_Upint unsigned long long | #define MPIR_UPINT_FMT_DEC_SPEC "%llu" | #define MPIU_SIZE_T unsigned long long | #define HAVE_PTHREAD_H 1 | #define HAVE_PTHREAD_MUTEX_RECURSIVE_NP 1 | #define HAVE_PTHREAD_MUTEX_RECURSIVE 1 | #define PTHREAD_MUTEX_ERRORCHECK_VALUE PTHREAD_MUTEX_ERRORCHECK | #define MPIU_THREAD_PACKAGE_NAME MPIU_THREAD_PACKAGE_POSIX | #define MPIU_TLS_SPECIFIER __thread | #define HAVE_SCHED_H 1 | #define HAVE_SCHED_YIELD 1 | #define HAVE_USLEEP 1 | #define HAVE_SLEEP 1 | #define HAVE_GETPID 1 | /* end confdefs.h. */ | /* Define thread_policy_set to an innocuous variant, in case declares thread_policy_set. | For example, HP-UX 11i declares gettimeofday. */ | #define thread_policy_set innocuous_thread_policy_set | | /* System header to define __stub macros and hopefully few prototypes, | which can conflict with char thread_policy_set (); below. | Prefer to if __STDC__ is defined, since | exists even on freestanding compilers. */ | | #ifdef __STDC__ | # include | #else | # include | #endif | | #undef thread_policy_set | | /* Override any GCC internal prototype to avoid an error. | Use char because int might match the return type of a GCC | builtin and then its argument prototype would still apply. */ | #ifdef __cplusplus | extern "C" | #endif | char thread_policy_set (); | /* The GNU C library defines this for functions which it implements | to always fail with ENOSYS. Some functions are actually named | something starting with __ and the normal name is an alias. */ | #if defined __stub_thread_policy_set || defined __stub___thread_policy_set | choke me | #endif | | int | main () | { | return thread_policy_set (); | ; | return 0; | } configure:49920: result: no configure:50460: ===== configuring src/mpid/ch3 ===== configure:50567: executing: /d/Distributions/mpich2-1.4.1p1/src/mpid/ch3/configure '-prefix=D:/Libraries/x64/MinGW-w64/4.8.0/MPICH/1.4.1' '--enable-fast=all,O3' --disable-option-checking configure:50572: error: src/mpid/ch3 configure failed ## ---------------- ## ## Cache variables. 
## ## ---------------- ## ac_cv_build=i686-pc-mingw32 ac_cv_c_bigendian=no ac_cv_c_compiler_gnu=yes ac_cv_c_const=yes ac_cv_c_inline=inline ac_cv_c_int16_t=yes ac_cv_c_int32_t=yes ac_cv_c_int64_t=yes ac_cv_c_int8_t=yes ac_cv_c_restrict=__restrict ac_cv_c_uint16_t=yes ac_cv_c_uint32_t=yes ac_cv_c_uint64_t=yes ac_cv_c_uint8_t=yes ac_cv_c_volatile=yes ac_cv_cxx_bool=yes ac_cv_cxx_compiler_gnu=yes ac_cv_cxx_exceptions=yes ac_cv_cxx_namespace_std=yes ac_cv_cxx_namespaces=yes ac_cv_env_AR_FLAGS_set= ac_cv_env_AR_FLAGS_value= ac_cv_env_CCC_set= ac_cv_env_CCC_value= ac_cv_env_CC_set= ac_cv_env_CC_value= ac_cv_env_CFLAGS_set= ac_cv_env_CFLAGS_value= ac_cv_env_CPPFLAGS_set= ac_cv_env_CPPFLAGS_value= ac_cv_env_CPP_set= ac_cv_env_CPP_value= ac_cv_env_CXXCPP_set= ac_cv_env_CXXCPP_value= ac_cv_env_CXXFLAGS_set= ac_cv_env_CXXFLAGS_value= ac_cv_env_CXX_set= ac_cv_env_CXX_value= ac_cv_env_F77_set= ac_cv_env_F77_value= ac_cv_env_FCFLAGS_set= ac_cv_env_FCFLAGS_value= ac_cv_env_FC_set= ac_cv_env_FC_value= ac_cv_env_FFLAGS_set= ac_cv_env_FFLAGS_value= ac_cv_env_FROM_MPICH2_set= ac_cv_env_FROM_MPICH2_value= ac_cv_env_LDFLAGS_set= ac_cv_env_LDFLAGS_value= ac_cv_env_LIBS_set= ac_cv_env_LIBS_value= ac_cv_env_MPICH2LIB_CFLAGS_set= ac_cv_env_MPICH2LIB_CFLAGS_value= ac_cv_env_MPICH2LIB_CPPFLAGS_set= ac_cv_env_MPICH2LIB_CPPFLAGS_value= ac_cv_env_MPICH2LIB_CXXFLAGS_set= ac_cv_env_MPICH2LIB_CXXFLAGS_value= ac_cv_env_MPICH2LIB_FCFLAGS_set= ac_cv_env_MPICH2LIB_FCFLAGS_value= ac_cv_env_MPICH2LIB_FFLAGS_set= ac_cv_env_MPICH2LIB_FFLAGS_value= ac_cv_env_MPICH2LIB_LDFLAGS_set= ac_cv_env_MPICH2LIB_LDFLAGS_value= ac_cv_env_MPICH2LIB_LIBS_set= ac_cv_env_MPICH2LIB_LIBS_value= ac_cv_env_build_alias_set= ac_cv_env_build_alias_value= ac_cv_env_host_alias_set= ac_cv_env_host_alias_value= ac_cv_env_target_alias_set= ac_cv_env_target_alias_value= ac_cv_exeext=.exe ac_cv_f77_compiler_gnu=yes ac_cv_f77_libs=' -lstdc++'\'' -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib'\'' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. 
-lgfortran -lmingw32 -lmoldname -lmingwex -lmsvcrt -lquadmath -lm -lpthread -ladvapi32 -lshell32 -luser32 -lkernel32 -liconv' ac_cv_fc_compiler_gnu=yes ac_cv_func_alarm=yes ac_cv_func_alloca_works=yes ac_cv_func_bindprocessor=no ac_cv_func_clock_getres=yes ac_cv_func_clock_gettime=yes ac_cv_func_fdopen=yes ac_cv_func_gethrtime=no ac_cv_func_getpid=yes ac_cv_func_gettimeofday=yes ac_cv_func_mach_absolute_time=no ac_cv_func_mkstemp=no ac_cv_func_pthread_cleanup_push=no ac_cv_func_pthread_key_create=yes ac_cv_func_pthread_yield=no ac_cv_func_putenv=yes ac_cv_func_qsort=yes ac_cv_func_sched_getaffinity=no ac_cv_func_sched_setaffinity=no ac_cv_func_sched_yield=yes ac_cv_func_select=no ac_cv_func_setitimer=no ac_cv_func_sleep=yes ac_cv_func_snprintf=yes ac_cv_func_strdup=yes ac_cv_func_strerror=yes ac_cv_func_strerror_r=no ac_cv_func_strerror_r_char_p=no ac_cv_func_strncasecmp=yes ac_cv_func_thread_policy_set=no ac_cv_func_usleep=yes ac_cv_func_vsnprintf=yes ac_cv_func_vsprintf=yes ac_cv_func_yield=no ac_cv_have_decl_strerror_r=no ac_cv_header_assert_h=yes ac_cv_header_complex=yes ac_cv_header_complex_h=yes ac_cv_header_endian_h=no ac_cv_header_errno_h=yes ac_cv_header_inttypes_h=yes ac_cv_header_limits_h=yes ac_cv_header_pthread_h=yes ac_cv_header_sched_h=yes ac_cv_header_stdarg_h=yes ac_cv_header_stdbool_h=yes ac_cv_header_stdc=yes ac_cv_header_stddef_h=yes ac_cv_header_stdint_h=yes ac_cv_header_stdio_h=yes ac_cv_header_stdlib_h=yes ac_cv_header_string_h=yes ac_cv_header_sys_bitypes_h=no ac_cv_header_sys_param_h=yes ac_cv_header_sys_socket_h=no ac_cv_header_sys_time_h=yes ac_cv_header_sys_types_h=yes ac_cv_header_sys_uio_h=no ac_cv_header_unistd_h=yes ac_cv_host=i686-pc-mingw32 ac_cv_lib_pthread_pthread_key_create=yes ac_cv_objext=o ac_cv_path_BASH_SHELL=/bin/bash ac_cv_path_DOCTEXT=false ac_cv_path_EGREP='/bin/grep -E' ac_cv_path_FGREP='/bin/grep -F' ac_cv_path_GREP=/bin/grep ac_cv_path_NM_G=/d/Toolchains/x64/MinGW-w64/4.8.0/bin/nm ac_cv_path_PERL=/bin/perl ac_cv_path_install='/bin/install -c' ac_cv_prog_AR=ar ac_cv_prog_CPP='gcc -E' ac_cv_prog_CXX=c++ ac_cv_prog_CXXCPP='c++ -E' ac_cv_prog_MAKE=make ac_cv_prog_RANLIB=ranlib ac_cv_prog_ac_ct_CC=gcc ac_cv_prog_ac_ct_F77=gfortran ac_cv_prog_ac_ct_FC=gfortran ac_cv_prog_cc_c89= ac_cv_prog_cc_g=yes ac_cv_prog_cxx_g=yes ac_cv_prog_f77_g=yes ac_cv_prog_f77_v=-v ac_cv_prog_fc_g=yes ac_cv_prog_install_breaks_libs=no ac_cv_search_clock_getres='none required' ac_cv_search_clock_gettime='none required' ac_cv_sizeof_Complex=8 ac_cv_sizeof_DoubleComplex=16 ac_cv_sizeof_LongDoubleComplex=32 ac_cv_sizeof_MPIR_Bsend_data_t=0 ac_cv_sizeof__Bool=1 ac_cv_sizeof_bool=1 ac_cv_sizeof_char=1 ac_cv_sizeof_double=8 ac_cv_sizeof_double__Complex=16 ac_cv_sizeof_double_int=16 ac_cv_sizeof_float=4 ac_cv_sizeof_float__Complex=8 ac_cv_sizeof_float_int=8 ac_cv_sizeof_int=4 ac_cv_sizeof_long=4 ac_cv_sizeof_long_double=16 ac_cv_sizeof_long_double__Complex=32 ac_cv_sizeof_long_double_int=32 ac_cv_sizeof_long_int=8 ac_cv_sizeof_long_long=8 ac_cv_sizeof_short=2 ac_cv_sizeof_short_int=8 ac_cv_sizeof_two_int=8 ac_cv_sizeof_unsigned_char=1 ac_cv_sizeof_unsigned_int=4 ac_cv_sizeof_unsigned_long=4 ac_cv_sizeof_unsigned_long_long=8 ac_cv_sizeof_unsigned_short=2 ac_cv_sizeof_void_p=8 ac_cv_sizeof_wchar_t=2 ac_cv_tls=__thread ac_cv_type__Bool=yes ac_cv_type_double__Complex=yes ac_cv_type_float__Complex=yes ac_cv_type_long_double__Complex=yes ac_cv_type_size_t=yes ac_cv_working_alloca_h=no lac_cv_use_atomic_updates=yes pac_cv_attr_weak=yes pac_cv_attr_weak_alias=no 
pac_cv_attr_weak_import=yes pac_cv_c_char_p_is_byte=yes pac_cv_c_double_alignment_exception=no pac_cv_c_double_pos_align=no pac_cv_c_fp_align_nr=16 pac_cv_c_llint_pos_align=no pac_cv_c_max_double_fp_align=eight pac_cv_c_max_fp_align=sixteen pac_cv_c_max_integer_align=eight pac_cv_c_max_longdouble_fp_align=sixteen pac_cv_c_struct_align_nr=8 pac_cv_cc_has___func__=yes pac_cv_cxx_builds_exe=yes pac_cv_cxx_compiles_string=yes pac_cv_cxx_has_iostream=yes pac_cv_cxx_has_math=no pac_cv_f77_accepts_F=yes pac_cv_f77_flibs_valid=unknown pac_cv_f77_sizeof_double_precision=8 pac_cv_f77_sizeof_integer=4 pac_cv_f77_sizeof_real=4 pac_cv_fc_accepts_F90=yes pac_cv_fc_and_f77=yes pac_cv_fc_module_case=lower pac_cv_fc_module_ext=mod pac_cv_fc_module_incflag=-I pac_cv_fc_vendor=gnu pac_cv_fort90_real8=yes pac_cv_fort_integer16=yes pac_cv_fort_integer1=yes pac_cv_fort_integer2=yes pac_cv_fort_integer4=yes pac_cv_fort_integer8=yes pac_cv_fort_real16=yes pac_cv_fort_real4=yes pac_cv_fort_real8=yes pac_cv_func_decl_fdopen=yes pac_cv_func_decl_putenv=no pac_cv_func_decl_snprintf=no pac_cv_func_decl_strdup=no pac_cv_func_decl_vsnprintf=no pac_cv_func_pthread_cleanup_push=no pac_cv_func_va_copy=yes pac_cv_gnu_attr_format=yes pac_cv_gnu_attr_pure=yes pac_cv_has_pthread_mutex_errorcheck=yes pac_cv_has_pthread_mutex_errorcheck_np=yes pac_cv_has_pthread_mutex_recursive=yes pac_cv_has_pthread_mutex_recursive_np=yes pac_cv_have__func__=yes pac_cv_have__function__=yes pac_cv_have_cap__func__=no pac_cv_have_long_double=yes pac_cv_have_long_long=yes pac_cv_int32_t_alignment=yes pac_cv_int64_t_alignment=yes pac_cv_mkdir_p=yes pac_cv_my_conf_dir=/d/Distributions/mpich2-1.4.1p1/build pac_cv_pointers_have_int_alignment=yes pac_cv_posix_clock_realtime=yes pac_cv_prog_c_unaligned_doubles=yes pac_cv_prog_c_weak_symbols=no pac_cv_prog_f77_and_c_stdio_libs=none pac_cv_prog_f77_exclaim_comments=yes pac_cv_prog_f77_has_incdir=-I pac_cv_prog_f77_library_dir_flag=-L pac_cv_prog_f77_name_mangle='lower uscore' pac_cv_prog_f77_true_false_value='1 0' pac_cv_prog_fc_and_c_stdio_libs=none pac_cv_prog_fc_cross=no pac_cv_prog_fc_int_kind_16=8 pac_cv_prog_fc_works=yes pac_cv_prog_make_allows_comments=yes pac_cv_prog_make_found_clock_skew=no pac_cv_prog_make_include=yes pac_cv_prog_make_set_cflags=yes pac_cv_prog_make_vpath=VPATH pac_cv_sizeof_mpi_status=20 pac_cv_struct_timespec_defined=yes ## ----------------- ## ## Output variables. 
## ## ----------------- ## ABIVERSION='3:3' ADDRESS_KIND='8' ALLOCA='' AR='ar' AR_FLAGS='cr' BASH_SHELL='/bin/bash' BSEND_OVERHEAD='0' BUILD_BASH_SCRIPTS='yes' BUILD_DLLS='no' BUILD_TVDLL='no' CC='gcc' CC_SHL='true' CC_SHL_DBG='' CFLAGS=' -DNDEBUG -DNVALGRIND -O3' CMB_1INT_ALIGNMENT='__attribute__((aligned(4)))' CMB_STATUS_ALIGNMENT='__attribute__((aligned(32)))' CONFIGURE_ARGS_CLEAN='-prefix=D:/Libraries/x64/MinGW-w64/4.8.0/MPICH/1.4.1 --enable-fast=all,O3' CONFIGURE_ARGUMENTS=' '\''-prefix=D:/Libraries/x64/MinGW-w64/4.8.0/MPICH/1.4.1'\'' '\''--enable-fast=all,O3'\''' CPP='gcc -E' CPPFLAGS=' -I/d/Distributions/mpich2-1.4.1p1/build/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/mpl/include -I/d/Distributions/mpich2-1.4.1p1/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/openpa/src -I/d/Distributions/mpich2-1.4.1p1/build/src/mpid/ch3/include -I/d/Distributions/mpich2-1.4.1p1/src/mpid/ch3/include -I/d/Distributions/mpich2-1.4.1p1/build/src/mpid/common/datatype -I/d/Distributions/mpich2-1.4.1p1/src/mpid/common/datatype -I/d/Distributions/mpich2-1.4.1p1/build/src/mpid/common/locks -I/d/Distributions/mpich2-1.4.1p1/src/mpid/common/locks -I/d/Distributions/mpich2-1.4.1p1/build/src/mpid/ch3/channels/nemesis/include -I/d/Distributions/mpich2-1.4.1p1/src/mpid/ch3/channels/nemesis/include -I/d/Distributions/mpich2-1.4.1p1/build/src/mpid/ch3/channels/nemesis/nemesis/include -I/d/Distributions/mpich2-1.4.1p1/src/mpid/ch3/channels/nemesis/nemesis/include -I/d/Distributions/mpich2-1.4.1p1/build/src/mpid/ch3/channels/nemesis/nemesis/utils/monitor -I/d/Distributions/mpich2-1.4.1p1/src/mpid/ch3/channels/nemesis/nemesis/utils/monitor -I/d/Distributions/mpich2-1.4.1p1/build/src/util/wrappers -I/d/Distributions/mpich2-1.4.1p1/src/util/wrappers' CREATESHLIB='false' CXX='c++' CXXCPP='c++ -E' CXXFLAGS=' -DNDEBUG -DNVALGRIND -O3' CXX_DEFS=' -DHAVE_CXX_IOSTREAM -DHAVE_NAMESPACE_STD' CXX_LINKPATH_SHL='' CXX_SHL='false' C_LINKPATH_SHL='' C_LINK_SHL='true' C_LINK_SHL_DBG='' DBG_SHLIB_TYPE='' DEFS='' DEVICE='ch3:nemesis' DEVICE_ARGS='' DEVICE_NAME='ch3' DLLIMPORT='' DOCTEXT='false' DOCTEXTSTYLE='' ECHO_C='' ECHO_N='-n' ECHO_T='' EGREP='/bin/grep -E' ENABLE_SHLIB='none' EXEEXT='.exe' EXTERNAL_SRC_DIRS=' src/mpl src/openpa' EXTRA_STATUS_DECL='' F77='gfortran' F77CPP='' F77_COMPLEX16='1275072554' F77_COMPLEX32='1275076652' F77_COMPLEX8='1275070504' F77_INCDIR='-I' F77_INTEGER16='MPI_DATATYPE_NULL' F77_INTEGER1='1275068717' F77_INTEGER2='1275068975' F77_INTEGER4='1275069488' F77_INTEGER8='1275070513' F77_LIBDIR_LEADER='-L' F77_LINKPATH_SHL='' F77_NAME_MANGLE='F77_NAME_LOWER_USCORE' F77_OTHER_LIBS='' F77_REAL16='1275072555' F77_REAL4='1275069479' F77_REAL8='1275070505' F77_SHL='false' FC='gfortran' FCCPP='' FCEXT='f90' FCFLAGS=' -O3' FCINC='-I' FCINCFLAG='-I' FCMODEXT='mod' FCMODINCFLAG='-I' FCMODINCSPEC='' FC_LINKPATH_SHL='' FC_OTHER_LIBS='' FC_SHL='' FC_WORK_FILES_ARG='' FFLAGS=' -O3' FGREP='/bin/grep -F' FILE='' FINCLUDES='-I/d/Distributions/mpich2-1.4.1p1/build/src' FLIBS=' -L/temp/x64-480-posix-seh-r2/libs/lib -L/temp/mingw-prereq/x64-zlib/lib -L/temp/mingw-prereq/x86_64-w64-mingw32-static/lib -L/temp/x64-480-posix-seh-r2/mingw64/opt/lib'\'' -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0 -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib/../lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../lib 
-Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../../../x86_64-w64-mingw32/lib -Ld:/toolchains/x64/mingw-w64/4.8.0/bin/../lib/gcc/x86_64-w64-mingw32/4.8.0/../../.. -lgfortran -lmingw32 -lmoldname -lmingwex -lmsvcrt -lquadmath -lm -lpthread -ladvapi32 -lshell32 -luser32 -lkernel32 -liconv' FORTRAN_BINDING='1' FORTRAN_MPI_OFFSET='' FROM_MPICH2='yes' FWRAPNAME='fmpich' GCC='yes' GNUCXX_MINORVERSION='8' GNUCXX_VERSION='4' GREP='/bin/grep' HAVE_CXX_EXCEPTIONS='1' HAVE_ROMIO='#include "mpio.h"' INCLUDE_MPICXX_H='#include "mpicxx.h"' INCLUDE_MPIDDEFS_H='/* ... no device specific definitions ... */' INSTALL_DATA='${INSTALL} -m 644' INSTALL_PROGRAM='${INSTALL}' INSTALL_SCRIPT='${INSTALL}' KILLALL='true' LDFLAGS=' ' LDFLAGS_DEPS='' LIBOBJS='' LIBS='-lpthread ' LIBTOOL='' LIB_DEPS='' LTLIBOBJS='' MAKE='make' MAKE_DEPEND_C='gcc -MM' MANY_PM='no' MKDIR_P='mkdir -p' MPIBASEMODNAME='mpi_base' MPICC='' MPICH2LIB_CFLAGS='' MPICH2LIB_CPPFLAGS='' MPICH2LIB_CXXFLAGS='' MPICH2LIB_FCFLAGS='' MPICH2LIB_FFLAGS='' MPICH2LIB_LDFLAGS='' MPICH2LIB_LIBS='' MPICH2_NUMVERSION='10401301' MPICH2_RELEASE_DATE='Thu Sep 1 13:53:02 CDT 2011' MPICH2_VERSION='1.4.1p1' MPICH_TIMER_KIND='USE_CLOCK_GETTIME' MPICONSTMODNAME='mpi_constants' MPICXX='' MPICXXLIBNAME='mpichcxx' MPID_TIMER_TYPE='struct timespec' MPIF77='' MPIFC='' MPIFLIBNAME='mpich' MPIFPMPI=',PMPI_WTIME,PMPI_WTICK' MPILIBNAME='mpich' MPIMODNAME='mpi' MPIR_CXX_BOOL='0x4c000133' MPIR_CXX_COMPLEX='0x4c000834' MPIR_CXX_DOUBLE_COMPLEX='0x4c001035' MPIR_CXX_LONG_DOUBLE_COMPLEX='0x4c002036' MPISIZEOFMODNAME='mpi_sizeofs' MPIU_DLL_SPEC_DEF='' MPIU_THREAD_LIB_NAME='mpich' MPI_2COMPLEX='1275072548' MPI_2DOUBLE_COMPLEX='1275076645' MPI_2DOUBLE_PRECISION='1275072547' MPI_2INT='0x4c000816' MPI_2INTEGER='1275070496' MPI_2REAL='1275070497' MPI_AINT='long long' MPI_AINT_DATATYPE='0x4c000843' MPI_AINT_FMT_DEC_SPEC='%lld' MPI_AINT_FMT_HEX_SPEC='%llx' MPI_BYTE='0x4c00010d' MPI_CHAR='0x4c000101' MPI_CHARACTER='1275068698' MPI_COMPLEX16='0x4c00102a' MPI_COMPLEX32='0x4c00202c' MPI_COMPLEX8='0x4c000828' MPI_COMPLEX='1275070494' MPI_C_BOOL='0x4c00013f' MPI_C_DOUBLE_COMPLEX='0x4c001041' MPI_C_FLOAT_COMPLEX='0x4c000840' MPI_C_LONG_DOUBLE_COMPLEX='0x4c002042' MPI_DOUBLE='0x4c00080b' MPI_DOUBLE_COMPLEX='1275072546' MPI_DOUBLE_INT='0x8c000001' MPI_DOUBLE_PRECISION='1275070495' MPI_F77_2INT='1275070486' MPI_F77_AINT='1275070531' MPI_F77_BYTE='1275068685' MPI_F77_CHAR='1275068673' MPI_F77_C_BOOL='1275068735' MPI_F77_C_COMPLEX='1275070528' MPI_F77_C_DOUBLE_COMPLEX='1275072577' MPI_F77_C_FLOAT_COMPLEX='1275070528' MPI_F77_C_LONG_DOUBLE_COMPLEX='1275076674' MPI_F77_DOUBLE='1275070475' MPI_F77_DOUBLE_INT='-1946157055' MPI_F77_FLOAT='1275069450' MPI_F77_FLOAT_INT='-1946157056' MPI_F77_INT16_T='1275068984' MPI_F77_INT32_T='1275069497' MPI_F77_INT64_T='1275070522' MPI_F77_INT8_T='1275068727' MPI_F77_INT='1275069445' MPI_F77_LB='1275068432' MPI_F77_LONG='1275069447' MPI_F77_LONG_DOUBLE='1275072524' MPI_F77_LONG_DOUBLE_INT='-1946157052' MPI_F77_LONG_INT='-1946157054' MPI_F77_LONG_LONG='1275070473' MPI_F77_LONG_LONG_INT='1275070473' MPI_F77_OFFSET='MPI_DATATYPE_NULL' MPI_F77_PACKED='1275068687' MPI_F77_SHORT='1275068931' MPI_F77_SHORT_INT='-1946157053' MPI_F77_SIGNED_CHAR='1275068696' MPI_F77_UB='1275068433' MPI_F77_UINT16_T='1275068988' MPI_F77_UINT32_T='1275069501' MPI_F77_UINT64_T='1275070526' MPI_F77_UINT8_T='1275068731' MPI_F77_UNSIGNED='1275069446' MPI_F77_UNSIGNED_CHAR='1275068674' MPI_F77_UNSIGNED_LONG='1275069448' MPI_F77_UNSIGNED_LONG_LONG='1275070489' 
MPI_F77_UNSIGNED_SHORT='1275068932' MPI_F77_WCHAR='1275068942' MPI_FINT='int' MPI_FLOAT='0x4c00040a' MPI_FLOAT_INT='0x8c000000' MPI_INT16_T='0x4c000238' MPI_INT32_T='0x4c000439' MPI_INT64_T='0x4c00083a' MPI_INT8_T='0x4c000137' MPI_INT='0x4c000405' MPI_INTEGER16='MPI_DATATYPE_NULL' MPI_INTEGER1='0x4c00012d' MPI_INTEGER2='0x4c00022f' MPI_INTEGER4='0x4c000430' MPI_INTEGER8='0x4c000831' MPI_INTEGER='1275069467' MPI_LB='0x4c000010' MPI_LOGICAL='1275069469' MPI_LONG='0x4c000407' MPI_LONG_DOUBLE='0x4c00100c' MPI_LONG_DOUBLE_INT='0x8c000004' MPI_LONG_INT='0x8c000002' MPI_LONG_LONG='0x4c000809' MPI_MAX_PROCESSOR_NAME='' MPI_OFFSET='' MPI_OFFSET_DATATYPE='' MPI_OFFSET_TYPEDEF='' MPI_PACKED='0x4c00010f' MPI_REAL16='0x4c00102b' MPI_REAL4='0x4c000427' MPI_REAL8='0x4c000829' MPI_REAL='1275069468' MPI_SHORT='0x4c000203' MPI_SHORT_INT='0x8c000003' MPI_SIGNED_CHAR='0x4c000118' MPI_STATUS_SIZE='5' MPI_UB='0x4c000011' MPI_UINT16_T='0x4c00023c' MPI_UINT32_T='0x4c00043d' MPI_UINT64_T='0x4c00083e' MPI_UINT8_T='0x4c00013b' MPI_UNSIGNED_CHAR='0x4c000102' MPI_UNSIGNED_INT='0x4c000406' MPI_UNSIGNED_LONG='0x4c000408' MPI_UNSIGNED_LONG_LONG='0x4c000819' MPI_UNSIGNED_SHORT='0x4c000204' MPI_WCHAR='0x4c00020e' NEEDSPLIB='yes' NO_WEAK_SYM='build_proflib' NO_WEAK_SYM_TARGET='build_proflib' OBJEXT='o' OFFSET_KIND='8' PACKAGE_BUGREPORT='' PACKAGE_NAME='' PACKAGE_STRING='' PACKAGE_TARNAME='' PACKAGE_VERSION='' PATH_SEPARATOR=':' PERL='/bin/perl' PMPIFLIBNAME='pmpich' PMPILIBNAME='pmpich' PROFILE_DEF_MPI='-DMPICH_MPI_FROM_PMPI' RANLIB='ranlib' RANLIB_AFTER_INSTALL='no' REQD='' REQI1='' REQI2='' REQI8='' SET_CFLAGS='CFLAGS=' SET_MAKE='MAKE=make' SHELL='/bin/sh' SHLIB_EXT='unknown' SHLIB_FROM_LO='no' SHLIB_INSTALL='$(INSTALL_PROGRAM)' SIZEOF_FC_CHARACTER='1' SIZEOF_FC_DOUBLE_PRECISION='8' SIZEOF_FC_INTEGER='4' SIZEOF_FC_REAL='4' SIZEOF_MPI_STATUS='20' USER_CFLAGS='' USER_CPPFLAGS='' USER_CXXFLAGS='' USER_FCFLAGS='' USER_FFLAGS='' USER_LDFLAGS='' USER_LIBS='' VPATH='VPATH=.:${srcdir}' WRAPPER_CFLAGS=' ' WRAPPER_CPPFLAGS=' ' WRAPPER_CXXFLAGS=' ' WRAPPER_FCFLAGS=' ' WRAPPER_FFLAGS=' ' WRAPPER_LDFLAGS='' WRAPPER_LIBS='-lopa -lmpl ' WTIME_DOUBLE_TYPE='REAL*8' XARGS_NODATA_OPT='-r' ac_ct_CC='gcc' ac_ct_CXX='' ac_ct_F77='gfortran' ac_ct_FC='gfortran' bindings=' f77 f90 cxx' bindings_dirs=' src/binding/f77 src/binding/f90 src/binding/cxx' bindir='${exec_prefix}/bin' build='i686-pc-mingw32' build_alias='' build_cpu='i686' build_os='mingw32' build_vendor='pc' datadir='${datarootdir}' datarootdir='${prefix}/share' debugger_dir='' device_name='ch3' docdir='${datarootdir}/doc/${PACKAGE}' dvidir='${docdir}' exec_prefix='NONE' host='i686-pc-mingw32' host_alias='' host_cpu='i686' host_os='mingw32' host_vendor='pc' htmldir='${docdir}' includedir='${prefix}/include' infodir='${datarootdir}/info' libdir='${exec_prefix}/lib' libexecdir='${exec_prefix}/libexec' localedir='${datarootdir}/locale' localstatedir='${prefix}/var' logging_dir='' logging_name='none' logging_subdirs='' mandir='${datarootdir}/man' master_top_builddir='/d/Distributions/mpich2-1.4.1p1/build' master_top_srcdir='/d/Distributions/mpich2-1.4.1p1' modincdir='${prefix}/include' mpe_dir='mpe2' nameserv_name='file' oldincludedir='/usr/include' opadir='openpa' other_install_dirs=' src/mpl src/openpa src/pm/hydra src/mpe2' other_pm_names='' pac_prog='' pdfdir='${docdir}' pm_name='hydra' pmi_name='simple' prefix='D:/Libraries/x64/MinGW-w64/4.8.0/MPICH/1.4.1' program_transform_name='s,x,x,' psdir='${docdir}' romio_dir='romio' sbindir='${exec_prefix}/sbin' sharedstatedir='${prefix}/com' 
subdirs='' subsystems=' src/mpi/romio src/pmi/simple src/pm/hydra src/mpe2' sysconfdir='${prefix}/etc' target_alias='' ## ----------- ## ## confdefs.h. ## ## ----------- ## #define PACKAGE_NAME "" #define PACKAGE_TARNAME "" #define PACKAGE_VERSION "" #define PACKAGE_STRING "" #define PACKAGE_BUGREPORT "" #define USE_SMP_COLLECTIVES 1 #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL #define USE_LOGGING MPID_LOGGING_NONE #define HAVE_RUNTIME_THREADCHECK 1 #define MPICH_THREAD_LEVEL MPI_THREAD_MULTIPLE #define MPIU_THREAD_GRANULARITY MPIU_THREAD_GRANULARITY_GLOBAL #define MPIU_HANDLE_ALLOCATION_METHOD MPIU_HANDLE_ALLOCATION_MUTEX #define MPIU_THREAD_REFCOUNT MPIU_REFCOUNT_NONE #define HAVE_ROMIO 1 #define HAVE__FUNC__ /**/ #define HAVE__FUNCTION__ /**/ #define HAVE_LONG_LONG 1 #define STDCALL #define F77_NAME_LOWER_USCORE 1 #define STDC_HEADERS 1 #define HAVE_MPI_F_INIT_WORKS_WITH_C 1 #define HAVE_FORTRAN_BINDING 1 #define HAVE_CXX_EXCEPTIONS /**/ #define HAVE_NAMESPACES /**/ #define HAVE_NAMESPACE_STD /**/ #define HAVE_CXX_BINDING 1 #define FILE_NAMEPUB_BASEDIR "." #define USE_FILE_FOR_NAMEPUB 1 #define HAVE_NAMEPUB_SERVICE 1 #define restrict __restrict #define HAVE_GCC_ATTRIBUTE 1 #define WORDS_LITTLEENDIAN 1 #define HAVE_LONG_DOUBLE 1 #define HAVE_LONG_LONG_INT 1 #define HAVE_MAX_INTEGER_ALIGNMENT 8 #define HAVE_MAX_STRUCT_ALIGNMENT 8 #define HAVE_MAX_FP_ALIGNMENT 16 #define HAVE_MAX_DOUBLE_FP_ALIGNMENT 8 #define HAVE_MAX_LONG_DOUBLE_FP_ALIGNMENT 16 #define SIZEOF_CHAR 1 #define SIZEOF_UNSIGNED_CHAR 1 #define SIZEOF_SHORT 2 #define SIZEOF_UNSIGNED_SHORT 2 #define SIZEOF_INT 4 #define SIZEOF_UNSIGNED_INT 4 #define SIZEOF_LONG 4 #define SIZEOF_UNSIGNED_LONG 4 #define SIZEOF_LONG_LONG 8 #define SIZEOF_UNSIGNED_LONG_LONG 8 #define SIZEOF_FLOAT 4 #define SIZEOF_DOUBLE 8 #define SIZEOF_LONG_DOUBLE 16 #define SIZEOF_VOID_P 8 #define STDC_HEADERS 1 #define HAVE_STDDEF_H 1 #define SIZEOF_WCHAR_T 2 #define SIZEOF_FLOAT_INT 8 #define SIZEOF_DOUBLE_INT 16 #define SIZEOF_LONG_INT 8 #define SIZEOF_SHORT_INT 8 #define SIZEOF_TWO_INT 8 #define SIZEOF_LONG_DOUBLE_INT 32 #define HAVE_INTTYPES_H 1 #define HAVE_STDINT_H 1 #define HAVE_INT8_T 1 #define HAVE_INT16_T 1 #define HAVE_INT32_T 1 #define HAVE_INT64_T 1 #define HAVE_UINT8_T 1 #define HAVE_UINT16_T 1 #define HAVE_UINT32_T 1 #define HAVE_UINT64_T 1 #define HAVE_STDBOOL_H 1 #define HAVE_COMPLEX_H 1 #define SIZEOF__BOOL 1 #define SIZEOF_FLOAT__COMPLEX 8 #define SIZEOF_DOUBLE__COMPLEX 16 #define SIZEOF_LONG_DOUBLE__COMPLEX 32 #define HAVE__BOOL 1 #define HAVE_FLOAT__COMPLEX 1 #define HAVE_DOUBLE__COMPLEX 1 #define HAVE_LONG_DOUBLE__COMPLEX 1 #define MPIR_REAL4_CTYPE float #define MPIR_REAL8_CTYPE double #define MPIR_REAL16_CTYPE long double #define MPIR_INTEGER1_CTYPE char #define MPIR_INTEGER2_CTYPE short #define MPIR_INTEGER4_CTYPE int #define MPIR_INTEGER8_CTYPE long long #define SIZEOF_F77_INTEGER 4 #define SIZEOF_F77_REAL 4 #define SIZEOF_F77_DOUBLE_PRECISION 8 #define HAVE_AINT_LARGER_THAN_FINT 1 #define HAVE_AINT_DIFFERENT_THAN_FINT 1 #define HAVE_FINT_IS_INT 1 #define F77_TRUE_VALUE_SET 1 #define F77_TRUE_VALUE 1 #define F77_FALSE_VALUE 0 #define HAVE_STDIO_H 1 #define HAVE_C_MULTI_ATTR_ALIAS 1 #define SIZEOF_BOOL 1 #define MPIR_CXX_BOOL_CTYPE _Bool #define SIZEOF_COMPLEX 8 #define SIZEOF_DOUBLECOMPLEX 16 #define SIZEOF_LONGDOUBLECOMPLEX 32 #define HAVE_CXX_COMPLEX 1 #define MPIR_CXX_BOOL_VALUE 0x4c000133 #define MPIR_CXX_COMPLEX_VALUE 0x4c000834 #define MPIR_CXX_DOUBLE_COMPLEX_VALUE 0x4c001035 #define MPIR_CXX_LONG_DOUBLE_COMPLEX_VALUE 0x4c002036 
#define SIZEOF_MPIR_BSEND_DATA_T 0 #define HAVE_GCC_AND_PENTIUM_ASM 1 #define USE_ATOMIC_UPDATES /**/ #define STDC_HEADERS 1 #define HAVE_STDLIB_H 1 #define HAVE_STDARG_H 1 #define HAVE_SYS_TYPES_H 1 #define HAVE_STRING_H 1 #define HAVE_INTTYPES_H 1 #define HAVE_LIMITS_H 1 #define HAVE_STDDEF_H 1 #define HAVE_ERRNO_H 1 #define HAVE_SYS_TIME_H 1 #define HAVE_UNISTD_H 1 #define HAVE_ASSERT_H 1 #define HAVE_SYS_PARAM_H 1 #define HAVE_ALARM 1 #define HAVE_VSNPRINTF 1 #define HAVE_VSPRINTF 1 #define HAVE_STRERROR 1 #define HAVE_STRNCASECMP 1 #define HAVE_DECL_STRERROR_R 0 #define HAVE_SNPRINTF 1 #define HAVE_QSORT 1 #define HAVE_VA_COPY 1 #define HAVE_MACRO_VA_ARGS 1 #define HAVE_ALLOCA 1 #define HAVE_STRDUP 1 #define HAVE_FDOPEN 1 #define NEEDS_FDOPEN_DECL 1 #define HAVE_PUTENV 1 #define HAVE_CLOCK_GETTIME 1 #define HAVE_CLOCK_GETRES 1 #define HAVE_GETTIMEOFDAY 1 #define MPIR_Pint long long #define MPIR_PINT_FMT_DEC_SPEC "%lld" #define MPIR_Upint unsigned long long #define MPIR_UPINT_FMT_DEC_SPEC "%llu" #define MPIU_SIZE_T unsigned long long #define HAVE_PTHREAD_H 1 #define HAVE_PTHREAD_MUTEX_RECURSIVE_NP 1 #define HAVE_PTHREAD_MUTEX_RECURSIVE 1 #define PTHREAD_MUTEX_ERRORCHECK_VALUE PTHREAD_MUTEX_ERRORCHECK #define MPIU_THREAD_PACKAGE_NAME MPIU_THREAD_PACKAGE_POSIX #define MPIU_TLS_SPECIFIER __thread #define HAVE_SCHED_H 1 #define HAVE_SCHED_YIELD 1 #define HAVE_USLEEP 1 #define HAVE_SLEEP 1 #define HAVE_GETPID 1 configure: exit 1

From sniu at hawk.iit.edu Mon Jun 17 13:48:57 2013
From: sniu at hawk.iit.edu (Sufeng Niu)
Date: Mon, 17 Jun 2013 13:48:57 -0500
Subject: [mpich-discuss] MPI server setup issue
In-Reply-To: <51BDBA77.2000704@mcs.anl.gov>
References: <51BDBA77.2000704@mcs.anl.gov>
Message-ID:
Hi Pavan, Thanks a lot! You are correct; I am going to re-install Linux on the servers. Best, Sufeng
On Sun, Jun 16, 2013 at 8:15 AM, Pavan Balaji wrote:
> Hi Sufeng,
>
> On 06/14/2013 04:35 PM, Sufeng Niu wrote:
>> 1. when I run a simple MPI hello world on multiple nodes, (I already
>> installed mpich3 library on master node, mount the nfs, shared the
>> executable file and mpi library, set slave node to be keyless ssh), my
>> program stopped there saying:
>> bash: /mnt/mpi/mpich-install/bin/hydra_pmi_proxy: /lib/ld-linux.so.2:
>> bad ELF interpreter: No such file or directory.
>
> 1. Did you make sure /mnt/mpi/mpich-install/bin/hydra_pmi_proxy is
> available on each node?
>
> 2. Did you also make sure all libraries it is linked to are available on
> each node?
>
> You can check these libraries using "ldd /mnt/mpi/mpich-install/bin/hydra_pmi_proxy"
>
> -- Pavan
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji
-- Best Regards, Sufeng Niu ECASP lab, ECE department, Illinois Institute of Technology Tel: 312-731-7219
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From matthieu.dorier at irisa.fr Mon Jun 17 14:54:43 2013
From: matthieu.dorier at irisa.fr (Matthieu Dorier)
Date: Mon, 17 Jun 2013 21:54:43 +0200 (CEST)
Subject: [mpich-discuss] difference between ADIOI_xxx_ReadDone and ADIOI_xxx_ReadComplete
In-Reply-To: <231893761.7240589.1371498781935.JavaMail.root@irisa.fr>
Message-ID: <1906705230.7240748.1371498882984.JavaMail.root@irisa.fr>
Hi, I'm trying to implement an ADIO backend and I'd like to know what the difference is between ADIOI_xxx_ReadDone and ADIOI_xxx_ReadComplete (and respectively for Write)? When are these functions called?
(I'm working with mpich 3.0.4) More generally, is there documentation somewhere explaining when and how each of the ADIOI_ functions is called? Thank you, Matthieu Dorier PhD student at ENS Cachan Brittany and IRISA http://people.irisa.fr/Matthieu.Dorier
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From thakur at mcs.anl.gov Mon Jun 17 15:22:18 2013
From: thakur at mcs.anl.gov (Rajeev Thakur)
Date: Mon, 17 Jun 2013 15:22:18 -0500
Subject: Re: [mpich-discuss] difference between ADIOI_xxx_ReadDone and ADIOI_xxx_ReadComplete
In-Reply-To: <1906705230.7240748.1371498882984.JavaMail.root@irisa.fr>
References: <1906705230.7240748.1371498882984.JavaMail.root@irisa.fr>
Message-ID:
If you are looking for 17-year-old documentation, see below. But it has the answer to your question :-).
Rajeev Thakur, William Gropp, and Ewing Lusk, "An Abstract-Device Interface for Implementing Portable Parallel-I/O Interfaces," in Proc. of the 6th Symposium on the Frontiers of Massively Parallel Computation, October 1996, pp. 180-187. http://www.mcs.anl.gov/~thakur/papers/adio.pdf
On Jun 17, 2013, at 2:54 PM, Matthieu Dorier wrote:
> Hi,
> I'm trying to implement an ADIO backend and I'd like to know what the difference is between ADIOI_xxx_ReadDone and ADIOI_xxx_ReadComplete (and respectively for Write)? When are these functions called? (I'm working with mpich 3.0.4)
> More generally, is there documentation somewhere explaining when and how each of the ADIOI_ functions is called?
> Thank you,
> Matthieu Dorier
> PhD student at ENS Cachan Brittany and IRISA
> http://people.irisa.fr/Matthieu.Dorier
> _______________________________________________
> discuss mailing list discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

From matthieu.dorier at irisa.fr Mon Jun 17 16:11:24 2013
From: matthieu.dorier at irisa.fr (Matthieu Dorier)
Date: Mon, 17 Jun 2013 23:11:24 +0200 (CEST)
Subject: Re: [mpich-discuss] difference between ADIOI_xxx_ReadDone and ADIOI_xxx_ReadComplete
In-Reply-To: Message-ID: <273039959.7257067.1371503484708.JavaMail.root@irisa.fr>
Thanks Rajeev! From reading further into the code, I understand that the ReadDone function is called automatically after a read is finished (a sort of callback, somehow?) while the Complete function would be called when the user waits on a request corresponding to an asynchronous I/O operation. Am I correct? Matthieu
----- Original Message -----
> From: "Rajeev Thakur"
> To: discuss at mpich.org
> Sent: Monday, June 17, 2013 15:22:18
> Subject: Re: [mpich-discuss] difference between ADIOI_xxx_ReadDone and ADIOI_xxx_ReadComplete
>
> If you are looking for 17-year-old documentation, see below. But it
> has the answer to your question :-).
>
> Rajeev Thakur, William Gropp, and Ewing Lusk, "An Abstract-Device
> Interface for Implementing Portable Parallel-I/O Interfaces," in
> Proc. of the 6th Symposium on the Frontiers of Massively Parallel
> Computation, October 1996, pp. 180-187.
> http://www.mcs.anl.gov/~thakur/papers/adio.pdf
>
> On Jun 17, 2013, at 2:54 PM, Matthieu Dorier wrote:
> > Hi,
> > I'm trying to implement an ADIO backend and I'd like to know what
> > the difference is between ADIOI_xxx_ReadDone and
> > ADIOI_xxx_ReadComplete (and respectively for Write)? When are
> > these functions called? (I'm working with mpich 3.0.4)
> > More generally, is there documentation somewhere explaining when
> > and how each of the ADIOI_ functions is called?
> > Thank you,
> > Matthieu Dorier
> > PhD student at ENS Cachan Brittany and IRISA
> > http://people.irisa.fr/Matthieu.Dorier
> > _______________________________________________
> > discuss mailing list discuss at mpich.org
> > To manage subscription options or unsubscribe:
> > https://lists.mpich.org/mailman/listinfo/discuss
>
> _______________________________________________
> discuss mailing list discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

From matthieu.dorier at irisa.fr Mon Jun 17 16:15:16 2013
From: matthieu.dorier at irisa.fr (Matthieu Dorier)
Date: Mon, 17 Jun 2013 23:15:16 +0200 (CEST)
Subject: Re: [mpich-discuss] difference between ADIOI_xxx_ReadDone and ADIOI_xxx_ReadComplete
In-Reply-To: <273039959.7257067.1371503484708.JavaMail.root@irisa.fr>
Message-ID: <1985085506.7257478.1371503716595.JavaMail.root@irisa.fr>
Hmm, from the paper you sent, I seem to be *almost* correct. ReadDone is just to check without blocking. Thanks again, Matthieu
----- Original Message -----
> From: "Matthieu Dorier"
> To: discuss at mpich.org
> Sent: Monday, June 17, 2013 16:11:24
> Subject: Re: [mpich-discuss] difference between ADIOI_xxx_ReadDone and ADIOI_xxx_ReadComplete
>
> Thanks Rajeev!
>
> From reading further into the code, I understand that the ReadDone
> function is called automatically after a read is finished (a sort of
> callback, somehow?) while the Complete function would be called when
> the user waits on a request corresponding to an asynchronous I/O
> operation. Am I correct?
>
> Matthieu
>
> ----- Original Message -----
> > From: "Rajeev Thakur"
> > To: discuss at mpich.org
> > Sent: Monday, June 17, 2013 15:22:18
> > Subject: Re: [mpich-discuss] difference between ADIOI_xxx_ReadDone
> > and ADIOI_xxx_ReadComplete
> >
> > If you are looking for 17-year-old documentation, see below. But it
> > has the answer to your question :-).
> >
> > Rajeev Thakur, William Gropp, and Ewing Lusk, "An Abstract-Device
> > Interface for Implementing Portable Parallel-I/O Interfaces," in
> > Proc. of the 6th Symposium on the Frontiers of Massively Parallel
> > Computation, October 1996, pp. 180-187.
> > http://www.mcs.anl.gov/~thakur/papers/adio.pdf
> >
> > On Jun 17, 2013, at 2:54 PM, Matthieu Dorier wrote:
> > > Hi,
> > > I'm trying to implement an ADIO backend and I'd like to know what
> > > the difference is between ADIOI_xxx_ReadDone and
> > > ADIOI_xxx_ReadComplete (and respectively for Write)? When are
> > > these functions called? (I'm working with mpich 3.0.4)
> > > More generally, is there documentation somewhere explaining when
> > > and how each of the ADIOI_ functions is called?
> > > Thank you,
> > > Matthieu Dorier
> > > PhD student at ENS Cachan Brittany and IRISA
> > > http://people.irisa.fr/Matthieu.Dorier
> > > _______________________________________________
> > > discuss mailing list discuss at mpich.org
> > > To manage subscription options or unsubscribe:
> > > https://lists.mpich.org/mailman/listinfo/discuss
> >
> > _______________________________________________
> > discuss mailing list discuss at mpich.org
> > To manage subscription options or unsubscribe:
> > https://lists.mpich.org/mailman/listinfo/discuss
>
> _______________________________________________
> discuss mailing list discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss
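Matthieu's summary matches the ADIO design described in the paper: the Done routines are non-blocking completion checks and the Complete routines block until the operation finishes. At the user level the same pair of semantics appears as MPI_Test versus MPI_Wait on a nonblocking file request. A minimal sketch follows; the file data.bin is a hypothetical input, and exactly how these calls reach the ADIOI_xxx_ReadDone/ReadComplete routines internally varies across ROMIO versions.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_File fh;
    MPI_Request req;
    MPI_Status status;
    int buf[256], done = 0;

    MPI_Init(&argc, &argv);
    /* data.bin is assumed to exist and be readable */
    MPI_File_open(MPI_COMM_WORLD, "data.bin", MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
    MPI_File_iread(fh, buf, 256, MPI_INT, &req);  /* start a nonblocking read */

    MPI_Test(&req, &done, &status);  /* ReadDone-style: check completion, never blocks */
    if (!done) {
        /* ... overlap useful computation with the I/O here ... */
        MPI_Wait(&req, &status);     /* ReadComplete-style: block until the read finishes */
    }

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}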
From eibhlin.lee10 at imperial.ac.uk Wed Jun 19 05:13:44 2013
From: eibhlin.lee10 at imperial.ac.uk (Lee, Eibhlin)
Date: Wed, 19 Jun 2013 10:13:44 +0000
Subject: [mpich-discuss] Starting processes in root on multiple machines
Message-ID: <2D283C3861654E41AEB39AE4B6767663173AD49A@icexch-m3.ic.ac.uk>
Hello all, Jim and Pavan will be glad to hear I have now switched to Hydra. I am able to run cpi as user on multiple machines. I am also able to run cpi as root on one machine. However, I am having difficulty setting up ssh properly so that I can run cpi as root on multiple machines. How can I force a process to run as root on another machine? Regards, Eibhlin
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From balaji at mcs.anl.gov Wed Jun 19 07:14:53 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Wed, 19 Jun 2013 07:14:53 -0500
Subject: Re: [mpich-discuss] Starting processes in root on multiple machines
In-Reply-To: <2D283C3861654E41AEB39AE4B6767663173AD49A@icexch-m3.ic.ac.uk>
References: <2D283C3861654E41AEB39AE4B6767663173AD49A@icexch-m3.ic.ac.uk>
Message-ID: <51C1A0BD.70406@mcs.anl.gov>
On 06/19/2013 05:13 AM, Lee, Eibhlin wrote:
> Jim and Pavan will be glad to hear I have now switched to Hydra. I am
> able to run cpi as user on multiple machines. I am also able to run cpi
> as root on one machine. However, I am having difficulty setting up ssh
> properly so that I can run cpi as root on multiple machines.
Excellent. Once you login as root, can you ssh into the other nodes? You'll need to set that up first.
> How can I force a process to run as root on another machine?
Once the above ssh is setup, you don't need to do anything special. Just login as root and run your application.
-- Pavan
--
Pavan Balaji
http://www.mcs.anl.gov/~balaji

From gus at ldeo.columbia.edu Wed Jun 19 11:56:21 2013
From: gus at ldeo.columbia.edu (Gus Correa)
Date: Wed, 19 Jun 2013 12:56:21 -0400
Subject: Re: [mpich-discuss] Starting processes in root on multiple machines
In-Reply-To: <51C1A0BD.70406@mcs.anl.gov>
References: <2D283C3861654E41AEB39AE4B6767663173AD49A@icexch-m3.ic.ac.uk> <51C1A0BD.70406@mcs.anl.gov>
Message-ID: <51C1E2B5.5000001@ldeo.columbia.edu>
On 06/19/2013 08:14 AM, Pavan Balaji wrote:
>
> On 06/19/2013 05:13 AM, Lee, Eibhlin wrote:
>> Jim and Pavan will be glad to hear I have now switched to Hydra. I am
>> able to run cpi as user on multiple machines. I am also able to run cpi
>> as root on one machine. However, I am having difficulty setting up ssh
>> properly so that I can run cpi as root on multiple machines.
>
> Excellent. Once you login as root, can you ssh into the other nodes?
> You'll need to set that up first.
>
>> How can I force a process to run as root on another machine?
>
> Once the above ssh is setup, you don't need to do anything special. Just
> login as root and run your application.
>
> -- Pavan
>
I would caution that passwordless ssh for root is not a secure thing. It may be alright if you keep your cluster off the Internet. Maybe the cluster is dedicated to run the MPI A/D converter program? Some people/sites disable remote root logins (or any direct root logins). So, Eibhlin may want to check whether any of these restrictive mechanisms exist on the cluster's nodes: http://www.centos.org/docs/4/4.5/Security_Guide/s2-wstation-privileges-noroot.html I hope this helps, Gus Correa

From gyi at mtu.edu Thu Jun 20 14:50:26 2013
From: gyi at mtu.edu (Yi Gu)
Date: Thu, 20 Jun 2013 14:50:26 -0500
Subject: [mpich-discuss] make error: 'MPID_STATE_MPI_T_CATEGORY_CHANGED' undeclared
Message-ID:
Hi, I am trying to install MPICH 3.0.4 with logging enabled, so I configured with: ./configure --enable-timing=log --with-logging=rlog --enable-timer-type=gettimeofday When configuring rlog, it said "Makefile.in" could not be found in the directory /mpich-3.0.4/src/util/logging/rlog/, so I copied the "Makefile.in" in mpich-3.0.4/doc/logging/ to that directory and it configured successfully. However, when I tried to make it, there was an error:
CC src/mpi_t/cat_changed.lo
src/mpi_t/cat_changed.c: In function 'PMPI_T_category_changed':
src/mpi_t/cat_changed.c:104:5: error: 'MPID_STATE_MPI_T_CATEGORY_CHANGED' undeclared (first use in this function)
src/mpi_t/cat_changed.c:104:5: note: each undeclared identifier is reported only once for each function it appears in
make[2]: *** [src/mpi_t/cat_changed.lo] Error 1
How can I solve this problem? Thanks, Yi
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From sniu at hawk.iit.edu Fri Jun 21 10:51:50 2013
From: sniu at hawk.iit.edu (Sufeng Niu)
Date: Fri, 21 Jun 2013 10:51:50 -0500
Subject: [mpich-discuss] run hello world on multiple server
Message-ID:
Hi, Sorry to bother you guys with this stupid question. Last time I re-installed the OS on all blades to keep them on the same version.
After I mounted and set up keyless ssh, the terminal gave the error below:
[proxy:0:1 at iocfccd3.aps.anl.gov] HYDU_sock_connect (./utils/sock/sock.c:174): unable to connect from "iocfccd3.aps.anl.gov" to "iocfccd1.aps.anl.gov" (No route to host)
[proxy:0:1 at iocfccd3.aps.anl.gov] main (./pm/pmiserv/pmip.c:189): unable to connect to server iocfccd1.aps.anl.gov at port 38242 (check for firewalls!)
I can ssh from iocfccd1 to iocfccd3 without password. Should I shut down all firewalls on each server? I cannot figure out where the problem is. Thank you
-- Best Regards, Sufeng Niu ECASP lab, ECE department, Illinois Institute of Technology Tel: 312-731-7219
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From apenya at mcs.anl.gov Fri Jun 21 10:58:26 2013
From: apenya at mcs.anl.gov (Antonio =?ISO-8859-1?Q?J=2E_Pe=F1a?=)
Date: Fri, 21 Jun 2013 10:58:26 -0500
Subject: Re: [mpich-discuss] run hello world on multiple server
In-Reply-To: References: Message-ID: <1764654.VsHJIGvujv@localhost.localdomain>
Hi Sufeng, Can you ping/ssh exactly this name "iocfccd1.aps.anl.gov" from iocfccd3? Antonio
On Friday, June 21, 2013 10:51:50 AM Sufeng Niu wrote: Hi, Sorry to bother you guys with this stupid question. Last time I re-installed the OS on all blades to keep them on the same version. After I mounted and set up keyless ssh, the terminal gave the error below: [proxy:0:1 at iocfccd3.aps.anl.gov] HYDU_sock_connect (./utils/sock/sock.c:174): unable to connect from "iocfccd3.aps.anl.gov" to "iocfccd1.aps.anl.gov" (No route to host) [proxy:0:1 at iocfccd3.aps.anl.gov] main (./pm/pmiserv/pmip.c:189): unable to connect to server iocfccd1.aps.anl.gov at port 38242 (check for firewalls!) I can ssh from iocfccd1 to iocfccd3 without password. Should I shut down all firewalls on each server? I cannot figure out where the problem is. Thank you -- Best Regards, Sufeng Niu ECASP lab, ECE department, Illinois Institute of Technology Tel: 312-731-7219
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From jsimsa at cs.cmu.edu Sat Jun 22 11:49:10 2013
From: jsimsa at cs.cmu.edu (Jiri Simsa)
Date: Sat, 22 Jun 2013 12:49:10 -0400
Subject: [mpich-discuss] Problem reproducing an example from the MPI standard
Message-ID:
Hi, I tried implementing Example 3.17 from the MPI 3.0 specification document as follows:

#include <assert.h>
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
  int myrank, size;
  MPI_Status status;
  MPI_Init(&argc, &argv );
  MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  assert(size == 3);
  if (myrank == 0) {
    /* code for process zero */
    int i = 1;
    MPI_Send(&i, 1, MPI_INTEGER, 2, 99, MPI_COMM_WORLD);
  }
  if (myrank == 1) {
    /* code for process one */
    double d = 3.14;
    MPI_Send(&d, 1, MPI_REAL, 2, 99, MPI_COMM_WORLD);
  }
  if (myrank == 2) {
    /* code for process two */
    for (int i = 0; i < 2; i++) {
      MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
      printf("Probe matched %lld bytes from source %d.\n", status.count, status.MPI_SOURCE);
      if (status.MPI_SOURCE == 0) {
        int i;
        MPI_Recv(&i, 1, MPI_INTEGER, 0, 99, MPI_COMM_WORLD, &status);
        printf("Received integer '%d' from %d.\n", i, status.MPI_SOURCE);
      } else {
        double d;
        MPI_Recv(&d, 1, MPI_REAL, 1, 99, MPI_COMM_WORLD, &status);
        printf("Received real '%f' from %d.\n", d, status.MPI_SOURCE);
      }
    }
  }
  MPI_Finalize();
  return 0;
}

This example compiles without any warning with the MPICH-3.0.4 library. However, running:
$ mpiexec -n 3 ./example
leads to the following output:
Probe matched 4 bytes from source 1.
Received real '0.000000' from 1.
Probe matched 4 bytes from source 0.
Received integer '1' from 0.
Could someone please let me know what is the problem with my program? I failed to see a problem there. Thank you. Best, --Jiri Simsa
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From balaji at mcs.anl.gov Sat Jun 22 12:02:19 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Sat, 22 Jun 2013 12:02:19 -0500
Subject: Re: [mpich-discuss] Problem reproducing an example from the MPI standard
In-Reply-To: References: Message-ID: <51C5D89B.7010108@mcs.anl.gov>
The example is correct, since it's in Fortran. Your conversion to C is incorrect. You probably want to use MPI_DOUBLE instead of MPI_REAL, and MPI_INT instead of MPI_INTEGER.
-- Pavan
On 06/22/2013 11:49 AM, Jiri Simsa wrote:
> Hi,
> I tried implementing Example 3.17 from the MPI 3.0 specification document as follows:
>
> #include <assert.h>
> #include <mpi.h>
> #include <stdio.h>
>
> int main(int argc, char *argv[]) {
> int myrank, size;
> MPI_Status status;
> MPI_Init(&argc, &argv );
> MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
> MPI_Comm_size(MPI_COMM_WORLD, &size);
> assert(size == 3);
> if (myrank == 0) {
> /* code for process zero */
> int i = 1;
> MPI_Send(&i, 1, MPI_INTEGER, 2, 99, MPI_COMM_WORLD);
> }
> if (myrank == 1) {
> /* code for process one */
> double d = 3.14;
> MPI_Send(&d, 1, MPI_REAL, 2, 99, MPI_COMM_WORLD);
> }
> if (myrank == 2) {
> /* code for process two */
> for (int i = 0; i < 2; i++) {
> MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
> printf("Probe matched %lld bytes from source %d.\n", status.count, status.MPI_SOURCE);
> if (status.MPI_SOURCE == 0) {
> int i;
> MPI_Recv(&i, 1, MPI_INTEGER, 0, 99, MPI_COMM_WORLD, &status);
> printf("Received integer '%d' from %d.\n", i, status.MPI_SOURCE);
> } else {
> double d;
> MPI_Recv(&d, 1, MPI_REAL, 1, 99, MPI_COMM_WORLD, &status);
> printf("Received real '%f' from %d.\n", d, status.MPI_SOURCE);
> }
> }
> }
> MPI_Finalize();
> return 0;
> }
>
> This example compiles without any warning with the MPICH-3.0.4 library. However, running:
> $ mpiexec -n 3 ./example
> leads to the following output:
> Probe matched 4 bytes from source 1.
> Received real '0.000000' from 1.
> Probe matched 4 bytes from source 0.
> Received integer '1' from 0.
> Could someone please let me know what is the problem with my program? I failed to see a problem there. Thank you.
> Best,
> --Jiri Simsa
> _______________________________________________
> discuss mailing list discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss
--
Pavan Balaji
http://www.mcs.anl.gov/~balaji

From jsimsa at cs.cmu.edu Sat Jun 22 12:12:06 2013
From: jsimsa at cs.cmu.edu (Jiri Simsa)
Date: Sat, 22 Jun 2013 13:12:06 -0400
Subject: Re: [mpich-discuss] Problem reproducing an example from the MPI standard
In-Reply-To: References: Message-ID:
Nevermind. I was incorrectly using the MPI_REAL data type. After replacing it with the MPI_DOUBLE data type, the program behaved as expected.
On Sat, Jun 22, 2013 at 12:49 PM, Jiri Simsa wrote:
> Hi,
> I tried implementing Example 3.17 from the MPI 3.0 specification document as follows:
>
> #include <assert.h>
> #include <mpi.h>
> #include <stdio.h>
>
> int main(int argc, char *argv[]) {
> int myrank, size;
> MPI_Status status;
> MPI_Init(&argc, &argv );
> MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
> MPI_Comm_size(MPI_COMM_WORLD, &size);
> assert(size == 3);
> if (myrank == 0) {
> /* code for process zero */
> int i = 1;
> MPI_Send(&i, 1, MPI_INTEGER, 2, 99, MPI_COMM_WORLD);
> }
> if (myrank == 1) {
> /* code for process one */
> double d = 3.14;
> MPI_Send(&d, 1, MPI_REAL, 2, 99, MPI_COMM_WORLD);
> }
> if (myrank == 2) {
> /* code for process two */
> for (int i = 0; i < 2; i++) {
> MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
> printf("Probe matched %lld bytes from source %d.\n", status.count, status.MPI_SOURCE);
> if (status.MPI_SOURCE == 0) {
> int i;
> MPI_Recv(&i, 1, MPI_INTEGER, 0, 99, MPI_COMM_WORLD, &status);
> printf("Received integer '%d' from %d.\n", i, status.MPI_SOURCE);
> } else {
> double d;
> MPI_Recv(&d, 1, MPI_REAL, 1, 99, MPI_COMM_WORLD, &status);
> printf("Received real '%f' from %d.\n", d, status.MPI_SOURCE);
> }
> }
> }
> MPI_Finalize();
> return 0;
> }
>
> Could someone please let me know what is the problem with my program? I failed to see a problem there. Thank you.
> Best,
> --Jiri Simsa
-------------- next part -------------- An HTML attachment was scrubbed... URL:
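For later readers, a minimal corrected sketch of the exchange: the C types int and double pair with MPI_INT and MPI_DOUBLE, while MPI_INTEGER and MPI_REAL describe the Fortran INTEGER and REAL types. MPI_Get_count is the portable way to ask how much a probe matched (the status.count field used above is an MPICH implementation detail). The executable name "fixed" is hypothetical.

#include <mpi.h>
#include <stdio.h>

/* Run with at least two ranks, e.g. "mpiexec -n 2 ./fixed". */
int main(int argc, char *argv[])
{
    int rank, size, count;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) MPI_Abort(MPI_COMM_WORLD, 1);

    if (rank == 0) {
        int i = 1;
        MPI_Send(&i, 1, MPI_INT, 1, 99, MPI_COMM_WORLD);  /* C int -> MPI_INT */
    } else if (rank == 1) {
        int i;
        MPI_Probe(0, 99, MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_INT, &count);  /* portable replacement for status.count */
        MPI_Recv(&i, 1, MPI_INT, 0, 99, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Probe matched %d ints; received '%d'.\n", count, i);
    }

    MPI_Finalize();
    return 0;
}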
Problem reproducing an example from the MPI standard (Jiri Simsa) > 4. Re: Problem reproducing an example from the MPI standard > (Pavan Balaji) > 5. Re: Problem reproducing an example from the MPI standard > (Jiri Simsa) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Fri, 21 Jun 2013 10:51:50 -0500 > From: Sufeng Niu > To: discuss at mpich.org > Subject: [mpich-discuss] run hello world on multiple server > Message-ID: > < > CAFNNHkwpqdGfZXctL0Uz3hpeL25mZZMtB93qGXjc_+tjnV4csA at mail.gmail.com> > Content-Type: text/plain; charset="iso-8859-1" > > Hi, > > Sorry to bother you guys on this stupid question. last time I re-install OS > for all blades to keep them the same version. after I mount, set keyless > ssh, the terimnal gives the error below: > > [proxy:0:1 at iocfccd3.aps.anl.gov] HYDU_sock_connect > (./utils/sock/sock.c:174): unable to connect from "iocfccd3.aps.anl.gov" > to > "iocfccd1.aps.anl.gov" (No route to host) > [proxy:0:1 at iocfccd3.aps.anl.gov] main (./pm/pmiserv/pmip.c:189): unable to > connect to server iocfccd1.aps.anl.gov at port 38242 (check for > firewalls!) > > I can ssh from iocfccd1 to iocfccd3 without password. Should I shut down > all firewalls on each server? I cannot find out where is the problem. Thank > you > > -- > Best Regards, > Sufeng Niu > ECASP lab, ECE department, Illinois Institute of Technology > Tel: 312-731-7219 > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://lists.mpich.org/pipermail/discuss/attachments/20130621/5503b1bc/attachment-0001.html > > > > ------------------------------ > > Message: 2 > Date: Fri, 21 Jun 2013 10:58:26 -0500 > From: Antonio J. Pe?a > To: discuss at mpich.org > Subject: Re: [mpich-discuss] run hello world on multiple server > Message-ID: <1764654.VsHJIGvujv at localhost.localdomain> > Content-Type: text/plain; charset="iso-8859-1" > > > Hi Sufeng, > > Can you ping/ssh exactly this name "iocfccd1.aps.anl.gov" from iocfccd3? > [1] > > Antonio > > > On Friday, June 21, 2013 10:51:50 AM Sufeng Niu wrote: > > > Hi, > > > Sorry to bother you guys on this stupid question. last time I re-install > OS for > all blades to keep them the same version. after I mount, set keyless ssh, > the terimnal gives the error below: > > proxy:0:1 at iocfccd3.aps.anl.gov[2]] HYDU_sock_connect > (./utils/sock/sock.c:174): unable to connect from "iocfccd3.aps.anl.gov > [3]" > to "iocfccd1.aps.anl.gov[1]" (No route to host) > [proxy:0:1 at iocfccd3.aps.anl.gov[2]] main (./pm/pmiserv/pmip.c:189): > unable to connect to server iocfccd1.aps.anl.gov[1] at port 38242 (check > for firewalls!) > > > > I can ssh from iocfccd1 to iocfccd3 without password. Should I shut down > all firewalls on each server? I cannot find out where is the problem. Thank > you > > > > > -- Best Regards, > Sufeng Niu > ECASP lab, ECE department, Illinois Institute of Technology > Tel: 312-731-7219[4] > > > -------- > [1] http://iocfccd1.aps.anl.gov > [2] mailto:proxy%3A0%3A1 at iocfccd3.aps.anl.gov > [3] http://iocfccd3.aps.anl.gov > [4] tel:312-731-7219 > -------------- next part -------------- > An HTML attachment was scrubbed... 
> URL: <
> http://lists.mpich.org/pipermail/discuss/attachments/20130621/01b37902/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 3
> Date: Sat, 22 Jun 2013 12:49:10 -0400
> From: Jiri Simsa <jsimsa at cs.cmu.edu>
> To: discuss at mpich.org
> Subject: [mpich-discuss] Problem reproducing an example from the MPI
>         standard
> Message-ID:
>         <CAHs9ut-_6W6SOHTJ_rD+shQ76bo4cTCuFVAy1f9x-J0gioakHg at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi,
>
> I tried implementing Example 3.17 from the MPI 3.0 specification
> document as follows:
>
> #include <mpi.h>
> #include <stdio.h>
> #include <assert.h>
>
> int main(int argc, char *argv[]) {
>   int myrank, size;
>   MPI_Status status;
>   MPI_Init(&argc, &argv);
>   MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
>   MPI_Comm_size(MPI_COMM_WORLD, &size);
>   assert(size == 3);
>   if (myrank == 0) {
>     /* code for process zero */
>     int i = 1;
>     MPI_Send(&i, 1, MPI_INTEGER, 2, 99, MPI_COMM_WORLD);
>   }
>   if (myrank == 1) {
>     /* code for process one */
>     double d = 3.14;
>     MPI_Send(&d, 1, MPI_REAL, 2, 99, MPI_COMM_WORLD);
>   }
>   if (myrank == 2) {
>     /* code for process two */
>     for (int i = 0; i < 2; i++) {
>       MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
>       printf("Probe matched %lld bytes from source %d.\n",
>              status.count, status.MPI_SOURCE);
>       if (status.MPI_SOURCE == 0) {
>         int i;
>         MPI_Recv(&i, 1, MPI_INTEGER, 0, 99, MPI_COMM_WORLD, &status);
>         printf("Received integer '%d' from %d.\n", i, status.MPI_SOURCE);
>       } else {
>         double d;
>         MPI_Recv(&d, 1, MPI_REAL, 1, 99, MPI_COMM_WORLD, &status);
>         printf("Received real '%f' from %d.\n", d, status.MPI_SOURCE);
>       }
>     }
>   }
>   MPI_Finalize();
>   return 0;
> }
>
> This example compiles without any warning with the MPICH-3.0.4 library.
> However, running:
>
> $ mpiexec -n 3 ./example
>
> leads to the following output:
>
> Probe matched 4 bytes from source 1.
> Received real '0.000000' from 1.
> Probe matched 4 bytes from source 0.
> Received integer '1' from 0.
>
> Could someone please let me know what is the problem with my program? I
> failed to see a problem there. Thank you.
>
> Best,
>
> --Jiri Simsa
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.mpich.org/pipermail/discuss/attachments/20130622/70ca8d04/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 4
> Date: Sat, 22 Jun 2013 12:02:19 -0500
> From: Pavan Balaji
> To: discuss at mpich.org
> Subject: Re: [mpich-discuss] Problem reproducing an example from the
>         MPI standard
> Message-ID: <51C5D89B.7010108 at mcs.anl.gov>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> The example is correct, since it's in Fortran. Your conversion to C is
> incorrect. You probably want to use MPI_DOUBLE instead of MPI_REAL, and
> MPI_INT instead of MPI_INTEGER.
>
> -- Pavan
>
> On 06/22/2013 11:49 AM, Jiri Simsa wrote:
> > [quoted message trimmed]
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji
>
> ------------------------------
>
> Message: 5
> Date: Sat, 22 Jun 2013 13:12:06 -0400
> From: Jiri Simsa <jsimsa at cs.cmu.edu>
> To: discuss at mpich.org
> Subject: Re: [mpich-discuss] Problem reproducing an example from the
>         MPI standard
> Message-ID:
>         <CAHs9ut_0_masn6v2+8FKZ8Y1Z_cdoteCespHNpSORc9iqFOcoA at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Nevermind. I was incorrectly using the MPI_REAL data type. After
> replacing it with the MPI_DOUBLE data type the program worked as
> expected.
>
> On Sat, Jun 22, 2013 at 12:49 PM, Jiri Simsa wrote:
> > [quoted message trimmed]
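For reference, here is a minimal corrected sketch of the rank-2 receive
loop (a sketch, not Jiri's final code): the substantive fix is using the
C datatypes MPI_INT and MPI_DOUBLE instead of the Fortran names
MPI_INTEGER and MPI_REAL. As an extra portability fix it reads the
matched size with MPI_Get_count rather than poking status.count, since
the layout of MPI_Status beyond MPI_SOURCE/MPI_TAG/MPI_ERROR is
implementation specific.

#include <mpi.h>
#include <stdio.h>

/* Rank 2 of the example: probe twice, then receive an int from rank 0
 * or a double from rank 1, using the C (not Fortran) datatype names. */
void receive_both(void)
{
    MPI_Status status;
    int count;

    for (int k = 0; k < 2; k++) {
        MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        if (status.MPI_SOURCE == 0) {
            int i;
            MPI_Get_count(&status, MPI_INT, &count);  /* portable count query */
            MPI_Recv(&i, 1, MPI_INT, 0, 99, MPI_COMM_WORLD, &status);
            printf("Received %d int(s), value %d, from %d.\n",
                   count, i, status.MPI_SOURCE);
        } else {
            double d;
            MPI_Get_count(&status, MPI_DOUBLE, &count);
            MPI_Recv(&d, 1, MPI_DOUBLE, 1, 99, MPI_COMM_WORLD, &status);
            printf("Received %d double(s), value %f, from %d.\n",
                   count, d, status.MPI_SOURCE);
        }
    }
}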
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.mpich.org/pipermail/discuss/attachments/20130622/4b69c2f8/attachment.html
> >
>
> ------------------------------
>
> _______________________________________________
> discuss mailing list
> discuss at mpich.org
> https://lists.mpich.org/mailman/listinfo/discuss
>
> End of discuss Digest, Vol 8, Issue 37
> **************************************

--
Best Regards,
Sufeng Niu
ECASP lab, ECE department, Illinois Institute of Technology
Tel: 312-731-7219
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From muhtaroglu.n at gmail.com Sun Jun 23 16:58:59 2013
From: muhtaroglu.n at gmail.com (Nitel Muhtaroglu)
Date: Mon, 24 Jun 2013 00:58:59 +0300
Subject: [mpich-discuss] Error with MPI_Spawn
Message-ID: <51C76FA3.2050303 at gmail.com>

Hello,

I am trying to integrate the PETSc library into a serial program. The
idea is that the serial program creates a linear equation system, then
calls the PETSc solver via MPI_Spawn, which solves the system in
parallel. But when I execute MPI_Spawn the following error message
occurs and the solver is not called. I couldn't find a solution to this
error. Does anyone have an idea about it?

Kind Regards,
--
Nitel

**********************************************************
Assertion failed in file socksm.c at line 590: hdr.pkt_type ==
MPIDI_NEM_TCP_SOCKSM_PKT_ID_INFO || hdr.pkt_type ==
MPIDI_NEM_TCP_SOCKSM_PKT_TMPVC_INFO
internal ABORT - process 0
INTERNAL ERROR: Invalid error class (66) encountered while returning from
MPI_Init. Please file a bug report.
Fatal error in MPI_Init: Unknown error.
Please file a bug report., error stack:
(unknown)(): connection failure
[cli_0]: aborting job:
Fatal error in MPI_Init: Unknown error. Please file a bug report., error stack:
(unknown)(): connection failure
**********************************************************

From jeff.science at gmail.com Sun Jun 23 17:03:09 2013
From: jeff.science at gmail.com (Jeff Hammond)
Date: Sun, 23 Jun 2013 17:03:09 -0500
Subject: Re: [mpich-discuss] Error with MPI_Spawn
In-Reply-To: <51C76FA3.2050303 at gmail.com>
References: <51C76FA3.2050303 at gmail.com>
Message-ID: <-1578386667793986954 at unknownmsgid>

This is the wrong way to use PETSc, and the wrong way to parallelize a
code with a parallel library in general.

Write to the PETSc user list and they will explain how to parallelize
your code properly with PETSc.

Jeff

Sent from my iPhone

On Jun 23, 2013, at 4:59 PM, Nitel Muhtaroglu <muhtaroglu.n at gmail.com> wrote:
> [quoted message trimmed]

From apenya at mcs.anl.gov Sun Jun 23 18:54:38 2013
From: apenya at mcs.anl.gov (Antonio J. Peña)
Date: Sun, 23 Jun 2013 18:54:38 -0500
Subject: Re: [mpich-discuss] discuss Digest, Vol 8, Issue 37
In-Reply-To: 
References: 
Message-ID: <8872211.IGCl89YM8r at localhost.localdomain>

Sufeng,

The correct way is to use the MPI_Init_thread function with
MPI_THREAD_MULTIPLE. This tells the MPI implementation to be thread
safe. It supports both OpenMP and POSIX threads (OpenMP primitives on
most systems are likely to be implemented on top of pthreads anyway).

Antonio

On Sunday, June 23, 2013 11:13:31 AM Sufeng Niu wrote:
> [quoted message trimmed]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
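A minimal sketch of the initialization pattern Antonio describes,
assuming a plain pthreads worker (the thread-pool logic from Sufeng's
mail is application specific and elided; "worker" is a hypothetical
placeholder, not code from the thread):

#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

/* Each thread may make MPI calls only because MPI_THREAD_MULTIPLE was
 * requested (and granted) below. */
static void *worker(void *arg)
{
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("thread %ld running on rank %d\n", (long)arg, rank);
    return NULL;
}

int main(int argc, char *argv[])
{
    int provided;
    pthread_t tid[NTHREADS];

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, worker, (void *)t);
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);

    MPI_Finalize();
    return 0;
}

Build with something like "mpicc -pthread hybrid.c"; the same skeleton
applies whether the threads come from a hand-rolled pool or from OpenMP.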
From sniu at hawk.iit.edu Mon Jun 24 11:41:44 2013
From: sniu at hawk.iit.edu (Sufeng Niu)
Date: Mon, 24 Jun 2013 11:41:44 -0500
Subject: Re: [mpich-discuss] discuss Digest, Vol 8, Issue 39
In-Reply-To: 
References: 
Message-ID: 

Hi Antonio,

Thanks a lot! Now it makes sense. Let's say I am running a mixed MPI
and multithreaded program: if I call MPI_Barrier in each thread, what
is going to happen? Will the threads be synced by MPI_Barrier, or
should I use thread-level synchronization? Thank you!
Sufeng

On Sun, Jun 23, 2013 at 6:54 PM, <discuss-request at mpich.org> wrote:
> [discuss Digest, Vol 8, Issue 39 quoted in full: Jeff Hammond's reply
> on MPI_Spawn and Antonio's reply on MPI_THREAD_MULTIPLE, both shown
> above; trimmed]
--
Best Regards,
Sufeng Niu
ECASP lab, ECE department, Illinois Institute of Technology
Tel: 312-731-7219
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From matthieu.dorier at irisa.fr Mon Jun 24 15:46:21 2013
From: matthieu.dorier at irisa.fr (Matthieu Dorier)
Date: Mon, 24 Jun 2013 22:46:21 +0200 (CEST)
Subject: [mpich-discuss] Problem with ADIOI_Info_get (MPI_Info_get) from
        the ADIO layer
In-Reply-To: <1845380388.2141341.1372106570426.JavaMail.root at irisa.fr>
Message-ID: <1911044962.2141864.1372106781445.JavaMail.root at irisa.fr>

Hi,

I'm implementing an ADIO backend and I'm having problems retrieving
values from the MPI_Info attached to the file.

On the application side, I have something like this:

MPI_Info_create(&info);
MPI_Info_set(info, "cb_buffer_size", "64");
MPI_Info_set(info, "xyz", "3");
MPI_File_open(comm, "file", MPI_MODE_WRONLY | MPI_MODE_CREATE, info, &fh);

then a call to MPI_File_write, which ends up calling my implementation
of ADIOI_xxx_WriteContig. In this function, I try to read these info
values back:

int info_flag;
char* value = (char *) ADIOI_Malloc((MPI_MAX_INFO_VAL+1)*sizeof(char));
ADIOI_Info_get(fd->info, "xyz", MPI_MAX_INFO_VAL, value, &info_flag);
if (info_flag) fprintf(stderr, "xyz = %d\n", atoi(value));
ADIOI_Info_get(fd->info, "cb_buffer_size", MPI_MAX_INFO_VAL, value, &info_flag);
if (info_flag) fprintf(stderr, "cb_buffer_size = %d\n", atoi(value));

I can get the 64 associated with the cb_buffer_size key (which is a
reserved hint), but I don't get the second value. Where does the
problem come from? I tried everything: re-ordering the calls, changing
the name of the key, calling MPI_Info_get in the application to check
that the values are properly set (they are)...

Thanks,

Matthieu Dorier
PhD student at ENS Cachan Brittany and IRISA
http://people.irisa.fr/Matthieu.Dorier
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
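For what it's worth, one likely explanation for the dropped "xyz" hint
(an assumption based on ROMIO's usual hint handling, not a confirmed
diagnosis): the keys that survive into fd->info are chosen by the
driver's SetInfo hook at open time, and the generic ADIOI_GEN_SetInfo
only keeps hints it recognizes, such as cb_buffer_size, so a custom key
is silently dropped. A backend that wants its own hints must copy them
itself. A sketch of what that might look like in a hypothetical
ADIOI_XYZ_SetInfo:

#include "adio.h"  /* ROMIO-internal header; ADIO backends build inside ROMIO */

void ADIOI_XYZ_SetInfo(ADIO_File fd, MPI_Info users_info, int *error_code)
{
    char value[MPI_MAX_INFO_VAL + 1];
    int flag;

    if (users_info != MPI_INFO_NULL) {
        /* preserve the custom hint in fd->info so WriteContig can see it */
        ADIOI_Info_get(users_info, "xyz", MPI_MAX_INFO_VAL, value, &flag);
        if (flag)
            ADIOI_Info_set(fd->info, "xyz", value);
    }

    /* delegate the reserved hints (cb_buffer_size, ...) to the generic code */
    ADIOI_GEN_SetInfo(fd, users_info, error_code);
}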
From apenya at mcs.anl.gov Mon Jun 24 15:49:08 2013
From: apenya at mcs.anl.gov (Antonio J. Peña)
Date: Mon, 24 Jun 2013 15:49:08 -0500
Subject: Re: [mpich-discuss] discuss Digest, Vol 8, Issue 39
In-Reply-To: 
References: 
Message-ID: <1621080.0a08dQASC5 at localhost.localdomain>

Sufeng,

I'd say you're OK syncing your threads by having all of them call
MPI_Barrier, as it's a thread-safe function.

Antonio

On Monday, June 24, 2013 11:41:44 AM Sufeng Niu wrote:
> Let's say I am running a mixed MPI and multithreaded program: if I
> call MPI_Barrier in each thread, what is going to happen? Will the
> threads be synced by MPI_Barrier, or should I use thread-level
> synchronization?
> [rest of quoted message trimmed]
From jhammond at alcf.anl.gov Mon Jun 24 16:14:20 2013
From: jhammond at alcf.anl.gov (Jeff Hammond)
Date: Mon, 24 Jun 2013 16:14:20 -0500 (CDT)
Subject: Re: [mpich-discuss] discuss Digest, Vol 8, Issue 39
In-Reply-To: <1621080.0a08dQASC5 at localhost.localdomain>
Message-ID: <1861069226.7428562.1372108460652.JavaMail.root at alcf.anl.gov>

MPI_Barrier will not sync threads. If N threads call MPI_Barrier, you
will get at best the same result as if you call MPI_Barrier N times
from the main thread.

If you want to sync threads, you need to sync them with the appropriate
thread API. OpenMP and Pthreads both have barrier calls. If you want a
fast Pthread barrier, you should not use pthread_barrier though. The
Internet has details.

Jeff

----- Original Message -----
From: "Antonio J. Peña" <apenya at mcs.anl.gov>
To: discuss at mpich.org
Sent: Monday, June 24, 2013 3:49:08 PM
Subject: Re: [mpich-discuss] discuss Digest, Vol 8, Issue 39
[quoted message trimmed]
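A sketch of the thread-level alternative Jeff points to, using a plain
pthread_barrier_t (correct, though per Jeff's remark not the fastest
possible barrier); the assumptions here are one process-wide barrier
and a fixed thread count:

#include <mpi.h>
#include <pthread.h>

#define NTHREADS 4

static pthread_barrier_t tbar;  /* synchronizes threads *within* one process */

static void *worker(void *arg)
{
    /* ... per-thread work, phase 1 ... */
    pthread_barrier_wait(&tbar);  /* all NTHREADS threads of this process meet here */
    /* ... phase 2, which may assume every local thread finished phase 1 ... */
    return NULL;
}

int main(int argc, char *argv[])
{
    int provided;
    pthread_t tid[NTHREADS];

    /* FUNNELED suffices here: only the main thread makes MPI calls */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    pthread_barrier_init(&tbar, NULL, NTHREADS);

    for (int t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, worker, NULL);
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);

    MPI_Barrier(MPI_COMM_WORLD);  /* this, separately, synchronizes the processes */
    pthread_barrier_destroy(&tbar);
    MPI_Finalize();
    return 0;
}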
From apenya at mcs.anl.gov Mon Jun 24 16:22:48 2013
From: apenya at mcs.anl.gov (Antonio J. Peña)
Date: Mon, 24 Jun 2013 16:22:48 -0500
Subject: Re: [mpich-discuss] discuss Digest, Vol 8, Issue 39
In-Reply-To: <1861069226.7428562.1372108460652.JavaMail.root at alcf.anl.gov>
References: <1861069226.7428562.1372108460652.JavaMail.root at alcf.anl.gov>
Message-ID: <1884537.1z1OIaxkHc at localhost.localdomain>

Hi Jeff,

Please correct me where I'm wrong: if only one thread calls MPI_Barrier,
that one will get locked while the others go ahead. On the other hand,
if every thread calls MPI_Barrier, all will get locked in that call
until the barrier completes.
> > Jeff > > Sent from my iPhone > > On Jun 23, 2013, at 4:59 PM, Nitel Muhtaroglu < muhtaroglu.n at gmail.com > wrote: > > Hello, > > > > I am trying to integrate PETSc library to a serial program. The idea is > > that the serial program creates a linear equation system and then calls > > PETSc solver by MPI_Spawn and then solves this system in parallel. But > > when I execute MPI_Spawn the following error message occurs and the > > solver is not called. I couldn't find a solution to this error. Does > > anyone have an idea about it? > > > > Kind Regards, > > ------------------------------ > > Message: 2 > Date: Sun, 23 Jun 2013 18:54:38 -0500 > From: Antonio J. Pe?a < apenya at mcs.anl.gov > > To: discuss at mpich.org > Subject: Re: [mpich-discuss] discuss Digest, Vol 8, Issue 37 > Message-ID: <8872211.IGCl89YM8r at localhost.localdomain> > Content-Type: text/plain; charset="iso-8859-1" > > > Sufeng, > > The correct way is to use the MPI_Init_thread function with > MPI_THREAD_MULTIPLE. This will tell the MPI implementation to be thread > safe. It supports OpenMP and Posix Threads (OpenMP primitives in most > systems are likely to be implemented on top of PThreads). > > Antonio > > > On Sunday, June 23, 2013 11:13:31 AM Sufeng Niu wrote: > > > Hi, Antonio > > > Thanks a lot for your reply, I just figure out that is the firewall issue. > after I set the firewall. it works now. Thanks again. > > > But I still got a few questions on MPI and multithreads mixed programming. > Currently, I try to run each process on each server, and each process > using thread pool to run multiple threads (pthread lib). I am not sure > whether it is the correct way or not. I wrote it as: > > > MPI_Init() > .... > ... > /* create thread pool and initial */ > ...... > /* fetch job into thread pool */ > ...... > > > MPI_Finalize(); > > > When I check the book and notes, I found people use > > > MPI_Init_thread() with MPI_THREAD_MULTIPLE > > > but the some docs said it supported OpenMP, is that possible to use it > with pthread library? > I am new guy to this hybrid programming. I am not sure which is the proper > way to do it. Any suggestions are appreciate. Thank you! > > > Sufeng > > > > > On Sat, Jun 22, 2013 at 12:12 PM, < discuss-request at mpich.org [1]> wrote: > > > Send discuss mailing list submissions to discuss at mpich.org [2] > https://lists.mpich.org/mailman/listinfo/discuss[3] > discuss-request at mpich.org [1] > discuss-owner at mpich.org [4] > sniu at hawk.iit.edu [5]>To: discuss at mpich.org [2] > CAFNNHkwpqdGfZXctL0Uz3hpeL25mZZMtB93qGXjc_+tjnV4csA at mail.gmail.c > om[6]>Content-Type: text/plain; charset="iso-8859-1" > > Hi, > > Sorry to bother you guys on this stupid question. last time I re-install > OSfor all blades to keep them the same version. after I mount, set > keylessssh, the terimnal gives the error below: > > [ proxy:0:1 at iocfccd3.aps.anl.gov [7]] > HYDU_sock_connect(./utils/sock/sock.c:174): unable to connect from > " iocfccd3.aps.anl.gov [8]" to" iocfccd1.aps.anl.gov [9]" (No route to host) > [ proxy:0:1 at iocfccd3.aps.anl.gov [7]] main (./pm/pmiserv/pmip.c:189): > unable toconnect to server iocfccd1.aps.anl.gov [9] at port 38242 (check > for firewalls!) > > I can ssh from iocfccd1 to iocfccd3 without password. Should I shut downall > firewalls on each server? I cannot find out where is the problem. 
From jeff.science at gmail.com Mon Jun 24 16:30:57 2013
From: jeff.science at gmail.com (Jeff Hammond)
Date: Mon, 24 Jun 2013 16:30:57 -0500
Subject: Re: [mpich-discuss] discuss Digest, Vol 8, Issue 39
In-Reply-To: <1884537.1z1OIaxkHc at localhost.localdomain>
References: <1861069226.7428562.1372108460652.JavaMail.root at alcf.anl.gov>
        <1884537.1z1OIaxkHc at localhost.localdomain>

N threads calling MPI_Barrier corresponds to N different, unrelated
barriers. A thread calling MPI_Barrier will only synchronize with other
processes, not with any other threads.

MPI_Barrier only acts between processes. It has no effect on threads.
Just use comm=MPI_COMM_SELF and think about the behavior of MPI_Barrier.
That is the one-process limit of the multithreaded problem.

Jeff

On Mon, Jun 24, 2013 at 4:22 PM, Antonio J. Peña <apenya at mcs.anl.gov> wrote:
> Hi Jeff,
>
> Please correct me where I'm wrong: if only one thread calls MPI_Barrier,
> that one will get locked while the others go ahead. On the other hand,
> if every thread calls MPI_Barrier, all will get locked in that call
> until the barrier completes.
> [rest of quoted message trimmed]
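Jeff's one-process thought experiment, as a tiny sketch: a barrier over
MPI_COMM_SELF involves only the calling process, so every call returns
immediately and orders nothing, which is exactly how much thread
synchronization a thread gets from MPI_Barrier.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    /* MPI_COMM_SELF has a "group" of one process, so each call
     * completes at once; the N calls wait for nothing. */
    for (int n = 0; n < 4; n++) {
        MPI_Barrier(MPI_COMM_SELF);
        printf("barrier %d returned immediately\n", n);
    }

    MPI_Finalize();
    return 0;
}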
Peña
> >> <apenya at mcs.anl.gov [12]> To: discuss at mpich.org [2]
> >> iocfccd1.aps.anl.gov [9]" from iocfccd3? [1]
> >>
> >> Antonio
> >>
> >> On Friday, June 21, 2013 10:51:50 AM Sufeng Niu wrote:
> >>
> >> Hi,
> >>
> >> Sorry to bother you guys with this stupid question. Last time I re-installed the
> >> OS for all blades to keep them on the same version. After I mounted and set up
> >> keyless ssh, the terminal gave the error below:
> >>
> >> [proxy:0:1 at iocfccd3.aps.anl.gov [7][2]]
> >> HYDU_sock_connect(./utils/sock/sock.c:174): unable to connect from
> >> "iocfccd3.aps.anl.gov [8][3]" to "iocfccd1.aps.anl.gov [9][1]" (No route
> >> to host)
> >> [proxy:0:1 at iocfccd3.aps.anl.gov [7][2]] main (./pm/pmiserv/pmip.c:189):
> >> unable to connect to server iocfccd1.aps.anl.gov [9][1] at port 38242 (check
> >> for firewalls!)
> >>
> >> I can ssh from iocfccd1 to iocfccd3 without a password. Should I shut
> >> down all firewalls on each server? I cannot find out where the problem is.
> >> Thank you.
> >>
> >> -- Best Regards, Sufeng Niu, ECASP lab, ECE department, Illinois Institute
> >> of Technology, Tel: 312-731-7219 [4]
> >>
> >> -------- [1] http://iocfccd1.aps.anl.gov
> >> [2] proxy%3A0%3A1 at iocfccd3.aps.anl.gov [13]
> >> [3] http://iocfccd3.aps.anl.gov [8]
> >> [4] 312-731-7219 [10]
> >> http://lists.mpich.org/pipermail/discuss/attachments/20130621/01b37902/attachment-0001.html [14]
> >>
> >> ------------------------------
> >>
> >> Message: 3 Date: Sat, 22 Jun 2013 12:49:10 -0400 From: Jiri Simsa
> >> <jsimsa at cs.cmu.edu [15]> To: discuss at mpich.org [2]
> >> CAHs9ut-_6W6SOHTJ_rD+shQ76bo4cTCuFVAy1f9x-J0gioakHg at mail.gmail.com [16]> Content-Type: text/plain;
> >> charset="iso-8859-1"
> >>
> >> Hi,
> >> -------------- next part --------------
> >> An HTML attachment was scrubbed...
> >> URL: <http://lists.mpich.org/pipermail/discuss/attachments/20130623/a6104115/attachment.html>
> >>
> >> ------------------------------
> >>
> >> _______________________________________________
> >> discuss mailing list
> >> discuss at mpich.org
> >> https://lists.mpich.org/mailman/listinfo/discuss
> >>
> >> End of discuss Digest, Vol 8, Issue 39
> >> **************************************
> >
> > _______________________________________________
> > discuss mailing list discuss at mpich.org
> > To manage subscription options or unsubscribe:
> > https://lists.mpich.org/mailman/listinfo/discuss

-- 
Antonio J. Peña
Postdoctoral Appointee
Mathematics and Computer Science Division
Argonne National Laboratory
9700 South Cass Avenue, Bldg. 240, Of. 3148
Argonne, IL 60439-4847
(+1) 630-252-7928
apenya at mcs.anl.gov

From akp4221 at hawaii.edu Tue Jun 25 03:15:55 2013
From: akp4221 at hawaii.edu (Andre Pattantyus)
Date: Mon, 24 Jun 2013 22:15:55 -1000
Subject: [mpich-discuss] installing mpich-1.2.6 to create libmpich.so.1.0
Message-ID: 

Hello, I am not familiar with installing MPICH and am having problems with
installation. I have found from searching online that mpich-1.2.6 produces
libmpich.so.1.0, which I require to run a certain program in parallel. I am
unable to follow the installation documentation I have found online for this
version, because when I configure for ch_p4mpd I get an error when I run make.
Therefore I just run ./configure and make, but this does not produce my
required libmpich.so.1.0. What do I need to specify prior to either configure
or make in order to build this? I am building on a linux86-64 with the
pgi/10.2 compiler.
-- 
Andre Pattantyus
Graduate Student Research Assistant
Department of Meteorology
University of Hawaii at Manoa
2525 Correa Rd, HIG 350
Honolulu, HI 96822
Phone: (845) 264-3582
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pradnya.dixit123 at gmail.com Tue Jun 25 03:33:31 2013
From: pradnya.dixit123 at gmail.com (pradnya dixit)
Date: Tue, 25 Jun 2013 14:03:31 +0530
Subject: [mpich-discuss] discuss Digest, Vol 8, Issue 40
Message-ID: 

Hello, I am trying to send a chunk of data using the send and recv functions
in MPI, but I am facing several problems and errors, like:

Elements of rank 1 are:
[pradnya-Lenovo-G570:04553] *** Process received signal ***
[pradnya-Lenovo-G570:04553] Signal: Segmentation fault (11)
[pradnya-Lenovo-G570:04553] Signal code: Address not mapped (1)
[pradnya-Lenovo-G570:04553] Failing at address: 0x8be8950
[pradnya-Lenovo-G570:04554] *** Process received signal ***
[pradnya-Lenovo-G570:04554] Signal: Segmentation fault (11)
[pradnya-Lenovo-G570:04554] Signal code: Address not mapped (1)
[pradnya-Lenovo-G570:04554] Failing at address: 0x8be8960
[pradnya-Lenovo-G570:04552] *** Process received signal ***
[pradnya-Lenovo-G570:04552] Signal: Segmentation fault (11)
[pradnya-Lenovo-G570:04552] Signal code: Address not mapped (1)
[pradnya-Lenovo-G570:04552] Failing at address: 0x8be86f8
[pradnya-Lenovo-G570:04553] [ 0] [0xff440c]
[pradnya-Lenovo-G570:04553] [ 1] /lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0xc66e37]
[pradnya-Lenovo-G570:04553] [ 2] lst() [0x8048871]
[pradnya-Lenovo-G570:04553] *** End of error message ***
[pradnya-Lenovo-G570:04552] [ 0] [0x57c40c]
[pradnya-Lenovo-G570:04552] [ 1] /lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x126e37]
[pradnya-Lenovo-G570:04552] [ 2] lst() [0x8048871]
[pradnya-Lenovo-G570:04552] *** End of error message ***
[pradnya-Lenovo-G570:04554] [ 0] [0xd0740c]
[pradnya-Lenovo-G570:04554] [ 1] /lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x27de37]
[pradnya-Lenovo-G570:04554] [ 2] lst() [0x8048871]
[pradnya-Lenovo-G570:04554] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 2 with PID 4553 on node pradnya-Lenovo-G570 exited on signal 11 (Segmentation fault)

So please guide me; see the given attachment.

Thank you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: last.c
Type: text/x-csrc
Size: 2023 bytes
Desc: not available
URL: 

From jhammond at alcf.anl.gov Tue Jun 25 07:19:12 2013
From: jhammond at alcf.anl.gov (Jeff Hammond)
Date: Tue, 25 Jun 2013 07:19:12 -0500 (CDT)
Subject: [mpich-discuss] installing mpich-1.2.6 to create libmpich.so.1.0
In-Reply-To: 
Message-ID: <410161671.7761991.1372162752362.JavaMail.root@alcf.anl.gov>

Try MPICH 3.0.4. MPICH 1.2.6 is ancient and not supported.

This is how I build MPICH 3.0.4. You will have to edit the prefix option appropriately.

wget http://www.mpich.org/static/downloads/3.0.4/mpich-3.0.4.tar.gz
tar -xzf mpich-3.0.4.tar.gz
cd mpich-3.0.4
mkdir build-gcc
cd build-gcc/
../configure CC=gcc CXX=g++ FC=gfortran F77=gfortran --enable-fc --enable-f77 --enable-threads --enable-g=dbg --with-device=ch3:nemesis --with-pm=hydra --prefix=.
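Once make and make install complete, a minimal program to smoke-test the fresh installation might look like the following. This is a sketch, not part of Jeff's recipe; the file name is illustrative, and it assumes the install prefix's bin directory is on PATH so the newly built mpicc and mpiexec are picked up:

/* hello_mpi.c -- minimal smoke test for a fresh MPICH install (illustrative).
 *   mpicc hello_mpi.c -o hello_mpi
 *   mpiexec -n 4 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this rank's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */
    printf("hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}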
Jeff

----- Original Message -----
From: "Andre Pattantyus"
To: discuss at mpich.org
Sent: Tuesday, June 25, 2013 3:15:55 AM
Subject: [mpich-discuss] installing mpich-1.2.6 to create libmpich.so.1.0

Hello, I am not familiar with installing MPICH and am having problems with
installation. I have found from searching online that mpich-1.2.6 produces
libmpich.so.1.0, which I require to run a certain program in parallel. I am
unable to follow the installation documentation I have found online for this
version, because when I configure for ch_p4mpd I get an error when I run make.
Therefore I just run ./configure and make, but this does not produce my
required libmpich.so.1.0. What do I need to specify prior to either configure
or make in order to build this? I am building on a linux86-64 with the
pgi/10.2 compiler.

-- 
Andre Pattantyus
Graduate Student Research Assistant
Department of Meteorology
University of Hawaii at Manoa
2525 Correa Rd, HIG 350
Honolulu, HI 96822
Phone: (845) 264-3582

_______________________________________________
discuss mailing list discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss

From wbland at mcs.anl.gov Tue Jun 25 07:39:48 2013
From: wbland at mcs.anl.gov (Wesley Bland)
Date: Tue, 25 Jun 2013 07:39:48 -0500
Subject: [mpich-discuss] discuss Digest, Vol 8, Issue 40
In-Reply-To: 
References: 
Message-ID: 

The problem is that all of the processes other than rank 0 are not seeing the
data that is entered on the command line. They're falling directly through to
your receive call, and therefore their per_process_data is 0 and they aren't
allocating a buffer for the data that needs to be received.

The usual way of getting input from a user is not via command-line input in a
parallel program, but via a file in the filesystem. If the file is shared
among all of the MPI processes (such as with a shared file system like NFS),
then they can all read it directly. If not, then rank 0 could read the file
and send all the data to the other ranks individually, which I believe is
more like what you're trying to do. The problem is that you need to tell the
other ranks how much data will be sent first, so they can allocate an
appropriate buffer.

By the way, in the future when you see the error Segmentation fault, you can
usually debug things yourself very simply by using 'gdb' or 'ddd'. Often your
program will generate a core file (something like 1234.core) that you can
pass into 'gdb' or 'ddd' to discover where the problems are. If you aren't
familiar with those debugging programs, there should be plenty of tutorials
on the web that can help you familiarize yourself.

Thanks,
Wesley

On Jun 25, 2013, at 3:33 AM, pradnya dixit wrote:

> 
> Hello,
> 
> I am trying to send chunk of data using send and recv funcions in mpi.. 
but facing so many problem or errors like > > > Elements of rank 1 are: > [pradnya-Lenovo-G570:04553] *** Process received signal *** > [pradnya-Lenovo-G570:04553] Signal: Segmentation fault (11) > [pradnya-Lenovo-G570:04553] Signal code: Address not mapped (1) > [pradnya-Lenovo-G570:04553] Failing at address: 0x8be8950 > [pradnya-Lenovo-G570:04554] *** Process received signal *** > [pradnya-Lenovo-G570:04554] Signal: Segmentation fault (11) > [pradnya-Lenovo-G570:04554] Signal code: Address not mapped (1) > [pradnya-Lenovo-G570:04554] Failing at address: 0x8be8960 > [pradnya-Lenovo-G570:04552] *** Process received signal *** > [pradnya-Lenovo-G570:04552] Signal: Segmentation fault (11) > [pradnya-Lenovo-G570:04552] Signal code: Address not mapped (1) > [pradnya-Lenovo-G570:04552] Failing at address: 0x8be86f8 > [pradnya-Lenovo-G570:04553] [ 0] [0xff440c] > [pradnya-Lenovo-G570:04553] [ 1] /lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0xc66e37] > [pradnya-Lenovo-G570:04553] [ 2] lst() [0x8048871] > [pradnya-Lenovo-G570:04553] *** End of error message *** > [pradnya-Lenovo-G570:04552] [ 0] [0x57c40c] > [pradnya-Lenovo-G570:04552] [ 1] /lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x126e37] > [pradnya-Lenovo-G570:04552] [ 2] lst() [0x8048871] > [pradnya-Lenovo-G570:04552] *** End of error message *** > [pradnya-Lenovo-G570:04554] [ 0] [0xd0740c] > [pradnya-Lenovo-G570:04554] [ 1] /lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x27de37] > [pradnya-Lenovo-G570:04554] [ 2] lst() [0x8048871] > [pradnya-Lenovo-G570:04554] *** End of error message *** > -------------------------------------------------------------------------- > mpirun noticed that process rank 2 with PID 4553 on node pradnya-Lenovo-G570 exited on signal 11 (Segmentation fault) > > > so plz guide me. > check given attachment. > > thank you. > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss From wbland at mcs.anl.gov Tue Jun 25 07:46:10 2013 From: wbland at mcs.anl.gov (Wesley Bland) Date: Tue, 25 Jun 2013 07:46:10 -0500 Subject: [mpich-discuss] installing mpich-1.2.6 to create libmpich.so.1.0 In-Reply-To: <410161671.7761991.1372162752362.JavaMail.root@alcf.anl.gov> References: <410161671.7761991.1372162752362.JavaMail.root@alcf.anl.gov> Message-ID: <1932EA9F-FAE3-4A71-AB2B-1BD6B55561A7@mcs.anl.gov> Alternatively, if you have access to a package manager on your machine, you can just do something like apt-get install mpich2. The version available from most package managers is a little old, but it does simplify the installation process. If you do need to build from source, make sure you read through the README file in the top level directory of MPICH. It has lots of instructions for installation and will show you the best way to report issues to this mailing list so we can help you most efficiently. Let us know if you're still having trouble after you try those things. Wesley On Jun 25, 2013, at 7:19 AM, Jeff Hammond wrote: > Try MPICH 3.0.4. MPICH 1.2.6 is ancient and not supported. > > This is how I build MPICH 3.0.4. You will have to edit the prefix option appropriately. 
> > wget http://www.mpich.org/static/downloads/3.0.4/mpich-3.0.4.tar.gz > tar -xzf mpich-3.0.4.tar.gz > cd mpich-3.0.4 > mkdir build-gcc > cd build-gcc/ > ../configure CC=gcc CXX=g++ FC=gfortran F77=gfortran --enable-fc --enable-f77 --enable-threads --enable-g=dbg --with-device=ch3:nemesis --with-pm=hydra --prefix=. > > Jeff > > ----- Original Message ----- > From: "Andre Pattantyus" > To: discuss at mpich.org > Sent: Tuesday, June 25, 2013 3:15:55 AM > Subject: [mpich-discuss] installing mpich-1.2.6 to create libmpich.so.1.0 > > > > > Hello, > > I am not familiar with installing mpich and am having problems with installation. I have found from searching online that mpich-1.2.6 produces libmpich.so.1.0 which I require run a certain program in parallel. I am unable to follow the installation documentation I have found online for this version because when i configure for ch_p4mpd I get an error when I run make. Therefore I just run ./configure and make but this does not produce my required libmpich.so.1.0. What do I need to specify prior to either configure or make in order to build this? I am building on a linux86-64 with pgi/10.2 compiler. > > > > -- > Andre Pattantyus > Graduate Student Research Assistant > Department of Meteorology > University of Hawaii at Manoa > 2525 Correa Rd, HIG 350 > Honolulu, HI 96822 > Phone: (845) 264-3582 > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss From matthieu.dorier at irisa.fr Tue Jun 25 10:10:48 2013 From: matthieu.dorier at irisa.fr (Matthieu Dorier) Date: Tue, 25 Jun 2013 17:10:48 +0200 (CEST) Subject: [mpich-discuss] Problem with ADIOI_Info_get (MPI_Info_get) from the ADIO layer In-Reply-To: <1911044962.2141864.1372106781445.JavaMail.root@irisa.fr> Message-ID: <596002285.2445235.1372173048387.JavaMail.root@irisa.fr> Hi, I found the solution by investigating the code so I'll post it here in case it can be useful to someone else: When opening a file, ADIOI_xxx_SetInfo is called to copy the info structure. Unless overwritten by the ADIO backend, it's ADIO_GEN_SetInfo (in src/mpi/romio/adio/common/ad_hints.c) that ends up being called and this function only copies the hints that it knows (e.g. cb_buffer_size). So the solution consists in changing ADIO_GEN_SetInfo or (more appropriately) provide an implementation of ADIOI_xxx_SetInfo that copies custom parameters and the called ADIO_GEN_SetInfo. Matthieu ----- Mail original ----- > De: "Matthieu Dorier" > ?: discuss at mpich.org > Envoy?: Lundi 24 Juin 2013 15:46:21 > Objet: [mpich-discuss] Problem with ADIOI_Info_get (MPI_Info_get) > from the ADIO layer > Hi, > I'm implementing an ADIO backend and I'm having problems retrieving > values from the MPI_Info attached to the file. > On the application side, I have something like this: > MPI_Info_create(&info); > MPI_Info_set(info,"cb_buffer_size","64"); > MPI_Info_set(info,"xyz","3"); > MPI_File_open(comm, "file", > MPI_MODE_WRONLY | MPI_MODE_CREATE, info, &fh); > then a call to a MPI_File_write, which ends up calling my > implementation of ADIOI_xxx_WriteContig. 
In this function, I try to > read back these info: > int info_flag; > char* value = (char *) > ADIOI_Malloc((MPI_MAX_INFO_VAL+1)*sizeof(char)); > ADIOI_Info_get(fd->info, "xyz", MPI_MAX_INFO_VAL, value,&info_flag); > if(info_flag) fprintf(stderr,"xyz = %d\n",atoi(value)); > ADIOI_Info_get(fd->info, "cb_buffer_size", MPI_MAX_INFO_VAL, > value,&info_flag); > if(info_flag) fprintf(stderr,"cb_buffer_size = %d\n",atoi(value)); > I can get the 64 associated to the cb_buffer_size key (which is a > reserved hint), but I don't get the second value. > Where does the problem come from? > I tried everything: re-ordering the calls, changing the name of the > key, calling MPI_Info_get in the application to check that the > values are properly set (they are)... > Thanks > Matthieu Dorier > PhD student at ENS Cachan Brittany and IRISA > http://people.irisa.fr/Matthieu.Dorier > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From akp4221 at hawaii.edu Tue Jun 25 14:54:19 2013 From: akp4221 at hawaii.edu (Andre Pattantyus) Date: Tue, 25 Jun 2013 09:54:19 -1000 Subject: [mpich-discuss] installing mpich-1.2.6 to create libmpich.so.1.0 In-Reply-To: <410161671.7761991.1372162752362.JavaMail.root@alcf.anl.gov> References: <410161671.7761991.1372162752362.JavaMail.root@alcf.anl.gov> Message-ID: Jeff, Are you saying that this file will only build with a gnu compiler? or can I build with pgf compiler with mpich 3.0.4? -Andre On Tue, Jun 25, 2013 at 2:19 AM, Jeff Hammond wrote: > Try MPICH 3.0.4. MPICH 1.2.6 is ancient and not supported. > > This is how I build MPICH 3.0.4. You will have to edit the prefix option > appropriately. > > wget http://www.mpich.org/static/downloads/3.0.4/mpich-3.0.4.tar.gz > tar -xzf mpich-3.0.4.tar.gz > cd mpich-3.0.4 > mkdir build-gcc > cd build-gcc/ > ../configure CC=gcc CXX=g++ FC=gfortran F77=gfortran --enable-fc > --enable-f77 --enable-threads --enable-g=dbg --with-device=ch3:nemesis > --with-pm=hydra --prefix=. > > Jeff > > ----- Original Message ----- > From: "Andre Pattantyus" > To: discuss at mpich.org > Sent: Tuesday, June 25, 2013 3:15:55 AM > Subject: [mpich-discuss] installing mpich-1.2.6 to create libmpich.so.1.0 > > > > > Hello, > > I am not familiar with installing mpich and am having problems with > installation. I have found from searching online that mpich-1.2.6 produces > libmpich.so.1.0 which I require run a certain program in parallel. I am > unable to follow the installation documentation I have found online for > this version because when i configure for ch_p4mpd I get an error when I > run make. Therefore I just run ./configure and make but this does not > produce my required libmpich.so.1.0. What do I need to specify prior to > either configure or make in order to build this? I am building on a > linux86-64 with pgi/10.2 compiler. 
> > > > -- > Andre Pattantyus > Graduate Student Research Assistant > Department of Meteorology > University of Hawaii at Manoa > 2525 Correa Rd, HIG 350 > Honolulu, HI 96822 > Phone: (845) 264-3582 > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > -- Andre Pattantyus Graduate Student Research Assistant Department of Meteorology University of Hawaii at Manoa 2525 Correa Rd, HIG 350 Honolulu, HI 96822 Phone: (845) 264-3582 -------------- next part -------------- An HTML attachment was scrubbed... URL: From balaji at mcs.anl.gov Tue Jun 25 15:12:20 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Tue, 25 Jun 2013 15:12:20 -0500 Subject: [mpich-discuss] installing mpich-1.2.6 to create libmpich.so.1.0 In-Reply-To: References: <410161671.7761991.1372162752362.JavaMail.root@alcf.anl.gov> Message-ID: <51C9F9A4.3090503@mcs.anl.gov> Any compiler is fine. But use the latest version of mpich. You don't need all the configure flags that Jeff sent. In fact, I'd recommend that you not use them. For example, --enable-g is only for developers. My recommendation is just this: ./configure --prefix= CC=pgcc CXX=pgCC F77=pgf77 FC=pgfc -- Pavan On 06/25/2013 02:54 PM, Andre Pattantyus wrote: > Jeff, > > Are you saying that this file will only build with a gnu compiler? or > can I build with pgf compiler with mpich 3.0.4? > > -Andre > > > On Tue, Jun 25, 2013 at 2:19 AM, Jeff Hammond > wrote: > > Try MPICH 3.0.4. MPICH 1.2.6 is ancient and not supported. > > This is how I build MPICH 3.0.4. You will have to edit the prefix > option appropriately. > > wget http://www.mpich.org/static/downloads/3.0.4/mpich-3.0.4.tar.gz > tar -xzf mpich-3.0.4.tar.gz > cd mpich-3.0.4 > mkdir build-gcc > cd build-gcc/ > ../configure CC=gcc CXX=g++ FC=gfortran F77=gfortran --enable-fc > --enable-f77 --enable-threads --enable-g=dbg > --with-device=ch3:nemesis --with-pm=hydra --prefix=. > > Jeff > > ----- Original Message ----- > From: "Andre Pattantyus" > > To: discuss at mpich.org > Sent: Tuesday, June 25, 2013 3:15:55 AM > Subject: [mpich-discuss] installing mpich-1.2.6 to create > libmpich.so.1.0 > > > > > Hello, > > I am not familiar with installing mpich and am having problems with > installation. I have found from searching online that mpich-1.2.6 > produces libmpich.so.1.0 which I require run a certain program in > parallel. I am unable to follow the installation documentation I > have found online for this version because when i configure for > ch_p4mpd I get an error when I run make. Therefore I just run > ./configure and make but this does not produce my required > libmpich.so.1.0. What do I need to specify prior to either configure > or make in order to build this? I am building on a linux86-64 with > pgi/10.2 compiler. 
> > > > -- > Andre Pattantyus > Graduate Student Research Assistant > Department of Meteorology > University of Hawaii at Manoa > 2525 Correa Rd, HIG 350 > Honolulu, HI 96822 > Phone: (845) 264-3582 > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > > > > > -- > Andre Pattantyus > Graduate Student Research Assistant > Department of Meteorology > University of Hawaii at Manoa > 2525 Correa Rd, HIG 350 > Honolulu, HI 96822 > Phone: (845) 264-3582 > > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss > -- Pavan Balaji http://www.mcs.anl.gov/~balaji From jeff.science at gmail.com Tue Jun 25 15:43:05 2013 From: jeff.science at gmail.com (Jeff Hammond) Date: Tue, 25 Jun 2013 15:43:05 -0500 Subject: [mpich-discuss] installing mpich-1.2.6 to create libmpich.so.1.0 In-Reply-To: <51C9F9A4.3090503@mcs.anl.gov> References: <410161671.7761991.1372162752362.JavaMail.root@alcf.anl.gov> <51C9F9A4.3090503@mcs.anl.gov> Message-ID: Yeah, everything Pavan said. I forgot what --enable-g meant. If you intend to use a debugger (because you're developing an application that uses MPI), you might find --enable-g=dbg useful, but --enable-g is total overkill for users. Jeff On Tue, Jun 25, 2013 at 3:12 PM, Pavan Balaji wrote: > > Any compiler is fine. But use the latest version of mpich. You don't need > all the configure flags that Jeff sent. In fact, I'd recommend that you not > use them. For example, --enable-g is only for developers. > > My recommendation is just this: > > ./configure --prefix= CC=pgcc CXX=pgCC F77=pgf77 FC=pgfc > > -- Pavan > > > On 06/25/2013 02:54 PM, Andre Pattantyus wrote: >> >> Jeff, >> >> Are you saying that this file will only build with a gnu compiler? or >> can I build with pgf compiler with mpich 3.0.4? >> >> -Andre >> >> >> On Tue, Jun 25, 2013 at 2:19 AM, Jeff Hammond > > wrote: >> >> Try MPICH 3.0.4. MPICH 1.2.6 is ancient and not supported. >> >> This is how I build MPICH 3.0.4. You will have to edit the prefix >> option appropriately. >> >> wget http://www.mpich.org/static/downloads/3.0.4/mpich-3.0.4.tar.gz >> tar -xzf mpich-3.0.4.tar.gz >> cd mpich-3.0.4 >> mkdir build-gcc >> cd build-gcc/ >> ../configure CC=gcc CXX=g++ FC=gfortran F77=gfortran --enable-fc >> --enable-f77 --enable-threads --enable-g=dbg >> --with-device=ch3:nemesis --with-pm=hydra --prefix=. >> >> Jeff >> >> ----- Original Message ----- >> From: "Andre Pattantyus" > > >> To: discuss at mpich.org >> Sent: Tuesday, June 25, 2013 3:15:55 AM >> Subject: [mpich-discuss] installing mpich-1.2.6 to create >> libmpich.so.1.0 >> >> >> >> >> Hello, >> >> I am not familiar with installing mpich and am having problems with >> installation. I have found from searching online that mpich-1.2.6 >> produces libmpich.so.1.0 which I require run a certain program in >> parallel. I am unable to follow the installation documentation I >> have found online for this version because when i configure for >> ch_p4mpd I get an error when I run make. Therefore I just run >> ./configure and make but this does not produce my required >> libmpich.so.1.0. 
What do I need to specify prior to either configure >> or make in order to build this? I am building on a linux86-64 with >> pgi/10.2 compiler. >> >> >> >> -- >> Andre Pattantyus >> Graduate Student Research Assistant >> Department of Meteorology >> University of Hawaii at Manoa >> 2525 Correa Rd, HIG 350 >> Honolulu, HI 96822 >> Phone: (845) 264-3582 >> >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss >> >> >> >> >> -- >> Andre Pattantyus >> Graduate Student Research Assistant >> Department of Meteorology >> University of Hawaii at Manoa >> 2525 Correa Rd, HIG 350 >> Honolulu, HI 96822 >> Phone: (845) 264-3582 >> >> >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss >> > > -- > Pavan Balaji > http://www.mcs.anl.gov/~balaji > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss -- Jeff Hammond jeff.science at gmail.com From balaji at mcs.anl.gov Tue Jun 25 15:46:27 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Tue, 25 Jun 2013 15:46:27 -0500 Subject: [mpich-discuss] installing mpich-1.2.6 to create libmpich.so.1.0 In-Reply-To: References: <410161671.7761991.1372162752362.JavaMail.root@alcf.anl.gov> <51C9F9A4.3090503@mcs.anl.gov> Message-ID: <51CA01A3.3090004@mcs.anl.gov> Jeff, On 06/25/2013 03:43 PM, Jeff Hammond wrote: > Yeah, everything Pavan said. I forgot what --enable-g meant. If you > intend to use a debugger (because you're developing an application > that uses MPI), you might find --enable-g=dbg useful, but --enable-g > is total overkill for users. --enable-g=dbg adds debug symbols to the MPI library. To debug the applications, you'll need to set CFLAGS=-g or just compile your application with "mpicc -g". -- Pavan -- Pavan Balaji http://www.mcs.anl.gov/~balaji From robl at mcs.anl.gov Wed Jun 26 10:21:36 2013 From: robl at mcs.anl.gov (Rob Latham) Date: Wed, 26 Jun 2013 10:21:36 -0500 Subject: [mpich-discuss] Problem with ADIOI_Info_get (MPI_Info_get) from the ADIO layer In-Reply-To: <596002285.2445235.1372173048387.JavaMail.root@irisa.fr> References: <1911044962.2141864.1372106781445.JavaMail.root@irisa.fr> <596002285.2445235.1372173048387.JavaMail.root@irisa.fr> Message-ID: <20130626152136.GC3154@mcs.anl.gov> On Tue, Jun 25, 2013 at 05:10:48PM +0200, Matthieu Dorier wrote: > Hi, > > I found the solution by investigating the code so I'll post it here in case it can be useful to someone else: > > When opening a file, ADIOI_xxx_SetInfo is called to copy the info structure. Unless overwritten by the ADIO backend, it's ADIO_GEN_SetInfo (in src/mpi/romio/adio/common/ad_hints.c) that ends up being called and this function only copies the hints that it knows (e.g. cb_buffer_size). So the solution consists in changing ADIO_GEN_SetInfo or (more appropriately) provide an implementation of ADIOI_xxx_SetInfo that copies custom parameters and the called ADIO_GEN_SetInfo. 
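A minimal sketch of the pattern described there -- copy the custom hints, then defer to the generic code -- for a hypothetical backend named "xyz". The header ad_xyz.h, the function name, and the hint key are placeholders taken from the thread's example; this is an illustration of the approach, not actual ROMIO source:

/* ad_xyz_hints.c -- illustrative sketch of a backend-specific SetInfo hook
 * for a hypothetical ADIO backend "xyz"; not actual ROMIO source. */
#include "ad_xyz.h"   /* hypothetical backend header (would pull in adio.h) */

void ADIOI_XYZ_SetInfo(ADIO_File fd, MPI_Info users_info, int *error_code)
{
    char value[MPI_MAX_INFO_VAL + 1];
    int flag;

    /* Copy the custom hint into fd->info so later reads of fd->info see it... */
    if (users_info != MPI_INFO_NULL) {
        ADIOI_Info_get(users_info, "xyz", MPI_MAX_INFO_VAL, value, &flag);
        if (flag)
            ADIOI_Info_set(fd->info, "xyz", value);
    }

    /* ...then let the generic implementation handle the standard hints
     * (and set *error_code). */
    ADIOI_GEN_SetInfo(fd, users_info, error_code);
}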
Yeah, consider the way ad_pvfs2 deals with this: the function pointers in
ad_pvfs2.c point to ADIOI_PVFS2_SetInfo. In
src/mpi/romio/adio/ad_pvfs2/ad_pvfs2_hints.c, all the PVFS2-specific hints
are processed, then it calls ADIOI_GEN_SetInfo. (Now that I look at this,
maybe the order should be reversed.)

==rob

> Matthieu
>
> ----- Original Message -----
> > From: "Matthieu Dorier"
> > To: discuss at mpich.org
> > Sent: Monday, June 24, 2013 15:46:21
> > Subject: [mpich-discuss] Problem with ADIOI_Info_get (MPI_Info_get)
> > from the ADIO layer

> > Hi,
> > I'm implementing an ADIO backend and I'm having problems retrieving
> > values from the MPI_Info attached to the file.
> > On the application side, I have something like this:
> > MPI_Info_create(&info);
> > MPI_Info_set(info,"cb_buffer_size","64");
> > MPI_Info_set(info,"xyz","3");
> > MPI_File_open(comm, "file",
> > MPI_MODE_WRONLY | MPI_MODE_CREATE, info, &fh);
> > then a call to MPI_File_write, which ends up calling my
> > implementation of ADIOI_xxx_WriteContig. In this function, I try to
> > read back these info:
> > int info_flag;
> > char* value = (char *)
> > ADIOI_Malloc((MPI_MAX_INFO_VAL+1)*sizeof(char));
> > ADIOI_Info_get(fd->info, "xyz", MPI_MAX_INFO_VAL, value, &info_flag);
> > if(info_flag) fprintf(stderr,"xyz = %d\n",atoi(value));
> > ADIOI_Info_get(fd->info, "cb_buffer_size", MPI_MAX_INFO_VAL,
> > value, &info_flag);
> > if(info_flag) fprintf(stderr,"cb_buffer_size = %d\n",atoi(value));
> > I can get the 64 associated with the cb_buffer_size key (which is a
> > reserved hint), but I don't get the second value.
> > Where does the problem come from?
> > I tried everything: re-ordering the calls, changing the name of the
> > key, calling MPI_Info_get in the application to check that the
> > values are properly set (they are)...
> > Thanks
> > Matthieu Dorier
> > PhD student at ENS Cachan Brittany and IRISA
> > http://people.irisa.fr/Matthieu.Dorier
> > _______________________________________________
> > discuss mailing list discuss at mpich.org
> > To manage subscription options or unsubscribe:
> > https://lists.mpich.org/mailman/listinfo/discuss
> _______________________________________________
> discuss mailing list discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

-- 
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA

From jayesh at mcs.anl.gov Wed Jun 26 12:38:39 2013
From: jayesh at mcs.anl.gov (Jayesh Krishna)
Date: Wed, 26 Jun 2013 12:38:39 -0500 (CDT)
Subject: [mpich-discuss] install + config on windows
In-Reply-To: 
Message-ID: <1705457099.8240817.1372268319266.JavaMail.root@mcs.anl.gov>

Hi,
Please note that upgrading the user who installs the software (Utilisateur)
to an administrator is not enough to install MPICH2 correctly. You need to
make sure that you install MPICH2 from an administrator command prompt (the
MPICH2 installer's guide should have the details). Please follow the steps
below to install MPICH2:

# Uninstall any existing versions of MPICH2 on your system
# Right-click on the Windows command prompt icon and select "Run as administrator" (now you have an administrator command prompt)
# From within the command prompt, type "msiexec /i MPICH2-INSTALLER-FILE.msi" (where MPICH2-INSTALLER-FILE.msi is the msi file downloaded from the MPICH2 website) to install MPICH2.
Regards, Jayesh ----- Original Message ----- From: "spatiogis" To: "Jayesh Krishna" Sent: Wednesday, June 19, 2013 1:13:48 AM Subject: Re: [mpich-discuss] install + config on windows Hello, I am coming back to this post to try to see what is happening. Utilisateur is now the admin on my machine. In wmpiregister, I just let the account by default (empty), and the password "behappy" Actually, the register command seems to have worked correctly. Anyway in wmpiconfig, there is this line : "nbpc: MPICH2 not installed or unable to query the host" Is it possible to reboot the host and the password ? Regards, Benoit > Hi, > From the log output it looks like credentials (password) for > Utilisateur was not correct. > Is Utilisateur a valid Windows user on your machine? Have you > registered the username/password correctly (Try re-registering the > username+password by typing "mpiexec -register" at the command prompt)? > > Regards, > Jayesh > > ----- Original Message ----- > From: "spatiogis" > To: discuss at mpich.org > Sent: Friday, May 3, 2013 11:58:00 AM > Subject: Re: [mpich-discuss] install + config on windows > > Hello, > > for this command : > > # C:\Progra~1\MPICH2\bin\mpiexec -verbose -n 2 > C:\Progra~1\MPICH2\examples\cpi.exe > > result : > > ....../SMPDU_Sock_post_readv > ...../SMPDU_Sock_post_read > ..../smpd_handle_op_connect > ....sock_waiting for the next event. > ....\SMPDU_Sock_wait > ..../SMPDU_Sock_wait > ....SOCK_OP_READ event.error = 0, result = 0, context=left > ....\smpd_handle_op_read > .....\smpd_state_reading_challenge_string > ......read challenge string: '1.4.1p1 18467' > ......\smpd_verify_version > ....../smpd_verify_version > ......Verification of smpd version succeeded > ......\smpd_hash > ....../smpd_hash > ......\SMPDU_Sock_post_write > .......\SMPDU_Sock_post_writev > ......./SMPDU_Sock_post_writev > ....../SMPDU_Sock_post_write > ...../smpd_state_reading_challenge_string > ..../smpd_handle_op_read > ....sock_waiting for the next event. > ....\SMPDU_Sock_wait > ..../SMPDU_Sock_wait > ....SOCK_OP_WRITE event.error = 0, result = 0, context=left > ....\smpd_handle_op_write > .....\smpd_state_writing_challenge_response > ......wrote challenge response: 'dafd1d07c1e6e9cb5fae968403d0d933' > ......\SMPDU_Sock_post_read > .......\SMPDU_Sock_post_readv > ......./SMPDU_Sock_post_readv > ....../SMPDU_Sock_post_read > ...../smpd_state_writing_challenge_response > ..../smpd_handle_op_write > ....sock_waiting for the next event. > ....\SMPDU_Sock_wait > ..../SMPDU_Sock_wait > ....SOCK_OP_READ event.error = 0, result = 0, context=left > ....\smpd_handle_op_read > .....\smpd_state_reading_connect_result > ......read connect result: 'SUCCESS' > ......\SMPDU_Sock_post_write > .......\SMPDU_Sock_post_writev > ......./SMPDU_Sock_post_writev > ....../SMPDU_Sock_post_write > ...../smpd_state_reading_connect_result > ..../smpd_handle_op_read > ....sock_waiting for the next event. > ....\SMPDU_Sock_wait > ..../SMPDU_Sock_wait > ....SOCK_OP_WRITE event.error = 0, result = 0, context=left > ....\smpd_handle_op_write > .....\smpd_state_writing_process_session_request > ......wrote process session request: 'process' > ......\SMPDU_Sock_post_read > .......\SMPDU_Sock_post_readv > ......./SMPDU_Sock_post_readv > ....../SMPDU_Sock_post_read > ...../smpd_state_writing_process_session_request > ..../smpd_handle_op_write > ....sock_waiting for the next event. 
> ....\SMPDU_Sock_wait > ..../SMPDU_Sock_wait > ....SOCK_OP_READ event.error = 0, result = 0, context=left > ....\smpd_handle_op_read > .....\smpd_state_reading_cred_request > ......read cred request: 'credentials' > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > .......\smpd_option_on > ........\smpd_get_smpd_data > .........\smpd_get_smpd_data_from_environment > ........./smpd_get_smpd_data_from_environment > .........\smpd_get_smpd_data_default > ........./smpd_get_smpd_data_default > .........Unable to get the data for the key 'nocache' > ......../smpd_get_smpd_data > ......./smpd_option_on > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > ......\SMPDU_Sock_post_write > .......\SMPDU_Sock_post_writev > ......./SMPDU_Sock_post_writev > ....../SMPDU_Sock_post_write > ...../smpd_handle_op_read > .....sock_waiting for the next event. > .....\SMPDU_Sock_wait > ...../SMPDU_Sock_wait > .....SOCK_OP_WRITE event.error = 0, result = 0, context=left > .....\smpd_handle_op_write > ......\smpd_state_writing_cred_ack_yes > .......wrote cred request yes ack. > .......\SMPDU_Sock_post_write > ........\SMPDU_Sock_post_writev > ......../SMPDU_Sock_post_writev > ......./SMPDU_Sock_post_write > ....../smpd_state_writing_cred_ack_yes > ...../smpd_handle_op_write > .....sock_waiting for the next event. 
> .....\SMPDU_Sock_wait > ...../SMPDU_Sock_wait > .....SOCK_OP_WRITE event.error = 0, result = 0, context=left > .....\smpd_handle_op_write > ......\smpd_state_writing_account > .......wrote account: 'Utilisateur' > .......\smpd_encrypt_data > ......./smpd_encrypt_data > .......\SMPDU_Sock_post_write > ........\SMPDU_Sock_post_writev > ......../SMPDU_Sock_post_writev > ......./SMPDU_Sock_post_write > ....../smpd_state_writing_account > ...../smpd_handle_op_write > .....sock_waiting for the next event. > .....\SMPDU_Sock_wait > ...../SMPDU_Sock_wait > .....SOCK_OP_WRITE event.error = 0, result = 0, context=left > .....\smpd_handle_op_write > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > .......\smpd_hide_string_arg > ........\first_token > ......../first_token > ........\compare_token > ......../compare_token > ........\next_token > .........\first_token > ........./first_token > .........\first_token > ........./first_token > ......../next_token > ......./smpd_hide_string_arg > ......./smpd_hide_string_arg > .......\SMPDU_Sock_post_read > ........\SMPDU_Sock_post_readv > ......../SMPDU_Sock_post_readv > ......./SMPDU_Sock_post_read > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > ...../smpd_handle_op_write > .....sock_waiting for the next event. 
> .....\SMPDU_Sock_wait > ...../SMPDU_Sock_wait > .....SOCK_OP_READ event.error = 0, result = 0, context=left > .....\smpd_handle_op_read > ......\smpd_state_reading_process_result > .......read process session result: 'FAIL' > .......\smpd_hide_string_arg > ........\first_token > ......../first_token > ........\compare_token > ......../compare_token > ........\next_token > .........\first_token > ........./first_token > .........\first_token > ........./first_token > ......../next_token > ......./smpd_hide_string_arg > ......./smpd_hide_string_arg > .......\smpd_hide_string_arg > ........\first_token > ......../first_token > ........\compare_token > ......../compare_token > ........\next_token > .........\first_token > ........./first_token > .........\first_token > ........./first_token > ......../next_token > ......./smpd_hide_string_arg > ......./smpd_hide_string_arg > Credentials for Utilisateur rejected connecting to Benoit > .......process session rejected > .......\SMPDU_Sock_post_close > ........\SMPDU_Sock_post_read > .........\SMPDU_Sock_post_readv > ........./SMPDU_Sock_post_readv > ......../SMPDU_Sock_post_read > ......./SMPDU_Sock_post_close > .......\smpd_post_abort_command > ........\smpd_create_command > .........\smpd_init_command > ........./smpd_init_command > ......../smpd_create_command > ........\smpd_add_command_arg > ......../smpd_add_command_arg > ........\smpd_command_destination > .........0 -> 0 : returning NULL context > ......../smpd_command_destination > Aborting: Unable to connect to Benoit > ......./smpd_post_abort_command > .......\smpd_exit > ........\smpd_kill_all_processes > ......../smpd_kill_all_processes > ........\smpd_finalize_drive_maps > ......../smpd_finalize_drive_maps > ........\smpd_dbs_finalize > ......../smpd_dbs_finalize > ........\SMPDU_Sock_finalize > ......../SMPDU_Sock_finalize > > C:\Users\Utilisateur> >> Hi, >> Looks like you missed the "-" before the status ("smpd -status" not >> "smpd status") argument. >> It also looks like you have multiple MPI libraries installed in your >> system. Try running this command (full path to mpiexec and smpd), >> >> # C:\Progra~1\MPICH2\bin\smpd -status >> >> # C:\Progra~1\MPICH2\bin\mpiexec -verbose -n 2 >> C:\Progra~1\MPICH2\examples\cpi.exe >> >> >> Regards, >> Jayesh >> >> ----- Original Message ----- >> From: "spatiogis" >> To: "Jayesh Krishna" >> Sent: Friday, May 3, 2013 11:05:34 AM >> Subject: Re: [mpich-discuss] install + config on windows >> >> Hello, >> >> C:\Users\Utilisateur>smpd status >> Unexpected parameters: status >> >> C:\Users\Utilisateur>mpiexec -verbose -n 2 >> C:\Progra~1\MPICH2\examples\cpi.exe >> Unknown option: -verbose >> >> ----------------------------------------------------------------------------- >> C:\Program Files\MPICH2\examples>mpiexec -verbose -n 2 cpi.exe >> Unknown option: -verbose >> >> C:\Program Files\MPICH2\examples>smpd status >> Unexpected parameters: status >> ----------------------------------------------------------------------------- >> >> regards, Ben >> >>> Hi, >>> Ok. Please send us the output of the following commands, >>> >>> # smpd -status >>> # mpiexec -verbose -n 2 C:\Progra~1\MPICH2\examples\cpi.exe >>> >>> Please copy-paste the command and the complete output in your email. 
>>> >>> Regards, >>> Jayesh >>> >>> >>> ----- Original Message ----- >>> From: "spatiogis" >>> To: discuss at mpich.org >>> Sent: Friday, May 3, 2013 1:46:53 AM >>> Subject: Re: [mpich-discuss] install + config on windows >>> >>> Hello >>> >>> >>>> (PS: I am assuming from your reply in the previous email that you can >>>> run a command like "mpiexec -n 2 C:\Progra~1\MPICH2\examples\cpi.exe" >>>> correctly) >>> >>> In fact this command doesn't run. >>> >>> The message is this one >>> >>> [01:11728]....ERROR:unable to read the cmd header on the pmi context, >>> Error = -1 >>> >>> Ben >>> >>> >>>> ----- Original Message ----- >>>> From: "spatiogis" >>>> To: "Jayesh Krishna" >>>> Sent: Thursday, May 2, 2013 10:48:56 AM >>>> Subject: Re: [mpich-discuss] install + config on windows >>>> >>>> Hello, >>>> >>>>> Hi, >>>>> Are you able to run any other MPI programs? Try running the example >>>>> program, cpi.exe (C:\Program Files\MPICH2\examples\cpi.exe), to make >>>>> sure that your MPICH2 installation works. >>>> >>>> yes it does work >>>> >>>>> Installing MPICH2 on Windows 7 typically requires you to uninstall >>>>> any >>>>> previous versions of MPICH2, launch an administrative command promt >>>>> and >>>>> run "msiexec /i mpich2-installer.msi" to install MPICH2. >>>> >>>> yes it 's been installed like this... >>>> >>>> In wmpiconfig, the message is the following in the 'Get settings' >>>> line. >>>> >>>> Credentials for Utilisateur rejected connecting to host >>>> Aborting: Unable to connect to host >>>> >>>> The software I try to use is Taudem, which is intergrated inside >>>> Qgis. >>>> Launching a taudem process inside Qgis gives the same message. >>>> >>>> >>>>> Regards, >>>>> Jayesh >>>> >>>> Sincerely, Ben >>>> >>>>> >>>>> ----- Original Message ----- >>>>> From: "spatiogis" >>>>> To: discuss at mpich.org >>>>> Sent: Thursday, May 2, 2013 10:08:23 AM >>>>> Subject: Re: [mpich-discuss] install + config on windows >>>>> >>>>> Hello, >>>>> >>>>> in my case Mpich is normally used to run .exe programs. I guess that >>>>> they >>>>> are already compiled... >>>>> The .exe files are integrated into a software, and accessed from >>>>> menus >>>>> inside it. When I run one of the programs, the answer is actually >>>>> "unable >>>>> to query host". >>>>> At the end, the process is not realised. It seems that this 'host' >>>>> question is a problem to the software... >>>>> >>>>> Sincerely, >>>>> >>>>> Ben. >>>>> >>>>> >>>>>> Hi, >>>>>> You can download MPICH2 binaries for Windows at >>>>>> http://www.mpich.org/downloads/ . >>>>>> You need to compile your MPI programs with MPICH2 to make it work. >>>>>> I >>>>>> would recommend recompiling your code after you install MPICH2 (If >>>>>> you >>>>>> have MPI program binaries pre-built with MPICH2 - instead of >>>>>> compiling >>>>>> them on your own - make sure that you install the same version of >>>>>> MPICH2 >>>>>> that was used to build the binaries). >>>>>> The wmpiregister program has a bug and you can ignore this error >>>>>> message ("...unable to query host"). Can you run your MPI program >>>>>> using >>>>>> mpiexec from a command prompt? >>>>>> >>>>>> Regards, >>>>>> Jayesh >>>>>> >>>>>> ----- Original Message ----- >>>>>> From: "spatiogis" >>>>>> To: discuss at mpich.org >>>>>> Sent: Tuesday, April 30, 2013 9:26:35 AM >>>>>> Subject: [mpich-discuss] install + config on windows >>>>>> >>>>>> Hello, >>>>>> >>>>>> I'm not very good at computing, but I would like to install Mpich2 >>>>>> on >>>>>> windows 7 - 64 bits. 
There is only one pc, with one user plus the >>>>>> admin, >>>>>> and a simple core processor. >>>>>> >>>>>> I would like to know if it's mandatory to have compiling softwares >>>>>> with >>>>>> it to make it work, whereas it is asked in this case only to make >>>>>> run >>>>>> another software, and not for compiling (that would maybe save some >>>>>> disk >>>>>> space and simplify the installation) ? >>>>>> >>>>>> My second issue is that I must be missing something about the >>>>>> server >>>>>> configuration. I have installed Mpich from the .msi file, then >>>>>> configured >>>>>> the wmpiregister program with the Domain/user informations. >>>>>> >>>>>> There is this message displayed when trying to connect in the >>>>>> 'configurable settings' window : 'MPICH2 not installed or unable to >>>>>> query >>>>>> the host'. >>>>>> >>>>>> What is the host actually ? >>>>>> >>>>>> I know I am starting from very far, I am sorry for these very >>>>>> simple >>>>>> questions. Thanks if you can reply me, that would certainly save me >>>>>> some >>>>>> long hours of reading and testing ;) >>>>>> >>>>>> sincerely, >>>>>> >>>>>> Ben >>>>>> _______________________________________________ >>>>>> discuss mailing list discuss at mpich.org >>>>>> To manage subscription options or unsubscribe: >>>>>> https://lists.mpich.org/mailman/listinfo/discuss >>>>> >>>>> >>>> >>>> >>> >>> >> >> > > -- Benoit V?ler Adh?rent au groupe JAM Ing?nierie 180 Avenue du Genevois, Parc d'Activit? de Croix Rousse 73000 Chamb?ry http://www.spatiogis.fr 06-46-13-40-94 From jayesh at mcs.anl.gov Wed Jun 26 12:39:19 2013 From: jayesh at mcs.anl.gov (Jayesh Krishna) Date: Wed, 26 Jun 2013 12:39:19 -0500 (CDT) Subject: [mpich-discuss] install + config on windows In-Reply-To: <1705457099.8240817.1372268319266.JavaMail.root@mcs.anl.gov> Message-ID: <965245623.8240918.1372268359989.JavaMail.root@mcs.anl.gov> Hi, Please note that upgrading the user who installs the software (Utilisateur) to an administrator is not enough to install MPICH2 correctly. You need to make sure that you install MPICH2 from an administrator command prompt (The MPICH2 installer's guide should have the details). Please follow the steps below to install MPICH2, # Uninstall any exising versions of MPICH2 on your system # Right-click on the Windows command prompt icon and select "Run as administrator" (Now you have an administrator command prompt) # From within the command prompt type "msiexec /i MPICH2-INSTALLER-FILE.msi" (Where MPICH2-INSTALLER-FILE.msi is the msi file downloaded from the MPICH2 website to install MPICH2) to install MPICH2. Regards, Jayesh ----- Original Message ----- From: "spatiogis" To: "Jayesh Krishna" Sent: Wednesday, June 19, 2013 1:13:48 AM Subject: Re: [mpich-discuss] install + config on windows Hello, I am coming back to this post to try to see what is happening. Utilisateur is now the admin on my machine. In wmpiregister, I just let the account by default (empty), and the password "behappy" Actually, the register command seems to have worked correctly. Anyway in wmpiconfig, there is this line : "nbpc: MPICH2 not installed or unable to query the host" Is it possible to reboot the host and the password ? Regards, Benoit > Hi, > From the log output it looks like credentials (password) for > Utilisateur was not correct. > Is Utilisateur a valid Windows user on your machine? Have you > registered the username/password correctly (Try re-registering the > username+password by typing "mpiexec -register" at the command prompt)? 
> > Regards, > Jayesh > > ----- Original Message ----- > From: "spatiogis" > To: discuss at mpich.org > Sent: Friday, May 3, 2013 11:58:00 AM > Subject: Re: [mpich-discuss] install + config on windows > > Hello, > > for this command : > > # C:\Progra~1\MPICH2\bin\mpiexec -verbose -n 2 > C:\Progra~1\MPICH2\examples\cpi.exe > > result : > > ....../SMPDU_Sock_post_readv > ...../SMPDU_Sock_post_read > ..../smpd_handle_op_connect > ....sock_waiting for the next event. > ....\SMPDU_Sock_wait > ..../SMPDU_Sock_wait > ....SOCK_OP_READ event.error = 0, result = 0, context=left > ....\smpd_handle_op_read > .....\smpd_state_reading_challenge_string > ......read challenge string: '1.4.1p1 18467' > ......\smpd_verify_version > ....../smpd_verify_version > ......Verification of smpd version succeeded > ......\smpd_hash > ....../smpd_hash > ......\SMPDU_Sock_post_write > .......\SMPDU_Sock_post_writev > ......./SMPDU_Sock_post_writev > ....../SMPDU_Sock_post_write > ...../smpd_state_reading_challenge_string > ..../smpd_handle_op_read > ....sock_waiting for the next event. > ....\SMPDU_Sock_wait > ..../SMPDU_Sock_wait > ....SOCK_OP_WRITE event.error = 0, result = 0, context=left > ....\smpd_handle_op_write > .....\smpd_state_writing_challenge_response > ......wrote challenge response: 'dafd1d07c1e6e9cb5fae968403d0d933' > ......\SMPDU_Sock_post_read > .......\SMPDU_Sock_post_readv > ......./SMPDU_Sock_post_readv > ....../SMPDU_Sock_post_read > ...../smpd_state_writing_challenge_response > ..../smpd_handle_op_write > ....sock_waiting for the next event. > ....\SMPDU_Sock_wait > ..../SMPDU_Sock_wait > ....SOCK_OP_READ event.error = 0, result = 0, context=left > ....\smpd_handle_op_read > .....\smpd_state_reading_connect_result > ......read connect result: 'SUCCESS' > ......\SMPDU_Sock_post_write > .......\SMPDU_Sock_post_writev > ......./SMPDU_Sock_post_writev > ....../SMPDU_Sock_post_write > ...../smpd_state_reading_connect_result > ..../smpd_handle_op_read > ....sock_waiting for the next event. > ....\SMPDU_Sock_wait > ..../SMPDU_Sock_wait > ....SOCK_OP_WRITE event.error = 0, result = 0, context=left > ....\smpd_handle_op_write > .....\smpd_state_writing_process_session_request > ......wrote process session request: 'process' > ......\SMPDU_Sock_post_read > .......\SMPDU_Sock_post_readv > ......./SMPDU_Sock_post_readv > ....../SMPDU_Sock_post_read > ...../smpd_state_writing_process_session_request > ..../smpd_handle_op_write > ....sock_waiting for the next event. 
> ....\SMPDU_Sock_wait > ..../SMPDU_Sock_wait > ....SOCK_OP_READ event.error = 0, result = 0, context=left > ....\smpd_handle_op_read > .....\smpd_state_reading_cred_request > ......read cred request: 'credentials' > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > .......\smpd_option_on > ........\smpd_get_smpd_data > .........\smpd_get_smpd_data_from_environment > ........./smpd_get_smpd_data_from_environment > .........\smpd_get_smpd_data_default > ........./smpd_get_smpd_data_default > .........Unable to get the data for the key 'nocache' > ......../smpd_get_smpd_data > ......./smpd_option_on > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > ......\SMPDU_Sock_post_write > .......\SMPDU_Sock_post_writev > ......./SMPDU_Sock_post_writev > ....../SMPDU_Sock_post_write > ...../smpd_handle_op_read > .....sock_waiting for the next event. > .....\SMPDU_Sock_wait > ...../SMPDU_Sock_wait > .....SOCK_OP_WRITE event.error = 0, result = 0, context=left > .....\smpd_handle_op_write > ......\smpd_state_writing_cred_ack_yes > .......wrote cred request yes ack. > .......\SMPDU_Sock_post_write > ........\SMPDU_Sock_post_writev > ......../SMPDU_Sock_post_writev > ......./SMPDU_Sock_post_write > ....../smpd_state_writing_cred_ack_yes > ...../smpd_handle_op_write > .....sock_waiting for the next event. 
> .....\SMPDU_Sock_wait > ...../SMPDU_Sock_wait > .....SOCK_OP_WRITE event.error = 0, result = 0, context=left > .....\smpd_handle_op_write > ......\smpd_state_writing_account > .......wrote account: 'Utilisateur' > .......\smpd_encrypt_data > ......./smpd_encrypt_data > .......\SMPDU_Sock_post_write > ........\SMPDU_Sock_post_writev > ......../SMPDU_Sock_post_writev > ......./SMPDU_Sock_post_write > ....../smpd_state_writing_account > ...../smpd_handle_op_write > .....sock_waiting for the next event. > .....\SMPDU_Sock_wait > ...../SMPDU_Sock_wait > .....SOCK_OP_WRITE event.error = 0, result = 0, context=left > .....\smpd_handle_op_write > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > .......\smpd_hide_string_arg > ........\first_token > ......../first_token > ........\compare_token > ......../compare_token > ........\next_token > .........\first_token > ........./first_token > .........\first_token > ........./first_token > ......../next_token > ......./smpd_hide_string_arg > ......./smpd_hide_string_arg > .......\SMPDU_Sock_post_read > ........\SMPDU_Sock_post_readv > ......../SMPDU_Sock_post_readv > ......./SMPDU_Sock_post_read > ......\smpd_hide_string_arg > .......\first_token > ......./first_token > .......\compare_token > ......./compare_token > .......\next_token > ........\first_token > ......../first_token > ........\first_token > ......../first_token > ......./next_token > ....../smpd_hide_string_arg > ....../smpd_hide_string_arg > ...../smpd_handle_op_write > .....sock_waiting for the next event. 
> .....\SMPDU_Sock_wait > ...../SMPDU_Sock_wait > .....SOCK_OP_READ event.error = 0, result = 0, context=left > .....\smpd_handle_op_read > ......\smpd_state_reading_process_result > .......read process session result: 'FAIL' > .......\smpd_hide_string_arg > ........\first_token > ......../first_token > ........\compare_token > ......../compare_token > ........\next_token > .........\first_token > ........./first_token > .........\first_token > ........./first_token > ......../next_token > ......./smpd_hide_string_arg > ......./smpd_hide_string_arg > .......\smpd_hide_string_arg > ........\first_token > ......../first_token > ........\compare_token > ......../compare_token > ........\next_token > .........\first_token > ........./first_token > .........\first_token > ........./first_token > ......../next_token > ......./smpd_hide_string_arg > ......./smpd_hide_string_arg > Credentials for Utilisateur rejected connecting to Benoit > .......process session rejected > .......\SMPDU_Sock_post_close > ........\SMPDU_Sock_post_read > .........\SMPDU_Sock_post_readv > ........./SMPDU_Sock_post_readv > ......../SMPDU_Sock_post_read > ......./SMPDU_Sock_post_close > .......\smpd_post_abort_command > ........\smpd_create_command > .........\smpd_init_command > ........./smpd_init_command > ......../smpd_create_command > ........\smpd_add_command_arg > ......../smpd_add_command_arg > ........\smpd_command_destination > .........0 -> 0 : returning NULL context > ......../smpd_command_destination > Aborting: Unable to connect to Benoit > ......./smpd_post_abort_command > .......\smpd_exit > ........\smpd_kill_all_processes > ......../smpd_kill_all_processes > ........\smpd_finalize_drive_maps > ......../smpd_finalize_drive_maps > ........\smpd_dbs_finalize > ......../smpd_dbs_finalize > ........\SMPDU_Sock_finalize > ......../SMPDU_Sock_finalize > > C:\Users\Utilisateur> >> Hi, >> Looks like you missed the "-" before the status ("smpd -status" not >> "smpd status") argument. >> It also looks like you have multiple MPI libraries installed in your >> system. Try running this command (full path to mpiexec and smpd), >> >> # C:\Progra~1\MPICH2\bin\smpd -status >> >> # C:\Progra~1\MPICH2\bin\mpiexec -verbose -n 2 >> C:\Progra~1\MPICH2\examples\cpi.exe >> >> >> Regards, >> Jayesh >> >> ----- Original Message ----- >> From: "spatiogis" >> To: "Jayesh Krishna" >> Sent: Friday, May 3, 2013 11:05:34 AM >> Subject: Re: [mpich-discuss] install + config on windows >> >> Hello, >> >> C:\Users\Utilisateur>smpd status >> Unexpected parameters: status >> >> C:\Users\Utilisateur>mpiexec -verbose -n 2 >> C:\Progra~1\MPICH2\examples\cpi.exe >> Unknown option: -verbose >> >> ----------------------------------------------------------------------------- >> C:\Program Files\MPICH2\examples>mpiexec -verbose -n 2 cpi.exe >> Unknown option: -verbose >> >> C:\Program Files\MPICH2\examples>smpd status >> Unexpected parameters: status >> ----------------------------------------------------------------------------- >> >> regards, Ben >> >>> Hi, >>> Ok. Please send us the output of the following commands, >>> >>> # smpd -status >>> # mpiexec -verbose -n 2 C:\Progra~1\MPICH2\examples\cpi.exe >>> >>> Please copy-paste the command and the complete output in your email. 
>>> >>> Regards, >>> Jayesh >>> >>> >>> ----- Original Message ----- >>> From: "spatiogis" >>> To: discuss at mpich.org >>> Sent: Friday, May 3, 2013 1:46:53 AM >>> Subject: Re: [mpich-discuss] install + config on windows >>> >>> Hello >>> >>> >>>> (PS: I am assuming from your reply in the previous email that you can >>>> run a command like "mpiexec -n 2 C:\Progra~1\MPICH2\examples\cpi.exe" >>>> correctly) >>> >>> In fact this command doesn't run. >>> >>> The message is this one: >>> >>> [01:11728]....ERROR:unable to read the cmd header on the pmi context, >>> Error = -1 >>> >>> Ben >>> >>> >>>> ----- Original Message ----- >>>> From: "spatiogis" >>>> To: "Jayesh Krishna" >>>> Sent: Thursday, May 2, 2013 10:48:56 AM >>>> Subject: Re: [mpich-discuss] install + config on windows >>>> >>>> Hello, >>>> >>>>> Hi, >>>>> Are you able to run any other MPI programs? Try running the example >>>>> program, cpi.exe (C:\Program Files\MPICH2\examples\cpi.exe), to make >>>>> sure that your MPICH2 installation works. >>>> >>>> yes it does work >>>> >>>>> Installing MPICH2 on Windows 7 typically requires you to uninstall >>>>> any >>>>> previous versions of MPICH2, launch an administrative command prompt >>>>> and >>>>> run "msiexec /i mpich2-installer.msi" to install MPICH2. >>>> >>>> yes it's been installed like this... >>>> >>>> In wmpiconfig, the message is the following in the 'Get settings' >>>> line. >>>> >>>> Credentials for Utilisateur rejected connecting to host >>>> Aborting: Unable to connect to host >>>> >>>> The software I try to use is Taudem, which is integrated inside >>>> Qgis. >>>> Launching a taudem process inside Qgis gives the same message. >>>> >>>> >>>>> Regards, >>>>> Jayesh >>>> >>>> Sincerely, Ben >>>> >>>>> >>>>> ----- Original Message ----- >>>>> From: "spatiogis" >>>>> To: discuss at mpich.org >>>>> Sent: Thursday, May 2, 2013 10:08:23 AM >>>>> Subject: Re: [mpich-discuss] install + config on windows >>>>> >>>>> Hello, >>>>> >>>>> in my case Mpich is normally used to run .exe programs. I guess that >>>>> they >>>>> are already compiled... >>>>> The .exe files are integrated into a software, and accessed from >>>>> menus >>>>> inside it. When I run one of the programs, the answer is actually >>>>> "unable >>>>> to query host". >>>>> At the end, the process is not realised. It seems that this 'host' >>>>> question is a problem for the software... >>>>> >>>>> Sincerely, >>>>> >>>>> Ben. >>>>> >>>>> >>>>>> Hi, >>>>>> You can download MPICH2 binaries for Windows at >>>>>> http://www.mpich.org/downloads/ . >>>>>> You need to compile your MPI programs with MPICH2 to make it work. >>>>>> I >>>>>> would recommend recompiling your code after you install MPICH2 (If >>>>>> you >>>>>> have MPI program binaries pre-built with MPICH2 - instead of >>>>>> compiling >>>>>> them on your own - make sure that you install the same version of >>>>>> MPICH2 >>>>>> that was used to build the binaries). >>>>>> The wmpiregister program has a bug and you can ignore this error >>>>>> message ("...unable to query host"). Can you run your MPI program >>>>>> using >>>>>> mpiexec from a command prompt? >>>>>> >>>>>> Regards, >>>>>> Jayesh >>>>>> >>>>>> ----- Original Message ----- >>>>>> From: "spatiogis" >>>>>> To: discuss at mpich.org >>>>>> Sent: Tuesday, April 30, 2013 9:26:35 AM >>>>>> Subject: [mpich-discuss] install + config on windows >>>>>> >>>>>> Hello, >>>>>> >>>>>> I'm not very good at computing, but I would like to install Mpich2 >>>>>> on >>>>>> windows 7 - 64 bits.
There is only one pc, with one user plus the >>>>>> admin, >>>>>> and a simple core processor. >>>>>> >>>>>> I would like to know if it's mandatory to have compiling software >>>>>> with >>>>>> it to make it work, whereas in this case it is only needed to run >>>>>> another software, and not for compiling (that would maybe save some >>>>>> disk >>>>>> space and simplify the installation)? >>>>>> >>>>>> My second issue is that I must be missing something about the >>>>>> server >>>>>> configuration. I have installed Mpich from the .msi file, then >>>>>> configured >>>>>> the wmpiregister program with the Domain/user information. >>>>>> >>>>>> There is this message displayed when trying to connect in the >>>>>> 'configurable settings' window: 'MPICH2 not installed or unable to >>>>>> query >>>>>> the host'. >>>>>> >>>>>> What is the host, actually? >>>>>> >>>>>> I know I am starting from very far, I am sorry for these very >>>>>> simple >>>>>> questions. Thanks if you can reply to me, that would certainly save me >>>>>> some >>>>>> long hours of reading and testing ;) >>>>>> >>>>>> sincerely, >>>>>> >>>>>> Ben >>>>>> _______________________________________________ >>>>>> discuss mailing list discuss at mpich.org >>>>>> To manage subscription options or unsubscribe: >>>>>> https://lists.mpich.org/mailman/listinfo/discuss >>>>> >>>> >>> >> > -- Benoit V?ler Adhérent au groupe JAM Ingénierie 180 Avenue du Genevois, Parc d'Activité de Croix Rousse 73000 Chambéry http://www.spatiogis.fr 06-46-13-40-94 From balaji at mcs.anl.gov Wed Jun 26 12:46:41 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Wed, 26 Jun 2013 12:46:41 -0500 Subject: [mpich-discuss] install + config on windows In-Reply-To: <1705457099.8240817.1372268319266.JavaMail.root@mcs.anl.gov> References: <1705457099.8240817.1372268319266.JavaMail.root@mcs.anl.gov> Message-ID: <51CB2901.8060706@mcs.anl.gov> FYI: sed -e 's/MPICH2/MPICH/g' -- Pavan On 06/26/2013 12:38 PM, Jayesh Krishna wrote: > Hi, > Please note that upgrading the user who installs the software (Utilisateur) to an administrator is not enough to install MPICH2 correctly. You need to make sure that you install MPICH2 from an administrator command prompt (The MPICH2 installer's guide should have the details). Please follow the steps below to install MPICH2, > > # Uninstall any existing versions of MPICH2 on your system > # Right-click on the Windows command prompt icon and select "Run as administrator" (Now you have an administrator command prompt) > # From within the command prompt type "msiexec /i MPICH2-INSTALLER-FILE.msi" (Where MPICH2-INSTALLER-FILE.msi is the msi file downloaded from the MPICH2 website to install MPICH2) to install MPICH2. > > Regards, > Jayesh > > [...]
-- Pavan Balaji http://www.mcs.anl.gov/~balaji From wbland at mcs.anl.gov Wed Jun 26 12:49:48 2013 From: wbland at mcs.anl.gov (Wesley Bland) Date: Wed, 26 Jun 2013 12:49:48 -0500 Subject: Re: [mpich-discuss] install + config on windows In-Reply-To: <51CB2901.8060706@mcs.anl.gov> References: <1705457099.8240817.1372268319266.JavaMail.root@mcs.anl.gov> <51CB2901.8060706@mcs.anl.gov> Message-ID: That's not entirely true. For Windows, there is no version of MPICH that is supported. On Jun 26, 2013, at 12:46 PM, Pavan Balaji wrote: > > FYI: > > sed -e 's/MPICH2/MPICH/g' > > -- Pavan > > On 06/26/2013 12:38 PM, Jayesh Krishna wrote: >> [...]
From balaji at mcs.anl.gov Wed Jun 26 12:58:29 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Wed, 26 Jun 2013 12:58:29 -0500 Subject: Re: [mpich-discuss] install + config on windows In-Reply-To: References: <1705457099.8240817.1372268319266.JavaMail.root@mcs.anl.gov> <51CB2901.8060706@mcs.anl.gov> Message-ID: <51CB2BC5.3060001@mcs.anl.gov> On 06/26/2013 12:49 PM, Wesley Bland wrote: > That's not entirely true. For Windows, there is no version of MPICH > that is supported. LOL :-). Well, in that case, "mpich2" is not supported anymore either.
:-) -- Pavan Balaji http://www.mcs.anl.gov/~balaji From matthieu.dorier at irisa.fr Wed Jun 26 13:10:29 2013 From: matthieu.dorier at irisa.fr (Matthieu Dorier) Date: Wed, 26 Jun 2013 20:10:29 +0200 (CEST) Subject: [mpich-discuss] Problem with ADIOI_Info_get (MPI_Info_get) from the ADIO layer In-Reply-To: <20130626152136.GC3154@mcs.anl.gov> Message-ID: <1727700249.2951290.1372270228965.JavaMail.root@irisa.fr> ----- Original Message ----- > From: "Rob Latham" > To: discuss at mpich.org > Sent: Wednesday, June 26, 2013 10:21:36 > Subject: Re: [mpich-discuss] Problem with ADIOI_Info_get (MPI_Info_get) from the ADIO layer > > On Tue, Jun 25, 2013 at 05:10:48PM +0200, Matthieu Dorier wrote: > > Hi, > > > > I found the solution by investigating the code so I'll post it here > > in case it can be useful to someone else: > > > > When opening a file, ADIOI_xxx_SetInfo is called to copy the info > > structure. Unless overridden by the ADIO backend, it's > > ADIOI_GEN_SetInfo (in src/mpi/romio/adio/common/ad_hints.c) that > > ends up being called, and this function only copies the hints that > > it knows (e.g. cb_buffer_size). So the solution consists in > > changing ADIOI_GEN_SetInfo or (more appropriately) providing an > > implementation of ADIOI_xxx_SetInfo that copies the custom parameters > > and then calls ADIOI_GEN_SetInfo. > > Yeah, consider the way ad_pvfs2 deals with this: > the function pointers in ad_pvfs2.c point to ADIOI_PVFS2_SetInfo > > In src/mpi/romio/adio/ad_pvfs2/ad_pvfs2_hints.c, all the > PVFS2-specific hints are processed, then it calls ADIOI_GEN_SetInfo > > (Now that I look at this, maybe the order should be reversed) Yes it should, otherwise ADIOI_GEN_SetInfo erases what has been set by ADIOI_PVFS2_SetInfo :-) Matthieu > > ==rob > > > Matthieu > > > > ----- Original Message ----- > > > > > From: "Matthieu Dorier" > > > To: discuss at mpich.org > > > Sent: Monday, June 24, 2013 15:46:21 > > > Subject: [mpich-discuss] Problem with ADIOI_Info_get (MPI_Info_get) > > > from the ADIO layer > > > > > Hi, > > > > > I'm implementing an ADIO backend and I'm having problems > > > retrieving > > > values from the MPI_Info attached to the file. > > > On the application side, I have something like this: > > > > > MPI_Info_create(&info); > > > MPI_Info_set(info,"cb_buffer_size","64"); > > > MPI_Info_set(info,"xyz","3"); > > > MPI_File_open(comm, "file", > > > MPI_MODE_WRONLY | MPI_MODE_CREATE, info, &fh); > > > > > then a call to MPI_File_write, which ends up calling my > > > implementation of ADIOI_xxx_WriteContig. In this function, I try > > > to > > > read back these info values: > > > > > int info_flag; > > > char* value = (char *) > > > ADIOI_Malloc((MPI_MAX_INFO_VAL+1)*sizeof(char)); > > > ADIOI_Info_get(fd->info, "xyz", MPI_MAX_INFO_VAL, > > > value,&info_flag); > > > if(info_flag) fprintf(stderr,"xyz = %d\n",atoi(value)); > > > ADIOI_Info_get(fd->info, "cb_buffer_size", MPI_MAX_INFO_VAL, > > > value,&info_flag); > > > if(info_flag) fprintf(stderr,"cb_buffer_size = > > > %d\n",atoi(value)); > > > > > I can get the 64 associated to the cb_buffer_size key (which is a > > > reserved hint), but I don't get the second value. > > > Where does the problem come from? > > > I tried everything: re-ordering the calls, changing the name of > > > the > > > key, calling MPI_Info_get in the application to check that the > > > values are properly set (they are)...
From jedbrown at mcs.anl.gov Wed Jun 26 14:41:32 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Wed, 26 Jun 2013 14:41:32 -0500
Subject: [mpich-discuss] discuss Digest, Vol 8, Issue 39
In-Reply-To: 
References: <1861069226.7428562.1372108460652.JavaMail.root@alcf.anl.gov> <1884537.1z1OIaxkHc@localhost.localdomain>
Message-ID: <87hagklkwj.fsf@mcs.anl.gov>

Jeff Hammond writes:

> N threads calling MPI_Barrier corresponds to N different, unrelated
> barriers. A thread calling MPI_Barrier will only synchronize with
> other processes, not any other threads.
>
> MPI_Barrier only acts between processes. It has no effect on threads.
> Just use comm=MPI_COMM_SELF and think about the behavior of
> MPI_Barrier. That is the one-process limit of the multithreaded
> problem.

Antonio, the MPI Forum is considering proposals for something they are
calling "endpoints" that would enable the use of MPI between threads. If
accepted, the new interfaces would provide a way to create communicators
that could be used in the way you suggested. Until then, you have to
synchronize threads using other mechanisms (locks, barriers, OpenMP, etc).
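Jeff's MPI_COMM_SELF thought experiment and Jed's list of alternatives can
be condensed into a small sketch. It is illustrative only and assumes a
POSIX platform and an MPI library initialized with MPI_THREAD_MULTIPLE; the
point is the division of labor: MPI_Barrier orders ranks, while a pthread
barrier orders the threads within one rank.

#include <mpi.h>
#include <pthread.h>

static pthread_barrier_t thread_barrier;

static void *worker(void *arg)
{
    (void) arg;

    /* Thread-level synchronization: both threads of this process must
     * arrive here before either continues. MPI plays no part in it. */
    pthread_barrier_wait(&thread_barrier);

    /* Returns immediately: a one-process communicator has nothing to
     * wait for, no matter how many threads call into MPI. */
    MPI_Barrier(MPI_COMM_SELF);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided;
    pthread_t t;

    /* A full program would verify provided == MPI_THREAD_MULTIPLE. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    pthread_barrier_init(&thread_barrier, NULL, 2);
    pthread_create(&t, NULL, worker, NULL);
    pthread_barrier_wait(&thread_barrier);   /* rendezvous with the worker */
    pthread_join(&t, NULL);
    pthread_barrier_destroy(&thread_barrier);

    /* Process-level synchronization: one call per rank, from any thread. */
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}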
From jsimsa at cs.cmu.edu Wed Jun 26 14:54:14 2013
From: jsimsa at cs.cmu.edu (Jiri Simsa)
Date: Wed, 26 Jun 2013 15:54:14 -0400
Subject: [mpich-discuss] Non-blocking Collectives
Message-ID: 

Hi,

I have a question about the semantics of non-blocking collective
communication. For example, let's consider MPI_Ibarrier. The MPI 3.0
standard specifies that:

"MPI_IBARRIER is a nonblocking version of MPI_BARRIER. By calling
MPI_IBARRIER, a process notifies that it has reached the barrier. The call
returns immediately, independent of whether other processes have called
MPI_IBARRIER. The usual barrier semantics are enforced at the corresponding
completion operation (test or wait), which in the intra-communicator case
will complete only after all other processes in the communicator have
called MPI_IBARRIER. In the intercommunicator case, it will complete when
all processes in the remote group have called MPI_IBARRIER."

My understanding of the standard is that MPI_Wait(&request, &status),
where request has been previously passed into MPI_Ibarrier, returns after
all processes in the respective intra-communicator called MPI_Ibarrier.
However, the mpich-3.0.4 library seems to in some cases wait for all
processes in the respective intra-communicator to call MPI_Wait. Here is
an example that demonstrates this behavior:

#include <stdio.h>
#include <unistd.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
  MPI_Request request;
  MPI_Status status;
  MPI_Init(&argc, &argv);
  int myrank;
  MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
  if (myrank == 0) {
    MPI_Ibarrier(MPI_COMM_WORLD, &request);
    MPI_Wait(&request, &status);
    printf("%d, Completed barrier.\n", myrank);
  } else {
    MPI_Ibarrier(MPI_COMM_WORLD, &request);
    sleep(1);
    MPI_Wait(&request, &status);
    printf("%d, Completed barrier.\n", myrank);
  }
  MPI_Finalize();
  return 0;
}

When executed with "mpiexec -n 2 ./example", I see the expected output and
timing. However, when executed with "mpiexec -n 3 ./example", the call to
MPI_Wait in process 0 returns only after the other processes wake up from
sleep() and call MPI_Wait.

Isn't this a violation of the standard?

Best,

--Jiri Simsa

From jeff.science at gmail.com Wed Jun 26 14:59:07 2013
From: jeff.science at gmail.com (Jeff Hammond)
Date: Wed, 26 Jun 2013 14:59:07 -0500
Subject: [mpich-discuss] Non-blocking Collectives
In-Reply-To: 
References: 
Message-ID: 

How is what you see not consistent with "The usual barrier semantics are
enforced at the corresponding completion operation (test or wait)..."?

I don't know what you mean by "expected output and timing" since I don't
share your interpretation of the MPI standard, but I believe that the n=3
case is absolutely consistent with the semantics of MPI_Ibarrier+MPI_Wait.

Jeff

On Wed, Jun 26, 2013 at 2:54 PM, Jiri Simsa wrote:
> Hi,
>
> I have a question about the semantics of non-blocking collective
> communication. For example, let's consider MPI_Ibarrier. The MPI 3.0
> standard specifies that:
>
> "MPI_IBARRIER is a nonblocking version of MPI_BARRIER. By calling
> MPI_IBARRIER, a process notifies that it has reached the barrier. The call
> returns immediately, independent of whether other processes have called
> MPI_IBARRIER. The usual barrier semantics are enforced at the corresponding
> completion operation (test or wait), which in the intra-communicator case
> will complete only after all other processes in the communicator have called
> MPI_IBARRIER. In the intercommunicator case, it will complete when all
> processes in the remote group have called MPI_IBARRIER."
>
> My understanding of the standard is that MPI_Wait(&request, &status),
> where request has been previously passed into MPI_Ibarrier, returns after
> all processes in the respective intra-communicator called MPI_Ibarrier.
> However, the mpich-3.0.4 library seems to in some cases wait for all
> processes in the respective intra-communicator to call MPI_Wait. Here is an
> example that demonstrates this behavior:
>
> #include <stdio.h>
> #include <unistd.h>
> #include <mpi.h>
>
> int main(int argc, char *argv[]) {
>   MPI_Request request;
>   MPI_Status status;
>   MPI_Init(&argc, &argv);
>   int myrank;
>   MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
>   if (myrank == 0) {
>     MPI_Ibarrier(MPI_COMM_WORLD, &request);
>     MPI_Wait(&request, &status);
>     printf("%d, Completed barrier.\n", myrank);
>   } else {
>     MPI_Ibarrier(MPI_COMM_WORLD, &request);
>     sleep(1);
>     MPI_Wait(&request, &status);
>     printf("%d, Completed barrier.\n", myrank);
>   }
>   MPI_Finalize();
>   return 0;
> }
>
> When executed with "mpiexec -n 2 ./example", I see the expected output and
> timing. However, when executed with "mpiexec -n 3 ./example", the call to
> MPI_Wait in process 0 returns only after the other processes wake up from
> sleep() and call MPI_Wait.
>
> Isn't this a violation of the standard?
>
> Best,
>
> --Jiri Simsa
>
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

--
Jeff Hammond
jeff.science at gmail.com

From balaji at mcs.anl.gov Wed Jun 26 15:00:20 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Wed, 26 Jun 2013 15:00:20 -0500
Subject: [mpich-discuss] Non-blocking Collectives
In-Reply-To: 
References: 
Message-ID: <51CB4854.8010400@mcs.anl.gov>

Hi Jiri,

The completion of MPI_IBARRIER indicates that all processes have called
MPI_IBARRIER. This part is correct.

However, the specification does not say that MPI_WAIT on one process has
to complete before others have called MPI_WAIT. That's related to
asynchronous progress and is a quality of implementation issue.

 -- Pavan

On 06/26/2013 02:54 PM, Jiri Simsa wrote:
> Hi,
>
> I have a question about the semantics of non-blocking collective
> communication. For example, let's consider MPI_Ibarrier. The MPI 3.0
> standard specifies that:
>
> "MPI_IBARRIER is a nonblocking version of MPI_BARRIER. By calling
> MPI_IBARRIER, a process notifies that it has reached the barrier. The
> call returns immediately, independent of whether other processes have
> called MPI_IBARRIER. The usual barrier semantics are enforced at the
> corresponding completion operation (test or wait), which in the
> intra-communicator case will complete only after all other processes in
> the communicator have called MPI_IBARRIER. In the intercommunicator
> case, it will complete when all processes in the remote group have
> called MPI_IBARRIER."
>
> My understanding of the standard is that MPI_Wait(&request, &status),
> where request has been previously passed into MPI_Ibarrier, returns
> after all processes in the respective intra-communicator called
> MPI_Ibarrier. However, the mpich-3.0.4 library seems to in some cases
> wait for all processes in the respective intra-communicator to call
> MPI_Wait. Here is an example that demonstrates this behavior:
>
> #include <stdio.h>
> #include <unistd.h>
> #include <mpi.h>
>
> int main(int argc, char *argv[]) {
>   MPI_Request request;
>   MPI_Status status;
>   MPI_Init(&argc, &argv);
>   int myrank;
>   MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
>   if (myrank == 0) {
>     MPI_Ibarrier(MPI_COMM_WORLD, &request);
>     MPI_Wait(&request, &status);
>     printf("%d, Completed barrier.\n", myrank);
>   } else {
>     MPI_Ibarrier(MPI_COMM_WORLD, &request);
>     sleep(1);
>     MPI_Wait(&request, &status);
>     printf("%d, Completed barrier.\n", myrank);
>   }
>   MPI_Finalize();
>   return 0;
> }
>
> When executed with "mpiexec -n 2 ./example", I see the expected output
> and timing. However, when executed with "mpiexec -n 3 ./example", the
> call to MPI_Wait in process 0 returns only after the other processes
> wake up from sleep() and call MPI_Wait.
>
> Isn't this a violation of the standard?
>
> Best,
>
> --Jiri Simsa
>
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji
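Pavan's remark about asynchronous progress has a practical corollary. In
an MPICH build without a separate progress thread, a rank typically
advances the barrier's internal message exchange only while it is inside
an MPI call, so a rank that wants to keep working while the barrier
completes can poll with MPI_Test instead of blocking in MPI_Wait. A sketch
follows; do_useful_work() is an invented placeholder:

#include <mpi.h>

static void do_useful_work(void)
{
    /* Application work to overlap with the barrier. */
}

int main(int argc, char *argv[])
{
    MPI_Request request;
    int done = 0;

    MPI_Init(&argc, &argv);
    MPI_Ibarrier(MPI_COMM_WORLD, &request);

    /* Each MPI_Test call also drives the progress engine, so this rank
     * keeps forwarding the barrier's internal messages while slower
     * ranks catch up. */
    while (!done) {
        MPI_Test(&request, &done, MPI_STATUS_IGNORE);
        if (!done)
            do_useful_work();
    }

    MPI_Finalize();
    return 0;
}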
From jsimsa at cs.cmu.edu Wed Jun 26 15:08:22 2013
From: jsimsa at cs.cmu.edu (Jiri Simsa)
Date: Wed, 26 Jun 2013 16:08:22 -0400
Subject: Re: [mpich-discuss] Non-blocking Collectives
In-Reply-To: <51CB4854.8010400@mcs.anl.gov>
References: <51CB4854.8010400@mcs.anl.gov>
Message-ID: 

Hi Pavan,

Thank you for your quick answer. I am trying to understand the blocking
behavior of MPI_Wait in the case of non-blocking collectives. Is it safe
to assume that, for a non-blocking collective, MPI_Wait is guaranteed to
return once all other processes call the corresponding completion
operation (e.g. MPI_Wait or MPI_Test)?

--Jiri

On Wed, Jun 26, 2013 at 4:00 PM, Pavan Balaji wrote:

> Hi Jiri,
>
> The completion of MPI_IBARRIER indicates that all processes have called
> MPI_IBARRIER. This part is correct.
>
> However, the specification does not say that MPI_WAIT on one process has
> to complete before others have called MPI_WAIT. That's related to
> asynchronous progress and is a quality of implementation issue.
>
> -- Pavan
>
> On 06/26/2013 02:54 PM, Jiri Simsa wrote:
>
>> Hi,
>>
>> I have a question about the semantics of non-blocking collective
>> communication. For example, let's consider MPI_Ibarrier. The MPI 3.0
>> standard specifies that:
>>
>> "MPI_IBARRIER is a nonblocking version of MPI_BARRIER. By calling
>> MPI_IBARRIER, a process notifies that it has reached the barrier. The
>> call returns immediately, independent of whether other processes have
>> called MPI_IBARRIER. The usual barrier semantics are enforced at the
>> corresponding completion operation (test or wait), which in the
>> intra-communicator case will complete only after all other processes in
>> the communicator have called MPI_IBARRIER. In the intercommunicator
>> case, it will complete when all processes in the remote group have
>> called MPI_IBARRIER."
>>
>> My understanding of the standard is that MPI_Wait(&request, &status),
>> where request has been previously passed into MPI_Ibarrier, returns
>> after all processes in the respective intra-communicator called
>> MPI_Ibarrier. However, the mpich-3.0.4 library seems to in some cases
>> wait for all processes in the respective intra-communicator to call
>> MPI_Wait. Here is an example that demonstrates this behavior:
>>
>> #include <stdio.h>
>> #include <unistd.h>
>> #include <mpi.h>
>>
>> int main(int argc, char *argv[]) {
>>   MPI_Request request;
>>   MPI_Status status;
>>   MPI_Init(&argc, &argv);
>>   int myrank;
>>   MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
>>   if (myrank == 0) {
>>     MPI_Ibarrier(MPI_COMM_WORLD, &request);
>>     MPI_Wait(&request, &status);
>>     printf("%d, Completed barrier.\n", myrank);
>>   } else {
>>     MPI_Ibarrier(MPI_COMM_WORLD, &request);
>>     sleep(1);
>>     MPI_Wait(&request, &status);
>>     printf("%d, Completed barrier.\n", myrank);
>>   }
>>   MPI_Finalize();
>>   return 0;
>> }
>>
>> When executed with "mpiexec -n 2 ./example", I see the expected output
>> and timing. However, when executed with "mpiexec -n 3 ./example", the
>> call to MPI_Wait in process 0 returns only after the other processes
>> wake up from sleep() and call MPI_Wait.
>>
>> Isn't this a violation of the standard?
>>
>> Best,
>>
>> --Jiri Simsa
>>
>> _______________________________________________
>> discuss mailing list     discuss at mpich.org
>> To manage subscription options or unsubscribe:
>> https://lists.mpich.org/mailman/listinfo/discuss
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji

From akp4221 at hawaii.edu Wed Jun 26 15:29:37 2013
From: akp4221 at hawaii.edu (Andre Pattantyus)
Date: Wed, 26 Jun 2013 10:29:37 -1000
Subject: [mpich-discuss] PMGR_COLLECTIVE ERROR
Message-ID: 

Hello,

I am running a numerical dispersion model on a linux cluster with
mvapich-1.2.0 and pgi/10.2 compiler.
I am trying to submit a job on 8 processors with code written by the
developer but I keep getting errors. Right now I am getting the following
error when the executable hycm_std gets called:

/share/huina/akp4221/hysplit/trunk/exec/hycm_std
PMGR_COLLECTIVE ERROR: unitialized MPI task: Missing required environment
variable: MPIRUN_RANK

I do not have root privileges but I would like to know what potential
issues I could be facing so I can let the administrator know.

--
Andre Pattantyus
Graduate Student Research Assistant
Department of Meteorology
University of Hawaii at Manoa
2525 Correa Rd, HIG 350
Honolulu, HI 96822
Phone: (845) 264-3582

From balaji at mcs.anl.gov Wed Jun 26 15:48:19 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Wed, 26 Jun 2013 15:48:19 -0500
Subject: [mpich-discuss] Non-blocking Collectives
In-Reply-To: 
References: <51CB4854.8010400@mcs.anl.gov>
Message-ID: <51CB5393.5030708@mcs.anl.gov>

Hi Jiri,

On 06/26/2013 03:08 PM, Jiri Simsa wrote:
> Thank you for your quick answer. I am trying to understand the blocking
> behavior of MPI_Wait in the case of non-blocking collectives. Is it safe
> to assume that, for a non-blocking collective, MPI_Wait is guaranteed to
> return once all other processes call the corresponding completion
> operation (e.g. MPI_Wait or MPI_Test)?

I'm not sure I understand your question. Are you asking if MPI_WAIT in a
process is guaranteed to return after some finite amount of time after
every other process has called MPI_WAIT? Then, yes.

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji

From apenya at mcs.anl.gov Wed Jun 26 16:43:38 2013
From: apenya at mcs.anl.gov (Antonio J. Peña)
Date: Wed, 26 Jun 2013 16:43:38 -0500
Subject: [mpich-discuss] PMGR_COLLECTIVE ERROR
In-Reply-To: 
References: 
Message-ID: <1576702.64xAhxksYD@localhost.localdomain>

Hi Andre,

Could you check with our MVAPICH folks? There are a bunch of things that
may cause issues not related to the MPICH code integrated into MVAPICH, so
they may be able to provide you with better support for your issue. In
fact, the missing environment variable is likely to be related to the
process manager, and you might be using mpirun_rsh, which is not developed
by us.

  Antonio

On Wednesday, June 26, 2013 10:29:37 AM Andre Pattantyus wrote:

Hello,

I am running a numerical dispersion model on a linux cluster with
mvapich-1.2.0 and pgi/10.2 compiler. I am trying to submit a job on 8
processors with code written by the developer but I keep getting errors.
Right now I am getting the following error when the executable hycm_std
gets called:

/share/huina/akp4221/hysplit/trunk/exec/hycm_std
PMGR_COLLECTIVE ERROR: unitialized MPI task: Missing required environment
variable: MPIRUN_RANK

I do not have root privileges but I would like to know what potential
issues I could be facing so I can let the administrator know.

--
Andre Pattantyus
Graduate Student Research Assistant
Department of Meteorology
University of Hawaii at Manoa
2525 Correa Rd, HIG 350
Honolulu, HI 96822
Phone: (845) 264-3582

From lindbom at gmail.com Thu Jun 27 02:59:37 2013
From: lindbom at gmail.com (Lars Lindbom)
Date: Thu, 27 Jun 2013 09:59:37 +0200
Subject: [mpich-discuss] mpich2 and windows server 2012
Message-ID: 

Hi,

I have a problem getting mpich2 to run on Windows Server 2012.
We have a small set of servers successfully configured and running Windows
Server 2008 R2 and MPICH2 without any problem. The error I get seems to
indicate a problem in the authentication process.

>"c:\Program Files (x86)\MPICH2\bin\mpiexec.exe" -validate
SUCCESS

>"c:\Program Files (x86)\MPICH2\bin\mpiexec.exe" -host localhost nonmem.exe
Credentials for Lars rejected connecting to localhost
Aborting: Unable to connect to localhost

The credentials are correct and I have tried multiple user accounts with
the same result. I don't think it's related, but for what it's worth I
have made sure that I have the same firewall settings for the mpich
executables as on the 2008 R2 servers.

I would appreciate any help in getting this solved.

Thanks,
Lars

From apenya at mcs.anl.gov Thu Jun 27 09:31:37 2013
From: apenya at mcs.anl.gov (Antonio J. Peña)
Date: Thu, 27 Jun 2013 09:31:37 -0500
Subject: [mpich-discuss] mpich2 and windows server 2012
In-Reply-To: 
References: 
Message-ID: <2703045.GbR0s30VlD@localhost.localdomain>

Hi Lars,

Unfortunately the MPICH team doesn't provide support for Windows anymore,
as the development for Windows was discontinued some time ago, and we
don't have any Windows experts anymore within our team. Maybe someone else
in this mailing list can help you.

  Antonio

On Thursday, June 27, 2013 09:59:37 AM Lars Lindbom wrote:

Hi,

I have a problem getting mpich2 to run on Windows Server 2012. We have a
small set of servers successfully configured and running Windows Server
2008 R2 and MPICH2 without any problem. The error I get seems to indicate
a problem in the authentication process.

>"c:\Program Files (x86)\MPICH2\bin\mpiexec.exe" -validate
SUCCESS

>"c:\Program Files (x86)\MPICH2\bin\mpiexec.exe" -host localhost nonmem.exe
Credentials for Lars rejected connecting to localhost
Aborting: Unable to connect to localhost

The credentials are correct and I have tried multiple user accounts with
the same result. I don't think it's related, but for what it's worth I
have made sure that I have the same firewall settings for the mpich
executables as on the 2008 R2 servers.

I would appreciate any help in getting this solved.

Thanks,
Lars

From costas.yamin at gmail.com Thu Jun 27 10:16:02 2013
From: costas.yamin at gmail.com (Costas Yamin)
Date: Thu, 27 Jun 2013 18:16:02 +0300
Subject: [mpich-discuss] mpich2 and windows server 2012
In-Reply-To: 
References: 
Message-ID: <51CC5732.5050606@gmail.com>

An HTML attachment was scrubbed...
URL: 
From jsimsa at cs.cmu.edu Thu Jun 27 10:33:14 2013
From: jsimsa at cs.cmu.edu (Jiri Simsa)
Date: Thu, 27 Jun 2013 11:33:14 -0400
Subject: Re: [mpich-discuss] Non-blocking Collectives
In-Reply-To: <51CB5393.5030708@mcs.anl.gov>
References: <51CB4854.8010400@mcs.anl.gov> <51CB5393.5030708@mcs.anl.gov>
Message-ID: 

Hi Pavan,

To rephrase, I am interested in understanding when MPI_Wait() would block
indefinitely, waiting for other processes to make progress. I believe that
your response answers my question. Thanks again.

--Jiri

On Wed, Jun 26, 2013 at 4:48 PM, Pavan Balaji wrote:

> Hi Jiri,
>
> On 06/26/2013 03:08 PM, Jiri Simsa wrote:
>
>> Thank you for your quick answer. I am trying to understand the blocking
>> behavior of MPI_Wait in the case of non-blocking collectives. Is it safe
>> to assume that, for a non-blocking collective, MPI_Wait is guaranteed to
>> return once all other processes call the corresponding completion
>> operation (e.g. MPI_Wait or MPI_Test)?
>
> I'm not sure I understand your question. Are you asking if MPI_WAIT in a
> process is guaranteed to return after some finite amount of time after
> every other process has called MPI_WAIT? Then, yes.
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji

From gmcruz at arcos.inf.uc3m.es Thu Jun 27 11:58:32 2013
From: gmcruz at arcos.inf.uc3m.es (Gonzalo Martín Cruz)
Date: Thu, 27 Jun 2013 18:58:32 +0200
Subject: [mpich-discuss] Process-to-core binding in MPI_Comm_spawn
Message-ID: 

Hi all,

I am working with MPI_Comm_spawn to launch dynamic processes in my MPI
application and I would like to use a process-to-core binding allocation
strategy for processes spawned dynamically.

I already know that it is possible to set the "host" key in the MPI_Info
parameter of the MPI_Comm_spawn call to spawn the process on a specific
host. However, I would like to know how to bind a dynamic process to a
specific processor core. I tried passing to MPI_Comm_spawn the "host" info
key as "compute-node-X binding:user=4", but it does not work and the
process goes to an arbitrary core.

Thank you very much,

Regards,
Gonzalo.
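No answer to Gonzalo's question appears in this digest. For context, the
host placement he says already works looks roughly like the sketch below.
It is illustrative only: "compute-node-X" and ./worker are placeholder
names, and the sketch covers only placement via the "host" info key, not
the per-core binding he is asking about, which the thread leaves open.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm intercomm;
    MPI_Info info;

    MPI_Init(&argc, &argv);

    MPI_Info_create(&info);
    MPI_Info_set(info, "host", "compute-node-X");   /* placement hint */

    /* Spawn one worker on the named host. The spawned ./worker is
     * expected to retrieve the parent communicator itself via
     * MPI_Comm_get_parent and disconnect from its side as well. */
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 1, info, 0,
                   MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);

    MPI_Info_free(&info);
    MPI_Comm_disconnect(&intercomm);

    MPI_Finalize();
    return 0;
}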
From jahanzeb.maqbool at gmail.com Thu Jun 27 21:36:53 2013
From: jahanzeb.maqbool at gmail.com (Syed. Jahanzeb Maqbool Hashmi)
Date: Fri, 28 Jun 2013 11:36:53 +0900
Subject: [mpich-discuss] mpich hangs
Message-ID: 

Hello,

I am trying to run HPL on a cluster of nodes. The problem I am facing is
with mpich, which I have successfully configured. The program runs on a
single node without passing the -machinefile argument, but as soon as I
execute on multiple nodes (-machinefile nodes), the program hangs
indefinitely right after issuing the command. Any help?

From balaji at mcs.anl.gov Thu Jun 27 21:39:03 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Thu, 27 Jun 2013 21:39:03 -0500
Subject: [mpich-discuss] mpich hangs
In-Reply-To: 
References: 
Message-ID: <51CCF747.70308@mcs.anl.gov>

On 06/27/2013 09:36 PM, Syed. Jahanzeb Maqbool Hashmi wrote:
> I am trying to run HPL on a cluster of nodes. The problem I am facing is
> with mpich, which I have successfully configured. The program runs on a
> single node without passing the -machinefile argument, but as soon as I
> execute on multiple nodes (-machinefile nodes), the program hangs
> indefinitely right after issuing the command.

Given how little information you have provided, here's the only response
I can give:

You are doing something wrong.

 -- Pavan

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji

From jahanzeb.maqbool at gmail.com Thu Jun 27 21:41:51 2013
From: jahanzeb.maqbool at gmail.com (Syed. Jahanzeb Maqbool Hashmi)
Date: Fri, 28 Jun 2013 11:41:51 +0900
Subject: Re: [mpich-discuss] mpich hangs
In-Reply-To: <51CCF747.70308@mcs.anl.gov>
References: <51CCF747.70308@mcs.anl.gov>
Message-ID: 

Sorry for giving such little information. OK, here is the output after a
long hang (which sometimes comes out):

================START OF OUTPUT=====================
linaro at weiser1:/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a$ mpirun -np 8 -machinefile machines ./xhpl
Fatal error in MPI_Send: A process has failed, error stack:
MPI_Send(171)..............: MPI_Send(buf=0xbe84fc50, count=1, MPI_INT, dest=0, tag=9001, MPI_COMM_WORLD) failed
MPID_nem_tcp_connpoll(1826): Communication error with rank 0: Connection refused

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   EXIT CODE: 1
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
[proxy:0:0 at weiser1] HYD_pmcd_pmip_control_cmd_cb (./pm/pmiserv/pmip_cb.c:886): assert (!closed) failed
[proxy:0:0 at weiser1] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[proxy:0:0 at weiser1] main (./pm/pmiserv/pmip.c:206): demux engine error waiting for event
[mpiexec at weiser1] HYDT_bscu_wait_for_completion (./tools/bootstrap/utils/bscu_wait.c:76): one of the processes terminated badly; aborting
[mpiexec at weiser1] HYDT_bsci_wait_for_completion (./tools/bootstrap/src/bsci_wait.c:23): launcher returned error waiting for completion
[mpiexec at weiser1] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:217): launcher returned error waiting for completion
[mpiexec at weiser1] main (./ui/mpich/mpiexec.c:331): process manager error waiting for completion
================END OF OUTPUT=====================

On Fri, Jun 28, 2013 at 11:39 AM, Pavan Balaji wrote:

> On 06/27/2013 09:36 PM, Syed. Jahanzeb Maqbool Hashmi wrote:
>> I am trying to run HPL on a cluster of nodes. The problem I am facing is
>> with mpich, which I have successfully configured. The program runs on a
>> single node without passing the -machinefile argument, but as soon as I
>> execute on multiple nodes (-machinefile nodes), the program hangs
>> indefinitely right after issuing the command.
>
> Given how little information you have provided, here's the only response
> I can give:
>
> You are doing something wrong.
>
> -- Pavan
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji

From jahanzeb.maqbool at gmail.com Thu Jun 27 21:48:33 2013
From: jahanzeb.maqbool at gmail.com (Syed.
Jahanzeb Maqbool Hashmi) Date: Fri, 28 Jun 2013 11:48:33 +0900 Subject: [mpich-discuss] mpich hangs In-Reply-To: <51CCF858.6020304@mcs.anl.gov> References: <51CCF747.70308@mcs.anl.gov> <51CCF858.6020304@mcs.anl.gov> Message-ID: As I already read some discussion on the mpich thread and I came to know these questions before: - *Can you ssh from "node01" to "node02"?* Yes I have passwordless ssh from each node to other (and vice versa) - *Make sure the firewalls are turned off on all machines.* I am using Ubuntu Linaro, 12.04, and I have no firewall installed on all the nodes. Any other workaround :( On Fri, Jun 28, 2013 at 11:43 AM, Pavan Balaji wrote: > > On 06/27/2013 09:41 PM, Syed. Jahanzeb Maqbool Hashmi wrote: > >> MPID_nem_tcp_connpoll(1826): Communication error with rank 0: Connection >> refused >> > > Sounds like a firewall problem. > > http://wiki.mpich.org/mpich/**index.php/Frequently_Asked_** > Questions#Q:_My_MPI_program_**aborts_with_an_error_saying_** > it_cannot_communicate_with_**other_processes > > > -- Pavan > > -- > Pavan Balaji > http://www.mcs.anl.gov/~balaji > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balaji at mcs.anl.gov Thu Jun 27 21:50:52 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Thu, 27 Jun 2013 21:50:52 -0500 Subject: [mpich-discuss] mpich hangs In-Reply-To: References: <51CCF747.70308@mcs.anl.gov> <51CCF858.6020304@mcs.anl.gov> Message-ID: <51CCFA0C.4020607@mcs.anl.gov> IIRC, Ubuntu has a firewall setup by default. Try this: % sudo ufw status -- Pavan On 06/27/2013 09:48 PM, Syed. Jahanzeb Maqbool Hashmi wrote: > > As I already read some discussion on the mpich thread and I came to know > these questions before: > > * *Can you ssh from "node01" to "node02"?* > Yes I have passwordless ssh from each node to other (and vice versa) > * *Make sure the firewalls are turned off on all machines.* > I am using Ubuntu Linaro, 12.04, and I have no firewall installed on > all the nodes. > > > Any other workaround :( > > On Fri, Jun 28, 2013 at 11:43 AM, Pavan Balaji > wrote: > > > On 06/27/2013 09:41 PM, Syed. Jahanzeb Maqbool Hashmi wrote: > > MPID_nem_tcp_connpoll(1826): Communication error with rank 0: > Connection > refused > > > Sounds like a firewall problem. > > http://wiki.mpich.org/mpich/__index.php/Frequently_Asked___Questions#Q:_My_MPI_program___aborts_with_an_error_saying___it_cannot_communicate_with___other_processes > > > > -- Pavan > > -- > Pavan Balaji > http://www.mcs.anl.gov/~balaji > > -- Pavan Balaji http://www.mcs.anl.gov/~balaji From jahanzeb.maqbool at gmail.com Thu Jun 27 21:52:00 2013 From: jahanzeb.maqbool at gmail.com (Syed. Jahanzeb Maqbool Hashmi) Date: Fri, 28 Jun 2013 11:52:00 +0900 Subject: [mpich-discuss] mpich hangs In-Reply-To: <51CCFA0C.4020607@mcs.anl.gov> References: <51CCF747.70308@mcs.anl.gov> <51CCF858.6020304@mcs.anl.gov> <51CCFA0C.4020607@mcs.anl.gov> Message-ID: Yes thats what I did before and found out this to be exact: "sudo: ufw: command not found" On Fri, Jun 28, 2013 at 11:50 AM, Pavan Balaji wrote: > > IIRC, Ubuntu has a firewall setup by default. Try this: > > % sudo ufw status > > -- Pavan > > > On 06/27/2013 09:48 PM, Syed. 
Jahanzeb Maqbool Hashmi wrote: > >> >> As I already read some discussion on the mpich thread and I came to know >> these questions before: >> >> * *Can you ssh from "node01" to "node02"?* >> >> Yes I have passwordless ssh from each node to other (and vice versa) >> * *Make sure the firewalls are turned off on all machines.* >> >> I am using Ubuntu Linaro, 12.04, and I have no firewall installed on >> all the nodes. >> >> >> Any other workaround :( >> >> On Fri, Jun 28, 2013 at 11:43 AM, Pavan Balaji > > wrote: >> >> >> On 06/27/2013 09:41 PM, Syed. Jahanzeb Maqbool Hashmi wrote: >> >> MPID_nem_tcp_connpoll(1826): Communication error with rank 0: >> Connection >> refused >> >> >> Sounds like a firewall problem. >> >> http://wiki.mpich.org/mpich/__**index.php/Frequently_Asked___** >> Questions#Q:_My_MPI_program___**aborts_with_an_error_saying___** >> it_cannot_communicate_with___**other_processes >> >> > Questions#Q:_My_MPI_program_**aborts_with_an_error_saying_** >> it_cannot_communicate_with_**other_processes >> > >> >> >> -- Pavan >> >> -- >> Pavan Balaji >> http://www.mcs.anl.gov/~balaji >> >> >> > -- > Pavan Balaji > http://www.mcs.anl.gov/~balaji > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balaji at mcs.anl.gov Thu Jun 27 21:57:48 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Thu, 27 Jun 2013 21:57:48 -0500 Subject: [mpich-discuss] mpich hangs In-Reply-To: References: <51CCF747.70308@mcs.anl.gov> <51CCF858.6020304@mcs.anl.gov> <51CCFA0C.4020607@mcs.anl.gov> Message-ID: <51CCFBAC.3090404@mcs.anl.gov> % sudo /usr/sbin/ufw status On 06/27/2013 09:52 PM, Syed. Jahanzeb Maqbool Hashmi wrote: > Yes thats what I did before and found out this to be exact: > "sudo: ufw: command not found" > > > On Fri, Jun 28, 2013 at 11:50 AM, Pavan Balaji > wrote: > > > IIRC, Ubuntu has a firewall setup by default. Try this: > > % sudo ufw status > > -- Pavan > > > On 06/27/2013 09:48 PM, Syed. Jahanzeb Maqbool Hashmi wrote: > > > As I already read some discussion on the mpich thread and I came > to know > these questions before: > > * *Can you ssh from "node01" to "node02"?* > > Yes I have passwordless ssh from each node to other (and > vice versa) > * *Make sure the firewalls are turned off on all machines.* > > I am using Ubuntu Linaro, 12.04, and I have no firewall > installed on > all the nodes. > > > Any other workaround :( > > On Fri, Jun 28, 2013 at 11:43 AM, Pavan Balaji > > >> wrote: > > > On 06/27/2013 09:41 PM, Syed. Jahanzeb Maqbool Hashmi wrote: > > MPID_nem_tcp_connpoll(1826): Communication error with > rank 0: > Connection > refused > > > Sounds like a firewall problem. > > http://wiki.mpich.org/mpich/____index.php/Frequently_Asked_____Questions#Q:_My_MPI_program_____aborts_with_an_error_saying_____it_cannot_communicate_with_____other_processes > > > > > > > > -- Pavan > > -- > Pavan Balaji > http://www.mcs.anl.gov/~balaji > > > > -- > Pavan Balaji > http://www.mcs.anl.gov/~balaji > > -- Pavan Balaji http://www.mcs.anl.gov/~balaji From jahanzeb.maqbool at gmail.com Thu Jun 27 22:01:24 2013 From: jahanzeb.maqbool at gmail.com (Syed. Jahanzeb Maqbool Hashmi) Date: Fri, 28 Jun 2013 12:01:24 +0900 Subject: [mpich-discuss] mpich hangs In-Reply-To: <51CCFBAC.3090404@mcs.anl.gov> References: <51CCF747.70308@mcs.anl.gov> <51CCF858.6020304@mcs.anl.gov> <51CCFA0C.4020607@mcs.anl.gov> <51CCFBAC.3090404@mcs.anl.gov> Message-ID: I feel sorry to not able to provide more information and keeping the thread long. But still there is no ufw in my /usr/sbin. 
Actually, I am using an ARM development board (ORDROID-X2) which has ubuntu linaro image. I suppose that due to space limitation, this image may not include additional utility features like firewall but only the core functionality. On Fri, Jun 28, 2013 at 11:57 AM, Pavan Balaji wrote: > > % sudo /usr/sbin/ufw status > > > On 06/27/2013 09:52 PM, Syed. Jahanzeb Maqbool Hashmi wrote: > >> Yes thats what I did before and found out this to be exact: >> "sudo: ufw: command not found" >> >> >> On Fri, Jun 28, 2013 at 11:50 AM, Pavan Balaji > > wrote: >> >> >> IIRC, Ubuntu has a firewall setup by default. Try this: >> >> % sudo ufw status >> >> -- Pavan >> >> >> On 06/27/2013 09:48 PM, Syed. Jahanzeb Maqbool Hashmi wrote: >> >> >> As I already read some discussion on the mpich thread and I came >> to know >> these questions before: >> >> * *Can you ssh from "node01" to "node02"?* >> >> Yes I have passwordless ssh from each node to other (and >> vice versa) >> * *Make sure the firewalls are turned off on all machines.* >> >> I am using Ubuntu Linaro, 12.04, and I have no firewall >> installed on >> all the nodes. >> >> >> Any other workaround :( >> >> On Fri, Jun 28, 2013 at 11:43 AM, Pavan Balaji >> >> >> wrote: >> >> >> On 06/27/2013 09:41 PM, Syed. Jahanzeb Maqbool Hashmi wrote: >> >> MPID_nem_tcp_connpoll(1826): Communication error with >> rank 0: >> Connection >> refused >> >> >> Sounds like a firewall problem. >> >> http://wiki.mpich.org/mpich/__**__index.php/Frequently_Asked__** >> ___Questions#Q:_My_MPI_**program_____aborts_with_an_** >> error_saying_____it_cannot_**communicate_with_____other_**processes >> > Questions#Q:_My_MPI_program___**aborts_with_an_error_saying___** >> it_cannot_communicate_with___**other_processes >> > >> >> >> >> > Questions#Q:_My_MPI_program___**aborts_with_an_error_saying___** >> it_cannot_communicate_with___**other_processes >> > Questions#Q:_My_MPI_program_**aborts_with_an_error_saying_** >> it_cannot_communicate_with_**other_processes >> >> >> >> >> -- Pavan >> >> -- >> Pavan Balaji >> http://www.mcs.anl.gov/~balaji >> >> >> >> -- >> Pavan Balaji >> http://www.mcs.anl.gov/~balaji >> >> >> > -- > Pavan Balaji > http://www.mcs.anl.gov/~balaji > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balaji at mcs.anl.gov Thu Jun 27 22:03:33 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Thu, 27 Jun 2013 22:03:33 -0500 Subject: [mpich-discuss] mpich hangs In-Reply-To: References: <51CCF747.70308@mcs.anl.gov> <51CCF858.6020304@mcs.anl.gov> <51CCFA0C.4020607@mcs.anl.gov> <51CCFBAC.3090404@mcs.anl.gov> Message-ID: <51CCFD05.10109@mcs.anl.gov> On 06/27/2013 10:01 PM, Syed. Jahanzeb Maqbool Hashmi wrote: > I feel sorry to not able to provide more information and keeping the > thread long. > But still there is no ufw in my /usr/sbin. > > Actually, I am using an ARM development board (ORDROID-X2) which has > ubuntu linaro image. I suppose that due to space limitation, this image > may not include additional utility features like firewall but only the > core functionality. Ah, this information would have been useful to start with. Can you give the exact command-line you are using, including the contents of your machinefile? Also, run mpiexec with the -verbose option and send its output as well. -- Pavan -- Pavan Balaji http://www.mcs.anl.gov/~balaji From jahanzeb.maqbool at gmail.com Thu Jun 27 22:08:44 2013 From: jahanzeb.maqbool at gmail.com (Syed. 
Jahanzeb Maqbool Hashmi) Date: Fri, 28 Jun 2013 12:08:44 +0900 Subject: [mpich-discuss] mpich hangs In-Reply-To: <51CCFD05.10109@mcs.anl.gov> References: <51CCF747.70308@mcs.anl.gov> <51CCF858.6020304@mcs.anl.gov> <51CCFA0C.4020607@mcs.anl.gov> <51CCFBAC.3090404@mcs.anl.gov> <51CCFD05.10109@mcs.anl.gov> Message-ID: Here is the command I am giving: *$ mpiexec -verbose -hostfile machines ./xhpl* * * The hostfile "machines" contains: (two quadcore machines having ips 192.168.0.101 and 192.168.0.102) weiser1:4 weiser2:4 Here is the complete output: ------------ START OF OUTPUT ------------------- host: weiser1 host: weiser2 ================================================================================================== mpiexec options: ---------------- Base path: /mnt/nfs/install/mpich-install/bin/ Launcher: (null) Debug level: 1 Enable X: -1 Global environment: ------------------- TERM=xterm SHELL=/bin/bash XDG_SESSION_COOKIE=218a1dd8e20ea6d6ec61475b00000019-1372384778.679329-1845893422 SSH_CLIENT=192.168.0.3 57311 22 OLDPWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1 SSH_TTY=/dev/pts/0 USER=linaro LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36: LD_LIBRARY_PATH=:/mnt/nfs/install/mpich-install/lib MAIL=/var/mail/linaro PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/mnt/nfs/install/mpich-install/bin PWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a LANG=C.UTF-8 SHLVL=1 HOME=/home/linaro LOGNAME=linaro SSH_CONNECTION=192.168.0.3 57311 192.168.0.101 22 LESSOPEN=| /usr/bin/lesspipe %s LESSCLOSE=/usr/bin/lesspipe %s %s _=/mnt/nfs/install/mpich-install/bin/mpiexec Hydra internal environment: --------------------------- GFORTRAN_UNBUFFERED_PRECONNECTED=y Proxy information: ********************* [1] proxy: weiser1 (4 cores) Exec list: ./xhpl (4 processes); [2] proxy: weiser2 (4 cores) Exec list: ./xhpl (4 processes); ================================================================================================== [mpiexec at weiser1] Timeout set to -1 (-1 means infinite) [mpiexec at weiser1] Got a control port string of weiser1:44161 Proxy launch args: /mnt/nfs/install/mpich-install/bin/hydra_pmi_proxy --control-port weiser1:44161 --debug --rmk user --launcher ssh --demux poll --pgid 0 --retries 10 --usize -2 --proxy-id Arguments 
being passed to proxy 0: --version 3.0.4 --iface-ip-env-name MPICH_INTERFACE_HOSTNAME --hostname weiser1 --global-core-map 0,4,8 --pmi-id-map 0,0 --global-process-count 8 --auto-cleanup 1 --pmi-kvsname kvs_21930_0 --pmi-process-mapping (vector,(0,2,4)) --ckpoint-num -1 --global-inherited-env 20 'TERM=xterm' 'SHELL=/bin/bash' 'XDG_SESSION_COOKIE=218a1dd8e20ea6d6ec61475b00000019-1372384778.679329-1845893422' 'SSH_CLIENT=192.168.0.3 57311 22' 'OLDPWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1' 'SSH_TTY=/dev/pts/0' 'USER=linaro' 'LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:' 'LD_LIBRARY_PATH=:/mnt/nfs/install/mpich-install/lib' 'MAIL=/var/mail/linaro' 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/mnt/nfs/install/mpich-install/bin' 'PWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a' 'LANG=C.UTF-8' 'SHLVL=1' 'HOME=/home/linaro' 'LOGNAME=linaro' 'SSH_CONNECTION=192.168.0.3 57311 192.168.0.101 22' 'LESSOPEN=| /usr/bin/lesspipe %s' 'LESSCLOSE=/usr/bin/lesspipe %s %s' '_=/mnt/nfs/install/mpich-install/bin/mpiexec' --global-user-env 0 --global-system-env 1 'GFORTRAN_UNBUFFERED_PRECONNECTED=y' --proxy-core-count 4 --exec --exec-appnum 0 --exec-proc-count 4 --exec-local-env 0 --exec-wdir /mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a --exec-args 1 ./xhpl Arguments being passed to proxy 1: --version 3.0.4 --iface-ip-env-name MPICH_INTERFACE_HOSTNAME --hostname weiser2 --global-core-map 0,4,8 --pmi-id-map 0,4 --global-process-count 8 --auto-cleanup 1 --pmi-kvsname kvs_21930_0 --pmi-process-mapping (vector,(0,2,4)) --ckpoint-num -1 --global-inherited-env 20 'TERM=xterm' 'SHELL=/bin/bash' 'XDG_SESSION_COOKIE=218a1dd8e20ea6d6ec61475b00000019-1372384778.679329-1845893422' 'SSH_CLIENT=192.168.0.3 57311 22' 'OLDPWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1' 'SSH_TTY=/dev/pts/0' 'USER=linaro' 
'LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:' 'LD_LIBRARY_PATH=:/mnt/nfs/install/mpich-install/lib' 'MAIL=/var/mail/linaro' 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/mnt/nfs/install/mpich-install/bin' 'PWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a' 'LANG=C.UTF-8' 'SHLVL=1' 'HOME=/home/linaro' 'LOGNAME=linaro' 'SSH_CONNECTION=192.168.0.3 57311 192.168.0.101 22' 'LESSOPEN=| /usr/bin/lesspipe %s' 'LESSCLOSE=/usr/bin/lesspipe %s %s' '_=/mnt/nfs/install/mpich-install/bin/mpiexec' --global-user-env 0 --global-system-env 1 'GFORTRAN_UNBUFFERED_PRECONNECTED=y' --proxy-core-count 4 --exec --exec-appnum 0 --exec-proc-count 4 --exec-local-env 0 --exec-wdir /mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a --exec-args 1 ./xhpl [mpiexec at weiser1] Launch arguments: /mnt/nfs/install/mpich-install/bin/hydra_pmi_proxy --control-port weiser1:44161 --debug --rmk user --launcher ssh --demux poll --pgid 0 --retries 10 --usize -2 --proxy-id 0 [mpiexec at weiser1] Launch arguments: /usr/bin/ssh -x weiser2 "/mnt/nfs/install/mpich-install/bin/hydra_pmi_proxy" --control-port weiser1:44161 --debug --rmk user --launcher ssh --demux poll --pgid 0 --retries 10 --usize -2 --proxy-id 1 [proxy:0:0 at weiser1] got pmi command (from 6): init pmi_version=1 pmi_subversion=1 [proxy:0:0 at weiser1] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0 [proxy:0:0 at weiser1] got pmi command (from 6): get_maxes [proxy:0:0 at weiser1] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024 [proxy:0:0 at weiser1] got pmi command (from 6): get_appnum [proxy:0:0 at weiser1] PMI response: cmd=appnum appnum=0 [proxy:0:0 at weiser1] got pmi command (from 6): get_my_kvsname [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_21930_0 [proxy:0:0 at weiser1] got pmi command (from 6): get_my_kvsname [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_21930_0 [proxy:0:0 at weiser1] got pmi command (from 6): get kvsname=kvs_21930_0 key=PMI_process_mapping [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,2,4)) [proxy:0:0 at weiser1] got pmi command (from 6): barrier_in [proxy:0:0 at weiser1] got pmi command (from 15): init pmi_version=1 pmi_subversion=1 [proxy:0:0 at 
weiser1] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0 [proxy:0:0 at weiser1] got pmi command (from 15): get_maxes [proxy:0:0 at weiser1] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024 [proxy:0:0 at weiser1] got pmi command (from 15): get_appnum [proxy:0:0 at weiser1] PMI response: cmd=appnum appnum=0 [proxy:0:0 at weiser1] got pmi command (from 15): get_my_kvsname [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_21930_0 [proxy:0:0 at weiser1] got pmi command (from 0): init pmi_version=1 pmi_subversion=1 [proxy:0:0 at weiser1] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0 [proxy:0:0 at weiser1] got pmi command (from 15): get_my_kvsname [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_21930_0 [proxy:0:0 at weiser1] got pmi command (from 0): get_maxes [proxy:0:0 at weiser1] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024 [proxy:0:0 at weiser1] got pmi command (from 15): get kvsname=kvs_21930_0 key=PMI_process_mapping [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,2,4)) [proxy:0:0 at weiser1] got pmi command (from 0): get_appnum [proxy:0:0 at weiser1] PMI response: cmd=appnum appnum=0 [proxy:0:0 at weiser1] got pmi command (from 0): get_my_kvsname [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_21930_0 [proxy:0:0 at weiser1] got pmi command (from 15): barrier_in [proxy:0:0 at weiser1] got pmi command (from 0): get_my_kvsname [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_21930_0 [proxy:0:0 at weiser1] got pmi command (from 0): get kvsname=kvs_21930_0 key=PMI_process_mapping [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,2,4)) [proxy:0:0 at weiser1] got pmi command (from 0): put kvsname=kvs_21930_0 key=sharedFilename[0] value=/dev/shm/mpich_shar_tmpIoDbts [proxy:0:0 at weiser1] cached command: sharedFilename[0]=/dev/shm/mpich_shar_tmpIoDbts [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success [proxy:0:0 at weiser1] got pmi command (from 8): init pmi_version=1 pmi_subversion=1 [proxy:0:0 at weiser1] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0 [proxy:0:0 at weiser1] got pmi command (from 0): barrier_in [proxy:0:0 at weiser1] got pmi command (from 8): get_maxes [proxy:0:0 at weiser1] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024 [proxy:0:0 at weiser1] got pmi command (from 8): get_appnum [proxy:0:0 at weiser1] PMI response: cmd=appnum appnum=0 [proxy:0:0 at weiser1] got pmi command (from 8): get_my_kvsname [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_21930_0 [proxy:0:0 at weiser1] got pmi command (from 8): get_my_kvsname [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_21930_0 [proxy:0:0 at weiser1] got pmi command (from 8): get kvsname=kvs_21930_0 key=PMI_process_mapping [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,2,4)) [proxy:0:0 at weiser1] got pmi command (from 8): barrier_in [proxy:0:0 at weiser1] flushing 1 put command(s) out [proxy:0:0 at weiser1] forwarding command (cmd=put sharedFilename[0]=/dev/shm/mpich_shar_tmpIoDbts) upstream [proxy:0:0 at weiser1] forwarding command (cmd=barrier_in) upstream [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=put sharedFilename[0]=/dev/shm/mpich_shar_tmpIoDbts [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=barrier_in [proxy:0:1 at weiser2] got pmi command (from 4): init 
pmi_version=1 pmi_subversion=1 [proxy:0:1 at weiser2] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0 [proxy:0:1 at weiser2] got pmi command (from 4): get_maxes [proxy:0:1 at weiser2] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024 [proxy:0:1 at weiser2] got pmi command (from 4): get_appnum [proxy:0:1 at weiser2] PMI response: cmd=appnum appnum=0 [proxy:0:1 at weiser2] got pmi command (from 4): get_my_kvsname [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_21930_0 [proxy:0:1 at weiser2] got pmi command (from 4): get_my_kvsname [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_21930_0 [proxy:0:1 at weiser2] got pmi command (from 4): get kvsname=kvs_21930_0 key=PMI_process_mapping [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,2,4)) [proxy:0:1 at weiser2] got pmi command (from 7): init pmi_version=1 pmi_subversion=1 [proxy:0:1 at weiser2] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0 [proxy:0:1 at weiser2] got pmi command (from 7): get_maxes [proxy:0:1 at weiser2] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024 [proxy:0:1 at weiser2] got pmi command (from 7): get_appnum [proxy:0:1 at weiser2] PMI response: cmd=appnum appnum=0 [proxy:0:1 at weiser2] got pmi command (from 7): get_my_kvsname [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_21930_0 [proxy:0:1 at weiser2] got pmi command (from 7): get_my_kvsname [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_21930_0 [proxy:0:1 at weiser2] got pmi command (from 7): get kvsname=kvs_21930_0 key=PMI_process_mapping [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,2,4)) [proxy:0:1 at weiser2] got pmi command (from 7): barrier_in [proxy:0:1 at weiser2] got pmi command (from 4): put kvsname=kvs_21930_0 key=sharedFilename[4] value=/dev/shm/mpich_shar_tmpeuylT4 [proxy:0:1 at weiser2] cached command: sharedFilename[4]=/dev/shm/mpich_shar_tmpeuylT4 [proxy:0:1 at weiser2] PMI response: cmd=put_result rc=0 msg=success [proxy:0:1 at weiser2] got pmi command (from 4): barrier_in [proxy:0:1 at weiser2] got pmi command (from 5): init pmi_version=1 pmi_subversion=1 [proxy:0:1 at weiser2] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0 [proxy:0:1 at weiser2] got pmi command (from 5): get_maxes [proxy:0:1 at weiser2] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024 [proxy:0:1 at weiser2] got pmi command (from 5): get_appnum [proxy:0:1 at weiser2] PMI response: cmd=appnum appnum=0 [proxy:0:1 at weiser2] got pmi command (from 5): get_my_kvsname [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_21930_0 [proxy:0:1 at weiser2] got pmi command (from 5): get_my_kvsname [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_21930_0 [proxy:0:1 at weiser2] got pmi command (from 5): get kvsname=kvs_21930_0 key=PMI_process_mapping [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,2,4)) [proxy:0:1 at weiser2] got pmi command (from 5): barrier_in [proxy:0:1 at weiser2] got pmi command (from 10): init pmi_version=1 pmi_subversion=1 [proxy:0:1 at weiser2] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0 [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=put sharedFilename[4]=/dev/shm/mpich_shar_tmpeuylT4 [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=barrier_in [mpiexec at weiser1] PMI response to fd 6 pid 10: cmd=keyval_cache 
sharedFilename[0]=/dev/shm/mpich_shar_tmpIoDbts sharedFilename[4]=/dev/shm/mpich_shar_tmpeuylT4 [mpiexec at weiser1] PMI response to fd 7 pid 10: cmd=keyval_cache sharedFilename[0]=/dev/shm/mpich_shar_tmpIoDbts sharedFilename[4]=/dev/shm/mpich_shar_tmpeuylT4 [mpiexec at weiser1] PMI response to fd 6 pid 10: cmd=barrier_out [mpiexec at weiser1] PMI response to fd 7 pid 10: cmd=barrier_out [proxy:0:1 at weiser2] got pmi command (from 10): get_maxes [proxy:0:1 at weiser2] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024 [proxy:0:1 at weiser2] got pmi command (from 10): get_appnum [proxy:0:1 at weiser2] PMI response: cmd=appnum appnum=0 [proxy:0:1 at weiser2] got pmi command (from 10): get_my_kvsname [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_21930_0 [proxy:0:1 at weiser2] got pmi command (from 10): get_my_kvsname [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_21930_0 [proxy:0:1 at weiser2] got pmi command (from 10): get kvsname=kvs_21930_0 key=PMI_process_mapping [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,2,4)) [proxy:0:1 at weiser2] got pmi command (from 10): barrier_in [proxy:0:1 at weiser2] flushing 1 put command(s) out [proxy:0:1 at weiser2] forwarding command (cmd=put sharedFilename[4]=/dev/shm/mpich_shar_tmpeuylT4) upstream [proxy:0:1 at weiser2] forwarding command (cmd=barrier_in) upstream [proxy:0:0 at weiser1] PMI response: cmd=barrier_out [proxy:0:0 at weiser1] PMI response: cmd=barrier_out [proxy:0:0 at weiser1] PMI response: cmd=barrier_out [proxy:0:0 at weiser1] PMI response: cmd=barrier_out [proxy:0:0 at weiser1] got pmi command (from 6): get kvsname=kvs_21930_0 key=sharedFilename[0] [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success value=/dev/shm/mpich_shar_tmpIoDbts [proxy:0:1 at weiser2] PMI response: cmd=barrier_out [proxy:0:1 at weiser2] PMI response: cmd=barrier_out [proxy:0:1 at weiser2] PMI response: cmd=barrier_out [proxy:0:1 at weiser2] PMI response: cmd=barrier_out [proxy:0:1 at weiser2] got pmi command (from 5): get kvsname=kvs_21930_0 key=sharedFilename[4] [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success value=/dev/shm/mpich_shar_tmpeuylT4 [proxy:0:1 at weiser2] got pmi command (from 7): get kvsname=kvs_21930_0 key=sharedFilename[4] [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success value=/dev/shm/mpich_shar_tmpeuylT4 [proxy:0:1 at weiser2] got pmi command (from 10): get kvsname=kvs_21930_0 key=sharedFilename[4] [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success value=/dev/shm/mpich_shar_tmpeuylT4 [proxy:0:0 at weiser1] got pmi command (from 8): get kvsname=kvs_21930_0 key=sharedFilename[0] [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success value=/dev/shm/mpich_shar_tmpIoDbts [proxy:0:0 at weiser1] got pmi command (from 15): get kvsname=kvs_21930_0 [proxy:0:1 at weiser2] got pmi command (from 4): put kvsname=kvs_21930_0 key=P4-businesscard value=description#weiser2$port#57651$ifname#192.168.0.102$ [proxy:0:1 at weiser2] cached command: P4-businesscard=description#weiser2$port#57651$ifname#192.168.0.102$ [proxy:0:1 at weiser2] PMI response: cmd=put_result rc=0 msg=success [proxy:0:1 at weiser2] got pmi command (from 5): put kvsname=kvs_21930_0 key=P5-businesscard value=description#weiser2$port#52622$ifname#192.168.0.102$ [proxy:0:1 at weiser2] cached command: P5-businesscard=description#weiser2$port#52622$ifname#192.168.0.102$ [proxy:0:1 at weiser2] [mpiexec at weiser1] 
[pgid: 0] got PMI command: cmd=put P4-businesscard=description#weiser2$port#57651$ifname#192.168.0.102$ P5-businesscard=description#weiser2$port#52622$ifname#192.168.0.102$ P6-businesscard=description#weiser2$port#55935$ifname#192.168.0.102$ P7-businesscard=description#weiser2$port#54952$ifname#192.168.0.102$ [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=barrier_in PMI response: cmd=put_result rc=0 msg=success [proxy:0:1 at weiser2] got pmi command (from 4): barrier_in [proxy:0:1 at weiser2] got pmi command (from 7): put kvsname=kvs_21930_0 key=P6-businesscard value=description#weiser2$port#55935$ifname#192.168.0.102$ [proxy:0:1 at weiser2] cached command: P6-businesscard=description#weiser2$port#55935$ifname#192.168.0.102$ [proxy:0:1 at weiser2] PMI response: cmd=put_result rc=0 msg=success [proxy:0:1 at weiser2] got pmi command (from 7): barrier_in [proxy:0:1 at weiser2] got pmi command (from 10): put kvsname=kvs_21930_0 key=P7-businesscard value=description#weiser2$port#54952$ifname#192.168.0.102$ [proxy:0:1 at weiser2] cached command: P7-businesscard=description#weiser2$port#54952$ifname#192.168.0.102$ [proxy:0:1 at weiser2] PMI response: cmd=put_result rc=0 msg=success [proxy:0:1 at weiser2] got pmi command (from 10): barrier_in [proxy:0:1 at weiser2] got pmi command (from 5): barrier_in [proxy:0:1 at weiser2] flushing 4 put command(s) out [proxy:0:1 at weiser2] forwarding command (cmd=put P4-businesscard=description#weiser2$port#57651$ifname#192.168.0.102$ P5-businesscard=description#weiser2$port#52622$ifname#192.168.0.102$ P6-businesscard=description#weiser2$port#55935$ifname#192.168.0.102$ P7-businesscard=description#weiser2$port#54952$ifname#192.168.0.102$) upstream [proxy:0:1 at weiser2] forwarding command (cmd=barrier_in) upstream key=sharedFilename[0] [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success value=/dev/shm/mpich_shar_tmpIoDbts [proxy:0:0 at weiser1] got pmi command (from 0): put kvsname=kvs_21930_0 key=P0-businesscard value=description#weiser1$port#41958$ifname#127.0.1.1$ [proxy:0:0 at weiser1] cached command: P0-businesscard=description#weiser1$port#41958$ifname#127.0.1.1$ [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success [proxy:0:0 at weiser1] got pmi command (from 8): put kvsname=kvs_21930_0 key=P2-businesscard value=description#weiser1$port#35049$ifname#127.0.1.1$ [proxy:0:0 at weiser1] cached command: P2-businesscard=description#weiser1$port#35049$ifname#127.0.1.1$ [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success [proxy:0:0 at weiser1] got pmi command (from 0): barrier_in [proxy:0:0 at weiser1] got pmi command (from 6): put kvsname=kvs_21930_0 key=P1-businesscard value=description#weiser1$port#39634$ifname#127.0.1.1$ [proxy:0:0 at weiser1] cached command: P1-businesscard=description#weiser1$port#39634$ifname#127.0.1.1$ [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success [proxy:0:0 at weiser1] got pmi command (from 6): barrier_in [proxy:0:0 at weiser1] got pmi command (from 8): barrier_in [proxy:0:0 at weiser1] got pmi command (from 15): put kvsname=kvs_21930_0 key=P3-businesscard value=description#weiser1$port#51802$ifname#127.0.1.1$ [proxy:0:0 at weiser1] cached command: P3-businesscard=description#weiser1$port#51802$ifname#127.0.1.1$ [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success [proxy:0:0 at weiser1] got pmi command (from 15): barrier_in [proxy:0:0 at weiser1] flushing 4 put command(s) out [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=put 
P0-businesscard=description#weiser1$port#41958$ifname#127.0.1.1$ P2-businesscard=description#weiser1$port#35049$ifname#127.0.1.1$ P1-businesscard=description#weiser1$port#39634$ifname#127.0.1.1$ P3-businesscard=description#weiser1$port#51802$ifname#127.0.1.1$ [proxy:0:0 at weiser1] forwarding command (cmd=put P0-businesscard=description#weiser1$port#41958$ifname#127.0.1.1$ P2-businesscard=description#weiser1$port#35049$ifname#127.0.1.1$ P1-businesscard=description#weiser1$port#39634$ifname#127.0.1.1$ P3-businesscard=description#weiser1$port#51802$ifname#127.0.1.1$) upstream [proxy:0:0 at weiser1] forwarding command (cmd=barrier_in) upstream [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=barrier_in [mpiexec at weiser1] PMI response to fd 6 pid 15: cmd=keyval_cache P4-businesscard=description#weiser2$port#57651$ifname#192.168.0.102$ P5-businesscard=description#weiser2$port#52622$ifname#192.168.0.102$ P6-businesscard=description#weiser2$port#55935$ifname#192.168.0.102$ P7-businesscard=description#weiser2$port#54952$ifname#192.168.0.102$ P0-businesscard=description#weiser1$port#41958$ifname#127.0.1.1$ P2-businesscard=description#weiser1$port#35049$ifname#127.0.1.1$ P1-businesscard=description#weiser1$port#39634$ifname#127.0.1.1$ P3-businesscard=description#weiser1$port#51802$ifname#127.0.1.1$ [mpiexec at weiser1] PMI response to fd 7 pid 15: cmd=keyval_cache P4-businesscard=description#weiser2$port#57651$ifname#192.168.0.102$ P5-businesscard=description#weiser2$port#52622$ifname#192.168.0.102$ P6-businesscard=description#weiser2$port#55935$ifname#192.168.0.102$ P7-businesscard=description#weiser2$port#54952$ifname#192.168.0.102$ P0-businesscard=description#weiser1$port#41958$ifname#127.0.1.1$ P2-businesscard=description#weiser1$port#35049$ifname#127.0.1.1$ P1-businesscard=description#weiser1$port#39634$ifname#127.0.1.1$ P3-businesscard=description#weiser1$port#51802$ifname#127.0.1.1$ [mpiexec at weiser1] PMI response to fd 6 pid 15: cmd=barrier_out [mpiexec at weiser1] PMI response to fd 7 pid 15: cmd=barrier_out [proxy:0:0 at weiser1] PMI response: cmd=barrier_out [proxy:0:0 at weiser1] PMI response: cmd=barrier_out [proxy:0:0 at weiser1] PMI response: cmd=barrier_out [proxy:0:0 at weiser1] PMI response: cmd=barrier_out [proxy:0:1 at weiser2] PMI response: cmd=barrier_out [proxy:0:1 at weiser2] PMI response: cmd=barrier_out [proxy:0:1 at weiser2] PMI response: cmd=barrier_out [proxy:0:1 at weiser2] PMI response: cmd=barrier_out [proxy:0:1 at weiser2] got pmi command (from 4): get kvsname=kvs_21930_0 key=P0-businesscard [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success value=description#weiser1$port#41958$ifname#127.0.1.1$ =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = EXIT CODE: 1 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== ------------ END OF OUTPUT ------------------- On Fri, Jun 28, 2013 at 12:03 PM, Pavan Balaji wrote: > > On 06/27/2013 10:01 PM, Syed. Jahanzeb Maqbool Hashmi wrote: > >> I feel sorry to not able to provide more information and keeping the >> thread long. >> But still there is no ufw in my /usr/sbin. >> >> Actually, I am using an ARM development board (ORDROID-X2) which has >> ubuntu linaro image. 
I suppose that due to space limitations, this image
>> may not include additional utility features like a firewall but only the
>> core functionality.
>
> Ah, this information would have been useful to start with.
>
> Can you give the exact command-line you are using, including the contents
> of your machinefile? Also, run mpiexec with the -verbose option and send
> its output as well.
>
> -- Pavan
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji

From balaji at mcs.anl.gov Thu Jun 27 22:12:57 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Thu, 27 Jun 2013 22:12:57 -0500
Subject: [mpich-discuss] mpich hangs
In-Reply-To:
References: <51CCF747.70308@mcs.anl.gov> <51CCF858.6020304@mcs.anl.gov>
	<51CCFA0C.4020607@mcs.anl.gov> <51CCFBAC.3090404@mcs.anl.gov>
	<51CCFD05.10109@mcs.anl.gov>
Message-ID: <51CCFF39.3060401@mcs.anl.gov>

On 06/27/2013 10:08 PM, Syed. Jahanzeb Maqbool Hashmi wrote:
> P4-businesscard=description#weiser2$port#57651$ifname#192.168.0.102$
> P5-businesscard=description#weiser2$port#52622$ifname#192.168.0.102$
> P6-businesscard=description#weiser2$port#55935$ifname#192.168.0.102$
> P7-businesscard=description#weiser2$port#54952$ifname#192.168.0.102$
> P0-businesscard=description#weiser1$port#41958$ifname#127.0.1.1$
> P2-businesscard=description#weiser1$port#35049$ifname#127.0.1.1$
> P1-businesscard=description#weiser1$port#39634$ifname#127.0.1.1$
> P3-businesscard=description#weiser1$port#51802$ifname#127.0.1.1$

I have two concerns with your output. Let's start with the first.

Did you look at this question on the FAQ page?

"Is your /etc/hosts file consistent across all nodes? Unless you are
using an external DNS server, the /etc/hosts file on every machine
should contain the correct IP information about all hosts in the
system."

-- Pavan

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji
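
For reference, a minimal /etc/hosts with the consistency property the FAQ asks for, kept identical on both of the nodes in this thread, might look like the sketch below. The weiser1/weiser2 addresses are the ones visible elsewhere in this thread; the layout itself is only an assumed illustration:

127.0.0.1       localhost
192.168.0.101   weiser1
192.168.0.102   weiser2

The essential point is that neither node maps its own hostname to a loopback address. Debian and Ubuntu installers add a "127.0.1.1 <hostname>" line by default; with that line present, ranks launched on weiser1 advertise ifname#127.0.1.1 in their businesscards (exactly what the P0-P3 entries above show), and ranks on weiser2 have no route back to that address.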
From jahanzeb.maqbool at gmail.com Thu Jun 27 22:21:40 2013
From: jahanzeb.maqbool at gmail.com (Syed. Jahanzeb Maqbool Hashmi)
Date: Fri, 28 Jun 2013 12:21:40 +0900
Subject: [mpich-discuss] mpich hangs
In-Reply-To: <51CCFF39.3060401@mcs.anl.gov>
References: <51CCF747.70308@mcs.anl.gov> <51CCF858.6020304@mcs.anl.gov>
	<51CCFA0C.4020607@mcs.anl.gov> <51CCFBAC.3090404@mcs.anl.gov>
	<51CCFD05.10109@mcs.anl.gov> <51CCFF39.3060401@mcs.anl.gov>
Message-ID:

My bad, I just found out that there was a duplicate entry like:
weiser1 127.0.1.1
weiser1 192.168.0.101
so I removed the 127.x.x.x entry and kept the hosts file contents the same
on both nodes. Now the previous error is reduced to this one:

------ START OF OUTPUT -------

....some HPL startup string (no final result)
...skip.....

===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= EXIT CODE: 9
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
[proxy:0:0 at weiser1] HYD_pmcd_pmip_control_cmd_cb (./pm/pmiserv/pmip_cb.c:886): assert (!closed) failed
[proxy:0:0 at weiser1] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[proxy:0:0 at weiser1] main (./pm/pmiserv/pmip.c:206): demux engine error waiting for event
[mpiexec at weiser1] HYDT_bscu_wait_for_completion (./tools/bootstrap/utils/bscu_wait.c:76): one of the processes terminated badly; aborting
[mpiexec at weiser1] HYDT_bsci_wait_for_completion (./tools/bootstrap/src/bsci_wait.c:23): launcher returned error waiting for completion
[mpiexec at weiser1] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:217): launcher returned error waiting for completion
[mpiexec at weiser1] main (./ui/mpich/mpiexec.c:331): process manager error waiting for completion

------ END OF OUTPUT -------

On Fri, Jun 28, 2013 at 12:12 PM, Pavan Balaji wrote:
>
> On 06/27/2013 10:08 PM, Syed. Jahanzeb Maqbool Hashmi wrote:
>
>> P4-businesscard=description#weiser2$port#57651$ifname#192.168.0.102$
>> P5-businesscard=description#weiser2$port#52622$ifname#192.168.0.102$
>> P6-businesscard=description#weiser2$port#55935$ifname#192.168.0.102$
>> P7-businesscard=description#weiser2$port#54952$ifname#192.168.0.102$
>> P0-businesscard=description#weiser1$port#41958$ifname#127.0.1.1$
>> P2-businesscard=description#weiser1$port#35049$ifname#127.0.1.1$
>> P1-businesscard=description#weiser1$port#39634$ifname#127.0.1.1$
>> P3-businesscard=description#weiser1$port#51802$ifname#127.0.1.1$
>
> I have two concerns with your output. Let's start with the first.
>
> Did you look at this question on the FAQ page?
>
> "Is your /etc/hosts file consistent across all nodes? Unless you are using
> an external DNS server, the /etc/hosts file on every machine should contain
> the correct IP information about all hosts in the system."
>
> -- Pavan
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji

From balaji at mcs.anl.gov Thu Jun 27 22:24:11 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Thu, 27 Jun 2013 22:24:11 -0500
Subject: [mpich-discuss] mpich hangs
In-Reply-To:
References: <51CCF747.70308@mcs.anl.gov> <51CCF858.6020304@mcs.anl.gov>
	<51CCFA0C.4020607@mcs.anl.gov> <51CCFBAC.3090404@mcs.anl.gov>
	<51CCFD05.10109@mcs.anl.gov> <51CCFF39.3060401@mcs.anl.gov>
Message-ID: <51CD01DB.30403@mcs.anl.gov>

Looks like your application aborted for some reason.

-- Pavan

On 06/27/2013 10:21 PM, Syed. Jahanzeb Maqbool Hashmi wrote:
> My bad, I just found out that there was a duplicate entry like:
> weiser1 127.0.1.1
> weiser1 192.168.0.101
> so I removed the 127.x.x.x entry and kept the hosts file contents the same
> on both nodes. Now the previous error is reduced to this one:
>
> ------ START OF OUTPUT -------
>
> ....some HPL startup string (no final result)
> ...skip.....
>
> ===================================================================================
> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
> = EXIT CODE: 9
> = CLEANING UP REMAINING PROCESSES
> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
> ===================================================================================
> [proxy:0:0 at weiser1] HYD_pmcd_pmip_control_cmd_cb
> (./pm/pmiserv/pmip_cb.c:886): assert (!closed) failed
> [proxy:0:0 at weiser1] HYDT_dmxu_poll_wait_for_event
> (./tools/demux/demux_poll.c:77): callback returned error status
> [proxy:0:0 at weiser1] main (./pm/pmiserv/pmip.c:206): demux engine error
> waiting for event
> [mpiexec at weiser1] HYDT_bscu_wait_for_completion
> (./tools/bootstrap/utils/bscu_wait.c:76): one of the processes
> terminated badly; aborting
> [mpiexec at weiser1] HYDT_bsci_wait_for_completion
> (./tools/bootstrap/src/bsci_wait.c:23): launcher returned error waiting
> for completion
> [mpiexec at weiser1] HYD_pmci_wait_for_completion
> (./pm/pmiserv/pmiserv_pmci.c:217): launcher returned error waiting for
> completion
> [mpiexec at weiser1] main (./ui/mpich/mpiexec.c:331): process manager error
> waiting for completion
>
> ------ END OF OUTPUT -------
>
> On Fri, Jun 28, 2013 at 12:12 PM, Pavan Balaji
> > wrote:
>
> On 06/27/2013 10:08 PM, Syed. Jahanzeb Maqbool Hashmi wrote:
>
> P4-businesscard=description#weiser2$port#57651$ifname#192.168.0.102$
> P5-businesscard=description#weiser2$port#52622$ifname#192.168.0.102$
> P6-businesscard=description#weiser2$port#55935$ifname#192.168.0.102$
> P7-businesscard=description#weiser2$port#54952$ifname#192.168.0.102$
> P0-businesscard=description#weiser1$port#41958$ifname#127.0.1.1$
> P2-businesscard=description#weiser1$port#35049$ifname#127.0.1.1$
> P1-businesscard=description#weiser1$port#39634$ifname#127.0.1.1$
> P3-businesscard=description#weiser1$port#51802$ifname#127.0.1.1$
>
> I have two concerns with your output. Let's start with the first.
>
> Did you look at this question on the FAQ page?
>
> "Is your /etc/hosts file consistent across all nodes? Unless you are
> using an external DNS server, the /etc/hosts file on every machine
> should contain the correct IP information about all hosts in the
> system."
>
> -- Pavan
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji
>

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji
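
A hedged aside on the "EXIT CODE: 9" above: with Hydra, an exit code that matches a signal number usually means the application process was killed by that signal, and SIGKILL (9) on small-memory nodes most often comes from the kernel's OOM killer rather than from MPICH itself. The run uses N = 14616, so the HPL coefficient matrix alone takes roughly 14616 * 14616 * 8 bytes, about 1.6 GiB, spread across the eight ranks; that is plausible memory pressure for two small ARM boards. A quick, MPICH-independent check on each node (the exact kernel message wording varies):

dmesg | grep -i -E 'oom|out of memory|killed process'

If the OOM killer shows up there, shrinking N in HPL.dat is the easiest way to confirm the diagnosis.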
From jahanzeb.maqbool at gmail.com Thu Jun 27 22:29:33 2013
From: jahanzeb.maqbool at gmail.com (Syed.
Jahanzeb Maqbool Hashmi) Date: Fri, 28 Jun 2013 12:29:33 +0900 Subject: [mpich-discuss] mpich hangs In-Reply-To: <51CD01DB.30403@mcs.anl.gov> References: <51CCF747.70308@mcs.anl.gov> <51CCF858.6020304@mcs.anl.gov> <51CCFA0C.4020607@mcs.anl.gov> <51CCFBAC.3090404@mcs.anl.gov> <51CCFD05.10109@mcs.anl.gov> <51CCFF39.3060401@mcs.anl.gov> <51CD01DB.30403@mcs.anl.gov> Message-ID: again that same error: Fatal error in PMPI_Wait: A process has failed, error stack: PMPI_Wait(180)............: MPI_Wait(request=0xbebb9a1c, status=0xbebb99f0) failed MPIR_Wait_impl(77)........: dequeue_and_set_error(888): Communication error with rank 4 here is the verbose output: --------------START------------------ host: weiser1 host: weiser2 ================================================================================================== mpiexec options: ---------------- Base path: /mnt/nfs/install/mpich-install/bin/ Launcher: (null) Debug level: 1 Enable X: -1 Global environment: ------------------- TERM=xterm SHELL=/bin/bash XDG_SESSION_COOKIE=218a1dd8e20ea6d6ec61475b00000019-1372384778.679329-1845893422 SSH_CLIENT=192.168.0.3 57311 22 OLDPWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1 SSH_TTY=/dev/pts/0 USER=linaro LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36: LD_LIBRARY_PATH=:/mnt/nfs/install/mpich-install/lib MAIL=/var/mail/linaro PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/mnt/nfs/install/mpich-install/bin PWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a LANG=C.UTF-8 SHLVL=1 HOME=/home/linaro LOGNAME=linaro SSH_CONNECTION=192.168.0.3 57311 192.168.0.101 22 LESSOPEN=| /usr/bin/lesspipe %s LESSCLOSE=/usr/bin/lesspipe %s %s _=/mnt/nfs/install/mpich-install/bin/mpiexec Hydra internal environment: --------------------------- GFORTRAN_UNBUFFERED_PRECONNECTED=y Proxy information: ********************* [1] proxy: weiser1 (4 cores) Exec list: ./xhpl (4 processes); [2] proxy: weiser2 (4 cores) Exec list: ./xhpl (4 processes); ================================================================================================== [mpiexec at weiser1] Timeout set to -1 (-1 means infinite) [mpiexec at weiser1] Got a control port string of weiser1:45851 Proxy launch args: /mnt/nfs/install/mpich-install/bin/hydra_pmi_proxy --control-port weiser1:45851 
--debug --rmk user --launcher ssh --demux poll --pgid 0 --retries 10 --usize -2 --proxy-id Arguments being passed to proxy 0: --version 3.0.4 --iface-ip-env-name MPICH_INTERFACE_HOSTNAME --hostname weiser1 --global-core-map 0,4,8 --pmi-id-map 0,0 --global-process-count 8 --auto-cleanup 1 --pmi-kvsname kvs_24541_0 --pmi-process-mapping (vector,(0,2,4)) --ckpoint-num -1 --global-inherited-env 20 'TERM=xterm' 'SHELL=/bin/bash' 'XDG_SESSION_COOKIE=218a1dd8e20ea6d6ec61475b00000019-1372384778.679329-1845893422' 'SSH_CLIENT=192.168.0.3 57311 22' 'OLDPWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1' 'SSH_TTY=/dev/pts/0' 'USER=linaro' 'LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:' 'LD_LIBRARY_PATH=:/mnt/nfs/install/mpich-install/lib' 'MAIL=/var/mail/linaro' 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/mnt/nfs/install/mpich-install/bin' 'PWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a' 'LANG=C.UTF-8' 'SHLVL=1' 'HOME=/home/linaro' 'LOGNAME=linaro' 'SSH_CONNECTION=192.168.0.3 57311 192.168.0.101 22' 'LESSOPEN=| /usr/bin/lesspipe %s' 'LESSCLOSE=/usr/bin/lesspipe %s %s' '_=/mnt/nfs/install/mpich-install/bin/mpiexec' --global-user-env 0 --global-system-env 1 'GFORTRAN_UNBUFFERED_PRECONNECTED=y' --proxy-core-count 4 --exec --exec-appnum 0 --exec-proc-count 4 --exec-local-env 0 --exec-wdir /mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a --exec-args 1 ./xhpl Arguments being passed to proxy 1: --version 3.0.4 --iface-ip-env-name MPICH_INTERFACE_HOSTNAME --hostname weiser2 --global-core-map 0,4,8 --pmi-id-map 0,4 --global-process-count 8 --auto-cleanup 1 --pmi-kvsname kvs_24541_0 --pmi-process-mapping (vector,(0,2,4)) --ckpoint-num -1 --global-inherited-env 20 'TERM=xterm' 'SHELL=/bin/bash' 'XDG_SESSION_COOKIE=218a1dd8e20ea6d6ec61475b00000019-1372384778.679329-1845893422' 'SSH_CLIENT=192.168.0.3 57311 22' 'OLDPWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1' 'SSH_TTY=/dev/pts/0' 'USER=linaro' 
'LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:' 'LD_LIBRARY_PATH=:/mnt/nfs/install/mpich-install/lib' 'MAIL=/var/mail/linaro' 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/mnt/nfs/install/mpich-install/bin' 'PWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a' 'LANG=C.UTF-8' 'SHLVL=1' 'HOME=/home/linaro' 'LOGNAME=linaro' 'SSH_CONNECTION=192.168.0.3 57311 192.168.0.101 22' 'LESSOPEN=| /usr/bin/lesspipe %s' 'LESSCLOSE=/usr/bin/lesspipe %s %s' '_=/mnt/nfs/install/mpich-install/bin/mpiexec' --global-user-env 0 --global-system-env 1 'GFORTRAN_UNBUFFERED_PRECONNECTED=y' --proxy-core-count 4 --exec --exec-appnum 0 --exec-proc-count 4 --exec-local-env 0 --exec-wdir /mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a --exec-args 1 ./xhpl [mpiexec at weiser1] Launch arguments: /mnt/nfs/install/mpich-install/bin/hydra_pmi_proxy --control-port weiser1:45851 --debug --rmk user --launcher ssh --demux poll --pgid 0 --retries 10 --usize -2 --proxy-id 0 [mpiexec at weiser1] Launch arguments: /usr/bin/ssh -x weiser2 "/mnt/nfs/install/mpich-install/bin/hydra_pmi_proxy" --control-port weiser1:45851 --debug --rmk user --launcher ssh --demux poll --pgid 0 --retries 10 --usize -2 --proxy-id 1 [proxy:0:0 at weiser1] got pmi command (from 0): init pmi_version=1 pmi_subversion=1 [proxy:0:0 at weiser1] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0 [proxy:0:0 at weiser1] got pmi command (from 0): get_maxes [proxy:0:0 at weiser1] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024 [proxy:0:0 at weiser1] got pmi command (from 15): init pmi_version=1 pmi_subversion=1 [proxy:0:0 at weiser1] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0 [proxy:0:0 at weiser1] got pmi command (from 15): get_maxes [proxy:0:0 at weiser1] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024 [proxy:0:0 at weiser1] got pmi command (from 8): init pmi_version=1 pmi_subversion=1 [proxy:0:0 at weiser1] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0 [proxy:0:0 at weiser1] got pmi command (from 0): get_appnum [proxy:0:0 at weiser1] PMI response: cmd=appnum appnum=0 [proxy:0:0 at weiser1] got pmi command (from 15): get_appnum [proxy:0:0 at weiser1] PMI response: cmd=appnum 
appnum=0 [proxy:0:0 at weiser1] got pmi command (from 0): get_my_kvsname [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 [proxy:0:0 at weiser1] got pmi command (from 8): get_maxes [proxy:0:0 at weiser1] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024 [proxy:0:0 at weiser1] got pmi command (from 0): get_my_kvsname [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 [proxy:0:0 at weiser1] got pmi command (from 6): init pmi_version=1 pmi_subversion=1 [proxy:0:0 at weiser1] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0 [proxy:0:0 at weiser1] got pmi command (from 15): get_my_kvsname [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 [proxy:0:0 at weiser1] got pmi command (from 0): get kvsname=kvs_24541_0 key=PMI_process_mapping [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,2,4)) [proxy:0:0 at weiser1] got pmi command (from 8): get_appnum [proxy:0:0 at weiser1] PMI response: cmd=appnum appnum=0 [proxy:0:0 at weiser1] got pmi command (from 15): get_my_kvsname [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 [proxy:0:0 at weiser1] got pmi command (from 8): get_my_kvsname [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 [proxy:0:0 at weiser1] got pmi command (from 0): put kvsname=kvs_24541_0 key=sharedFilename[0] value=/dev/shm/mpich_shar_tmpnEZdQ9 [proxy:0:0 at weiser1] cached command: sharedFilename[0]=/dev/shm/mpich_shar_tmpnEZdQ9 [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success [proxy:0:0 at weiser1] got pmi command (from 15): get kvsname=kvs_24541_0 key=PMI_process_mapping [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,2,4)) [proxy:0:0 at weiser1] got pmi command (from 0): barrier_in [proxy:0:0 at weiser1] got pmi command (from 6): get_maxes [proxy:0:0 at weiser1] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024 [proxy:0:0 at weiser1] got pmi command (from 8): get_my_kvsname [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 [proxy:0:0 at weiser1] got pmi command (from 15): barrier_in [proxy:0:0 at weiser1] got pmi command (from 8): get kvsname=kvs_24541_0 key=PMI_process_mapping [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,2,4)) [proxy:0:0 at weiser1] got pmi command (from 6): get_appnum [proxy:0:0 at weiser1] PMI response: cmd=appnum appnum=0 [proxy:0:0 at weiser1] got pmi command (from 8): barrier_in [proxy:0:0 at weiser1] got pmi command (from 6): get_my_kvsname [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 [proxy:0:0 at weiser1] got pmi command (from 6): get_my_kvsname [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 [proxy:0:0 at weiser1] got pmi command (from 6): get kvsname=kvs_24541_0 key=PMI_process_mapping [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,2,4)) [proxy:0:0 at weiser1] got pmi command (from 6): barrier_in [proxy:0:0 at weiser1] flushing 1 put command(s) out [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=put sharedFilename[0]=/dev/shm/mpich_shar_tmpnEZdQ9 [proxy:0:0 at weiser1] forwarding command (cmd=put sharedFilename[0]=/dev/shm/mpich_shar_tmpnEZdQ9) upstream [proxy:0:0 at weiser1] forwarding command (cmd=barrier_in) upstream [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=barrier_in [proxy:0:1 at weiser2] got pmi command (from 7): init 
pmi_version=1 pmi_subversion=1 [proxy:0:1 at weiser2] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0 [proxy:0:1 at weiser2] got pmi command (from 5): init pmi_version=1 pmi_subversion=1 [proxy:0:1 at weiser2] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0 [proxy:0:1 at weiser2] got pmi command (from 7): get_maxes [proxy:0:1 at weiser2] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024 [proxy:0:1 at weiser2] got pmi command (from 4): init pmi_version=1 pmi_subversion=1 [proxy:0:1 at weiser2] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0 [proxy:0:1 at weiser2] got pmi command (from 7): get_appnum [proxy:0:1 at weiser2] PMI response: cmd=appnum appnum=0 [proxy:0:1 at weiser2] got pmi command (from 4): get_maxes [proxy:0:1 at weiser2] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024 [proxy:0:1 at weiser2] got pmi command (from 7): get_my_kvsname [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 [proxy:0:1 at weiser2] got pmi command (from 4): get_appnum [proxy:0:1 at weiser2] PMI response: cmd=appnum appnum=0 [proxy:0:1 at weiser2] got pmi command (from 7): get_my_kvsname [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 [proxy:0:1 at weiser2] got pmi command (from 4): get_my_kvsname [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 [proxy:0:1 at weiser2] got pmi command (from 7): get kvsname=kvs_24541_0 key=PMI_process_mapping [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,2,4)) [proxy:0:1 at weiser2] got pmi command (from 4): get_my_kvsname [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 [proxy:0:1 at weiser2] got pmi command (from 7): barrier_in [proxy:0:1 at weiser2] got pmi command (from 4): get kvsname=kvs_24541_0 key=PMI_process_mapping [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,2,4)) [proxy:0:1 at weiser2] got pmi command (from 5): get_maxes [proxy:0:1 at weiser2] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024 [proxy:0:1 at weiser2] got pmi command (from 5): get_appnum [proxy:0:1 at weiser2] PMI response: cmd=appnum appnum=0 [proxy:0:1 at weiser2] got pmi command (from 4): put kvsname=kvs_24541_0 key=sharedFilename[4] value=/dev/shm/mpich_shar_tmpuKzlSa [proxy:0:1 at weiser2] cached command: sharedFilename[4]=/dev/shm/mpich_shar_tmpuKzlSa [proxy:0:1 at weiser2] PMI response: cmd=put_result rc=0 msg=success [proxy:0:1 at weiser2] got pmi command (from 5): get_my_kvsname [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 [proxy:0:1 at weiser2] got pmi command (from 4): barrier_in [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=put sharedFilename[4]=/dev/shm/mpich_shar_tmpuKzlSa [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=barrier_in [mpiexec at weiser1] PMI response to fd 6 pid 10: cmd=keyval_cache sharedFilename[0]=/dev/shm/mpich_shar_tmpnEZdQ9 sharedFilename[4]=/dev/shm/mpich_shar_tmpuKzlSa [mpiexec at weiser1] PMI response to fd 7 pid 10: cmd=keyval_cache sharedFilename[0]=/dev/shm/mpich_shar_tmpnEZdQ9 sharedFilename[4]=/dev/shm/mpich_shar_tmpuKzlSa [mpiexec at weiser1] PMI response to fd 6 pid 10: cmd=barrier_out [mpiexec at weiser1] PMI response to fd 7 pid 10: cmd=barrier_out [proxy:0:1 at weiser2] got pmi command (from 5): get_my_kvsname [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 [proxy:0:1 at weiser2] got pmi command 
(from 5): get kvsname=kvs_24541_0 key=PMI_process_mapping [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,2,4)) [proxy:0:1 at weiser2] got pmi command (from 10): init pmi_version=1 pmi_subversion=1 [proxy:0:1 at weiser2] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0 [proxy:0:1 at weiser2] got pmi command (from 5): barrier_in [proxy:0:1 at weiser2] got pmi command (from 10): get_maxes [proxy:0:1 at weiser2] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=1024 [proxy:0:1 at weiser2] got pmi command (from 10): get_appnum [proxy:0:1 at weiser2] PMI response: cmd=appnum appnum=0 [proxy:0:1 at weiser2] got pmi command (from 10): get_my_kvsname [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 [proxy:0:1 at weiser2] got pmi command (from 10): get_my_kvsname [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 [proxy:0:1 at weiser2] got pmi command (from 10): get kvsname=kvs_24541_0 key=PMI_process_mapping [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,2,4)) [proxy:0:1 at weiser2] got pmi command (from 10): barrier_in [proxy:0:1 at weiser2] flushing 1 put command(s) out [proxy:0:1 at weiser2] forwarding command (cmd=put sharedFilename[4]=/dev/shm/mpich_shar_tmpuKzlSa) upstream [proxy:0:1 at weiser2] forwarding command (cmd=barrier_in) upstream [proxy:0:0 at weiser1] PMI response: cmd=barrier_out [proxy:0:0 at weiser1] PMI response: cmd=barrier_out [proxy:0:0 at weiser1] PMI response: cmd=barrier_out [proxy:0:0 at weiser1] PMI response: cmd=barrier_out [proxy:0:0 at weiser1] got pmi command (from 6): get kvsname=kvs_24541_0 key=sharedFilename[0] [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success value=/dev/shm/mpich_shar_tmpnEZdQ9 [proxy:0:1 at weiser2] PMI response: cmd=barrier_out [proxy:0:1 at weiser2] PMI response: cmd=barrier_out [proxy:0:1 at weiser2] PMI response: cmd=barrier_out [proxy:0:1 at weiser2] PMI response: cmd=barrier_out [proxy:0:1 at weiser2] got pmi command (from 5): get kvsname=kvs_24541_0 key=sharedFilename[4] [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success value=/dev/shm/mpich_shar_tmpuKzlSa [proxy:0:1 at weiser2] got pmi command (from 7): get kvsname=kvs_24541_0 key=sharedFilename[4] [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success value=/dev/shm/mpich_shar_tmpuKzlSa [proxy:0:1 at weiser2] got pmi command (from 10): get kvsname=kvs_24541_0 key=sharedFilename[4] [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success value=/dev/shm/mpich_shar_tmpuKzlSa [proxy:0:0 at weiser1] got pmi command (from 8): get kvsname=kvs_24541_0 key=sharedFilename[0] [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success value=/dev/shm/mpich_shar_tmpnEZdQ9 [proxy:0:0 at weiser1] got pmi command (from 15): get kvsname=kvs_24541_0 key=sharedFilename[0] [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success value=/dev/shm/mpich_shar_tmpnEZdQ9 [proxy:0:0 at weiser1] got pmi command (from 0): put kvsname=kvs_24541_0 key=P0-businesscard value=description#weiser1$port#56190$ifname#192.168.0.101$ [proxy:0:0 at weiser1] cached command: P0-businesscard=description#weiser1$port#56190$ifname#192.168.0.101$ [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success [proxy:0:0 at weiser1] got pmi command (from 8): put kvsname=kvs_24541_0 key=P2-businesscard value=description#weiser1$port#40019$ifname#192.168.0.101$ [proxy:0:0 at weiser1] cached 
command: P2-businesscard=description#weiser1$port#40019$ifname#192.168.0.101$ [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success [proxy:0:0 at weiser1] got pmi command (from 15): put kvsname=kvs_24541_0 key=P3-businesscard value=description#weiser1$port#57150$ifname#192.168.0.101$ [proxy:0:0 at weiser1] cached command: P3-businesscard=description#weiser1$port#57150$ifname#192.168.0.101$ [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success [proxy:0:0 at weiser1] got pmi command (from 0): barrier_in [proxy:0:0 at weiser1] got pmi command (from 6): put kvsname=kvs_24541_0 key=P1-businesscard value=description#weiser1$port#34048$ifname#192.168.0.101$ [proxy:0:0 at weiser1] cached command: P1-businesscard=description#weiser1$port#34048$ifname#192.168.0.101$ [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success [proxy:0:0 at weiser1] got pmi command (from 8): barrier_in [proxy:0:0 at weiser1] got pmi command (from 6): barrier_in [proxy:0:0 at weiser1] got pmi command (from 15): barrier_in [proxy:0:0 at weiser1] flushing 4 put command(s) out [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=put P0-businesscard=description#weiser1$port#56190$ifname#192.168.0.101$ P2-businesscard=description#weiser1$port#40019$ifname#192.168.0.101$ P3-businesscard=description#weiser1$port#57150$ifname#192.168.0.101$ P1-businesscard=description#weiser1$port#34048$ifname#192.168.0.101$ [proxy:0:0 at weiser1] forwarding command (cmd=put P0-businesscard=description#weiser1$port#56190$ifname#192.168.0.101$ P2-businesscard=description#weiser1$port#40019$ifname#192.168.0.101$ P3-businesscard=description#weiser1$port#57150$ifname#192.168.0.101$ P1-businesscard=description#weiser1$port#34048$ifname#192.168.0.101$) upstream [proxy:0:0 at weiser1] forwarding command (cmd=barrier_in) upstream [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=barrier_in [proxy:0:1 at weiser2] got pmi command (from 4): put kvsname=kvs_24541_0 key=P4-businesscard value=description#weiser2$port#60693$ifname#192.168.0.102$ [proxy:0:1 at weiser2] cached command: P4-businesscard=description#weiser2$port#60693$ifname#192.168.0.102$ [proxy:0:1 at weiser2] PMI response: cmd=put_result rc=0 msg=success [proxy:0:1 at weiser2] got pmi command (from 5): put kvsname=kvs_24541_0 key=P5-businesscard value=description#weiser2$port#49938$ifname#192.168.0.102$ [proxy:0:1 at weiser2] cached command: P5-businesscard=description#weiser2$port#49938$ifname#192.168.0.102$ [proxy:0:1 at weiser2] PMI response: cmd=put_result rc=0 msg=success [proxy:0:1 at weiser2] got pmi command (from 7): put kvsname=kvs_24541_0 key=P6-businesscard value=description#weiser2$port#33516$ifname#192.168.0.102$ [proxy:0:1 at weiser2] cached command: P6-businesscard=description#weiser2$port#33516$ifname#192.168.0.102$ [proxy:0:1 at weiser2] PMI response: cmd=put_result rc=0 msg=success [proxy:0:1 at weiser2] got pmi command (from 10): put kvsname=kvs_24541_0 key=P7-businesscard value=description#weiser2$port#43116$ifname#192.168.0.102$ [proxy:0:1 at weiser2] cached command: P7-businesscard=description#weiser2$port#43116$ifname#192.168.0.102$ [proxy:0:1 at weiser2] [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=put P4-businesscard=description#weiser2$port#60693$ifname#192.168.0.102$ P5-businesscard=description#weiser2$port#49938$ifname#192.168.0.102$ P6-businesscard=description#weiser2$port#33516$ifname#192.168.0.102$ P7-businesscard=description#weiser2$port#43116$ifname#192.168.0.102$ PMI response: cmd=put_result rc=0 msg=success 
[proxy:0:1 at weiser2] got pmi command (from 4): barrier_in [proxy:0:1 at weiser2] got pmi command (from 5): barrier_in [proxy:0:1 at weiser2] got pmi command (from 7): barrier_in [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=barrier_in [mpiexec at weiser1] PMI response to fd 6 pid 10: cmd=keyval_cache P0-businesscard=description#weiser1$port#56190$ifname#192.168.0.101$ P2-businesscard=description#weiser1$port#40019$ifname#192.168.0.101$ P3-businesscard=description#weiser1$port#57150$ifname#192.168.0.101$ P1-businesscard=description#weiser1$port#34048$ifname#192.168.0.101$ P4-businesscard=description#weiser2$port#60693$ifname#192.168.0.102$ P5-businesscard=description#weiser2$port#49938$ifname#192.168.0.102$ P6-businesscard=description#weiser2$port#33516$ifname#192.168.0.102$ P7-businesscard=description#weiser2$port#43116$ifname#192.168.0.102$ [mpiexec at weiser1] PMI response to fd 7 pid 10: cmd=keyval_cache P0-businesscard=description#weiser1$port#56190$ifname#192.168.0.101$ P2-businesscard=description#weiser1$port#40019$ifname#192.168.0.101$ P3-businesscard=description#weiser1$port#57150$ifname#192.168.0.101$ P1-businesscard=description#weiser1$port#34048$ifname#192.168.0.101$ P4-businesscard=description#weiser2$port#60693$ifname#192.168.0.102$ P5-businesscard=description#weiser2$port#49938$ifname#192.168.0.102$ P6-businesscard=description#weiser2$port#33516$ifname#192.168.0.102$ P7-businesscard=description#weiser2$port#43116$ifname#192.168.0.102$ [mpiexec at weiser1] PMI response to fd 6 pid 10: cmd=barrier_out [mpiexec at weiser1] PMI response to fd 7 pid 10: cmd=barrier_out [proxy:0:0 at weiser1] PMI response: cmd=barrier_out [proxy:0:0 at weiser1] [proxy:0:1 at weiser2] got pmi command (from 10): barrier_in [proxy:0:1 at weiser2] flushing 4 put command(s) out [proxy:0:1 at weiser2] forwarding command (cmd=put P4-businesscard=description#weiser2$port#60693$ifname#192.168.0.102$ P5-businesscard=description#weiser2$port#49938$ifname#192.168.0.102$ P6-businesscard=description#weiser2$port#33516$ifname#192.168.0.102$ P7-businesscard=description#weiser2$port#43116$ifname#192.168.0.102$) upstream [proxy:0:1 at weiser2] forwarding command (cmd=barrier_in) upstream PMI response: cmd=barrier_out [proxy:0:0 at weiser1] PMI response: cmd=barrier_out [proxy:0:0 at weiser1] PMI response: cmd=barrier_out [proxy:0:1 at weiser2] PMI response: cmd=barrier_out [proxy:0:1 at weiser2] PMI response: cmd=barrier_out [proxy:0:1 at weiser2] PMI response: cmd=barrier_out [proxy:0:1 at weiser2] PMI response: cmd=barrier_out [proxy:0:1 at weiser2] got pmi command (from 4): get kvsname=kvs_24541_0 key=P0-businesscard [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success value=description#weiser1$port#56190$ifname#192.168.0.101$ ================================================================================ HPLinpack 2.1 -- High-Performance Linpack benchmark -- October 26, 2012 Written by A. Petitet and R. Clint Whaley, Innovative Computing Laboratory, UTK Modified by Piotr Luszczek, Innovative Computing Laboratory, UTK Modified by Julien Langou, University of Colorado Denver ================================================================================ An explanation of the input/output parameters follows: T/V : Wall time / encoded variant. N : The order of the coefficient matrix A. NB : The partitioning blocking factor. P : The number of process rows. Q : The number of process columns. Time : Time in seconds to solve the linear system. 
Gflops : Rate of execution for solving the linear system.

The following parameter values will be used:

N      : 14616
NB     : 168
PMAP   : Row-major process mapping
P      : 2
Q      : 4
PFACT  : Right
NBMIN  : 4
NDIV   : 2
RFACT  : Crout
BCAST  : 1ringM
DEPTH  : 1
SWAP   : Mix (threshold = 64)
L1     : transposed form
U      : transposed form
EQUIL  : yes
ALIGN  : 8 double precision words

--------------------------------------------------------------------------------

- The matrix A is randomly generated for each test.
- The following scaled residual check will be computed:
      ||Ax-b||_oo / ( eps * ( || x ||_oo * || A ||_oo + || b ||_oo ) * N )
- The relative machine precision (eps) is taken to be 1.110223e-16
[proxy:0:0 at weiser1] got pmi command (from 6): get
- Computational tests pass if scaled residuals are less than 16.0
kvsname=kvs_24541_0 key=P5-businesscard
[proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success value=description#weiser2$port#49938$ifname#192.168.0.102$
[proxy:0:0 at weiser1] got pmi command (from 15): get kvsname=kvs_24541_0 key=P7-businesscard
[proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success value=description#weiser2$port#43116$ifname#192.168.0.102$
[proxy:0:0 at weiser1] got pmi command (from 8): get kvsname=kvs_24541_0 key=P6-businesscard
[proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success value=description#weiser2$port#33516$ifname#192.168.0.102$
[proxy:0:1 at weiser2] got pmi command (from 5): get kvsname=kvs_24541_0 key=P1-businesscard
[proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success value=description#weiser1$port#34048$ifname#192.168.0.101$

===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= EXIT CODE: 9
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================

----------- END --------------

if that can help :(

On Fri, Jun 28, 2013 at 12:24 PM, Pavan Balaji wrote:
>
> Looks like your application aborted for some reason.
>
> -- Pavan
>
> On 06/27/2013 10:21 PM, Syed. Jahanzeb Maqbool Hashmi wrote:
>
>> My bad, I just found out that there was a duplicate entry like:
>> weiser1 127.0.1.1
>> weiser1 192.168.0.101
>> so I removed the 127.x.x.x entry and kept the hosts file contents the same
>> on both nodes. Now the previous error is reduced to this one:
>>
>> ------ START OF OUTPUT -------
>>
>> ....some HPL startup string (no final result)
>> ...skip.....
>>
>> ===================================================================================
>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
>> = EXIT CODE: 9
>> = CLEANING UP REMAINING PROCESSES
>> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
>> ===================================================================================
>> [proxy:0:0 at weiser1] HYD_pmcd_pmip_control_cmd_cb
>> (./pm/pmiserv/pmip_cb.c:886): assert (!closed) failed
>> [proxy:0:0 at weiser1] HYDT_dmxu_poll_wait_for_event
>> (./tools/demux/demux_poll.c:77): callback returned error status
>> [proxy:0:0 at weiser1] main (./pm/pmiserv/pmip.c:206): demux engine error
>> waiting for event
>> [mpiexec at weiser1] HYDT_bscu_wait_for_completion
>> (./tools/bootstrap/utils/bscu_wait.c:76): one of the processes
>> terminated badly; aborting
>> [mpiexec at weiser1] HYDT_bsci_wait_for_completion
>> (./tools/bootstrap/src/bsci_wait.c:23): launcher returned error waiting
>> for completion
>> [mpiexec at weiser1] HYD_pmci_wait_for_completion
>> (./pm/pmiserv/pmiserv_pmci.c:217): launcher returned error waiting for
>> completion
>> [mpiexec at weiser1] main (./ui/mpich/mpiexec.c:331): process manager error
>> waiting for completion
>>
>> ------ END OF OUTPUT -------
>>
>> On Fri, Jun 28, 2013 at 12:12 PM, Pavan Balaji
>> > wrote:
>>
>> On 06/27/2013 10:08 PM, Syed. Jahanzeb Maqbool Hashmi wrote:
>>
>> P4-businesscard=description#weiser2$port#57651$ifname#192.168.0.102$
>> P5-businesscard=description#weiser2$port#52622$ifname#192.168.0.102$
>> P6-businesscard=description#weiser2$port#55935$ifname#192.168.0.102$
>> P7-businesscard=description#weiser2$port#54952$ifname#192.168.0.102$
>> P0-businesscard=description#weiser1$port#41958$ifname#127.0.1.1$
>> P2-businesscard=description#weiser1$port#35049$ifname#127.0.1.1$
>> P1-businesscard=description#weiser1$port#39634$ifname#127.0.1.1$
>> P3-businesscard=description#weiser1$port#51802$ifname#127.0.1.1$
>>
>> I have two concerns with your output. Let's start with the first.
>>
>> Did you look at this question on the FAQ page?
>>
>> "Is your /etc/hosts file consistent across all nodes? Unless you are
>> using an external DNS server, the /etc/hosts file on every machine
>> should contain the correct IP information about all hosts in the
>> system."
>>
>> -- Pavan
>>
>> --
>> Pavan Balaji
>> http://www.mcs.anl.gov/~balaji
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji

From jeff.science at gmail.com Thu Jun 27 22:31:45 2013
From: jeff.science at gmail.com (Jeff Hammond)
Date: Thu, 27 Jun 2013 22:31:45 -0500
Subject: [mpich-discuss] mpich hangs
In-Reply-To:
References: <51CCF747.70308@mcs.anl.gov> <51CCF858.6020304@mcs.anl.gov>
	<51CCFA0C.4020607@mcs.anl.gov> <51CCFBAC.3090404@mcs.anl.gov>
	<51CCFD05.10109@mcs.anl.gov> <51CCFF39.3060401@mcs.anl.gov>
	<51CD01DB.30403@mcs.anl.gov>
Message-ID:

Can you run the cpi program? If that doesn't run, something is wrong,
because that program is trivial and correct.

Jeff
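
For readers without the MPICH source tree at hand: cpi ships as examples/cpi.c in the MPICH distribution. A minimal stand-in in the same spirit is sketched below; it is not the shipped cpi source (the file name cpi_check.c and everything in it are illustrative), but it exercises the same path that keeps failing in this thread: startup, businesscard exchange, and one inter-node collective.

/* cpi_check.c -- minimal MPI smoke test, a sketch rather than
 * MPICH's own examples/cpi.c. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, len, sum = 0;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &len);
    printf("rank %d of %d running on %s\n", rank, size, name);

    /* One collective forces the inter-node connections: with a bad
     * businesscard (e.g. ifname#127.0.1.1) this step fails even when
     * MPI_Init succeeded on every node. */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of ranks 0..%d = %d\n", size - 1, sum);

    MPI_Finalize();
    return 0;
}

Build and launch it the same way as the benchmark, e.g. "mpicc cpi_check.c -o cpi_check" and then "mpiexec -n 8 ./cpi_check" with the same host setup. If this fails across weiser1/weiser2, the problem is in the host or network configuration, not in HPL.

On Thu, Jun 27, 2013 at 10:29 PM, Syed.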
Jahanzeb Maqbool Hashmi wrote: > again that same error: > Fatal error in PMPI_Wait: A process has failed, error stack: > PMPI_Wait(180)............: MPI_Wait(request=0xbebb9a1c, status=0xbebb99f0) > failed > MPIR_Wait_impl(77)........: > dequeue_and_set_error(888): Communication error with rank 4 > > here is the verbose output: > > --------------START------------------ > > host: weiser1 > host: weiser2 > > ================================================================================================== > mpiexec options: > ---------------- > Base path: /mnt/nfs/install/mpich-install/bin/ > Launcher: (null) > Debug level: 1 > Enable X: -1 > > Global environment: > ------------------- > TERM=xterm > SHELL=/bin/bash > > XDG_SESSION_COOKIE=218a1dd8e20ea6d6ec61475b00000019-1372384778.679329-1845893422 > SSH_CLIENT=192.168.0.3 57311 22 > OLDPWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1 > SSH_TTY=/dev/pts/0 > USER=linaro > > LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36: > LD_LIBRARY_PATH=:/mnt/nfs/install/mpich-install/lib > MAIL=/var/mail/linaro > > PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/mnt/nfs/install/mpich-install/bin > PWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a > LANG=C.UTF-8 > SHLVL=1 > HOME=/home/linaro > LOGNAME=linaro > SSH_CONNECTION=192.168.0.3 57311 192.168.0.101 22 > LESSOPEN=| /usr/bin/lesspipe %s > LESSCLOSE=/usr/bin/lesspipe %s %s > _=/mnt/nfs/install/mpich-install/bin/mpiexec > > Hydra internal environment: > --------------------------- > GFORTRAN_UNBUFFERED_PRECONNECTED=y > > > Proxy information: > ********************* > [1] proxy: weiser1 (4 cores) > Exec list: ./xhpl (4 processes); > > [2] proxy: weiser2 (4 cores) > Exec list: ./xhpl (4 processes); > > > ================================================================================================== > > [mpiexec at weiser1] Timeout set to -1 (-1 means infinite) > [mpiexec at weiser1] Got a control port string of weiser1:45851 > > Proxy launch args: /mnt/nfs/install/mpich-install/bin/hydra_pmi_proxy > --control-port weiser1:45851 --debug --rmk user --launcher ssh --demux poll > --pgid 0 --retries 10 --usize -2 --proxy-id > > Arguments being passed to proxy 0: > --version 3.0.4 --iface-ip-env-name MPICH_INTERFACE_HOSTNAME --hostname > 
weiser1 --global-core-map 0,4,8 --pmi-id-map 0,0 --global-process-count 8 > --auto-cleanup 1 --pmi-kvsname kvs_24541_0 --pmi-process-mapping > (vector,(0,2,4)) --ckpoint-num -1 --global-inherited-env 20 'TERM=xterm' > 'SHELL=/bin/bash' > 'XDG_SESSION_COOKIE=218a1dd8e20ea6d6ec61475b00000019-1372384778.679329-1845893422' > 'SSH_CLIENT=192.168.0.3 57311 22' > 'OLDPWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1' 'SSH_TTY=/dev/pts/0' > 'USER=linaro' > 'LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:' > 'LD_LIBRARY_PATH=:/mnt/nfs/install/mpich-install/lib' > 'MAIL=/var/mail/linaro' > 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/mnt/nfs/install/mpich-install/bin' > 'PWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a' 'LANG=C.UTF-8' > 'SHLVL=1' 'HOME=/home/linaro' 'LOGNAME=linaro' 'SSH_CONNECTION=192.168.0.3 > 57311 192.168.0.101 22' 'LESSOPEN=| /usr/bin/lesspipe %s' > 'LESSCLOSE=/usr/bin/lesspipe %s %s' > '_=/mnt/nfs/install/mpich-install/bin/mpiexec' --global-user-env 0 > --global-system-env 1 'GFORTRAN_UNBUFFERED_PRECONNECTED=y' > --proxy-core-count 4 --exec --exec-appnum 0 --exec-proc-count 4 > --exec-local-env 0 --exec-wdir > /mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a --exec-args 1 ./xhpl > > Arguments being passed to proxy 1: > --version 3.0.4 --iface-ip-env-name MPICH_INTERFACE_HOSTNAME --hostname > weiser2 --global-core-map 0,4,8 --pmi-id-map 0,4 --global-process-count 8 > --auto-cleanup 1 --pmi-kvsname kvs_24541_0 --pmi-process-mapping > (vector,(0,2,4)) --ckpoint-num -1 --global-inherited-env 20 'TERM=xterm' > 'SHELL=/bin/bash' > 'XDG_SESSION_COOKIE=218a1dd8e20ea6d6ec61475b00000019-1372384778.679329-1845893422' > 'SSH_CLIENT=192.168.0.3 57311 22' > 'OLDPWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1' 'SSH_TTY=/dev/pts/0' > 'USER=linaro' > 
'LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:' > 'LD_LIBRARY_PATH=:/mnt/nfs/install/mpich-install/lib' > 'MAIL=/var/mail/linaro' > 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/mnt/nfs/install/mpich-install/bin' > 'PWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a' 'LANG=C.UTF-8' > 'SHLVL=1' 'HOME=/home/linaro' 'LOGNAME=linaro' 'SSH_CONNECTION=192.168.0.3 > 57311 192.168.0.101 22' 'LESSOPEN=| /usr/bin/lesspipe %s' > 'LESSCLOSE=/usr/bin/lesspipe %s %s' > '_=/mnt/nfs/install/mpich-install/bin/mpiexec' --global-user-env 0 > --global-system-env 1 'GFORTRAN_UNBUFFERED_PRECONNECTED=y' > --proxy-core-count 4 --exec --exec-appnum 0 --exec-proc-count 4 > --exec-local-env 0 --exec-wdir > /mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a --exec-args 1 ./xhpl > > [mpiexec at weiser1] Launch arguments: > /mnt/nfs/install/mpich-install/bin/hydra_pmi_proxy --control-port > weiser1:45851 --debug --rmk user --launcher ssh --demux poll --pgid 0 > --retries 10 --usize -2 --proxy-id 0 > [mpiexec at weiser1] Launch arguments: /usr/bin/ssh -x weiser2 > "/mnt/nfs/install/mpich-install/bin/hydra_pmi_proxy" --control-port > weiser1:45851 --debug --rmk user --launcher ssh --demux poll --pgid 0 > --retries 10 --usize -2 --proxy-id 1 > [proxy:0:0 at weiser1] got pmi command (from 0): init > pmi_version=1 pmi_subversion=1 > [proxy:0:0 at weiser1] PMI response: cmd=response_to_init pmi_version=1 > pmi_subversion=1 rc=0 > [proxy:0:0 at weiser1] got pmi command (from 0): get_maxes > > [proxy:0:0 at weiser1] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 > vallen_max=1024 > [proxy:0:0 at weiser1] got pmi command (from 15): init > pmi_version=1 pmi_subversion=1 > [proxy:0:0 at weiser1] PMI response: cmd=response_to_init pmi_version=1 > pmi_subversion=1 rc=0 > [proxy:0:0 at weiser1] got pmi command (from 15): get_maxes > > [proxy:0:0 at weiser1] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 > vallen_max=1024 > [proxy:0:0 at weiser1] got pmi command (from 8): init > pmi_version=1 pmi_subversion=1 > [proxy:0:0 at weiser1] PMI response: cmd=response_to_init pmi_version=1 > pmi_subversion=1 rc=0 > [proxy:0:0 at weiser1] got pmi command (from 0): get_appnum > > [proxy:0:0 at weiser1] PMI response: cmd=appnum appnum=0 > [proxy:0:0 at 
weiser1] got pmi command (from 15): get_appnum > > [proxy:0:0 at weiser1] PMI response: cmd=appnum appnum=0 > [proxy:0:0 at weiser1] got pmi command (from 0): get_my_kvsname > > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 > [proxy:0:0 at weiser1] got pmi command (from 8): get_maxes > > [proxy:0:0 at weiser1] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 > vallen_max=1024 > [proxy:0:0 at weiser1] got pmi command (from 0): get_my_kvsname > > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 > [proxy:0:0 at weiser1] got pmi command (from 6): init > pmi_version=1 pmi_subversion=1 > [proxy:0:0 at weiser1] PMI response: cmd=response_to_init pmi_version=1 > pmi_subversion=1 rc=0 > [proxy:0:0 at weiser1] got pmi command (from 15): get_my_kvsname > > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 > [proxy:0:0 at weiser1] got pmi command (from 0): get > kvsname=kvs_24541_0 key=PMI_process_mapping > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success > value=(vector,(0,2,4)) > [proxy:0:0 at weiser1] got pmi command (from 8): get_appnum > > [proxy:0:0 at weiser1] PMI response: cmd=appnum appnum=0 > [proxy:0:0 at weiser1] got pmi command (from 15): get_my_kvsname > > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 > [proxy:0:0 at weiser1] got pmi command (from 8): get_my_kvsname > > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 > [proxy:0:0 at weiser1] got pmi command (from 0): put > kvsname=kvs_24541_0 key=sharedFilename[0] > value=/dev/shm/mpich_shar_tmpnEZdQ9 > [proxy:0:0 at weiser1] cached command: > sharedFilename[0]=/dev/shm/mpich_shar_tmpnEZdQ9 > [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success > [proxy:0:0 at weiser1] got pmi command (from 15): get > kvsname=kvs_24541_0 key=PMI_process_mapping > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success > value=(vector,(0,2,4)) > [proxy:0:0 at weiser1] got pmi command (from 0): barrier_in > > [proxy:0:0 at weiser1] got pmi command (from 6): get_maxes > > [proxy:0:0 at weiser1] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 > vallen_max=1024 > [proxy:0:0 at weiser1] got pmi command (from 8): get_my_kvsname > > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 > [proxy:0:0 at weiser1] got pmi command (from 15): barrier_in > > [proxy:0:0 at weiser1] got pmi command (from 8): get > kvsname=kvs_24541_0 key=PMI_process_mapping > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success > value=(vector,(0,2,4)) > [proxy:0:0 at weiser1] got pmi command (from 6): get_appnum > > [proxy:0:0 at weiser1] PMI response: cmd=appnum appnum=0 > [proxy:0:0 at weiser1] got pmi command (from 8): barrier_in > > [proxy:0:0 at weiser1] got pmi command (from 6): get_my_kvsname > > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 > [proxy:0:0 at weiser1] got pmi command (from 6): get_my_kvsname > > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 > [proxy:0:0 at weiser1] got pmi command (from 6): get > kvsname=kvs_24541_0 key=PMI_process_mapping > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success > value=(vector,(0,2,4)) > [proxy:0:0 at weiser1] got pmi command (from 6): barrier_in > > [proxy:0:0 at weiser1] flushing 1 put command(s) out > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=put > sharedFilename[0]=/dev/shm/mpich_shar_tmpnEZdQ9 > [proxy:0:0 at weiser1] forwarding command 
(cmd=put > sharedFilename[0]=/dev/shm/mpich_shar_tmpnEZdQ9) upstream > [proxy:0:0 at weiser1] forwarding command (cmd=barrier_in) upstream > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=barrier_in > [proxy:0:1 at weiser2] got pmi command (from 7): init > pmi_version=1 pmi_subversion=1 > [proxy:0:1 at weiser2] PMI response: cmd=response_to_init pmi_version=1 > pmi_subversion=1 rc=0 > [proxy:0:1 at weiser2] got pmi command (from 5): init > pmi_version=1 pmi_subversion=1 > [proxy:0:1 at weiser2] PMI response: cmd=response_to_init pmi_version=1 > pmi_subversion=1 rc=0 > [proxy:0:1 at weiser2] got pmi command (from 7): get_maxes > > [proxy:0:1 at weiser2] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 > vallen_max=1024 > [proxy:0:1 at weiser2] got pmi command (from 4): init > pmi_version=1 pmi_subversion=1 > [proxy:0:1 at weiser2] PMI response: cmd=response_to_init pmi_version=1 > pmi_subversion=1 rc=0 > [proxy:0:1 at weiser2] got pmi command (from 7): get_appnum > > [proxy:0:1 at weiser2] PMI response: cmd=appnum appnum=0 > [proxy:0:1 at weiser2] got pmi command (from 4): get_maxes > > [proxy:0:1 at weiser2] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 > vallen_max=1024 > [proxy:0:1 at weiser2] got pmi command (from 7): get_my_kvsname > > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 > [proxy:0:1 at weiser2] got pmi command (from 4): get_appnum > > [proxy:0:1 at weiser2] PMI response: cmd=appnum appnum=0 > [proxy:0:1 at weiser2] got pmi command (from 7): get_my_kvsname > > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 > [proxy:0:1 at weiser2] got pmi command (from 4): get_my_kvsname > > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 > [proxy:0:1 at weiser2] got pmi command (from 7): get > kvsname=kvs_24541_0 key=PMI_process_mapping > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success > value=(vector,(0,2,4)) > [proxy:0:1 at weiser2] got pmi command (from 4): get_my_kvsname > > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 > [proxy:0:1 at weiser2] got pmi command (from 7): barrier_in > > [proxy:0:1 at weiser2] got pmi command (from 4): get > kvsname=kvs_24541_0 key=PMI_process_mapping > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success > value=(vector,(0,2,4)) > [proxy:0:1 at weiser2] got pmi command (from 5): get_maxes > > [proxy:0:1 at weiser2] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 > vallen_max=1024 > [proxy:0:1 at weiser2] got pmi command (from 5): get_appnum > > [proxy:0:1 at weiser2] PMI response: cmd=appnum appnum=0 > [proxy:0:1 at weiser2] got pmi command (from 4): put > kvsname=kvs_24541_0 key=sharedFilename[4] > value=/dev/shm/mpich_shar_tmpuKzlSa > [proxy:0:1 at weiser2] cached command: > sharedFilename[4]=/dev/shm/mpich_shar_tmpuKzlSa > [proxy:0:1 at weiser2] PMI response: cmd=put_result rc=0 msg=success > [proxy:0:1 at weiser2] got pmi command (from 5): get_my_kvsname > > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 > [proxy:0:1 at weiser2] got pmi command (from 4): barrier_in > > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=put > sharedFilename[4]=/dev/shm/mpich_shar_tmpuKzlSa > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=barrier_in > [mpiexec at weiser1] PMI response to fd 6 pid 10: cmd=keyval_cache > sharedFilename[0]=/dev/shm/mpich_shar_tmpnEZdQ9 > sharedFilename[4]=/dev/shm/mpich_shar_tmpuKzlSa > [mpiexec at weiser1] PMI response to fd 7 pid 10: 
cmd=keyval_cache > sharedFilename[0]=/dev/shm/mpich_shar_tmpnEZdQ9 > sharedFilename[4]=/dev/shm/mpich_shar_tmpuKzlSa > [mpiexec at weiser1] PMI response to fd 6 pid 10: cmd=barrier_out > [mpiexec at weiser1] PMI response to fd 7 pid 10: cmd=barrier_out > [proxy:0:1 at weiser2] got pmi command (from 5): get_my_kvsname > > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 > [proxy:0:1 at weiser2] got pmi command (from 5): get > kvsname=kvs_24541_0 key=PMI_process_mapping > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success > value=(vector,(0,2,4)) > [proxy:0:1 at weiser2] got pmi command (from 10): init > pmi_version=1 pmi_subversion=1 > [proxy:0:1 at weiser2] PMI response: cmd=response_to_init pmi_version=1 > pmi_subversion=1 rc=0 > [proxy:0:1 at weiser2] got pmi command (from 5): barrier_in > > [proxy:0:1 at weiser2] got pmi command (from 10): get_maxes > > [proxy:0:1 at weiser2] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 > vallen_max=1024 > [proxy:0:1 at weiser2] got pmi command (from 10): get_appnum > > [proxy:0:1 at weiser2] PMI response: cmd=appnum appnum=0 > [proxy:0:1 at weiser2] got pmi command (from 10): get_my_kvsname > > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 > [proxy:0:1 at weiser2] got pmi command (from 10): get_my_kvsname > > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 > [proxy:0:1 at weiser2] got pmi command (from 10): get > kvsname=kvs_24541_0 key=PMI_process_mapping > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success > value=(vector,(0,2,4)) > [proxy:0:1 at weiser2] got pmi command (from 10): barrier_in > > [proxy:0:1 at weiser2] flushing 1 put command(s) out > [proxy:0:1 at weiser2] forwarding command (cmd=put > sharedFilename[4]=/dev/shm/mpich_shar_tmpuKzlSa) upstream > [proxy:0:1 at weiser2] forwarding command (cmd=barrier_in) upstream > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out > [proxy:0:0 at weiser1] got pmi command (from 6): get > kvsname=kvs_24541_0 key=sharedFilename[0] > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success > value=/dev/shm/mpich_shar_tmpnEZdQ9 > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out > [proxy:0:1 at weiser2] got pmi command (from 5): get > kvsname=kvs_24541_0 key=sharedFilename[4] > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success > value=/dev/shm/mpich_shar_tmpuKzlSa > [proxy:0:1 at weiser2] got pmi command (from 7): get > kvsname=kvs_24541_0 key=sharedFilename[4] > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success > value=/dev/shm/mpich_shar_tmpuKzlSa > [proxy:0:1 at weiser2] got pmi command (from 10): get > kvsname=kvs_24541_0 key=sharedFilename[4] > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success > value=/dev/shm/mpich_shar_tmpuKzlSa > [proxy:0:0 at weiser1] got pmi command (from 8): get > kvsname=kvs_24541_0 key=sharedFilename[0] > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success > value=/dev/shm/mpich_shar_tmpnEZdQ9 > [proxy:0:0 at weiser1] got pmi command (from 15): get > kvsname=kvs_24541_0 key=sharedFilename[0] > [proxy:0:0 at weiser1] PMI response: cmd=get_result 
rc=0 msg=success > value=/dev/shm/mpich_shar_tmpnEZdQ9 > [proxy:0:0 at weiser1] got pmi command (from 0): put > kvsname=kvs_24541_0 key=P0-businesscard > value=description#weiser1$port#56190$ifname#192.168.0.101$ > [proxy:0:0 at weiser1] cached command: > P0-businesscard=description#weiser1$port#56190$ifname#192.168.0.101$ > [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success > [proxy:0:0 at weiser1] got pmi command (from 8): put > kvsname=kvs_24541_0 key=P2-businesscard > value=description#weiser1$port#40019$ifname#192.168.0.101$ > [proxy:0:0 at weiser1] cached command: > P2-businesscard=description#weiser1$port#40019$ifname#192.168.0.101$ > [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success > [proxy:0:0 at weiser1] got pmi command (from 15): put > kvsname=kvs_24541_0 key=P3-businesscard > value=description#weiser1$port#57150$ifname#192.168.0.101$ > [proxy:0:0 at weiser1] cached command: > P3-businesscard=description#weiser1$port#57150$ifname#192.168.0.101$ > [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success > [proxy:0:0 at weiser1] got pmi command (from 0): barrier_in > > [proxy:0:0 at weiser1] got pmi command (from 6): put > kvsname=kvs_24541_0 key=P1-businesscard > value=description#weiser1$port#34048$ifname#192.168.0.101$ > [proxy:0:0 at weiser1] cached command: > P1-businesscard=description#weiser1$port#34048$ifname#192.168.0.101$ > [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success > [proxy:0:0 at weiser1] got pmi command (from 8): barrier_in > > [proxy:0:0 at weiser1] got pmi command (from 6): barrier_in > > [proxy:0:0 at weiser1] got pmi command (from 15): barrier_in > > [proxy:0:0 at weiser1] flushing 4 put command(s) out > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=put > P0-businesscard=description#weiser1$port#56190$ifname#192.168.0.101$ > P2-businesscard=description#weiser1$port#40019$ifname#192.168.0.101$ > P3-businesscard=description#weiser1$port#57150$ifname#192.168.0.101$ > P1-businesscard=description#weiser1$port#34048$ifname#192.168.0.101$ > [proxy:0:0 at weiser1] forwarding command (cmd=put > P0-businesscard=description#weiser1$port#56190$ifname#192.168.0.101$ > P2-businesscard=description#weiser1$port#40019$ifname#192.168.0.101$ > P3-businesscard=description#weiser1$port#57150$ifname#192.168.0.101$ > P1-businesscard=description#weiser1$port#34048$ifname#192.168.0.101$) > upstream > [proxy:0:0 at weiser1] forwarding command (cmd=barrier_in) upstream > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=barrier_in > [proxy:0:1 at weiser2] got pmi command (from 4): put > kvsname=kvs_24541_0 key=P4-businesscard > value=description#weiser2$port#60693$ifname#192.168.0.102$ > [proxy:0:1 at weiser2] cached command: > P4-businesscard=description#weiser2$port#60693$ifname#192.168.0.102$ > [proxy:0:1 at weiser2] PMI response: cmd=put_result rc=0 msg=success > [proxy:0:1 at weiser2] got pmi command (from 5): put > kvsname=kvs_24541_0 key=P5-businesscard > value=description#weiser2$port#49938$ifname#192.168.0.102$ > [proxy:0:1 at weiser2] cached command: > P5-businesscard=description#weiser2$port#49938$ifname#192.168.0.102$ > [proxy:0:1 at weiser2] PMI response: cmd=put_result rc=0 msg=success > [proxy:0:1 at weiser2] got pmi command (from 7): put > kvsname=kvs_24541_0 key=P6-businesscard > value=description#weiser2$port#33516$ifname#192.168.0.102$ > [proxy:0:1 at weiser2] cached command: > P6-businesscard=description#weiser2$port#33516$ifname#192.168.0.102$ > [proxy:0:1 at weiser2] PMI response: 
cmd=put_result rc=0 msg=success > [proxy:0:1 at weiser2] got pmi command (from 10): put > kvsname=kvs_24541_0 key=P7-businesscard > value=description#weiser2$port#43116$ifname#192.168.0.102$ > [proxy:0:1 at weiser2] cached command: > P7-businesscard=description#weiser2$port#43116$ifname#192.168.0.102$ > [proxy:0:1 at weiser2] [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=put > P4-businesscard=description#weiser2$port#60693$ifname#192.168.0.102$ > P5-businesscard=description#weiser2$port#49938$ifname#192.168.0.102$ > P6-businesscard=description#weiser2$port#33516$ifname#192.168.0.102$ > P7-businesscard=description#weiser2$port#43116$ifname#192.168.0.102$ > PMI response: cmd=put_result rc=0 msg=success > [proxy:0:1 at weiser2] got pmi command (from 4): barrier_in > > [proxy:0:1 at weiser2] got pmi command (from 5): barrier_in > > [proxy:0:1 at weiser2] got pmi command (from 7): barrier_in > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=barrier_in > [mpiexec at weiser1] PMI response to fd 6 pid 10: cmd=keyval_cache > P0-businesscard=description#weiser1$port#56190$ifname#192.168.0.101$ > P2-businesscard=description#weiser1$port#40019$ifname#192.168.0.101$ > P3-businesscard=description#weiser1$port#57150$ifname#192.168.0.101$ > P1-businesscard=description#weiser1$port#34048$ifname#192.168.0.101$ > P4-businesscard=description#weiser2$port#60693$ifname#192.168.0.102$ > P5-businesscard=description#weiser2$port#49938$ifname#192.168.0.102$ > P6-businesscard=description#weiser2$port#33516$ifname#192.168.0.102$ > P7-businesscard=description#weiser2$port#43116$ifname#192.168.0.102$ > [mpiexec at weiser1] PMI response to fd 7 pid 10: cmd=keyval_cache > P0-businesscard=description#weiser1$port#56190$ifname#192.168.0.101$ > P2-businesscard=description#weiser1$port#40019$ifname#192.168.0.101$ > P3-businesscard=description#weiser1$port#57150$ifname#192.168.0.101$ > P1-businesscard=description#weiser1$port#34048$ifname#192.168.0.101$ > P4-businesscard=description#weiser2$port#60693$ifname#192.168.0.102$ > P5-businesscard=description#weiser2$port#49938$ifname#192.168.0.102$ > P6-businesscard=description#weiser2$port#33516$ifname#192.168.0.102$ > P7-businesscard=description#weiser2$port#43116$ifname#192.168.0.102$ > [mpiexec at weiser1] PMI response to fd 6 pid 10: cmd=barrier_out > [mpiexec at weiser1] PMI response to fd 7 pid 10: cmd=barrier_out > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out > [proxy:0:0 at weiser1] > [proxy:0:1 at weiser2] got pmi command (from 10): barrier_in > > [proxy:0:1 at weiser2] flushing 4 put command(s) out > [proxy:0:1 at weiser2] forwarding command (cmd=put > P4-businesscard=description#weiser2$port#60693$ifname#192.168.0.102$ > P5-businesscard=description#weiser2$port#49938$ifname#192.168.0.102$ > P6-businesscard=description#weiser2$port#33516$ifname#192.168.0.102$ > P7-businesscard=description#weiser2$port#43116$ifname#192.168.0.102$) > upstream > [proxy:0:1 at weiser2] forwarding command (cmd=barrier_in) upstream > PMI response: cmd=barrier_out > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out > [proxy:0:1 at weiser2] got pmi command (from 4): get > kvsname=kvs_24541_0 key=P0-businesscard > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success > 
value=description#weiser1$port#56190$ifname#192.168.0.101$
> ================================================================================
> HPLinpack 2.1 -- High-Performance Linpack benchmark -- October 26, 2012
> Written by A. Petitet and R. Clint Whaley, Innovative Computing Laboratory,
> UTK
> Modified by Piotr Luszczek, Innovative Computing Laboratory, UTK
> Modified by Julien Langou, University of Colorado Denver
> ================================================================================
>
> An explanation of the input/output parameters follows:
> T/V    : Wall time / encoded variant.
> N      : The order of the coefficient matrix A.
> NB     : The partitioning blocking factor.
> P      : The number of process rows.
> Q      : The number of process columns.
> Time   : Time in seconds to solve the linear system.
> Gflops : Rate of execution for solving the linear system.
>
> The following parameter values will be used:
>
> N      : 14616
> NB     : 168
> PMAP   : Row-major process mapping
> P      : 2
> Q      : 4
> PFACT  : Right
> NBMIN  : 4
> NDIV   : 2
> RFACT  : Crout
> BCAST  : 1ringM
> DEPTH  : 1
> SWAP   : Mix (threshold = 64)
> L1     : transposed form
> U      : transposed form
> EQUIL  : yes
> ALIGN  : 8 double precision words
>
> --------------------------------------------------------------------------------
>
> - The matrix A is randomly generated for each test.
> - The following scaled residual check will be computed:
>   ||Ax-b||_oo / ( eps * ( || x ||_oo * || A ||_oo + || b ||_oo ) * N )
> - The relative machine precision (eps) is taken to be
> 1.110223e-16
> [proxy:0:0 at weiser1] got pmi command (from 6): get
> - Computational tests pass if scaled residuals are less than
> 16.0
>
> kvsname=kvs_24541_0 key=P5-businesscard
> [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success
> value=description#weiser2$port#49938$ifname#192.168.0.102$
> [proxy:0:0 at weiser1] got pmi command (from 15): get
> kvsname=kvs_24541_0 key=P7-businesscard
> [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success
> value=description#weiser2$port#43116$ifname#192.168.0.102$
> [proxy:0:0 at weiser1] got pmi command (from 8): get
> kvsname=kvs_24541_0 key=P6-businesscard
> [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success
> value=description#weiser2$port#33516$ifname#192.168.0.102$
> [proxy:0:1 at weiser2] got pmi command (from 5): get
> kvsname=kvs_24541_0 key=P1-businesscard
> [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success
> value=description#weiser1$port#34048$ifname#192.168.0.101$
>
> ===================================================================================
> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
> = EXIT CODE: 9
> = CLEANING UP REMAINING PROCESSES
> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
> ===================================================================================
>
> ----------- END --------------
>
> if that can help :(
>
> On Fri, Jun 28, 2013 at 12:24 PM, Pavan Balaji wrote:
>>
>> Looks like your application aborted for some reason.
>>
>>  -- Pavan
>>
>> On 06/27/2013 10:21 PM, Syed. Jahanzeb Maqbool Hashmi wrote:
>>>
>>> My bad, I just found out that there was a duplicate entry like:
>>> weiser1 127.0.1.1
>>> weiser1 192.168.0.101
>>> so I removed the 127.x.x.x entry and kept the hostfile contents similar
>>> on both nodes. Now previous error is reduced to this one:
>>>
>>> ------ START OF OUTPUT -------
>>>
>>> ....some HPL startup string (no final result)
>>> ...skip.....
>>>
>>> ===================================================================================
>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
>>> = EXIT CODE: 9
>>> = CLEANING UP REMAINING PROCESSES
>>> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
>>> ===================================================================================
>>> [proxy:0:0 at weiser1] HYD_pmcd_pmip_control_cmd_cb
>>> (./pm/pmiserv/pmip_cb.c:886): assert (!closed) failed
>>> [proxy:0:0 at weiser1] HYDT_dmxu_poll_wait_for_event
>>> (./tools/demux/demux_poll.c:77): callback returned error status
>>> [proxy:0:0 at weiser1] main (./pm/pmiserv/pmip.c:206): demux engine error
>>> waiting for event
>>> [mpiexec at weiser1] HYDT_bscu_wait_for_completion
>>> (./tools/bootstrap/utils/bscu_wait.c:76): one of the processes
>>> terminated badly; aborting
>>> [mpiexec at weiser1] HYDT_bsci_wait_for_completion
>>> (./tools/bootstrap/src/bsci_wait.c:23): launcher returned error waiting
>>> for completion
>>> [mpiexec at weiser1] HYD_pmci_wait_for_completion
>>> (./pm/pmiserv/pmiserv_pmci.c:217): launcher returned error waiting for
>>> completion
>>> [mpiexec at weiser1] main (./ui/mpich/mpiexec.c:331): process manager error
>>> waiting for completion
>>>
>>> ------ END OF OUTPUT -------
>>>
>>> On Fri, Jun 28, 2013 at 12:12 PM, Pavan Balaji wrote:
>>>
>>> On 06/27/2013 10:08 PM, Syed. Jahanzeb Maqbool Hashmi wrote:
>>>
>>> P4-businesscard=description#weiser2$port#57651$ifname#192.168.0.102$
>>> P5-businesscard=description#weiser2$port#52622$ifname#192.168.0.102$
>>> P6-businesscard=description#weiser2$port#55935$ifname#192.168.0.102$
>>> P7-businesscard=description#weiser2$port#54952$ifname#192.168.0.102$
>>> P0-businesscard=description#weiser1$port#41958$ifname#127.0.1.1$
>>> P2-businesscard=description#weiser1$port#35049$ifname#127.0.1.1$
>>> P1-businesscard=description#weiser1$port#39634$ifname#127.0.1.1$
>>> P3-businesscard=description#weiser1$port#51802$ifname#127.0.1.1$
>>>
>>> I have two concerns with your output. Let's start with the first.
>>>
>>> Did you look at this question on the FAQ page?
>>>
>>> "Is your /etc/hosts file consistent across all nodes? Unless you are
>>> using an external DNS server, the /etc/hosts file on every machine
>>> should contain the correct IP information about all hosts in the
>>> system."
>>>
>>>  -- Pavan
>>>
>>> --
>>> Pavan Balaji
>>> http://www.mcs.anl.gov/~balaji
>>>
>>
>> --
>> Pavan Balaji
>> http://www.mcs.anl.gov/~balaji
>
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss

--
Jeff Hammond
jeff.science at gmail.com
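Pavan's FAQ pointer is worth dwelling on: in the quoted output above, ranks 0-3 advertise business cards with ifname#127.0.1.1$, a loopback address that the ranks on weiser2 cannot connect back to. As a sketch only (the host names and 192.168.0.x addresses are the ones from this thread; adapt them to your own network), a consistent /etc/hosts on every node would map each cluster host to its LAN address and nothing else:

    127.0.0.1       localhost
    192.168.0.101   weiser1
    192.168.0.102   weiser2

The Debian-style "127.0.1.1 weiser1" entry is precisely what leaks the loopback interface into the business cards, so no cluster host name should resolve to a 127.x.x.x address.
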
From jahanzeb.maqbool at gmail.com  Thu Jun 27 22:35:46 2013
From: jahanzeb.maqbool at gmail.com (Syed. Jahanzeb Maqbool Hashmi)
Date: Fri, 28 Jun 2013 12:35:46 +0900
Subject: [mpich-discuss] mpich hangs
In-Reply-To:
References: <51CCF747.70308@mcs.anl.gov> <51CCF858.6020304@mcs.anl.gov>
	<51CCFA0C.4020607@mcs.anl.gov> <51CCFBAC.3090404@mcs.anl.gov>
	<51CCFD05.10109@mcs.anl.gov> <51CCFF39.3060401@mcs.anl.gov>
	<51CD01DB.30403@mcs.anl.gov>
Message-ID:

Yes I am successfully able to run cpi program. No such error at all.

On Fri, Jun 28, 2013 at 12:31 PM, Jeff Hammond wrote:

> Can you run the cpi program? If that doesn't run, something is wrong,
> because that program is trivial and correct.
>
> Jeff
>
> On Thu, Jun 27, 2013 at 10:29 PM, Syed. Jahanzeb Maqbool Hashmi
> wrote:
> > again that same error:
> > Fatal error in PMPI_Wait: A process has failed, error stack:
> > PMPI_Wait(180)............: MPI_Wait(request=0xbebb9a1c, status=0xbebb99f0)
> > failed
> > MPIR_Wait_impl(77)........:
> > dequeue_and_set_error(888): Communication error with rank 4
> >
> > here is the verbose output:
> >
> > --------------START------------------
> >
> > ...skip.....
> >
> > ----------- END --------------
> >
> > if that can help :(
> >
> > On Fri, Jun 28, 2013 at 12:24 PM, Pavan Balaji wrote:
> >>
> >> Looks like your application aborted for some reason.
> >>
> >>  -- Pavan
> >>
> >> ...skip.....
> >
> > _______________________________________________
> > discuss mailing list     discuss at mpich.org
> > To manage subscription options or unsubscribe:
> > https://lists.mpich.org/mailman/listinfo/discuss
>
> --
> Jeff Hammond
> jeff.science at gmail.com
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss
Jahanzeb Maqbool Hashmi) Date: Fri, 28 Jun 2013 12:36:18 +0900 Subject: [mpich-discuss] mpich hangs In-Reply-To: References: <51CCF747.70308@mcs.anl.gov> <51CCF858.6020304@mcs.anl.gov> <51CCFA0C.4020607@mcs.anl.gov> <51CCFBAC.3090404@mcs.anl.gov> <51CCFD05.10109@mcs.anl.gov> <51CCFF39.3060401@mcs.anl.gov> <51CD01DB.30403@mcs.anl.gov> Message-ID: and here is that output: Process 0 of 8 is on weiser1 Process 1 of 8 is on weiser1 Process 2 of 8 is on weiser1 Process 3 of 8 is on weiser1 Process 4 of 8 is on weiser2 Process 5 of 8 is on weiser2 Process 6 of 8 is on weiser2 Process 7 of 8 is on weiser2 pi is approximately 3.1415926544231247, Error is 0.0000000008333316 wall clock time = 0.018203 --------------- On Fri, Jun 28, 2013 at 12:35 PM, Syed. Jahanzeb Maqbool Hashmi < jahanzeb.maqbool at gmail.com> wrote: > Yes I am successfully able to run cpi program. No such error at all. > > > > On Fri, Jun 28, 2013 at 12:31 PM, Jeff Hammond wrote: > >> Can you run the cpi program? If that doesn't run, something is wrong, >> because that program is trivial and correct. >> >> Jeff >> >> On Thu, Jun 27, 2013 at 10:29 PM, Syed. Jahanzeb Maqbool Hashmi >> wrote: >> > again that same error: >> > Fatal error in PMPI_Wait: A process has failed, error stack: >> > PMPI_Wait(180)............: MPI_Wait(request=0xbebb9a1c, >> status=0xbebb99f0) >> > failed >> > MPIR_Wait_impl(77)........: >> > dequeue_and_set_error(888): Communication error with rank 4 >> > >> > here is the verbose output: >> > >> > --------------START------------------ >> > >> > host: weiser1 >> > host: weiser2 >> > >> > >> ================================================================================================== >> > mpiexec options: >> > ---------------- >> > Base path: /mnt/nfs/install/mpich-install/bin/ >> > Launcher: (null) >> > Debug level: 1 >> > Enable X: -1 >> > >> > Global environment: >> > ------------------- >> > TERM=xterm >> > SHELL=/bin/bash >> > >> > >> XDG_SESSION_COOKIE=218a1dd8e20ea6d6ec61475b00000019-1372384778.679329-1845893422 >> > SSH_CLIENT=192.168.0.3 57311 22 >> > OLDPWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1 >> > SSH_TTY=/dev/pts/0 >> > USER=linaro >> > >> > >> LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35 >> >> :*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36: >> > LD_LIBRARY_PATH=:/mnt/nfs/install/mpich-install/lib 
>> > MAIL=/var/mail/linaro >> > >> > >> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/mnt/nfs/install/mpich-install/bin >> > PWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a >> > LANG=C.UTF-8 >> > SHLVL=1 >> > HOME=/home/linaro >> > LOGNAME=linaro >> > SSH_CONNECTION=192.168.0.3 57311 192.168.0.101 22 >> > LESSOPEN=| /usr/bin/lesspipe %s >> > LESSCLOSE=/usr/bin/lesspipe %s %s >> > _=/mnt/nfs/install/mpich-install/bin/mpiexec >> > >> > Hydra internal environment: >> > --------------------------- >> > GFORTRAN_UNBUFFERED_PRECONNECTED=y >> > >> > >> > Proxy information: >> > ********************* >> > [1] proxy: weiser1 (4 cores) >> > Exec list: ./xhpl (4 processes); >> > >> > [2] proxy: weiser2 (4 cores) >> > Exec list: ./xhpl (4 processes); >> > >> > >> > >> ================================================================================================== >> > >> > [mpiexec at weiser1] Timeout set to -1 (-1 means infinite) >> > [mpiexec at weiser1] Got a control port string of weiser1:45851 >> > >> > Proxy launch args: /mnt/nfs/install/mpich-install/bin/hydra_pmi_proxy >> > --control-port weiser1:45851 --debug --rmk user --launcher ssh --demux >> poll >> > --pgid 0 --retries 10 --usize -2 --proxy-id >> > >> > Arguments being passed to proxy 0: >> > --version 3.0.4 --iface-ip-env-name MPICH_INTERFACE_HOSTNAME --hostname >> > weiser1 --global-core-map 0,4,8 --pmi-id-map 0,0 --global-process-count >> 8 >> > --auto-cleanup 1 --pmi-kvsname kvs_24541_0 --pmi-process-mapping >> > (vector,(0,2,4)) --ckpoint-num -1 --global-inherited-env 20 'TERM=xterm' >> > 'SHELL=/bin/bash' >> > >> 'XDG_SESSION_COOKIE=218a1dd8e20ea6d6ec61475b00000019-1372384778.679329-1845893422' >> > 'SSH_CLIENT=192.168.0.3 57311 22' >> > 'OLDPWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1' 'SSH_TTY=/dev/pts/0' >> > 'USER=linaro' >> > >> 'LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;3 >> >> 5:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:' >> > 'LD_LIBRARY_PATH=:/mnt/nfs/install/mpich-install/lib' >> > 'MAIL=/var/mail/linaro' >> > >> 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/mnt/nfs/install/mpich-install/bin' >> > 'PWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a' 'LANG=C.UTF-8' >> > 'SHLVL=1' 'HOME=/home/linaro' 'LOGNAME=linaro' >> 'SSH_CONNECTION=192.168.0.3 >> > 
57311 192.168.0.101 22' 'LESSOPEN=| /usr/bin/lesspipe %s' >> > 'LESSCLOSE=/usr/bin/lesspipe %s %s' >> > '_=/mnt/nfs/install/mpich-install/bin/mpiexec' --global-user-env 0 >> > --global-system-env 1 'GFORTRAN_UNBUFFERED_PRECONNECTED=y' >> > --proxy-core-count 4 --exec --exec-appnum 0 --exec-proc-count 4 >> > --exec-local-env 0 --exec-wdir >> > /mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a --exec-args 1 ./xhpl >> > >> > Arguments being passed to proxy 1: >> > --version 3.0.4 --iface-ip-env-name MPICH_INTERFACE_HOSTNAME --hostname >> > weiser2 --global-core-map 0,4,8 --pmi-id-map 0,4 --global-process-count >> 8 >> > --auto-cleanup 1 --pmi-kvsname kvs_24541_0 --pmi-process-mapping >> > (vector,(0,2,4)) --ckpoint-num -1 --global-inherited-env 20 'TERM=xterm' >> > 'SHELL=/bin/bash' >> > >> 'XDG_SESSION_COOKIE=218a1dd8e20ea6d6ec61475b00000019-1372384778.679329-1845893422' >> > 'SSH_CLIENT=192.168.0.3 57311 22' >> > 'OLDPWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1' 'SSH_TTY=/dev/pts/0' >> > 'USER=linaro' >> > >> 'LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;3 >> >> 5:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:' >> > 'LD_LIBRARY_PATH=:/mnt/nfs/install/mpich-install/lib' >> > 'MAIL=/var/mail/linaro' >> > >> 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/mnt/nfs/install/mpich-install/bin' >> > 'PWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a' 'LANG=C.UTF-8' >> > 'SHLVL=1' 'HOME=/home/linaro' 'LOGNAME=linaro' >> 'SSH_CONNECTION=192.168.0.3 >> > 57311 192.168.0.101 22' 'LESSOPEN=| /usr/bin/lesspipe %s' >> > 'LESSCLOSE=/usr/bin/lesspipe %s %s' >> > '_=/mnt/nfs/install/mpich-install/bin/mpiexec' --global-user-env 0 >> > --global-system-env 1 'GFORTRAN_UNBUFFERED_PRECONNECTED=y' >> > --proxy-core-count 4 --exec --exec-appnum 0 --exec-proc-count 4 >> > --exec-local-env 0 --exec-wdir >> > /mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a --exec-args 1 ./xhpl >> > >> > [mpiexec at weiser1] Launch arguments: >> > /mnt/nfs/install/mpich-install/bin/hydra_pmi_proxy --control-port >> > weiser1:45851 --debug --rmk user --launcher ssh --demux poll --pgid 0 >> > --retries 10 --usize -2 --proxy-id 0 >> > [mpiexec at weiser1] Launch arguments: /usr/bin/ssh -x weiser2 >> > "/mnt/nfs/install/mpich-install/bin/hydra_pmi_proxy" --control-port >> > weiser1:45851 --debug --rmk user --launcher 
ssh --demux poll --pgid 0 >> > --retries 10 --usize -2 --proxy-id 1 >> > [proxy:0:0 at weiser1] got pmi command (from 0): init >> > pmi_version=1 pmi_subversion=1 >> > [proxy:0:0 at weiser1] PMI response: cmd=response_to_init pmi_version=1 >> > pmi_subversion=1 rc=0 >> > [proxy:0:0 at weiser1] got pmi command (from 0): get_maxes >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=maxes kvsname_max=256 >> keylen_max=64 >> > vallen_max=1024 >> > [proxy:0:0 at weiser1] got pmi command (from 15): init >> > pmi_version=1 pmi_subversion=1 >> > [proxy:0:0 at weiser1] PMI response: cmd=response_to_init pmi_version=1 >> > pmi_subversion=1 rc=0 >> > [proxy:0:0 at weiser1] got pmi command (from 15): get_maxes >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=maxes kvsname_max=256 >> keylen_max=64 >> > vallen_max=1024 >> > [proxy:0:0 at weiser1] got pmi command (from 8): init >> > pmi_version=1 pmi_subversion=1 >> > [proxy:0:0 at weiser1] PMI response: cmd=response_to_init pmi_version=1 >> > pmi_subversion=1 rc=0 >> > [proxy:0:0 at weiser1] got pmi command (from 0): get_appnum >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=appnum appnum=0 >> > [proxy:0:0 at weiser1] got pmi command (from 15): get_appnum >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=appnum appnum=0 >> > [proxy:0:0 at weiser1] got pmi command (from 0): get_my_kvsname >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:0 at weiser1] got pmi command (from 8): get_maxes >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=maxes kvsname_max=256 >> keylen_max=64 >> > vallen_max=1024 >> > [proxy:0:0 at weiser1] got pmi command (from 0): get_my_kvsname >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:0 at weiser1] got pmi command (from 6): init >> > pmi_version=1 pmi_subversion=1 >> > [proxy:0:0 at weiser1] PMI response: cmd=response_to_init pmi_version=1 >> > pmi_subversion=1 rc=0 >> > [proxy:0:0 at weiser1] got pmi command (from 15): get_my_kvsname >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:0 at weiser1] got pmi command (from 0): get >> > kvsname=kvs_24541_0 key=PMI_process_mapping >> > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success >> > value=(vector,(0,2,4)) >> > [proxy:0:0 at weiser1] got pmi command (from 8): get_appnum >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=appnum appnum=0 >> > [proxy:0:0 at weiser1] got pmi command (from 15): get_my_kvsname >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:0 at weiser1] got pmi command (from 8): get_my_kvsname >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:0 at weiser1] got pmi command (from 0): put >> > kvsname=kvs_24541_0 key=sharedFilename[0] >> > value=/dev/shm/mpich_shar_tmpnEZdQ9 >> > [proxy:0:0 at weiser1] cached command: >> > sharedFilename[0]=/dev/shm/mpich_shar_tmpnEZdQ9 >> > [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success >> > [proxy:0:0 at weiser1] got pmi command (from 15): get >> > kvsname=kvs_24541_0 key=PMI_process_mapping >> > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success >> > value=(vector,(0,2,4)) >> > [proxy:0:0 at weiser1] got pmi command (from 0): barrier_in >> > >> > [proxy:0:0 at weiser1] got pmi command (from 6): get_maxes >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=maxes kvsname_max=256 >> keylen_max=64 >> > vallen_max=1024 >> > [proxy:0:0 at weiser1] 
got pmi command (from 8): get_my_kvsname >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:0 at weiser1] got pmi command (from 15): barrier_in >> > >> > [proxy:0:0 at weiser1] got pmi command (from 8): get >> > kvsname=kvs_24541_0 key=PMI_process_mapping >> > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success >> > value=(vector,(0,2,4)) >> > [proxy:0:0 at weiser1] got pmi command (from 6): get_appnum >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=appnum appnum=0 >> > [proxy:0:0 at weiser1] got pmi command (from 8): barrier_in >> > >> > [proxy:0:0 at weiser1] got pmi command (from 6): get_my_kvsname >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:0 at weiser1] got pmi command (from 6): get_my_kvsname >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:0 at weiser1] got pmi command (from 6): get >> > kvsname=kvs_24541_0 key=PMI_process_mapping >> > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success >> > value=(vector,(0,2,4)) >> > [proxy:0:0 at weiser1] got pmi command (from 6): barrier_in >> > >> > [proxy:0:0 at weiser1] flushing 1 put command(s) out >> > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=put >> > sharedFilename[0]=/dev/shm/mpich_shar_tmpnEZdQ9 >> > [proxy:0:0 at weiser1] forwarding command (cmd=put >> > sharedFilename[0]=/dev/shm/mpich_shar_tmpnEZdQ9) upstream >> > [proxy:0:0 at weiser1] forwarding command (cmd=barrier_in) upstream >> > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=barrier_in >> > [proxy:0:1 at weiser2] got pmi command (from 7): init >> > pmi_version=1 pmi_subversion=1 >> > [proxy:0:1 at weiser2] PMI response: cmd=response_to_init pmi_version=1 >> > pmi_subversion=1 rc=0 >> > [proxy:0:1 at weiser2] got pmi command (from 5): init >> > pmi_version=1 pmi_subversion=1 >> > [proxy:0:1 at weiser2] PMI response: cmd=response_to_init pmi_version=1 >> > pmi_subversion=1 rc=0 >> > [proxy:0:1 at weiser2] got pmi command (from 7): get_maxes >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=maxes kvsname_max=256 >> keylen_max=64 >> > vallen_max=1024 >> > [proxy:0:1 at weiser2] got pmi command (from 4): init >> > pmi_version=1 pmi_subversion=1 >> > [proxy:0:1 at weiser2] PMI response: cmd=response_to_init pmi_version=1 >> > pmi_subversion=1 rc=0 >> > [proxy:0:1 at weiser2] got pmi command (from 7): get_appnum >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=appnum appnum=0 >> > [proxy:0:1 at weiser2] got pmi command (from 4): get_maxes >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=maxes kvsname_max=256 >> keylen_max=64 >> > vallen_max=1024 >> > [proxy:0:1 at weiser2] got pmi command (from 7): get_my_kvsname >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:1 at weiser2] got pmi command (from 4): get_appnum >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=appnum appnum=0 >> > [proxy:0:1 at weiser2] got pmi command (from 7): get_my_kvsname >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:1 at weiser2] got pmi command (from 4): get_my_kvsname >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:1 at weiser2] got pmi command (from 7): get >> > kvsname=kvs_24541_0 key=PMI_process_mapping >> > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success >> > value=(vector,(0,2,4)) >> > [proxy:0:1 at weiser2] got pmi command (from 4): 
get_my_kvsname >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:1 at weiser2] got pmi command (from 7): barrier_in >> > >> > [proxy:0:1 at weiser2] got pmi command (from 4): get >> > kvsname=kvs_24541_0 key=PMI_process_mapping >> > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success >> > value=(vector,(0,2,4)) >> > [proxy:0:1 at weiser2] got pmi command (from 5): get_maxes >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=maxes kvsname_max=256 >> keylen_max=64 >> > vallen_max=1024 >> > [proxy:0:1 at weiser2] got pmi command (from 5): get_appnum >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=appnum appnum=0 >> > [proxy:0:1 at weiser2] got pmi command (from 4): put >> > kvsname=kvs_24541_0 key=sharedFilename[4] >> > value=/dev/shm/mpich_shar_tmpuKzlSa >> > [proxy:0:1 at weiser2] cached command: >> > sharedFilename[4]=/dev/shm/mpich_shar_tmpuKzlSa >> > [proxy:0:1 at weiser2] PMI response: cmd=put_result rc=0 msg=success >> > [proxy:0:1 at weiser2] got pmi command (from 5): get_my_kvsname >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:1 at weiser2] got pmi command (from 4): barrier_in >> > >> > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=put >> > sharedFilename[4]=/dev/shm/mpich_shar_tmpuKzlSa >> > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=barrier_in >> > [mpiexec at weiser1] PMI response to fd 6 pid 10: cmd=keyval_cache >> > sharedFilename[0]=/dev/shm/mpich_shar_tmpnEZdQ9 >> > sharedFilename[4]=/dev/shm/mpich_shar_tmpuKzlSa >> > [mpiexec at weiser1] PMI response to fd 7 pid 10: cmd=keyval_cache >> > sharedFilename[0]=/dev/shm/mpich_shar_tmpnEZdQ9 >> > sharedFilename[4]=/dev/shm/mpich_shar_tmpuKzlSa >> > [mpiexec at weiser1] PMI response to fd 6 pid 10: cmd=barrier_out >> > [mpiexec at weiser1] PMI response to fd 7 pid 10: cmd=barrier_out >> > [proxy:0:1 at weiser2] got pmi command (from 5): get_my_kvsname >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:1 at weiser2] got pmi command (from 5): get >> > kvsname=kvs_24541_0 key=PMI_process_mapping >> > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success >> > value=(vector,(0,2,4)) >> > [proxy:0:1 at weiser2] got pmi command (from 10): init >> > pmi_version=1 pmi_subversion=1 >> > [proxy:0:1 at weiser2] PMI response: cmd=response_to_init pmi_version=1 >> > pmi_subversion=1 rc=0 >> > [proxy:0:1 at weiser2] got pmi command (from 5): barrier_in >> > >> > [proxy:0:1 at weiser2] got pmi command (from 10): get_maxes >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=maxes kvsname_max=256 >> keylen_max=64 >> > vallen_max=1024 >> > [proxy:0:1 at weiser2] got pmi command (from 10): get_appnum >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=appnum appnum=0 >> > [proxy:0:1 at weiser2] got pmi command (from 10): get_my_kvsname >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:1 at weiser2] got pmi command (from 10): get_my_kvsname >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:1 at weiser2] got pmi command (from 10): get >> > kvsname=kvs_24541_0 key=PMI_process_mapping >> > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success >> > value=(vector,(0,2,4)) >> > [proxy:0:1 at weiser2] got pmi command (from 10): barrier_in >> > >> > [proxy:0:1 at weiser2] flushing 1 put command(s) out >> > [proxy:0:1 at weiser2] forwarding command (cmd=put >> > 
sharedFilename[4]=/dev/shm/mpich_shar_tmpuKzlSa) upstream >> > [proxy:0:1 at weiser2] forwarding command (cmd=barrier_in) upstream >> > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out >> > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out >> > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out >> > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out >> > [proxy:0:0 at weiser1] got pmi command (from 6): get >> > kvsname=kvs_24541_0 key=sharedFilename[0] >> > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success >> > value=/dev/shm/mpich_shar_tmpnEZdQ9 >> > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out >> > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out >> > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out >> > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out >> > [proxy:0:1 at weiser2] got pmi command (from 5): get >> > kvsname=kvs_24541_0 key=sharedFilename[4] >> > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success >> > value=/dev/shm/mpich_shar_tmpuKzlSa >> > [proxy:0:1 at weiser2] got pmi command (from 7): get >> > kvsname=kvs_24541_0 key=sharedFilename[4] >> > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success >> > value=/dev/shm/mpich_shar_tmpuKzlSa >> > [proxy:0:1 at weiser2] got pmi command (from 10): get >> > kvsname=kvs_24541_0 key=sharedFilename[4] >> > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success >> > value=/dev/shm/mpich_shar_tmpuKzlSa >> > [proxy:0:0 at weiser1] got pmi command (from 8): get >> > kvsname=kvs_24541_0 key=sharedFilename[0] >> > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success >> > value=/dev/shm/mpich_shar_tmpnEZdQ9 >> > [proxy:0:0 at weiser1] got pmi command (from 15): get >> > kvsname=kvs_24541_0 key=sharedFilename[0] >> > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success >> > value=/dev/shm/mpich_shar_tmpnEZdQ9 >> > [proxy:0:0 at weiser1] got pmi command (from 0): put >> > kvsname=kvs_24541_0 key=P0-businesscard >> > value=description#weiser1$port#56190$ifname#192.168.0.101$ >> > [proxy:0:0 at weiser1] cached command: >> > P0-businesscard=description#weiser1$port#56190$ifname#192.168.0.101$ >> > [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success >> > [proxy:0:0 at weiser1] got pmi command (from 8): put >> > kvsname=kvs_24541_0 key=P2-businesscard >> > value=description#weiser1$port#40019$ifname#192.168.0.101$ >> > [proxy:0:0 at weiser1] cached command: >> > P2-businesscard=description#weiser1$port#40019$ifname#192.168.0.101$ >> > [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success >> > [proxy:0:0 at weiser1] got pmi command (from 15): put >> > kvsname=kvs_24541_0 key=P3-businesscard >> > value=description#weiser1$port#57150$ifname#192.168.0.101$ >> > [proxy:0:0 at weiser1] cached command: >> > P3-businesscard=description#weiser1$port#57150$ifname#192.168.0.101$ >> > [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success >> > [proxy:0:0 at weiser1] got pmi command (from 0): barrier_in >> > >> > [proxy:0:0 at weiser1] got pmi command (from 6): put >> > kvsname=kvs_24541_0 key=P1-businesscard >> > value=description#weiser1$port#34048$ifname#192.168.0.101$ >> > [proxy:0:0 at weiser1] cached command: >> > P1-businesscard=description#weiser1$port#34048$ifname#192.168.0.101$ >> > [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success >> > [proxy:0:0 at weiser1] got pmi command (from 8): barrier_in >> > >> > [proxy:0:0 at weiser1] got pmi command (from 6): 
barrier_in >> > >> > [proxy:0:0 at weiser1] got pmi command (from 15): barrier_in >> > >> > [proxy:0:0 at weiser1] flushing 4 put command(s) out >> > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=put >> > P0-businesscard=description#weiser1$port#56190$ifname#192.168.0.101$ >> > P2-businesscard=description#weiser1$port#40019$ifname#192.168.0.101$ >> > P3-businesscard=description#weiser1$port#57150$ifname#192.168.0.101$ >> > P1-businesscard=description#weiser1$port#34048$ifname#192.168.0.101$ >> > [proxy:0:0 at weiser1] forwarding command (cmd=put >> > P0-businesscard=description#weiser1$port#56190$ifname#192.168.0.101$ >> > P2-businesscard=description#weiser1$port#40019$ifname#192.168.0.101$ >> > P3-businesscard=description#weiser1$port#57150$ifname#192.168.0.101$ >> > P1-businesscard=description#weiser1$port#34048$ifname#192.168.0.101$) >> > upstream >> > [proxy:0:0 at weiser1] forwarding command (cmd=barrier_in) upstream >> > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=barrier_in >> > [proxy:0:1 at weiser2] got pmi command (from 4): put >> > kvsname=kvs_24541_0 key=P4-businesscard >> > value=description#weiser2$port#60693$ifname#192.168.0.102$ >> > [proxy:0:1 at weiser2] cached command: >> > P4-businesscard=description#weiser2$port#60693$ifname#192.168.0.102$ >> > [proxy:0:1 at weiser2] PMI response: cmd=put_result rc=0 msg=success >> > [proxy:0:1 at weiser2] got pmi command (from 5): put >> > kvsname=kvs_24541_0 key=P5-businesscard >> > value=description#weiser2$port#49938$ifname#192.168.0.102$ >> > [proxy:0:1 at weiser2] cached command: >> > P5-businesscard=description#weiser2$port#49938$ifname#192.168.0.102$ >> > [proxy:0:1 at weiser2] PMI response: cmd=put_result rc=0 msg=success >> > [proxy:0:1 at weiser2] got pmi command (from 7): put >> > kvsname=kvs_24541_0 key=P6-businesscard >> > value=description#weiser2$port#33516$ifname#192.168.0.102$ >> > [proxy:0:1 at weiser2] cached command: >> > P6-businesscard=description#weiser2$port#33516$ifname#192.168.0.102$ >> > [proxy:0:1 at weiser2] PMI response: cmd=put_result rc=0 msg=success >> > [proxy:0:1 at weiser2] got pmi command (from 10): put >> > kvsname=kvs_24541_0 key=P7-businesscard >> > value=description#weiser2$port#43116$ifname#192.168.0.102$ >> > [proxy:0:1 at weiser2] cached command: >> > P7-businesscard=description#weiser2$port#43116$ifname#192.168.0.102$ >> > [proxy:0:1 at weiser2] [mpiexec at weiser1] [pgid: 0] got PMI command: >> cmd=put >> > P4-businesscard=description#weiser2$port#60693$ifname#192.168.0.102$ >> > P5-businesscard=description#weiser2$port#49938$ifname#192.168.0.102$ >> > P6-businesscard=description#weiser2$port#33516$ifname#192.168.0.102$ >> > P7-businesscard=description#weiser2$port#43116$ifname#192.168.0.102$ >> > PMI response: cmd=put_result rc=0 msg=success >> > [proxy:0:1 at weiser2] got pmi command (from 4): barrier_in >> > >> > [proxy:0:1 at weiser2] got pmi command (from 5): barrier_in >> > >> > [proxy:0:1 at weiser2] got pmi command (from 7): barrier_in >> > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=barrier_in >> > [mpiexec at weiser1] PMI response to fd 6 pid 10: cmd=keyval_cache >> > P0-businesscard=description#weiser1$port#56190$ifname#192.168.0.101$ >> > P2-businesscard=description#weiser1$port#40019$ifname#192.168.0.101$ >> > P3-businesscard=description#weiser1$port#57150$ifname#192.168.0.101$ >> > P1-businesscard=description#weiser1$port#34048$ifname#192.168.0.101$ >> > P4-businesscard=description#weiser2$port#60693$ifname#192.168.0.102$ >> > 
P5-businesscard=description#weiser2$port#49938$ifname#192.168.0.102$ >> > P6-businesscard=description#weiser2$port#33516$ifname#192.168.0.102$ >> > P7-businesscard=description#weiser2$port#43116$ifname#192.168.0.102$ >> > [mpiexec at weiser1] PMI response to fd 7 pid 10: cmd=keyval_cache >> > P0-businesscard=description#weiser1$port#56190$ifname#192.168.0.101$ >> > P2-businesscard=description#weiser1$port#40019$ifname#192.168.0.101$ >> > P3-businesscard=description#weiser1$port#57150$ifname#192.168.0.101$ >> > P1-businesscard=description#weiser1$port#34048$ifname#192.168.0.101$ >> > P4-businesscard=description#weiser2$port#60693$ifname#192.168.0.102$ >> > P5-businesscard=description#weiser2$port#49938$ifname#192.168.0.102$ >> > P6-businesscard=description#weiser2$port#33516$ifname#192.168.0.102$ >> > P7-businesscard=description#weiser2$port#43116$ifname#192.168.0.102$ >> > [mpiexec at weiser1] PMI response to fd 6 pid 10: cmd=barrier_out >> > [mpiexec at weiser1] PMI response to fd 7 pid 10: cmd=barrier_out >> > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out >> > [proxy:0:0 at weiser1] >> > [proxy:0:1 at weiser2] got pmi command (from 10): barrier_in >> > >> > [proxy:0:1 at weiser2] flushing 4 put command(s) out >> > [proxy:0:1 at weiser2] forwarding command (cmd=put >> > P4-businesscard=description#weiser2$port#60693$ifname#192.168.0.102$ >> > P5-businesscard=description#weiser2$port#49938$ifname#192.168.0.102$ >> > P6-businesscard=description#weiser2$port#33516$ifname#192.168.0.102$ >> > P7-businesscard=description#weiser2$port#43116$ifname#192.168.0.102$) >> > upstream >> > [proxy:0:1 at weiser2] forwarding command (cmd=barrier_in) upstream >> > PMI response: cmd=barrier_out >> > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out >> > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out >> > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out >> > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out >> > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out >> > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out >> > [proxy:0:1 at weiser2] got pmi command (from 4): get >> > kvsname=kvs_24541_0 key=P0-businesscard >> > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success >> > value=description#weiser1$port#56190$ifname#192.168.0.101$ >> > >> ================================================================================ >> > HPLinpack 2.1 -- High-Performance Linpack benchmark -- October 26, >> 2012 >> > Written by A. Petitet and R. Clint Whaley, Innovative Computing >> Laboratory, >> > UTK >> > Modified by Piotr Luszczek, Innovative Computing Laboratory, UTK >> > Modified by Julien Langou, University of Colorado Denver >> > >> ================================================================================ >> > >> > An explanation of the input/output parameters follows: >> > T/V : Wall time / encoded variant. >> > N : The order of the coefficient matrix A. >> > NB : The partitioning blocking factor. >> > P : The number of process rows. >> > Q : The number of process columns. >> > Time : Time in seconds to solve the linear system. >> > Gflops : Rate of execution for solving the linear system. 
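For readers who have not run HPL before: every value in the parameter listing that follows is read from the HPL.dat file in the run's working directory. A minimal sketch of the HPL.dat lines that would produce this configuration, reconstructed from the values reported in the thread rather than copied from the poster's actual file, looks like this:

    HPL.out      output file name (if any)
    6            device out (6=stdout,7=stderr,file)
    1            # of problems sizes (N)
    14616        Ns
    1            # of NBs
    168          NBs
    0            PMAP process mapping (0=Row-,1=Column-major)
    1            # of process grids (P x Q)
    2            Ps
    4            Qs
    16.0         threshold

Note that P x Q = 2 x 4 matches the 8 ranks started by mpiexec here; a process grid that does not match the rank count is a common source of immediate HPL failures.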
>> >
>> > The following parameter values will be used:
>> >
>> > N      :   14616
>> > NB     :     168
>> > PMAP   : Row-major process mapping
>> > P      :       2
>> > Q      :       4
>> > PFACT  :   Right
>> > NBMIN  :       4
>> > NDIV   :       2
>> > RFACT  :   Crout
>> > BCAST  :  1ringM
>> > DEPTH  :       1
>> > SWAP   : Mix (threshold = 64)
>> > L1     : transposed form
>> > U      : transposed form
>> > EQUIL  : yes
>> > ALIGN  : 8 double precision words
>> >
>> > --------------------------------------------------------------------------------
>> >
>> > - The matrix A is randomly generated for each test.
>> > - The following scaled residual check will be computed:
>> >   ||Ax-b||_oo / ( eps * ( || x ||_oo * || A ||_oo + || b ||_oo ) * N )
>> > - The relative machine precision (eps) is taken to be 1.110223e-16
>> > [proxy:0:0 at weiser1] got pmi command (from 6): get
>> > - Computational tests pass if scaled residuals are less than 16.0
>> >
>> > kvsname=kvs_24541_0 key=P5-businesscard
>> > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success
>> > value=description#weiser2$port#49938$ifname#192.168.0.102$
>> > [proxy:0:0 at weiser1] got pmi command (from 15): get
>> > kvsname=kvs_24541_0 key=P7-businesscard
>> > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success
>> > value=description#weiser2$port#43116$ifname#192.168.0.102$
>> > [proxy:0:0 at weiser1] got pmi command (from 8): get
>> > kvsname=kvs_24541_0 key=P6-businesscard
>> > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success
>> > value=description#weiser2$port#33516$ifname#192.168.0.102$
>> > [proxy:0:1 at weiser2] got pmi command (from 5): get
>> > kvsname=kvs_24541_0 key=P1-businesscard
>> > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success
>> > value=description#weiser1$port#34048$ifname#192.168.0.101$
>> >
>> > ===================================================================================
>> > =   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
>> > =   EXIT CODE: 9
>> > =   CLEANING UP REMAINING PROCESSES
>> > =   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
>> > ===================================================================================
>> >
>> > ----------- END --------------
>> >
>> > if that can help :(
>> >
>> > On Fri, Jun 28, 2013 at 12:24 PM, Pavan Balaji wrote:
>> >>
>> >> Looks like your application aborted for some reason.
>> >>
>> >>  -- Pavan
>> >>
>> >> On 06/27/2013 10:21 PM, Syed. Jahanzeb Maqbool Hashmi wrote:
>> >>>
>> >>> My bad, I just found out that there was a duplicate entry like:
>> >>> weiser1 127.0.1.1
>> >>> weiser1 192.168.0.101
>> >>> so I removed the 127.x.x.x entry and kept the hostfile contents similar
>> >>> on both nodes. Now the previous error is reduced to this one:
>> >>>
>> >>> ------ START OF OUTPUT -------
>> >>>
>> >>> ....some HPL startup string (no final result)
>> >>> ...skip.....
>> >>> >> >>> >> >>> >> =================================================================================== >> >>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >> >>> = EXIT CODE: 9 >> >>> = CLEANING UP REMAINING PROCESSES >> >>> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES >> >>> >> >>> >> =================================================================================== >> >>> [proxy:0:0 at weiser1] HYD_pmcd_pmip_control_cmd_cb >> >>> (./pm/pmiserv/pmip_cb.c:886): assert (!closed) failed >> >>> [proxy:0:0 at weiser1] HYDT_dmxu_poll_wait_for_event >> >>> (./tools/demux/demux_poll.c:77): callback returned error status >> >>> [proxy:0:0 at weiser1] main (./pm/pmiserv/pmip.c:206): demux engine >> error >> >>> waiting for event >> >>> [mpiexec at weiser1] HYDT_bscu_wait_for_completion >> >>> (./tools/bootstrap/utils/bscu_wait.c:76): one of the processes >> >>> terminated badly; aborting >> >>> [mpiexec at weiser1] HYDT_bsci_wait_for_completion >> >>> (./tools/bootstrap/src/bsci_wait.c:23): launcher returned error >> waiting >> >>> for completion >> >>> [mpiexec at weiser1] HYD_pmci_wait_for_completion >> >>> (./pm/pmiserv/pmiserv_pmci.c:217): launcher returned error waiting for >> >>> completion >> >>> [mpiexec at weiser1] main (./ui/mpich/mpiexec.c:331): process manager >> error >> >>> waiting for completion >> >>> >> >>> ------ END OF OUTPUT ------- >> >>> >> >>> >> >>> >> >>> On Fri, Jun 28, 2013 at 12:12 PM, Pavan Balaji > >>> > wrote: >> >>> >> >>> >> >>> On 06/27/2013 10:08 PM, Syed. Jahanzeb Maqbool Hashmi wrote: >> >>> >> >>> >> >>> >> P4-businesscard=description#__weiser2$port#57651$ifname#192.__168.0.102$ >> >>> >> >>> >> P5-businesscard=description#__weiser2$port#52622$ifname#192.__168.0.102$ >> >>> >> >>> >> P6-businesscard=description#__weiser2$port#55935$ifname#192.__168.0.102$ >> >>> >> >>> >> P7-businesscard=description#__weiser2$port#54952$ifname#192.__168.0.102$ >> >>> >> >>> P0-businesscard=description#__weiser1$port#41958$ifname#127.__0.1.1$ >> >>> >> >>> P2-businesscard=description#__weiser1$port#35049$ifname#127.__0.1.1$ >> >>> >> >>> P1-businesscard=description#__weiser1$port#39634$ifname#127.__0.1.1$ >> >>> >> >>> P3-businesscard=description#__weiser1$port#51802$ifname#127.__0.1.1$ >> >>> >> >>> >> >>> >> >>> I have two concerns with your output. Let's start with the first. >> >>> >> >>> Did you look at this question on the FAQ page? >> >>> >> >>> "Is your /etc/hosts file consistent across all nodes? Unless you >> are >> >>> using an external DNS server, the /etc/hosts file on every machine >> >>> should contain the correct IP information about all hosts in the >> >>> system." >> >>> >> >>> >> >>> -- Pavan >> >>> >> >>> -- >> >>> Pavan Balaji >> >>> http://www.mcs.anl.gov/~balaji >> >>> >> >>> >> >> >> >> -- >> >> Pavan Balaji >> >> http://www.mcs.anl.gov/~balaji >> > >> > >> > >> > _______________________________________________ >> > discuss mailing list discuss at mpich.org >> > To manage subscription options or unsubscribe: >> > https://lists.mpich.org/mailman/listinfo/discuss >> >> >> >> -- >> Jeff Hammond >> jeff.science at gmail.com >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
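The /etc/hosts question quoted above turned out to be the first real issue in this thread. For reference, a minimal sketch of a consistent file for this two-node setup, reconstructed from the 192.168.0.x addresses visible in the businesscard output (the poster's actual file may contain more entries), would be identical on both weiser1 and weiser2:

    127.0.0.1      localhost
    192.168.0.101  weiser1
    192.168.0.102  weiser2

The Debian-style alias "weiser1 127.0.1.1" is exactly the duplicate entry removed earlier in the thread: while it was present, rank 0 advertised 127.0.1.1 in its businesscard, an address the ranks on weiser2 cannot reach.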
From balaji at mcs.anl.gov  Thu Jun 27 22:43:47 2013
From: balaji at mcs.anl.gov (Pavan Balaji)
Date: Thu, 27 Jun 2013 22:43:47 -0500
Subject: [mpich-discuss] mpich hangs
In-Reply-To:
References: <51CCF747.70308@mcs.anl.gov> <51CCF858.6020304@mcs.anl.gov>
 <51CCFA0C.4020607@mcs.anl.gov> <51CCFBAC.3090404@mcs.anl.gov>
 <51CCFD05.10109@mcs.anl.gov> <51CCFF39.3060401@mcs.anl.gov>
 <51CD01DB.30403@mcs.anl.gov>
Message-ID: <51CD0673.9060403@mcs.anl.gov>

It looks like one of your application processes is dying because of some
error that is likely unrelated to MPI.  You'll need to attach a debugger
to the processes to figure out what's going on.

 -- Pavan

On 06/27/2013 10:29 PM, Syed. Jahanzeb Maqbool Hashmi wrote:
> again that same error:
> Fatal error in PMPI_Wait: A process has failed, error stack:
> PMPI_Wait(180)............: MPI_Wait(request=0xbebb9a1c,
> status=0xbebb99f0) failed
> MPIR_Wait_impl(77)........:
> dequeue_and_set_error(888): Communication error with rank 4

--
Pavan Balaji
http://www.mcs.anl.gov/~balaji

From jahanzeb.maqbool at gmail.com  Thu Jun 27 22:45:23 2013
From: jahanzeb.maqbool at gmail.com (Syed. Jahanzeb Maqbool Hashmi)
Date: Fri, 28 Jun 2013 12:45:23 +0900
Subject: [mpich-discuss] mpich hangs
In-Reply-To: <51CD0673.9060403@mcs.anl.gov>
References: <51CCF747.70308@mcs.anl.gov> <51CCF858.6020304@mcs.anl.gov>
 <51CCFA0C.4020607@mcs.anl.gov> <51CCFBAC.3090404@mcs.anl.gov>
 <51CCFD05.10109@mcs.anl.gov> <51CCFF39.3060401@mcs.anl.gov>
 <51CD01DB.30403@mcs.anl.gov> <51CD0673.9060403@mcs.anl.gov>
Message-ID:

Yes, it seems that HPL itself is crashing. Thanks a lot for helping out.

On Friday, June 28, 2013, Pavan Balaji wrote:
>
> It looks like one of your application processes is dying because of some
> error that is likely unrelated to MPI.  You'll need to attach a debugger
> to the processes to figure out what's going on.
>
>  -- Pavan
>
> On 06/27/2013 10:29 PM, Syed. Jahanzeb Maqbool Hashmi wrote:
>
>> again that same error:
>> Fatal error in PMPI_Wait: A process has failed, error stack:
>> PMPI_Wait(180)............: MPI_Wait(request=0xbebb9a1c,
>> status=0xbebb99f0) failed
>> MPIR_Wait_impl(77)........:
>> dequeue_and_set_error(888): Communication error with rank 4
>>
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji
>
-------------- next part --------------
An HTML attachment was scrubbed...

From jeff.science at gmail.com  Thu Jun 27 22:48:17 2013
From: jeff.science at gmail.com (Jeff Hammond)
Date: Thu, 27 Jun 2013 22:48:17 -0500
Subject: [mpich-discuss] mpich hangs
In-Reply-To:
References: <51CCF747.70308@mcs.anl.gov> <51CCF858.6020304@mcs.anl.gov>
 <51CCFA0C.4020607@mcs.anl.gov> <51CCFBAC.3090404@mcs.anl.gov>
 <51CCFD05.10109@mcs.anl.gov> <51CCFF39.3060401@mcs.anl.gov>
 <51CD01DB.30403@mcs.anl.gov>
Message-ID: <-5097376753174424245@unknownmsgid>

If CPI runs and your code doesn't, it's an app issue. You said this was
HPL? Ask UTK for support with this. It's their code. HPL is dirt simple,
so I guess you are running it incorrectly.

Jeff

Sent from my iPhone

On Jun 27, 2013, at 10:36 PM, "Syed. Jahanzeb Maqbool Hashmi" <
jahanzeb.maqbool at gmail.com> wrote:

and here is that output:

Process 0 of 8 is on weiser1
Process 1 of 8 is on weiser1
Process 2 of 8 is on weiser1
Process 3 of 8 is on weiser1
Process 4 of 8 is on weiser2
Process 5 of 8 is on weiser2
Process 6 of 8 is on weiser2
Process 7 of 8 is on weiser2
pi is approximately 3.1415926544231247, Error is 0.0000000008333316
wall clock time = 0.018203

---------------

On Fri, Jun 28, 2013 at 12:35 PM, Syed.
Jahanzeb Maqbool Hashmi < jahanzeb.maqbool at gmail.com> wrote: > Yes I am successfully able to run cpi program. No such error at all. > > > > On Fri, Jun 28, 2013 at 12:31 PM, Jeff Hammond wrote: > >> Can you run the cpi program? If that doesn't run, something is wrong, >> because that program is trivial and correct. >> >> Jeff >> >> On Thu, Jun 27, 2013 at 10:29 PM, Syed. Jahanzeb Maqbool Hashmi >> wrote: >> > again that same error: >> > Fatal error in PMPI_Wait: A process has failed, error stack: >> > PMPI_Wait(180)............: MPI_Wait(request=0xbebb9a1c, >> status=0xbebb99f0) >> > failed >> > MPIR_Wait_impl(77)........: >> > dequeue_and_set_error(888): Communication error with rank 4 >> > >> > here is the verbose output: >> > >> > --------------START------------------ >> > >> > host: weiser1 >> > host: weiser2 >> > >> > >> ================================================================================================== >> > mpiexec options: >> > ---------------- >> > Base path: /mnt/nfs/install/mpich-install/bin/ >> > Launcher: (null) >> > Debug level: 1 >> > Enable X: -1 >> > >> > Global environment: >> > ------------------- >> > TERM=xterm >> > SHELL=/bin/bash >> > >> > >> XDG_SESSION_COOKIE=218a1dd8e20ea6d6ec61475b00000019-1372384778.679329-1845893422 >> > SSH_CLIENT=192.168.0.3 57311 22 >> > OLDPWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1 >> > SSH_TTY=/dev/pts/0 >> > USER=linaro >> > >> > >> LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35 >> >> :*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36: >> > LD_LIBRARY_PATH=:/mnt/nfs/install/mpich-install/lib >> > MAIL=/var/mail/linaro >> > >> > >> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/mnt/nfs/install/mpich-install/bin >> > PWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a >> > LANG=C.UTF-8 >> > SHLVL=1 >> > HOME=/home/linaro >> > LOGNAME=linaro >> > SSH_CONNECTION=192.168.0.3 57311 192.168.0.101 22 >> > LESSOPEN=| /usr/bin/lesspipe %s >> > LESSCLOSE=/usr/bin/lesspipe %s %s >> > _=/mnt/nfs/install/mpich-install/bin/mpiexec >> > >> > Hydra internal environment: >> > --------------------------- >> > GFORTRAN_UNBUFFERED_PRECONNECTED=y >> > >> > >> > Proxy information: >> > ********************* >> > [1] proxy: weiser1 (4 cores) >> > Exec list: ./xhpl (4 processes); >> > >> > [2] proxy: weiser2 (4 cores) >> > Exec 
list: ./xhpl (4 processes); >> > >> > >> > >> ================================================================================================== >> > >> > [mpiexec at weiser1] Timeout set to -1 (-1 means infinite) >> > [mpiexec at weiser1] Got a control port string of weiser1:45851 >> > >> > Proxy launch args: /mnt/nfs/install/mpich-install/bin/hydra_pmi_proxy >> > --control-port weiser1:45851 --debug --rmk user --launcher ssh --demux >> poll >> > --pgid 0 --retries 10 --usize -2 --proxy-id >> > >> > Arguments being passed to proxy 0: >> > --version 3.0.4 --iface-ip-env-name MPICH_INTERFACE_HOSTNAME --hostname >> > weiser1 --global-core-map 0,4,8 --pmi-id-map 0,0 --global-process-count >> 8 >> > --auto-cleanup 1 --pmi-kvsname kvs_24541_0 --pmi-process-mapping >> > (vector,(0,2,4)) --ckpoint-num -1 --global-inherited-env 20 'TERM=xterm' >> > 'SHELL=/bin/bash' >> > >> 'XDG_SESSION_COOKIE=218a1dd8e20ea6d6ec61475b00000019-1372384778.679329-1845893422' >> > 'SSH_CLIENT=192.168.0.3 57311 22' >> > 'OLDPWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1' 'SSH_TTY=/dev/pts/0' >> > 'USER=linaro' >> > >> 'LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;3 >> >> 5:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:' >> > 'LD_LIBRARY_PATH=:/mnt/nfs/install/mpich-install/lib' >> > 'MAIL=/var/mail/linaro' >> > >> 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/mnt/nfs/install/mpich-install/bin' >> > 'PWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a' 'LANG=C.UTF-8' >> > 'SHLVL=1' 'HOME=/home/linaro' 'LOGNAME=linaro' >> 'SSH_CONNECTION=192.168.0.3 >> > 57311 192.168.0.101 22' 'LESSOPEN=| /usr/bin/lesspipe %s' >> > 'LESSCLOSE=/usr/bin/lesspipe %s %s' >> > '_=/mnt/nfs/install/mpich-install/bin/mpiexec' --global-user-env 0 >> > --global-system-env 1 'GFORTRAN_UNBUFFERED_PRECONNECTED=y' >> > --proxy-core-count 4 --exec --exec-appnum 0 --exec-proc-count 4 >> > --exec-local-env 0 --exec-wdir >> > /mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a --exec-args 1 ./xhpl >> > >> > Arguments being passed to proxy 1: >> > --version 3.0.4 --iface-ip-env-name MPICH_INTERFACE_HOSTNAME --hostname >> > weiser2 --global-core-map 0,4,8 --pmi-id-map 0,4 --global-process-count >> 8 >> > --auto-cleanup 1 --pmi-kvsname kvs_24541_0 --pmi-process-mapping >> > (vector,(0,2,4)) --ckpoint-num -1 --global-inherited-env 20 
'TERM=xterm' >> > 'SHELL=/bin/bash' >> > >> 'XDG_SESSION_COOKIE=218a1dd8e20ea6d6ec61475b00000019-1372384778.679329-1845893422' >> > 'SSH_CLIENT=192.168.0.3 57311 22' >> > 'OLDPWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1' 'SSH_TTY=/dev/pts/0' >> > 'USER=linaro' >> > >> 'LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;3 >> >> 5:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:' >> > 'LD_LIBRARY_PATH=:/mnt/nfs/install/mpich-install/lib' >> > 'MAIL=/var/mail/linaro' >> > >> 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/mnt/nfs/install/mpich-install/bin' >> > 'PWD=/mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a' 'LANG=C.UTF-8' >> > 'SHLVL=1' 'HOME=/home/linaro' 'LOGNAME=linaro' >> 'SSH_CONNECTION=192.168.0.3 >> > 57311 192.168.0.101 22' 'LESSOPEN=| /usr/bin/lesspipe %s' >> > 'LESSCLOSE=/usr/bin/lesspipe %s %s' >> > '_=/mnt/nfs/install/mpich-install/bin/mpiexec' --global-user-env 0 >> > --global-system-env 1 'GFORTRAN_UNBUFFERED_PRECONNECTED=y' >> > --proxy-core-count 4 --exec --exec-appnum 0 --exec-proc-count 4 >> > --exec-local-env 0 --exec-wdir >> > /mnt/nfs/jahanzeb/bench/hpl/hpl-2.1/bin/armv7-a --exec-args 1 ./xhpl >> > >> > [mpiexec at weiser1] Launch arguments: >> > /mnt/nfs/install/mpich-install/bin/hydra_pmi_proxy --control-port >> > weiser1:45851 --debug --rmk user --launcher ssh --demux poll --pgid 0 >> > --retries 10 --usize -2 --proxy-id 0 >> > [mpiexec at weiser1] Launch arguments: /usr/bin/ssh -x weiser2 >> > "/mnt/nfs/install/mpich-install/bin/hydra_pmi_proxy" --control-port >> > weiser1:45851 --debug --rmk user --launcher ssh --demux poll --pgid 0 >> > --retries 10 --usize -2 --proxy-id 1 >> > [proxy:0:0 at weiser1] got pmi command (from 0): init >> > pmi_version=1 pmi_subversion=1 >> > [proxy:0:0 at weiser1] PMI response: cmd=response_to_init pmi_version=1 >> > pmi_subversion=1 rc=0 >> > [proxy:0:0 at weiser1] got pmi command (from 0): get_maxes >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=maxes kvsname_max=256 >> keylen_max=64 >> > vallen_max=1024 >> > [proxy:0:0 at weiser1] got pmi command (from 15): init >> > pmi_version=1 pmi_subversion=1 >> > [proxy:0:0 at weiser1] PMI response: cmd=response_to_init pmi_version=1 >> > pmi_subversion=1 rc=0 >> > [proxy:0:0 at weiser1] got pmi command (from 15): get_maxes >> > >> > [proxy:0:0 at weiser1] PMI response: 
cmd=maxes kvsname_max=256 >> keylen_max=64 >> > vallen_max=1024 >> > [proxy:0:0 at weiser1] got pmi command (from 8): init >> > pmi_version=1 pmi_subversion=1 >> > [proxy:0:0 at weiser1] PMI response: cmd=response_to_init pmi_version=1 >> > pmi_subversion=1 rc=0 >> > [proxy:0:0 at weiser1] got pmi command (from 0): get_appnum >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=appnum appnum=0 >> > [proxy:0:0 at weiser1] got pmi command (from 15): get_appnum >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=appnum appnum=0 >> > [proxy:0:0 at weiser1] got pmi command (from 0): get_my_kvsname >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:0 at weiser1] got pmi command (from 8): get_maxes >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=maxes kvsname_max=256 >> keylen_max=64 >> > vallen_max=1024 >> > [proxy:0:0 at weiser1] got pmi command (from 0): get_my_kvsname >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:0 at weiser1] got pmi command (from 6): init >> > pmi_version=1 pmi_subversion=1 >> > [proxy:0:0 at weiser1] PMI response: cmd=response_to_init pmi_version=1 >> > pmi_subversion=1 rc=0 >> > [proxy:0:0 at weiser1] got pmi command (from 15): get_my_kvsname >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:0 at weiser1] got pmi command (from 0): get >> > kvsname=kvs_24541_0 key=PMI_process_mapping >> > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success >> > value=(vector,(0,2,4)) >> > [proxy:0:0 at weiser1] got pmi command (from 8): get_appnum >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=appnum appnum=0 >> > [proxy:0:0 at weiser1] got pmi command (from 15): get_my_kvsname >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:0 at weiser1] got pmi command (from 8): get_my_kvsname >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:0 at weiser1] got pmi command (from 0): put >> > kvsname=kvs_24541_0 key=sharedFilename[0] >> > value=/dev/shm/mpich_shar_tmpnEZdQ9 >> > [proxy:0:0 at weiser1] cached command: >> > sharedFilename[0]=/dev/shm/mpich_shar_tmpnEZdQ9 >> > [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success >> > [proxy:0:0 at weiser1] got pmi command (from 15): get >> > kvsname=kvs_24541_0 key=PMI_process_mapping >> > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success >> > value=(vector,(0,2,4)) >> > [proxy:0:0 at weiser1] got pmi command (from 0): barrier_in >> > >> > [proxy:0:0 at weiser1] got pmi command (from 6): get_maxes >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=maxes kvsname_max=256 >> keylen_max=64 >> > vallen_max=1024 >> > [proxy:0:0 at weiser1] got pmi command (from 8): get_my_kvsname >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:0 at weiser1] got pmi command (from 15): barrier_in >> > >> > [proxy:0:0 at weiser1] got pmi command (from 8): get >> > kvsname=kvs_24541_0 key=PMI_process_mapping >> > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success >> > value=(vector,(0,2,4)) >> > [proxy:0:0 at weiser1] got pmi command (from 6): get_appnum >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=appnum appnum=0 >> > [proxy:0:0 at weiser1] got pmi command (from 8): barrier_in >> > >> > [proxy:0:0 at weiser1] got pmi command (from 6): get_my_kvsname >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname 
kvsname=kvs_24541_0 >> > [proxy:0:0 at weiser1] got pmi command (from 6): get_my_kvsname >> > >> > [proxy:0:0 at weiser1] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:0 at weiser1] got pmi command (from 6): get >> > kvsname=kvs_24541_0 key=PMI_process_mapping >> > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success >> > value=(vector,(0,2,4)) >> > [proxy:0:0 at weiser1] got pmi command (from 6): barrier_in >> > >> > [proxy:0:0 at weiser1] flushing 1 put command(s) out >> > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=put >> > sharedFilename[0]=/dev/shm/mpich_shar_tmpnEZdQ9 >> > [proxy:0:0 at weiser1] forwarding command (cmd=put >> > sharedFilename[0]=/dev/shm/mpich_shar_tmpnEZdQ9) upstream >> > [proxy:0:0 at weiser1] forwarding command (cmd=barrier_in) upstream >> > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=barrier_in >> > [proxy:0:1 at weiser2] got pmi command (from 7): init >> > pmi_version=1 pmi_subversion=1 >> > [proxy:0:1 at weiser2] PMI response: cmd=response_to_init pmi_version=1 >> > pmi_subversion=1 rc=0 >> > [proxy:0:1 at weiser2] got pmi command (from 5): init >> > pmi_version=1 pmi_subversion=1 >> > [proxy:0:1 at weiser2] PMI response: cmd=response_to_init pmi_version=1 >> > pmi_subversion=1 rc=0 >> > [proxy:0:1 at weiser2] got pmi command (from 7): get_maxes >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=maxes kvsname_max=256 >> keylen_max=64 >> > vallen_max=1024 >> > [proxy:0:1 at weiser2] got pmi command (from 4): init >> > pmi_version=1 pmi_subversion=1 >> > [proxy:0:1 at weiser2] PMI response: cmd=response_to_init pmi_version=1 >> > pmi_subversion=1 rc=0 >> > [proxy:0:1 at weiser2] got pmi command (from 7): get_appnum >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=appnum appnum=0 >> > [proxy:0:1 at weiser2] got pmi command (from 4): get_maxes >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=maxes kvsname_max=256 >> keylen_max=64 >> > vallen_max=1024 >> > [proxy:0:1 at weiser2] got pmi command (from 7): get_my_kvsname >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:1 at weiser2] got pmi command (from 4): get_appnum >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=appnum appnum=0 >> > [proxy:0:1 at weiser2] got pmi command (from 7): get_my_kvsname >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:1 at weiser2] got pmi command (from 4): get_my_kvsname >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:1 at weiser2] got pmi command (from 7): get >> > kvsname=kvs_24541_0 key=PMI_process_mapping >> > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success >> > value=(vector,(0,2,4)) >> > [proxy:0:1 at weiser2] got pmi command (from 4): get_my_kvsname >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:1 at weiser2] got pmi command (from 7): barrier_in >> > >> > [proxy:0:1 at weiser2] got pmi command (from 4): get >> > kvsname=kvs_24541_0 key=PMI_process_mapping >> > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success >> > value=(vector,(0,2,4)) >> > [proxy:0:1 at weiser2] got pmi command (from 5): get_maxes >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=maxes kvsname_max=256 >> keylen_max=64 >> > vallen_max=1024 >> > [proxy:0:1 at weiser2] got pmi command (from 5): get_appnum >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=appnum appnum=0 >> > [proxy:0:1 at weiser2] got pmi command (from 4): put >> > 
kvsname=kvs_24541_0 key=sharedFilename[4] >> > value=/dev/shm/mpich_shar_tmpuKzlSa >> > [proxy:0:1 at weiser2] cached command: >> > sharedFilename[4]=/dev/shm/mpich_shar_tmpuKzlSa >> > [proxy:0:1 at weiser2] PMI response: cmd=put_result rc=0 msg=success >> > [proxy:0:1 at weiser2] got pmi command (from 5): get_my_kvsname >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:1 at weiser2] got pmi command (from 4): barrier_in >> > >> > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=put >> > sharedFilename[4]=/dev/shm/mpich_shar_tmpuKzlSa >> > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=barrier_in >> > [mpiexec at weiser1] PMI response to fd 6 pid 10: cmd=keyval_cache >> > sharedFilename[0]=/dev/shm/mpich_shar_tmpnEZdQ9 >> > sharedFilename[4]=/dev/shm/mpich_shar_tmpuKzlSa >> > [mpiexec at weiser1] PMI response to fd 7 pid 10: cmd=keyval_cache >> > sharedFilename[0]=/dev/shm/mpich_shar_tmpnEZdQ9 >> > sharedFilename[4]=/dev/shm/mpich_shar_tmpuKzlSa >> > [mpiexec at weiser1] PMI response to fd 6 pid 10: cmd=barrier_out >> > [mpiexec at weiser1] PMI response to fd 7 pid 10: cmd=barrier_out >> > [proxy:0:1 at weiser2] got pmi command (from 5): get_my_kvsname >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:1 at weiser2] got pmi command (from 5): get >> > kvsname=kvs_24541_0 key=PMI_process_mapping >> > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success >> > value=(vector,(0,2,4)) >> > [proxy:0:1 at weiser2] got pmi command (from 10): init >> > pmi_version=1 pmi_subversion=1 >> > [proxy:0:1 at weiser2] PMI response: cmd=response_to_init pmi_version=1 >> > pmi_subversion=1 rc=0 >> > [proxy:0:1 at weiser2] got pmi command (from 5): barrier_in >> > >> > [proxy:0:1 at weiser2] got pmi command (from 10): get_maxes >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=maxes kvsname_max=256 >> keylen_max=64 >> > vallen_max=1024 >> > [proxy:0:1 at weiser2] got pmi command (from 10): get_appnum >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=appnum appnum=0 >> > [proxy:0:1 at weiser2] got pmi command (from 10): get_my_kvsname >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:1 at weiser2] got pmi command (from 10): get_my_kvsname >> > >> > [proxy:0:1 at weiser2] PMI response: cmd=my_kvsname kvsname=kvs_24541_0 >> > [proxy:0:1 at weiser2] got pmi command (from 10): get >> > kvsname=kvs_24541_0 key=PMI_process_mapping >> > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success >> > value=(vector,(0,2,4)) >> > [proxy:0:1 at weiser2] got pmi command (from 10): barrier_in >> > >> > [proxy:0:1 at weiser2] flushing 1 put command(s) out >> > [proxy:0:1 at weiser2] forwarding command (cmd=put >> > sharedFilename[4]=/dev/shm/mpich_shar_tmpuKzlSa) upstream >> > [proxy:0:1 at weiser2] forwarding command (cmd=barrier_in) upstream >> > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out >> > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out >> > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out >> > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out >> > [proxy:0:0 at weiser1] got pmi command (from 6): get >> > kvsname=kvs_24541_0 key=sharedFilename[0] >> > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success >> > value=/dev/shm/mpich_shar_tmpnEZdQ9 >> > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out >> > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out >> > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out 
>> > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out >> > [proxy:0:1 at weiser2] got pmi command (from 5): get >> > kvsname=kvs_24541_0 key=sharedFilename[4] >> > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success >> > value=/dev/shm/mpich_shar_tmpuKzlSa >> > [proxy:0:1 at weiser2] got pmi command (from 7): get >> > kvsname=kvs_24541_0 key=sharedFilename[4] >> > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success >> > value=/dev/shm/mpich_shar_tmpuKzlSa >> > [proxy:0:1 at weiser2] got pmi command (from 10): get >> > kvsname=kvs_24541_0 key=sharedFilename[4] >> > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success >> > value=/dev/shm/mpich_shar_tmpuKzlSa >> > [proxy:0:0 at weiser1] got pmi command (from 8): get >> > kvsname=kvs_24541_0 key=sharedFilename[0] >> > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success >> > value=/dev/shm/mpich_shar_tmpnEZdQ9 >> > [proxy:0:0 at weiser1] got pmi command (from 15): get >> > kvsname=kvs_24541_0 key=sharedFilename[0] >> > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success >> > value=/dev/shm/mpich_shar_tmpnEZdQ9 >> > [proxy:0:0 at weiser1] got pmi command (from 0): put >> > kvsname=kvs_24541_0 key=P0-businesscard >> > value=description#weiser1$port#56190$ifname#192.168.0.101$ >> > [proxy:0:0 at weiser1] cached command: >> > P0-businesscard=description#weiser1$port#56190$ifname#192.168.0.101$ >> > [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success >> > [proxy:0:0 at weiser1] got pmi command (from 8): put >> > kvsname=kvs_24541_0 key=P2-businesscard >> > value=description#weiser1$port#40019$ifname#192.168.0.101$ >> > [proxy:0:0 at weiser1] cached command: >> > P2-businesscard=description#weiser1$port#40019$ifname#192.168.0.101$ >> > [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success >> > [proxy:0:0 at weiser1] got pmi command (from 15): put >> > kvsname=kvs_24541_0 key=P3-businesscard >> > value=description#weiser1$port#57150$ifname#192.168.0.101$ >> > [proxy:0:0 at weiser1] cached command: >> > P3-businesscard=description#weiser1$port#57150$ifname#192.168.0.101$ >> > [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success >> > [proxy:0:0 at weiser1] got pmi command (from 0): barrier_in >> > >> > [proxy:0:0 at weiser1] got pmi command (from 6): put >> > kvsname=kvs_24541_0 key=P1-businesscard >> > value=description#weiser1$port#34048$ifname#192.168.0.101$ >> > [proxy:0:0 at weiser1] cached command: >> > P1-businesscard=description#weiser1$port#34048$ifname#192.168.0.101$ >> > [proxy:0:0 at weiser1] PMI response: cmd=put_result rc=0 msg=success >> > [proxy:0:0 at weiser1] got pmi command (from 8): barrier_in >> > >> > [proxy:0:0 at weiser1] got pmi command (from 6): barrier_in >> > >> > [proxy:0:0 at weiser1] got pmi command (from 15): barrier_in >> > >> > [proxy:0:0 at weiser1] flushing 4 put command(s) out >> > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=put >> > P0-businesscard=description#weiser1$port#56190$ifname#192.168.0.101$ >> > P2-businesscard=description#weiser1$port#40019$ifname#192.168.0.101$ >> > P3-businesscard=description#weiser1$port#57150$ifname#192.168.0.101$ >> > P1-businesscard=description#weiser1$port#34048$ifname#192.168.0.101$ >> > [proxy:0:0 at weiser1] forwarding command (cmd=put >> > P0-businesscard=description#weiser1$port#56190$ifname#192.168.0.101$ >> > P2-businesscard=description#weiser1$port#40019$ifname#192.168.0.101$ >> > 
P3-businesscard=description#weiser1$port#57150$ifname#192.168.0.101$ >> > P1-businesscard=description#weiser1$port#34048$ifname#192.168.0.101$) >> > upstream >> > [proxy:0:0 at weiser1] forwarding command (cmd=barrier_in) upstream >> > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=barrier_in >> > [proxy:0:1 at weiser2] got pmi command (from 4): put >> > kvsname=kvs_24541_0 key=P4-businesscard >> > value=description#weiser2$port#60693$ifname#192.168.0.102$ >> > [proxy:0:1 at weiser2] cached command: >> > P4-businesscard=description#weiser2$port#60693$ifname#192.168.0.102$ >> > [proxy:0:1 at weiser2] PMI response: cmd=put_result rc=0 msg=success >> > [proxy:0:1 at weiser2] got pmi command (from 5): put >> > kvsname=kvs_24541_0 key=P5-businesscard >> > value=description#weiser2$port#49938$ifname#192.168.0.102$ >> > [proxy:0:1 at weiser2] cached command: >> > P5-businesscard=description#weiser2$port#49938$ifname#192.168.0.102$ >> > [proxy:0:1 at weiser2] PMI response: cmd=put_result rc=0 msg=success >> > [proxy:0:1 at weiser2] got pmi command (from 7): put >> > kvsname=kvs_24541_0 key=P6-businesscard >> > value=description#weiser2$port#33516$ifname#192.168.0.102$ >> > [proxy:0:1 at weiser2] cached command: >> > P6-businesscard=description#weiser2$port#33516$ifname#192.168.0.102$ >> > [proxy:0:1 at weiser2] PMI response: cmd=put_result rc=0 msg=success >> > [proxy:0:1 at weiser2] got pmi command (from 10): put >> > kvsname=kvs_24541_0 key=P7-businesscard >> > value=description#weiser2$port#43116$ifname#192.168.0.102$ >> > [proxy:0:1 at weiser2] cached command: >> > P7-businesscard=description#weiser2$port#43116$ifname#192.168.0.102$ >> > [proxy:0:1 at weiser2] [mpiexec at weiser1] [pgid: 0] got PMI command: >> cmd=put >> > P4-businesscard=description#weiser2$port#60693$ifname#192.168.0.102$ >> > P5-businesscard=description#weiser2$port#49938$ifname#192.168.0.102$ >> > P6-businesscard=description#weiser2$port#33516$ifname#192.168.0.102$ >> > P7-businesscard=description#weiser2$port#43116$ifname#192.168.0.102$ >> > PMI response: cmd=put_result rc=0 msg=success >> > [proxy:0:1 at weiser2] got pmi command (from 4): barrier_in >> > >> > [proxy:0:1 at weiser2] got pmi command (from 5): barrier_in >> > >> > [proxy:0:1 at weiser2] got pmi command (from 7): barrier_in >> > [mpiexec at weiser1] [pgid: 0] got PMI command: cmd=barrier_in >> > [mpiexec at weiser1] PMI response to fd 6 pid 10: cmd=keyval_cache >> > P0-businesscard=description#weiser1$port#56190$ifname#192.168.0.101$ >> > P2-businesscard=description#weiser1$port#40019$ifname#192.168.0.101$ >> > P3-businesscard=description#weiser1$port#57150$ifname#192.168.0.101$ >> > P1-businesscard=description#weiser1$port#34048$ifname#192.168.0.101$ >> > P4-businesscard=description#weiser2$port#60693$ifname#192.168.0.102$ >> > P5-businesscard=description#weiser2$port#49938$ifname#192.168.0.102$ >> > P6-businesscard=description#weiser2$port#33516$ifname#192.168.0.102$ >> > P7-businesscard=description#weiser2$port#43116$ifname#192.168.0.102$ >> > [mpiexec at weiser1] PMI response to fd 7 pid 10: cmd=keyval_cache >> > P0-businesscard=description#weiser1$port#56190$ifname#192.168.0.101$ >> > P2-businesscard=description#weiser1$port#40019$ifname#192.168.0.101$ >> > P3-businesscard=description#weiser1$port#57150$ifname#192.168.0.101$ >> > P1-businesscard=description#weiser1$port#34048$ifname#192.168.0.101$ >> > P4-businesscard=description#weiser2$port#60693$ifname#192.168.0.102$ >> > P5-businesscard=description#weiser2$port#49938$ifname#192.168.0.102$ >> > 
P6-businesscard=description#weiser2$port#33516$ifname#192.168.0.102$ >> > P7-businesscard=description#weiser2$port#43116$ifname#192.168.0.102$ >> > [mpiexec at weiser1] PMI response to fd 6 pid 10: cmd=barrier_out >> > [mpiexec at weiser1] PMI response to fd 7 pid 10: cmd=barrier_out >> > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out >> > [proxy:0:0 at weiser1] >> > [proxy:0:1 at weiser2] got pmi command (from 10): barrier_in >> > >> > [proxy:0:1 at weiser2] flushing 4 put command(s) out >> > [proxy:0:1 at weiser2] forwarding command (cmd=put >> > P4-businesscard=description#weiser2$port#60693$ifname#192.168.0.102$ >> > P5-businesscard=description#weiser2$port#49938$ifname#192.168.0.102$ >> > P6-businesscard=description#weiser2$port#33516$ifname#192.168.0.102$ >> > P7-businesscard=description#weiser2$port#43116$ifname#192.168.0.102$) >> > upstream >> > [proxy:0:1 at weiser2] forwarding command (cmd=barrier_in) upstream >> > PMI response: cmd=barrier_out >> > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out >> > [proxy:0:0 at weiser1] PMI response: cmd=barrier_out >> > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out >> > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out >> > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out >> > [proxy:0:1 at weiser2] PMI response: cmd=barrier_out >> > [proxy:0:1 at weiser2] got pmi command (from 4): get >> > kvsname=kvs_24541_0 key=P0-businesscard >> > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success >> > value=description#weiser1$port#56190$ifname#192.168.0.101$ >> > >> ================================================================================ >> > HPLinpack 2.1 -- High-Performance Linpack benchmark -- October 26, >> 2012 >> > Written by A. Petitet and R. Clint Whaley, Innovative Computing >> Laboratory, >> > UTK >> > Modified by Piotr Luszczek, Innovative Computing Laboratory, UTK >> > Modified by Julien Langou, University of Colorado Denver >> > >> ================================================================================ >> > >> > An explanation of the input/output parameters follows: >> > T/V : Wall time / encoded variant. >> > N : The order of the coefficient matrix A. >> > NB : The partitioning blocking factor. >> > P : The number of process rows. >> > Q : The number of process columns. >> > Time : Time in seconds to solve the linear system. >> > Gflops : Rate of execution for solving the linear system. >> > >> > The following parameter values will be used: >> > >> > N : 14616 >> > NB : 168 >> > PMAP : Row-major process mapping >> > P : 2 >> > Q : 4 >> > PFACT : Right >> > NBMIN : 4 >> > NDIV : 2 >> > RFACT : Crout >> > BCAST : 1ringM >> > DEPTH : 1 >> > SWAP : Mix (threshold = 64) >> > L1 : transposed form >> > U : transposed form >> > EQUIL : yes >> > ALIGN : 8 double precision words >> > >> > >> -------------------------------------------------------------------------------- >> > >> > - The matrix A is randomly generated for each test. 
>> > - The following scaled residual check will be computed: >> > ||Ax-b||_oo / ( eps * ( || x ||_oo * || A ||_oo + || b ||_oo ) * >> N ) >> > - The relative machine precision (eps) is taken to be >> > 1.110223e-16 >> > [proxy:0:0 at weiser1] got pmi command (from 6): get >> > - Computational tests pass if scaled residuals are less than >> > 16.0 >> > >> > kvsname=kvs_24541_0 key=P5-businesscard >> > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success >> > value=description#weiser2$port#49938$ifname#192.168.0.102$ >> > [proxy:0:0 at weiser1] got pmi command (from 15): get >> > kvsname=kvs_24541_0 key=P7-businesscard >> > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success >> > value=description#weiser2$port#43116$ifname#192.168.0.102$ >> > [proxy:0:0 at weiser1] got pmi command (from 8): get >> > kvsname=kvs_24541_0 key=P6-businesscard >> > [proxy:0:0 at weiser1] PMI response: cmd=get_result rc=0 msg=success >> > value=description#weiser2$port#33516$ifname#192.168.0.102$ >> > [proxy:0:1 at weiser2] got pmi command (from 5): get >> > kvsname=kvs_24541_0 key=P1-businesscard >> > [proxy:0:1 at weiser2] PMI response: cmd=get_result rc=0 msg=success >> > value=description#weiser1$port#34048$ifname#192.168.0.101$ >> > >> > >> =================================================================================== >> > = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >> > = EXIT CODE: 9 >> > = CLEANING UP REMAINING PROCESSES >> > = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES >> > >> =================================================================================== >> > >> > >> > ----------- END -------------- >> > >> > if that can help :( >> > >> > >> > >> > >> > >> > >> > On Fri, Jun 28, 2013 at 12:24 PM, Pavan Balaji >> wrote: >> >> >> >> >> >> Looks like your application aborted for some reason. >> >> >> >> -- Pavan >> >> >> >> >> >> On 06/27/2013 10:21 PM, Syed. Jahanzeb Maqbool Hashmi wrote: >> >>> >> >>> My bad, I just found out that there was a duplicate entry like: >> >>> weiser1 127.0.1.1 >> >>> weiser1 192.168.0.101 >> >>> so i removed teh 127.x.x.x. entry and kept the hostfile contents >> similar >> >>> on both nodes. Now previous error is reduced to this one: >> >>> >> >>> ------ START OF OUTPUT ------- >> >>> >> >>> ....some HPL startup string (no final result) >> >>> ...skip..... 
>> >>> >> >>> >> >>> >> =================================================================================== >> >>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >> >>> = EXIT CODE: 9 >> >>> = CLEANING UP REMAINING PROCESSES >> >>> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES >> >>> >> >>> >> =================================================================================== >> >>> [proxy:0:0 at weiser1] HYD_pmcd_pmip_control_cmd_cb >> >>> (./pm/pmiserv/pmip_cb.c:886): assert (!closed) failed >> >>> [proxy:0:0 at weiser1] HYDT_dmxu_poll_wait_for_event >> >>> (./tools/demux/demux_poll.c:77): callback returned error status >> >>> [proxy:0:0 at weiser1] main (./pm/pmiserv/pmip.c:206): demux engine >> error >> >>> waiting for event >> >>> [mpiexec at weiser1] HYDT_bscu_wait_for_completion >> >>> (./tools/bootstrap/utils/bscu_wait.c:76): one of the processes >> >>> terminated badly; aborting >> >>> [mpiexec at weiser1] HYDT_bsci_wait_for_completion >> >>> (./tools/bootstrap/src/bsci_wait.c:23): launcher returned error >> waiting >> >>> for completion >> >>> [mpiexec at weiser1] HYD_pmci_wait_for_completion >> >>> (./pm/pmiserv/pmiserv_pmci.c:217): launcher returned error waiting for >> >>> completion >> >>> [mpiexec at weiser1] main (./ui/mpich/mpiexec.c:331): process manager >> error >> >>> waiting for completion >> >>> >> >>> ------ END OF OUTPUT ------- >> >>> >> >>> >> >>> >> >>> On Fri, Jun 28, 2013 at 12:12 PM, Pavan Balaji > >>> > wrote: >> >>> >> >>> >> >>> On 06/27/2013 10:08 PM, Syed. Jahanzeb Maqbool Hashmi wrote: >> >>> >> >>> >> >>> >> P4-businesscard=description#__weiser2$port#57651$ifname#192.__168.0.102$ >> >>> >> >>> >> P5-businesscard=description#__weiser2$port#52622$ifname#192.__168.0.102$ >> >>> >> >>> >> P6-businesscard=description#__weiser2$port#55935$ifname#192.__168.0.102$ >> >>> >> >>> >> P7-businesscard=description#__weiser2$port#54952$ifname#192.__168.0.102$ >> >>> >> >>> P0-businesscard=description#__weiser1$port#41958$ifname#127.__0.1.1$ >> >>> >> >>> P2-businesscard=description#__weiser1$port#35049$ifname#127.__0.1.1$ >> >>> >> >>> P1-businesscard=description#__weiser1$port#39634$ifname#127.__0.1.1$ >> >>> >> >>> P3-businesscard=description#__weiser1$port#51802$ifname#127.__0.1.1$ >> >>> >> >>> >> >>> >> >>> I have two concerns with your output. Let's start with the first. >> >>> >> >>> Did you look at this question on the FAQ page? >> >>> >> >>> "Is your /etc/hosts file consistent across all nodes? Unless you >> are >> >>> using an external DNS server, the /etc/hosts file on every machine >> >>> should contain the correct IP information about all hosts in the >> >>> system." 
>> >>> >> >>> >> >>> -- Pavan >> >>> >> >>> -- >> >>> Pavan Balaji >> >>> http://www.mcs.anl.gov/~balaji >> >>> >> >>> >> >> >> >> -- >> >> Pavan Balaji >> >> http://www.mcs.anl.gov/~balaji >> > >> > >> > >> > _______________________________________________ >> > discuss mailing list discuss at mpich.org >> > To manage subscription options or unsubscribe: >> > https://lists.mpich.org/mailman/listinfo/discuss >> >> >> >> -- >> Jeff Hammond >> jeff.science at gmail.com >> _______________________________________________ >> discuss mailing list discuss at mpich.org >> To manage subscription options or unsubscribe: >> https://lists.mpich.org/mailman/listinfo/discuss >> > > _______________________________________________ discuss mailing list discuss at mpich.org To manage subscription options or unsubscribe: https://lists.mpich.org/mailman/listinfo/discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From jahanzeb.maqbool at gmail.com Thu Jun 27 22:49:35 2013 From: jahanzeb.maqbool at gmail.com (Syed. Jahanzeb Maqbool Hashmi) Date: Fri, 28 Jun 2013 12:49:35 +0900 Subject: [mpich-discuss] mpich hangs In-Reply-To: <-5097376753174424245@unknownmsgid> References: <51CCF747.70308@mcs.anl.gov> <51CCF858.6020304@mcs.anl.gov> <51CCFA0C.4020607@mcs.anl.gov> <51CCFBAC.3090404@mcs.anl.gov> <51CCFD05.10109@mcs.anl.gov> <51CCFF39.3060401@mcs.anl.gov> <51CD01DB.30403@mcs.anl.gov> <-5097376753174424245@unknownmsgid> Message-ID: Yes I agree. Thanks for the help :) On Friday, June 28, 2013, Jeff Hammond wrote: > If CPI runs and your code doesn't, it's an app issue. You said this was > HPL? Ask UTK for support with this. It's their code. HPL is dirt simple so > I guess you are running it incorrectly. > > Jeff > > Sent from my iPhone > > On Jun 27, 2013, at 10:36 PM, "Syed. Jahanzeb Maqbool Hashmi" < > jahanzeb.maqbool at gmail.com> wrote: > > and here is that output: > > Process 0 of 8 is on weiser1 > Process 1 of 8 is on weiser1 > Process 2 of 8 is on weiser1 > Process 3 of 8 is on weiser1 > Process 4 of 8 is on weiser2 > Process 5 of 8 is on weiser2 > Process 6 of 8 is on weiser2 > Process 7 of 8 is on weiser2 > pi is approximately 3.1415926544231247, Error is 0.0000000008333316 > wall clock time = 0.018203 > > --------------- > > > On Fri, Jun 28, 2013 at 12:35 PM, Syed. Jahanzeb Maqbool Hashmi < > jahanzeb.maqbool at gmail.com> wrote: > > Yes I am successfully able to run cpi program. No such error at all. > > > > On Fri, Jun 28, 2013 at 12:31 PM, Jeff Hammond wrote: > > Can you run the cpi program? If that doesn't run, something is wrong, > because that program is trivial and correct. > > Jeff > > On Thu, Jun 27, 2013 at 10:29 PM, Syed. 
Jahanzeb Maqbool Hashmi > wrote:
> > again that same error:
> > Fatal error in PMPI_Wait: A process has failed, error stack:
> > PMPI_Wait(180)............: MPI_Wait(request=0xbebb9a1c, status=0xbebb99f0)
> > failed
> > MPIR_Wait_impl(77)........:
> > dequeue_and_set_error(888): Communication error with rank 4
> >
> > here is the verbose output:
> >
> > --------------START------------------
> >
> > host: weiser1
> > host: weiser2
> >
> > ==================================================================================================
> > mpiexec options:
> > ----------------
> >   Base path: /mnt/nfs/install/mpich-install/bin/
> >   Launcher: (null)
> >   Debug level: 1
> >   Enable X: -1
> >
> >   Global environment:
> >   -------------------
> >   [...snip: same environment dump (TERM through PWD) as quoted earlier in this thread...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gus at ldeo.columbia.edu  Thu Jun 27 23:45:54 2013
From: gus at ldeo.columbia.edu (Gustavo Correa)
Date: Fri, 28 Jun 2013 00:45:54 -0400
Subject: [mpich-discuss] mpich hangs
In-Reply-To: 
References: <51CCF747.70308@mcs.anl.gov> <51CCF858.6020304@mcs.anl.gov> <51CCFA0C.4020607@mcs.anl.gov> <51CCFBAC.3090404@mcs.anl.gov> <51CCFD05.10109@mcs.anl.gov> <51CCFF39.3060401@mcs.anl.gov> <51CD01DB.30403@mcs.anl.gov> <-5097376753174424245@unknownmsgid>
Message-ID: <117B54DB-FF54-4D6C-8F5A-76AF6134C916@ldeo.columbia.edu>

Although these may not really be MPI or MPICH suggestions, here they go.

Check whether you are running out of memory while HPL is running.
The maximum problem size (N) you can solve depends on the memory size.
There are many HPL formulas (and even calculators) on the web for N(RAM).

Also, set the stack size limit to a large number (or to unlimited);
most Linux distributions come with a low default value.

I hope this helps,
Gus Correa

On Jun 27, 2013, at 11:49 PM, Syed. Jahanzeb Maqbool Hashmi wrote:

> [...snip: full quote of the exchange above...]
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss
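A minimal sketch of the sizing arithmetic and stack-limit fix Gus describes; the 0.8 memory fraction is the usual HPL rule of thumb, and the 2 GB total below is an illustrative assumption, not a measured value for this cluster:

----- 8< ----- 8< ----- 8< -----
#!/bin/sh
# Raise the stack size limit for this shell and its children
# (to make it permanent, use /etc/security/limits.conf).
ulimit -s unlimited

# HPL rule of thumb: the N x N matrix of 8-byte doubles should fill
# roughly 80% of the combined RAM, so N ~ sqrt(0.8 * RAM_bytes / 8).
TOTAL_RAM_MB=2048   # assumed: 2 nodes x 1 GB each
N=$(awk -v mb=$TOTAL_RAM_MB 'BEGIN { printf "%d", sqrt(0.8 * mb * 1048576 / 8) }')
echo "suggested HPL problem size N: $N"
----- >8 ----- >8 ----- >8 -----

For the assumed 2 GB this gives N of roughly 14600, which happens to be close to the "N : 14616" visible in the HPL output earlier in this thread.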
From apeironoriepa at aol.com  Fri Jun 28 08:17:52 2013
From: apeironoriepa at aol.com (Danilo)
Date: Fri, 28 Jun 2013 09:17:52 -0400 (EDT)
Subject: [mpich-discuss] mpi assertion error
In-Reply-To: <8D0421891500EDF-CCC-314D6@webmail-d170.sysops.aol.com>
References: <8D0421891500EDF-CCC-314D6@webmail-d170.sysops.aol.com>
Message-ID: <8D0421906CC0BFF-CCC-31514@webmail-d170.sysops.aol.com>

Good afternoon,

I wrote a little application in C to compute a 2D FFT. This app was first run on a cluster that had a 2007 MPI version installed (I don't remember the package name) and was then adapted for a different cluster with MPI 1.4.1 (I had to change the scatter/gather calls, because in the previous version I could use the same buffer for both sendbuf and recvbuf). Anyway, when executing with 2 processes it works fine. When trying with 4/8/16/32 and so on, it first gives an assertion error, as shown in the attached file, and from the second time you try to run it on more than 2 procs it gives error code 139. The error appears only when you run it with "realDim=16384" (meaning 16384 rows and 16384x2 columns, since it is designed for real/imaginary numbers). I know the code works, since everything was OK on the previous cluster (even with 4-8-16-32 procs), and I can't find out what the problem is now.. Can you help?

As said, attached you can find my application as well as the errors appearing and the MPI info..

Regards,
Danilo
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: error+app.tar.gz
Type: application/x-gzip
Size: 5364 bytes
Desc: not available
URL: 

From wbland at mcs.anl.gov  Fri Jun 28 08:22:09 2013
From: wbland at mcs.anl.gov (Wesley Bland)
Date: Fri, 28 Jun 2013 08:22:09 -0500
Subject: [mpich-discuss] mpi assertion error
In-Reply-To: <8D0421906CC0BFF-CCC-31514@webmail-d170.sysops.aol.com>
References: <8D0421891500EDF-CCC-314D6@webmail-d170.sysops.aol.com> <8D0421906CC0BFF-CCC-31514@webmail-d170.sysops.aol.com>
Message-ID: <7693860E-5B71-47B6-966A-645F138B8AFF@mcs.anl.gov>

Can you just copy and paste your error into the email? Most of us will probably not be all that excited about opening up strange tarballs attached to an email. Also, we get these emails on our phones and tablets, where unzipping source code isn't as much of an option.

Wesley

On Jun 28, 2013, at 8:17 AM, Danilo wrote:

> [...snip: original message quoted in full above...]
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From apeironoriepa at aol.com  Fri Jun 28 09:46:42 2013
From: apeironoriepa at aol.com (Danilo)
Date: Fri, 28 Jun 2013 10:46:42 -0400 (EDT)
Subject: [mpich-discuss] mpi assertion error
In-Reply-To: <7693860E-5B71-47B6-966A-645F138B8AFF@mcs.anl.gov>
References: <8D0421891500EDF-CCC-314D6@webmail-d170.sysops.aol.com> <8D0421906CC0BFF-CCC-31514@webmail-d170.sysops.aol.com> <7693860E-5B71-47B6-966A-645F138B8AFF@mcs.anl.gov>
Message-ID: <8D042256FF7CFD7-CCC-31DC4@webmail-d170.sysops.aol.com>

In the last topic I read, it was asked more than once to zip the files, and I did it..
By the way, this is the first error:

Assertion failed in file helper_fns.c at line 361: ((((char *) sendbuf + sendtype_true_lb))) != NULL
internal ABORT - process 2

Starting from the second execution I get:

=====================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   EXIT CODE: 139
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
=====================================================================================
HYD_pmcd_pmip_control_cmd_cb (./pm/pmiserv/pmip_cb.c:928): assert (!closed) failed
HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
main (./pm/pmiserv/pmip.c:226): demux engine error waiting for event
HYD_pmcd_pmip_control_cmd_cb (./pm/pmiserv/pmip_cb.c:928): assert (!closed) failed
HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
main (./pm/pmiserv/pmip.c:226): demux engine error waiting for event
HYD_pmcd_pmip_control_cmd_cb (./pm/pmiserv/pmip_cb.c:928): assert (!closed) failed
HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
main (./pm/pmiserv/pmip.c:226): demux engine error waiting for event
HYDT_bscu_wait_for_completion (./tools/bootstrap/utils/bscu_wait.c:70): one of the processes terminated badly; aborting
HYDT_bsci_wait_for_completion (./tools/bootstrap/src/bsci_wait.c:23): launcher returned error waiting for completion
HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:191): launcher returned error waiting for completion
main (./ui/mpich/mpiexec.c:405): process manager error waiting for completion

Regards,
Danilo

-----Original Message-----
From: Wesley Bland
To: discuss
Sent: Fri, Jun 28, 2013 3:22 pm
Subject: Re: [mpich-discuss] mpi assertion error

[...snip: Wesley's message and the original post, both quoted in full above...]

_______________________________________________
discuss mailing list     discuss at mpich.org
To manage subscription options or unsubscribe:
https://lists.mpich.org/mailman/listinfo/discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jeff.science at gmail.com  Fri Jun 28 10:11:54 2013
From: jeff.science at gmail.com (Jeff Hammond)
Date: Fri, 28 Jun 2013 10:11:54 -0500
Subject: [mpich-discuss] mpi assertion error
In-Reply-To: <8D042256FF7CFD7-CCC-31DC4@webmail-d170.sysops.aol.com>
References: <8D0421891500EDF-CCC-314D6@webmail-d170.sysops.aol.com> <8D0421906CC0BFF-CCC-31514@webmail-d170.sysops.aol.com> <7693860E-5B71-47B6-966A-645F138B8AFF@mcs.anl.gov> <8D042256FF7CFD7-CCC-31DC4@webmail-d170.sysops.aol.com>
Message-ID: 

Null buffer assertions are suggestive of incorrect programs. Can you share the source of this program?

As for the inline vs. attached files debate, I think that pastebin is a superior option for large output, since it is plain-text readable from any internet-enabled device and doesn't lead to huge messages on the list. But for short messages, inlining is definitely good for email reading on phones.

Jeff

On Fri, Jun 28, 2013 at 9:46 AM, Danilo wrote:
> [...snip: quoted in full above...]

-- 
Jeff Hammond
jeff.science at gmail.com
From jeff.science at gmail.com  Fri Jun 28 10:14:24 2013
From: jeff.science at gmail.com (Jeff Hammond)
Date: Fri, 28 Jun 2013 10:14:24 -0500
Subject: [mpich-discuss] mpi assertion error
In-Reply-To: 
References: <8D0421891500EDF-CCC-314D6@webmail-d170.sysops.aol.com> <8D0421906CC0BFF-CCC-31514@webmail-d170.sysops.aol.com> <7693860E-5B71-47B6-966A-645F138B8AFF@mcs.anl.gov> <8D042256FF7CFD7-CCC-31DC4@webmail-d170.sysops.aol.com>
Message-ID: 

Sorry, I didn't realize that you had attached the code already. I braved the unknown and opened it, to find only benign text files :-)

Jeff

On Fri, Jun 28, 2013 at 10:11 AM, Jeff Hammond wrote:
> [...snip: quoted in full above...]

-- 
Jeff Hammond
jeff.science at gmail.com

From apeironoriepa at aol.com  Fri Jun 28 10:28:28 2013
From: apeironoriepa at aol.com (Danilo)
Date: Fri, 28 Jun 2013 11:28:28 -0400 (EDT)
Subject: [mpich-discuss] mpi assertion error
In-Reply-To: 
References: <8D0421891500EDF-CCC-314D6@webmail-d170.sysops.aol.com> <8D0421906CC0BFF-CCC-31514@webmail-d170.sysops.aol.com> <7693860E-5B71-47B6-966A-645F138B8AFF@mcs.anl.gov> <8D042256FF7CFD7-CCC-31DC4@webmail-d170.sysops.aol.com>
Message-ID: <8D0422B45E0B60B-CCC-322AE@webmail-d170.sysops.aol.com>

Hi Jeff,
the program was tested intensively on the previous cluster. The changes made are in the scatter/gather calls (due to sendbuf and recvbuf having to be different in this version, it seems..). The other main change is due to Hydra, because on the previous cluster there wasn't such a process management system. But I'm quite new to programming, so I don't know...

Thanks for your help.

Regards

-----Original Message-----
From: Jeff Hammond
To: discuss
Sent: Fri, Jun 28, 2013 5:12 pm
Subject: Re: [mpich-discuss] mpi assertion error

Null buffer assertions are suggestive of incorrect programs. Can you share the source of this program?

[...snip: rest quoted in full above...]
>> By the way, this is the first error: >> [... error output and earlier messages, quoted in full above, trimmed ...]
-- Jeff Hammond jeff.science at gmail.com _______________________________________________ discuss mailing list discuss at mpich.org To manage subscription options or unsubscribe: https://lists.mpich.org/mailman/listinfo/discuss From balaji at mcs.anl.gov Fri Jun 28 10:46:06 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Fri, 28 Jun 2013 10:46:06 -0500 Subject: [mpich-discuss] mpi assertion error In-Reply-To: <8D0422B45E0B60B-CCC-322AE@webmail-d170.sysops.aol.com> References: <8D0421891500EDF-CCC-314D6@webmail-d170.sysops.aol.com> <8D0421906CC0BFF-CCC-31514@webmail-d170.sysops.aol.com> <7693860E-5B71-47B6-966A-645F138B8AFF@mcs.anl.gov> <8D042256FF7CFD7-CCC-31DC4@webmail-d170.sysops.aol.com> <8D0422B45E0B60B-CCC-322AE@webmail-d170.sysops.aol.com> Message-ID: <51CDAFBE.4030707@mcs.anl.gov> Danilo, On 06/28/2013 10:28 AM, Danilo wrote: > the program was tested intensively on the previous cluster. The changes > made are in scatter/gather (due to sendbuf and recvbuf having to be > different in this version, it seems..). The other main change is due to > Hydra, because on the previous cluster there wasn't such a process > management system. But I'm quite new to programming, so I don't know... This doesn't look like a Hydra problem. Hydra is just telling you that the application died in an unexpected manner. The error the assert is showing is the real culprit. Can you try to strip out most of the code and create a simple benchmark that reproduces this error? -- Pavan -- Pavan Balaji http://www.mcs.anl.gov/~balaji From apeironoriepa at aol.com Fri Jun 28 10:47:53 2013 From: apeironoriepa at aol.com (Danilo) Date: Fri, 28 Jun 2013 11:47:53 -0400 (EDT) Subject: [mpich-discuss] mpi assertion error In-Reply-To: References: <8D0421891500EDF-CCC-314D6@webmail-d170.sysops.aol.com> <8D0421906CC0BFF-CCC-31514@webmail-d170.sysops.aol.com> <7693860E-5B71-47B6-966A-645F138B8AFF@mcs.anl.gov> <8D042256FF7CFD7-CCC-31DC4@webmail-d170.sysops.aol.com> <8D0422B45E0B60B-CCC-322AE@webmail-d170.sysops.aol.com> Message-ID: <8D0422DFC7D92AB-CCC-324BD@webmail-d170.sysops.aol.com> Probably you should try to change the value realDim=16384 to something more suited to your machine, such as 1024... so that you won't experience a hang :) My purpose is not to use an already-written FFT algorithm, but just to try what it is like to write one, starting from a serial version.. I know there are algorithms out there that are 10000kk times better.. -----Original Message----- From: Jeff Hammond To: discuss Sent: Fri, Jun 28, 2013 5:41 pm Subject: Re: [mpich-discuss] mpi assertion error
[... Jeff's reply and the rest of the quoted exchange, reproduced in full above, trimmed ...] _______________________________________________ discuss mailing list discuss at mpich.org To manage subscription options or unsubscribe: https://lists.mpich.org/mailman/listinfo/discuss
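The in-place scatter question that runs through the thread above lends itself to a short illustration. Below is a minimal, hypothetical sketch in C -- it is not Danilo's code; only realDim=16384 and the realDim x (2*realDim) float layout come from the thread -- showing the MPI_IN_PLACE idiom that standard-conforming MPICH versions require instead of aliased send/receive buffers, plus the allocation check whose absence is one plausible way to hit the "sendbuf != NULL" assertion:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    /* Hypothetical reproducer sketch -- not the original program from the
     * thread. realDim and the matrix shape mirror the description above;
     * everything else is illustrative. */
    int main(int argc, char **argv)
    {
        int rank, nprocs;
        const long realDim = 16384;
        const long total = realDim * 2 * realDim;  /* floats in the matrix */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        long chunk = total / nprocs;               /* elements per rank */
        float *data;

        if (rank == 0) {
            /* ~2 GiB at realDim=16384; an unchecked malloc failure here is
             * one plausible way to end up passing NULL to MPI_Scatter. */
            data = malloc(total * sizeof(float));
        } else {
            data = malloc(chunk * sizeof(float));
        }
        if (data == NULL) MPI_Abort(MPI_COMM_WORLD, 1);

        if (rank == 0)
            for (long i = 0; i < total; i++)
                data[i] = (float)rand() / RAND_MAX;

        /* MPI_IN_PLACE as the root's recvbuf is the standard-conforming way
         * to "reuse the same buffer", which older MPICH versions tolerated. */
        if (rank == 0)
            MPI_Scatter(data, (int)chunk, MPI_FLOAT, MPI_IN_PLACE, (int)chunk,
                        MPI_FLOAT, 0, MPI_COMM_WORLD);
        else
            MPI_Scatter(NULL, 0, MPI_FLOAT, data, (int)chunk, MPI_FLOAT,
                        0, MPI_COMM_WORLD);

        printf("rank %d received %ld floats\n", rank, chunk);
        free(data);
        MPI_Finalize();
        return 0;
    }

With four processes at realDim=16384, each rank receives 2^27 floats, so the int counts are still safe here -- but that arithmetic is worth re-checking whenever realDim grows.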
From jayesh at mcs.anl.gov Fri Jun 28 11:21:34 2013 From: jayesh at mcs.anl.gov (Jayesh Krishna) Date: Fri, 28 Jun 2013 11:21:34 -0500 (CDT) Subject: [mpich-discuss] mpich2 and windows server 2012 In-Reply-To: <51CC5732.5050606@gmail.com> Message-ID: <1976217456.8690315.1372436494166.JavaMail.root@mcs.anl.gov> Hi, Did you try, "c:\Program Files (x86)\MPICH2\bin\mpiexec.exe" -n 1 nonmem.exe "c:\Program Files (x86)\MPICH2\bin\mpiexec.exe" -n 2 "c:\Program Files (x86)\MPICH2\examples\cpi.exe" Regards, Jayesh ----- Original Message ----- From: "Costas Yamin" To: discuss at mpich.org Sent: Thursday, June 27, 2013 10:16:02 AM Subject: Re: [mpich-discuss] mpich2 and windows server 2012 Hi, I have installed mpich2 on a Windows 2012 Essentials server machine and I have successfully sent jobs to it remotely without doing anything different than on my own workstation (Windows 8 x64). I have configured mpiexec to use particular ports and added the relevant rule in Windows Firewall, so there shouldn't be any compatibility issues arising from the OS itself... I assume you installed mpich2 as an Administrator. Costas On 27/6/2013 10:59, Lars Lindbom wrote: Hi, I have a problem getting mpich2 to run on Windows Server 2012. We have a small set of servers successfully configured and running Windows Server 2008 R2 and MPICH2 without any problem. The error I get seems to indicate a problem in the authentication process. >"c:\Program Files (x86)\MPICH2\bin\mpiexec.exe" -validate SUCCESS >"c:\Program Files (x86)\MPICH2\bin\mpiexec.exe" -host localhost nonmem.exe Credentials for Lars rejected connecting to localhost Aborting: Unable to connect to localhost The credentials are correct, and I have tried multiple user accounts with the same result. I don't think it's related, but for what it's worth, I have made sure that I have the same firewall settings for the mpich executables as on the 2008 R2 servers. I would appreciate any help in getting this solved. Thanks, Lars _______________________________________________ discuss mailing list discuss at mpich.org To manage subscription options or unsubscribe: https://lists.mpich.org/mailman/listinfo/discuss _______________________________________________ discuss mailing list discuss at mpich.org To manage subscription options or unsubscribe: https://lists.mpich.org/mailman/listinfo/discuss From balaji at mcs.anl.gov Fri Jun 28 12:12:39 2013 From: balaji at mcs.anl.gov (Pavan Balaji) Date: Fri, 28 Jun 2013 12:12:39 -0500 Subject: [mpich-discuss] mpi assertion error In-Reply-To: <8D04238D24B6F07-1114-44076@webmail-va009.sysops.aol.com> References: <8D0421891500EDF-CCC-314D6@webmail-d170.sysops.aol.com> <8D0421906CC0BFF-CCC-31514@webmail-d170.sysops.aol.com> <7693860E-5B71-47B6-966A-645F138B8AFF@mcs.anl.gov> <8D042256FF7CFD7-CCC-31DC4@webmail-d170.sysops.aol.com> <8D0422B45E0B60B-CCC-322AE@webmail-d170.sysops.aol.com> <51CDAFBE.4030707@mcs.anl.gov> <8D042312D057ACC-13C8-B635@webmail-m213.sysops.aol.com> <51CDBCE8.8090800@mcs.anl.gov> <8D04238D24B6F07-1114-44076@webmail-va009.sysops.aol.com> Message-ID: <51CDC407.8040402@mcs.anl.gov> Cc'ing discuss at mpich.org. Please keep it cc'ed. On 06/28/2013 12:05 PM, Danilo wrote: > Hi Pavan, > so it worked for you with realDim = 16384 and 3 processes.. Can you try > with 4 or 8 processes? > If so, what could be the problem? Probably the configuration of MPI on > the cluster here?
> > > -----Original Message----- > From: Pavan Balaji > To: Danilo > Sent: Fri, Jun 28, 2013 6:42 pm > Subject: Re: [mpich-discuss] mpi assertion error > > > Please keep discuss at mpich.org cc'ed. > > I can run your application fine. I tried 3 processes. > > -- Pavan > > On 06/28/2013 11:10 AM, Danilo wrote: >> Hi Pavan, >> >> here is a simple C file that creates a matrix with 16384 rows and >> 16384x2 columns and fills it with random float numbers (the columns are >> double the rows because they should represent the real and imaginary >> parts of a number). Then the master tries to distribute the elements >> through scatter/gather. >> This gives the same error as the FFT app I posted before.. >> >> Thanks for your help >> >> >> >> -----Original Message----- >> From: Pavan Balaji > >> To: discuss > >> Cc: Danilo > >> Sent: Fri, Jun 28, 2013 5:46 pm >> Subject: Re: [mpich-discuss] mpi assertion error >> >> Danilo, >> >> On 06/28/2013 10:28 AM, Danilo wrote: >>> the program was tested intensively on the previous cluster. The changes >>> made are in scatter/gather (due to sendbuf and recvbuf having to be >>> different in this version, it seems..). The other main change is due to >>> Hydra, because on the previous cluster there wasn't such a process >>> management system. But I'm quite new to programming, so I don't know... >> >> This doesn't look like a Hydra problem. Hydra is just telling you that >> the application died in an unexpected manner. >> >> The error the assert is showing is the real culprit. Can you try to >> strip out most of the code and create a simple benchmark that reproduces >> this error? >> >> -- Pavan >> >> -- >> Pavan Balaji >> http://www.mcs.anl.gov/~balaji >> > > -- > Pavan Balaji > http://www.mcs.anl.gov/~balaji > -- Pavan Balaji http://www.mcs.anl.gov/~balaji From jedbrown at mcs.anl.gov Fri Jun 28 17:21:33 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 28 Jun 2013 17:21:33 -0500 Subject: [mpich-discuss] MPICH mpif.h incompatible with gfortran-4.8 -std=f2003 Message-ID: <87li5tdggi.fsf@mcs.anl.gov> As shown below in SPECFEM3D, mpif.h cannot be included when using -std=f2003 with gfortran-4.8. I've Cc'd the MPICH list in case there is interest in converting mpif.h to be compatible with -std=f2003. Open MPI uses 'double precision' instead of 'real*8', rendering mpif.h safe to include. On the SPECFEM side, I believe this can be handled by switching from including mpif.h to using the f90 mpi module.
configure:6665: /opt/mpich/bin/mpif90 -c -g -O2 -std=f2003 -fimplicit-none -frange-check -O2 -DFORCE_VECTORIZATION -fmax-errors=10 -pedantic -pedantic-errors -Waliasing -Wampersand -Wcharacter-truncation -Wline-truncation -Wsurprising -Wno-tabs -Wunderflow -ffpe-trap=invalid,zero,overflow conftest.f90 >&5 mpif.h:16.18: Included at conftest.f90:4: CHARACTER*1 MPI_ARGVS_NULL(1,1) 1 Warning: Obsolescent feature: Old-style character length at (1) mpif.h:17.18: Included at conftest.f90:4: CHARACTER*1 MPI_ARGV_NULL(1) 1 Warning: Obsolescent feature: Old-style character length at (1) mpif.h:528.16: Included at conftest.f90:4: integer*8 MPI_DISPLACEMENT_CURRENT 1 Error: GNU Extension: Nonstandard type declaration INTEGER*8 at (1) mpif.h:529.42: Included at conftest.f90:4: PARAMETER (MPI_DISPLACEMENT_CURRENT=-54278278) 1 Error: Symbol 'mpi_displacement_current' at (1) has no IMPLICIT type mpif.h:546.13: Included at conftest.f90:4: REAL*8 MPI_WTIME, MPI_WTICK 1 Error: GNU Extension: Nonstandard type declaration REAL*8 at (1) mpif.h:547.13: Included at conftest.f90:4: REAL*8 PMPI_WTIME, PMPI_WTICK 1 Error: GNU Extension: Nonstandard type declaration REAL*8 at (1) mpif.h:550.18: Included at conftest.f90:4: CHARACTER*1 PADS_A(3), PADS_B(3) 1 Warning: Obsolescent feature: Old-style character length at (1) configure:6665: $? = 1 configure: failed program was: | | program main | | include 'mpif.h' | integer, parameter :: CUSTOM_MPI_TYPE = MPI_REAL | integer ier | call MPI_INIT(ier) | call MPI_BARRIER(MPI_COMM_WORLD,ier) | call MPI_FINALIZE(ier) | | end | From jedbrown at mcs.anl.gov Fri Jun 28 17:35:13 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 28 Jun 2013 17:35:13 -0500 Subject: [mpich-discuss] [CIG-SEISMO] MPICH mpif.h incompatible with gfortran-4.8 -std=f2003 In-Reply-To: <51CE0E9D.6040006@lma.cnrs-mrs.fr> References: <87li5tdggi.fsf@mcs.anl.gov> <51CE0E9D.6040006@lma.cnrs-mrs.fr> Message-ID: <87a9m9dftq.fsf@mcs.anl.gov> Dimitri Komatitsch writes: > Dear Jed, > > Elliott Sales de Andrade in Toronto is in the process of changing that > by using "use mpi" statements instead of "include mpif.h". Thus in a few > days please svn update and the problem should be fixed. Cool. > PS: I mentioned that to MPICH developers seven years ago, back then they > said they would fix it, i.e., switch to standard-conforming headers, but > they never did... Okay, well they may as well be reminded that it's still causing trouble. From jeff.science at gmail.com Fri Jun 28 17:52:30 2013 From: jeff.science at gmail.com (Jeff Hammond) Date: Fri, 28 Jun 2013 17:52:30 -0500 Subject: [mpich-discuss] MPICH mpif.h incompatible with gfortran-4.8 -std=f2003 In-Reply-To: <87li5tdggi.fsf@mcs.anl.gov> References: <87li5tdggi.fsf@mcs.anl.gov> Message-ID: <1782691080022105791@unknownmsgid> mpif.h is for F77 codes. Use the module for F90+. There is no point in fixing mpif.h for F03 because the solution already exists. The F08 bindings in MPI-3 are the long-term solution. Jeff Sent from my iPhone On Jun 28, 2013, at 5:21 PM, Jed Brown wrote: > As shown below in SPECFEM3D, mpif.h cannot be included when using > -std=f2003 with gfortran-4.8.
> I've Cc'd the MPICH list in case there is > interest in converting mpif.h to be compatible with -std=f2003. Open > MPI uses 'double precision' instead of 'real*8', rendering mpif.h safe > to include. > > On the SPECFEM side, I believe this can be handled by switching from > including mpif.h to using the f90 mpi module. > > [... quoted configure output trimmed; it appears in full in the original message above ...] > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss From jedbrown at mcs.anl.gov Fri Jun 28 17:56:26 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 28 Jun 2013 17:56:26 -0500 Subject: [mpich-discuss] MPICH mpif.h incompatible with gfortran-4.8 -std=f2003 In-Reply-To: <1782691080022105791@unknownmsgid> References: <87li5tdggi.fsf@mcs.anl.gov> <1782691080022105791@unknownmsgid> Message-ID: <877ghddeud.fsf@mcs.anl.gov> Jeff Hammond writes: > mpif.h is for F77 codes. Use the module for F90+. There is no point > in fixing mpif.h for F03 because the solution already exists. It would be trivially easy to make work, though probably not to get rid of the warnings. > The F08 bindings in MPI-3 are the long-term solution. Yup, and your son will be in college by the time MPI-3 is available on all supported RHEL versions. ;-)
From jeff.science at gmail.com Fri Jun 28 18:36:02 2013 From: jeff.science at gmail.com (Jeff Hammond) Date: Fri, 28 Jun 2013 18:36:02 -0500 Subject: [mpich-discuss] MPICH mpif.h incompatible with gfortran-4.8 -std=f2003 In-Reply-To: <877ghddeud.fsf@mcs.anl.gov> References: <87li5tdggi.fsf@mcs.anl.gov> <1782691080022105791@unknownmsgid> <877ghddeud.fsf@mcs.anl.gov> Message-ID: <3978537995863099864@unknownmsgid> On Jun 28, 2013, at 5:56 PM, Jed Brown wrote: > Jeff Hammond writes: > >> mpif.h is for F77 codes. Use the module for F90+. There is no point >> in fixing mpif.h for F03 because the solution already exists. > > It would be trivially easy to make work, though probably not to get rid > of the warnings. > Great. We look forward to your patch. >> The F08 bindings in MPI-3 are the long-term solution. > > Yup, and your son will be in college by the time MPI-3 is available on > all supported RHEL versions. ;-) You can compile MPICH 3.0.4 on Linux today. It's trivial. Why wait for RPMs? Those might even exist already too. Did you check the MPICH website? Jeff From jedbrown at mcs.anl.gov Sat Jun 29 09:41:55 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 29 Jun 2013 09:41:55 -0500 Subject: [mpich-discuss] [CIG-SEISMO] MPICH mpif.h incompatible with gfortran-4.8 -std=f2003 In-Reply-To: <51CED024.2060501@lma.cnrs-mrs.fr> References: <87li5tdggi.fsf@mcs.anl.gov> <1782691080022105791@unknownmsgid> <877ghddeud.fsf@mcs.anl.gov> <3978537995863099864@unknownmsgid> <51CED024.2060501@lma.cnrs-mrs.fr> Message-ID: <87ehblasi4.fsf@mcs.anl.gov> Dimitri Komatitsch writes: > Dear all, > > The (only) three things to do to make mpif.h conform to recent standards > (F90, F95, F2003) would be: > > - change all character* declarations to character(len=...) > > - change all real*8 to either double precision or real(kind=8) (both > are OK) > > - change all integer*8 to integer(kind=8) Well, mpif.h is supposed to be backward-compatible to F77. Upon checking the code, I see that MPICH offers the feature of tolerating -i8 and -r8 options. Open MPI uses 'double precision' for MPI_WTIME, which would be promoted to 16 bytes under -r8, so it won't behave correctly if the user expects a real*8 (perhaps specified as real(kind=8) [1]). The MPI standard says the prototype is DOUBLE PRECISION MPI_WTIME() so I would say that MPICH is non-conforming by using REAL*8, though the standard does not make any statements about what changes when compiled with -r8. I haven't checked to see how MPI data types are mapped in the case of -r8. For example, MPI_REAL is defined to a constant in mpif.h, and the MPI library (which was compiled in advance) would not be aware of whether user code was being built with -r8. Is the user expected to use MPI_REAL4 and MPI_REAL8 explicitly in this case? Are -r8 and -i8 tested somewhere in MPICH? What exactly is the user expected to do in order to use this feature? I see this comment in src/binding/f90/mpi_sizeofs.f90.in: ! If reals and doubles have been forced to the same size (e.g., with ! -i8 -r8 to compilers like g95), then the compiler may refuse to ! allow interfaces that use real and double precision (failing to ! determine which one is intended) [1] Note that real(kind=8) is not guaranteed by the Fortran standard to be 8 bytes. For example, g77 and the Salford F95 compilers denote 4-byte reals with real(kind=1) and 8-byte reals with real(kind=2).
http://www.silverfrost.com/manuals/salfordftn95.pdf From rreddypsc at gmail.com Sat Jun 29 13:22:25 2013 From: rreddypsc at gmail.com (rreddypsc at gmail.com) Date: Sat, 29 Jun 2013 14:22:25 -0400 Subject: [mpich-discuss] Non-blocking collectives Message-ID: I apologize for creating a new thread; I accidentally deleted the original message. Not to split hairs, can you please clarify the following: After doing a non-blocking collective: The MPI_Wait would be satisfied as soon as the other ranks in the communicator have called the non-blocking collective (and may not yet have called MPI_Wait), right? Or do they wait until all of them have called MPI_Wait? On the contrary, if one of the ranks calls MPI_Wait (without calling the non-blocking collective, which is an incorrect program), will the MPI_Wait be satisfied? Thanks, Raghu [mpich-discuss] Non-blocking Collectives Jiri Simsa jsimsa at cs.cmu.edu Thu Jun 27 10:33:14 CDT 2013 Hi Pavan, To rephrase, I am interested in understanding when MPI_Wait() would block indefinitely, waiting for another process to make progress. I believe that your response answers my question. Thanks again. --Jiri On Wed, Jun 26, 2013 at 4:48 PM, Pavan Balaji wrote: > Hi Jiri, > > > On 06/26/2013 03:08 PM, Jiri Simsa wrote: > >> Thank you for your quick answer. I am trying to understand the blocking >> behavior of MPI_Wait in the case of non-blocking collectives. Is it safe >> to assume that, for a non-blocking collective, MPI_Wait is guaranteed to >> return once all other processes call the corresponding completion >> operation (e.g. MPI_Wait or MPI_Test)? >> > > I'm not sure I understand your question. Are you asking if MPI_WAIT in a > process is guaranteed to return after some finite amount of time after > every other process has called MPI_WAIT? Then, yes. > > > -- > Pavan Balaji > http://www.mcs.anl.gov/~balaji From jedbrown at mcs.anl.gov Sat Jun 29 13:25:12 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 29 Jun 2013 13:25:12 -0500 Subject: [mpich-discuss] Non-blocking collectives In-Reply-To: References: Message-ID: <87r4fkai5z.fsf@mcs.anl.gov> rreddypsc at gmail.com writes: > I apologize for creating a new thread; I accidentally deleted the > original message. > > Not to split hairs, can you please clarify the following: > > After doing a non-blocking collective: > > The MPI_Wait would be satisfied as soon as the other ranks in the > communicator have called the non-blocking collective (and may not yet have > called MPI_Wait), right? Yes. > Or do they wait until all of them have called MPI_Wait? No, that would make the MPI_Wait synchronizing, mostly defeating any potential benefit of non-blocking collectives. Note that the non-blocking request can complete in other ways, e.g., via MPI_Test. > On the contrary, if one of the ranks calls MPI_Wait (without calling the > non-blocking collective, which is an incorrect program), will the MPI_Wait > be satisfied? This doesn't even make sense. Where would you be getting an MPI_Request on which to call MPI_Wait?
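Raghu's first question is easiest to see in code. Below is a minimal sketch, assuming an MPI-3 implementation such as MPICH 3.x (the payload and variable names are illustrative): start the non-blocking reduction, overlap local work, then complete it. When MPI_Wait returns, every other rank has at least started the MPI_Iallreduce, but has not necessarily completed it.

    #include <stdio.h>
    #include <mpi.h>

    /* Sketch of the pattern under discussion: start a non-blocking
     * reduction, overlap local work, complete with MPI_Wait.
     * Assumes an MPI-3 implementation (e.g. MPICH 3.x). */
    int main(int argc, char **argv)
    {
        int rank, in, out;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        in = rank;
        MPI_Iallreduce(&in, &out, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD, &req);

        /* ... local work that touches neither 'in' nor 'out' ... */

        /* 'out' is valid only after completion (MPI_Wait here, or a
         * successful MPI_Test). */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        printf("rank %d: sum of ranks = %d\n", rank, out);

        MPI_Finalize();
        return 0;
    }

Note the caveat that follows in the thread: implementations are allowed, though not required, to synchronize during completion.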
From thakur at mcs.anl.gov Sat Jun 29 14:25:03 2013 From: thakur at mcs.anl.gov (Rajeev Thakur) Date: Sat, 29 Jun 2013 14:25:03 -0500 Subject: [mpich-discuss] Non-blocking collectives In-Reply-To: <87r4fkai5z.fsf@mcs.anl.gov> References: <87r4fkai5z.fsf@mcs.anl.gov> Message-ID: <484592F2-DE5D-4BEC-BCFE-37B90B38158B@mcs.anl.gov> I think the Wait is allowed to wait until others call Wait. See the Advice to Users on 197:27-29: "Users should be aware that implementations are allowed, but not required (with exception of MPI_IBARRIER), to synchronize processes during the completion of a nonblocking collective operation." On Jun 29, 2013, at 1:25 PM, Jed Brown wrote: > rreddypsc at gmail.com writes: > >> I apologize for creating a new thread; I accidentally deleted the >> original message. >> >> Not to split hairs, can you please clarify the following: >> >> After doing a non-blocking collective: >> >> The MPI_Wait would be satisfied as soon as the other ranks in the >> communicator have called the non-blocking collective (and may not yet have >> called MPI_Wait), right? > > Yes. > >> Or do they wait until all of them have called MPI_Wait? > > No, that would make the MPI_Wait synchronizing, mostly defeating any > potential benefit of non-blocking collectives. Note that the > non-blocking request can complete in other ways, e.g., via MPI_Test. > >> On the contrary, if one of the ranks calls MPI_Wait (without calling the >> non-blocking collective, which is an incorrect program), will the MPI_Wait >> be satisfied? > > This doesn't even make sense. Where would you be getting an MPI_Request > on which to call MPI_Wait? > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss From jedbrown at mcs.anl.gov Sat Jun 29 14:43:10 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 29 Jun 2013 14:43:10 -0500 Subject: [mpich-discuss] Non-blocking collectives In-Reply-To: <484592F2-DE5D-4BEC-BCFE-37B90B38158B@mcs.anl.gov> References: <87r4fkai5z.fsf@mcs.anl.gov> <484592F2-DE5D-4BEC-BCFE-37B90B38158B@mcs.anl.gov> Message-ID: <87li5saek1.fsf@mcs.anl.gov> Rajeev Thakur writes: > I think the Wait is allowed to wait until others call Wait. Do you read this as allowing an implementation to defer completion until a _matching_ MPI_Wait is called, or merely until the application re-enters MPI somehow (sufficiently many times, or blocking)? I had the latter interpretation, which would allow non-blocking operations on different communicators to be used without risking deadlock. If the intent was really that the implementation can block until a _matching_ MPI_Wait is called, then I think the standard should clarify this point because it risks deadlock in a natural use case. > See the Advice to Users on 197:27-29: > > "Users should be aware that implementations are allowed, but not > required (with exception of MPI_IBARRIER), to synchronize processes > during the completion of a nonblocking collective operation."
From thakur at mcs.anl.gov Sat Jun 29 14:45:30 2013 From: thakur at mcs.anl.gov (Rajeev Thakur) Date: Sat, 29 Jun 2013 14:45:30 -0500 Subject: [mpich-discuss] Non-blocking collectives In-Reply-To: <87li5saek1.fsf@mcs.anl.gov> References: <87r4fkai5z.fsf@mcs.anl.gov> <484592F2-DE5D-4BEC-BCFE-37B90B38158B@mcs.anl.gov> <87li5saek1.fsf@mcs.anl.gov> Message-ID: <5E5BE5FC-3E6F-4150-BF0D-77070E5C100F@mcs.anl.gov> It doesn't have to be a matching Wait. The implementation is required to make progress on other pending communication while it is blocked in a Wait. On Jun 29, 2013, at 2:43 PM, Jed Brown wrote: > Rajeev Thakur writes: > >> I think the Wait is allowed to wait until others call Wait. > > Do you read this as allowing an implementation to defer completion until > a _matching_ MPI_Wait is called, or merely until the application > re-enters MPI somehow (sufficiently many times, or blocking)? I had the > latter interpretation, which would allow non-blocking operations on > different communicators to be used without risking deadlock. > > If the intent was really that the implementation can block until a > _matching_ MPI_Wait is called, then I think the standard should clarify > this point because it risks deadlock in a natural use case. > >> See the Advice to Users on 197:27-29: >> >> "Users should be aware that implementations are allowed, but not >> required (with exception of MPI_IBARRIER), to synchronize processes >> during the completion of a nonblocking collective operation." From jedbrown at mcs.anl.gov Sat Jun 29 15:05:35 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 29 Jun 2013 15:05:35 -0500 Subject: [mpich-discuss] Non-blocking collectives In-Reply-To: <5E5BE5FC-3E6F-4150-BF0D-77070E5C100F@mcs.anl.gov> References: <87r4fkai5z.fsf@mcs.anl.gov> <484592F2-DE5D-4BEC-BCFE-37B90B38158B@mcs.anl.gov> <87li5saek1.fsf@mcs.anl.gov> <5E5BE5FC-3E6F-4150-BF0D-77070E5C100F@mcs.anl.gov> Message-ID: <87ip0wadio.fsf@mcs.anl.gov> Rajeev Thakur writes: > It doesn't have to be a matching Wait. The implementation is required > to make progress on other pending communication while it is blocked in > a Wait. Or blocked in any of a number of other places, if non-blocking collective requests have the same completion semantics as point-to-point requests. E.g., this shouldn't deadlock: MPI_Iallreduce(...,COMM_WORLD,&req); if (!rank) MPI_Wait(&req); MPI_Allgather(...,COMM_WORLD); if (rank) MPI_Wait(&req); From thakur at mcs.anl.gov Sat Jun 29 15:08:10 2013 From: thakur at mcs.anl.gov (Rajeev Thakur) Date: Sat, 29 Jun 2013 15:08:10 -0500 Subject: [mpich-discuss] Non-blocking collectives In-Reply-To: <87ip0wadio.fsf@mcs.anl.gov> References: <87r4fkai5z.fsf@mcs.anl.gov> <484592F2-DE5D-4BEC-BCFE-37B90B38158B@mcs.anl.gov> <87li5saek1.fsf@mcs.anl.gov> <5E5BE5FC-3E6F-4150-BF0D-77070E5C100F@mcs.anl.gov> <87ip0wadio.fsf@mcs.anl.gov> Message-ID: <9FF380E5-CF81-4C13-9B72-286ECCFEB24F@mcs.anl.gov> Yes. On Jun 29, 2013, at 3:05 PM, Jed Brown wrote: > Rajeev Thakur writes: > >> It doesn't have to be a matching Wait. The implementation is required >> to make progress on other pending communication while it is blocked in >> a Wait.
> > Or blocked in any of a number of other places, if non-blocking > collective requests have the same completion semantics as point-to-point > requests. E.g., this shouldn't deadlock: > > MPI_Iallreduce(...,COMM_WORLD,&req); > if (!rank) MPI_Wait(&req); > MPI_Allgather(...,COMM_WORLD); > if (rank) MPI_Wait(&req); From rreddypsc at gmail.com Sat Jun 29 19:53:50 2013 From: rreddypsc at gmail.com (rreddypsc at gmail.com) Date: Sat, 29 Jun 2013 20:53:50 -0400 Subject: [mpich-discuss] Non-blocking collectives In-Reply-To: <9FF380E5-CF81-4C13-9B72-286ECCFEB24F@mcs.anl.gov> References: <87r4fkai5z.fsf@mcs.anl.gov> <484592F2-DE5D-4BEC-BCFE-37B90B38158B@mcs.anl.gov> <87li5saek1.fsf@mcs.anl.gov> <5E5BE5FC-3E6F-4150-BF0D-77070E5C100F@mcs.anl.gov> <87ip0wadio.fsf@mcs.anl.gov> <9FF380E5-CF81-4C13-9B72-286ECCFEB24F@mcs.anl.gov> Message-ID: This is a very good example! Thanks for the clarification! Thanks, Raghu --On Saturday, June 29, 2013 3:08 PM -0500 Rajeev Thakur wrote: > Yes. > > On Jun 29, 2013, at 3:05 PM, Jed Brown wrote: > >> Rajeev Thakur writes: >> >>> It doesn't have to be a matching Wait. The implementation is required >>> to make progress on other pending communication while it is blocked in >>> a Wait. >> >> Or blocked in any of a number of other places, if non-blocking >> collective requests have the same completion semantics as point-to-point >> requests. E.g., this shouldn't deadlock: >> >> MPI_Iallreduce(...,COMM_WORLD,&req); >> if (!rank) MPI_Wait(&req); >> MPI_Allgather(...,COMM_WORLD); >> if (rank) MPI_Wait(&req); > > _______________________________________________ > discuss mailing list discuss at mpich.org > To manage subscription options or unsubscribe: > https://lists.mpich.org/mailman/listinfo/discuss
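For completeness, Jed's sketch from the exchange above expands into a complete program along the following lines (a hedged reconstruction -- the integer payload and MPI_SUM are arbitrary choices, and it assumes an MPI-3 implementation). Per the discussion, it must complete, because the implementation has to progress the MPI_Iallreduce while rank 0 waits on it and the other ranks sit in MPI_Allgather:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    /* Jed's example, expanded into a runnable program. Rank 0 waits on
     * the iallreduce before the allgather; everyone else waits after it. */
    int main(int argc, char **argv)
    {
        int rank, size, in = 1, sum;
        int *gathered;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        gathered = malloc(size * sizeof(int));

        MPI_Iallreduce(&in, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD, &req);
        if (rank == 0) MPI_Wait(&req, MPI_STATUS_IGNORE);
        MPI_Allgather(&rank, 1, MPI_INT, gathered, 1, MPI_INT, MPI_COMM_WORLD);
        if (rank != 0) MPI_Wait(&req, MPI_STATUS_IGNORE);

        printf("rank %d: sum = %d\n", rank, sum);
        free(gathered);
        MPI_Finalize();
        return 0;
    }

The waits straddle the blocking MPI_Allgather on the same communicator, which is exactly the situation the thread concludes must make progress.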