I did try initializing multithreading support:

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int provided;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE) {
            printf("\nTHREAD LIBRARY DOESN'T HAVE MULTITHREADING SUPPORT\n");
            MPI_Finalize();
            exit(1);
        }

        MPI_Finalize();
        return 0;
    }

The code compiles, but at run time it aborts with:

    Assertion failed in file /home/viswa/libraries/mpich-3.1.4/src/include/mpiimplthreadpost.h at line 163: depth > 0 && depth < 10
    internal ABORT - process 1
    internal ABORT - process 0

Could you please point me to some documentation for MPI_Init_thread and for
MPICH's multithreading support? I am relatively new to it.

Thanks,
Viswanath


On Fri, Jul 31, 2015 at 2:41 AM, <discuss-request@mpich.org> wrote:

Send discuss mailing list submissions to
<a href="mailto:discuss@mpich.org">discuss@mpich.org</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
<a href="https://lists.mpich.org/mailman/listinfo/discuss" rel="noreferrer" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
or, via email, send a message with subject or body 'help' to<br>
<a href="mailto:discuss-request@mpich.org">discuss-request@mpich.org</a><br>
<br>
You can reach the person managing the list at<br>
<a href="mailto:discuss-owner@mpich.org">discuss-owner@mpich.org</a><br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of discuss digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
1. PANFS Remove RAID0 and add RAIDN to MPICH 3.2 (Victorelli, Ron)<br>
2. Re: PANFS Remove RAID0 and add RAIDN to MPICH 3.2 (Rob Latham)<br>
3. Re: hydra, stdin close(), and SLURM (Aaron Knister)<br>
4. Re: Nemesis engine (Viswanath Krishnamurthy)<br>
5. Re: Nemesis engine (Halim Amer)<br>
6. Active loop in MPI_Waitany? (Dorier, Matthieu)<br>
7. Re: Active loop in MPI_Waitany? (Jeff Hammond)<br>
<br>
<br>
----------------------------------------------------------------------<br>
<br>
Message: 1<br>
Date: Wed, 29 Jul 2015 12:07:40 +0000<br>
From: "Victorelli, Ron" <<a href="mailto:rvictorelli@panasas.com">rvictorelli@panasas.com</a>><br>
To: "<a href="mailto:discuss@mpich.org">discuss@mpich.org</a>" <<a href="mailto:discuss@mpich.org">discuss@mpich.org</a>><br>
Subject: [mpich-discuss] PANFS Remove RAID0 and add RAIDN to MPICH 3.2<br>
Message-ID:<br>
<<a href="mailto:BN3PR08MB12888AACA7907A8ACED7531AA18C0@BN3PR08MB1288.namprd08.prod.outlook.com">BN3PR08MB12888AACA7907A8ACED7531AA18C0@BN3PR08MB1288.namprd08.prod.outlook.com</a>><br>
<br>
Content-Type: text/plain; charset="us-ascii"<br>
<br>
I am a developer at Panasas, and we would like to provide a patch that<br>
removes RAID0 support and adds RAIDN support to romio (MPICH 3.2):<br>
<br>
src/mpi/romio/adio/ad_panfs/ad_panfs_open.c<br>
<br>
I currently do not have an MCS or trac account.<br>
<br>
Thank You<br>
<br>
Ron Victorelli<br>
Software Engineer<br>
Panasas, Inc<br>
Email: <a href="mailto:rvictorelli@panasas.com">rvictorelli@panasas.com</a><mailto:<a href="mailto:rvictorelli@panasas.com">rvictorelli@panasas.com</a>><br>
Tel: <a href="tel:412%20-323-6422" value="+14123236422">412 -323-6422</a><br>
<a href="http://www.panasas.com" rel="noreferrer" target="_blank">www.panasas.com</a><<a href="http://www.panasas.com" rel="noreferrer" target="_blank">http://www.panasas.com</a>><br>
[Panasas_Logo_LR]<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://lists.mpich.org/pipermail/discuss/attachments/20150729/00b240fc/attachment-0001.html" rel="noreferrer" target="_blank">http://lists.mpich.org/pipermail/discuss/attachments/20150729/00b240fc/attachment-0001.html</a>><br>
-------------- next part --------------<br>
A non-text attachment was scrubbed...<br>
Name: image001.jpg<br>
Type: image/jpeg<br>
Size: 3610 bytes<br>
Desc: image001.jpg<br>
URL: <<a href="http://lists.mpich.org/pipermail/discuss/attachments/20150729/00b240fc/attachment-0001.jpg" rel="noreferrer" target="_blank">http://lists.mpich.org/pipermail/discuss/attachments/20150729/00b240fc/attachment-0001.jpg</a>><br>
<br>
------------------------------<br>
<br>
Message: 2<br>
Date: Wed, 29 Jul 2015 15:52:31 -0500<br>
From: Rob Latham <<a href="mailto:robl@mcs.anl.gov">robl@mcs.anl.gov</a>><br>
To: <<a href="mailto:discuss@mpich.org">discuss@mpich.org</a>>, <<a href="mailto:rvictorelli@panasas.com">rvictorelli@panasas.com</a>><br>
Subject: Re: [mpich-discuss] PANFS Remove RAID0 and add RAIDN to MPICH<br>
3.2<br>
Message-ID: <<a href="mailto:55B93D0F.7040703@mcs.anl.gov">55B93D0F.7040703@mcs.anl.gov</a>><br>
Content-Type: text/plain; charset="windows-1252"; format=flowed<br>
<br>
<br>
<br>
On 07/29/2015 07:07 AM, Victorelli, Ron wrote:<br>
> I am a developer at Panasas, and we would like to provide a patch that<br>
><br>
> removes RAID0 support and adds RAIDN support to romio (MPICH 3.2):<br>
><br>
> src/mpi/romio/adio/ad_panfs/ad_panfs_open.c<br>
><br>
> I currently do not have an MCS or trac account.<br>
<br>
Hi Ron. I'm pleased to have contributions from Panasas. It's your<br>
first since 2007!<br>
<br>
If you've got a lot of patches in the works, maybe we should go ahead<br>
and set you up with a trac account and/or a git tree.<br>
<br>
If you're just looking to get this patch into the tree, that's fine too:<br>
it's definitely easier and you will just need to 'git format-patch' your<br>
changes and email them to me.<br>
<br>
==rob<br>
<br>
><br>
> Thank You<br>
><br>
> Ron Victorelli<br>
><br>
> Software Engineer<br>
><br>
> Panasas, Inc<br>
><br>
> Email: <a href="mailto:rvictorelli@panasas.com">rvictorelli@panasas.com</a> <mailto:<a href="mailto:rvictorelli@panasas.com">rvictorelli@panasas.com</a>><br>
><br>
> Tel: <a href="tel:412%20-323-6422" value="+14123236422">412 -323-6422</a><br>
><br>
> <a href="http://www.panasas.com" rel="noreferrer" target="_blank">www.panasas.com</a> <<a href="http://www.panasas.com" rel="noreferrer" target="_blank">http://www.panasas.com</a>><br>
><br>
> Panasas_Logo_LR<br>
><br>
><br>
><br>
> _______________________________________________<br>
> discuss mailing list <a href="mailto:discuss@mpich.org">discuss@mpich.org</a><br>
> To manage subscription options or unsubscribe:<br>
> <a href="https://lists.mpich.org/mailman/listinfo/discuss" rel="noreferrer" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
><br>
<br>
--<br>
Rob Latham<br>
Mathematics and Computer Science Division<br>
Argonne National Lab, IL USA<br>
<br>
<br>
------------------------------<br>
<br>
Message: 3<br>
Date: Wed, 29 Jul 2015 17:14:53 -0400<br>
From: Aaron Knister <<a href="mailto:aaron.s.knister@nasa.gov">aaron.s.knister@nasa.gov</a>><br>
To: <<a href="mailto:discuss@mpich.org">discuss@mpich.org</a>><br>
Subject: Re: [mpich-discuss] hydra, stdin close(), and SLURM<br>
Message-ID: <<a href="mailto:55B9424D.6030104@nasa.gov">55B9424D.6030104@nasa.gov</a>><br>
Content-Type: text/plain; charset="windows-1252"; Format="flowed"<br>
<br>
Thanks Pavan!<br>
<br>
-Aaron<br>
<br>
On 7/28/15 3:23 PM, Balaji, Pavan wrote:<br>
> Hi Aaron,<br>
><br>
> I've committed it to mpich/master:<br>
><br>
> <a href="http://git.mpich.org/mpich.git/commitdiff/6b41775b2056ff18b3c28aab71764e35904c00fa" rel="noreferrer" target="_blank">http://git.mpich.org/mpich.git/commitdiff/6b41775b2056ff18b3c28aab71764e35904c00fa</a><br>
><br>
> Thanks for the contribution.<br>
><br>
> This should be in tonight's nightlies:<br>
><br>
> <a href="http://www.mpich.org/static/downloads/nightly/master/mpich/" rel="noreferrer" target="_blank">http://www.mpich.org/static/downloads/nightly/master/mpich/</a><br>
><br>
> ... and in the upcoming mpich-3.2rc1 release.<br>
><br>
> -- Pavan<br>
><br>
><br>
><br>
><br>
> On 7/27/15, 1:40 PM, "Balaji, Pavan" <<a href="mailto:balaji@anl.gov">balaji@anl.gov</a>> wrote:<br>
><br>
>> Hi Aaron,<br>
>><br>
>><br>
>><br>
>> Please send the patch to me directly.<br>
>><br>
>> General guidelines as to the kind of patches we ask for:<br>
>><br>
>> <a href="https://wiki.mpich.org/mpich/index.php/Version_Control_Systems_101" rel="noreferrer" target="_blank">https://wiki.mpich.org/mpich/index.php/Version_Control_Systems_101</a><br>
>><br>
>> You can ignore the git workflow related text, which is for our internal testing. I'll take care of that for you.<br>
>><br>
>> Thanks,<br>
>><br>
>> -- Pavan<br>
>><br>
>> On 7/27/15, 1:36 PM, "Aaron Knister" <<a href="mailto:aaron.s.knister@nasa.gov">aaron.s.knister@nasa.gov</a>> wrote:<br>
>><br>
>>> Hi Pavan,<br>
>>><br>
>>> I see your reply in the archives but it didn't make it to my inbox so<br>
>>> I'm replying to my post. I don't disagree with you about the error<br>
>>> being in the SLURM code, but I'm not sure how one would prevent this<br>
>>> reliably. SLURM has no expectation that an external library will open<br>
>>> something at file descriptor 0 before it reaches the point in the code<br>
>>> where it's ready to poll for stdin. Do you have any suggestions?<br>
>>><br>
>>> It's been a long while since I've done a git e-mail patch so it might<br>
>>> take me a bit to figure out. Should I send the patch to the list or to<br>
>>> you directly?<br>
>>><br>
>>> Thanks!<br>
>>><br>
>>> -Aaron<br>
>>><br>
>>> On 7/25/15 10:26 PM, Aaron Knister wrote:<br>
>>>> I sent this off to the mvapich list yesterday and it was suggested I<br>
>>>> raise it here since this is the upstream project:<br>
>>>><br>
>>>> This is a bit of a cross post from a thread I started on the slurm dev<br>
>>>> list: <a href="http://article.gmane.org/gmane.comp.distributed.slurm.devel/8176" rel="noreferrer" target="_blank">http://article.gmane.org/gmane.comp.distributed.slurm.devel/8176</a><br>
>>>><br>
>>>> I'd like to get feedback on the idea that "--input none" be passed to<br>
>>>> srun when using the SLURM hydra bootstrap mechanism. I figured it<br>
>>>> would be inserted somewhere around here<br>
>>>> <a href="http://trac.mpich.org/projects/mpich/browser/src/pm/hydra/tools/bootstrap/external/slurm_launch.c#L98" rel="noreferrer" target="_blank">http://trac.mpich.org/projects/mpich/browser/src/pm/hydra/tools/bootstrap/external/slurm_launch.c#L98</a>.<br>
>>>><br>
>>>><br>
>>>> Without this argument I'm getting spurious job aborts and confusing<br>
>>>> errors. The gist of it is mpiexec.hydra closes stdin before it exec's<br>
>>>> srun. srun then (possibly via the munge libraries) calls some function<br>
>>>> that does a look up via nss. We use sssd for AAA so libnss_sssd will<br>
>>>> handle this request. Part of the caching mechanism sssd uses will<br>
>>>> cause the library to open() the cache file. The lowest fd available is<br>
>>>> 0 so the cache file is opened on fd 0. srun then believes it's got<br>
>>>> stdin attached and it causes the issues outlined in the slurm dev<br>
>>>> post. I think passing "--input none" is the right thing to do here<br>
>>>> since hydra has in fact closed stdin to srun. I tested this via the<br>
>>>> HYDRA_LAUNCHER_EXTRA_ARGS environment variable and it does resolve the<br>
>>>> errors I described.<br>
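A minimal sketch of the descriptor reuse described above, assuming a POSIX system; /dev/null stands in for the sssd cache file, and none of this is hydra or SLURM code:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* mpiexec.hydra closes stdin before exec'ing the launcher ...      */
        close(0);

        /* ... so the next open() in the child, whatever it is for, gets the
         * lowest free descriptor, which is now 0. /dev/null stands in for
         * the nss/sssd cache file here.                                    */
        int fd = open("/dev/null", O_RDONLY);

        printf("opened on fd %d\n", fd);  /* prints 0: fd 0 now looks like stdin */
        return 0;
    }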
>>>><br>
>>>> Thanks!<br>
>>>> -Aaron<br>
>>>><br>
>>> --<br>
>>> Aaron Knister<br>
>>> NASA Center for Climate Simulation (Code 606.2)<br>
>>> Goddard Space Flight Center<br>
>>> <a href="tel:%28301%29%20286-2776" value="+13012862776">(301) 286-2776</a><br>
>>><br>
>>><br>
>> _______________________________________________<br>
>> discuss mailing list <a href="mailto:discuss@mpich.org">discuss@mpich.org</a><br>
>> To manage subscription options or unsubscribe:<br>
>> <a href="https://lists.mpich.org/mailman/listinfo/discuss" rel="noreferrer" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
> _______________________________________________<br>
> discuss mailing list <a href="mailto:discuss@mpich.org">discuss@mpich.org</a><br>
> To manage subscription options or unsubscribe:<br>
> <a href="https://lists.mpich.org/mailman/listinfo/discuss" rel="noreferrer" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
<br>
--<br>
Aaron Knister<br>
NASA Center for Climate Simulation (Code 606.2)<br>
Goddard Space Flight Center<br>
<a href="tel:%28301%29%20286-2776" value="+13012862776">(301) 286-2776</a><br>
<br>
<br>
-------------- next part --------------<br>
A non-text attachment was scrubbed...<br>
Name: signature.asc<br>
Type: application/pgp-signature<br>
Size: 842 bytes<br>
Desc: OpenPGP digital signature<br>
URL: <<a href="http://lists.mpich.org/pipermail/discuss/attachments/20150729/0280399d/attachment-0001.pgp" rel="noreferrer" target="_blank">http://lists.mpich.org/pipermail/discuss/attachments/20150729/0280399d/attachment-0001.pgp</a>><br>
<br>
------------------------------<br>
<br>
Message: 4<br>
Date: Thu, 30 Jul 2015 17:40:35 +0300<br>
From: Viswanath Krishnamurthy <<a href="mailto:writetoviswa@gmail.com">writetoviswa@gmail.com</a>><br>
To: <a href="mailto:discuss@mpich.org">discuss@mpich.org</a><br>
Subject: Re: [mpich-discuss] Nemesis engine<br>
Message-ID:<br>
<CADhQ-jDZix3e2TmPAPjX3O7GO+Z7vOzSphPgc4Py+B=<a href="mailto:eRGBypA@mail.gmail.com">eRGBypA@mail.gmail.com</a>><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
Hi All,<br>
<br>
I am currently working with MPICH version 3.1.4 on Ubuntu,<br>
where I get an error stating that<br>
<br>
Assertion failed in<br>
file src/mpid/ch3/channels/nemesis/src/ch3_progress.c at Line 252<br>
<br>
The actual problem I face is that even though the matching MPI_Sends have<br>
already been dispatched, certain nodes keep waiting on MPI_Recvs that<br>
never arrive (using multithreading). From what I have read online, my<br>
understanding is that nemesis handles receives from only one thread.<br>
Please point me to the latest patch for the nemesis engine, or to the MPICH<br>
version that includes the relevant changes, in<br>
src/mpid/ch3/channels/nemesis/src/ch3_progress.c<br>
<br>
Thanks,<br>
Viswanath<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://lists.mpich.org/pipermail/discuss/attachments/20150730/c9d4e337/attachment-0001.html" rel="noreferrer" target="_blank">http://lists.mpich.org/pipermail/discuss/attachments/20150730/c9d4e337/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
Message: 5<br>
Date: Thu, 30 Jul 2015 09:51:40 -0500<br>
From: Halim Amer <<a href="mailto:aamer@anl.gov">aamer@anl.gov</a>><br>
To: <<a href="mailto:discuss@mpich.org">discuss@mpich.org</a>><br>
Subject: Re: [mpich-discuss] Nemesis engine<br>
Message-ID: <<a href="mailto:55BA39FC.20406@anl.gov">55BA39FC.20406@anl.gov</a>><br>
Content-Type: text/plain; charset="windows-1252"; format=flowed<br>
<br>
Hi Viswanath,<br>
<br>
Nemesis supports multithreading. Have you initialized the MPI<br>
environment with MPI_THREAD_MULTIPLE threading support?<br>
<br>
If you still see the problem after the above initialization, please send<br>
us a minimal example code that reproduces it.<br>
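A minimal sketch of the kind of reproducer being asked for here, assuming exactly two ranks and one extra POSIX thread per rank; the tags and helper names are illustrative, not taken from the thread:

    #include <mpi.h>
    #include <pthread.h>
    #include <stdio.h>

    static int rank, peer;

    /* Worker thread: exchanges one message with the peer rank on tag 1 while
       the main thread does the same on tag 0, so two threads per process are
       inside MPI at the same time. */
    static void *worker(void *arg)
    {
        (void)arg;
        int sendval = rank, recvval = -1;
        MPI_Sendrecv(&sendval, 1, MPI_INT, peer, 1,
                     &recvval, 1, MPI_INT, peer, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int provided, size;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE) {
            fprintf(stderr, "MPI_THREAD_MULTIPLE not available (got %d)\n", provided);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size != 2) {
            if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }
        peer = 1 - rank;

        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);

        int sendval = rank, recvval = -1;
        MPI_Sendrecv(&sendval, 1, MPI_INT, peer, 0,
                     &recvval, 1, MPI_INT, peer, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        pthread_join(t, NULL);
        MPI_Finalize();
        return 0;
    }

Built with mpicc -pthread and launched with mpiexec -n 2, both threads in each process are inside MPI concurrently, which is only legal once MPI_THREAD_MULTIPLE has actually been granted.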
<br>
Thank you,<br>
--Halim<br>
<br>
Abdelhalim Amer (Halim)<br>
Postdoctoral Appointee<br>
MCS Division<br>
Argonne National Laboratory<br>
<br>
On 7/30/15 9:40 AM, Viswanath Krishnamurthy wrote:<br>
> Hi All,<br>
><br>
> I am currently working with MPICH version 3.1.4 on Ubuntu,<br>
> where I get an error stating that<br>
><br>
> Assertion failed in<br>
> file src/mpid/ch3/channels/nemesis/src/ch3_progress.c at Line 252<br>
> The actual problem I face is that even though the matching MPI_Sends have<br>
> already been dispatched, certain nodes keep waiting on MPI_Recvs that<br>
> never arrive (using multithreading). From what I have read online, my<br>
> understanding is that nemesis handles receives from only one thread.<br>
> Please point me to the latest patch for the nemesis engine, or to the MPICH<br>
> version that includes the relevant changes, in<br>
> src/mpid/ch3/channels/nemesis/src/ch3_progress.c<br>
><br>
> Thanks,<br>
> Viswanath<br>
><br>
><br>
><br>
> _______________________________________________<br>
> discuss mailing list <a href="mailto:discuss@mpich.org">discuss@mpich.org</a><br>
> To manage subscription options or unsubscribe:<br>
> <a href="https://lists.mpich.org/mailman/listinfo/discuss" rel="noreferrer" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
><br>
<br>
<br>
------------------------------<br>
<br>
Message: 6<br>
Date: Thu, 30 Jul 2015 21:09:04 +0000<br>
From: "Dorier, Matthieu" <<a href="mailto:mdorier@anl.gov">mdorier@anl.gov</a>><br>
To: "<a href="mailto:discuss@mpich.org">discuss@mpich.org</a>" <<a href="mailto:discuss@mpich.org">discuss@mpich.org</a>><br>
Subject: [mpich-discuss] Active loop in MPI_Waitany?<br>
Message-ID: <<a href="mailto:37142D5FC373A846ACE4F75AA11EA84D21BA0122@DITKA.anl.gov">37142D5FC373A846ACE4F75AA11EA84D21BA0122@DITKA.anl.gov</a>><br>
Content-Type: text/plain; charset="iso-8859-1"<br>
<br>
Hi,<br>
<br>
I have a code that looks like this:<br>
<br>
while(true) {<br>
do some I/O (HDF5 POSIX output to a remote, parallel file system)<br>
wait for communication (MPI_Waitany) from other processes (in the same node and outside the node)<br>
}<br>
<br>
I'm measuring the energy consumption of the node that runs this process for the same duration, as a function of the amount of data written in each I/O operation.<br>
Surprisingly, the larger the I/O in proportion to the communication, the lower the energy consumption. In other words, the longer I wait in MPI_Waitany, the more energy I consume.<br>
<br>
Does anyone have a good explanation for that? Is there an active loop in MPI_Waitany? Another reason?<br>
<br>
Thanks!<br>
<br>
Matthieu<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://lists.mpich.org/pipermail/discuss/attachments/20150730/6777b87d/attachment-0001.html" rel="noreferrer" target="_blank">http://lists.mpich.org/pipermail/discuss/attachments/20150730/6777b87d/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
Message: 7<br>
Date: Thu, 30 Jul 2015 19:41:21 -0400<br>
From: Jeff Hammond <<a href="mailto:jeff.science@gmail.com">jeff.science@gmail.com</a>><br>
To: "<a href="mailto:discuss@mpich.org">discuss@mpich.org</a>" <<a href="mailto:discuss@mpich.org">discuss@mpich.org</a>><br>
Subject: Re: [mpich-discuss] Active loop in MPI_Waitany?<br>
Message-ID:<br>
<CAGKz=<a href="mailto:uJr5NmO%2BcsEBDOtk67zz%2BHDEaax_JoLNzHWswZipcPCyA@mail.gmail.com">uJr5NmO+csEBDOtk67zz+HDEaax_JoLNzHWswZipcPCyA@mail.gmail.com</a>><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
Seems obvious that Waitany spins on the array of requests until one<br>
completes. Is that an active loop by your definition?<br>
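If that spinning is what shows up in the energy numbers, one common workaround is to poll with MPI_Testany and back off between attempts; the sketch below is not MPICH internals, and the 1 ms sleep is an arbitrary illustrative value:

    #include <mpi.h>
    #include <unistd.h>

    /* Drop-in style replacement for MPI_Waitany that backs off between polls
       instead of spinning at full speed; trades wake-up latency for lower
       CPU usage. Returns the index of the completed request. */
    int waitany_with_backoff(int count, MPI_Request reqs[], MPI_Status *status)
    {
        int idx = MPI_UNDEFINED, flag = 0;
        while (!flag) {
            MPI_Testany(count, reqs, &idx, &flag, status);  /* non-blocking check */
            if (!flag)
                usleep(1000);                                /* ~1 ms back-off */
        }
        return idx;
    }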
<br>
Jeff<br>
<br>
On Thursday, July 30, 2015, Dorier, Matthieu <<a href="mailto:mdorier@anl.gov">mdorier@anl.gov</a>> wrote:<br>
<br>
> Hi,<br>
><br>
> I have a code that looks like this:<br>
><br>
> while(true) {<br>
> do some I/O (HDF5 POSIX output to a remote, parallel file system)<br>
> wait for communication (MPI_Waitany) from other processes (in the same<br>
> node and outside the node)<br>
> }<br>
><br>
> I'm measuring the energy consumption of the node that runs this process<br>
> for the same duration, as a function of the amount of data written in each<br>
> I/O operation.<br>
> Surprisingly, the larger the I/O in proportion to the communication, the<br>
> lower the energy consumption. In other words, the longer I wait in<br>
> MPI_Waitany, the more energy I consume.<br>
><br>
> Does anyone have a good explanation for that? Is there an active loop in<br>
> MPI_Waitany? Another reason?<br>
><br>
> Thanks!<br>
><br>
> Matthieu<br>
><br>
<br>
<br>
--<br>
Jeff Hammond<br>
<a href="mailto:jeff.science@gmail.com">jeff.science@gmail.com</a><br>
<a href="http://jeffhammond.github.io/" rel="noreferrer" target="_blank">http://jeffhammond.github.io/</a><br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://lists.mpich.org/pipermail/discuss/attachments/20150730/8390a38a/attachment.html" rel="noreferrer" target="_blank">http://lists.mpich.org/pipermail/discuss/attachments/20150730/8390a38a/attachment.html</a>><br>
<br>
------------------------------<br>
<br>
_______________________________________________<br>
discuss mailing list<br>
<a href="mailto:discuss@mpich.org">discuss@mpich.org</a><br>
<a href="https://lists.mpich.org/mailman/listinfo/discuss" rel="noreferrer" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
<br>
End of discuss Digest, Vol 33, Issue 10<br>
***************************************<br>