Hello Dave,

Thanks for your reply. We followed your advice and installed Hydra on each node.

We specify the node IP addresses in a hosts file. For example:

shell $ mpiexec -f hosts -np 2 ./app
shell $ cat hosts
192.168.0.1
192.168.1.1

(The two node IPs belong to two different subnets: for example, subnet #1 192.168.0.0/24 and subnet #2 192.168.1.0/24.)

The error output is:

"ssh: connect to host 192.168.1.1 port 22: Connection timed out"

So is there an option for Hydra that solves this problem?
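(For completeness, the checks we can run outside of MPI, and the Hydra options we understand might be relevant, are roughly the following; "eth0" is only a placeholder for whichever interface actually routes between the two subnets, and please correct us if -verbose/-iface are not the right tools here:)

shell $ ping -c 3 192.168.1.1                      # is there a route to the other subnet?
shell $ ssh 192.168.1.1 hostname                   # does ssh on port 22 work outside of MPI?
shell $ mpiexec -verbose -f hosts -np 2 ./app      # show the launch commands Hydra actually runs
shell $ mpiexec -iface eth0 -f hosts -np 2 ./app   # pin MPI communication to a given interface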
Thank you!

Sincerely,
Na Zhang

On Fri, Jan 11, 2013 at 5:20 PM, <discuss-request@mpich.org> wrote:
Send discuss mailing list submissions to
        discuss@mpich.org

To subscribe or unsubscribe via the World Wide Web, visit
        https://lists.mpich.org/mailman/listinfo/discuss
or, via email, send a message with subject or body 'help' to
        discuss-request@mpich.org

You can reach the person managing the list at
        discuss-owner@mpich.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of discuss digest..."

Today's Topics:

   1. Re: [PATCH] Use attribute layout_compatible for pair types (Jed Brown)
   2. Re: [PATCH] Use attribute layout_compatible for pair types (Dmitri Gribenko)
   3. start MPD daemons on 2 different subnets (Na Zhang)
   4. Re: start MPD daemons on 2 different subnets (Dave Goodell)
   5. Fatal error in PMPI_Reduce (Michael Colonno)
   6. Re: Fatal error in PMPI_Reduce (Pavan Balaji)
   7. Re: Fatal error in PMPI_Reduce (Pavan Balaji)

----------------------------------------------------------------------

Message: 1
Date: Wed, 9 Jan 2013 14:00:50 -0600
From: Jed Brown <jedbrown@mcs.anl.gov>
To: discuss@mpich.org
Subject: Re: [mpich-discuss] [PATCH] Use attribute layout_compatible for pair types
Message-ID: <CAM9tzSnqJHaj6wbKBdAWp5YveG+UW_OWiA768GRb1spHjn+TZw@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

On Jan 9, 2013 12:56 PM, "Dmitri Gribenko" <gribozavr@gmail.com> wrote:

> On Wed, Jan 9, 2013 at 8:19 PM, Dave Goodell <goodell@mcs.anl.gov> wrote:
> > Both implemented and pushed as d440abb and ac15f7a. Thanks.
> >
> > -Dave
> >
> > On Jan 1, 2013, at 11:14 PM CST, Jed Brown wrote:
> >
> >> In addition, I suggest guarding these definitions. Leaving these in
> increases the total number of symbols in an example executable linking
> PETSc by a factor of 2. (They're all read-only, but they're still there.)
> Clang is smart enough to remove these, presumably because it understands
> the special attributes.
>
> No, LLVM removes these not because of the attributes, but because
> these are unused. And when they are used, most of the time they don't
> have their address taken, so their value is propagated to the point
> where they are read and the constants again become unused.
>
> I don't think GCC isn't smart enough to do the same. Do you compile
> with optimization?
>

Dmitri, as discussed in the other thread, it's smart enough, but only when
optimization is turned on. There's no reason to needlessly make debug
builds heavier than necessary. This is not a big deal either way.

------------------------------

Message: 2
Date: Wed, 9 Jan 2013 22:57:14 +0200
From: Dmitri Gribenko <gribozavr@gmail.com>
To: discuss@mpich.org
Subject: Re: [mpich-discuss] [PATCH] Use attribute layout_compatible for pair types
Message-ID: <CA+Y5xYeBp974pDiL0QFAhjxpeqpB2Xykjx-atYFtLWQ2Oq+aoA@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

On Wed, Jan 9, 2013 at 10:00 PM, Jed Brown <jedbrown@mcs.anl.gov> wrote:
> On Jan 9, 2013 12:56 PM, "Dmitri Gribenko" <gribozavr@gmail.com> wrote:
>> I don't think GCC isn't smart enough to do the same. Do you compile
>> with optimization?
>
> Dmitri, as discussed in the other thread, it's smart enough, but only when
> optimization is turned on. There's no reason to needlessly make debug builds
> heavier than necessary. This is not a big deal either way.

Oh, now I see -- in debug builds it still emits these. Thank you for fixing!

Dmitri

--
main(i,j){for(i=2;;i++){for(j=2;j<i;j++){if(!(i%j)){j=0;break;}}if
(j){printf("%d\n",i);}}} /*Dmitri Gribenko <gribozavr@gmail.com>*/

------------------------------

Message: 3
Date: Fri, 11 Jan 2013 13:55:34 -0500
From: Na Zhang <na.zhang@stonybrook.edu>
To: discuss@mpich.org
Subject: [mpich-discuss] start MPD daemons on 2 different subnets
Message-ID: <CAFbC_ZLoN6vwy3JnwvOpWR50tff8zn1qWt2WcxxfKHghigRzDw@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

Dear developers,

We want to start MPD daemons on 2 different subnets: for example,
subnet #1 192.168.0.0/24 and subnet #2: 192.168.1.0/24

The two subnets are connected via switches and they can talk to each
other. Next, we'd start MPD daemons on two nodes:

Node #1 (in subnet #1): (hostname:node_001) IP=192.168.0.1
Node #2 (in subnet #2): (hostname:node_002) IP=192.168.1.1

We used the following commands:

on Node #1: mpd --ifhn=192.168.0.1 --daemon
(daemon is successfully started on node_001)

on Node #2: mpd -h node_001 -p <node1's_port_number> --ifhn=192.168.1.1 --daemon
(daemon cannot be started on Node #2. there is no error message, we
use "mpdtrace" to check on Node #1, it shows no daemon started on Node #2)

Node #2 cannot join the ring that is generated by node #1.

How should we do?

Thank you in advance.

Sincerely,
Na Zhang

--
Na Zhang, Ph.D. Candidate
Dept. of Applied Mathematics and Statistics
Stony Brook University
Phone: 631-838-3205

------------------------------

Message: 4
Date: Fri, 11 Jan 2013 12:59:07 -0600
From: Dave Goodell <goodell@mcs.anl.gov>
To: discuss@mpich.org
Subject: Re: [mpich-discuss] start MPD daemons on 2 different subnets
Message-ID: <F78FA019-E83D-40C1-A1CE-26C778411507@mcs.anl.gov>
Content-Type: text/plain; charset=us-ascii

Use hydra instead of MPD:

http://wiki.mpich.org/mpich/index.php/FAQ#Q:_I_don.27t_like_.3CWHATEVER.3E_about_mpd.2C_or_I.27m_having_a_problem_with_mpdboot.2C_can_you_fix_it.3F

-Dave
<br>
------------------------------

Message: 5
Date: Fri, 11 Jan 2013 13:31:48 -0800
From: "Michael Colonno" <mcolonno@stanford.edu>
To: <discuss@mpich.org>
Subject: [mpich-discuss] Fatal error in PMPI_Reduce
Message-ID: <0cc801cdf043$10bc1890$323449b0$@stanford.edu>
Content-Type: text/plain; charset="us-ascii"

Hi All ~

I've compiled MPICH2 3.0 with the Intel compiler (v. 13) on a
CentOS 6.3 x64 system using SLURM as the process manager. My configure was
simply:

./configure --with-pmi=slurm --with-pm=no --prefix=/usr/local/apps/MPICH2

No errors during build or install. When I compile and run the example
program cxxcpi I get (truncated):

$ srun -n32 /usr/local/apps/cxxcpi
Fatal error in PMPI_Reduce: A process has failed, error stack:
PMPI_Reduce(1217)...............: MPI_Reduce(sbuf=0x7fff4ad18120,
rbuf=0x7fff4ad18128, count=1, MPI_DOUBLE, MPI_SUM, root=0, MPI_COMM_WORLD)
failed
MPIR_Reduce_impl(1029)..........:
MPIR_Reduce_intra(779)..........:
MPIR_Reduce_impl(1029)..........:
MPIR_Reduce_intra(835)..........:
MPIR_Reduce_binomial(144).......:
MPIDI_CH3U_Recvq_FDU_or_AEP(612): Communication error with rank 16
MPIR_Reduce_intra(799)..........:
MPIR_Reduce_impl(1029)..........:
MPIR_Reduce_intra(835)..........:
MPIR_Reduce_binomial(206).......: Failure during collective
srun: error: task 0: Exited with exit code 1

This error is experienced with many of my MPI programs. A
different application yields:

PMPI_Bcast(1525)......: MPI_Bcast(buf=0x7fff545be5fc, count=1, MPI_INT,
root=0, MPI_COMM_WORLD) failed
MPIR_Bcast_impl(1369).:
MPIR_Bcast_intra(1160):
MPIR_SMP_Bcast(1077)..: Failure during collective

Can anyone point me in the right direction?

Thanks,
~Mike C.

------------------------------

Message: 6
Date: Fri, 11 Jan 2013 16:19:23 -0600
From: Pavan Balaji <balaji@mcs.anl.gov>
To: discuss@mpich.org
Subject: Re: [mpich-discuss] Fatal error in PMPI_Reduce
Message-ID: <50F08FEB.1050901@mcs.anl.gov>
Content-Type: text/plain; charset=ISO-8859-1

Michael,

Did you try just using mpiexec?

  mpiexec -n 32 /usr/local/apps/cxxcpi

 -- Pavan
<br>
<br>
--
Pavan Balaji
http://www.mcs.anl.gov/~balaji
<br>
<br>
------------------------------

Message: 7
Date: Fri, 11 Jan 2013 16:20:00 -0600
From: Pavan Balaji <balaji@mcs.anl.gov>
To: discuss@mpich.org
Subject: Re: [mpich-discuss] Fatal error in PMPI_Reduce
Message-ID: <50F09010.5010003@mcs.anl.gov>
Content-Type: text/plain; charset=ISO-8859-1

FYI, the reason I suggested this is because mpiexec will automatically
detect and use slurm internally.

 -- Pavan
<br>
<br>
--
Pavan Balaji
http://www.mcs.anl.gov/~balaji
<br>
<br>
------------------------------

_______________________________________________
discuss mailing list
discuss@mpich.org
https://lists.mpich.org/mailman/listinfo/discuss

End of discuss Digest, Vol 3, Issue 9
*************************************


--
Sincerely,

Na Zhang, Ph.D. Candidate
Dept. of Applied Mathematics and Statistics
Stony Brook University
Phone: 631-838-3205