<div dir="ltr"><span style="font-family:arial,sans-serif;font-size:13px">Hi, Antonio</span><br><div class="gmail_extra"><br></div><div class="gmail_extra">Thanks a lot for your reply. I run my program on a 64-bit OS on each node. Do you know how I can overcome this OS problem? Should I add a compile flag such as mpicc -m64 ...?</div>
<div class="gmail_extra"><br></div><div class="gmail_extra" style>Thanks a lot!</div><div class="gmail_extra" style><br></div><div class="gmail_extra" style>Sufeng</div>
<div class="gmail_extra"><br><div class="gmail_quote">On Sat, Jun 15, 2013 at 10:03 AM, <span dir="ltr"><<a href="mailto:discuss-request@mpich.org" target="_blank">discuss-request@mpich.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Send discuss mailing list submissions to<br>
<a href="mailto:discuss@mpich.org" target="_blank">discuss@mpich.org</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
<a href="https://lists.mpich.org/mailman/listinfo/discuss" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
or, via email, send a message with subject or body 'help' to<br>
<a href="mailto:discuss-request@mpich.org" target="_blank">discuss-request@mpich.org</a><br>
<br>
You can reach the person managing the list at<br>
<a href="mailto:discuss-owner@mpich.org" target="_blank">discuss-owner@mpich.org</a><br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of discuss digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
   1. Re: MPI server setup issue (Antonio J. Peña)<br>
2. Re: Running an mpi program that needs to access /dev/mem<br>
(Jim Dinan)<br>
<br>
<br>
----------------------------------------------------------------------<br>
<br>
Message: 1<br>
Date: Fri, 14 Jun 2013 16:46:20 -0500<br>
From: Antonio J. Peña <<a href="mailto:apenya@mcs.anl.gov" target="_blank">apenya@mcs.anl.gov</a>><br>
To: <a href="mailto:discuss@mpich.org" target="_blank">discuss@mpich.org</a><br>
Subject: Re: [mpich-discuss] MPI server setup issue<br>
Message-ID: <3198378.OIJ6uL42Ef@localhost.localdomain><br>
Content-Type: text/plain; charset="iso-8859-1"<br>
<br>
<br>
Hi Sufeng,<br>
<br>
<br>
> On Friday, June 14, 2013 04:35:39 PM Sufeng Niu wrote:<br>
<br>
<br>
> Hello,<br>
><br>
<br>
<br>
> I am a beginner in MPI programming, and right now I am working on an<br>
MPI project. I have a few questions related to implementation issues:<br>
><br>
<br>
<br>
> 1. When I run a simple MPI hello world on multiple nodes (I already<br>
installed the mpich3 library on the master node, mounted the NFS share, shared the<br>
executable file and the MPI library, and set up passwordless ssh to the slave node), my<br>
program stopped with:<br>
> bash: /mnt/mpi/mpich-install/bin/hydra_pmi_proxy: /lib/ld-linux.so.2: bad<br>
ELF interpreter: No such file or directory.<br>
> I have not been able to get rid of it, even though I reset everything (I<br>
already added PATH=/mnt/mpi/mpich-install/bin:$PATH in .bash_profile). Do<br>
you have any clues about this problem?<br>
><br>
<br>
<br>
This issue may be related to a mismatch between 32- and 64-bit libraries. Are<br>
you running 64- or 32-bit operating systems on all of your nodes<br>
consistently?<br>
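A quick way to test for such a mismatch on each node (a sketch; /bin/sh is used as a stand-in binary that exists everywhere, and the hydra path comes from the error message in the question):

```shell
# Print the ELF class of a binary. "ELF 32-bit" means the 32-bit loader
# /lib/ld-linux.so.2 (plus 32-bit libraries) must exist on that node;
# "ELF 64-bit" binaries use /lib64/ld-linux-x86-64.so.2 instead.
file -b /bin/sh

# On the failing node, the interesting binary would be:
#   file -b /mnt/mpi/mpich-install/bin/hydra_pmi_proxy
```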
<br>
> 2. For multiple servers, each of which has a 10G Ethernet card (for<br>
example, one network card is eth5 with address 10.0.5.55): if I want to<br>
launch MPI communication through the 10G network card, should I set the<br>
hostfile as 10.0.5.55:$(PROCESS_NUM), or use -iface eth5?<br>
<br>
<br>
You can address those nodes by either IP or DNS name in the hostfile,<br>
depending on how your system is configured. Using IP addresses is<br>
completely OK.<br>
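A sketch of the two options from the question (the second node's IP and the process counts below are made-up placeholders; -iface is Hydra's flag for pinning communication to a network interface):

```shell
# Hostfile: one node per line; ":N" sets how many processes run there.
cat > hostfile <<'EOF'
10.0.5.55:4
10.0.5.56:4
EOF

# Launching over the 10G NIC by interface name needs a real cluster,
# so it is shown only as a comment:
#   mpiexec -f hostfile -iface eth5 -n 8 ./hello
```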
<br>
<br>
Best,<br>
Antonio<br>
<br>
><br>
<br>
<br>
> Thanks a lot!<br>
><br>
<br>
<br>
> -- Best Regards,<br>
> Sufeng Niu<br>
> ECASP lab, ECE department, Illinois Institute of Technology<br>
> Tel: <a href="tel:312-731-7219" value="+13127317219" target="_blank">312-731-7219</a><br>
><br>
<br>
<br>
<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://lists.mpich.org/pipermail/discuss/attachments/20130614/67207b83/attachment-0001.html" target="_blank">http://lists.mpich.org/pipermail/discuss/attachments/20130614/67207b83/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
Message: 2<br>
Date: Sat, 15 Jun 2013 10:03:02 -0500<br>
From: Jim Dinan <<a href="mailto:james.dinan@gmail.com" target="_blank">james.dinan@gmail.com</a>><br>
To: <a href="mailto:discuss@mpich.org" target="_blank">discuss@mpich.org</a><br>
Subject: Re: [mpich-discuss] Running an mpi program that needs to<br>
access /dev/mem<br>
Message-ID:<br>
<CAOoEU4E87SNHZS2KmbtywMLF=T0q4Kq2a7kDJHV2q54WT34nBg@mail.gmail.com><br>
Content-Type: text/plain; charset="iso-8859-1"<br>
<br>
Eibhlin,<br>
<br>
Did you make those permissions changes on every node where your program<br>
runs? What happens if you run "mpiexec touch /dev/mem"?<br>
<br>
~Jim.<br>
<br>
<br>
On Fri, Jun 14, 2013 at 4:43 PM, Lee, Eibhlin<br>
<<a href="mailto:eibhlin.lee10@imperial.ac.uk" target="_blank">eibhlin.lee10@imperial.ac.uk</a>>wrote:<br>
<br>
> Pavan,<br>
> sorry, when I do run "mpiexec id" the output is<br>
> uid=1000(pi) gid=1000(pi)<br>
> groups=1000(pi),4(adm),20(dialout),24(cdrom),27(sudo),29(audio),44(video),46(plugdev),60(games),100(users),105(netdev),999(input)<br>
><br>
> regardless of whether I'm in root or my usual user. root@raspi or<br>
> pi@raspi. Is this output what you would expect?<br>
><br>
> Jim,<br>
> I have tried changing the permissions of /dev/mem with<br>
> chmod 755 /dev/mem, so that the output of ls -l /dev/mem is<br>
> crwxr-xr-x 1 root kmem 1, 1 Jan 1 1970 /dev/mem<br>
> but I still can't open /dev/mem inside my program. I also tried with<br>
> mode 777.<br>
><br>
> I tried adding my user to the kmem group by doing<br>
> usermod -a -G kmem pi<br>
> but this doesn't fix the problem.<br>
><br>
><br>
> Have I gotten totally confused and pi isn't my user?<br>
><br>
> Thank you in advance,<br>
> Eibhlin<br>
> ------------------------------<br>
> *From:* <a href="mailto:discuss-bounces@mpich.org" target="_blank">discuss-bounces@mpich.org</a> [<a href="mailto:discuss-bounces@mpich.org" target="_blank">discuss-bounces@mpich.org</a>] on behalf<br>
> of Jim Dinan [<a href="mailto:james.dinan@gmail.com" target="_blank">james.dinan@gmail.com</a>]<br>
> *Sent:* 14 June 2013 21:31<br>
><br>
> *To:* <a href="mailto:discuss@mpich.org" target="_blank">discuss@mpich.org</a><br>
> *Subject:* Re: [mpich-discuss] Running an mpi program that needs to<br>
> access /dev/mem<br>
><br>
> I don't know if this has been suggested, but you could also add your<br>
> user to the kmem group and chmod /dev/mem so that you have the access you<br>
> need.<br>
><br>
> ~Jim.<br>
><br>
><br>
> On Fri, Jun 14, 2013 at 1:24 PM, Pavan Balaji <<a href="mailto:balaji@mcs.anl.gov" target="_blank">balaji@mcs.anl.gov</a>> wrote:<br>
><br>
>><br>
>> You can run mpich as root. There's no restriction on that. You still<br>
>> haven't tried out my suggestion of running "id" to check what user ID you<br>
>> are running your processes as. My guess is that you are not setting your<br>
>> user ID correctly.<br>
>><br>
>> -- Pavan<br>
>><br>
>><br>
>> On 06/14/2013 06:27 AM, Lee, Eibhlin wrote:<br>
>><br>
>>> I found that the reason we want to access /dev/mem is to setup memory<br>
>>> regions to access the peripherals. (We are trying to read the output of an<br>
>>> ADC). At this point it becomes more a linux/raspberry-pi specific problem<br>
>>> than an MPICH problem. Although the fact that you can't run a program that<br>
>>> needs access to memory mapping (even as the root user) seems like something that<br>
>>> MPICH could improve on for future versions. I know I am using smpd instead<br>
>>> of hydra so this problem may already be solved. But if someone could<br>
>>> confirm that, it would be really helpful.<br>
>>> ________________________________________<br>
>>> From: <a href="mailto:discuss-bounces@mpich.org" target="_blank">discuss-bounces@mpich.org</a> [<a href="mailto:discuss-bounces@mpich.org" target="_blank">discuss-bounces@mpich.org</a>] on behalf<br>
>>> of Lee, Eibhlin [<a href="mailto:eibhlin.lee10@imperial.ac.uk" target="_blank">eibhlin.lee10@imperial.ac.uk</a>]<br>
>>> Sent: 14 June 2013 11:20<br>
>>> To: <a href="mailto:discuss@mpich.org" target="_blank">discuss@mpich.org</a><br>
>>> Subject: Re: [mpich-discuss] Running an mpi program that needs<br>
>>> to access /dev/mem<br>
>>><br>
>>> Gus,<br>
>>> I tried running cpi, as is included in the installation of MPI, on two<br>
>>> machines with two processes. The output message confirmed that it had<br>
>>> started only 1 process instead of 2.<br>
>>> Process 0 of 1 is on raspi<br>
>>> pi is approximately...<br>
>>><br>
>>> Then it just hung. I think this is because the other machine didn't know<br>
>>> where to output the data?<br>
>>><br>
>>> When I tried running two processes on the one machine using the wrapper<br>
>>> you suggested the output was the same but doubled. It didn't hang. This<br>
>>> confirms that every process was started with rank 0.<br>
>>><br>
>>> I'm not entirely sure why /dev/mem is needed. I'm working in a group and<br>
>>> another member set up io and gpio, and it seemed to need access to<br>
>>> /dev/mem. I am going to do a strace, as suggested by Pavan Balaji, to see<br>
>>> where it is used and see if I can somehow work around it.<br>
>>><br>
>>> Thank you for your help.<br>
>>> Eibhlin<br>
>>> ________________________________________<br>
>>> From: <a href="mailto:discuss-bounces@mpich.org" target="_blank">discuss-bounces@mpich.org</a> [<a href="mailto:discuss-bounces@mpich.org" target="_blank">discuss-bounces@mpich.org</a>] on behalf<br>
>>> of Gus Correa [<a href="mailto:gus@ldeo.columbia.edu" target="_blank">gus@ldeo.columbia.edu</a>]<br>
>>> Sent: 13 June 2013 21:11<br>
>>> To: Discuss Mpich<br>
>>> Subject: Re: [mpich-discuss] Running an mpi program that needs to<br>
>>> access /dev/mem<br>
>>><br>
>>> Hi Eibhlin<br>
>>><br>
>>> On 06/13/2013 12:59 PM, Lee, Eibhlin wrote:<br>
>>><br>
>>>> Gus,<br>
>>>> I believe your first assumption is correct. Unfortunately it just<br>
>>>> seemed to hang. I think this might be because each one is being made to<br>
>>>> have the same rank...<br>
>>>><br>
>>><br>
>>> Darn! I was afraid that it might give only rank 0 to all MPI processes.<br>
>>> So, with the script wrapper the process being launched by mpiexec may<br>
>>> indeed be sudo,<br>
>>> not the actual mpi executable (main) :(<br>
>>> Then it may actually launch a bunch of separate rank 0 replicas of your<br>
>>> program,<br>
>>> instead of assigning them different ranks.<br>
>>> However, without any output or error message, it is hard to tell.<br>
>>><br>
>>> No output at all?<br>
>>> No error message, just hangs?<br>
>>> Have you tried a verbose flag (-v) to mpiexec?<br>
>>> (Not sure if it exists in MPICH mpiexec, you'd need to check.)<br>
>>><br>
>>> Would you care to try it with another mpi program,<br>
>>> one that doesn't deal with /dev/mem (a risky business),<br>
>>> say cpi.c (in the examples directory), or an mpi version of Hello, world,<br>
>>> just to see if the mpiexec+sudo_script_wrapper works as expected or<br>
>>> if everybody gets rank 0?<br>
>>><br>
>>><br>
>>>> It may already be obvious but this is the first time I am using Linux.<br>
>>>> I had tried sudo $(which mpiexec ....) and sudo $(which mpiexec) ... both<br>
>>>> without success.<br>
>>>><br>
>>><br>
>>> "which mpiexec" will return the path to mpiexec, but won't execute it.<br>
>>><br>
>>> You could try this (with backquotes):<br>
>>><br>
>>> `which mpiexec` -n 2 ~/main<br>
>>><br>
>>> On a side note, make sure the mpiexec you're using matches the<br>
>>> mpicc/mpif90/MPI library from the MPICH that<br>
>>> you used to compile the program.<br>
>>> Often times computers have several flavors of MPI installed, and mixing<br>
>>> them just doesn't work.<br>
>>><br>
>>> Is putting the full path to it similar to/is a symlink? (This still<br>
>>>> doesn't make main have super user privileges though.)<br>
>>>><br>
>>><br>
>>> No, nothing to do with sudo privileges.<br>
>>><br>
>>> This suggestion was just to avoid messing up your /usr/bin,<br>
>>> which is a directory that despite the somewhat misleading name (/usr,<br>
>>> for historical reasons I think),<br>
>>> is supposed to hold system (Linux) programs (that users can use), but<br>
>>> not user-installed programs.<br>
>>> Normally, things that are installed in /usr get there via some Linux<br>
>>> package manager<br>
>>> (yum, rpm, apt-get, etc.), to keep consistency with libraries, etc.<br>
>>><br>
>>> I believe MPICH would install by default in /usr/local/ (and put mpiexec<br>
>>> in /usr/local/bin),<br>
>>> which is kind of a default location for non-system applications.<br>
>>><br>
>>> The full path suggestion would be something like:<br>
>>> /path/to/where/you/installed/mpiexec -n 2 ~/main<br>
>>><br>
>>> However, this won't solve the other problem w.r.t. sudo and /dev/mem.<br>
>>><br>
>>> You must know what you are doing, but it made me wonder,<br>
>>> even if your program were sequential, why would you want to mess with<br>
>>> /dev/mem directly?<br>
>>> Just curious about it.<br>
>>><br>
>>> Gus Correa<br>
>>><br>
>>><br>
>>><br>
>>> Eibhlin<br>
>>>> ________________________________________<br>
>>>> From: <a href="mailto:discuss-bounces@mpich.org" target="_blank">discuss-bounces@mpich.org</a> [<a href="mailto:discuss-bounces@mpich.org" target="_blank">discuss-bounces@mpich.org</a>] on behalf<br>
>>>> of Gus Correa [<a href="mailto:gus@ldeo.columbia.edu" target="_blank">gus@ldeo.columbia.edu</a>]<br>
>>>> Sent: 13 June 2013 15:37<br>
>>>> To: Discuss Mpich<br>
>>>> Subject: Re: [mpich-discuss] Running an mpi program that needs to<br>
>>>> access /dev/mem<br>
>>>><br>
>>>> Hi Lee<br>
>>>><br>
>>>> How about replacing "~/main" in the mpiexec command line<br>
>>>> with a one-liner script?<br>
>>>> Say, "sudo_main.sh", something like this:<br>
>>>><br>
>>>> #! /bin/bash<br>
>>>> sudo ~/main<br>
>>>><br>
>>>> After all, it is "main" that accesses /dev/mem,<br>
>>>> and needs "sudo" permissions, not mpiexec, right?<br>
>>>> [Or do the mpiexec-launched processes inherit<br>
>>>> the "sudo" stuff from mpiexec?]<br>
>>>><br>
>>>> Not related, but, instead of putting mpiexec in /usr/bin,<br>
>>>> can't you just use the full path to it?<br>
>>>><br>
>>>> I hope this helps,<br>
>>>> Gus Correa<br>
>>>><br>
>>>> On 06/13/2013 10:09 AM, Lee, Eibhlin wrote:<br>
>>>><br>
>>>>> Pavan,<br>
>>>>> I had a lot of trouble getting hydra to work without having to enter a<br>
>>>>> password/passphrase. I saw the option to pass a phrase in the mpich<br>
>>>>> installers guide. I eventually found that for that command you needed to<br>
>>>>> use the smpd process manager. That's the only reason I chose smpd over<br>
>>>>> hydra.<br>
>>>>> As to your other suggestion. I ran ./main and the same error (Can't<br>
>>>>> open /dev/mem...) appeared. sudo ./main works but of course without<br>
>>>>> multiple processes.<br>
>>>>> Eibhlin<br>
>>>>> ________________________________________<br>
>>>>> From: <a href="mailto:discuss-bounces@mpich.org" target="_blank">discuss-bounces@mpich.org</a> [<a href="mailto:discuss-bounces@mpich.org" target="_blank">discuss-bounces@mpich.org</a>] on behalf<br>
>>>>> of Pavan Balaji [<a href="mailto:balaji@mcs.anl.gov" target="_blank">balaji@mcs.anl.gov</a>]<br>
>>>>> Sent: 13 June 2013 14:34<br>
>>>>> To: <a href="mailto:discuss@mpich.org" target="_blank">discuss@mpich.org</a><br>
>>>>> Subject: Re: [mpich-discuss] Running an mpi program that needs to<br>
>>>>> access /dev/mem<br>
>>>>><br>
>>>>> I just saw your older email. Why are you using smpd instead of the<br>
>>>>> default process manager (hydra)?<br>
>>>>><br>
>>>>> -- Pavan<br>
>>>>><br>
>>>>> On 06/13/2013 08:05 AM, Pavan Balaji wrote:<br>
>>>>><br>
>>>>>> What's "-phrase"? That's not a recognized option. I'm not sure where<br>
>>>>>> the /dev/mem check is coming from. Try running ~/main without mpiexec<br>
>>>>>> first.<br>
>>>>>><br>
>>>>>> -- Pavan<br>
>>>>>><br>
>>>>>> On 06/13/2013 06:56 AM, Lee, Eibhlin wrote:<br>
>>>>>><br>
>>>>>>> Hello all,<br>
>>>>>>><br>
>>>>>>> I am trying to use two raspberry-pi to sample and then process some<br>
>>>>>>> data. The first process samples while the second processes and vice<br>
>>>>>>> versa. To do this I use gpio and also mpich-3.0.4 with the process<br>
>>>>>>> manager smpd. I have successfully run cpi on both machines (from the<br>
>>>>>>> master machine). I have also managed to run a similar program but<br>
>>>>>>> without the MPI, this involved compiling with gcc and when running<br>
>>>>>>> putting sudo in front of the binary file.<br>
>>>>>>><br>
>>>>>>> When I combine these two processes I get various error messages.<br>
>>>>>>> For input:<br>
>>>>>>> mpiexec -phrase cat -machinefile machinefile -n 2 ~/main<br>
>>>>>>> the error is:<br>
>>>>>>> Can't open /dev/mem<br>
>>>>>>> Did you forget to use 'sudo .. ?'<br>
>>>>>>><br>
>>>>>>> For input:<br>
>>>>>>> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main<br>
>>>>>>> the error is:<br>
>>>>>>> sudo: mpiexec: Command not found<br>
>>>>>>><br>
>>>>>>> I therefore put mpiexec into /usr/bin<br>
>>>>>>><br>
>>>>>>> now for input:<br>
>>>>>>> sudo mpiexec -phrase cat -machinefile machinefile -n 2 ~/main<br>
>>>>>>> the error is:<br>
>>>>>>> Can't open /dev/mem<br>
>>>>>>> Did you forget to use 'sudo .. ?'<br>
>>>>>>><br>
>>>>>>> Does anyone know how I can work around this?<br>
>>>>>>> Thanks,<br>
>>>>>>> Eibhlin<br>
>>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>>>> _______________________________________________<br>
>>>>>>> discuss mailing list     <a href="mailto:discuss@mpich.org" target="_blank">discuss@mpich.org</a><br>
>>>>>>> To manage subscription options or unsubscribe:<br>
>>>>>>> <a href="https://lists.mpich.org/mailman/listinfo/discuss" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
>>>>>>><br>
>>>>>>> --<br>
>>>>> Pavan Balaji<br>
>>>>> <a href="http://www.mcs.anl.gov/~balaji" target="_blank">http://www.mcs.anl.gov/~balaji</a><br>
>>>>><br>
>>>><br>
>>><br>
>>><br>
>>><br>
>> --<br>
>> Pavan Balaji<br>
>> <a href="http://www.mcs.anl.gov/~balaji" target="_blank">http://www.mcs.anl.gov/~balaji</a><br>
>><br>
><br>
><br>
> _______________________________________________<br>
> discuss mailing list <a href="mailto:discuss@mpich.org" target="_blank">discuss@mpich.org</a><br>
> To manage subscription options or unsubscribe:<br>
> <a href="https://lists.mpich.org/mailman/listinfo/discuss" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
><br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://lists.mpich.org/pipermail/discuss/attachments/20130615/d29dd202/attachment.html" target="_blank">http://lists.mpich.org/pipermail/discuss/attachments/20130615/d29dd202/attachment.html</a>><br>
<br>
------------------------------<br>
<br>
_______________________________________________<br>
discuss mailing list<br>
<a href="mailto:discuss@mpich.org" target="_blank">discuss@mpich.org</a><br>
<a href="https://lists.mpich.org/mailman/listinfo/discuss" target="_blank">https://lists.mpich.org/mailman/listinfo/discuss</a><br>
<br>
End of discuss Digest, Vol 8, Issue 29<br>
**************************************<br>
</blockquote></div><br><br clear="all"><div><br></div>-- <br>Best Regards,<div>Sufeng Niu</div><div>ECASP lab, ECE department, Illinois Institute of Technology</div><div>Tel: <a href="tel:312-731-7219" value="+13127317219" target="_blank">312-731-7219</a></div>
</div></div>