<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Gus,<br>
thanks for your suggestions and links to more info; I was
contemplating using any of the three unused ethernet interfaces at
the back of each of the three computers, and using a cluster-only
subnet for the inversion. Your comments about a separate hostname and
so forth are great, and I can re-activate iptables on those
interfaces connected to the outside world. <br>
<br>
I have just read up on 'screen', an approach that suits me like a
glove - all my parallel processing is via Perl scripts/process
control and command-line commands, and I have no use for X windows or
other window managers. Being able to remotely detach (-d),
re-attach (-R) and even duplicate (-x) are real time savers for when
I log in remotely to check the progress.<br>
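For anyone following the thread, a typical session might look like the sketch below (the session name and driver script are made-up examples, not the actual inversion setup):

```shell
# start a named session and launch the long-running job inside it
screen -S inversion
perl run_inversion.pl        # hypothetical driver script

# later, from a remote login: detach the session wherever it is
# attached, then re-attach here (creates it if it does not exist)
screen -d -R inversion

# attach a second, shared view of the same session (e.g. from a
# colleague's terminal) without detaching the first
screen -x inversion
```

Inside a session, Ctrl-a d detaches without killing the job, which is what makes this useful over flaky remote links.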
<br>
Thanks,<br>
Ron<br>
<div class="moz-cite-prefix">On 4/03/2014 08:54, Gus Correa wrote:<br>
</div>
<blockquote cite="mid:53150829.60305@ldeo.columbia.edu" type="cite">
<br>
On 03/03/2014 04:36 PM, Ron Palmer wrote:
<br>
<blockquote type="cite">Thanks Reuti for your comments. I will
peruse that FAQ detail.
<br>
<br>
I just thought of the fact that each of these rack computers
has 4
<br>
ethernet sockets, eth0 - eth3... I could connect the cluster on
a
<br>
separate ethernet socket via an extra switch not connected to
the
<br>
internet or any other computers, and accept all communication
among
<br>
them, and keep iptables up on the ethx connected to the outside
world. I
<br>
guess I would have to set up routing tables or something. Ah,
more
<br>
reading :-)
<br>
<br>
Thanks for your help.
<br>
Ron
<br>
<br>
</blockquote>
Hi Ron
<br>
<br>
If those extra interfaces are not in use,
<br>
and if you have a spare switch,
<br>
you can set up a separate private subnet exclusively for MPI.
<br>
You need to configure the interfaces consistently (IP, subnet
mask,
<br>
perhaps a gateway). Configuring them statically is easy:
<br>
<br>
<a class="moz-txt-link-freetext" href="https://access.redhat.com/site/documentation//en-US/Red_Hat_Enterprise_Linux/6/html-single/Deployment_Guide/index.html#s2-networkscripts-interfaces-eth0">https://access.redhat.com/site/documentation//en-US/Red_Hat_Enterprise_Linux/6/html-single/Deployment_Guide/index.html#s2-networkscripts-interfaces-eth0</a>
<br>
<br>
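On RHEL/CentOS-style systems (per the guide above), a static configuration for one of the spare interfaces might look like this sketch; the device name and addresses are examples only, not values from this thread:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth2  (example values)
DEVICE=eth2
BOOTPROTO=none          # static configuration, no DHCP
ONBOOT=yes
IPADDR=192.168.100.1    # pick a subnet that does not clash with your LAN
NETMASK=255.255.255.0
# no GATEWAY line: this subnet is cluster-only, not routed anywhere
```

Then `ifup eth2` (or a network service restart) brings the interface up with those settings.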
Use a subnet that doesn't intersect the existing/original IP
range.
<br>
<br>
<a class="moz-txt-link-freetext" href="http://en.wikipedia.org/wiki/Private_network">http://en.wikipedia.org/wiki/Private_network</a>
<br>
<br>
You could also create host names associated with those IPs (say
<br>
node01, node02, node03), resolve them via /etc/hosts on each
computer,
<br>
and set up passwordless ssh across these newly named "hosts".
<br>
This may be simpler/safer than messing with iptables.
<br>
<br>
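Concretely, the /etc/hosts additions on every node could look like the following (names and addresses are illustrative, matching no particular machine in this thread):

```shell
# appended to /etc/hosts on each of the three computers
192.168.100.1   node01
192.168.100.2   node02
192.168.100.3   node03
```

With identical entries everywhere, `ssh node02` and an MPI hostfile listing node01..node03 both resolve to the cluster-only subnet.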
[Actually, the IP addresses you showed, 192.168.X.Y, sound like a
private
<br>
subnet already, not the Internet, but that may already be the subnet
for your
<br>
organization/school/department. So, you may set up a
different
<br>
one on these three computers for MPI and very-local access.]
<br>
<br>
Open MPI allows you to choose the interface that it will use,
<br>
so you can direct it to your very-local subnet:
<br>
<br>
<a class="moz-txt-link-freetext" href="http://www.open-mpi.org/faq/?category=tcp#tcp-selection">http://www.open-mpi.org/faq/?category=tcp#tcp-selection</a>
<br>
<br>
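As the FAQ above describes, the selection is done with the `btl_tcp_if_include` MCA parameter; a sketch, with the subnet, interface name, process count, and binary all being example values:

```shell
# restrict Open MPI's TCP traffic to the cluster-only subnet (CIDR form)
mpirun --mca btl_tcp_if_include 192.168.100.0/24 -np 12 ./inversion

# or name the interface instead of the subnet
mpirun --mca btl_tcp_if_include eth2 -np 12 ./inversion
```

The complementary `btl_tcp_if_exclude` parameter can be used the other way around, to shut out the public-facing interfaces.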
I hope this helps,
<br>
Gus
<br>
<br>
<br>
<blockquote type="cite">On 4/03/2014 00:13, Reuti wrote:
<br>
<blockquote type="cite">Am 03.03.2014 um 01:34 schrieb Ron
Palmer:
<br>
<br>
<blockquote type="cite">Gus,
<br>
I have just replied with the details of the success, but I
will
<br>
clarify your questions here, if it helps next time.
<br>
<br>
Re the actual application to be run, 'inversion', I have
only
<br>
received binaries. I used to run them on gainsborough
(without mpi)
<br>
and that worked fine.
<br>
<br>
Home directories are not nfs shared, they are individual and
<br>
separate, only the name is repeated.
<br>
<br>
I had tried password-free ssh in all directions and
permutations.
<br>
<br>
I had iptables down on sargeant and up on the other two.
<br>
<br>
Yes, I installed the gcc-gfortran.x86_64 AFTER I took those
<br>
screenshots, and the post-install output was identical to the
top one
top one
<br>
(sargeant).
<br>
<br>
I am unsure about cpi and fortran...
<br>
<br>
Stuff remaining to get sorted out:
<br>
1. Get that hyperthreading set up - what are your
suggestions? Disable
<br>
and let mpi manage the cores?
<br>
</blockquote>
It depends on your applications whether you can make use of
it. Test
<br>
with HT switched off, then switched on, and increase the number
of
<br>
processes until you see that some processes are no longer running
at 100%
<br>
and you judge the slowdown no longer tolerable.
<br>
<br>
It's not uncommon to see an improvement of up to 150% with HT
turned on,
<br>
but not 200% (depending on the workload).
<br>
<br>
<br>
<blockquote type="cite">2. run mpiexec with iptables up, need
to figure out what traffic to
<br>
allow.
<br>
</blockquote>
<a class="moz-txt-link-freetext" href="https://wiki.mpich.org/mpich/index.php/Frequently_Asked_Questions#Q:_How_do_I_control_which_ports_MPICH_uses.3F">https://wiki.mpich.org/mpich/index.php/Frequently_Asked_Questions#Q:_How_do_I_control_which_ports_MPICH_uses.3F</a>
<br>
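In short, per the FAQ above, MPICH can be confined to a fixed port range, which then only needs a matching firewall rule; the range, subnet, and rule style below are examples, not a tested configuration:

```shell
# confine MPICH's dynamic TCP ports to a known range
export MPICH_PORT_RANGE=50000:50100

# allow that range, plus ssh, from the cluster subnet only
iptables -A INPUT -s 192.168.100.0/24 -p tcp --dport 50000:50100 -j ACCEPT
iptables -A INPUT -s 192.168.100.0/24 -p tcp --dport 22 -j ACCEPT
```

Everything else from the outside world can then stay blocked as before.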
<br>
<br>
-- Reuti
<br>
<br>
<br>
<blockquote type="cite">Great many thanks to all, and Gus,
Reuti and Rajeev in particular.
<br>
<br>
Cheers,
<br>
Ron
<br>
<br>
On 3/03/2014 10:09, Gustavo Correa wrote:
<br>
<blockquote type="cite">Hi Ron
<br>
<br>
</blockquote>
</blockquote>
</blockquote>
<br>
<br>
<br>
_______________________________________________
<br>
discuss mailing list <a class="moz-txt-link-abbreviated" href="mailto:discuss@mpich.org">discuss@mpich.org</a>
<br>
To manage subscription options or unsubscribe:
<br>
<a class="moz-txt-link-freetext" href="https://lists.mpich.org/mailman/listinfo/discuss">https://lists.mpich.org/mailman/listinfo/discuss</a>
<br>
</blockquote>
<br>
_______________________________________________
<br>
discuss mailing list <a class="moz-txt-link-abbreviated" href="mailto:discuss@mpich.org">discuss@mpich.org</a>
<br>
To manage subscription options or unsubscribe:
<br>
<a class="moz-txt-link-freetext" href="https://lists.mpich.org/mailman/listinfo/discuss">https://lists.mpich.org/mailman/listinfo/discuss</a>
<br>
</blockquote>
<br>
<div class="moz-signature">-- <br>
<p class="western" style="margin-bottom: 0cm; line-height: 100%"><font
color="#0000a2"><font face="Times New Roman"><font size="3"><span
lang="en"><b>Ron
Palmer</b></span></font></font></font><font
color="#000000"> </font><font color="#000000"><font size="2"><span
lang="en">MSc
MBA</span></font></font><font color="#000000"><span
lang="en">. </span></font>
</p>
<p class="western" style="margin-bottom: 0cm; line-height: 100%"
lang="en">
<font color="#000000"><font face="Times New Roman"><font
size="3">Principal
Geophysicist</font></font></font></p>
<p class="western" style="margin-bottom: 0cm; line-height: 100%"><a
href="mailto:ron.palmer@pgcgroup.com.au"><font color="#0000a2"><font
face="Times New Roman"><font size="3"><span lang="en">ron.palmer@pgcgroup.com.au</span></font></font></font></a></p>
<p class="western" style="margin-bottom: 0cm; line-height: 100%"
lang="en">
<font color="#000000"><font face="Times New Roman"><font
size="3">0413
579 099</font></font></font></p>
<p class="western" style="line-height: 100%" lang="en"><font
color="#000000"><font face="Times New Roman"><font size="3">07
3103 4963</font></font></font></p>
<p class="western" style="margin-bottom: 0cm"><br>
</p>
</div>
</body>
</html>