[mpich-discuss] Problems with running make
Ron Palmer
ron.palmer at pgcgroup.com.au
Mon Mar 3 17:08:33 CST 2014
Gus,
thanks for your suggestions and links to more info. I was contemplating
using one of the three unused ethernet interfaces at the back of each of
the three computers and setting up a cluster-only subnet for the inversion.
Your comments about separate hostnames and so forth are great, and I can
re-activate iptables on the interfaces connected to the outside world.
I have just read up on 'screen', an approach that suits me like a glove:
all my parallel processing is driven by Perl scripts/process control and
command-line commands, and I have no use for X windows or other window
managers. Being able to remotely detach (-d), re-attach (-R) and even
duplicate (-x) a session is a real time saver for when I log in remotely
to check on progress.
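For anyone else following along, the basic usage seems to be something
like this (the session name is just a placeholder):

    screen -S inversion     # start a named session and launch the run in it
    screen -d inversion     # detach it from another shell (or Ctrl-a d inside)
    screen -R inversion     # re-attach later, creating the session if needed
    screen -x inversion     # attach a second, shared view of the same session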
Thanks,
Ron
On 4/03/2014 08:54, Gus Correa wrote:
>
> On 03/03/2014 04:36 PM, Ron Palmer wrote:
>> Thanks Reuti for your comments. I will peruse that FAQ entry.
>>
>> I just remembered that each of these rack computers has 4 ethernet
>> sockets, eth0 - eth3... I could connect the cluster on a separate
>> ethernet socket via an extra switch not connected to the internet or
>> any other computers, accept all communication among them, and keep
>> iptables up on the ethX connected to the outside world. I guess I
>> would have to set up routing tables or something. Ah, more reading :-)
>>
>> Thanks for your help.
>> Ron
>>
> Hi Ron
>
> If those extra interfaces are not in use,
> and if you have a spare switch,
> you can set up a separate private subnet exclusively for MPI.
> You need to configure the interfaces consistently (IP, subnet mask,
> perhaps a gateway). Configuring them statically is easy:
>
> https://access.redhat.com/site/documentation//en-US/Red_Hat_Enterprise_Linux/6/html-single/Deployment_Guide/index.html#s2-networkscripts-interfaces-eth0
>
>
> Use a subnet that doesn't intersect the existing/original IP range.
>
> http://en.wikipedia.org/wiki/Private_network
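>
> For example, on RHEL/CentOS that boils down to a small ifcfg file per
> node (just a sketch; I am assuming the spare NIC is eth1 and picking
> 10.10.10.0/24 as the private range -- use whatever doesn't clash with
> your existing 192.168.X.Y network):
>
>    # /etc/sysconfig/network-scripts/ifcfg-eth1  (first node)
>    DEVICE=eth1
>    ONBOOT=yes
>    BOOTPROTO=none
>    IPADDR=10.10.10.1
>    NETMASK=255.255.255.0
>
> with 10.10.10.2 and 10.10.10.3 on the other two nodes, followed by
> "service network restart" (or ifup eth1). No gateway is needed, since
> that switch won't route anywhere else.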
>
> You could also create host names associated with those IPs (say
> node01, node02, node03), resolve them via /etc/hosts on each computer,
> and set up passwordless ssh across these newly named "hosts".
> This may be simpler/safer than messing with iptables.
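>
> Concretely, something like this on each of the three machines (just a
> sketch; the node names and 10.10.10.x addresses are the same
> placeholders as above):
>
>    # /etc/hosts (identical on all three nodes)
>    10.10.10.1   node01
>    10.10.10.2   node02
>    10.10.10.3   node03
>
> and then, for the user that runs the jobs, on each node:
>
>    ssh-keygen -t rsa        # accept the defaults, empty passphrase
>    ssh-copy-id node01
>    ssh-copy-id node02
>    ssh-copy-id node03
>
> (Copying the key to every node is needed here because, as you said,
> the home directories are not NFS-shared.)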
>
> [Actually, the IP addresses you showed, 192.168.X.Y, sound like a private
> subnet already, not the Internet, but that may already be the subnet for
> your organization/school/department. So you may set up a different one on
> these three computers for MPI and very-local access.]
>
> OpenMPI allows you to choose the interface that it will use,
> so you can direct it to your very-local subnet:
>
> http://www.open-mpi.org/faq/?category=tcp#tcp-selection
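>
> For instance (the interface name, host file, process count and binary
> name below are all placeholders):
>
>    mpirun --mca btl_tcp_if_include eth1 -np 24 -hostfile my_hosts ./inversion
>
> If you stay with MPICH, Hydra's mpiexec has a similar "-iface eth1"
> option, if I remember the flag correctly.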
>
> I hope this helps,
> Gus
>
>
>> On 4/03/2014 00:13, Reuti wrote:
>>> Am 03.03.2014 um 01:34 schrieb Ron Palmer:
>>>
>>>> Gus,
>>>> I have just replied with the details of the success, but I will
>>>> answer your questions here, in case it helps next time.
>>>>
>>>> Re the actual application to be run, 'inversion': I have only
>>>> received binaries. I used to run them on gainsborough (without MPI)
>>>> and that worked fine.
>>>>
>>>> Home directories are not NFS-shared; they are individual and
>>>> separate, only the name is repeated.
>>>>
>>>> I had tried password-free ssh in all directions and permutations.
>>>>
>>>> I had iptables down on sargeant and up on the other two.
>>>>
>>>> Yes, I installed gcc-gfortran.x86_64 AFTER I took those
>>>> screenshots, and the post-install output was identical to the top one
>>>> (sargeant).
>>>>
>>>> I am unsure about cpi and fortran...
>>>>
>>>> Stuff remaining to get sorted out:
>>>> 1. Get that hyperthreading set up - what are your suggestions? Disable
>>>> it and let MPI manage the cores?
>>> It depends on your applications whether you can make use of it. Test
>>> with HT switched off, then switched on, and increase the number of
>>> processes until you see that some processes are no longer running at
>>> 100% and you judge that the slowdown is no longer tolerable.
>>>
>>> It's not uncommon to see an improvement of up to 150% with HT turned
>>> on, but not 200% (depending on the workload).
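>>>
>>> A crude timing loop is usually enough to find the sweet spot (the
>>> process counts and binary name below are placeholders) -- run it once
>>> with HT disabled in the BIOS and once with it enabled:
>>>
>>>    for n in 6 12 18 24; do
>>>        echo "== $n processes =="
>>>        time mpiexec -n $n ./inversion
>>>    done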
>>>
>>>
>>>> 2. Run mpiexec with iptables up; I need to figure out what traffic to
>>>> allow.
>>> https://wiki.mpich.org/mpich/index.php/Frequently_Asked_Questions#Q:_How_do_I_control_which_ports_MPICH_uses.3F
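>>>
>>> In short: pin MPICH to a fixed port range with the environment variable
>>> described there (MPICH_PORT_RANGE, or MPIR_CVAR_CH3_PORT_RANGE in newer
>>> releases -- check the FAQ for the exact name for your version), then
>>> open just that range plus ssh for the cluster subnet only. A sketch
>>> (the port range and the 10.10.10.0/24 subnet are placeholders):
>>>
>>>    export MPICH_PORT_RANGE=50000:51000   # on every node, e.g. via .bashrc
>>>    iptables -A INPUT -s 10.10.10.0/24 -p tcp --dport 22 -j ACCEPT
>>>    iptables -A INPUT -s 10.10.10.0/24 -p tcp --dport 50000:51000 -j ACCEPT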
>>>
>>>
>>>
>>> -- Reuti
>>>
>>>
>>>> Many thanks to all, and to Gus, Reuti and Rajeev in particular.
>>>>
>>>> Cheers,
>>>> Ron
>>>>
>>>> On 3/03/2014 10:09, Gustavo Correa wrote:
>>>>> Hi Ron
>>>>>
>>
>>
>>
--
*Ron Palmer* MSc MBA.
Principal Geophysicist
ron.palmer at pgcgroup.com.au
0413 579 099
07 3103 4963