[mpich-discuss] Predefined datatype implementation
Jeff Hammond
jeff.science at gmail.com
Wed Nov 20 15:11:58 CST 2013
On Wed, Nov 20, 2013 at 2:20 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
> Jeff Hammond <jeff.science at gmail.com> writes:
>
>>> 1. System headers and libraries are still often on a remote filesystem.
>>> The workflow is a lot more complicated if you mirror that locally.
>>
>> The workflow does not change at all. E.g. "#include <stdio.h>" works
>> irrespective of where you build.
>
> Yes, slowly because it's coming from the global file system.
Sorry, but no: /usr, /opt, and /soft are not mounted from a global file
system. The beauty of static linkage on these huge supercomputers is
that the system headers and objects don't have to be visible to the
compute nodes at all. When they are visible, it is usually (e.g. on
Cray and IBM Blue Gene) via a special ramdisk or local mount that makes
access fast.
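For the skeptical, a quick way to confirm that a statically linked
binary carries everything it needs (standard Linux tools; binary name
hypothetical, output abbreviated):

    $ file ./app.exe
    app.exe: ELF 64-bit LSB executable, statically linked, ...
    $ ldd ./app.exe
        not a dynamic executable

Once that holds, nothing under /usr or /soft has to be mounted on the
compute nodes for the job to run.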
>>> 2. /tmp isn't shared between login nodes, let alone compute nodes, so
>>> you have to copy back to global storage. With multiple login nodes,
>>> when you background/nohup a compilation task and log out or are
>>> disconnected, you have to remember the login node number to get back to
>>> the result. The cognitive load of doing all compilation on /tmp,
>>> especially with multiple packages, is not trivial.
>>
>> This is not complicated. You just ssh back to the login node you were
>> building on before or you rsync between them. People that don't know
>> how to use ssh and rsync shouldn't be building MPICH from source.
>
> 1. You have to remember an extra number (the login node number).
Clearly, this is too great a burden for any computational scientist to bear.
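For reference, here is the entire workflow in question, assuming
(hypothetically) that the build ran on login node 3:

    # Either go back to where the build artifacts live...
    ssh login3
    # ...or pull them over to whatever login node you landed on:
    rsync -a login3:/tmp/$USER/build/ /tmp/$USER/build/

One command either way.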
> 2. Normal users don't build MPICH from source on expensive machines, but
> they do build PETSc and downstream packages. Since there is no standard
> system for caching the results of configure tests, each test will be run
> at each relevant node within the dependency graph, which is a lot for
> applications with a deep stack consisting of well-factored libraries.
>
>>> 3. Try teaching this to a novice. The fact is that if HPC is going to
>>> grow its user base, it has to be easier to use. If you want to take the
>>> elitist stance that expensive machines should only be used by experts,
>>> stop giving INCITE awards to people that don't know how to use
>>> compilers, debuggers, or the shell. Until then, hiccups along the way
>>> land in our lap, and a complicated workflow only serves to intimidate
>>> and to create more opportunity for human error, which again comes back
>>> in the form of support mail.
>>
>> Novices don't build MPICH from source.
>
> The whole question is about configure tests for packages depending on
> MPI, not for building the MPI implementation.
Novices don't build anything from source, just as novice drivers
don't change their own oil or tires. If you contend otherwise, you
are stretching the word "novice" to prop up a weak argument.
>> I like fish, but I refuse to swallow all the red herrings you keep
>> feeding me.
>
> Does this mean you are volunteering to take over the support email for
> incompetent INCITE awardees and that you are happy to alienate the
> scientists with modest technical skills that are not yet able to win an
> INCITE?
TMI for this list.
> Or we can agree that although requiring every package to test a zillion
> things in their configure sucks for workflow, usability, and speed,
> there is no viable technical solution available, so we have few options
> short of the life-consuming task of attempting to build and distribute a
> real solution.
Your novice users should use the provided installations or request one
if the package they need is not available. If they want more than
release versions of standard packages, they aren't novices and need to
RTFM their way out of their problems.
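And for what it's worth, autoconf is not entirely without caching: any
autoconf-generated configure script accepts --cache-file, and a
config.site file can pre-seed common results. A sketch, with the caveat
that sharing a cache across packages is only safe when the compiler,
flags, and environment are identical (package directories hypothetical):

    # Seed the cache once, then reuse it down the stack.
    export CACHE=$HOME/config.cache
    (cd pkg-a && ./configure --cache-file=$CACHE)
    (cd pkg-b && ./configure --cache-file=$CACHE)

It is not the standardized, stack-wide solution Jed is asking for, but
it eliminates a fair number of repeated tests.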
Jeff
--
Jeff Hammond
jeff.science at gmail.com