Re: union mounts

From: David Jeske <jeske_at_nospam.org>
Date: Tue Nov 10 1998 - 15:12:59 PST

This discussion has started to get a bit off topic. However, seeing as
it has to do with union mounts and 'doing a better UNIX than UNIX' it
still seems somewhat relevant for VSTa. If anyone thinks otherwise,
let me know and we can take it private.

On Tue, Nov 10, 1998 at 02:08:21PM -0800, Eric Dorman wrote:
> > 1) All modern shells (bash,tcsh,zsh) do this regardless of whether you
> > use union mounts or not. I suppose the union mount lib could keep hash
> > tables itself.
>
> Um, so? These shells need such stuff for traditional unices that
> have a plague of paths, particularly older systems that don't do
> path caching in the kernel (if any such still exist).

I agree. I realized the weakness of my argument there after writing
it. However, I still prefer the shell solution, more below...

> Seems to me one would want to identify that a cache would solve a
> performance hit rather than adding it just because legacy systems do so
> or continue carrying around legacy baggage for the sake of continuity
> (an unfortunate affliction vsta suffers from).

Agreed. However, I don't like the idea that the operating system's
filesystem code is 'dissecting my packages' to create command line
items. I think of command lines as the first avenue of component
software. Now that systems have reached a point where we have
different types of IPC, and different ways to connect components
together, command lines should be one manifestation of a more general
component mechanism. As a result, I'd prefer that my packages simply
published 'component functionality' and the shell dealt with command
line issues.

For example, I'd much prefer if we could externalize the ideas of
command line options and piping into the command line shell
completely, leaving only the publishing of the component API to the
package itself.

As a more specific example, I'd prefer if, say, compilers conformed
to a 'basic compiler API' which would let you control basic compiler
functions. The shell could translate these functions into a standard
set of command line arguments. Thus, a compile could be set up to use
"cc --debuginfo --strict-ansi" and it would work with whatever
compiler happened to be bound into the environment.
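To make the idea concrete, here is a minimal sketch of what the shell
side of that translation might look like. Everything here is invented
for illustration: the CC_IMPL binding, the logical option names, and
the flag mappings are assumptions, not anything that exists in VSTa or
UnixTools.

```shell
#!/bin/sh
# Hypothetical sketch: the shell translates logical compiler options
# into native flags for whichever compiler is bound into the
# environment. CC_IMPL and the option names are made up.
CC_IMPL=${CC_IMPL:-gcc}   # whichever compiler the environment binds

cc() {
    flags=""
    for opt in "$@"; do
        case "$opt" in
            --debuginfo)   flags="$flags -g" ;;
            --strict-ansi)
                case "$CC_IMPL" in
                    gcc) flags="$flags -ansi -pedantic" ;;
                    *)   flags="$flags -Xc" ;;  # e.g. a vendor cc
                esac ;;
            *) flags="$flags $opt" ;;
        esac
    done
    # A real shell would exec the compiler here; echoing the command
    # line instead shows the translation.
    echo $CC_IMPL $flags
}

cc --debuginfo --strict-ansi foo.c
```

The point is that the logical-to-physical mapping lives in the shell,
so rebinding the environment to a different compiler changes the
emitted flags without touching any build scripts.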

However, I think this is once again getting outside of the scope of
the discussion... I'll concede this point as not as important as my
next one.

> > 2) To me it's not so important whether you use union mounts or my
> > scheme for adding the package to your environment. The important part
> > is that you specify the package with a logical name (like
> [etc]
>
> Your solution works for end applications on unices (which is lousy
> for architectural dependencies anyway), but you still do not address the
> problems associated with different headers with the same name (where is
> <tcl.h> this week? Which version is it? How does Joe User access it?),
> dependencies on static libraries or apps you may not have source for, and
> intermediate processing files or databases located outside the scope of
> the filesystem. Most solutions to these problems are intrusive on the
> users, complicated to administer or downright impossible to implement
> in an encapsulated way.

Actually, my system works really well (IMO) for the above issues. I
allow the installation of multiple library/header packages. The
packages are identified by logical names instead of physical ones, and
the tool system can build your include and library paths
automatically, no matter where stuff is installed.

For example:

PACKAGES=xpm-3.4k mysql-3.21.33
INC_PATHS=`ut.incs ${PACKAGES}`

# which expands on my machine to:
# -I/usr/local/encap/xpm-3.4k.encap/include \
# -I/home/jeske/local/encap/mysql-3.21.33.encap/include

Of course this would be better if the idea of logical packages were
pervasive and I didn't have to shell out to some script to do it.
However, this works much better for me than the alternatives.
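For what it's worth, a rough sketch of what a helper like `ut.incs`
might do internally, based on the paths in the expansion above. The
ENCAP_DIRS repository list and the `<name>.encap/include` layout are
assumptions inferred from that example, not the actual UnixTools code.

```shell
#!/bin/sh
# Rough sketch of a ut.incs-style helper: map logical package names
# to -I flags by searching a list of encap-style repositories.
# ENCAP_DIRS and the <name>.encap/include layout are assumptions.
ENCAP_DIRS="/usr/local/encap $HOME/local/encap"

ut_incs() {
    incs=""
    for pkg in "$@"; do
        for dir in $ENCAP_DIRS; do
            # First repository that holds the logical package wins.
            if [ -d "$dir/$pkg.encap/include" ]; then
                incs="$incs -I$dir/$pkg.encap/include"
                break
            fi
        done
    done
    echo $incs
}
```

With something like this, the makefile only ever names logical
packages; the physical install locations can differ per machine.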

> > 3) what do you do with union mounts when you have naming collisions?
>
> A Plan9-esque MBEFORE/MAFTER option easily resolves collisions in
> a predictable fashion.

That's not exactly what I was asking. If you have a collision, how do
you make sure both versions of the same filename are still accessible
to the user somehow?

> If you presume a unix platform, we always use a clean slate when
> rebuilding an old version. This eliminates subtle dependencies
> and is a closer representation of the field hardware.

This is exactly the kind of unix behavior I don't like. I don't like
the idea that UNIX-like systems are like 'castle building'. Start with
a foundation (clean slate) and only by laying every custom brick on
top can you get to the final destination. Union mounts 'feel' to me to
be in the same spirit.

I want to lower administration costs (i.e. the things normally
associated with cost of ownership). In a sense, I want to 'prefab' as
much as possible to interlock in logical instead of physical ways.
That way, you can download something which depends on logical
functionality or logical packages, instead of having to custom build
every system from scratch.

> > > [xxx 'encapsulation']
> > > > This allows me to have lots of versions of the same things installed,
> > > > and allows me to be more aware of command line naming conflicts.
> > > This is an advantage? Seems the road to madness, with the possibilities
> > > of different header, log or data file formats with misc. versions floating
> > > around. Plus it gets icky quickly if multiple architectures must be
> > > supported.
>
> > The road to madness is forcing everyone to upgrade to a new version at
> > the same time.
>
> You presume we switch everyone at the same time.

My presumption was that there was some network-shared (NFS/SMB)
repository of applications, yes. Although in my opinion, forcing
someone to upgrade without being able to keep his old stuff intact is
not usually acceptable. I work in Engineering, so if an IS/IT
policy/procedure costs me time, then it's non-optimal. I came up
with the UnixTools system because I was tired of upgrades, old
versions, and recreating ancient environments eating up my time.

> As far as unix goes, we prefer to exercise 'proactive administration'.
> New versions are staged outside user purview, then after major milestones
> dependent applications are brought into the new tree, regression tested,
> etc until we're happy, then dependencies/applications are switched around
> with a symlink; there is only the 'old tree' and the 'current tree'.
>
> This of course presumes a dependency change doesn't force changing a
> database structure or somesuch; when that happens we have to
> migrate data (bleh).
>
> The milestone process archives apps with all their dependencies
> simultaneously, so rebuilding older versions is trivial. I do not
> recall ever having to build an older version in production (after
> 'stamping' masters) but the formalism is there.
>
> Things with little dependency (ala' netscape-4.03 et al.) are
> easily pulled from backup if the install is bad; I don't recall
> ever having to do this, but again the formalism is there. At any
> rate things are always backed up so bad builds/installs are no
> big deal.

To me, what you've described above is a 'dictate down' style of
administration. The sysadmins/IS people test, configure, and finally
push new versions down to users. That's fine, but it doesn't work very
well (IMO) when you have people who need to build their own
environments. If we had used the above system at my last job, we would
have had to hire an IS person for every three engineers.

I use the UnixTools stuff to turn the ideas you express above into
administration that _all_ users can perform. Every user gets the
robust ability to recreate known environments, to recreate old
environments, and to publish the details of an environment to other
users.

> > Personally, I like to test the seaworthyness of my next ship before I
> > sink the old one. Others clearly like to sail the ocean in a more
> > risky manner.
>
> Unfair and ill-informed slight.

Agreed, that was an unfair way to put it. I think I understand the
distinction between our two approaches now; correct me if my summary
above was incorrect.

> > FWIW, UnixTools beautifully handles multiple architecutres:
> > netscape-4.03.encap/sun-solaris/bin/*
> > i386-linux/bin/*
>
> Now add 20 headers, 25 libraries, 12 apps in 3 languages, a pile of
> shell scripts, 3 daemon users + daemons and a 36Gb database all with
> a myriad of interdependencies, for 3 architectures. Thus not
> 'sliced-bread', surely.

That is exactly the kind of scenario UnixTools was designed to
handle. Personally, I'd prefer to revamp the OS to handle this kind of
thing more natively, but given what UNIX offers, I've not seen a
mechanism which handles this kind of dependency better than my
UnixTools setup.

Just remember that one of my prerequisites is that every user needs to
have multiple configurations, different users need to have different
configurations, and a user needs to be able to recreate an 'old'
environment, all without the IS department getting involved or some
complicated rebuild.

A coworker of mine went on to another job and recreated my UnixTools
system with a twist. Instead of building paths and the like, his tools
would automatically build symlink trees. Each user could have multiple
symlink trees, they would be automatically built and torn down, and
standard paths would be pointed at the appropriate symlink
environment. I find this less clean than my solution, but it achieved
the goals just as well.
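A minimal sketch of that symlink-tree approach, assuming each package
installs into its own directory with a bin/ underneath. The function
name and all paths here are invented; this is just the shape of the
idea, not my coworker's actual tool.

```shell
#!/bin/sh
# Hypothetical sketch of the symlink-tree variant: merge the bin/
# directories of several per-package install trees into one
# environment tree that a single PATH entry can point at.
build_env_tree() {
    env_dir=$1; shift
    # Tear down and rebuild so the tree always matches the list.
    rm -rf "$env_dir/bin" && mkdir -p "$env_dir/bin"
    for pkg in "$@"; do
        for f in "$pkg"/bin/*; do
            # -f means a later package wins on a name collision.
            [ -e "$f" ] && ln -sf "$f" "$env_dir/bin/$(basename "$f")"
        done
    done
}
```

Tearing an environment down is then just removing its tree, and a
single PATH entry pointing at $env_dir/bin picks up whichever package
versions were linked in.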

> Users depend on apps, but apps and packages depend on a host of things
> unrelated to /bin not nearly so easily encapsulated as you suggest.

I agree that the dependencies are not necessarily trivial to
specify. However, in my experience, the ability to 'freeze dry' a
logical list of packages goes a long way towards taking the 'custom
setup' out of Unix installations.
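The 'freeze dry' step itself can be sketched in a few lines: because
the environment is fully described by its logical package list,
publishing or recreating it amounts to saving and re-reading one
assignment. The file name myproject.env is invented for illustration.

```shell
#!/bin/sh
# Hypothetical sketch of 'freeze drying' an environment as a logical
# package list. The myproject.env name is made up.
PACKAGES="xpm-3.4k mysql-3.21.33"

# Publish: record the logical environment for other users (or for
# recreating it years later).
echo "PACKAGES=\"$PACKAGES\"" > myproject.env

# Recreate, possibly on a different machine: read the logical names
# back and let tools like the ut.incs script rebuild the physical
# include/library paths from them.
PACKAGES=""
. ./myproject.env
echo "$PACKAGES"
```

The physical layout of the target machine never appears in the saved
file, which is what takes the 'custom setup' out of reproducing it.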

-- 
David Jeske (N9LCA) + http://www.chat.net/~jeske/ + jeske_at_chat.net
Received on Tue Nov 10 12:06:00 1998

This archive was generated by hypermail 2.1.8 : Thu Sep 22 2005 - 15:12:56 PDT