Re: capabilities / security

From: MS Research Fellow <t-jont_at_nospam.org>
Date: Sun Dec 18 1994 - 20:45:42 PST

Andrew Valencia <vandys@cisco.com> wrote:
> [Jonathon Tidswell (MS Research Fellow) <t-jont@microsoft.com> writes:]

[ ...]

> >All resources need to be protected by their respective servers; a
> >microkernel design does not want to include excess material in the
> >kernel, but shifting the responsibility does not solve the problem;
> >it simply increases the number of places it needs to be addressed,
> >and correspondingly the number of places it can be poorly addressed.
>
> True, but much of the code is still shared (courtesy of libraries). It also
> allows filesystems to omit code which makes no sense--for instance, the DOS
> server has no per-user security or resource controls because the filesystem
> itself does not understand the concept of multiple users.
In a multiuser "secure" system, I would hope all servers that did not
support both multiple users and security protection would be "provably"
hidden behind servers that do.

Fortunately, personal development machines (based on DOS) are unlikely
to be either multiuser or secure, and the advantages of FAT justify its
use. (Did I write those words? :-)
Equally, in many cases the overhead of security checking is unnecessary,
and in these cases not burdening the kernel with security is a big win.

[ However, the way the Internet is going and the way system prices are
going, I expect companies will be willing to spend for dual Pentiums if
that will allow them to have a more secure system. ]

[ actually quoting me - JonT ]
> >> > Non-disclosure is normally an issue of simple Trojan horses, covert
> >> > channels (unintended information channels) and users incorrectly
> >> > setting their permissions.
>
> Covert channels are a black hole. There are too many possibilities for
> modulation, and a useful general-purpose system does not result from
> blocking them.
I agree with the first statement and I'd love any references supporting
the second.
[ I can see that the second could easily be true, but I'd like to be
able to cite something at uni. ]

> >I would like to describe a Trojan horse that currently seems possible.
> >If I've slipped up please tell me so.
>
> Nope. Well, maybe. Since any process can enable/disable any of its ID's, it
> is entirely possible to disable all your "useful" abilities and forge
> perhaps a very modest subset ID. Your "po" utility thus does NOT have to
> inherit everything the user possesses.
Absolutely.
But how do I stop 'po' inheriting them in a way that itself cannot be
attacked, but is sufficiently trivial for users that they won't skip it?
[ Not rhetorical - beyond the limit of my knowledge of ID manipulation. ]
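
[ To make the question concrete, here is a rough C sketch of the kind of
wrapper I have in mind; drop_all_ids_except() is purely hypothetical, a
stubbed stand-in for whatever kernel call actually disables IDs on a
process. ]

/*
 * Sketch only, not the real API: run an untrusted program with every
 * ability dropped except one modest, purpose-built ID, so something
 * like 'po' never inherits the caller's full set.
 */
#include <stdio.h>
#include <unistd.h>

/* Hypothetical stand-in, stubbed so the sketch compiles; the real thing
 * would be a kernel or ID-server call. */
static int drop_all_ids_except(const char *keep)
{
    fprintf(stderr, "(stub) keeping only ID '%s'\n", keep);
    return 0;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: run-untrusted <prog> [args...]\n");
        return 1;
    }

    /* Everything except a forged guest ID is disabled before the
     * untrusted image ever runs; it cannot re-enable what it never had. */
    if (drop_all_ids_except("guest.untrusted") < 0)
        return 1;

    execvp(argv[1], argv + 1);
    perror("execvp");
    return 1;
}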

> I've pondered a database of paths and a tabulation of "how much" to trust
> programs under them. This might contribute to a solution, because you could
> then have your shell automagically reduce abilities as it goes about running
> the suspect code.
Ahhh, a partial answer to my question above.

There are a number of possibly separate questions on this idea:
a) What is to be controlled based on the concept of "how much trust" ?
b) How to manage the labelling of programs ?
c) Where to manage the controls ?
d) How to verify that the above is correct and sufficient ? (is it ?)
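
[ A toy illustration of (a)-(c): a path-prefix trust table the shell could
consult before deciding how far to reduce abilities. The table, the trust
levels and the longest-prefix lookup are my own assumptions for the sake
of the example, not anything that exists today. ]

/* Sketch of a path-to-trust lookup a shell might do before exec. */
#include <stdio.h>
#include <string.h>

enum trust { TRUST_NONE, TRUST_USER, TRUST_FULL };

struct trust_entry {
    const char *prefix;   /* directory the program lives under */
    enum trust  level;    /* how much of the caller's IDs to pass on */
};

static const struct trust_entry table[] = {
    { "/vsta/bin/",  TRUST_FULL },  /* system-installed, assumed vetted */
    { "/usr/local/", TRUST_USER },  /* locally built: user IDs only */
    { "/tmp/",       TRUST_NONE },  /* downloaded junk: forge a guest ID */
};

/* Longest matching prefix wins; unknown paths default to least privilege. */
static enum trust trust_for(const char *path)
{
    size_t best = 0;
    enum trust level = TRUST_NONE;

    for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
        size_t n = strlen(table[i].prefix);
        if (n > best && strncmp(path, table[i].prefix, n) == 0) {
            best = n;
            level = table[i].level;
        }
    }
    return level;
}

int main(void)
{
    const char *samples[] = { "/vsta/bin/ls", "/tmp/po", "/home/jon/a.out" };

    for (size_t i = 0; i < 3; i++)
        printf("%-16s -> trust level %d\n", samples[i], trust_for(samples[i]));
    return 0;
}

[ Which still leaves (d): who gets to edit the table, and how is it
verified? ]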

Obnote:
It would be nice if "high security" were an optional feature, but this
requires that most code be independent of it. I have no idea how
feasible this is.

Sidenote:
If security is in a library, can't I start a server that provides an
alternative library for all programs started from then on, effectively
bypassing the security ?
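
[ What I mean, as a toy C sketch: send_to_server() stands in for whatever
message primitive the kernel really offers, and the "library" permission
check is invented for illustration. ]

#include <stdio.h>

/* Stub for the raw message-send primitive every process can reach. */
static void send_to_server(const char *request)
{
    printf("server received: %s\n", request);
}

/* "Library" path: checks a permission bit, then sends the request. */
static void lib_unlink(const char *path, int caller_may_write)
{
    if (!caller_may_write) {
        fprintf(stderr, "library refused unlink of %s\n", path);
        return;
    }
    char req[128];
    snprintf(req, sizeof(req), "unlink %s", path);
    send_to_server(req);
}

int main(void)
{
    lib_unlink("/etc/passwd", 0);          /* the library says no...        */
    send_to_server("unlink /etc/passwd");  /* ...but nothing stops this one */
    return 0;
}

[ In other words, unless the server repeats the check itself, a library
only keeps honest programs honest. ]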

- JonT