Re: Q: messagesystem

From: Andrew Valencia <vandys_at_nospam.org>
Date: Wed Oct 19 1994 - 08:52:06 PDT

[jw@ibch50.inf.tu-dresden.de (Wittenberger) writes:]

>1) Correct me if I'm wrong:
>As I understand it, a request like "read" is done by turning the
>given buffer into the area where the read bytes go and sending out an
>appropriate message. Are there any cases yet where multi-segment
>messages are generated?

Things are more subtle than this. The M_READ modifier to an m_op (such
as FS_READ) controls whether the buffer is provided to the server, or
whether the segments the server msg_reply()'s with will be copied out
to the buffer.

For a typical filesystem read, M_READ is set, the server sends back some
segments and they are copied out before the msg_send() completes.

For DMA, M_READ is *not* set, although the operation is still an FS_READ.
The buffers are thus made available to the DMA server, which will write
into them directly.

Note that the buffers made available to a normal (non-DMA) server are
read-only. A DMA server gets them read-write, so it can do either physical
or programmed I/O.

>2) A message is transferred to the receiver by making segments from the
>buffer(s) and mapping them into the address space of the receiver. There
>is no copying. (?)
>Assume one task of process 1 sends a message to a second process, and a
>second task of process 1 modifies the data of the message later. What
>happens? Is this a caught violation, is the data process 2 will read
>modified, or something else?

As you can see, this race condition is why you can't just leave pieces of a
server's address space attached to a client.

>3) The rule is: everything that can be done in the library goes in the
>library. So for every kernel service there should be a good reason to make
>it a kernel service. But I can't find a good reason for msg_connect,
>msg_accept, msg_disconnect (and msg_err). In my opinion they could be
>done by the library, and under certain circumstances they could be left
>out.

You also can't let misbehaving processes violate security or break the
system. The connection phase authenticates the connecting user to the
server. If, for instance, the user got to build his own connect message,
then something else would have to be done to provide this authentication.

A state machine is also applied to protect against the various edge
conditions of servers or clients dying during each phase of the connection
and ongoing I/O. Nobody (plus or minus a bug!) is ever stranded by the
failure of the "other side".

>4) A non-blocking send/receive pair would make it easier to build a
>level with async services between the file-like level and the
>kernel. We feel we need this. Is there anything to say against it?

Read the QNX papers. They make a good starting point. I had coded async
messaging ages ago, and the implementation was a lot more trouble, and your
average client benefited little. VSTa has threads, which are used for the
rare cases where we want to handle multiple I/O's in parallel (see KA9Q).

>One more idea about kernel services: Kernel entry is expensive (on
>Intel systems at least). The L3 system saved a lot of time by
>providing an additional service, reply_and_receive_next. I think it
>would be a good idea to provide a library function like that and
>prefer its use over the use of the single calls. So a later
>optimization by an additional kernel service won't hurt.

It would be interesting to characterize how often a reply is done with
another request already on the queue. I'd guess not very often on a
single-user system. When we get the VSTa source server machine on the net
we could measure under a multi-user load.

Converting to reply-and-receive-next would break the structure of many
servers (the reply is deeply buried while the receive is at the top of the
server loop). It would probably require some global variables, so I'd
rather verify the benefit before going in this direction.

                                                Andy
Received on Wed Oct 19 07:37:54 1994

This archive was generated by hypermail 2.1.8 : Thu Sep 22 2005 - 15:11:46 PDT