Re: Event handlers

From: Eric Jacobs <Eric_Jacobs_at_nospam.org>
Date: Mon Dec 18 2000 - 13:57:34 PST

Andy Valencia <vandys@zendo.com>:

> [Eric Jacobs <eaj@ricochet.net> writes:]
>
> > * A process has two distinct incoming streams
> > * of events--a process-generated one and a system-generated one.
> > * The system events take precedence. For process events, a sender
> > * will sleep until the target process has accepted a current event and
> > * (if he has a handler registered) returned from the handler.
> >
> > If the sender blocks, we'd have to use multiple threads to simulate
> > re-entrant interrupts, which could be kind of clumsy.
>
> You should use a thread to accept the ISR message. From there, I'd
> suggest just running the ISR code directly from that thread. There's no
> reason to suspend executing the ISR code to service the same ISR.

The Linux drivers probably do not expect their ISR's to run
concurrently with the service threads. So we would need some way
to suspend the currently running threads. It appears that notify()
would be the best way to do that.
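
Roughly what I have in mind (a sketch only: notify_service_threads()
and run_linux_isr() are made-up names, and I'm going from memory on
the msg_receive() signature):

    /* Set while a Linux ISR is on the CPU */
    static volatile int isr_active;

    /*
     * Event handler registered by each service thread.  When the
     * interrupt thread notify()'s us, just hold still until the
     * ISR has finished.
     */
    static void
    suspend_handler(char *event)
    {
        while (isr_active)
            ;       /* could sleep on a semaphore instead */
    }

    /*
     * Interrupt thread: pick up an IRQ message, freeze whoever
     * is running Linux code, run the driver's ISR, thaw.
     */
    static void
    irq_loop(port_t irqport, int irq)
    {
        struct msg m;

        for (;;) {
            msg_receive(irqport, &m);
            isr_active = 1;
            notify_service_threads();   /* notify() each one */
            run_linux_isr(irq);         /* handler registered at init */
            isr_active = 0;
        }
    }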

I'm inclined to try to emulate the Linux environment as closely as
possible, to maintain compatibility with the many idiosyncratic drivers
and devices out there. If you look at the "Execution Environments"
section of the OSKit documentation, you'll see that they have it
spelled out.

If I were going to translate each driver individually, I would probably
look at reworking the ISR calling procedure to make it
simpler and faster. But since the OSKit simply wraps the drivers,
assuming a Linux execution model, I'm wary of such shortcuts.

> If there's something long enough that you feel you need to get back to
> waiting for front-line ISR's, you should use mutex_thread() to kick
> awake a background thread to do the rest of the work.
>
> > But I don't see any code to actually make it do this. It looks as
> > though the sender doesn't block at all and the receiver just picks up
> > the event the next time it enters the kernel.
>
> See signal_thread(), for the EAGAIN case right near the top of the
> routine.

Right, but t_evproc is cleared before the event is sent, not after
it returns. That's what was throwing me off.

> > Actually, having it asynchronous like that would be more suitable for
> > simulating interrupts. I'm thinking of a scenario where we have N
> > shepherd threads that all block on msg_receive(), synchronize with the
> > global lock and then jump into the driver. When an interrupt message
> > is received, we can notify the currently running thread (which itself
> > may be in an interrupt.) The interrupted thread would be "wasted" for
> > the duration of that interrupt handler, of course, but I see no reason
> > why the interruptor thread should be similarly held up (it could go
> > back for more interrupts, or at least get the next request set up.) Am
> > I on the right track here?
>
> Usually a single driver will handle a single ISR source. If this comes
> to a unique port (see how mach/rs232 does this) you avoid priority
> inversions with the clients of the driver. But so long as you use a
> single thread, you're implicitly serialized for interrupt handling, and
> thus don't have to weigh down your design with lots of locks.

I've thought about having two receiving ports open like that. But each
port requires at least one thread to service it, so we're automatically
multithreaded and I don't see how we can avoid some kind of locking.
The big lock for the Linux code would be its global component lock,
which shouldn't be too difficult anyway.

It may be possible to have a compile- or run-time option that NOP's
out the locking, if there's only going to be one client service
thread (just have it always appear to be holding the lock.)
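
Something like this, say, where acquire()/release() stand in for
whatever mutex primitive we end up using:

    static int want_locking = 1;    /* cleared at load time if we
                                     * know there's just one client
                                     * service thread */

    static void
    linux_lock(void)
    {
        if (want_locking)
            acquire(&linux_global_lock);
    }

    static void
    linux_unlock(void)
    {
        if (want_locking)
            release(&linux_global_lock);
    }

In the single-threaded configuration the lock calls just fall
through, which gives the "always appears to be holding the lock"
behavior.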

> When the ISR code is done with its part and wants to kick some work to
> the "top half" of the driver, it does it by way of mutex_thread()
> (rather than just sending a message to the main message queue, again to
> avoid priority inversion with other work the top half code may be
> executing at any given time). The thread waiting for this wakeup can
> either handle the work, or *it* turns it into a message to the main
> queue (which is what the RS-232 driver does), because it's OK for
> this slave to get blocked on the main
> message queue--it's just the ISR code we don't want to have hung.

The way that the OSKit wraps the drivers (and I'm not saying that this
is the best way for VSTa) is to let the Linux code do all of the signaling
between the top-half and bottom-half driver portions. So in the OSKit
conception, all that the host OS (or process as we'll have it) needs to
do is to suspend the thread that's executing the Linux code (if any)
and then call the appropriate interrupt routine that was registered at
initialization time.
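
That is, the glue only has to remember what the driver handed to
request_irq() and invoke it when an IRQ message comes in. Something
like this (the handler signature is the Linux 2.x one the drivers
expect; run_linux_isr() is the made-up routine that the interrupt
loop sketched above calls once the service threads are suspended):

    struct irq_reg {
        void (*handler)(int, void *, struct pt_regs *);
        void *dev_id;
    };
    static struct irq_reg irq_tab[16];

    /* The Linux driver calls this at initialization time */
    int
    request_irq(unsigned int irq,
        void (*handler)(int, void *, struct pt_regs *),
        unsigned long flags, const char *name, void *dev_id)
    {
        irq_tab[irq].handler = handler;
        irq_tab[irq].dev_id = dev_id;
        return 0;
    }

    /* Called from the interrupt thread once everybody's suspended */
    static void
    run_linux_isr(int irq)
    {
        if (irq_tab[irq].handler)
            irq_tab[irq].handler(irq, irq_tab[irq].dev_id, 0);
    }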

> You really, really don't want to try and run a single process which
> represents all of the Linux drivers collectively. Each individual
> driver should be linked into an emulation environment which runs as its
> own distinct process. Then you're generally looking at one ISR source,
> and if you map it to one thread, things simplify nicely.

If we have two ports, one for clients and one for interrupts, we'll
need at least two threads because msg_receive() will block, so
single-threading is out. If we have only one port, then we'll still at
least need the option to have another thread to look for IRQ's while
the service thread is running (for instance, some Linux drivers could
busy-wait for interrupts.)
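
So the skeleton ends up looking something like this
(open_client_port(), open_irq_port(), and client_loop() are made-up
wrappers, and I'm going from memory on tfork() and enable_isr(), so
check me on the signatures):

    static port_t clients, irqport;

    static void
    irq_thread(ulong irq)
    {
        irq_loop(irqport, (int)irq);    /* interrupt loop from above */
    }

    int
    main(void)
    {
        int irq = 9;                    /* whatever the card uses */

        clients = open_client_port();   /* public port for clients */
        irqport = open_irq_port();      /* anonymous port for IRQs */
        enable_isr(irqport, irq);       /* IRQs arrive as messages */

        tfork(irq_thread, (ulong)irq);  /* thread #2: interrupts */
        client_loop(clients);           /* thread #1: serve clients */
        return 0;
    }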

I agree that running multiple drivers in one process is not going to
be important for most applications. My attitude is that if we implement
the driver set correctly (as specified by the OSKit), then drivers
aggregated together will work. If not, random things will break.
On the other hand, being able to run, say, a driver and a filesystem
in the same process could be a big win, because then we avoid a
whole layer of IPC.

I know that all of this is not going to be optimal, and if we were in the
business of rewriting the driver set I'm sure we could do things
better. The good news is that a lot of these options could be selected
at run-time (for example, if we knew it was okay, we could load a
driver to run single-threaded, and its structure would reduce to
something vaguely similar to the way native VSTa drivers are
written.) Maybe we need something like the FreeBSD "ports" system,
where we can collect data on how to run each driver best.