Re: VSTa, tsync, user level semaphores

From: Gary Shea <shea_at_nospam.org>
Date: Wed Dec 14 1994 - 12:26:05 PST

I'm bringing this in-progress conversation to the list (by
request) to see what the experts have to say. Sorry
about the length... I'm the >>>'s; the other quotes are
Masato Kataoka-san (kataoka@dbg.bs1.fc.nec.co.jp).

His first mail to me asked why I was using an infinite-loop
spinlock to protect the user-level semaphore structures:

>>> Masato Kataoka San writes:
>>> >If so, is this package assuming a specific scheduling policy?
>>> >I mean one that lowers the priority of a process and preempts it
>>> >if it holds the CPU too long.
>>> >Otherwise this slock_wait() spins forever.
>>>
>>> Interesting question. I must admit I never even thought about
>>> it, so I guess the answer is Yes, I am assuming a pre-emptive
>>> (at least) scheduling policy.
>>>
>>> There has been some talk of a yield() system call which would
>>> give the current thread's time to another thread in the process;
>>> I plan on using that whenever it appears, but I haven't really
>>> thought it through.

Any comments on this part? Will there be a yield() system call?
I think maybe Dave Hudson mentioned that one...

In the absence of such a beast, should I be giving the user more
control over what the spinlock does? I guess it could simply
fail after some number of (user-specified, maybe 0 => forever?)
loops and return an error... EAGAIN?
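A bounded slock_wait() along those lines might look like the sketch
below. The structure and names here are guesses for illustration, not
the package's actual code, and the test-and-set is a non-atomic
placeholder (a real port would use the CPU's atomic primitive):

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical spinlock; field and function names are illustrative. */
typedef struct {
	volatile int locked;	/* 0 = free, 1 = held */
} slock_t;

/* Stand-in for an atomic test-and-set.  NOT atomic as written;
 * real code would use something like x86 xchg. */
static int
slock_tryget(slock_t *l)
{
	if (l->locked)
		return (0);
	l->locked = 1;
	return (1);
}

/* Spin up to 'max' iterations; max == 0 means spin forever.
 * Returns 0 on success, EAGAIN if the loop budget runs out. */
static int
slock_wait(slock_t *l, unsigned long max)
{
	unsigned long n = 0;

	while (!slock_tryget(l)) {
		if (max && ++n >= max)
			return (EAGAIN);
	}
	return (0);
}
```

With max == 0 this degenerates to the current infinite loop, so the
existing behavior stays available as the default.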

>Is it possible to use msleep() to give the CPU to another thread in the
>same process?
>I don't know very much about the thread scheduling policy in VSTa, but
>a yield() syscall would be a good thing to have.

The referred-to function, msleep(), is apparently available on
AIX -- I don't know that VSTa has it, and I don't know anything
about what it does on AIX. I suspect that what I'm calling yield()
above (I can't remember what Dave called it) is the same thing as
msleep().

>BTW, am I correct in saying that user-level semaphores are preferable
>to kernel-level ones for the following reason?
>
> If resources are not contended very often (i.e. in most cases
> taking and freeing a semaphore merely increments and decrements
> the semaphore count), switching to the kernel is just too expensive.
> But since a resource may be held for an arbitrary period of time,
> spin-waiting is suicidal under contention, even if the spin loop
> sleeps for a fixed interval on each pass.
>
>Since you didn't write much on the VSTa list about why user-level
>semaphores are good, I just wanted to know why. If there are
>other pros and cons of user-level semaphores, please enlighten me.

If you're saying that using a kernel semaphore is sometimes too
expensive when it needn't be, and that a spinlock is impractical
because it wastes cycles spinning while the resource is held, well,
that's exactly what I would have said!

I didn't initiate this project, by the way, I just asked Andy what might
be a good thing to work on...

>>> Do you have any suggestions about what we should be doing instead
>>> of the infinite loop? I'm open to suggestions.

>But if msleep() can give the CPU to another thread in the same process
>(even if it MAY give the CPU to another process), it might be
>preferable to call msleep() with a fixed period inside the spin loop.

This seems reasonable, and is pretty much what I plan to do when
an msleep()-like routine appears.
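That spin-then-sleep idea could look roughly like this. msleep() is
the assumed (and so far unconfirmed for VSTa) call; here it is stubbed
out, and the lock holder's release is simulated, purely so the sketch
is self-contained:

```c
#include <assert.h>

/* Count back-offs so the sketch can demonstrate the behavior. */
static unsigned int sleeps;

/* Stub: a real msleep() would block for roughly 'ms' milliseconds,
 * letting another thread (or process) run. */
static void
msleep(unsigned int ms)
{
	(void)ms;
	sleeps++;
}

static volatile int locked = 1;	/* pretend another thread holds it */

static int
try_lock(void)
{
	if (locked)
		return (0);
	locked = 1;		/* placeholder; real code needs an atomic op */
	return (1);
}

#define SPIN_COUNT 100		/* spin budget before backing off */

static void
lock_with_backoff(void)
{
	int i;

	for (;;) {
		for (i = 0; i < SPIN_COUNT; i++)
			if (try_lock())
				return;
		msleep(10);	/* fixed back-off period */
		if (sleeps >= 3)
			locked = 0;	/* simulate the holder releasing */
	}
}
```

The fixed period trades latency for wasted cycles; an exponential
back-off would be a small variation on the same loop.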
>
>Other things I wish (if I may :-) you could consider:
>
>1) In order to reduce the chances of spin-waiting, is it possible not
>   to call slock_wait() for the "tsq" structure in tsema_wait()? Is it
>   possible to return a pointer to tsq.ts_l[i] rather than a slot number?

I have been getting a lock on the root structure that holds the
array of semaphore structs while I test to see if the index the
user gave me is valid. I think you're right, that lock is pointless;
the user can trash the semaphore any time! So we might as well get
rid of that particular lock-grab, but still use an index. If I didn't
allow the user to delete a semaphore that's in use (i.e., reference
count it), then I would need that lock, and the lock sequence to modify
the reference count would be tricky. Should I ref count the
semaphores?
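If the semaphores were reference counted, the delete path could be
made safe without holding the root lock across every operation.
A minimal sketch of that idea, with purely illustrative names (not
the package's actual fields or functions):

```c
#include <assert.h>

/* Hypothetical reference-counted semaphore slot. */
typedef struct {
	int in_use;	/* slot currently allocated? */
	int refs;	/* threads currently inside tsema_wait() etc. */
} tsema_t;

/* Take a reference before touching the slot; fails if the slot is
 * free.  In the real package this check would run under the root
 * structure's lock so the slot can't be recycled mid-check. */
static int
tsema_ref(tsema_t *t)
{
	if (!t->in_use)
		return (-1);
	t->refs++;
	return (0);
}

static void
tsema_unref(tsema_t *t)
{
	t->refs--;
}

/* Deletion succeeds only once no thread holds a reference. */
static int
tsema_delete(tsema_t *t)
{
	if (t->refs > 0)
		return (-1);	/* still in use */
	t->in_use = 0;
	return (0);
}
```

The tricky lock sequence Gary mentions is exactly the ref/unref pair:
it has to be cheap, since it sits on the fast path of every operation.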

>2) Is it possible to implement kernel-level semaphores with the same
>   interface? Then a user could specify user-level or kernel-level in
>   the semaphore creation call. I think kernel-level semaphores are
>   better for highly contended resources.

I think the package _is_ that, right now. If a thread tries to
get the resource but it's taken, it calls tsleep (via the
semaphore code) and gets blocked on a kernel semaphore (which holds
all user-level-semaphore-blocked threads). There _is_ the additional
overhead of creating a user-level pid_t queue entry for the thread...
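The split described here can be sketched as a counting semaphore whose
uncontended path never leaves user space. tsleep()/twakeup() below are
stubs standing in for the kernel blocking calls, and all names are
illustrative rather than the package's real ones:

```c
#include <assert.h>

/* Counters so the sketch can show when the kernel gets involved. */
static int kernel_sleeps, kernel_wakes;

static void tsleep(void)  { kernel_sleeps++; }	/* block in kernel */
static void twakeup(void) { kernel_wakes++; }	/* unblock a waiter */

typedef struct {
	/* > 0: free slots; <= 0: -count threads are waiting.
	 * Real code would update this atomically. */
	int count;
} tsema_t;

static void
tsema_wait(tsema_t *s)
{
	if (--s->count < 0)
		tsleep();	/* contended: fall into the kernel */
}

static void
tsema_signal(tsema_t *s)
{
	if (++s->count <= 0)
		twakeup();	/* someone was waiting: kernel wakeup */
}
```

In the common uncontended case both operations are a single counter
update; the kernel is only entered when the count goes negative.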
 
>3) SMP case?

Ideally, it should be possible to spin for some user-specified
amount of time before dropping into the kernel, just in case a
thread on a different processor gives up the resource. That
code is not yet in place. The current code should work in the
SMP case anyway (I think!).
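Such a two-phase acquire might be sketched like this: spin for a
user-tunable count first, because a holder on another CPU may release
at any moment, and only then block in the kernel. The names are
illustrative, and the kernel call is stubbed (it pretends the lock was
freed by the time we're woken) so the sketch runs standalone:

```c
#include <assert.h>

static volatile int lock_word;	/* 0 = free, 1 = held */
static int entered_kernel;	/* did we have to make a syscall? */

static int
try_lock(void)
{
	if (lock_word)
		return (0);
	lock_word = 1;	/* placeholder; real code needs an atomic op */
	return (1);
}

/* Stub for the kernel-level sleep; simulates being woken with the
 * lock now free. */
static void
kernel_block(void)
{
	entered_kernel = 1;
	lock_word = 0;
}

/* Spin 'spin_max' times per round; on exhaustion, sleep in the
 * kernel and try again. */
static void
smp_lock(int spin_max)
{
	int i;

	for (;;) {
		for (i = 0; i < spin_max; i++)
			if (try_lock())
				return;
		kernel_block();	/* spin budget exhausted */
	}
}
```

On a uniprocessor the sensible spin count is effectively zero (the
holder can't run while we spin), which is why the count should be
user-specified rather than hard-wired.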

>Since I'm not an OS researcher or developer or anything, some or all
>of the above comments may be completely wrong or irrelevant.

They seem right on to me...

>I hope you open up a discussion on the list and let the experts talk
>(especially about thread scheduling things).
>That way I can enjoy reading the discussion :-)
>---
>Masato Kataoka
>kataoka@dbg.bs1.fc.nec.co.jp

You got it :)

        Gary
Received on Wed Dec 14 12:00:49 1994
