Re: Threads race for VM

From: Andy Valencia <vandys_at_nospam.org>
Date: Wed Nov 03 1999 - 11:41:51 PST

[Eric_Jacobs@fc.mcps.k12.md.us (Eric Jacobs) writes:]

>What this means is that when lock_slot finds the slot is busy and
>has to wait, we need to recheck all of the conditions that could
>have caused a fault, because the processor wasn't using "thread-safe"
>information. Fortunately, this condition seems to be rather rare.
>Perhaps a more ideal solution would be to have lock_slot() return
>a flag which indicates whether it had to wait for the slot to be
>free. That way we wouldn't have to scan the pp_atl every time.
>Or maybe we could just have vm_fault return in such a case, to try
>to regenerate the fault now that the HAT is up-to-date?

After taking a slightly longer look at this, I've come to the realization
that this approach won't work. Imagine that thread1 has faulted on the
address and comes into the kernel. But then the clock ticks, and the CPU
goes off to service the interrupt.

In parallel, thread2, on another processor (SMP), faults on the same
address, comes in, resolves the fault, and returns.

Thread1 finishes with the clock tick and takes the lock on the slot. There
is no contention, because thread2 has come and gone, so a flag from
lock_slot() indicating a wait would come back false. And yet the page is
valid, and there's already a mapping in their (shared) address space.
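Schematically, the interleaving is:

    CPU0 (thread1)                      CPU1 (thread2)
    --------------                      --------------
    fault on address A
    enter vm_fault()
    clock interrupt; CPU0 goes
      off to service the tick           fault on address A
                                        enter vm_fault()
                                        lock_slot(): uncontended
                                        fill page, add HAT mapping
                                        unlock_slot(); return
    tick done; back in vm_fault()
    lock_slot(): uncontended, so a
      "did we wait?" flag is false,
      yet the page and mapping exist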

I'm going to need to ponder whether it's better to add a per-virtual-page
data structure to the pview, or whether it's acceptable to consult the atl
list for a physical page at mapping time. I'm leaning towards the latter,
since this is already done for the detach. But I can probably fix both add
and delete cases if I convert to a pview-level per-page flag.
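To make the latter option concrete, the attach-time check might look
something like this. Again a sketch only; the field and function names
here are illustrative, not the tree's actual interfaces:

    /*
     * Before adding a translation for a physical page, scan its
     * attach list to see whether this pview already has the page
     * attached at this index.  All names here are illustrative.
     */
    struct atl *a;

    for (a = pp->pp_atl; a; a = a->a_next) {
        if ((a->a_pview == pv) && (a->a_idx == idx)) {
            return;     /* Raced; mapping already present */
        }
    }
    add_atl(pp, pv, idx);
    hat_addtrans(pv, vaddr, pp, prot);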

Andy