On Wed, Mar 23, 2005 at 12:44:52PM +0100, Esben Nielsen wrote:
> On Tue, 22 Mar 2005, Paul E. McKenney wrote:
>
> > On Tue, Mar 22, 2005 at 09:55:26AM +0100, Esben Nielsen wrote:
> > > On Mon, 21 Mar 2005, Paul E. McKenney wrote:
> > [ . . . ]
> > > > On Mon, Mar 21, 2005 at 12:23:22AM +0100, Esben Nielsen wrote:
On Tue, 22 Mar 2005, Paul E. McKenney wrote:
> On Tue, Mar 22, 2005 at 09:55:26AM +0100, Esben Nielsen wrote:
> > On Mon, 21 Mar 2005, Paul E. McKenney wrote:
> [ . . . ]
> > > On Mon, Mar 21, 2005 at 12:23:22AM +0100, Esben Nielsen wrote:
> > > This is in some ways similar to the K42 approach to RCU (which they call "generations").
On Tue, Mar 22, 2005 at 09:55:26AM +0100, Esben Nielsen wrote:
> On Mon, 21 Mar 2005, Paul E. McKenney wrote:
[ . . . ]
> > On Mon, Mar 21, 2005 at 12:23:22AM +0100, Esben Nielsen wrote:
> > This is in some ways similar to the K42 approach to RCU (which they call
> > "generations"). Dipankar put t
On Tue, 22 Mar 2005, Ingo Molnar wrote:
>
> * Esben Nielsen <[EMAIL PROTECTED]> wrote:
>
> > > > +static inline void rcu_read_lock(void)
> > > > +{
> > > > +	preempt_disable();
> > > > +	__get_cpu_var(rcu_data).active_readers++;
> > > > +	preempt_enable();
> > > > +}
> >
* Esben Nielsen <[EMAIL PROTECTED]> wrote:
> > > +static inline void rcu_read_lock(void)
> > > +{
> > > +	preempt_disable();
> > > +	__get_cpu_var(rcu_data).active_readers++;
> > > +	preempt_enable();
> > > +}
> >
> > this is buggy. Nothing guarantees that we'll do the rcu_read_unlock() on the same CPU
On Tue, 22 Mar 2005, Ingo Molnar wrote:
>
> * Esben Nielsen <[EMAIL PROTECTED]> wrote:
>
> > +static inline void rcu_read_lock(void)
> > +{
> > +	preempt_disable();
> > +	__get_cpu_var(rcu_data).active_readers++;
> > +	preempt_enable();
> > +}
>
> this is buggy. Nothing guarantees that we'll do the rcu_read_unlock() on the same CPU
* Esben Nielsen <[EMAIL PROTECTED]> wrote:
> +static inline void rcu_read_lock(void)
> +{
> +	preempt_disable();
> +	__get_cpu_var(rcu_data).active_readers++;
> +	preempt_enable();
> +}
this is buggy. Nothing guarantees that we'll do the rcu_read_unlock() on
the same CPU, and he
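A minimal sketch of the task-local alternative the thread converges toward (the field name is taken from the proposals quoted below; this is an illustration, not any poster's literal patch):

	/* Sketch: track read-side nesting in the task, not in per-CPU
	 * data, so rcu_read_lock()/rcu_read_unlock() pair up correctly
	 * even when the task is preempted and resumes on another CPU.
	 * The proposals quoted below additionally take a per-CPU reader
	 * lock on the outermost rcu_read_lock(). */
	static inline void rcu_read_lock(void)
	{
		current->rcu_read_lock_nesting++;	/* task-local, CPU-independent */
	}

	static inline void rcu_read_unlock(void)
	{
		current->rcu_read_lock_nesting--;	/* valid on any CPU */
	}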
On Tue, 22 Mar 2005, Bill Huey wrote:
> On Fri, Mar 18, 2005 at 05:55:44PM +0100, Esben Nielsen wrote:
> > On Fri, 18 Mar 2005, Ingo Molnar wrote:
> > > i really have no intention to allow multiple readers for rt-mutexes. We
> > > got away with that so far, and i'd like to keep it so. Imagine 100
On Tue, Mar 22, 2005 at 02:17:27AM -0800, Bill Huey wrote:
> > A notion of priority across a quiescence operation is crazy anyways, so
> > it would be safe just to use the old rwlock-semaphore "in place" without
> > any changes or priority handling additions. The RCU algorithm is only
> >
On Tue, 22 Mar 2005, Ingo Molnar wrote:
>
> * Esben Nielsen <[EMAIL PROTECTED]> wrote:
>
> > On the other hand, with a rw-lock being unlimited - and thus not
> > keeping track of its readers - the readers can't be boosted by the writer.
> > Then you are back to square 1: The grace period can take
On Tue, Mar 22, 2005 at 02:04:46AM -0800, Bill Huey wrote:
> RCU isn't write deterministic like typical RT apps are, so we can... (below
> :-))
...
> Just came up with an idea after I thought about how much of a bitch it
> would be to get a fast RCU multiple reader semantic (our current shared
On Fri, Mar 18, 2005 at 05:55:44PM +0100, Esben Nielsen wrote:
> On Fri, 18 Mar 2005, Ingo Molnar wrote:
> > i really have no intention to allow multiple readers for rt-mutexes. We
> > got away with that so far, and i'd like to keep it so. Imagine 100
> > threads all blocked in the same critical section (holding the
* Esben Nielsen <[EMAIL PROTECTED]> wrote:
> On the other hand, with a rw-lock being unlimited - and thus not
> keeping track of its readers - the readers can't be boosted by the writer.
> Then you are back to square 1: The grace period can take a very long
> time.
btw., is the 'very long grace period'
On Mon, 21 Mar 2005, Paul E. McKenney wrote:
> On Mon, Mar 21, 2005 at 12:23:22AM +0100, Esben Nielsen wrote:
> > > [...]
> > Well, I was actually thinking of an API like
> > preempt_by_nonrt_disable()
> > preempt_by_nonrt_enable()
> > working like the old preempt_disable()/preempt_enable() bu
* Kyle Moffett <[EMAIL PROTECTED]> wrote:
> One solution I can think of, although it bloats memory usage for
> many-way boxen, is to just have a table in the rwlock with one entry
> per cpu. Each CPU would get one concurrent reader, others would need
> to sleep
yes, it bloats memory usage, and
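A sketch of the table idea under discussion (hypothetical names, in the style of the old big-reader locks; one reader slot per CPU, writers sweep every slot):

	/* Hypothetical per-CPU reader-slot lock: each CPU owns one slot,
	 * a reader takes only its own CPU's slot, a writer must take all
	 * of them. Reads stay uncontended; writes are O(NR_CPUS), and
	 * the lock grows with the CPU count (a real version would also
	 * pad each slot to its own cache line). */
	struct percpu_rwlock {
		spinlock_t slot[NR_CPUS];
	};

	static void percpu_read_lock(struct percpu_rwlock *l)
	{
		/* get_cpu() disables preemption, pinning us to this slot */
		spin_lock(&l->slot[get_cpu()]);
	}

	static void percpu_read_unlock(struct percpu_rwlock *l)
	{
		spin_unlock(&l->slot[smp_processor_id()]);
		put_cpu();
	}

	static void percpu_write_lock(struct percpu_rwlock *l)
	{
		int i;

		for (i = 0; i < NR_CPUS; i++)	/* fixed order avoids ABBA */
			spin_lock(&l->slot[i]);
	}

	static void percpu_write_unlock(struct percpu_rwlock *l)
	{
		int i;

		for (i = NR_CPUS - 1; i >= 0; i--)
			spin_unlock(&l->slot[i]);
	}

Under PREEMPT_RT the slots would be sleeping locks, so the preemption-pinning above is itself part of what is being debated here.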
On Mon, Mar 21, 2005 at 12:23:22AM +0100, Esben Nielsen wrote:
> On Sun, 20 Mar 2005, Paul E. McKenney wrote:
>
> > On Sun, Mar 20, 2005 at 02:29:17PM +0100, Esben Nielsen wrote:
> > > On Fri, 18 Mar 2005, Ingo Molnar wrote:
> > >
> > > > [...]
> > >
> > > I think it can be deterministic (on the long timescale of memory management)
On Sun, 20 Mar 2005, Paul E. McKenney wrote:
> On Sun, Mar 20, 2005 at 02:29:17PM +0100, Esben Nielsen wrote:
> > On Fri, 18 Mar 2005, Ingo Molnar wrote:
> >
> > > [...]
> >
> > I think it can be deterministic (on the long timescale of memory
> > management)
> > anyway: Boost any non-RT task e
On Sun, Mar 20, 2005 at 02:29:17PM +0100, Esben Nielsen wrote:
> On Fri, 18 Mar 2005, Ingo Molnar wrote:
>
> >
> > * Esben Nielsen <[EMAIL PROTECTED]> wrote:
> >
> > > Why should there only be one RCU-reader per CPU at each given
> > > instance? Even on a real-time UP system it would be very
On Sun, Mar 20, 2005 at 01:38:24PM -0800, Bill Huey wrote:
> On Sun, Mar 20, 2005 at 05:57:23PM +0100, Manfred Spraul wrote:
> > That was just one random example.
> > Another one would be :
> >
> > drivers/char/tty_io.c, __do_SAK() contains
> >	read_lock(&tasklist_lock);
> >	task_lock(p);
>
On Sun, Mar 20, 2005 at 05:57:23PM +0100, Manfred Spraul wrote:
> That was just one random example.
> Another one would be :
>
> drivers/char/tty_io.c, __do_SAK() contains
>	read_lock(&tasklist_lock);
>	task_lock(p);
>
> kernel/sys.c, sys_setrlimit contains
>	task_lock(current->group_leader);
Thomas Gleixner wrote:
On Sun, 2005-03-20 at 07:36 +0100, Manfred Spraul wrote:
> cpu 1:
> acquire random networking spin_lock_bh()
>
> cpu 2:
> read_lock(&tasklist_lock) from process context
> interrupt. softirq. within softirq: try to acquire the networking lock.
> * spins.
>
> cpu 1:
> hardware interrupt
> within
On Fri, 18 Mar 2005, Ingo Molnar wrote:
>
> * Esben Nielsen <[EMAIL PROTECTED]> wrote:
>
> > Why should there only be one RCU-reader per CPU at each given
> > instance? Even on a real-time UP system it would be very helpful to
> > have RCU areas to be enterable by several tasks at once. It
On Sun, 2005-03-20 at 07:36 +0100, Manfred Spraul wrote:
> cpu 1:
> acquire random networking spin_lock_bh()
>
> cpu 2:
> read_lock(&tasklist_lock) from process context
> interrupt. softirq. within softirq: try to acquire the networking lock.
> * spins.
>
> cpu 1:
> hardware interrupt
> within hw
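A sketch of how the two CPUs end up waiting on each other. The quoted text cuts off inside the hardware interrupt on cpu 1; the assumption below, that this path write-locks tasklist_lock, is illustrative and not quoted from the thread:

	/* ASSUMED completion of the scenario, for illustration only:
	 *
	 * cpu 1 (process context):
	 *	spin_lock_bh(&some_net_lock);	// net lock held, BHs off
	 *	... hardware interrupt ...
	 *	write_lock_irq(&tasklist_lock);	// ASSUMPTION: spins, since
	 *					// cpu 2 read-holds it
	 * cpu 2 (process context):
	 *	read_lock(&tasklist_lock);	// read-held
	 *	... interrupt, then softirq ...
	 *	spin_lock(&some_net_lock);	// spins: cpu 1 holds it
	 *
	 * cpu 1 cannot finish its interrupt until cpu 2 drops the read
	 * lock; cpu 2 (and its softirq) cannot proceed until cpu 1 drops
	 * the net lock: an ABBA deadlock spanning irq/softirq context. */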
On Mar 19, 2005, at 11:31, Ingo Molnar wrote:
>> What about allowing only as many concurrent readers as there are CPUs?
>
> since a reader may be preempted by a higher prio task, there is no
> linear relationship between CPU utilization and the number of readers
> allowed. You could easily end up having all t
Ingo Molnar wrote:
> which precise locking situation do you mean?

cpu 1:
acquire random networking spin_lock_bh()

cpu 2:
read_lock(&tasklist_lock) from process context
interrupt. softirq. within softirq: try to acquire the networking lock.
* spins.

cpu 1:
hardware interrupt
within hw interrupt: si
* Herbert Xu <[EMAIL PROTECTED]> wrote:
> Ingo Molnar <[EMAIL PROTECTED]> wrote:
> >
> > i really have no intention to allow multiple readers for rt-mutexes. We
> > got away with that so far, and i'd like to keep it so. Imagine 100
> > threads all blocked in the same critical section (holding the
* Manfred Spraul <[EMAIL PROTECTED]> wrote:
> Ingo Molnar wrote:
>
> > read_lock(&rwlock);
> > ...
> > read_lock(&rwlock);
> >
> > are still legal. (it's also done quite often.)
> >
>
> How do you handle the write_lock_irq()/read_lock locks? E.g. the
> tasklist_lock or the fasync_lock.
Ingo Molnar wrote:
> read_lock(&rwlock);
> ...
> read_lock(&rwlock);
>
> are still legal. (it's also done quite often.)

How do you handle the write_lock_irq()/read_lock locks?
E.g. the tasklist_lock or the fasync_lock.
--
Manfred
On Fri, Mar 18, 2005 at 02:22:30PM -0800, Paul E. McKenney wrote:
> On Fri, Mar 18, 2005 at 12:35:17PM -0800, Paul E. McKenney wrote:
> > Compiles, probably dies horribly. "diff" didn't do such a good job
> > on this one, so attaching the raw rcupdate.[hc] files as well.
>
> My prediction was all too accurate. ;-)
On Fri, Mar 18, 2005 at 12:35:17PM -0800, Paul E. McKenney wrote:
> Compiles, probably dies horribly. "diff" didn't do such a good job
> on this one, so attaching the raw rcupdate.[hc] files as well.
My prediction was all too accurate. ;-)
The attached patch at least boots on a 1-CPU x86 box.
Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> i really have no intention to allow multiple readers for rt-mutexes. We
> got away with that so far, and i'd like to keep it so. Imagine 100
> threads all blocked in the same critical section (holding the read-lock)
> when a highprio writer thread comes around
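A sketch of why multiple read-side owners break the rt-mutex priority-inheritance model (structures and helper invented for illustration): with a single owner, boosting is O(1); with N readers, the writer must find and boost each one:

	/* Hypothetical illustration: if an rt-mutex read side allowed N
	 * concurrent owners, write-side priority inheritance would need
	 * a list of all readers and an O(N) boosting pass. */
	struct rt_rwlock {
		raw_spinlock_t		wait_lock;
		struct list_head	readers;	/* tasks read-holding the lock */
	};

	struct reader_entry {
		struct list_head	node;
		struct task_struct	*task;
	};

	static void boost_all_readers(struct rt_rwlock *lock, int writer_prio)
	{
		struct reader_entry *r;

		/* O(readers): with 100 readers inside, the writer's blocking
		 * term grows linearly, and each boosted reader may itself be
		 * blocked on another lock, cascading the boosts further. */
		list_for_each_entry(r, &lock->readers, node)
			if (r->task->prio > writer_prio)	/* smaller = higher prio */
				boost_task_prio(r->task, writer_prio);	/* hypothetical helper */
	}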
On Fri, Mar 18, 2005 at 06:11:26PM +0100, Ingo Molnar wrote:
>
> * Paul E. McKenney <[EMAIL PROTECTED]> wrote:
>
> > For the patch, here are my questions:
> >
> > o What is the best way to select between classic RCU and this
> > scheme?
> >
> > 1. Massive #ifdef across rcupdate.c
* Esben Nielsen <[EMAIL PROTECTED]> wrote:
> Why should there only be one RCU-reader per CPU at each given
> instance? Even on a real-time UP system it would be very helpful to
> have RCU areas to be enterable by several tasks at once. It would
> perform better, both wrt. latencies and throu
* Paul E. McKenney <[EMAIL PROTECTED]> wrote:
> For the patch, here are my questions:
>
> o What is the best way to select between classic RCU and this
> scheme?
>
> 1. Massive #ifdef across rcupdate.c
>
> 2. Create an rcupdate_rt.c and browbeat the build system
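For option 2, the build-system side could look roughly like this (the config symbol is invented for illustration; the thread does not settle on one here):

	# init/Kconfig (hypothetical symbol):
	config RCU_RT
		bool "Preemptible RCU for PREEMPT_RT"
		depends on PREEMPT_RT

	# kernel/Makefile then picks exactly one implementation:
	ifeq ($(CONFIG_RCU_RT),y)
	obj-y += rcupdate_rt.o
	else
	obj-y += rcupdate.o
	endif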
On Fri, 18 Mar 2005, Ingo Molnar wrote:
>
> * Bill Huey <[EMAIL PROTECTED]> wrote:
>
> > I'd like to note another problem. Mingo's current implementation of
> > rt_mutex (super mutex for all blocking synchronization) is still
> > missing reader counts and something like that would have to be
>
On Fri, 18 Mar 2005, Ingo Molnar wrote:
>
> * Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> > [...] How about something like:
> >
> > void
> > rcu_read_lock(void)
> > {
> > 	preempt_disable();
> > 	if (current->rcu_read_lock_nesting++ == 0) {
>
On Fri, Mar 18, 2005 at 08:49:03AM +0100, Ingo Molnar wrote:
>
> * Paul E. McKenney <[EMAIL PROTECTED]> wrote:
>
> > Seems to me that it would be good to have an RCU implementation that
> > meet the requirements of the Real-Time Preemption patch, but that is
> > 100% compatible with the "classic
* Bill Huey <[EMAIL PROTECTED]> wrote:
> I'd like to note another problem. Mingo's current implementation of
> rt_mutex (super mutex for all blocking synchronization) is still
> missing reader counts and something like that would have to be
> implemented if you want to do priority inheritance ove
* Paul E. McKenney <[EMAIL PROTECTED]> wrote:
> > > 	preempt_disable();
> > > 	if (current->rcu_read_lock_nesting++ == 0) {
> > > 		current->rcu_read_lock_ptr =
> > > 			&__get_cpu_var(rcu_data).lock;
> > > 		read_lock(current->rcu_read_lock_ptr);
On Fri, Mar 18, 2005 at 05:17:29AM -0800, Bill Huey wrote:
> On Fri, Mar 18, 2005 at 04:56:41AM -0800, Bill Huey wrote:
> > On Thu, Mar 17, 2005 at 04:20:26PM -0800, Paul E. McKenney wrote:
> > > 5. Scalability -and- Realtime Response.
> > ...
> >
> > > void
> > > rcu_read_lock(void)
> > > {
On Fri, Mar 18, 2005 at 04:56:41AM -0800, Bill Huey wrote:
> On Thu, Mar 17, 2005 at 04:20:26PM -0800, Paul E. McKenney wrote:
> > 5. Scalability -and- Realtime Response.
> ...
>
> > void
> > rcu_read_lock(void)
> > {
> > 	preempt_disable();
> > 	if (current->rcu_
On Fri, Mar 18, 2005 at 11:03:39AM +0100, Ingo Molnar wrote:
>
> there's a problem in #5's rcu_read_lock():
>
> void
> rcu_read_lock(void)
> {
> 	preempt_disable();
> 	if (current->rcu_read_lock_nesting++ == 0) {
> 		curr
On Fri, Mar 18, 2005 at 10:53:27AM +0100, Ingo Molnar wrote:
>
> * Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> > there's one detail on PREEMPT_RT though (which i think you noticed too).
> >
> > Priority inheritance handling can be done in a pretty straightforward
> > way as long as no true read-
On Thu, Mar 17, 2005 at 04:20:26PM -0800, Paul E. McKenney wrote:
> 5. Scalability -and- Realtime Response.
...
> void
> rcu_read_lock(void)
> {
> 	preempt_disable();
> 	if (current->rcu_read_lock_nesting++ == 0) {
> 		current->rcu_re
* Paul E. McKenney <[EMAIL PROTECTED]> wrote:
> 5. Scalability -and- Realtime Response.
>
> The trick here is to keep track of the CPU that did the
> rcu_read_lock() in the task structure. If there is a preemption,
> there will be some slight inefficiency, but the correct lock will be
> release
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> [...] How about something like:
>
> void
> rcu_read_lock(void)
> {
> 	preempt_disable();
> 	if (current->rcu_read_lock_nesting++ == 0) {
> 		current->rcu_read_lock_ptr =
>
there's a problem in #5's rcu_read_lock():
void
rcu_read_lock(void)
{
	preempt_disable();
	if (current->rcu_read_lock_nesting++ == 0) {
		current->rcu_read_lock_ptr =
			&__get_cpu_var(rcu_data).lock;
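For reference, the matching unlock in scheme #5 releases whatever per-CPU lock was recorded at lock time; a sketch assembled from the snippets quoted in this thread, not verbatim from Paul's patch:

	void
	rcu_read_unlock(void)
	{
		preempt_disable();
		if (--current->rcu_read_lock_nesting == 0)
			/* release the lock recorded at rcu_read_lock()
			 * time; correct even if the task has since
			 * migrated to another CPU */
			read_unlock(current->rcu_read_lock_ptr);
		preempt_enable();
	}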
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> there's one detail on PREEMPT_RT though (which i think you noticed too).
>
> Priority inheritance handling can be done in a pretty straightforward
> way as long as no true read-side nesting is allowed for rwsems and
> rwlocks - i.e. there's only one ow
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> * Paul E. McKenney <[EMAIL PROTECTED]> wrote:
>
> > 4. Preemptible read side.
> >
> > RCU read-side critical sections can (in theory, anyway) be quite
> > large, degrading realtime scheduling response. Preemptible RCU
> > read-side critica
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> * Paul E. McKenney <[EMAIL PROTECTED]> wrote:
>
> > I have tested this approach, but in user-level scaffolding. All of
> > these implementations should therefore be regarded with great
> > suspicion: untested, probably don't even compile. Besides which, I certainly can't claim to fully understand the
* Paul E. McKenney <[EMAIL PROTECTED]> wrote:
> I have tested this approach, but in user-level scaffolding. All of
> these implementations should therefore be regarded with great
> suspicion: untested, probably don't even compile. Besides which, I
> certainly can't claim to fully understand the
* Paul E. McKenney <[EMAIL PROTECTED]> wrote:
> 4. Preemptible read side.
>
> RCU read-side critical sections can (in theory, anyway) be quite
> large, degrading realtime scheduling response. Preemptible RCU
> read-side critical sections avoid such degradation. Manual
>
* Paul E. McKenney <[EMAIL PROTECTED]> wrote:
> [I believe that the real-time preemption patch moves code
> from softirq/interrupt to process context, but could easily be
> missing something. If need be, there are ways of handling cases
> where realtime RCU is called from
* Paul E. McKenney <[EMAIL PROTECTED]> wrote:
> Seems to me that it would be good to have an RCU implementation that
> meet the requirements of the Real-Time Preemption patch, but that is
> 100% compatible with the "classic RCU" API. Such an implementation
> must meet a number of requirements, w
Hello!
As promised/threatened earlier in another forum...
Thanx, Paul
The Real-Time Preemption patch modified RCU to permit its read-side
critical sections to be safely preempted.