[PATCH v7 28/28] mm/filemap: Convert page wait queues to be folios

2021-04-09 Thread Matthew Wilcox (Oracle)
Reinforce that if we're waiting for a bit in a struct page, that's actually in the head page by changing the type from page to folio. Increases the size of cachefiles by two bytes, but the kernel core is unchanged in size. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Ack

Re: [PATCH v6 27/27] mm/filemap: Convert page wait queues to be folios

2021-04-06 Thread Christoph Hellwig
On Wed, Mar 31, 2021 at 07:47:28PM +0100, Matthew Wilcox (Oracle) wrote: > Reinforce that if we're waiting for a bit in a struct page, that's > actually in the head page by changing the type from page to folio. > Increases the size of cachefiles by two bytes, but the kernel core > is unchanged in s

[PATCH v6 27/27] mm/filemap: Convert page wait queues to be folios

2021-03-31 Thread Matthew Wilcox (Oracle)
Reinforce that if we're waiting for a bit in a struct page, that's actually in the head page by changing the type from page to folio. Increases the size of cachefiles by two bytes, but the kernel core is unchanged in size. Signed-off-by: Matthew Wilcox (Oracle) --- fs/cachefiles/rdwr.c| 16 +

Re: [PATCH v5 26/27] mm/filemap: Convert page wait queues to be folios

2021-03-20 Thread kernel test robot
Hi "Matthew, Thank you for the patch! Perhaps something to improve: [auto build test WARNING on next-20210319] [cannot apply to linux/master linus/master hnaz-linux-mm/master v5.12-rc3 v5.12-rc2 v5.12-rc1 v5.12-rc3] [If your patch is applied to the wrong git tree, kindly drop us a note. And when

[PATCH v5 26/27] mm/filemap: Convert page wait queues to be folios

2021-03-19 Thread Matthew Wilcox (Oracle)
Reinforce that if we're waiting for a bit in a struct page, that's actually in the head page by changing the type from page to folio. Increases the size of cachefiles by two bytes, but the kernel core is unchanged in size. Signed-off-by: Matthew Wilcox (Oracle) --- fs/cachefiles/rdwr.c| 16 +

[PATCH v4 24/25] mm/filemap: Convert page wait queues to be folios

2021-03-04 Thread Matthew Wilcox (Oracle)
Reinforce that if we're waiting for a bit in a struct page, that's actually in the head page by changing the type from page to folio. Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/pagemap.h | 6 +++--- mm/filemap.c| 30 -- 2 files changed, 19 i

[PATCH v3 24/25] mm/filemap: Convert page wait queues to be folios

2021-01-27 Thread Matthew Wilcox (Oracle)
Reinforce that if we're waiting for a bit in a struct page, that's actually in the head page by changing the type from page to folio. Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/pagemap.h | 6 +++--- mm/filemap.c| 40 +--- 2 files cha

[PATCH v2 26/27] mm/filemap: Convert page wait queues to be folios

2021-01-18 Thread Matthew Wilcox (Oracle)
Reinforce that if we're waiting for a bit in a struct page, that's actually in the head page by changing the type from page to folio. Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/pagemap.h | 6 +++--- mm/filemap.c| 30 -- 2 files changed, 19 i

Re: [PATCH v2] staging/android: use multiple futex wait queues

2019-03-27 Thread Greg Kroah-Hartman
On Fri, Feb 15, 2019 at 08:44:01AM +0100, Hugo Lefeuvre wrote: > Use multiple per-offset wait queues instead of one big wait queue per > region. > > Signed-off-by: Hugo Lefeuvre > --- > Changes in v2: > - dereference the it pointer instead of wait_queue (which is

Re: [PATCH] staging/android: use multiple futex wait queues

2019-02-16 Thread Hugo Lefeuvre
Hi, > Have you tested this? I have finally set up a cuttlefish test env and tested both my first patch set[0] and this patch (v2). My first patch set works fine. I have nothing to say about it. > Noticed any performance speedups or slow downs? This patch doesn't work. The boot process goes we

Re: [PATCH] staging/android: use multiple futex wait queues

2019-02-14 Thread Hugo Lefeuvre
> > + list_for_each_entry(it, &data->futex_wait_queue_list, list) { > > + if (wait_queue->offset == arg->offset) { > ^^ > You meant "it->offset". Right, this is not good. Fixed in v2. Thanks for the feedback! regards, Hugo -- Hug

[PATCH v2] staging/android: use multiple futex wait queues

2019-02-14 Thread Hugo Lefeuvre
Use multiple per-offset wait queues instead of one big wait queue per region. Signed-off-by: Hugo Lefeuvre --- Changes in v2: - dereference the it pointer instead of wait_queue (which is not set yet) in handle_vsoc_cond_wait() --- drivers/staging/android/TODO | 4 --- drivers/staging

Re: [PATCH] staging/android: use multiple futex wait queues

2019-02-14 Thread Dan Carpenter
On Thu, Feb 14, 2019 at 06:34:59PM +0100, Hugo Lefeuvre wrote: > @@ -402,6 +410,7 @@ static int handle_vsoc_cond_wait(struct file *filp, > struct vsoc_cond_wait *arg) > struct vsoc_region_data *data = vsoc_dev.regions_data + region_number; > int ret = 0; > struct vsoc_device_regi

Re: [PATCH] staging/android: use multiple futex wait queues

2019-02-14 Thread Hugo Lefeuvre
> > Use multiple per-offset wait queues instead of one big wait queue per > > region. > > > > Signed-off-by: Hugo Lefeuvre > > Have you tested this? > > Noticed any performance speedups or slow downs? Not yet. I have started to set up a cuttlefish test env

Re: [PATCH] staging/android: use multiple futex wait queues

2019-02-14 Thread Greg Kroah-Hartman
On Thu, Feb 14, 2019 at 06:34:59PM +0100, Hugo Lefeuvre wrote: > Use multiple per-offset wait queues instead of one big wait queue per > region. > > Signed-off-by: Hugo Lefeuvre Have you tested this? Noticed any performance speedups or slow downs? thanks, greg k-h

[PATCH] staging/android: use multiple futex wait queues

2019-02-14 Thread Hugo Lefeuvre
Use multiple per-offset wait queues instead of one big wait queue per region. Signed-off-by: Hugo Lefeuvre --- This patch is based on the simplify handle_vsoc_cond_wait patchset, currently under review: https://lkml.org/lkml/2019/2/7/870 --- drivers/staging/android/TODO | 4 --- drivers

Re: [PATCH v2 2/3] locking/percpu-rwsem: Rework writer block/wake to not use wait-queues

2016-12-05 Thread Davidlohr Bueso
On Mon, 05 Dec 2016, Oleg Nesterov wrote: Yes. But percpu_down_write() should not be used after exit_notify(), so we can rely on rcu_read_lock(), release_task()->call_rcu(delayed_put_task_struct) can't be called until an exiting task passes exit_notify(). But then we probably need WARN_ON(curre

Re: [PATCH v2 2/3] locking/percpu-rwsem: Rework writer block/wake to not use wait-queues

2016-12-05 Thread Oleg Nesterov
On 12/05, Peter Zijlstra wrote: > > > + for (;;) { > > + set_current_state(TASK_UNINTERRUPTIBLE); > > + > > + if (readers_active_check(sem)) > > + break; > > + > > + schedule(); > > + } > > + > > + rcu_assign_pointer(sem->writer, NULL); > > And

Re: [PATCH v2 2/3] locking/percpu-rwsem: Rework writer block/wake to not use wait-queues

2016-12-05 Thread Oleg Nesterov
On 12/02, Davidlohr Bueso wrote: > > @@ -102,8 +103,13 @@ void __percpu_up_read(struct percpu_rw_semaphore *sem) >*/ > __this_cpu_dec(*sem->read_count); > > + rcu_read_lock(); > + writer = rcu_dereference(sem->writer); > + > /* Prod writer to recheck readers_active */ >

Re: [PATCH v2 2/3] locking/percpu-rwsem: Rework writer block/wake to not use wait-queues

2016-12-05 Thread Oleg Nesterov
On 12/05, Oleg Nesterov wrote: > > Yes, but on a second thought task_rcu_dereference() won't really help, I forgot to explain why, see below. > #define xxx_wait_event(xxx, event) { > // comment to explain why > WARN_ON(current->exit_state); Otherwise this proces

Re: [PATCH v2 2/3] locking/percpu-rwsem: Rework writer block/wake to not use wait-queues

2016-12-05 Thread Oleg Nesterov
Davidlohr, Peter, I'll try to read this patch later, just one note. On 12/05, Peter Zijlstra wrote: > > On Fri, Dec 02, 2016 at 06:18:39PM -0800, Davidlohr Bueso wrote: > > @@ -102,8 +103,13 @@ void __percpu_up_read(struct percpu_rw_semaphore *sem) > > */ > > __this_cpu_dec(*sem->read_cou

Re: [PATCH v2 2/3] locking/percpu-rwsem: Rework writer block/wake to not use wait-queues

2016-12-05 Thread Peter Zijlstra
On Fri, Dec 02, 2016 at 06:18:39PM -0800, Davidlohr Bueso wrote: > @@ -102,8 +103,13 @@ void __percpu_up_read(struct percpu_rw_semaphore *sem) >*/ > __this_cpu_dec(*sem->read_count); > > + rcu_read_lock(); > + writer = rcu_dereference(sem->writer); Don't think this is correc

[PATCH v2 2/3] locking/percpu-rwsem: Rework writer block/wake to not use wait-queues

2016-12-02 Thread Davidlohr Bueso
The use of any kind of wait queue is overkill for pcpu-rwsems. While one option would be to use the less heavy simple (swait) flavor, this is still too much for what pcpu-rwsems need. For one, we do not care about any sort of queuing in that the only (rare) time writers (and readers, for that

Re: [PATCH 2/3] locking/percpu-rwsem: Replace bulky wait-queues with swait

2016-11-21 Thread Davidlohr Bueso
On Mon, 21 Nov 2016, Oleg Nesterov wrote: On 11/18, Davidlohr Bueso wrote: @@ -12,7 +12,7 @@ struct percpu_rw_semaphore { struct rcu_sync rss; unsigned int __percpu *read_count; struct rw_semaphore rw_sem; - wait_queue_head_t writer; + st

Re: [PATCH 2/3] locking/percpu-rwsem: Replace bulky wait-queues with swait

2016-11-21 Thread Oleg Nesterov
On 11/18, Davidlohr Bueso wrote: > > @@ -12,7 +12,7 @@ struct percpu_rw_semaphore { > struct rcu_sync rss; > unsigned int __percpu *read_count; > struct rw_semaphore rw_sem; > - wait_queue_head_t writer; > + struct swait_queue_head writer; I won't argu

[PATCH 2/3] locking/percpu-rwsem: Replace bulky wait-queues with swait

2016-11-18 Thread Davidlohr Bueso
In the case of the percpu-rwsem, they don't need any of the fancy/bulky features, such as custom callbacks or fine grained wakeups. Users that can convert to simple wait-queues are encouraged to do so for the various rt and (indirect) performance benefits. Signed-off-by: Davidlohr

[PATCH v4 2/6] sbitmap: allocate wait queues on a specific node

2016-09-17 Thread Omar Sandoval
From: Omar Sandoval The original bt_alloc() we converted from was using kzalloc(), not kzalloc_node(), to allocate the wait queues. This was probably an oversight, so fix it for sbitmap_queue_init_node(). Signed-off-by: Omar Sandoval --- lib/sbitmap.c | 2 +- 1 file changed, 1 insertion(+), 1

[PATCH v3 2/5] sbitmap: allocate wait queues on a specific node

2016-09-09 Thread Omar Sandoval
From: Omar Sandoval The original bt_alloc() we converted from was using kzalloc(), not kzalloc_node(), to allocate the wait queues. This was probably an oversight, so fix it for sbitmap_queue_init_node(). Signed-off-by: Omar Sandoval --- lib/sbitmap.c | 2 +- 1 file changed, 1 insertion(+), 1

[PATCH v2 2/5] scale_bitmap: allocate wait queues on a specific node

2016-09-07 Thread Omar Sandoval
From: Omar Sandoval The original `bt_alloc()` we converted from was using `kzalloc()`, not `kzalloc_node()`, to allocate the wait queues. This was probably an oversight, so fix it for `scale_bitmap_queue_init_node()`. Signed-off-by: Omar Sandoval --- lib/scale_bitmap.c | 2 +- 1 file changed

Re: [PATCH v2] sched/completion: convert completions to use simple wait queues

2016-05-23 Thread Daniel Wagner
[Sorry for the late response. I was a few days on holiday] On 05/16/2016 10:38 PM, Luiz Capitulino wrote: > On Thu, 12 May 2016 16:08:34 +0200 > Daniel Wagner wrote: > >> In short, I haven't figured out yet why the kernel builds get slightly >> slower. > > You're doing make -j 200, right? How

Re: [PATCH v2] sched/completion: convert completions to use simple wait queues

2016-05-16 Thread Luiz Capitulino
On Thu, 12 May 2016 16:08:34 +0200 Daniel Wagner wrote: > In short, I haven't figured out yet why the kernel builds get slightly > slower. You're doing make -j 200, right? How many cores do you have? Couldn't it be that you're saturating your CPUs? You could try make -j, or some process creat

Re: [PATCH v2] sched/completion: convert completions to use simple wait queues

2016-05-16 Thread Luiz Capitulino
On Thu, 28 Apr 2016 14:57:24 +0200 Daniel Wagner wrote: > From: Daniel Wagner > > Completions have no long lasting callbacks and therefore do not need > the complex waitqueue variant. Use simple waitqueues which reduces > the contention on the waitqueue lock. > > This was a carry forward from

Re: [PATCH v2] sched/completion: convert completions to use simple wait queues

2016-05-12 Thread Daniel Wagner
On 04/28/2016 02:57 PM, Daniel Wagner wrote: > As one can see above in the swait_stat output, the fork() path is > using completion. A histogram of a fork bomb (1000 forks) benchmark > shows a slight performance drop by 4%. > > [wagi@handman completion-test-5 (master)]$ cat forky-4.6.0-rc4.txt | p

Re: [PATCH v2] sched/completion: convert completions to use simple wait queues

2016-04-28 Thread Daniel Wagner
On 04/28/2016 02:57 PM, Daniel Wagner wrote: > Only one complete_all() user could been identified so far, which happens > to be drivers/base/power/main.c. Several waiters appear when suspend > to disk or mem is executed. BTW, this is what I get when doing a 'echo "disk" > /sys/power/state' on a 4

[PATCH v2] sched/completion: convert completions to use simple wait queues

2016-04-28 Thread Daniel Wagner
From: Daniel Wagner Completions have no long lasting callbacks and therefore do not need the complex waitqueue variant. Use simple waitqueues which reduces the contention on the waitqueue lock. This was a carry forward from v3.10-rt, with some RT specific chunks, dropped, and updated to align w

Re: [RFC v1] sched/completion: convert completions to use simple wait queues

2016-03-30 Thread Daniel Wagner
On 03/30/2016 05:21 PM, Peter Zijlstra wrote: > On Wed, Mar 30, 2016 at 05:17:29PM +0200, Sebastian Andrzej Siewior wrote: >> On 03/30/2016 05:07 PM, Peter Zijlstra wrote: >>> On Wed, Mar 30, 2016 at 04:53:05PM +0200, Daniel Wagner wrote: From: Daniel Wagner Completions have no long

Re: [RFC v1] sched/completion: convert completions to use simple wait queues

2016-03-30 Thread Peter Zijlstra
On Wed, Mar 30, 2016 at 05:17:29PM +0200, Sebastian Andrzej Siewior wrote: > On 03/30/2016 05:07 PM, Peter Zijlstra wrote: > > On Wed, Mar 30, 2016 at 04:53:05PM +0200, Daniel Wagner wrote: > >> From: Daniel Wagner > >> > >> Completions have no long lasting callbacks and therefore do not need > >>

Re: [RFC v1] sched/completion: convert completions to use simple wait queues

2016-03-30 Thread Sebastian Andrzej Siewior
On 03/30/2016 05:07 PM, Peter Zijlstra wrote: > On Wed, Mar 30, 2016 at 04:53:05PM +0200, Daniel Wagner wrote: >> From: Daniel Wagner >> >> Completions have no long lasting callbacks and therefore do not need >> the complex waitqueue variant. Use simple waitqueues which reduces >> the contention

Re: [RFC v1] sched/completion: convert completions to use simple wait queues

2016-03-30 Thread Peter Zijlstra
On Wed, Mar 30, 2016 at 04:53:05PM +0200, Daniel Wagner wrote: > From: Daniel Wagner > > Completions have no long lasting callbacks and therefore do not need > the complex waitqueue variant. Use simple waitqueues which reduces > the contention on the waitqueue lock. Changelog really should have

[RFC v1] sched/completion: convert completions to use simple wait queues

2016-03-30 Thread Daniel Wagner
From: Daniel Wagner Completions have no long lasting callbacks and therefore do not need the complex waitqueue variant. Use simple waitqueues which reduces the contention on the waitqueue lock. This was a carry forward from v3.10-rt, with some RT specific chunks, dropped, and updated to align w

[RFC v0] sched/completion: convert completions to use simple wait queues

2016-03-08 Thread Daniel Wagner
From: Daniel Wagner Completions have no long lasting callbacks and therefore do not need the complex waitqueue variant. Use simple waitqueues which reduces the contention on the waitqueue lock. This was a carry forward from v3.10-rt, with some RT specific chunks, dropped, and updated to align w

[tip:sched/core] rcu: Use simple wait queues where possible in rcutree

2016-02-25 Thread tip-bot for Paul Gortmaker
queues where possible in rcutree As of commit dae6e64d2bcfd ("rcu: Introduce proper blocking to no-CBs kthreads GP waits") the RCU subsystem started making use of wait queues. Here we convert all additions of RCU wait queues to use simple wait queues, since they don't need the ex

[PATCH v8 5/5] rcu: use simple wait queues where possible in rcutree

2016-02-19 Thread Daniel Wagner
From: Paul Gortmaker As of commit dae6e64d2bcfd4b06304ab864c7e3a4f6b5fedf4 ("rcu: Introduce proper blocking to no-CBs kthreads GP waits") the RCU subsystem started making use of wait queues. Here we convert all additions of RCU wait queues to use simple wait queues, since they don't

[PATCH tip v7 7/7] rcu: use simple wait queues where possible in rcutree

2016-01-29 Thread Daniel Wagner
From: Paul Gortmaker As of commit dae6e64d2bcfd4b06304ab864c7e3a4f6b5fedf4 ("rcu: Introduce proper blocking to no-CBs kthreads GP waits") the RCU subsystem started making use of wait queues. Here we convert all additions of RCU wait queues to use simple wait queues, since they don't

[PATCH tip v6 5/5] rcu: use simple wait queues where possible in rcutree

2016-01-28 Thread Daniel Wagner
From: Paul Gortmaker As of commit dae6e64d2bcfd4b06304ab864c7e3a4f6b5fedf4 ("rcu: Introduce proper blocking to no-CBs kthreads GP waits") the RCU subsystem started making use of wait queues. Here we convert all additions of RCU wait queues to use simple wait queues, since they don't

[PATCH tip v5 5/5] rcu: use simple wait queues where possible in rcutree

2015-11-30 Thread Daniel Wagner
From: Paul Gortmaker As of commit dae6e64d2bcfd4b06304ab864c7e3a4f6b5fedf4 ("rcu: Introduce proper blocking to no-CBs kthreads GP waits") the RCU subsystem started making use of wait queues. Here we convert all additions of RCU wait queues to use simple wait queues, since they don't

[PATCH tip v4 5/5] rcu: use simple wait queues where possible in rcutree

2015-11-24 Thread Daniel Wagner
From: Paul Gortmaker As of commit dae6e64d2bcfd4b06304ab864c7e3a4f6b5fedf4 ("rcu: Introduce proper blocking to no-CBs kthreads GP waits") the RCU subsystem started making use of wait queues. Here we convert all additions of RCU wait queues to use simple wait queues, since they don't

[PATCH v3 4/4] rcu: use simple wait queues where possible in rcutree

2015-10-20 Thread Daniel Wagner
From: Paul Gortmaker As of commit dae6e64d2bcfd4b06304ab864c7e3a4f6b5fedf4 ("rcu: Introduce proper blocking to no-CBs kthreads GP waits") the RCU subsystem started making use of wait queues. Here we convert all additions of RCU wait queues to use simple wait queues, since they don't

Re: [PATCH v2 3/4] rcu: use simple wait queues where possible in rcutree

2015-10-20 Thread Daniel Wagner
Hi Paul, On 10/20/2015 01:31 AM, Paul E. McKenney wrote: > On Wed, Oct 14, 2015 at 09:43:21AM +0200, Daniel Wagner wrote: >> From: Paul Gortmaker >> @@ -4178,7 +4178,8 @@ static void __init rcu_init_one(struct rcu_state *rsp, >> } >> } >> >> -init_waitqueue_head(&rsp->gp_wq)

Re: [PATCH v2 3/4] rcu: use simple wait queues where possible in rcutree

2015-10-19 Thread Paul E. McKenney
On Wed, Oct 14, 2015 at 09:43:21AM +0200, Daniel Wagner wrote: > From: Paul Gortmaker > > As of commit dae6e64d2bcfd4b06304ab864c7e3a4f6b5fedf4 ("rcu: Introduce > proper blocking to no-CBs kthreads GP waits") the RCU subsystem started > making use of wait queues.

[PATCH v2 3/4] rcu: use simple wait queues where possible in rcutree

2015-10-14 Thread Daniel Wagner
From: Paul Gortmaker As of commit dae6e64d2bcfd4b06304ab864c7e3a4f6b5fedf4 ("rcu: Introduce proper blocking to no-CBs kthreads GP waits") the RCU subsystem started making use of wait queues. Here we convert all additions of RCU wait queues to use simple wait queues, since they don't

Re: [PATCH v1 3/8] sched/completion: convert completions to use simple wait queues

2015-10-12 Thread Daniel Wagner
On 10/12/2015 01:58 PM, Peter Zijlstra wrote: > On Mon, Oct 12, 2015 at 12:03:06PM +0200, Daniel Wagner wrote: >> On 10/12/2015 11:17 AM, Daniel Wagner wrote: >>> On 09/09/2015 04:26 PM, Peter Zijlstra wrote: On Wed, Sep 09, 2015 at 02:05:29PM +0200, Daniel Wagner wrote: > @@ -50,10 +50,10

Re: [PATCH v1 3/8] sched/completion: convert completions to use simple wait queues

2015-10-12 Thread Peter Zijlstra
On Mon, Oct 12, 2015 at 12:03:06PM +0200, Daniel Wagner wrote: > On 10/12/2015 11:17 AM, Daniel Wagner wrote: > > On 09/09/2015 04:26 PM, Peter Zijlstra wrote: > >> On Wed, Sep 09, 2015 at 02:05:29PM +0200, Daniel Wagner wrote: > >>> @@ -50,10 +50,10 @@ void complete_all(struct completion *x) > >>>

Re: [PATCH v1 3/8] sched/completion: convert completions to use simple wait queues

2015-10-12 Thread Daniel Wagner
On 10/12/2015 11:17 AM, Daniel Wagner wrote: > On 09/09/2015 04:26 PM, Peter Zijlstra wrote: >> On Wed, Sep 09, 2015 at 02:05:29PM +0200, Daniel Wagner wrote: >>> @@ -50,10 +50,10 @@ void complete_all(struct completion *x) >>> { >>> unsigned long flags; >>> >>> - spin_lock_irqsave(&x->wait

Re: [PATCH v1 3/8] sched/completion: convert completions to use simple wait queues

2015-10-12 Thread Daniel Wagner
On 09/09/2015 04:26 PM, Peter Zijlstra wrote: > On Wed, Sep 09, 2015 at 02:05:29PM +0200, Daniel Wagner wrote: >> @@ -50,10 +50,10 @@ void complete_all(struct completion *x) >> { >> unsigned long flags; >> >> -spin_lock_irqsave(&x->wait.lock, flags); >> +raw_spin_lock_irqsave(&x->wa

Re: [PATCH v1 3/8] sched/completion: convert completions to use simple wait queues

2015-09-09 Thread Peter Zijlstra
On Wed, Sep 09, 2015 at 02:05:29PM +0200, Daniel Wagner wrote: > @@ -50,10 +50,10 @@ void complete_all(struct completion *x) > { > unsigned long flags; > > - spin_lock_irqsave(&x->wait.lock, flags); > + raw_spin_lock_irqsave(&x->wait.lock, flags); > x->done += UINT_MAX/2; > -

[PATCH v1 3/8] sched/completion: convert completions to use simple wait queues

2015-09-09 Thread Daniel Wagner
From: Paul Gortmaker Completions have no long lasting callbacks and therefore do not need the complex waitqueue variant. Use simple waitqueues which reduces the contention on the waitqueue lock. This was a carry forward from v3.10-rt, with some RT specific chunks, dropped, and updated to align

[PATCH v1 4/8] rcu: use simple wait queues where possible in rcutree

2015-09-09 Thread Daniel Wagner
From: Paul Gortmaker As of commit dae6e64d2bcfd4b06304ab864c7e3a4f6b5fedf4 ("rcu: Introduce proper blocking to no-CBs kthreads GP waits") the RCU subsystem started making use of wait queues. Here we convert all additions of RCU wait queues to use simple wait queues, since they don't

[RFC v0 3/3] rcu: use simple wait queues where possible in rcutree

2015-08-05 Thread Daniel Wagner
From: Paul Gortmaker As of commit dae6e64d2bcfd4b06304ab864c7e3a4f6b5fedf4 ("rcu: Introduce proper blocking to no-CBs kthreads GP waits") the RCU subsystem started making use of wait queues. Here we convert all additions of RCU wait queues to use simple wait queues, since they don't

[RFC v0 2/3] sched/completion: convert completions to use simple wait queues

2015-08-05 Thread Daniel Wagner
From: Paul Gortmaker Completions have no long lasting callbacks and therefore do not need the complex waitqueue variant. Use simple waitqueues which reduces the contention on the waitqueue lock. This was a carry forward from v3.10-rt, with some RT specific chunks, dropped, and updated to align

[PATCH 4/7] sched/completion: convert completions to use simple wait queues

2014-10-17 Thread Paul Gortmaker
Completions have no long lasting callbacks and therefore do not need the complex waitqueue variant. Use simple waitqueues which reduces the contention on the waitqueue lock. This was a carry forward from v3.10-rt, with some RT specific chunks, dropped, and updated to align with names that were ch

[PATCH 5/7] rcu: use simple wait queues where possible in rcutree

2014-10-17 Thread Paul Gortmaker
As of commit dae6e64d2bcfd4b06304ab864c7e3a4f6b5fedf4 ("rcu: Introduce proper blocking to no-CBs kthreads GP waits") the RCU subsystem started making use of wait queues. Here we convert all additions of RCU wait queues to use simple wait queues, since they don't need the extra

Re: [RFC PATCH 1/1] rcu: Use separate wait queues for leaders and followers

2014-07-28 Thread Paul E. McKenney
hat all the kthreads wait on the same wait > queue. > When we try to wake up the leader threads on the wait queue, we also try to > wake > up the follower threads because of which there is still wake up overhead. > > This commit tries to avoid that by using separate wait queues f

Re: [RFC PATCH 1/1] rcu: Use separate wait queues for leaders and followers

2014-07-28 Thread Pranith Kumar
he kthreads wait on the same wait > queue. > When we try to wake up the leader threads on the wait queue, we also try to > wake > up the follower threads because of which there is still wake up overhead. > > This commit tries to avoid that by using separate wait queues for the leade

[RFC PATCH 1/1] rcu: Use separate wait queues for leaders and followers

2014-07-28 Thread Pranith Kumar
n the wait queue, we also try to wake up the follower threads because of which there is still wake up overhead. This commit tries to avoid that by using separate wait queues for the leaders and followers. Signed-off-by: Pranith Kumar --- kernel/rcu/tree.h| 3 ++- kernel/rcu/tree_plu

Re: [PATCH 3/3] rcu: use simple wait queues where possible in rcutree

2013-12-11 Thread Paul E. McKenney
On Wed, Dec 11, 2013 at 08:06:39PM -0500, Paul Gortmaker wrote: > From: Thomas Gleixner > > As of commit dae6e64d2bcfd4b06304ab864c7e3a4f6b5fedf4 ("rcu: Introduce > proper blocking to no-CBs kthreads GP waits") the rcu subsystem started > making use of wait queues.

[PATCH 0/3] Introduce simple wait queues

2013-12-11 Thread Paul Gortmaker
functionality, giving it a smaller footprint vs. the normal wait queue. For non-RT, we can still benefit from the footprint reduction factor. Here in this series, we deploy the simple wait queues in two places: (1) for completions, and (2) in RCU processing. As can be seen below from the bloat

[PATCH 2/3] sched/core: convert completions to use simple wait queues

2013-12-11 Thread Paul Gortmaker
From: Thomas Gleixner Completions have no long lasting callbacks and therefore do not need the complex waitqueue variant. Use simple waitqueues which reduces the contention on the waitqueue lock. Signed-off-by: Thomas Gleixner [PG: carry forward from v3.10-rt, drop RT specific chunks, align wi

[PATCH 3/3] rcu: use simple wait queues where possible in rcutree

2013-12-11 Thread Paul Gortmaker
From: Thomas Gleixner As of commit dae6e64d2bcfd4b06304ab864c7e3a4f6b5fedf4 ("rcu: Introduce proper blocking to no-CBs kthreads GP waits") the rcu subsystem started making use of wait queues. Here we convert all additions of rcu wait queues to use simple wait queues, since they don't

Wait Queues

2000-12-11 Thread Carlo Pagano
I am trying to modify a driver that worked great on 2.2.16 to 2.4.0-x. My old code was: static struct wait_queue *roundrobin_wait; static struct wait_queue *task_stop_wait; static struct tq_struct roundrobin_task; static struct timer_list timeout_timer; ... init_timer(&timeout_tim

Re: Assistance requested in demystifying wait queues.

2000-12-05 Thread Eli Carter
Andrew Reitz wrote: > > Hello, > > I'm absolutely green when it comes to Linux kernel development, and so > working on a school project to port a TCP/IP-based service into the kernel > has been quite challenging (but also interesting)! Currently, I'm absolutely > mystified regarding how the "wait

Assistance requested in demystifying wait queues.

2000-12-04 Thread Andrew Reitz
Hello, I'm absolutely green when it comes to Linux kernel development, and so working on a school project to port a TCP/IP-based service into the kernel has been quite challenging (but also interesting)! Currently, I'm absolutely mystified regarding how the "wait queue" subsystem works. I've been r

Wait queues and a race condition on 2.2.x

2000-09-20 Thread Lee Cremeans
I'm working on a driver for the Hi/fn 7751 encryption chip, and I've run into a weird problem and I'm not entirely sure how to fix it. The driver was originally written for NT, but has been broken out into OS-specific and OS-independent parts, and the Linux-specific part calls code in the OS-i