On Wed, 2014-02-26 at 11:26, Mel Gorman wrote:
> On Tue, Feb 25, 2014 at 10:16:46AM -0800, Davidlohr Bueso wrote:
> >
> > struct kioctx_table;
> > struct mm_struct {
> > - struct vm_area_struct * mmap; /* list of VMAs */
> > + struct vm_area_struct *mmap;		/* list of VMAs */
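
For orientation, the data structures behind the approach look roughly like the sketch below. The constants and field names (VMACACHE_SIZE, vmacache, vmacache_seqnum) follow the names used in this thread; the exact sizes and placement are assumptions for illustration, not the posted code.

/*
 * Sketch only (kernel context assumed): a small per-thread array of
 * recently used VMAs plus a sequence number on both the mm and the
 * task to detect staleness.
 */
#define VMACACHE_BITS   2
#define VMACACHE_SIZE   (1U << VMACACHE_BITS)           /* a handful of slots */
#define VMACACHE_MASK   (VMACACHE_SIZE - 1)
#define VMACACHE_HASH(addr)     (((addr) >> PAGE_SHIFT) & VMACACHE_MASK)

/* added to struct mm_struct: */
        u32 vmacache_seqnum;                            /* bumped when the mm's VMAs change */

/* added to struct task_struct: */
        u32 vmacache_seqnum;                            /* mm seqnum seen at the last lookup */
        struct vm_area_struct *vmacache[VMACACHE_SIZE]; /* per-thread VMA cache */
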
On Tue, Feb 25, 2014 at 10:16:46AM -0800, Davidlohr Bueso wrote:
> From: Davidlohr Bueso
>
> This patch is a continuation of efforts trying to optimize find_vma(),
> avoiding potentially expensive rbtree walks to locate a vma upon faults.
> The original approach (https://lkml.org/lkml/2013/11/1/410), where the
On Wed, Feb 26, 2014 at 09:50:48AM +0100, Peter Zijlstra wrote:
> On Tue, Feb 25, 2014 at 10:16:46AM -0800, Davidlohr Bueso wrote:
> > +void vmacache_invalidate_all(void)
> > +{
> > + struct task_struct *g, *p;
> > +
> > + rcu_read_lock();
> > + for_each_process_thread(g, p) {
> > +
On Tue, Feb 25, 2014 at 10:16:46AM -0800, Davidlohr Bueso wrote:
> +void vmacache_invalidate_all(void)
> +{
> + struct task_struct *g, *p;
> +
> + rcu_read_lock();
> + for_each_process_thread(g, p) {
> + /*
> + * Only flush the vmacache pointers as the
> +
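
The function quoted here is the slow path of the invalidation scheme: an ordinary unmap only bumps the owning mm's sequence number, and the walk over every thread is reserved for the rare case where that counter wraps, which is the cost the review comments in this thread focus on. A sketch, assuming the fields from the earlier sketch (kernel context):

/* Sketch of the scheme under review, not the posted patch. */
void vmacache_invalidate_all(void)
{
        struct task_struct *g, *p;

        rcu_read_lock();
        for_each_process_thread(g, p) {
                /*
                 * Clear only the cached pointers; each task picks up the
                 * new seqnum lazily on its next lookup.
                 */
                memset(p->vmacache, 0, sizeof(p->vmacache));
        }
        rcu_read_unlock();
}

static inline void vmacache_invalidate(struct mm_struct *mm)
{
        mm->vmacache_seqnum++;
        /*
         * On overflow, a stale entry anywhere in the system could alias a
         * reused sequence number, hence the full flush above.
         */
        if (unlikely(mm->vmacache_seqnum == 0))
                vmacache_invalidate_all();
}
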
On Tue, Feb 25, 2014 at 8:04 PM, Davidlohr Bueso wrote:
> On Tue, 2014-02-25 at 18:04 -0800, Michel Lespinasse wrote:
>> On Tue, Feb 25, 2014 at 10:16 AM, Davidlohr Bueso wrote:
>> > This patch is a continuation of efforts trying to optimize find_vma(),
>> > avoiding potentially expensive rbtree
On Tue, 2014-02-25 at 18:04 -0800, Michel Lespinasse wrote:
> On Tue, Feb 25, 2014 at 10:16 AM, Davidlohr Bueso wrote:
> > This patch is a continuation of efforts trying to optimize find_vma(),
> > avoiding potentially expensive rbtree walks to locate a vma upon faults.
> > The original approach (https://lkml.org/lkml/2013/11/1/410), where the
On Tue, Feb 25, 2014 at 10:16 AM, Davidlohr Bueso wrote:
> This patch is a continuation of efforts trying to optimize find_vma(),
> avoiding potentially expensive rbtree walks to locate a vma upon faults.
> The original approach (https://lkml.org/lkml/2013/11/1/410), where the
> largest vma was also cached, ended up being too specific and
On Tue, Feb 25, 2014 at 10:37:34AM -0800, Davidlohr Bueso wrote:
> On Tue, 2014-02-25 at 19:35 +0100, Peter Zijlstra wrote:
> > On Tue, Feb 25, 2014 at 10:16:46AM -0800, Davidlohr Bueso wrote:
> > > +void vmacache_update(struct mm_struct *mm, unsigned long addr,
> > > + struct vm_area_struct *newvma)
On Tue, Feb 25, 2014 at 11:04 AM, Davidlohr Bueso wrote:
>
>> So it walks completely the wrong list of threads.
>
> But we still need to deal with the rest of the tasks in the system, so
> anytime there's an overflow we need to nullify all cached vmas, not just
> current's. Am I missing something?
On Tue, 2014-02-25 at 11:04 -0800, Davidlohr Bueso wrote:
> On Tue, 2014-02-25 at 10:37 -0800, Linus Torvalds wrote:
> > On Tue, Feb 25, 2014 at 10:16 AM, Davidlohr Bueso wrote:
> > > index a17621c..14396bf 100644
> > > --- a/kernel/fork.c
> > > +++ b/kernel/fork.c
> > > @@ -363,7 +363,12 @@ static int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm)
On Tue, 2014-02-25 at 10:37 -0800, Linus Torvalds wrote:
> On Tue, Feb 25, 2014 at 10:16 AM, Davidlohr Bueso wrote:
> > index a17621c..14396bf 100644
> > --- a/kernel/fork.c
> > +++ b/kernel/fork.c
> > @@ -363,7 +363,12 @@ static int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm)
> >
On Tue, Feb 25, 2014 at 10:37 AM, Linus Torvalds wrote:
>
> - clear all the cache entries (of the new 'struct task_struct'! - so
> not in dup_mmap, but make sure it's zeroed when allocating!)
>
> - set vmcache_seqnum to 0 in dup_mmap (since any sequence number is
> fine when it got invalidated,
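
A sketch of what those two bullet points could look like, with hypothetical helper names and the fields from the earlier sketch: the child's cache slots are zeroed when its task_struct is set up, and the copied mm simply starts from sequence number 0 in dup_mmap().

/* Sketch of the suggestion above, not the posted patch. */

/* When the child's task_struct is set up (not in dup_mmap()): */
static inline void vmacache_flush(struct task_struct *tsk)
{
        memset(tsk->vmacache, 0, sizeof(tsk->vmacache));
}

/*
 * In dup_mmap(): with the child's slots already NULL, any starting
 * sequence number is valid, so 0 is the simplest choice.
 */
static inline void vmacache_init_mm(struct mm_struct *mm)
{
        mm->vmacache_seqnum = 0;
}
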
On Tue, 2014-02-25 at 13:24 -0500, Rik van Riel wrote:
> On 02/25/2014 01:16 PM, Davidlohr Bueso wrote:
>
> > The proposed approach is to keep the current cache and adding a small, per
> > thread, LRU cache. By keeping the mm->mmap_cache,
>
> This bit of the changelog may want updating :)
bah,
On Tue, Feb 25, 2014 at 10:16 AM, Davidlohr Bueso wrote:
> index a17621c..14396bf 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -363,7 +363,12 @@ static int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm)
>
> mm->locked_vm = 0;
> mm->mmap = NULL;
> - mm->
On Tue, 2014-02-25 at 19:35 +0100, Peter Zijlstra wrote:
> On Tue, Feb 25, 2014 at 10:16:46AM -0800, Davidlohr Bueso wrote:
> > +void vmacache_update(struct mm_struct *mm, unsigned long addr,
> > + struct vm_area_struct *newvma)
> > +{
> > + /*
> > + * Hash based on the page number
On Tue, Feb 25, 2014 at 10:16:46AM -0800, Davidlohr Bueso wrote:
> +void vmacache_update(struct mm_struct *mm, unsigned long addr,
> + struct vm_area_struct *newvma)
> +{
> + /*
> + * Hash based on the page number. Provides a good
> + * hit rate for workloads with good
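
The update path quoted above indexes the per-thread slots by the page number of the address, so lookups of the same page always map to the same slot and a small working set spreads across the few available slots, which is where the good hit rate for workloads with good locality comes from. Roughly (the guard against caching for a foreign mm is an assumption, not part of the quoted code):

/* Sketch of the hash-indexed update, not the posted patch. */
void vmacache_update(struct mm_struct *mm, unsigned long addr,
                     struct vm_area_struct *newvma)
{
        /* Index by page number; see VMACACHE_HASH in the earlier sketch. */
        int idx = VMACACHE_HASH(addr);

        /* Only cache VMAs belonging to the current task's own mm. */
        if (current->mm == mm)
                current->vmacache[idx] = newvma;
}
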
On 02/25/2014 01:16 PM, Davidlohr Bueso wrote:
> The proposed approach is to keep the current cache and adding a small, per
> thread, LRU cache. By keeping the mm->mmap_cache,
This bit of the changelog may want updating :)
> Changes from v1 (https://lkml.org/lkml/2014/2/21/8):
> - Removed the
From: Davidlohr Bueso
This patch is a continuation of efforts trying to optimize find_vma(),
avoiding potentially expensive rbtree walks to locate a vma upon faults.
The original approach (https://lkml.org/lkml/2013/11/1/410), where the
largest vma was also cached, ended up being too specific and
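
Putting the pieces together, the intended fast path is: validate the per-thread cache against the mm's sequence number, probe the few slots for a VMA containing the faulting address, and fall back to the rbtree walk only on a miss. A sketch with the same assumed names as above:

/* Sketch of the lookup fast path, not the posted patch. */
struct vm_area_struct *vmacache_find(struct mm_struct *mm, unsigned long addr)
{
        int i;

        /*
         * A stale seqnum means the cache was invalidated since this task
         * last looked: resynchronize, flush, and report a miss.
         */
        if (current->vmacache_seqnum != mm->vmacache_seqnum) {
                current->vmacache_seqnum = mm->vmacache_seqnum;
                memset(current->vmacache, 0, sizeof(current->vmacache));
                return NULL;
        }

        for (i = 0; i < VMACACHE_SIZE; i++) {
                struct vm_area_struct *vma = current->vmacache[i];

                if (vma && vma->vm_start <= addr && vma->vm_end > addr)
                        return vma;
        }

        return NULL;    /* miss: the caller falls back to the rbtree walk */
}

find_vma() would then probe this cache before walking mm->mm_rb and call vmacache_update() once the rbtree walk has located the right VMA.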