On Tue, Dec 08, 2020 at 05:13:01PM -0800, paul...@kernel.org wrote:
> From: "Paul E. McKenney"
>
> This commit adds vmalloc() support to mem_dump_obj(). Note that the
> vmalloc_dump_obj() function combines the checking and dumping, in
> contrast with the split between kmem_valid_obj() and kmem_d
On Wed, Dec 09, 2020 at 06:51:20PM +0100, Vlastimil Babka wrote:
> On 12/9/20 2:13 AM, paul...@kernel.org wrote:
> > From: "Paul E. McKenney"
> >
> > This commit adds vmalloc() support to mem_dump_obj(). Note that the
> > vmalloc_dump_obj() function combines the checking and dumping, in
> > cont
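The snippet above contrasts a combined check-and-dump helper (vmalloc_dump_obj()) with the split kmem_valid_obj()/kmem_dump_obj() pair. Below is a small userspace sketch of the combined pattern only; the region table and names are invented for illustration and this is not the kernel implementation. The helper returns false when the pointer is not covered, so a caller could fall back to other dumpers, and prints what it knows otherwise.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for a vmalloc region descriptor. */
struct region {
        void *addr;
        size_t size;
};

/* A tiny, static "allocated regions" table, for demonstration only. */
static struct region regions[4];
static int nr_regions;

/* Combined check-and-dump: returns false if ptr is not covered,
 * otherwise prints what is known about the containing region. */
static bool region_dump_obj(const void *ptr)
{
        for (int i = 0; i < nr_regions; i++) {
                const char *base = regions[i].addr;

                if ((const char *)ptr >= base &&
                    (const char *)ptr < base + regions[i].size) {
                        printf("pointer %p: region start %p, size %zu, offset %td\n",
                               ptr, (void *)base, regions[i].size,
                               (const char *)ptr - base);
                        return true;
                }
        }
        return false;   /* Caller falls back to other dumpers. */
}

int main(void)
{
        regions[nr_regions++] = (struct region){ malloc(128), 128 };

        void *inside = (char *)regions[0].addr + 40;
        int on_stack;

        if (!region_dump_obj(inside))
                printf("%p not recognized\n", inside);
        if (!region_dump_obj(&on_stack))
                printf("%p not recognized\n", (void *)&on_stack);

        free(regions[0].addr);
        return 0;
}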
On Wed, Dec 09, 2020 at 11:42:39AM -0800, Paul E. McKenney wrote:
> On Wed, Dec 09, 2020 at 08:36:37PM +0100, Uladzislau Rezki wrote:
> > On Tue, Dec 08, 2020 at 05:13:01PM -0800, paul...@kernel.org wrote:
> > > From: "Paul E. McKenney"
> > >
> >
probes: Init kprobes in early_initcall")
> Signed-off-by: Uladzislau Rezki (Sony)
> ---
> include/linux/rcupdate.h | 6 ++
> init/main.c | 1 +
> kernel/rcu/tasks.h | 26 ++
> 3 files changed, 29 insertions(+), 4 deletion
On Wed, Dec 09, 2020 at 07:26:13PM -0800, Paul E. McKenney wrote:
> On Wed, Dec 09, 2020 at 09:27:31PM +0100, Uladzislau Rezki (Sony) wrote:
> > Initialize the RCU-tasks earlier, before *_initcall() callbacks are
> > invoked. Do it after the workqueue subsystem is up and running. Th
On Thu, Nov 19, 2020 at 09:40:29AM +0800, Huang, Ying wrote:
> Uladzislau Rezki writes:
>
> > On Wed, Nov 18, 2020 at 10:44:13AM +0800, huang ying wrote:
> >> On Tue, Nov 17, 2020 at 9:04 PM Uladzislau Rezki wrote:
> >> >
> >> > On Tue, Nov 1
On Thu, Nov 19, 2020 at 01:49:34PM -0800, Paul E. McKenney wrote:
> On Wed, Nov 18, 2020 at 11:53:09AM +0800, qiang.zh...@windriver.com wrote:
> > From: Zqiang
> >
> > Add kasan_record_aux_stack function for kvfree_call_rcu function to
> > record call stacks.
> >
> > Signed-off-by: Zqiang
>
>
> On Fri, Nov 20, 2020 at 12:59 PM Uladzislau Rezki wrote:
> >
> > On Thu, Nov 19, 2020 at 01:49:34PM -0800, Paul E. McKenney wrote:
> > > On Wed, Nov 18, 2020 at 11:53:09AM +0800, qiang.zh...@windriver.com wrote:
> > > > From: Zqiang
> > > >
On Fri, Nov 20, 2020 at 10:34:19AM +0800, Huang, Ying wrote:
> Uladzislau Rezki writes:
>
> > On Thu, Nov 19, 2020 at 09:40:29AM +0800, Huang, Ying wrote:
> >> Uladzislau Rezki writes:
> >>
> >> > On Wed, Nov 18, 2020 at 10:44:13AM +0800, huang ying w
> >> >>
> >> >> That's the typical long latency avoidance method.
> >> >>
> >> >> > The question is, which value we should use as a batch_threshold: 100,
> >> >> > 1000, etc.
> >> >>
> >> >> I think we can do some measurement to determine it?
> >> >>
> >> > Hmm.. looking at it one more time i
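The exchange above is about picking a batch threshold (the thread debates values such as 100 or 1000) so that a long-running reclaim loop does not produce one long latency spike. Below is a minimal userspace model of that pattern; the threshold value and function names are made up for illustration, and sched_yield() stands in for a kernel preemption point.

#include <sched.h>
#include <stdio.h>

#define BATCH_THRESHOLD 1000    /* Arbitrary; the thread debates 100 vs. 1000. */

/* Stand-in for freeing one lazily-freed block. */
static void free_one(int i)
{
        (void)i;
}

/* Process all pending work, but yield after every BATCH_THRESHOLD items
 * so that other runnable tasks do not see one long latency spike. */
static void drain(int pending)
{
        int done = 0;

        for (int i = 0; i < pending; i++) {
                free_one(i);

                if (++done >= BATCH_THRESHOLD) {
                        done = 0;
                        sched_yield();  /* Userspace analogue of a preemption point. */
                }
        }
}

int main(void)
{
        drain(1000000);
        printf("drained\n");
        return 0;
}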
On Tue, Nov 24, 2020 at 11:55:51AM -0800, Paul E. McKenney wrote:
> On Fri, Nov 20, 2020 at 12:49:17PM +0100, Uladzislau Rezki (Sony) wrote:
> > Open-code the kvfree_rcu_arg_2() macro, so readability
> > and review look better at first glance. Moreover, that
> >
On Wed, Nov 25, 2020 at 08:52:58AM +0800, Huang, Ying wrote:
> Uladzislau Rezki writes:
> >> >> > - lazy_max_pages() can be slightly decreased, if there are existing
> >> >> > workloads which suffer from such a large value. It would be good to ge
> Memory mappings inside the kernel allocated with vmalloc() are in a
> predictable order and packed tightly toward the low addresses, except
> for per-cpu areas which start from the top of the vmalloc area. With the
> new kernel boot parameter 'randomize_vmalloc=1', the entire area is
> used randomly to make th
On Mon, Mar 15, 2021 at 11:04:42AM +0200, Topi Miettinen wrote:
> On 14.3.2021 19.23, Uladzislau Rezki wrote:
> > > Memory mappings inside kernel allocated with vmalloc() are in
> > > predictable order and packed tightly toward the low addresses, except
> > > for per-
> On 14.3.2021 19.23, Uladzislau Rezki wrote:
> > Also, using the vmalloc test driver I can trigger a kernel BUG:
> >
> >
> > [ 24.627577] kernel BUG at mm/vmalloc.c:1272!
>
> It seems that most tests indeed fail. Perhaps the vmalloc subsystem isn't
>
On Wed, Jan 20, 2021 at 08:57:57PM +0100, Sebastian Andrzej Siewior wrote:
> On 2021-01-20 17:21:46 [+0100], Uladzislau Rezki (Sony) wrote:
> > For a single argument we can directly request a page from a caller
> > context when a "carry page block" is run out of free spot
On Wed, Jan 20, 2021 at 01:54:03PM -0800, Paul E. McKenney wrote:
> On Wed, Jan 20, 2021 at 08:57:57PM +0100, Sebastian Andrzej Siewior wrote:
> > On 2021-01-20 17:21:46 [+0100], Uladzislau Rezki (Sony) wrote:
> > > For a single argument we can directly request a page from a ca
On Thu, Jan 21, 2021 at 07:07:40AM -0800, Paul E. McKenney wrote:
> On Thu, Jan 21, 2021 at 02:35:10PM +0100, Uladzislau Rezki wrote:
> > On Wed, Jan 20, 2021 at 01:54:03PM -0800, Paul E. McKenney wrote:
> > > On Wed, Jan 20, 2021 at 08:57:57PM +0100, Sebastian And
Hello, Qiang,
> On Thu, Jan 21, 2021 at 02:49:49PM +0800, qiang.zh...@windriver.com wrote:
> > From: Zqiang
> >
> > If CPUs go offline, the corresponding krcp's page cache can
> > not be used until the CPU comes back online, or maybe the CPU
> > will never go online again; this commit therefore fre
On Fri, Jan 22, 2021 at 01:44:36AM +, Zhang, Qiang wrote:
>
>
> ____
> From: Uladzislau Rezki
> Sent: January 22, 2021 4:26
> To: Zhang, Qiang
> Cc: Paul E. McKenney; r...@vger.kernel.org; linux-kernel@vger.kernel.org;
> ure...@gmail.c
> On 2021-01-21 13:38:34 [+0100], Uladzislau Rezki wrote:
> > __get_free_page() returns "unsigned long" whereas a bnode is a pointer
> > to a kvfree_rcu_bulk_data struct; without a cast the compiler will
> > emit a warning.
>
> Yes, learned about it,
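The warning discussed above comes from assigning an integer return value to a pointer type. A standalone illustration is below; uintptr_t and malloc() stand in for the kernel's page allocator, and the struct is a simplified stand-in mirroring the name used in the thread, not the real definition.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-in for the bulk-data block discussed above. */
struct kvfree_rcu_bulk_data {
        unsigned long nr_records;
        void *records[510];
};

/* Mimics __get_free_page(): returns the page address as an integer,
 * as unsigned long (which matches the pointer width on Linux targets). */
static unsigned long fake_get_free_page(void)
{
        return (unsigned long)(uintptr_t)malloc(4096);
}

int main(void)
{
        /* Without the explicit cast, "assignment makes pointer from integer"
         * is reported; the cast documents the integer-to-pointer conversion. */
        struct kvfree_rcu_bulk_data *bnode =
                (struct kvfree_rcu_bulk_data *)fake_get_free_page();

        if (bnode) {
                bnode->nr_records = 0;
                printf("block at %p\n", (void *)bnode);
                free(bnode);
        }
        return 0;
}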
&nlru->lock);
>
> - call_rcu(&old->rcu, kvfree_rcu_local);
> + kvfree_rcu(old, rcu);
> return 0;
> }
>
> --
> 2.30.0.478.g8a0d178c01-goog
>
Reviewed-by: Uladzislau Rezki
--
Vlad Rezki
On Thu, Feb 04, 2021 at 02:04:27PM -0800, Paul E. McKenney wrote:
> On Fri, Jan 29, 2021 at 09:05:05PM +0100, Uladzislau Rezki (Sony) wrote:
> > Running an rcuscale stress-suite can lead to "Out of memory"
> > on a system. This can happen under high memory pressure w
Hello, Zqiang.
Thank you for your v4!
Some small nits, see below:
> From: Zqiang
>
> > Add an operation to free each per-cpu krcp's existing page cache when
> > the system is under memory pressure.
>
> Signed-off-by: Zqiang
> Co-developed-by: Uladzislau Rezki (Sony)
> ---
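The patch above frees each per-CPU krcp page cache when the system is under memory pressure. Below is a rough userspace model of that idea only; plain arrays stand in for per-CPU data and the "pressure" callback is invoked by hand, so this is not the kernel shrinker code.

#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS   4
#define CACHE_MAX 8

/* Per-"CPU" cache of spare pages, modeled with plain arrays. */
static void *page_cache[NR_CPUS][CACHE_MAX];
static int nr_cached[NR_CPUS];

static void cache_fill(int cpu)
{
        while (nr_cached[cpu] < CACHE_MAX)
                page_cache[cpu][nr_cached[cpu]++] = malloc(4096);
}

/* Drain one CPU's cache; returns how many pages were released. */
static int cache_drain(int cpu)
{
        int freed = 0;

        while (nr_cached[cpu] > 0) {
                free(page_cache[cpu][--nr_cached[cpu]]);
                freed++;
        }
        return freed;
}

/* Called when "memory pressure" is detected: give everything back. */
static void on_memory_pressure(void)
{
        int total = 0;

        for (int cpu = 0; cpu < NR_CPUS; cpu++)
                total += cache_drain(cpu);
        printf("released %d cached pages\n", total);
}

int main(void)
{
        for (int cpu = 0; cpu < NR_CPUS; cpu++)
                cache_fill(cpu);

        on_memory_pressure();
        return 0;
}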
>
>
>
> From: Uladzislau Rezki
> Sent: January 25, 2021 5:57
> To: Zhang, Qiang
> Cc: Uladzislau Rezki (Sony); LKML; RCU; Paul E . McKenney; Michael Ellerman;
> Andrew Morton; Daniel Axtens; Frederic Weisbecker; Neeraj Upadhyay; Joel
> F
On Tue, Jan 26, 2021 at 09:33:40AM +, Zhang, Qiang wrote:
>
>
> ____
> From: Uladzislau Rezki
> Sent: January 25, 2021 21:49
> To: Zhang, Qiang
> Cc: Uladzislau Rezki; LKML; RCU; Paul E . McKenney; Michael Ellerman; Andrew
> Morton;
>
> From: Uladzislau Rezki
> Sent: January 22, 2021 22:31
> To: Zhang, Qiang
> Cc: Uladzislau Rezki; Paul E. McKenney; r...@vger.kernel.org;
> linux-kernel@vger.kernel.org
> Subject: Re: Reply: [PATCH] rcu: Release per-cpu krcp page cache when
entation.
>
Thanks for your suggestion. I am not sure if htmldocs supports something
like "__maybe_unused", but tend to say that it does not. See below the
patch:
From 65ecc7c58810c963c02e0596ce2e5758c54ef55d Mon Sep 17 00:00:00 2001
From: "Uladzislau Rezki (Sony)"
Da
On Sun, Jan 24, 2021 at 02:21:07AM +, Zhang, Qiang wrote:
>
>
> ____
> From: Uladzislau Rezki
> Sent: January 22, 2021 22:31
> To: Zhang, Qiang
> Cc: Uladzislau Rezki; Paul E. McKenney; r...@vger.kernel.org;
> linux-kernel@vger.kernel.
Hello, Zhang.
> >
> >From: Uladzislau Rezki (Sony)
> >Sent: January 21, 2021 0:21
> >To: LKML; RCU; Paul E . McKenney; Michael Ellerman
> >Cc: Andrew Morton; Daniel Axtens; Frederic Weisbecker; Neeraj Upadhyay;
> >Joel
> On Wed 20-01-21 17:21:46, Uladzislau Rezki (Sony) wrote:
> > For a single argument we can directly request a page from a caller
> > context when a "carry page block" is run out of free spots. Instead
> > of hitting a slow path we can request an extra page by dem
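The idea above is that, for the single-argument kvfree_rcu() path, a fresh page can be requested directly from the caller's (sleepable) context once the current "carry page block" has no free slots, rather than falling back to a slow path. The sketch below is a simplified userspace model of that refill-on-demand shape only; the names are invented and malloc() stands in for the page allocator.

#include <stdio.h>
#include <stdlib.h>

#define SLOTS_PER_BLOCK 4       /* Tiny on purpose, to force refills. */

/* A "carry block" holds pointers queued for deferred freeing. */
struct carry_block {
        struct carry_block *next;
        int nr;
        void *slots[SLOTS_PER_BLOCK];
};

static struct carry_block *head;

/* Queue a pointer; if the current block is full, allocate a new block
 * right here in the caller context instead of taking a slower fallback. */
static int queue_ptr(void *ptr)
{
        if (!head || head->nr == SLOTS_PER_BLOCK) {
                struct carry_block *b = calloc(1, sizeof(*b));

                if (!b)
                        return -1;      /* Real code would fall back to a slow path. */
                b->next = head;
                head = b;
        }
        head->slots[head->nr++] = ptr;
        return 0;
}

int main(void)
{
        for (int i = 0; i < 10; i++)
                queue_ptr(malloc(32));

        int blocks = 0;
        for (struct carry_block *b = head; b; b = b->next)
                blocks++;
        printf("10 pointers queued across %d blocks\n", blocks);

        /* Demo cleanup. */
        while (head) {
                struct carry_block *b = head;

                head = head->next;
                for (int i = 0; i < b->nr; i++)
                        free(b->slots[i]);
                free(b);
        }
        return 0;
}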
On Mon, Jan 25, 2021 at 04:39:43PM +0100, Michal Hocko wrote:
> On Mon 25-01-21 15:31:50, Uladzislau Rezki wrote:
> > > On Wed 20-01-21 17:21:46, Uladzislau Rezki (Sony) wrote:
> > > > For a single argument we can directly request a page from a caller
> > > >
On Wed, Jan 27, 2021 at 09:00:27AM +, Zhang, Qiang wrote:
>
>
> ____
> From: Uladzislau Rezki
> Sent: January 26, 2021 22:07
> To: Zhang, Qiang
> Cc: Uladzislau Rezki; Paul E. McKenney; r...@vger.kernel.org;
> linux-kernel@vger.ker
On Mon, Jan 25, 2021 at 05:25:59PM +0100, Uladzislau Rezki wrote:
> On Mon, Jan 25, 2021 at 04:39:43PM +0100, Michal Hocko wrote:
> > On Mon 25-01-21 15:31:50, Uladzislau Rezki wrote:
> > > > On Wed 20-01-21 17:21:46, Uladzislau Rezki (Sony) wrote:
> > > > > F
On Thu, Jan 28, 2021 at 04:17:01PM +0100, Michal Hocko wrote:
> On Thu 28-01-21 16:11:52, Uladzislau Rezki wrote:
> > On Mon, Jan 25, 2021 at 05:25:59PM +0100, Uladzislau Rezki wrote:
> > > On Mon, Jan 25, 2021 at 04:39:43PM +0100, Michal Hocko wrote:
> > > > On Mon
On Tue, Jan 05, 2021 at 06:56:59AM -0800, Paul E. McKenney wrote:
> On Tue, Jan 05, 2021 at 02:14:41PM +0100, Uladzislau Rezki wrote:
> > Dear, Lukas.
> >
> > > Dear Uladzislau,
> > >
> > > in commit 538fc2ee870a3 ("rcu: Introduce kfree_rcu() sin
> Add self tests for checking RCU-tasks API functionality.
> They cover:
> - wait API functions;
> - invocation/completion of call_rcu_tasks*().
>
> Self-tests are run when the CONFIG_PROVE_RCU config option is set.
>
> Signed-off-by: Uladzislau Rezki (Sony)
> ---
struct seq_file *m, struct vm_struct *v)
BTW, if navigation over both lists is an issue, for example when there
are multiple heavy readers of /proc/vmallocinfo, I think it makes sense
to implement RCU-safe list iteration and get rid of both locks.
As for the patch: Reviewed-by: Uladzislau Rezki (Sony)
Thanks!
--
Vlad Rezki
On Sun, Dec 13, 2020 at 09:51:34PM +, Matthew Wilcox wrote:
> On Sun, Dec 13, 2020 at 07:39:36PM +0100, Uladzislau Rezki wrote:
> > On Sun, Dec 13, 2020 at 01:08:43PM -0500, Waiman Long wrote:
> > > When multiple locks are acquired, they should be released in reverse
> >
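The thread above starts from the rule that when multiple locks are acquired they should be released in reverse order. A minimal pthread illustration of that discipline (the lock names are arbitrary):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

int main(void)
{
        /* Acquire in a fixed order: A, then B. */
        pthread_mutex_lock(&lock_a);
        pthread_mutex_lock(&lock_b);

        printf("both locks held\n");

        /* Release in the reverse order of acquisition: B, then A. */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);

        return 0;
}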
On Mon, Dec 14, 2020 at 03:37:46PM +, Matthew Wilcox wrote:
> On Mon, Dec 14, 2020 at 04:11:28PM +0100, Uladzislau Rezki wrote:
> > On Sun, Dec 13, 2020 at 09:51:34PM +, Matthew Wilcox wrote:
> > > If we need to iterate the list efficiently, I'd suggest getting ri
> On Thu, Nov 26, 2020 at 05:44:28PM +1100, Stephen Rothwell wrote:
> > Hi all,
> >
> > After merging the rcu tree, today's linux-next build (htmldocs) produced
> > these warnings:
> >
> > include/linux/rcupdate.h:872: warning: Excess function parameter 'ptr'
> > description in 'kfree_rcu'
> > i
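The htmldocs warning quoted above ("Excess function parameter 'ptr' description in 'kfree_rcu'") appears when a kernel-doc comment documents a parameter name that the definition below it does not take. The toy example below reproduces the shape of the problem in plain C; the macro is invented and is not the real kfree_rcu(). The fix is simply to keep the @-lines in sync with the actual argument names.

#include <stdlib.h>

/**
 * demo_free_deferred - toy two-argument free wrapper
 * @ptr: pointer to the object to free
 * @field: name of the embedded bookkeeping field (unused in this toy)
 *
 * If this comment described a parameter the macro below did not take,
 * scripts/kernel-doc would warn about an "Excess function parameter".
 */
#define demo_free_deferred(ptr, field) free(ptr)

int main(void)
{
        int *p = malloc(sizeof(*p));

        demo_free_deferred(p, unused_field);
        return 0;
}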
Rezki
From ed5c294addcb472be0d5c3619c5a7e0e9d34c3c5 Mon Sep 17 00:00:00 2001
From: Uladzislau Rezki
Date: Wed, 5 Aug 2015 16:20:50 +0200
Subject: [PATCH] sched: check pinned tasks before balance
The problem is that there are pinned tasks in the system
which cannot be migrated to other CPUs while
> Recently a discussion about stability and performance of a system
> involving a high rate of kfree_rcu() calls surfaced on the list [1]
> which led to another discussion how to prepare for this situation.
>
> This patch adds basic batching support for kfree_rcu(). It is "basic"
> because we do n
On Wed, Oct 02, 2019 at 11:23:06AM +1000, Daniel Axtens wrote:
> Hi,
>
> >>/*
> >> * Find a place in the tree where VA potentially will be
> >> * inserted, unless it is merged with its sibling/siblings.
> >> @@ -741,6 +752,10 @@ merge_or_add_vmap_area(struct vmap_area *va,
> >>
't really make sense.
>
> Fixes: 82dd23e84be3 ("mm/vmalloc.c: preload a CPU with one object for
> split purpose")
> Cc: Uladzislau Rezki (Sony)
> Signed-off-by: Daniel Wagner
> ---
> mm/vmalloc.c | 9 +++--
> 1 file changed, 3 insertions(+), 6 deletio
Hello, Daniel.
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index a3c70e275f4e..9fb7a16f42ae 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -690,8 +690,19 @@ merge_or_add_vmap_area(struct vmap_area *va,
> struct list_head *next;
> struct rb_node **link;
> struct rb_node *p
> > Hello, Joel.
> >
> > First of all, thank you for improving it. I also noticed high pressure
> > on the RCU machinery while performing some vmalloc tests when a kfree_rcu()
> > flood occurred. Therefore I got rid of using kfree_rcu() there.
>
> Replying a bit late due to overseas conference travel
On Mon, Oct 07, 2019 at 07:36:44PM +0200, Sebastian Andrzej Siewior wrote:
> On 2019-10-07 18:56:11 [+0200], Uladzislau Rezki wrote:
> > Actually there is a high lock contention on vmap_area_lock, because it
> > is still global. You can have a look at last slide:
On Mon, Oct 07, 2019 at 11:44:20PM +0200, Uladzislau Rezki wrote:
> On Mon, Oct 07, 2019 at 07:36:44PM +0200, Sebastian Andrzej Siewior wrote:
> > On 2019-10-07 18:56:11 [+0200], Uladzislau Rezki wrote:
> > > Actually there is a high lock contention on vmap_area_lock, because
On Fri, Oct 04, 2019 at 01:20:38PM -0400, Joel Fernandes wrote:
> On Tue, Oct 01, 2019 at 01:27:02PM +0200, Uladzislau Rezki wrote:
> [snip]
> > > > I have just a small question related to workloads and performance
> > > > evaluation.
> > > > Are you awa
Hello, Daniel.
On Wed, Oct 09, 2019 at 08:05:39AM +0200, Daniel Wagner wrote:
> On Tue, Oct 08, 2019 at 06:04:59PM +0200, Uladzislau Rezki wrote:
> > > so, we do not guarantee; instead we minimize the number of allocations
> > > with the GFP_NOWAIT flag. For example on my 4x
On Fri, Oct 04, 2019 at 05:37:28PM +0200, Sebastian Andrzej Siewior wrote:
> If you post something that is related to PREEMPT_RT please keep tglx and
> me in Cc.
>
> On 2019-10-03 11:09:06 [+0200], Daniel Wagner wrote:
> > Replace preempt_enable() and preempt_disable() with the vmap_area_lock
> >
>
> You could have been migrated to another CPU while
> memory was being allocated.
>
It is true that we can migrate, since we allow preemption
when allocating. But it does not really matter on which CPU an
allocation occurs, or whether we migrate or not.
If we land on another CPU or still stay on
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index a3c70e275f4e..9fb7a16f42ae 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -690,8 +690,19 @@ merge_or_add_vmap_area(struct vmap_area *va,
> struct list_head *next;
> struct rb_node **link;
> struct rb_node *parent;
> + u
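The merge_or_add_vmap_area() hunks quoted above are about coalescing a freed range with its immediate neighbours. The standalone sketch below keeps only the core merge decision, on a sorted array of [start, end) ranges; the kernel uses an rbtree plus a linked list instead, and all names here are illustrative.

#include <stdio.h>

struct range {
        unsigned long start;
        unsigned long end;      /* exclusive */
};

/* Free ranges kept sorted by start address. */
static struct range free_list[16];
static int nr_free;

/* Insert [start, end) and merge with the previous/next range when they touch. */
static void insert_and_merge(unsigned long start, unsigned long end)
{
        int i = 0;

        while (i < nr_free && free_list[i].start < start)
                i++;

        /* Merge forward into the next range if adjacent. */
        if (i < nr_free && end == free_list[i].start) {
                free_list[i].start = start;
        } else {
                for (int j = nr_free; j > i; j--)
                        free_list[j] = free_list[j - 1];
                free_list[i] = (struct range){ start, end };
                nr_free++;
        }

        /* Merge backward into the previous range if adjacent. */
        if (i > 0 && free_list[i - 1].end == free_list[i].start) {
                free_list[i - 1].end = free_list[i].end;
                for (int j = i; j < nr_free - 1; j++)
                        free_list[j] = free_list[j + 1];
                nr_free--;
        }
}

int main(void)
{
        insert_and_merge(0x1000, 0x2000);
        insert_and_merge(0x3000, 0x4000);
        insert_and_merge(0x2000, 0x3000);       /* Bridges the two: one big range. */

        for (int i = 0; i < nr_free; i++)
                printf("[0x%lx, 0x%lx)\n", free_list[i].start, free_list[i].end);
        return 0;
}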
Hello, Daniel, Sebastian.
> > On Fri, Oct 04, 2019 at 06:30:42PM +0200, Sebastian Andrzej Siewior wrote:
> > > On 2019-10-04 18:20:41 [+0200], Uladzislau Rezki wrote:
> > > > If we have migrate_disable/enable, then, i think preempt_enable/disable
> > > > s
On Mon, Oct 07, 2019 at 06:34:43PM +0200, Daniel Wagner wrote:
> On Mon, Oct 07, 2019 at 06:23:30PM +0200, Uladzislau Rezki wrote:
> > Hello, Daniel, Sebastian.
> >
> > > > On Fri, Oct 04, 2019 at 06:30:42PM +0200, Sebastian Andrzej Siewior
> > > > wrote:
in "busy" tree
> C) "purge_list" is only used when vmap_area is in
>vmap_purge_list
>
> 2) Eliminate "flags".
> Since only one flag VM_VM_AREA is being used, and the same
> thing can be done by judging whether "vm"
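The cleanup quoted above observes that several vmap_area fields are only meaningful in particular states, and that the single remaining flag can be replaced by testing whether the vm pointer is set. Below is a compact illustration of the "infer state from a pointer rather than a flags bit" idea in plain C; the field and type names are simplified stand-ins, not the kernel definitions.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct vm_struct { const char *name; };

/* Simplified descriptor: no "flags" member.  Whether the area is in use
 * is inferred from whether it has a vm_struct attached. */
struct area {
        unsigned long start;
        unsigned long end;
        struct vm_struct *vm;   /* NULL while the area is free */
};

static bool area_in_use(const struct area *a)
{
        return a->vm != NULL;   /* replaces a VM_VM_AREA-style flag test */
}

int main(void)
{
        struct vm_struct vm = { "demo mapping" };
        struct area busy = { 0x1000, 0x2000, &vm };
        struct area idle = { 0x2000, 0x4000, NULL };

        printf("busy in use: %d, idle in use: %d\n",
               area_in_use(&busy), area_in_use(&idle));
        return 0;
}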
in "busy" tree
> C) "purge_list" is only used when vmap_area is in
>vmap_purge_list
>
> 2) Eliminate "flags".
> Since only one flag VM_VM_AREA is being used, and the same
> thing can be done by judging whether "vm&quo
> >
> > - local_irq_save(*flags); // For safely calling this_cpu_ptr().
> > + local_irq_save(*flags); /* For safely calling this_cpu_ptr(). */
>
> And here as well. ;-)
>
OK. For me it works either way. I can stick to "//" :)
--
Vlad Rezki
On Fri, May 01, 2020 at 02:27:49PM -0700, Paul E. McKenney wrote:
> On Tue, Apr 28, 2020 at 10:58:48PM +0200, Uladzislau Rezki (Sony) wrote:
> > Cache some extra objects per CPU. During the reclaim process
> > some pages are cached instead of being released, by linking them
> > into th
On Fri, May 01, 2020 at 03:25:24PM -0700, Paul E. McKenney wrote:
> On Tue, Apr 28, 2020 at 10:58:49PM +0200, Uladzislau Rezki (Sony) wrote:
> > Document the rcutree.rcu_min_cached_objs sysfs kernel parameter.
> >
> > Signed-off-by: Uladzislau Rezki (Sony)
>
> Could y
On Fri, May 01, 2020 at 04:03:59PM -0700, Paul E. McKenney wrote:
> On Tue, Apr 28, 2020 at 10:59:00PM +0200, Uladzislau Rezki (Sony) wrote:
> > Move the inlined kvfree_call_rcu() function out of the
> > header file. This step is preparation for head-less
> > support.
>
On Sun, May 03, 2020 at 08:27:00PM -0400, Joel Fernandes wrote:
> On Fri, May 01, 2020 at 04:06:38PM -0700, Paul E. McKenney wrote:
> > On Tue, Apr 28, 2020 at 10:59:01PM +0200, Uladzislau Rezki (Sony) wrote:
> > > Make a kvfree_call_rcu() function to support head-less
> >
> > > On Tue, Apr 28, 2020 at 10:58:59PM +0200, Uladzislau Rezki (Sony) wrote:
> > > > > From: "Joel Fernandes (Google)"
> > > > >
> > > > > Handle cases where the object being kvfree_rcu()'d is not aligned
> >
On Fri, May 01, 2020 at 03:39:09PM -0700, Paul E. McKenney wrote:
> On Tue, Apr 28, 2020 at 10:58:58PM +0200, Uladzislau Rezki (Sony) wrote:
> > Update kvfree_call_rcu() with head-less support; it
> > means an object without any rcu_head structure can be
> > reclaimed
> > >
> > > If we are not doing single-pointer allocation, then that would also
> > > eliminate
> > > entering the low-level page allocator for single-pointer allocations.
> > >
> > > Or did you mean entry into the allocator for the full-page allocations
> > > related to the pointer array for PR
> > @@ -3072,21 +3105,34 @@ static inline bool queue_kfree_rcu_work(struct
> > kfree_rcu_cpu *krcp)
> > krwp = &(krcp->krw_arr[i]);
> >
> > /*
> > -* Try to detach bhead or head and attach it over any
> > +* Try to detach bkvhead or head and attach
> >
> > For the single-argument case we can drop the lock before entering the page
> > allocator. Because it follows the might_sleep() annotation we avoid having
> > a situation where a spinlock (an rt_mutex on RT) is taken from atomic context.
> >
> > Since the lock is dropped the current context can be interru
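The point above is that, for the single-argument path, the lock is dropped before calling into the page allocator, because an allocation that may sleep must not happen while a spinlock (an rt_mutex on PREEMPT_RT) is held. The pthread sketch below shows only the drop-allocate-reacquire-revalidate shape of that pattern; the cache layout and names are invented.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;
static void *spare_page;        /* protected by cache_lock */

/* Ensure a spare page exists.  The (possibly sleeping) allocation is done
 * with the lock dropped; the state is re-checked after re-acquiring it,
 * because another thread may have refilled the cache meanwhile. */
static void *get_spare_page(void)
{
        pthread_mutex_lock(&cache_lock);
        if (!spare_page) {
                pthread_mutex_unlock(&cache_lock);      /* drop before "sleeping" alloc */

                void *page = malloc(4096);

                pthread_mutex_lock(&cache_lock);
                if (!spare_page)
                        spare_page = page;              /* we won the race */
                else
                        free(page);                     /* someone else refilled it */
        }

        void *ret = spare_page;

        spare_page = NULL;
        pthread_mutex_unlock(&cache_lock);
        return ret;
}

int main(void)
{
        void *p = get_spare_page();

        printf("got page %p\n", p);
        free(p);
        return 0;
}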
> >
> > A few questions about the resulting alloc_vmap_area():
> >
> > : static struct vmap_area *alloc_vmap_area(unsigned long size,
> > : unsigned long align,
> > : unsigned long vstart, unsigned long vend,
> > : int
On Fri, Oct 11, 2019 at 04:55:15PM -0700, Andrew Morton wrote:
> On Thu, 10 Oct 2019 17:17:49 +0200 Uladzislau Rezki wrote:
>
> > > > :* The preload is done in non-atomic context, thus it allows us
> > > > :* to use more permissive allo
On Mon, Oct 14, 2019 at 03:13:08PM +0200, Michal Hocko wrote:
> On Fri 11-10-19 00:33:18, Uladzislau Rezki (Sony) wrote:
> > Get rid of preempt_disable() and preempt_enable() when the
> > preload is done for splitting purpose. The reason is that
> > calling spin_lock() with
> > > > > > :* The preload is done in non-atomic context, thus it allows us
> > > > > > :* to use more permissive allocation masks to be more stable
> > > > > > under
> > > > > > :* low memory condition and high memory pressure.
> > > > > > :*
> > > > > > :* Even if it fails we
Hello, Michal.
Sorry for the late reply. See my comments enclosed below:
> On Wed 16-10-19 11:54:36, Uladzislau Rezki (Sony) wrote:
> > Some background. Preemption was disabled before, to guarantee
> > that a preloaded object is available for the CPU it was stored for.
>
>
On Wed, Oct 16, 2019 at 01:07:22PM +0200, Michal Hocko wrote:
> On Wed 16-10-19 11:54:38, Uladzislau Rezki (Sony) wrote:
> > When the fit type is NE_FIT_TYPE there is a need for one extra object.
> > Usually the "ne_fit_preload_node" per-CPU variable has it and
> >
> > alloc_vmap_area() is given a gfp_mask for the page allocator.
> > Let's respect that mask and consider it even in the case when
> > doing regular CPU preloading, i.e. where a context can sleep.
>
> This is explaining what but it doesn't say why. I would go with
> "
> Allocation functions shoul
> > >
> > > This is explaining what but it doesn't say why. I would go with
> > > "
> > > Allocation functions should comply with the given gfp_mask as much as
> > > possible. The preallocation code in alloc_vmap_area doesn't follow that
> > > pattern and it is using a hardcoded GFP_KERNEL. Althou
>
> Note for Uladzislau Rezki, I noticed that the new augmented rbtree
> code defines its own augment_tree_propagate_from function to update
> the augmented subtree information after a node is modified; it would
> probably be feasible to rely o
>
> ---
> a/lib/rbtree_test.c~augmented-rbtree-add-new-rb_declare_callbacks_max-macro-fix-2
> +++ a/lib/rbtree_test.c
> @@ -220,10 +220,6 @@ static void check_augmented(int nr_nodes
> struct rb_node *rb;
>
> check(nr_nodes);
> - for (rb = rb_first(&root.rb_root); rb; rb = rb_nex
On Mon, Jan 28, 2019 at 12:04:29PM -0800, Andrew Morton wrote:
> On Thu, 24 Jan 2019 12:56:48 +0100 "Uladzislau Rezki (Sony)"
> wrote:
>
> > commit 763b218ddfaf ("mm: add preempt points into
> > __purge_vmap_area_lazy()")
> >
> > intr
On Mon, Jan 28, 2019 at 05:45:28PM -0500, Joel Fernandes wrote:
> On Thu, Jan 24, 2019 at 12:56:48PM +0100, Uladzislau Rezki (Sony) wrote:
> > commit 763b218ddfaf ("mm: add preempt points into
> > __purge_vmap_area_lazy()")
> >
> > introduced some pree
Hello, Michal.
On Fri, Feb 01, 2019 at 01:45:28PM +0100, Michal Hocko wrote:
> On Thu 31-01-19 17:24:52, Uladzislau Rezki (Sony) wrote:
> > The vmap_lazy_nr variable has atomic_t type, which is a 4-byte integer
> > on both 32-bit and 64-bit systems. lazy_max_pages() deals with
> >
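The commit text above is about vmap_lazy_nr being an atomic_t, i.e. a 4-byte integer on both 32-bit and 64-bit kernels, while the page counts lazy_max_pages() works with scale with memory size, which motivates a long-sized counter. The standalone snippet below only demonstrates the width difference with C11 atomics; it is not the kernel patch itself.

#include <stdatomic.h>
#include <stdio.h>

/* 32-bit counter regardless of word size: holds at most ~4 billion pages. */
static atomic_int vmap_lazy_nr_32 = 0;

/* Native-word counter: matches the platform's long, 64-bit on LP64. */
static atomic_long vmap_lazy_nr_long = 0;

int main(void)
{
        atomic_fetch_add(&vmap_lazy_nr_32, 1);
        atomic_fetch_add(&vmap_lazy_nr_long, 1);

        printf("sizeof(atomic_int)  = %zu bytes\n", sizeof(vmap_lazy_nr_32));
        printf("sizeof(atomic_long) = %zu bytes\n", sizeof(vmap_lazy_nr_long));
        return 0;
}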
Hello, Matthew.
On Mon, Feb 04, 2019 at 05:33:00AM -0800, Matthew Wilcox wrote:
> On Mon, Feb 04, 2019 at 11:49:56AM +0100, Uladzislau Rezki wrote:
> > On Fri, Feb 01, 2019 at 01:45:28PM +0100, Michal Hocko wrote:
> > > On Thu 31-01-19 17:24:52, Uladzislau Rezki (Sony) wrote:
>
On Wed, May 22, 2019 at 11:19:04AM -0700, Andrew Morton wrote:
> On Wed, 22 May 2019 17:09:37 +0200 "Uladzislau Rezki (Sony)"
> wrote:
>
> > Introduce ne_fit_preload()/ne_fit_preload_end() functions
> > for preloading one extra vmap_area object to ensure that
>
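The ne_fit_preload()/ne_fit_preload_end() functions introduced above preload one spare vmap_area object per CPU before the split path runs, so the splitting code under the lock never has to allocate. The userspace sketch below keeps only the "preload a spare object first, consume it under the lock if a split needs it, drop it if unused" shape; there is no real per-CPU storage or GFP handling here, and all names are illustrative.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct node { unsigned long start, end; };

static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;
static struct node *spare;      /* "preloaded" object; one slot for simplicity */

static void preload(void)
{
        /* Done in a sleepable context, before the lock is taken. */
        if (!spare)
                spare = malloc(sizeof(*spare));
}

static void preload_end(void)
{
        /* Drop an unused spare once the critical section is over. */
        free(spare);
        spare = NULL;
}

/* Carve [start, end) out of *area; the leftover tail needs a new node,
 * which must come from the spare because we cannot allocate here. */
static int split_area(struct node *area, unsigned long start, unsigned long end)
{
        pthread_mutex_lock(&tree_lock);
        if (!spare) {
                pthread_mutex_unlock(&tree_lock);
                return -1;
        }

        struct node *tail = spare;

        spare = NULL;
        tail->start = end;
        tail->end = area->end;
        area->end = start;
        pthread_mutex_unlock(&tree_lock);

        printf("head [0x%lx,0x%lx) tail [0x%lx,0x%lx)\n",
               area->start, area->end, tail->start, tail->end);
        free(tail);     /* demo only: normally the tail stays in the tree */
        return 0;
}

int main(void)
{
        struct node area = { 0x1000, 0x5000 };

        preload();
        split_area(&area, 0x2000, 0x3000);
        preload_end();
        return 0;
}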
On Wed, May 22, 2019 at 11:19:11AM -0700, Andrew Morton wrote:
> On Wed, 22 May 2019 17:09:38 +0200 "Uladzislau Rezki (Sony)"
> wrote:
>
> > It does not make sense to try to "unlink" a node that is
> > definitely not linked into a list or a tree. On t
On Wed, May 22, 2019 at 11:19:16AM -0700, Andrew Morton wrote:
> On Wed, 22 May 2019 17:09:39 +0200 "Uladzislau Rezki (Sony)"
> wrote:
>
> > Move the BUG_ON()/RB_EMPTY_NODE() check into the unlink_va()
> > function; it means that if an empty node gets freed it is a BUG
>
On Fri, May 24, 2019 at 06:33:16PM +0800, Hillf Danton wrote:
>
> On Wed, 22 May 2019 17:09:37 +0200 Uladzislau Rezki (Sony) wrote:
> > /*
> > + * Preload this CPU with one extra vmap_area object to ensure
> > + * that we have it available when fit type of free are
On Mon, May 27, 2019 at 11:07:12AM +0800, Hillf Danton wrote:
>
> On Mon, 27 May 2019 05:22:28 +0800 Uladzislau Rezki (Sony) wrote:
> > It does not make sense to try to "unlink" a node that is
> > definitely not linked into a list or a tree. On the first
> >
> > Move the BUG_ON()/RB_EMPTY_NODE() check into the unlink_va()
> > function; it means that if an empty node gets freed it is a BUG
> > and thus is considered faulty behaviour.
>
> Can we switch it to a WARN_ON()? We are trying to remove all BUG_ON()s.
> If a user wants to crash on warning, there's a sysc
find a spot, whereas a linked list provides constant-time access
> > to previous and next blocks to check if merging can be done. In case
> > of merging a de-allocated memory chunk, a large coalesced area is
> > created.
> >
> > Complexity: ~O(log(N))
> >
> > Signed-off-
> > +#if DEBUG_AUGMENT_PROPAGATE_CHECK
> > +static void
> > +augment_tree_propagate_do_check(struct rb_node *n)
> > +{
> > + struct vmap_area *va;
> > + struct rb_node *node;
> > + unsigned long size;
> > + bool found = false;
> > +
> > + if (n == NULL)
> > + return;
> > +
> > +
> > >
> > > Do we need this change?
> > >
> > This patch does not intend to refactor the code. I have removed extra empty
> > lines because I touched the code around them. I can either keep that change or
> > remove it. What is your opinion?
>
> Usually it's better to separate cosmetic changes from func
Hello, Andrew.
>
> It's a lot of new code. It looks decent and I'll toss it in there for
> further testing. Hopefully someone will be able to find the time for a
> detailed review.
>
I have received some proposals and comments about simplifying the code a bit.
So I am about to upload v3 for furt
On Thu, Apr 18, 2019 at 03:10:33PM -0700, Andrew Morton wrote:
> On Thu, 18 Apr 2019 21:39:25 +0200 "Uladzislau Rezki (Sony)"
> wrote:
>
> > On my "Intel(R) Xeon(R) W-2135 CPU @ 3.70GHz" system(12 CPUs)
> > i get the warning from the compiler about fra
On Fri, Jul 12, 2019 at 11:09:00PM +0800, Pengfei Li wrote:
> On Fri, Jul 12, 2019 at 9:49 PM Matthew Wilcox wrote:
> >
> > On Fri, Jul 12, 2019 at 08:02:13PM +0800, Pengfei Li wrote:
> >
> > I don't think you need struct union struct union. Because llist_node
> > is just a pointer, you can get t
> On Sun, Aug 11, 2019 at 11:46 AM Uladzislau Rezki (Sony)
> wrote:
> >
> > Recently there was introduced RB_DECLARE_CALLBACKS_MAX template.
> > One of the callbacks, to be more specific *_compute_max(), calculates
> > a maximum scalar value of node against its l
On Sun, Aug 11, 2019 at 05:39:23PM -0700, Michel Lespinasse wrote:
> On Sun, Aug 11, 2019 at 11:46 AM Uladzislau Rezki (Sony)
> wrote:
> > RB_DECLARE_CALLBACKS_MAX defines its own callback to update the
> > augmented subtree information after a node is modified. It makes
&
>
> I think it would be sufficient to call RBCOMPUTE(node, true) on every
> node and check the return value ?
>
Yes, that is enough for sure. The only thing I was thinking about is to make it
public, because checking the tree for MAX is generic for every user which
uses the RB_DECLARE_CALLBACKS_MAX templat
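The suggestion above is that the augmented values can be verified by calling the compute callback on every node and checking that nothing changes. The self-contained sketch below does the equivalent on a plain binary tree: each node caches the maximum value in its subtree, and a checker recomputes that maximum from the children and compares. This mirrors the idea only, not the RB_DECLARE_CALLBACKS_MAX macro itself.

#include <stdbool.h>
#include <stdio.h>

struct tnode {
        unsigned long val;
        unsigned long subtree_max;      /* cached max over this subtree */
        struct tnode *left, *right;
};

static unsigned long max3(unsigned long a, unsigned long b, unsigned long c)
{
        unsigned long m = a > b ? a : b;

        return m > c ? m : c;
}

/* Recompute the expected max for one node from its value and children. */
static unsigned long compute_max(const struct tnode *n)
{
        return max3(n->val,
                    n->left ? n->left->subtree_max : 0,
                    n->right ? n->right->subtree_max : 0);
}

/* Walk the whole tree and flag any node whose cached value is stale. */
static bool check_tree(const struct tnode *n)
{
        if (!n)
                return true;
        if (!check_tree(n->left) || !check_tree(n->right))
                return false;
        if (n->subtree_max != compute_max(n)) {
                printf("stale max at node %lu\n", n->val);
                return false;
        }
        return true;
}

int main(void)
{
        struct tnode a = { 5, 5, NULL, NULL };
        struct tnode b = { 9, 9, NULL, NULL };
        struct tnode root = { 7, 9, &a, &b };

        printf("consistent: %d\n", check_tree(&root));

        b.val = 11;             /* modify a node but "forget" the propagation */
        printf("consistent: %d\n", check_tree(&root));
        return 0;
}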
kes sense to me. I can also reproduce
that issue, so I agree with your patch. Basically we can skip a free VA
block (that can fit) while examining the previous one (my fault), instead of
moving the base downwards and rechecking an area that did not fit.
Reviewed-by: Uladzislau Rezki (Sony)
Appreciate you for fixing it!
--
Vlad Rezki
Hello, Michel.
>
> Hmmm, I had not thought about that. Agree that this can be useful -
> there is already similar test code in rbtree_test.c and also
> vma_compute_subtree_gap() in mmap.c, ...
>
> With patch 3/3 of this series, the RBCOMPUTE function (typically
> generated through the RB_DECLARE
On Mon, Jul 29, 2019 at 04:21:39PM -0700,
sathyanarayanan.kuppusw...@linux.intel.com wrote:
> From: Kuppuswamy Sathyanarayanan
>
> Recent changes to the vmalloc code by Commit 68ad4a330433
> ("mm/vmalloc.c: keep track of free blocks for vmap allocation") can
> cause spurious percpu allocation fa