The count and scan callbacks can be separated in time, so there is a fair chance that all the work is already done by the time the scan starts. The shrinker would then retry needlessly. This can be avoided by returning SHRINK_STOP when nothing was freed.
Signed-off-by: Peter Enderborg <peter.enderb...@sony.com>
---
 kernel/rcu/tree.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index c716eadc7617..8b36c6b2887d 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3310,7 +3310,7 @@ kfree_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 		break;
 	}
 
-	return freed;
+	return freed == 0 ? SHRINK_STOP : freed;
 }
 
 static struct shrinker kfree_rcu_shrinker = {
-- 
2.17.1
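
For context, a minimal sketch of the count/scan shrinker convention this change relies on. This is not the kfree_rcu code; example_backlog, example_count and example_scan are hypothetical stand-ins. The point is that count_objects and scan_objects run at different times, so by the time scan_objects is called the counted objects may already be gone; returning SHRINK_STOP in that case tells the slab shrinking loop to stop calling this shrinker for the current pass rather than retrying on a return of 0.

#include <linux/shrinker.h>
#include <linux/atomic.h>

/* Hypothetical cache state: number of objects currently queued for freeing. */
static atomic_long_t example_backlog;

static unsigned long
example_count(struct shrinker *shrink, struct shrink_control *sc)
{
	/* Called first; concurrent work may drain the backlog afterwards. */
	return atomic_long_read(&example_backlog);
}

static unsigned long
example_scan(struct shrinker *shrink, struct shrink_control *sc)
{
	unsigned long freed = 0;

	/* Free up to sc->nr_to_scan objects here, incrementing freed. */

	/*
	 * If everything was already drained between count and scan,
	 * report SHRINK_STOP so the caller does not keep retrying.
	 */
	return freed == 0 ? SHRINK_STOP : freed;
}

static struct shrinker example_shrinker = {
	.count_objects	= example_count,
	.scan_objects	= example_scan,
	.seeks		= DEFAULT_SEEKS,
};

Registration would go through register_shrinker(); its exact signature differs between kernel versions, so it is omitted here.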