________________________________________
From: Uladzislau Rezki <ure...@gmail.com>
Sent: January 25, 2021, 5:57
To: Zhang, Qiang
Cc: Uladzislau Rezki (Sony); LKML; RCU; Paul E. McKenney; Michael Ellerman; Andrew Morton; Daniel Axtens; Frederic Weisbecker; Neeraj Upadhyay; Joel Fernandes; Peter Zijlstra; Michal Hocko; Thomas Gleixner; Theodore Y. Ts'o; Sebastian Andrzej Siewior; Oleksiy Avramchenko
Subject: Re: Re: [PATCH 3/3] kvfree_rcu: use migrate_disable/enable()
> Hello, Zhang.
>
> >> From: Uladzislau Rezki (Sony) <ure...@gmail.com>
> >> Sent: January 21, 2021, 0:21
> >> To: LKML; RCU; Paul E. McKenney; Michael Ellerman
> >> Cc: Andrew Morton; Daniel Axtens; Frederic Weisbecker; Neeraj Upadhyay; Joel Fernandes; Peter Zijlstra; Michal Hocko; Thomas Gleixner; Theodore Y. Ts'o; Sebastian Andrzej Siewior; Uladzislau Rezki; Oleksiy Avramchenko
> >> Subject: [PATCH 3/3] kvfree_rcu: use migrate_disable/enable()
> >>
> >> Since the page is obtained in a fully preemptible context, dropping
> >> the lock can lead to migration onto another CPU. As a result, the
> >> previous bnode of that CPU may be underutilized, because the decision
> >> was made for a CPU that had run out of free slots to store a pointer.
> >>
> >> migrate_disable/enable() are now independent of RT. Use them in order
> >> to prevent any migration during a page request, so the page is used
> >> on the specific CPU it was requested for.
> >
> > Hello Rezki
> >
> > A migrate_disable/enable() critical section is not allowed to block,
> > under both RT and non-RT. There is such a description in preempt.h:
> >
> >  * Notes on the implementation.
> >  *
> >  * The implementation is particularly tricky since existing code patterns
> >  * dictate neither migrate_disable() nor migrate_enable() is allowed to block.
> >  * This means that it cannot use cpus_read_lock() to serialize against hotplug,
> >  * nor can it easily migrate itself into a pending affinity mask change on
> >  * migrate_enable().
>
> How I interpret it is that migrate_disable()/migrate_enable() themselves
> are not allowed to use any blocking primitives, such as rwsems/mutexes/etc.,
> in order to mark the current context as non-migratable.
> void migrate_disable(void)
> {
>         struct task_struct *p = current;
>
>         if (p->migration_disabled) {
>                 p->migration_disabled++;
>                 return;
>         }
>
>         preempt_disable();
>         this_rq()->nr_pinned++;
>         p->migration_disabled = 1;
>         preempt_enable();
> }
>
> It does nothing that prevents you from calling schedule() or even waiting
> for an event (mutex slow-path behaviour), where the process is removed from
> the run-queue, after migrate_disable() has been invoked. Or am I missing
> something?

Hello Rezki

Sorry, there was something wrong with my previous description. There is the
following scenario: migrate_disable() increments this_rq()->nr_pinned. If,
after that, the __get_free_page() call blocks, and at that time the CPU is
going offline, then sched_cpu_wait_empty(), called from the per-CPU
"cpuhp/%d" task, will block as well.

blocked:
sched_cpu_wait_empty()
{
        struct rq *rq = this_rq();

        rcuwait_wait_event(&rq->hotplug_wait,
                           rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
                           TASK_UNINTERRUPTIBLE);
}

wakeup:
balance_push()
{
        if (is_per_cpu_kthread(push_task) || is_migration_disabled(push_task)) {
                if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
                    rcuwait_active(&rq->hotplug_wait)) {
                        raw_spin_unlock(&rq->lock);
                        rcuwait_wake_up(&rq->hotplug_wait);
                        raw_spin_lock(&rq->lock);
                }
                return;
        }
}

One of the conditions for this wakeup is "rq->nr_pinned == 0". That is to
say, if a task blocks between migrate_disable() and migrate_enable(), it
will delay the CPU going offline for a longer time. I'm not sure whether
that is a problem, and I did not find any possible sleeping calls between a
migrate_disable/enable() pair in the kernel code.
> > How about the following changes:
> >
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index e7a226abff0d..2aa19537ac7c 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -3488,12 +3488,10 @@ add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp,
> >                         (*krcp)->bkvhead[idx]->nr_records == KVFREE_BULK_MAX_ENTR) {
> >                 bnode = get_cached_bnode(*krcp);
> >                 if (!bnode && can_alloc) {
> > -                       migrate_disable();
> >                         krc_this_cpu_unlock(*krcp, *flags);
> >                         bnode = (struct kvfree_rcu_bulk_data *)
> >                                 __get_free_page(GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOMEMALLOC | __GFP_NOWARN);
> > -                       *krcp = krc_this_cpu_lock(flags);
> > -                       migrate_enable();
> > +                       raw_spin_lock_irqsave(&(*krcp)->lock, *flags);
>
> Hm.. Taking the former lock can lead to a pointer leaking; I mean, the CPU
> associated with "krcp" might go offline during the page request, so a
> queuing occurs on an off-lined CPU.

I agree with you here.

> Apart from that, acquiring the former lock still does not solve:
>
> - CPU1 is in the process of a page allocation;
> - CPU1 gets migrated to CPU2;
> - another task running on CPU1 also allocates a page;
> - both bnodes are added to the krcp associated with CPU1.
>
> I agree that such a scenario will probably never happen, or, I would say,
> can be considered a corner case. We can drop:
>
> [PATCH 3/3] kvfree_rcu: use migrate_disable/enable()
>
> and live with: an allocated bnode can be queued to another CPU, so its
> previous "bnode" can be underutilized. That can also be considered a
> corner case. According to my tests, it is hard to achieve. Running
> kvfree_rcu() simultaneously in a tight loop, 1 000 000 allocations/freeings:
>
> - 64 CPUs and 64 threads showed 1 migration;
> - 64 CPUs and 128 threads showed 0 migrations;
> - 64 CPUs and 32 threads showed 0 migrations.
>
> Thoughts?
>
> Thank you for your comments!

Maybe migrate_disable/enable() can be removed.

Thanks
Qiang

> --
> Vlad Rezki