On 3/2/26 7:14 AM, Frederic Weisbecker wrote:
On Sat, Feb 21, 2026 at 01:54:18PM -0500, Waiman Long wrote:
The current cpuset partition code is able to dynamically update
the sched domains of a running system and the corresponding
HK_TYPE_DOMAIN housekeeping cpumask to perform what is essentially the
"isolcpus=domain,..." boot command line feature at run time.

The housekeeping cpumask update requires flushing a number of different
workqueues, which may not be safe with cpus_read_lock() held, as the
workqueue flushing code may acquire cpus_read_lock() itself or acquire
locks that have a locking dependency with cpus_read_lock() down the
chain. Below is an example of such a circular locking problem.

   ======================================================
   WARNING: possible circular locking dependency detected
   6.18.0-test+ #2 Tainted: G S
   ------------------------------------------------------
   test_cpuset_prs/10971 is trying to acquire lock:
   ffff888112ba4958 ((wq_completion)sync_wq){+.+.}-{0:0}, at: touch_wq_lockdep_map+0x7a/0x180

   but task is already holding lock:
   ffffffffae47f450 (cpuset_mutex){+.+.}-{4:4}, at: cpuset_partition_write+0x85/0x130

   which lock already depends on the new lock.

   the existing dependency chain (in reverse order) is:
   -> #4 (cpuset_mutex){+.+.}-{4:4}:
   -> #3 (cpu_hotplug_lock){++++}-{0:0}:
   -> #2 (rtnl_mutex){+.+.}-{4:4}:
   -> #1 ((work_completion)(&arg.work)){+.+.}-{0:0}:
   -> #0 ((wq_completion)sync_wq){+.+.}-{0:0}:

   Chain exists of:
     (wq_completion)sync_wq --> cpu_hotplug_lock --> cpuset_mutex
Which workqueue is involved here that holds rtnl_mutex?
Is this an existing problem or added test code?

A circular locking dependency here does not necessarily mean that rtnl_mutex is directly taken in a work function. It can, however, appear in a locking chain involving multiple parties that results in a deadlock if the operations happen in just the right order. So it is better to be safe than sorry even if the chance of this occurring is minimal.
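The deadlock pattern at issue (flushing a workqueue while holding a lock that the queued work also needs) can be modeled in user space. Below is a minimal Python sketch, not kernel code; the names hotplug_lock and work_q are hypothetical stand-ins for cpu_hotplug_lock and the flushed workqueue:

```python
import queue
import threading

# Stand-in for cpu_hotplug_lock (hypothetical name, not a kernel API).
hotplug_lock = threading.Lock()
work_q = queue.Queue()

def worker():
    # The queued work needs hotplug_lock somewhere down its chain,
    # analogous to the rtnl-related work in the lockdep report above.
    item = work_q.get()
    with hotplug_lock:
        item.set()

threading.Thread(target=worker, daemon=True).start()

done = threading.Event()
with hotplug_lock:                       # the caller holds the lock...
    work_q.put(done)
    # ...and "flushes" the queue: waits for the work item to complete.
    finished = done.wait(timeout=0.5)    # worker is blocked on hotplug_lock

print(finished)                          # False: the flush could never finish
```

The wait can only succeed once the caller drops the lock, which is why the flush has to happen outside the lock's critical section rather than inside it.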

Cheers,
Longman

