Hello,

On Mon, Nov 09, 2020 at 02:11:42PM +0000, Trond Myklebust wrote:
> That means changing all filesystem code to use cpu-intensive queues. As
> far as I can tell, they all use workqueues (most of them using the
> standard system queue) for fput(), dput() and/or iput() calls.
I suppose the assumption was that those operations couldn't possibly be
expensive enough to warrant other options, which unfortunately doesn't
seem to be the case. Switching the users to system_unbound_wq, which
should be pretty trivial, seems to be the most straightforward solution
(a rough sketch of what that would look like is appended at the end).

I can definitely see benefits in making workqueue smarter about
concurrency-managed work items taking a long time. Given that nothing on
these types of workqueues can be latency sensitive and the problem being
reported is on the scale of tens of seconds, I think a more palatable
approach could be through the watchdog mechanism rather than hooking into
cond_resched(). Something like the following (again, a rough sketch is
appended at the end):

* Run the watchdog timer more frequently - e.g. at 1/4 of the threshold.

* If a work item has been occupying the local concurrency slot for too
  long, set WORKER_CPU_INTENSIVE for its worker and, probably, generate
  a warning.

I still think this should generate a warning and thus can't replace
switching to an unbound wq. The reason is that the concurrency limit
isn't the only problem: a kthread needing to run on one particular CPU
for tens of seconds just isn't great.

Thanks.

-- 
tejun
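
For illustration, the system_unbound_wq switch for a deferred put could
look roughly like this. The deferred_iput struct and helpers below are
made-up names for the example; only INIT_WORK(), queue_work(),
system_unbound_wq and iput() are existing kernel API.

#include <linux/workqueue.h>
#include <linux/slab.h>
#include <linux/fs.h>

struct deferred_iput {
	struct work_struct	work;
	struct inode		*inode;
};

static void deferred_iput_workfn(struct work_struct *work)
{
	struct deferred_iput *di = container_of(work, struct deferred_iput,
						work);

	iput(di->inode);	/* may take a long time, e.g. final unlink */
	kfree(di);
}

static void schedule_deferred_iput(struct inode *inode)
{
	struct deferred_iput *di = kmalloc(sizeof(*di), GFP_KERNEL);

	if (!di) {
		iput(inode);	/* fall back to a synchronous put */
		return;
	}

	di->inode = inode;
	INIT_WORK(&di->work, deferred_iput_workfn);

	/*
	 * Previously this would have been schedule_work(&di->work), i.e.
	 * system_wq, which is per-CPU and concurrency-managed.  An unbound
	 * workqueue keeps a slow put from stalling other per-CPU work
	 * items behind it.
	 */
	queue_work(system_unbound_wq, &di->work);
}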
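And a very rough sketch of the watchdog direction, written as if it lived
in kernel/workqueue.c. WORKER_CPU_INTENSIVE, wq_watchdog_thresh and the
worker fields dereferenced below exist; the per-worker ->current_at
timestamp, the scan itself, pool locking and how the flag actually gets
applied to a running worker are all glossed over or assumed here.

static void wq_watchdog_check_pool(struct worker_pool *pool)
{
	struct worker *worker;
	/* check at 1/4 of the watchdog threshold (thresh is in seconds) */
	unsigned long thresh = wq_watchdog_thresh * HZ / 4;

	/* real code would hold pool->lock while walking pool->workers */
	list_for_each_entry(worker, &pool->workers, node) {
		if (!worker->current_work)
			continue;

		/* ->current_at assumed to be recorded when the work started */
		if (!time_after(jiffies, worker->current_at + thresh))
			continue;

		/*
		 * Stop counting this worker against the pool's concurrency
		 * limit so other per-CPU work items can run.  In real code
		 * the flag would have to be applied in coordination with
		 * the worker itself rather than from timer context.
		 */
		worker_set_flags(worker, WORKER_CPU_INTENSIVE);
		pr_warn("workqueue: %ps hogged CPU for %ums, marking CPU_INTENSIVE\n",
			worker->current_func,
			jiffies_to_msecs(jiffies - worker->current_at));
	}
}

The warning is the important part: getting the worker out of the way only
papers over the concurrency problem, while the message points at the work
item that should be moved to an unbound workqueue.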