On Sun, Dec 16, 2018 at 8:32 AM Vlad Buslov <vla...@mellanox.com> wrote:
>
> On Thu 13 Dec 2018 at 23:32, Cong Wang <xiyou.wangc...@gmail.com> wrote:
> > On Tue, Dec 11, 2018 at 2:19 AM Vlad Buslov <vla...@mellanox.com> wrote:
> >>
> >> As a part of the effort to remove dependency on rtnl lock, cls API is being
> >> converted to use fine-grained locking mechanisms instead of global rtnl
> >> lock. However, chain_head_change callback for ingress Qdisc is a sleeping
> >> function and cannot be executed while holding a spinlock.
> >
> >
> > Why does it have to be a spinlock not a mutex?
> >
> > I've read your cover letter and this changelog, I don't find any
> > answer.
>
> My initial implementation used mutex. However, it was changed to
> spinlock by Jiri's request during internal review.
>
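The constraint we are talking about, as a rough sketch (made-up names,
not your code), is just that a sleeping callback is legal under a mutex
and illegal under a spinlock:

        /* Sketch only -- illustrative names, not from the patch. */
        #include <linux/kernel.h>
        #include <linux/mutex.h>
        #include <linux/spinlock.h>

        static DEFINE_MUTEX(chain_head_mutex);          /* hypothetical lock */
        static DEFINE_SPINLOCK(chain_head_slock);       /* hypothetical lock */

        /* Stand-in for a chain_head_change callback that may sleep,
         * e.g. one that waits for concurrent readers to finish.
         */
        static void head_change_may_sleep(void *priv, void *tp_head)
        {
                might_sleep();
                /* ... update the chain head ... */
        }

        static void update_head_with_mutex(void *priv, void *tp_head)
        {
                mutex_lock(&chain_head_mutex);
                head_change_may_sleep(priv, tp_head);   /* fine: may sleep here */
                mutex_unlock(&chain_head_mutex);
        }

        static void update_head_with_spinlock(void *priv, void *tp_head)
        {
                spin_lock(&chain_head_slock);
                head_change_may_sleep(priv, tp_head);   /* bug: sleeping in atomic context */
                spin_unlock(&chain_head_slock);
        }
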
So what's the answer to my question? :)


>
>
> >>
> >> Extend cls API with new workqueue intended to be used for tcf_proto
> >> lifetime management. Modify tcf_proto_destroy() to deallocate proto
> >> asynchronously on workqueue in order to ensure that all chain_head_change
> >> callbacks involving the proto complete before it is freed. Convert
> >> mini_qdisc_pair_swap(), that is used as a chain_head_change callback for
> >> ingress and clsact Qdiscs, to use a workqueue. Move Qdisc deallocation to
> >> tc_proto_wq ordered workqueue that is used to destroy tcf proto instances.
> >> This is necessary to ensure that Qdisc is destroyed after all instances of
> >> chain/proto that it contains in order to prevent use-after-free error in
> >> tc_chain_notify_delete().
> >
> >
> > Please avoid async unless you have to, there are almost always bugs
> > when playing with deferred workqueue or any other callbacks.
>
> Indeed, async Qdisc and tp deallocation introduces additional
> complexity. What approach would you recommend to make chain_head_change
> callback atomic?

I haven't looked into any of your code yet, but from my understanding of
your changelog it seems all of this workqueue machinery can go away if
you can make it a mutex instead of a spinlock. That is why I stopped here
and am waiting for your answer to my question above.

Thanks.
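
For what it's worth, the deferred destruction your changelog describes
boils down to the pattern below. This is only a rough sketch with
made-up names (demo_proto, proto_destroy_wq, etc.), not the actual
patch:

        /* Sketch only -- illustrative names, not the actual patch. */
        #include <linux/errno.h>
        #include <linux/init.h>
        #include <linux/kernel.h>
        #include <linux/slab.h>
        #include <linux/workqueue.h>

        /* Ordered workqueue: one work item at a time, in queueing order,
         * so a qdisc queued after its protos is freed after them.
         */
        static struct workqueue_struct *proto_destroy_wq;       /* hypothetical */

        struct demo_proto {                                     /* hypothetical */
                struct work_struct destroy_work;
                /* ... filters, ops, chain pointer ... */
        };

        static void demo_proto_destroy_work(struct work_struct *work)
        {
                struct demo_proto *p = container_of(work, struct demo_proto,
                                                    destroy_work);

                /* Process context: sleeping callbacks are allowed here. */
                kfree(p);
        }

        static void demo_proto_destroy(struct demo_proto *p)
        {
                /* Caller may be atomic, so defer the real teardown. */
                INIT_WORK(&p->destroy_work, demo_proto_destroy_work);
                queue_work(proto_destroy_wq, &p->destroy_work);
        }

        static int __init demo_init(void)
        {
                proto_destroy_wq = alloc_ordered_workqueue("demo_proto_wq", 0);
                return proto_destroy_wq ? 0 : -ENOMEM;
        }

With a mutex protecting the chain head update instead, the callback
could run synchronously and none of this queueing order would have to
be relied on for correctness.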