From: Yunsheng Lin <linyunsh...@huawei.com>
Date: Mon, 15 Mar 2021 17:30:10 +0800
> Currently qdisc_lock(q) is taken before enqueuing and dequeuing
> for the lockless qdisc's skb_bad_txq/gso_skb queue, and qdisc->seqlock
> is also taken, which can provide the same protection as qdisc_lock(q).
>
> This patch removes the unnecessary qdisc_lock(q) protection for
> the lockless qdisc's skb_bad_txq/gso_skb queue.
>
> Also, dev_reset_queue() takes qdisc->seqlock for the lockless qdisc
> in addition to qdisc_lock(q) when doing the qdisc reset, and
> some_qdisc_is_busy() takes both qdisc->seqlock and qdisc_lock(q)
> when checking qdisc status. It is unnecessary to take both locks
> when the fast path only takes one, so this patch also changes them
> to take only qdisc_lock(q) for a locked qdisc, and only
> qdisc->seqlock for a lockless qdisc.
>
> Since qdisc->seqlock is taken for the lockless qdisc when calling
> qdisc_is_running() in some_qdisc_is_busy(), use qdisc->running
> to decide if the lockless qdisc is running.
>
> Signed-off-by: Yunsheng Lin <linyunsh...@huawei.com>

What about other things protected by this lock, such as statistics
and qlen?

This change looks too risky to me.
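To make the locking choice concrete, below is a rough sketch (mine, not
the posted patch) of some_qdisc_is_busy() taking only one lock per qdisc
as the commit message describes. The raw_read_seqcount() read of
qdisc->running in the TCQ_F_NOLOCK branch is one reading of "use
qdisc->running to decide if the lockless qdisc is running"; treat the
whole snippet as illustrative, not as the actual change:

#include <linux/netdevice.h>
#include <net/sch_generic.h>

/* Illustrative sketch: pick one lock per qdisc instead of taking both
 * qdisc->seqlock and qdisc_lock(q).
 */
static bool some_qdisc_is_busy(struct net_device *dev)
{
	unsigned int i;

	for (i = 0; i < dev->num_tx_queues; i++) {
		struct netdev_queue *dev_queue = netdev_get_tx_queue(dev, i);
		struct Qdisc *q = dev_queue->qdisc_sleeping;
		bool val;

		if (q->flags & TCQ_F_NOLOCK) {
			spin_lock_bh(&q->seqlock);
			/* With seqlock held, qdisc_is_running() would test
			 * the seqlock itself and always report busy, so
			 * read the running seqcount directly instead.
			 */
			val = (raw_read_seqcount(&q->running) & 1) ||
			      test_bit(__QDISC_STATE_SCHED, &q->state);
			spin_unlock_bh(&q->seqlock);
		} else {
			spin_lock_bh(qdisc_lock(q));
			val = qdisc_is_running(q) ||
			      test_bit(__QDISC_STATE_SCHED, &q->state);
			spin_unlock_bh(qdisc_lock(q));
		}

		if (val)
			return true;
	}

	return false;
}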