On Fri, Nov 22, 2019 at 04:54:08PM -0800, Ralph Campbell wrote:

> Actually, I think you can remove the "need_wake" variable since it is
> unconditionally set to "true".

Oh, yes, thank you. An earlier revision had a different control flow that needed it.
 
> Also, the comment in __mmu_interval_notifier_insert() says
> "mni->mr_invalidate_seq" and I think that should be
> "mni->invalidate_seq".

Got it.

I squashed this in:

diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index b3a064b3b31807..30abbfdc25be55 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -129,7 +129,6 @@ static void mn_itree_inv_end(struct mmu_notifier_mm *mmn_mm)
 {
        struct mmu_interval_notifier *mni;
        struct hlist_node *next;
-       bool need_wake = false;
 
        spin_lock(&mmn_mm->lock);
        if (--mmn_mm->active_invalidate_ranges ||
@@ -140,7 +139,6 @@ static void mn_itree_inv_end(struct mmu_notifier_mm *mmn_mm)
 
        /* Make invalidate_seq even */
        mmn_mm->invalidate_seq++;
-       need_wake = true;
 
        /*
         * The inv_end incorporates a deferred mechanism like rtnl_unlock().
@@ -160,8 +158,7 @@ static void mn_itree_inv_end(struct mmu_notifier_mm *mmn_mm)
        }
        spin_unlock(&mmn_mm->lock);
 
-       if (need_wake)
-               wake_up_all(&mmn_mm->wq);
+       wake_up_all(&mmn_mm->wq);
 }
 
 /**
@@ -884,7 +881,7 @@ static int __mmu_interval_notifier_insert(
         * possibility for live lock, instead defer the add to
         * mn_itree_inv_end() so this algorithm is deterministic.
         *
-        * In all cases the value for the mni->mr_invalidate_seq should be
+        * In all cases the value for the mni->invalidate_seq should be
         * odd, see mmu_interval_read_begin()
         */
        spin_lock(&mmn_mm->lock);
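
In case it helps to see the resulting handshake in one place, below is a loose
userspace sketch of the even/odd invalidate_seq pattern referred to above. The
struct and function names and the pthread primitives are stand-ins for
illustration only, not the actual mm/mmu_notifier.c code: ending an
invalidation makes the sequence even again and always wakes waiters (as in the
squashed hunk), and readers retry whenever they sampled an odd value.

/*
 * Userspace analogue (hypothetical names, pthreads instead of the kernel
 * spinlock/waitqueue). Even invalidate_seq means "no invalidation in
 * progress", odd means one is running.
 */
#include <pthread.h>
#include <stdbool.h>

struct itree_sync {
	pthread_mutex_t lock;          /* stands in for mmn_mm->lock */
	pthread_cond_t wq;             /* stands in for mmn_mm->wq */
	unsigned long invalidate_seq;  /* even: idle, odd: invalidation running */
};

#define ITREE_SYNC_INIT { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0 }

static void inv_start(struct itree_sync *s)
{
	pthread_mutex_lock(&s->lock);
	s->invalidate_seq++;           /* now odd: readers must wait and retry */
	pthread_mutex_unlock(&s->lock);
}

static void inv_end(struct itree_sync *s)
{
	pthread_mutex_lock(&s->lock);
	s->invalidate_seq++;           /* back to even */
	pthread_mutex_unlock(&s->lock);
	pthread_cond_broadcast(&s->wq); /* unconditional wakeup, no need_wake flag */
}

static unsigned long read_begin(struct itree_sync *s)
{
	unsigned long seq;

	pthread_mutex_lock(&s->lock);
	while (s->invalidate_seq & 1)   /* odd: an invalidation is in flight */
		pthread_cond_wait(&s->wq, &s->lock);
	seq = s->invalidate_seq;
	pthread_mutex_unlock(&s->lock);
	return seq;
}

static bool read_retry(struct itree_sync *s, unsigned long seq)
{
	bool retry;

	pthread_mutex_lock(&s->lock);
	retry = (s->invalidate_seq != seq);
	pthread_mutex_unlock(&s->lock);
	return retry;
}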

Jason
