On 22/12/2014 18:48, j.gli...@gmail.com wrote:
>  static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> -                                                    unsigned long start,
> -                                                    unsigned long end,
> -                                                    enum mmu_event event)
> +                                                    struct mmu_notifier_range *range)
>  {
> +     /*
> +      * Initialize the list no matter what, in case an mmu_notifier
> +      * registers after a range_start but before the matching range_end.
> +      */
> +     INIT_LIST_HEAD(&range->list);

I don't see how an mmu_notifier can register after a range_start but
before the matching range_end. The mmu_notifier registration takes all
the mm locks, and that should prevent any invalidation from running,
right?
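
For reference, if I read mainline mm/mmu_notifier.c correctly, this is
roughly what the registration path does (a trimmed sketch; the
mmu_notifier_mm allocation and most error handling are omitted):

	static int do_mmu_notifier_register(struct mmu_notifier *mn,
					    struct mm_struct *mm,
					    int take_mmap_sem)
	{
		int ret;

		if (take_mmap_sem)
			down_write(&mm->mmap_sem);
		/*
		 * Takes mmap_sem plus every anon_vma/i_mmap lock, so no
		 * invalidate_range_start/end can be running while the
		 * new notifier is linked into the list.
		 */
		ret = mm_take_all_locks(mm);
		if (ret)
			goto out;

		hlist_add_head(&mn->hlist, &mm->mmu_notifier_mm->list);

		mm_drop_all_locks(mm);
	out:
		if (take_mmap_sem)
			up_write(&mm->mmap_sem);
		return ret;
	}

So unless this series drops mm_take_all_locks() from registration, the
INIT_LIST_HEAD() above looks unnecessary.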

>       if (mm_has_notifiers(mm))
> -             __mmu_notifier_invalidate_range_start(mm, start, end, event);
> +             __mmu_notifier_invalidate_range_start(mm, range);
>  }

...

>  void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> -                                        unsigned long start,
> -                                        unsigned long end,
> -                                        enum mmu_event event)
> +                                        struct mmu_notifier_range *range)
>  
>  {
>       struct mmu_notifier *mn;
> @@ -185,21 +183,36 @@ void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
>       id = srcu_read_lock(&srcu);
>       hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
>               if (mn->ops->invalidate_range_start)
> -                     mn->ops->invalidate_range_start(mn, mm, start,
> -                                                     end, event);
> +                     mn->ops->invalidate_range_start(mn, mm, range);
>       }
>       srcu_read_unlock(&srcu, id);
> +
> +     /*
> +      * This must happen after the callback so that a subsystem can
> +      * block on a new invalidation range to synchronize itself.
> +      */
> +     spin_lock(&mm->mmu_notifier_mm->lock);
> +     list_add_tail(&range->list, &mm->mmu_notifier_mm->ranges);
> +     mm->mmu_notifier_mm->nranges++;
> +     spin_unlock(&mm->mmu_notifier_mm->lock);
>  }
>  EXPORT_SYMBOL_GPL(__mmu_notifier_invalidate_range_start);

Don't you have a race here, because you add the range struct to the
list only after the callbacks have run?

-------------------------------------------------------------------------
Thread A                    | Thread B
-------------------------------------------------------------------------
call mmu notifier callback  |
  clear SPTE                |
                            | device page fault
                            |   mmu_notifier_range_is_valid returns true
                            |   install new SPTE
add event struct to list    |
mm clears/modifies the PTE  |
-------------------------------------------------------------------------

So we are left with different entries in the host page table and the
secondary page table.
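
(I'm guessing here at how mmu_notifier_range_is_valid() is implemented;
the exact code doesn't matter, only that it reads the ranges list you
populate after the callbacks. A hypothetical sketch:)

	static bool mmu_notifier_range_is_valid(struct mm_struct *mm,
						unsigned long start,
						unsigned long end)
	{
		struct mmu_notifier_range *range;
		bool valid = true;

		spin_lock(&mm->mmu_notifier_mm->lock);
		list_for_each_entry(range, &mm->mmu_notifier_mm->ranges, list) {
			/* Overlap with an active invalidation? */
			if (range->start < end && range->end > start) {
				valid = false;
				break;
			}
		}
		spin_unlock(&mm->mmu_notifier_mm->lock);
		return valid;
	}

In the window between "clear SPTE" and "add event struct to list", this
walk finds no overlapping range, so the device fault path happily
installs a stale SPTE.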

I would think you'd want the event struct to be added to the list before
the callback is run.
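
I.e. something like this (untested, just your code with the
list_add_tail moved ahead of the callback loop):

	void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
						   struct mmu_notifier_range *range)
	{
		struct mmu_notifier *mn;
		int id;

		/*
		 * Publish the range first, so that a racing device fault
		 * sees it as invalid before any SPTE is cleared.
		 */
		spin_lock(&mm->mmu_notifier_mm->lock);
		list_add_tail(&range->list, &mm->mmu_notifier_mm->ranges);
		mm->mmu_notifier_mm->nranges++;
		spin_unlock(&mm->mmu_notifier_mm->lock);

		id = srcu_read_lock(&srcu);
		hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
			if (mn->ops->invalidate_range_start)
				mn->ops->invalidate_range_start(mn, mm, range);
		}
		srcu_read_unlock(&srcu, id);
	}

I may be missing what the "subsystem can block on a new invalidation
range" comment needs the current ordering for, but with the ordering
above the fault path can no longer slip in between.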

Best regards,
Haggai

