On Wed, Jan 27, 2021 at 09:16:07AM +0100, Jan Beulich wrote:
> Especially for the use in evtchn_move_pirqs() (called when moving a vCPU
> across pCPU-s) and the ones in EOI handling in PCI pass-through code,
> serializing perhaps an entire domain isn't helpful when no state (which
> isn't e.g. further protected by the per-channel lock) changes.

I'm unsure this move is good from a performance PoV, as the set of
operations switched to take the lock in read mode is very small, and
the remaining operations then pay a performance penalty compared to
using a plain spin lock.

> Unfortunately this implies dropping of lock profiling for this lock,
> until r/w locks may get enabled for such functionality.
> 
> While ->notify_vcpu_id is now meant to be consistently updated with the
> per-channel lock held, an extension applies to ECS_PIRQ: The field is
> also guaranteed to not change with the per-domain event lock held for
> writing. Therefore the link_pirq_port() call from evtchn_bind_pirq()
> could in principle be moved out of the per-channel locked regions, but
> this further code churn didn't seem worth it.
> 
> Signed-off-by: Jan Beulich <jbeul...@suse.com>
> @@ -1510,9 +1509,10 @@ int evtchn_destroy(struct domain *d)
>  {
>      unsigned int i;
>  
> -    /* After this barrier no new event-channel allocations can occur. */
> +    /* After this kind-of-barrier no new event-channel allocations can occur. */
>      BUG_ON(!d->is_dying);
> -    spin_barrier(&d->event_lock);
> +    read_lock(&d->event_lock);
> +    read_unlock(&d->event_lock);

Don't you want to use write mode here, to ensure there are no read-side
users that took the lock before is_dying was set and thus could be
operating on wrong assumptions?

As I understand it, the point of the barrier here is to ensure there are
no lock holders carried over from before is_dying was set.
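
To illustrate what I mean, here's a minimal sketch of the pattern using
plain pthreads rather than Xen's rwlock API; the names (event_lock,
domain_is_dying, alloc_event_channel, destroy_event_channels) are just
illustrative stand-ins, not the actual Xen code:

/* Minimal sketch of the barrier pattern under discussion, using plain
 * pthreads instead of Xen's rwlock API.  All names are illustrative. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static pthread_rwlock_t event_lock = PTHREAD_RWLOCK_INITIALIZER;
static atomic_bool domain_is_dying;

/* Allocation path: checks the dying flag while holding the lock in
 * read mode, mirroring the read-mode users introduced by the patch. */
int alloc_event_channel(void)
{
    int rc = 0;

    pthread_rwlock_rdlock(&event_lock);
    if ( atomic_load(&domain_is_dying) )
        rc = -1;                     /* no new allocations allowed */
    else
    {
        /* ... perform the allocation ... */
    }
    pthread_rwlock_unlock(&event_lock);

    return rc;
}

/* Teardown path, corresponding to evtchn_destroy() in the patch. */
void destroy_event_channels(void)
{
    atomic_store(&domain_is_dying, true);

    /*
     * Barrier: taking and dropping the lock in *write* mode waits for
     * every critical section that started before the flag was visible,
     * readers and writers alike.  A read_lock/read_unlock pair would
     * only wait out writers, since readers don't exclude each other.
     */
    pthread_rwlock_wrlock(&event_lock);
    pthread_rwlock_unlock(&event_lock);

    /* From here on no in-flight alloc_event_channel() can still be
     * running with the pre-is_dying view of the world. */
}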

Thanks, Roger.
