> -----Original Message-----
> From: Xen-devel <xen-devel-boun...@lists.xenproject.org> On Behalf Of Jan Beulich
> Sent: 28 September 2020 12:00
> To: xen-devel@lists.xenproject.org
> Cc: Andrew Cooper <andrew.coop...@citrix.com>; George Dunlap <george.dun...@eu.citrix.com>; Ian Jackson <i...@xenproject.org>; Julien Grall <jul...@xen.org>; Wei Liu <w...@xen.org>; Stefano Stabellini <sstabell...@kernel.org>
> Subject: [PATCH 07/12] evtchn: cut short evtchn_reset()'s loop in the common case
>
> The general expectation is that there are only a few open ports left
> when a domain asks for its event channel configuration to be reset.
> Similarly, on average half a bucket's worth of event channels can be
> expected to be inactive. Try to avoid iterating over all channels by
> utilizing usage data we're maintaining anyway.
>
> Signed-off-by: Jan Beulich <jbeul...@suse.com>
Reviewed-by: Paul Durrant <p...@xen.org>

> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -232,7 +232,11 @@ void evtchn_free(struct domain *d, struc
>      evtchn_port_clear_pending(d, chn);
>
>      if ( consumer_is_xen(chn) )
> +    {
>          write_atomic(&d->xen_evtchns, d->xen_evtchns - 1);
> +        /* Decrement ->xen_evtchns /before/ ->active_evtchns. */
> +        smp_wmb();
> +    }
>      write_atomic(&d->active_evtchns, d->active_evtchns - 1);
>
>      /* Reset binding to vcpu0 when the channel is freed. */
> @@ -1073,6 +1077,19 @@ int evtchn_unmask(unsigned int port)
>      return 0;
>  }
>
> +static bool has_active_evtchns(const struct domain *d)
> +{
> +    unsigned int xen = read_atomic(&d->xen_evtchns);
> +
> +    /*
> +     * Read ->xen_evtchns /before/ ->active_evtchns, to prevent
> +     * evtchn_reset() exiting its loop early.
> +     */
> +    smp_rmb();
> +
> +    return read_atomic(&d->active_evtchns) > xen;
> +}
> +
>  int evtchn_reset(struct domain *d, bool resuming)
>  {
>      unsigned int i;
> @@ -1097,7 +1114,7 @@ int evtchn_reset(struct domain *d, bool
>      if ( !i )
>          return -EBUSY;
>
> -    for ( ; port_is_valid(d, i); i++ )
> +    for ( ; port_is_valid(d, i) && has_active_evtchns(d); i++ )
>      {
>          evtchn_close(d, i, 1);
>
> @@ -1340,6 +1357,10 @@ int alloc_unbound_xen_event_channel(
>
>      spin_unlock_irqrestore(&chn->lock, flags);
>
> +    /*
> +     * Increment ->xen_evtchns /after/ ->active_evtchns. No explicit
> +     * barrier needed due to spin-locked region just above.
> +     */
>      write_atomic(&ld->xen_evtchns, ld->xen_evtchns + 1);
>
>  out:
>