On 17/11/15 17:04, Jan Beulich wrote:
>>>> On 03.11.15 at 18:58, <malcolm.cross...@citrix.com> wrote:
>> --- a/xen/common/grant_table.c
>> +++ b/xen/common/grant_table.c
>> @@ -178,6 +178,10 @@ struct active_grant_entry {
>>  #define _active_entry(t, e) \
>>      ((t)->active[(e)/ACGNT_PER_PAGE][(e)%ACGNT_PER_PAGE])
>>
>> +bool_t grant_rwlock_barrier;
>> +
>> +DEFINE_PER_CPU(rwlock_t *, grant_rwlock);
>
> Shouldn't these be per grant table? And wouldn't doing so eliminate
> the main limitation of the per-CPU rwlocks?
The grant rwlock is per grant table.  The entire point of this series is
to reduce the cmpxchg storm which happens when many pcpus attempt to
grab the same domain's grant read lock.

As identified in the commit message, reducing the cmpxchg pressure on
the cache coherency fabric increases inter-VM network throughput from
10Gbps to 50Gbps when running iperf between two 16-vcpu guests.  In
other words, 80% of CPU time was being wasted waiting on an atomic
read/modify/write operation against a remote hot cache line.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel