The per-domain event channel lock limits scalability when many VCPUs are trying to send interdomain events. A per-channel lock is introduced, eliminating lock contention when sending an event.
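Sending an interdomain event touches two channels (local and remote), so two per-channel locks must be taken together. To avoid deadlock when two CPUs lock the same pair in opposite order, the locks can be acquired in a consistent order based on the channel pointers (as the v2 note about comparing channel pointers in double_evtchn_lock() suggests). Below is a minimal userspace sketch of that technique; the struct layout and use of pthread mutexes are stand-ins for illustration, not the actual Xen spinlock code:

```c
#include <pthread.h>

/* Simplified stand-in for Xen's struct evtchn: only the per-channel
 * lock and a token field are shown here. */
struct evtchn {
    pthread_mutex_t lock;   /* stands in for the per-channel spinlock */
    int pending;
};

/* Lock two channels in a fixed (address) order, so that two CPUs
 * locking the same pair from opposite ends cannot deadlock. */
static void double_evtchn_lock(struct evtchn *lchn, struct evtchn *rchn)
{
    if (lchn < rchn) {
        pthread_mutex_lock(&lchn->lock);
        pthread_mutex_lock(&rchn->lock);
    } else {
        if (lchn != rchn)
            pthread_mutex_lock(&rchn->lock);
        pthread_mutex_lock(&lchn->lock);
    }
}

static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn)
{
    pthread_mutex_unlock(&lchn->lock);
    if (lchn != rchn)
        pthread_mutex_unlock(&rchn->lock);
}
```

Because both CPUs always take the lower-addressed channel's lock first, the circular wait needed for deadlock cannot arise, and no global ordering beyond the pointers themselves has to be maintained.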
See this graph for the performance improvements:

http://xenbits.xen.org/people/dvrabel/evtchn-scalability.png

A different test (using Linux's evtchn device, which masks/unmasks event
channels) showed the following lock profile improvements.

Per-domain lock:
(XEN) lock:  69267976(00000004:19830041), block:  27777407(00000002:3C7C5C96)

Per-event channel lock:
(XEN) lock:    686530(00000000:076AF5F6), block:      1787(00000000:000B4D22)

Locking removed from evtchn_unmask():
(XEN) lock:     10769(00000000:00512999), block:        99(00000000:00009491)

v3:
- Clear xen_consumer when clearing state.
- Defer freeing struct evtchn's until evtchn_destroy_final().
- Remove redundant d->evtchn test from port_is_valid().
- Use port_is_valid() again.
- Drop event lock from notify_via_xen_event_channel().

v2:
- Use unsigned int for d->valid_evtchns.
- Compare channel pointers in double_evtchn_lock().

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel