On Fri, Nov 22, 2013 at 03:30:54PM +0800, Chengyuan Li wrote:
> I'm testing OVS 2.0 in bridged mode, configured with 28 and 4 miss-handler
> threads respectively, sending the same traffic pattern (short-lived
> connections) and the same traffic volume to the VM running on this vswitch.
> The 28-thread configuration consumes much more CPU than the 4-thread one,
> even though the traffic volume should be far below ovs-vswitchd's maximum
> capacity.
> 
> - 28-thread
> ovs-vswitchd cpu usage: 2741%
> kernel missed packets: 130646/sec
> host throughput total pps 139679/sec
> 
> - 4-thread
> ovs-vswitchd cpu usage: 510%
> kernel missed packets: 130726/sec
> host throughput total pps 135715/sec
> 
> perf shows that 70% of cycles are spent in __ticket_spin_lock() in the
> 28-thread case, and perf lock shows very heavy contention on the
> futex_queues lock. So the pthread_mutex_lock() calls in ovs-vswitchd are
> triggering the high spin_lock CPU usage in the kernel.
> 
> Is this a known issue with kernel futexes, or is it something ovs-vswitchd
> can improve?

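(As a side note for anyone wanting to reproduce the measurement: the futex
contention described above can usually be confirmed with perf's lock
profiling, roughly along these lines:

    perf lock record -a -- sleep 10   # record lock events system-wide for 10s
    perf lock report                  # list the most contended locks

assuming a kernel built with the lock-event tracepoints that "perf lock"
needs.)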
ovs-vswitchd can definitely improve and we are in the midst of that
work.  For now I'd suggest using only a small number of threads.
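
In case it helps in the meantime: the number of miss-handler threads is set
through the other_config column of the Open_vSwitch table.  A minimal sketch,
assuming the n-handler-threads key described in ovs-vswitchd.conf.db(5) for
your build:

    # cap the upcall/miss handler threads at 4
    ovs-vsctl set Open_vSwitch . other_config:n-handler-threads=4

ovs-vswitchd should pick the change up from the database, and with the
handler count reduced the futex_queues contention you measured should drop
back to roughly the 4-thread numbers quoted above.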