I'm guessing the traffic is bursty, or ovs-vswitchd was busy doing other work and the queues overflowed.
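
One quick way to check is to watch the counters over time. A rough sketch (it assumes ovs-dpctl is in your PATH, and uses the xapi3 datapath from your output, the only one with a non-zero lost count):

    # Sample the xapi3 datapath every 5 seconds. If "lost:" jumps in step
    # with "missed:" instead of climbing steadily, the upcall queue to
    # ovs-vswitchd is overflowing during bursts of new flows.
    while :; do
        date +%T
        ovs-dpctl show | grep -A 1 'system@xapi3'
        sleep 5
    done

Note also that xapi3 shows more misses (~26.6 billion) than hits (~20.4 billion), so the majority of its packets are taking the flow set-up path through userspace, which fits the high ovs-vswitchd CPU usage you're seeing.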
--Justin

On May 27, 2013, at 9:34 AM, kevin parker <kevin.parker...@gmail.com> wrote:

> Justin,
>          Even at 85-90% CPU usage I am seeing an increased lost count; initially
> I faced a high lost count when ovs-vswitchd was at 100%. So what can be the
> reason for dropped packets even with 10% of the CPU free?
>
> regards,
> kevin
>
>
> On Mon, May 27, 2013 at 9:40 PM, Justin Pettit <jpet...@nicira.com> wrote:
> We've made a lot of improvements in flow set-up rate since version 1.4, so
> upgrading to a more current version (we're on 1.10 now) will likely help.
> We're currently working on multithreading the OVS userspace and adding
> support for wildcarded flows in the kernel, which should substantially
> improve flow set-up.
>
> --Justin
>
>
> On May 27, 2013, at 12:59 AM, kevin parker <kevin.parker...@gmail.com> wrote:
>
> > Hi,
> >
> > I am running OVS 1.4 on XenServer 6.0.2, but it sometimes takes very high
> > CPU, ~100%.
> >
> > ovs-dpctl show
> >
> > system@xenbr5:
> >         lookups: hit:2560723 missed:3742809 lost:0
> >         flows: 5
> >         port 0: xenbr5 (internal)
> >         port 1: eth5
> > system@xapi2:
> >         lookups: hit:1660559495 missed:1241428 lost:0
> >         flows: 11
> >         port 0: xapi2 (internal)
> >         port 1: eth7
> >         port 2: eth6
> > system@xenbr4:
> >         lookups: hit:2539909 missed:3729876 lost:0
> >         flows: 5
> >         port 0: xenbr4 (internal)
> >         port 1: eth4
> > system@xapi3:
> >         lookups: hit:20443295213 missed:26596588140 lost:267425491
> >         flows: 3069
> >         port 0: xapi3 (internal)
> >         port 1: eth1
> >         port 2: eth0
> >         port 4: xapi4 (internal)
> >         port 15: vif12.0
> >         port 18: vif14.0
> > system@xenbr2:
> >         lookups: hit:1634980795 missed:166104910 lost:0
> >         flows: 127
> >         port 0: xenbr2 (internal)
> >         port 1: eth2
> > system@xenbr3:
> >         lookups: hit:2450949145 missed:81360495 lost:0
> >         flows: 118
> >         port 0: xenbr3 (internal)
> >         port 1: eth3
> >         port 2: xapi6 (internal)
> >         port 6: vif12.1
> >         port 8: vif14.1
> >
> > Network usage:
> >
> > dstat -n
> >
> > -net/total-
> >  recv  send
> > 6475k 5736k
> > 6575k 5646k
> > 6767k 6347k
> >
> > Can someone please tell me how this can be fixed?
> >
> > Regards,
> > Kevin

_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss