A faster CPU will help (roughly in proportion to its single-threaded
performance improvement). However, unless you are currently using a
very old CPU, I suspect that it will not be sufficient.
On Thu, Apr 18, 2013 at 4:56 AM, Kristoffer Egefelt wrote:
Hi Jesse
The system is still experiencing delay with more than 12000 flows.
Is there anything I can do about this - will getting a faster CPU help?
Thanks
On 17/04/2013, at 10.19.21, Kristoffer Egefelt wrote:
OK - any suggestions on how to calculate the right value?
Could you explain the consequences of increasing this value?
I ran
ovs-vsctl set bridge xenbr0 other-config:flow-eviction-threshold=2
The flows are stabilizing around 12-15,000.
Memory usage went from 12M to 14M - but the CPU load i
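For reference, the eviction threshold is a per-bridge `other-config` key; a minimal sketch of setting and verifying it follows. The bridge name `xenbr0` comes from this thread, but the value 20000 is only an illustrative example - pick something above your observed steady-state flow count (the default in OVS of this era was 1000):

```shell
# Sketch: raise the datapath flow cap on the bridge. The value is an
# example only; a higher cap trades memory and flow-table maintenance
# cost for fewer evictions (and therefore fewer userspace upcalls).
ovs-vsctl set bridge xenbr0 other-config:flow-eviction-threshold=20000

# Read the value back to confirm it was applied.
ovs-vsctl get bridge xenbr0 other-config:flow-eviction-threshold

# Then watch whether the datapath flow count stabilizes higher.
ovs-dpctl show | grep flows
```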
On Tue, Apr 16, 2013 at 3:47 AM, Kristoffer Egefelt wrote:
Thanks Jesse,
I have a NAT firewall which logs something like this every second - OVS is
processing ~400,000 packets/s - and I'm having issues with response times or
even timeouts when load > 96%.
Is there anything I can do about the latency/CPU usage, other than not running
OVS on the NAT fir
OK thanks - however ovs-dpctl show:
lookups: hit:142051685241 missed:16517079493 lost:215200
flows: 1544
with cpu utilization around 80% and ~250.000 p/s
(I hope this is the correct way to see the number of current flows)
If SSH (through OVS) has noticeable delay I would think that
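For what it's worth, the `hit`/`missed` counters above can be turned into a miss rate with a little awk. This is just a sketch over the numbers pasted in this thread; in practice you would pipe in live `ovs-dpctl show` output:

```shell
# Miss rate from the `ovs-dpctl show` lookups line quoted above.
stats="lookups: hit:142051685241 missed:16517079493 lost:215200"

echo "$stats" | awk -F'[: ]+' '{
    hit = $3; missed = $5; lost = $7
    # Each miss is an upcall to userspace, which is where CPU time goes.
    printf "miss rate: %.2f%% (lost: %d)\n",
           100 * missed / (hit + missed), lost
}'
# -> miss rate: 10.42% (lost: 215200)
```

A ~10% miss rate at ~250,000 p/s means on the order of 25,000 upcalls/s reaching ovs-vswitchd, which is consistent with the high CPU load reported here.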
On Mon, Apr 8, 2013 at 2:13 PM, Kristoffer Egefelt wrote:
On Mon, Apr 8, 2013 at 1:14 AM, Kristoffer Egefelt wrote:
Makes perfect sense - but with openvswitch 1.7.1 I'm seeing stuff like:
2013-04-06T20:03:55Z|6338190|timeval|WARN|82 ms poll interval (24 ms user, 52
ms system) is over 52 times the weighted mean interval 2 ms (2423146772 samples)
2013-04-07T03:48:04Z|6378252|timeval|WARN|context switches: 0 volu
I can't speak to the performance impact of running storage traffic over OVS. We
have storage running over OVS in our small XenServer pool and haven't seen
any ill effects, but that isn't much of a test. However, the sFlow
instrumentation in OVS gives useful visibility into storage activity, fo
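Enabling that sFlow instrumentation on a bridge is a single `ovs-vsctl` transaction; a sketch follows, where the collector address `10.0.0.5:6343`, agent interface `eth0`, and the sampling/polling rates are all placeholder example values:

```shell
# Sketch: attach an sFlow exporter to xenbr0 so per-flow traffic
# (including storage flows) shows up in an external collector.
# The agent interface, target address, and rates are examples only.
ovs-vsctl -- --id=@s create sflow agent=eth0 \
    target='"10.0.0.5:6343"' sampling=64 polling=10 \
  -- set bridge xenbr0 sflow=@s

# Detach the exporter again when done.
ovs-vsctl clear bridge xenbr0 sflow
```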
Thanks for the input - I may not have explained myself properly though - I'm
not considering pci-passthrough.
What I would like to confirm is whether there are any problems running storage
traffic over OVS, latency/performance-wise - in this case using SR-IOV VFs
inside dom0 to separate the traffic.
You can use 10Gb NIC cards, but when you assign a vNIC to each VM, the
traffic goes directly through the vNIC and not through OVS. The observed
behaviour is that if the traffic goes via OVS, you may not see 10Gb speed,
and it varies. What is the need to send traffic via OVS? Is there any decisio
Hi,
I want to use 10Gig Intel x520 NICs - should I:
- Run storage (iscsi/nfs) over OVS?
- Create VFs and run storage and OVS on separate interfaces?
- Buy more physical NICs even if I don't need the bandwidth?
Any experiences with SR-IOV, storage latency or other issues to suggest one
over t