Another good thing...
I have two test VMs running on two different nodes... The VM on the node
with OVS 1.4.1 gives me a throughput of about 5 MB/s, the other one on
1.7.90 about 8 MB/s, so customers should definitely see better overall
performance.
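(For reference, a rough way to repeat such a VM-to-VM throughput check;
the iperf endpoints, the address, and the 60-second run below are only
illustrative assumptions, not the exact test used here.)

  # on the receiving VM
  iperf -s
  # on the sending VM; 10.0.0.2 stands in for the receiver's address
  iperf -c 10.0.0.2 -t 60 -i 10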
Will continue to monitor ;)
Oliver.
On 06/08/
Hi,
the change was done this morning, but at the typical times I still see a rise in
CPU load, though the number of flows stayed low:
20120608-094530: lookups: hit:21369696007 missed:12901230041
lost:210760086 flows: 588 SHORT_FLOWS=329, TOP=mem: 6848 cpu: 47
20120608-094540: lookups: hit:2136971009
Hi Justin,
thanks for the explanations.
Here is an excerpt from a scenario where the CPU load goes up, though within our
network the lost figures don't normally change:
20120607-114420: lookups: hit:21149788628 missed:12736368714 lost:210746961
flows: 3280 SHORT_FLOWS=2451, TOP=mem: 19m cpu: 39
201206
On Jun 6, 2012, at 2:52 AM, Oliver Francke wrote:
> @Justin: Any other recommendations?
Are you also seeing many short-lived flows? If you're in the range I mentioned
in my response to Kaushal (roughly 120,000 flow setups per second), then the
forthcoming 1.7.0 release may be enough for you.
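(One rough way to see where a deployment falls in that range: the flow setup
rate roughly tracks how fast the datapath "missed" counter grows. A minimal
sketch, assuming a single datapath and a 10-second sampling window; run as root:)

  m1=$(ovs-dpctl show | sed -n 's/.*missed:\([0-9]*\).*/\1/p' | head -1)
  sleep 10
  m2=$(ovs-dpctl show | sed -n 's/.*missed:\([0-9]*\).*/\1/p' | head -1)
  echo "flow setups/sec: $(( (m2 - m1) / 10 ))"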
Thanks for the information. We've seen that OVS can handle over 10Gbps. The
problem that you're seeing is related to flow setups. In releases prior to
1.7, the flow setup rate was roughly 40,000 flows per second. The changes in
1.7 increase that number to 120,000.
As we discussed, the bulk
Hi Kaushal,
thanks for your first impressions. My next change window is in two days;
I will put the current version on one of our 5 nodes.
I'll set up a small script that monitors memory usage, CPU load,
number of flows, etc...
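(A minimal sketch of such a script, assuming a single datapath and that
ovs-vswitchd is the process of interest; the field layout mirrors the log
lines quoted earlier, and the interval and log file name are arbitrary:)

  #!/bin/sh
  # append one status line every 10 seconds
  while true; do
      ts=$(date +%Y%m%d-%H%M%S)
      lookups=$(ovs-dpctl show | grep lookups: | head -1)
      nflows=$(ovs-dpctl dump-flows | wc -l)
      short=$(ovs-dpctl dump-flows | grep -c -e "packets:[0123],")
      top=$(top -b -n 1 | grep ovs-vswitchd | awk '{print "mem: "$6" cpu: "$9}')
      echo "$ts: $lookups flows: $nflows SHORT_FLOWS=$short, TOP=$top" >> ovs_monitor.log
      sleep 10
  done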
@Justin: Any other recommendations?
If it's worth it, I could try to st
The packet loss in the previous case occurred during the couple of minutes
we applied the full 350 Mbps of load. At the lower load we did not see packet
loss.
I am running traffic at the same rate, 20-30 Mbps, and the CPU load is also
the same. I think as soon as we add the full load, the high CPU load
Okay, great. The big change here is that we're actually setting up fewer of
those kernel flows on purpose to force them into userspace. Here's the commit
message that describes the change:
ofproto-dpif: Implement "flow setup governor" to speed up many short flows.
The cost of crea
Hi Kaushal and Justin,
On 06/05/2012 09:19 AM, Kaushal Shubhank wrote:
We will certainly try the 1.7.0 version. Since this is production, we
will have to try it during off-peak hours. We will update you with the
results as soon as possible.
Thanks a lot, and we look forward to contributing to the project in any way
possible.
Kaushal
On Tue, Jun 5, 2012 at 12:36 PM
Of your nearly 12,000 flows, over 10,000 had fewer than four packets:
[jpettit@timber-2 Desktop] grep -e "packets:[0123]," live_flows_20120604 |wc -l
10143
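(For anyone reproducing this check, such a snapshot can be taken from the
kernel flow table; the file name is just illustrative:)

  ovs-dpctl dump-flows > live_flows_$(date +%Y%m%d)
  grep -c -e "packets:[0123]," live_flows_$(date +%Y%m%d)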
Short-lived flows are really difficult for OVS, since there's a lot of overhead
in setting up and maintaining the kernel flow table. We
Are eth3 and eth4 on the same network segment? If so, I'd guess you've
introduced a loop.
I wouldn't recommend setting your eviction threshold so high, since OVS is
going to have to do a lot of work to maintain so many kernel flows. I wouldn't
go above tens of thousands of flows. What do your
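(For reference, in the 1.4/1.7 timeframe that threshold is a per-bridge
other-config key; a hedged example, assuming a bridge named br0 and an
illustrative value within the range suggested above:)

  ovs-vsctl set bridge br0 other-config:flow-eviction-threshold=10000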
Hello,
We have a simple setup in which a server running a transparent proxy needs
to intercept HTTP port 80 traffic. We have installed Open vSwitch (1.4.1)
on the same server (running Ubuntu Natty, 2.6.38-12-server, 64-bit) to feed
the proxy with the corresponding type of packets while bridging all o
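(A rough sketch of the kind of rules that could steer port-80 traffic to the
proxy; the bridge name br0, the port number, and delivering the traffic to the
bridge's local port where the proxy listens are all assumptions about this
setup, not details taken from it:)

  # hand client HTTP requests arriving on port 1 to the host (proxy) instead of bridging them
  ovs-ofctl add-flow br0 "priority=100,in_port=1,tcp,tp_dst=80,actions=local"
  # bridge everything else normally
  ovs-ofctl add-flow br0 "priority=0,actions=normal"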