Ananth,
"megaflows" support was added in v1.11.0, so 1.9.3 doesn't have it.
Regards,
---
Motonori Shindo
On 2013/10/01, at 16:37, ananthan wrote:
Hi Ananth,
I'm wondering if your upgrade would also fix my issues on Citrix
XenServer 6.1
(http://openvswitch.org/pipermail/discuss/2013-September/011324.html).
It was already pointed out that there have been lots of performance
improvements since version 1.4 and that I won't likely see my problems
Thanks Justin,
For XenServer I am planning to upgrade from 1.4 to
1.9. Does 1.9.3 have the wildcarding feature?
Regards,
Ananth
On Tue, Oct 1, 2013 at 11:22 AM, Justin Pettit wrote:
Correct. Multi-threading is going to be part of the 2.0 release. The
improvement you're seeing is from our adding support for wildcarding in the
kernel. We've been calling it megaflows. (As opposed to the previous
exact-match microflows that were installed.) In the new model, ovs-vswitchd
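[Editor's note] A rough way to see the microflow/megaflow difference on a 1.11+ host is to dump the kernel datapath flows; this is a sketch, and the output shape shown in the comments is illustrative, not captured from a real system:

```shell
# Sketch (assumes a host running OVS 1.11 or later with the kernel datapath).
# Exact-match microflows carry fully specified header fields; megaflows show
# up with wildcard masks, so one kernel entry covers many microflows.
ovs-dpctl dump-flows

# Illustrative shape of a megaflow entry:
#   in_port(2),eth_type(0x0800),ipv4(src=10.0.0.1/255.0.0.0,dst=...),... actions:3
# The "/mask" notation is the wildcarding: bits masked to 0 are ignored on lookup.
```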
Hi,
Tested OVS 1.11 on Ubuntu 12.04 and recreated the above workload. This time
I am so shocked to see that the load average didn't go above 0.1%, which is
nearly an 80x improvement compared to the previous situation :) . It
solves all the issues that we were facing with the old release, thanks for fixing
On May 28, 2013, at 8:39 AM, ananthan wrote:
Hi,
this is the only output for

ovs-ofctl dump-flows xapi3

duration=5120780.012s, table=0, n_packets=48344340859,
n_bytes=12069667659298, priority=0 actions=NORMAL

When I discussed this question on IRC, someone pointed out that the
above default flow makes OVS a standard learning/forwarding
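[Editor's note] A minimal sketch of what that single default flow means in practice (the bridge name xapi3 is taken from the message above):

```shell
# The one rule in the table matches everything (priority=0, no match fields)
# and applies the NORMAL action, i.e. ordinary MAC-learning switch behavior.
ovs-ofctl dump-flows xapi3

# The MAC learning table that NORMAL builds can be inspected with:
ovs-appctl fdb/show xapi3
```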
On May 27, 2013, at 9:58 AM, ananthan wrote:
> is it possible to increase the buffer,if we have lot of free ram
No. And that's probably not a great idea, since it will introduce lots of
latency with deeper queues. You could try increasing the
"flow-eviction-threshold", which affects the number of flows kept in the kernel
On May 27, 2013, at 10:21 AM, ananthan wrote:
Thanks Justin,
I have seen a thread regarding changing flow-eviction-threshold:
ovs-vsctl set bridge xenbr3 other-config:flow-eviction-threshold=2500 (as
my current flows are within 3000)
Can this be done without affecting current traffic, i.e. does allocating a
higher threshold require a restart?
Thanks
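[Editor's note] A sketch of the change being discussed; ovs-vswitchd picks up `other-config` changes from the database, so no daemon restart is expected, but treat that as an assumption and verify on a test host first:

```shell
# Raise the eviction threshold on a running bridge (bridge name from the
# message above). This only updates the OVSDB record; ovs-vswitchd reacts
# to the change without being restarted.
ovs-vsctl set bridge xenbr3 other-config:flow-eviction-threshold=2500

# Read the setting back to confirm it was stored:
ovs-vsctl get bridge xenbr3 other-config:flow-eviction-threshold
```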
Is it possible to increase the buffer if we have a lot of free RAM?
thanks,
Ananthan
On Mon, May 27, 2013 at 10:19 PM, Justin Pettit wrote:
On May 27, 2013, at 9:29 AM, ananthan wrote:
> Thanks for your reply, Can you please clarify this,does lost indicate packet
> drop?
Yes. That counter is the number of packets that were to be queued to go to
userspace, but there wasn't room for them.
--Justin
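[Editor's note] The counter Justin describes can be read from the datapath statistics; a sketch, with the output line shown in the comments being illustrative:

```shell
# "lost" in the datapath statistics counts upcalls (packets queued for
# ovs-vswitchd in userspace) that were dropped because the queue was full.
ovs-dpctl show

# Typical output includes a line of the form:
#   lookups: hit:123456 missed:789 lost:42
# A growing "lost" value means flow-setup requests are being dropped.
```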
Thanks for your reply. Can you please clarify this: does *lost* indicate
packet drop?
On Mon, May 27, 2013 at 9:44 PM, Justin Pettit wrote:
Your question is essentially identical (including the version numbers) to Kevin
Parker's from a few hours earlier, so I'll give the same answer:
We've made a lot of improvements in flow setup rate since version 1.4, so
upgrading to a more current version (we're on 1.10 now) will likely help.
Hi,
I am running high-traffic VMs on XenServer 6.0.2, with some VMs
using more than 11 Mbps of combined public and private traffic.
Because of these high-traffic VMs, I am not able to run more than 2 VMs
on a single host, as ovs-vswitchd struggles to process packets, resulting in
heavy packet loss