Hi,
On Tue, 2014-05-13 at 07:00, Yinpeijun wrote:
> >> >Subject: Re: [ovs-discuss] VXLAN problems
> >> >To: Jesse Gross
> >> >Cc: "discuss@openvswitch.org"
> >> >I managed to solve this by setti
Hi Yinpeijun,
On Tue, 2014-05-13 at 02:13, Yinpeijun wrote:
> >Date: Thu, 19 Dec 2013 14:45:33 +0100
> >From: Igor Sever
> >Subject: Re: [ovs-discuss] VXLAN problems
> >To: Jesse Gross
> >Cc: "discuss@openvswitch.org"
Hi,
We want to set up an environment for large-scale SDN simulation, to verify
control plane scalability.
For example, thousands of vswitches need to be simulated on top of a limited
number of physical machines (e.g. 10).
Each vswitch needs to support OVSDB and OpenFlow connections to a controller.
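For illustration, a naive single-host sketch could be something like the
following (the controller address and bridge count are placeholders). Note that
bridges created this way still share one ovsdb-server and one ovs-vswitchd, so
fully independent OVSDB connections would need separate OVS instances (e.g. in
containers or network namespaces):

  for i in $(seq 1 1000); do
      ovs-vsctl add-br "br$i"
      ovs-vsctl set-controller "br$i" tcp:192.0.2.1:6633
  done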
> From: Jesse Gross [mailto:je...@nicira.com]
> Sent: Tuesday, January 28, 2014 5:36 AM
> To: Zhou, Han
> Cc: discuss@openvswitch.org
> Subject: Re: [ovs-discuss] Question on sending jumbo frames
>
> It's hard to say, I would run tcpdump on each interface in the path
> and m
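For example, something along these lines on each hop (the interface name here
is just an example):

  tcpdump -n -i eth0 'ip proto 47'          # GRE-encapsulated packets
  tcpdump -n -i eth0 'icmp[icmptype] == 3'  # ICMP destination unreachable
                                            # (incl. "fragmentation needed")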
other point I should look at?
Best regards,
Han
-Original Message-
From: Jesse Gross [mailto:je...@nicira.com]
Sent: Friday, January 24, 2014 3:05 PM
To: Zhou, Han
Cc: discuss@openvswitch.org
Subject: Re: [ovs-discuss] Question on sending jumbo frames
On Thu, Jan 23, 2014 at 5:32 PM, Zhou
Hi,
I am using OVS 2.0.1 and GRE tunnels for transport.
I am trying to send jumbo frames from a guest VM, so I changed the MTU of the
VM's eth0, the OVS interface br0, the vport interface to the VM, and also the
host's eth0 to 9000.
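For reference, the commands were roughly the following (vnet0 is just a
placeholder for the actual vport name, which is setup-specific):

  ifconfig eth0 mtu 9000    # inside the guest VM
  ifconfig eth0 mtu 9000    # host's eth0
  ifconfig br0 mtu 9000     # OVS interface br0
  ifconfig vnet0 mtu 9000   # vport interface to the VM (placeholder name)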
But I cannot change the MTU of br-int with the command ifconfig br-int mtu 9000.
The results
Hi Xiaoye,
> Is there any way to disable this batching mechanism in the ovs kernel so that
> each upcall only contains one packet?
Batching is a mechanism of the user-space upcall handler rather than of the OVS
kernel.
To disable it you can change the macro below in ofproto/ofproto-dpif-upcall.h:
#define
Hi Chen,
Are you sure it is only the MTU change that resulted in such a huge difference?
It is hard to believe that fragmentation itself would lead to 9G -> 122K
performance degradation.
Could you try eth4 MTU = 1500, and br-int MTU = 1200?
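For example (using ifconfig as before):

  ifconfig eth4 mtu 1500
  ifconfig br-int mtu 1200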
Best regards,
Han
-Original Message-
From: discuss-boun
Hi Ethan,
Thanks for sharing your overview of multi-threading, which is quite
helpful for us in understanding the big picture. It is fairly reasonable
to ensure correctness first and then optimize.
On Wednesday, December 04, 2013 4:55 AM, Ethan Jackson wrote:
>
> Specifically we're going to
Hi Ben,
This is clear, thanks for pointing it out!
Best regards,
Han
On Tuesday, December 03, 2013 12:51 PM, Ben Pfaff wrote:
> On Tue, Dec 03, 2013 at 03:38:40AM +0000, Zhou, Han wrote:
> > > It's in the manpage, also in "ovs-appctl help".
> > We are using OVS
Hi Ben,
> It's in the manpage, also in "ovs-appctl help".
We are using OVS 2.0 so we hadn't noticed it. We will upgrade to the latest
version later on.
But even on the official OVS webpage it is still not there yet:
http://openvswitch.org/cgi-bin/ovsman.cgi?page=utilities%2Fovs-appctl.8
Best regards,
Han
Hi Alex,
Thanks for your kind feedback.
On Tuesday, December 03, 2013 3:01 AM, Alex Wang wrote:
> This is the case when the rate of incoming upcalls is slower than the
> "dispatcher" reading speed. After "dispatcher" breaks out the for loop, it is
> necessary to wake up "handler" threads with up
Hi,
I just got the answer from a previous commit comment:
# ovs-appctl coverage/show
I suggest putting it in --help.
Best regards,
Han
Hi,
There are counters defined by the macro COVERAGE_DEFINE, e.g.:
COVERAGE_DEFINE(upcall_queue_overflow);
Is there a command or any trick to check the values of these counters?
I used gdb attached to the running thread to check it, but it would be highly
appreciated if someone could help point out the co
Hi,
I've found the answer to my first question:
On Wednesday, November 27, 2013 2:37 PM Zhou, Han wrote:
> However, there are still things unclear to me:
> 1. I see in the code that a miss_handler should be woken up only when it has
> 50 upcalls pending by dispatcher, but
Hi,
Since there is only one dispatcher thread, it will be the bottleneck if there
are many miss_handler threads on a 32-core machine.
Chengyuan's test shows that most of the high CPU usage of the miss_handler
thread is caused by ticket spin locks triggered by futex calls. This can be
related to the fact t