We have SR-IOV based neutron networking for multicast with bare-metal
performance. But in this use case, I prefer to use OVS so that
multicast traffic is isolated within each virtual network, with
overlapping groups, etc.
I am curious how much work it is to write a simple controller to keep
track of
On Monday, May 23, 2016, O'Reilly, Darragh wrote:
>
> > I have an openstack setup using ml2/ovs/vxlan. While sending multicast
> > traffic on one VM, I am seeing packets being replicated to many
> > hypervisors via VXLAN tunnels, although I only have a couple of
> > receiver VMs on a couple of hy
Hi all,
I have an openstack setup using ml2/ovs/vxlan. While sending multicast
traffic on one VM, I am seeing packets being replicated to many
hypervisors via VXLAN tunnels, although I only have a couple of
receiver VMs on a couple of hypervisors. Is this expected?
I set mcast_snooping_enable=tru
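For reference, IGMP snooping is a per-bridge setting in ovs-vsctl; a sketch, assuming a bridge named br-int (the bridge name and aging time are illustrative, not from the post):

```shell
# Enable IGMP snooping so multicast is only forwarded to ports
# that have actually joined the group (bridge name is illustrative):
ovs-vsctl set Bridge br-int mcast_snooping_enable=true

# Optionally tune the snooping table aging time (seconds):
ovs-vsctl set Bridge br-int other_config:mcast-snooping-aging-time=300

# Inspect the current snooping configuration:
ovs-vsctl list Bridge br-int | grep mcast
```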
Hello,
I am testing OVS 2.4.0 on Linux 3.18 kernel.
Not sure when it got changed, but VXLAN offload is no longer on by
default in 3.18 - it used to be on by default in 3.14. So, normally I have:
ethtool -k eth4 | grep tnl
tx-udp_tnl-segmentation: off [fixed]
Configuring a VXLAN interface manually wou
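For context, the offload state can be inspected, and toggled where the NIC/driver allows it, with ethtool; a sketch assuming the eth4 interface from the post (features marked [fixed] cannot be changed):

```shell
# List the tunnel-related offload features:
ethtool -k eth4 | grep tnl
# tx-udp_tnl-segmentation: off [fixed]

# If the feature is not marked [fixed], it can be toggled:
ethtool -K eth4 tx-udp_tnl-segmentation on
```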
On Fri, Feb 27, 2015 at 7:13 PM, Chris Dunlop wrote:
> On Fri, Feb 27, 2015 at 11:08:21AM -0800, Pravin Shelar wrote:
> > On Thu, Feb 26, 2015 at 9:13 PM, Chris Dunlop wrote:
> > > Hi,
> > >
> > > "Me too" on Simon's BUG() described below (apologies for the top post).
> > > Basically:
> > >
>
On Fri, Feb 27, 2015 at 6:49 PM, Chris Dunlop wrote:
> On Fri, Feb 27, 2015 at 06:06:25PM -0500, Xu (Simon) Chen wrote:
> > On Friday, February 27, 2015, Chris Dunlop wrote:
> > > Simon, are you able to try your test running direct hypervisor
> > > to hypervisor?
On Fri, Feb 27, 2015 at 6:14 PM, Pravin Shelar wrote:
> On Fri, Feb 27, 2015 at 3:06 PM, Xu (Simon) Chen wrote:
> >
> >
> > On Friday, February 27, 2015, Chris Dunlop wrote:
> >>
> >> On Fri, Feb 27, 2015 at 11:08:21AM -0800, Pravin Shelar wrote:
>
On Friday, February 27, 2015, Chris Dunlop wrote:
> On Fri, Feb 27, 2015 at 11:08:21AM -0800, Pravin Shelar wrote:
> > On Thu, Feb 26, 2015 at 9:13 PM, Chris Dunlop wrote:
> > > Hi,
> > >
> > > "Me too" on Simon's BUG() described below (apologies for the top post).
> > > Basically:
> > >
> > >
ually crashes too. Any ideas?
Thanks.
-Simon
On Thu, Feb 12, 2015 at 8:32 PM, Xu (Simon) Chen wrote:
> Hi folks,
>
>
> I can now consistently reproduce a kernel panic on my system. I am using
> OVS 2.3.0 on 3.14.29 kernel, a sender and a receiver (two VMs) on two
> identical h
Hi folks,
I can now consistently reproduce a kernel panic on my system. I am using
OVS 2.3.0 on 3.14.29 kernel, a sender and a receiver (two VMs) on two
identical hypervisors, using VXLAN tunnel connecting the two VMs. Iperf is
used inside of VMs for generating traffic. The sender side has no pro
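The thread does not show the exact iperf invocation; a hedged sketch of generating multicast traffic between the two VMs with iperf2 (group address, TTL, duration, and bandwidth are illustrative):

```shell
# On the receiver VM: bind to the multicast group and report every second.
iperf -s -u -B 239.0.0.1 -i 1

# On the sender VM: send UDP to the group with TTL 3 at 10 Mbit/s for 30 s.
iperf -c 239.0.0.1 -u -T 3 -t 30 -b 10M
```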
Hey folks,
I've been trying to leverage vxlan hardware offload (checksum) to improve
tunnel performance.
If I run vxlan tunnels over a single 10Gbps interface, I can achieve
roughly 9Gbps throughput between VMs with MTU 1500 vnic. Without hardware
offload, the performance is much worse.
With bon
Hi all,
It recently started to happen somewhat randomly, but frequently enough to
disrupt my openstack networking setup:
[Sun Sep 14 17:02:50 2014] ovs-vswitchd[1817720]: segfault at 0 ip
0045c5f0 sp 7fffedf1cde8 error 4 in ovs-vswitchd[40+137000]
VMs lose connectivity after this
Cool, thanks... I am currently reverting to the 2.3.0 release and that is
working fine.
On Mon, Aug 18, 2014 at 5:01 PM, Alex Wang wrote:
> The fix is one commit after, commit
> 1738803acda21425c19d1549c0c1e6586ef0c64a
>
>
> On Mon, Aug 18, 2014 at 1:59 PM, Xu (Simon) Chen
>
I am running "HEAD-6a92c6f".
On Mon, Aug 18, 2014 at 4:52 PM, Alex Wang wrote:
>
>
>
> On Mon, Aug 18, 2014 at 1:37 PM, Ben Pfaff wrote:
>
>> Can you provide a backtrace?
>>
>>
>
> Or can you tell us what is the value of 'wait' in "recvmsg(sock->fd,
> &msg, wait ? 0:MSG_DONTWAIT)"?
>
> If it i
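The 'wait' flag being asked about selects between blocking and non-blocking receives. A minimal Python sketch of the same pattern (not OVS code): with MSG_DONTWAIT, a receive on an empty socket fails immediately with EAGAIN instead of sleeping.

```python
import errno
import socket

# A connected pair of datagram sockets stands in for the netlink socket.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)

# wait=false path: MSG_DONTWAIT makes the call non-blocking, so a recv
# on an empty socket raises EAGAIN immediately rather than blocking.
try:
    a.recv(1024, socket.MSG_DONTWAIT)
except BlockingIOError as e:
    assert e.errno in (errno.EAGAIN, errno.EWOULDBLOCK)

# wait=true path (flags=0) would block here until data arrives:
b.send(b"ping")
assert a.recv(1024, 0) == b"ping"
```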
On Mon, Aug 18, 2014 at 11:35 AM, Ben Pfaff wrote:
> On Mon, Aug 18, 2014 at 11:33:59AM -0400, Xu (Simon) Chen wrote:
> > I am running debian wheezy with 3.14.17 kernel, and openvswitch from
> trunk
> > (2.3.90). It seems that ovs-vsctl commands are all hanging, although it
>
I am running debian wheezy with the 3.14.17 kernel, and openvswitch from trunk
(2.3.90). It seems that ovs-vsctl commands all hang, although they
actually work in terms of adding/removing bridges and ports.
In the vswitchd log, I see the following:
ovs_rcu(urcu3)|WARN|blocked xxx ms waitin
I am trying to have my VM directly plug into OVS, and at the same time
enable virtio/vhost-net multiqueue. I tried to configure the following in
libvirt.xml:
...
The "queues" parameter works for my VMs plugged into Linux bridges, but it
is silently dropped for OVS bridges. Not sure wh
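The libvirt.xml fragment above is elided in the archive; a hedged sketch of the general shape of such an interface definition (bridge name and queue count are illustrative, not from the post):

```xml
<!-- Illustrative interface definition; names and queue count are examples. -->
<interface type='bridge'>
  <source bridge='br-int'/>
  <virtualport type='openvswitch'/>
  <model type='virtio'/>
  <!-- Multiqueue virtio-net: one queue per vCPU is a common choice. -->
  <driver name='vhost' queues='4'/>
</interface>
```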
This might have something to do with the upgrade process itself. The bridges
showed up after a reboot and/or after restarting the vswitchd process.
On Fri, Mar 28, 2014 at 6:52 PM, Ben Pfaff wrote:
> On Thu, Mar 20, 2014 at 03:06:25PM -0400, Xu (Simon) Chen wrote:
> > I am trying to replace OVS 2
I experienced a deadlock while using the openvswitch 2.0.0 release.
[2019033.190243] INFO: task kworker/u97:1:28426 blocked for more than 120
seconds.
[2019033.256661] [] mutex_lock+0x2a/0x50
[2019033.262653] [] ovs_lock+0x15/0x20 [openvswitch]
[2019033.269795] [] ovs_exit_net+0x35/0x90 [openvswitc
I am trying to replace OVS 2.0.0 with 2.1.0 on my servers. While trying the
new version, the OVS bridges no longer show up in "ip link" output for
some reason. Unfortunately, OpenStack is broken because of this change.
Any ideas what I need to do to get the OVS bridges back into "ip link"?
Thanks..
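Not from the thread, but a hedged sketch of the usual first checks in this situation (module and service names vary by distribution):

```shell
# Check whether the bridges exist in the OVS database and as net-devices:
ovs-vsctl show
ip link show

# Reloading the kernel module and restarting the daemons re-creates
# the bridge net-devices (service name is distribution-specific):
modprobe -r openvswitch
modprobe openvswitch
service openvswitch-switch restart
```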