OK, tracked it down to neutron/agent/common/ovs_lib.py:151: datapath_type has a default value of OVS_DATAPATH_SYSTEM. I'm guessing that somewhere an instance of the object is created without passing in the datapath_type value from config.
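For what it's worth, the failure mode I'm describing looks roughly like this (a simplified, hypothetical Python sketch to illustrate the pattern; the class shape, method, and names are illustrative only, not the actual neutron source):

    # Hypothetical sketch: a keyword default of "system" silently wins
    # whenever a caller forgets to pass the configured datapath_type.
    OVS_DATAPATH_SYSTEM = 'system'
    OVS_DATAPATH_NETDEV = 'netdev'

    class OVSBridge(object):
        def __init__(self, br_name, datapath_type=OVS_DATAPATH_SYSTEM):
            self.br_name = br_name
            self.datapath_type = datapath_type

        def create(self):
            # Roughly the command the agent ends up issuing:
            #   ovs-vsctl -- --may-exist add-br br-int
            #             -- set Bridge br-int datapath_type=<datapath_type>
            print('set Bridge %s datapath_type=%s'
                  % (self.br_name, self.datapath_type))

    # A caller that omits the config value recreates br-int as "system"
    # even when [ovs] datapath_type=netdev is set:
    OVSBridge('br-int').create()                                       # -> system (what I hit)
    OVSBridge('br-int', datapath_type=OVS_DATAPATH_NETDEV).create()    # -> netdev (expected)

That would also explain why manually running ovs-vsctl set Bridge br-int datapath_type=netdev doesn't stick: the agent keeps re-asserting the default on its next add-br/set cycle.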
Anyway, changing that to OVS_DATAPATH_NETDEV got rid of the br-int errors. It didn't, however, get DHCP, ping, or anything else networking-related to work on the VM. Even when I statically assign the IP addresses (matching what the connected ports say they should be), I don't even see the ifconfig stats change.

ps aux | grep qemu shows this line for one of the qemu processes:

/usr/bin/qemu-system-x86_64 -name instance-00000003 -S -machine pc-i440fx-utopic,accel=kvm,usb=off -m 2048 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -object memory-backend-file,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,share=on,size=2048M,id=ram-node0,host-nodes=0,policy=bind,share=on -numa node,nodeid=0,cpus=0,memdev=ram-node0 -uuid c1e511c9-216a-4248-a2cf-7f08d61dbcc8 -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=12.0.0,serial=be47d935-9390-b631-e0a7-e6ad55dde37f,uuid=c1e511c9-216a-4248-a2cf-7f08d61dbcc8,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000003.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/opt/stack/data/nova/instances/c1e511c9-216a-4248-a2cf-7f08d61dbcc8/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/opt/stack/data/nova/instances/c1e511c9-216a-4248-a2cf-7f08d61dbcc8/disk.config,if=none,id=drive-ide0-1-1,readonly=on,format=raw,cache=none -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 -chardev socket,id=charnet0,path=/var/run/openvswitch/vhu3272d0ab-f3 -netdev type=vhost-user,id=hostnet0,chardev=charnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:82:61:6c,bus=pci.0,addr=0x3 -chardev socket,id=charnet1,path=/var/run/openvswitch/vhuea352081-64 -netdev type=vhost-user,id=hostnet1,chardev=charnet1 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=fa:16:3e:d7:6c:17,bus=pci.0,addr=0x4 -chardev file,id=charserial0,path=/opt/stack/data/nova/instances/c1e511c9-216a-4248-a2cf-7f08d61dbcc8/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -vnc 127.0.0.1:1 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on

ovs-ofctl dump-ports br-int doesn't have any stats that change...

  port 16: rx pkts=0, bytes=?, drop=?, errs=?, frame=?, over=?, crc=?
           tx pkts=0, bytes=?, drop=0, errs=?, coll=?
  port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
              tx pkts=18, bytes=1538, drop=0, errs=0, coll=0
  port 5: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
          tx pkts=18, bytes=1486, drop=0, errs=0, coll=0
  port 15: rx pkts=0, bytes=?, drop=?, errs=?, frame=?, over=?, crc=?
  ...

> -----Original Message-----
> From: Gabe Black
> Sent: Wednesday, August 26, 2015 12:43 PM
> To: 'Mooney, Sean K'; 'b...@openvswitch.org'
> Subject: RE: duplicate option: of_interface
>
> Interesting... it seems that the ovs-vsctl set bridge br-int
> datapath_type=netdev does take, but very soon after it is changed back to
> system. I'll try and figure out how that is happening.
>
> Gabe
>
> > -----Original Message-----
> > From: Gabe Black
> > Sent: Wednesday, August 26, 2015 12:42 PM
> > To: 'Mooney, Sean K'; b...@openvswitch.org
> > Subject: RE: duplicate option: of_interface
> >
> > Hi Sean,
> >
> > Yes, I did have OVS_DATAPATH_TYPE set to netdev. Even after setting [ovs] datapath_type=netdev and trying to manually set the datapath_type to netdev (ovs-vsctl set Bridge br-int datapath_type=netdev), br-int still shows system.
> >
> > The interesting thing is that the other bridges (br-ex and br-p6p1) were set to netdev automatically during stacking, but br-int alone is system, and it seems immutable via manual ovs-vsctl commands.
> >
> > I'll keep playing around and see if there is some way to reconcile.
> >
> > Thanks,
> > Gabe
> >
> > > -----Original Message-----
> > > From: Mooney, Sean K [mailto:sean.k.moo...@intel.com]
> > > Sent: Wednesday, August 26, 2015 5:12 AM
> > > To: Gabe Black; b...@openvswitch.org
> > > Subject: RE: duplicate option: of_interface
> > >
> > > I know the feeling. I think your horizon issue can be fixed by running:
> > >   sudo iptables -F
> > >   sudo iptables -X
> > > iptables-based security groups do not work with vhost-user, so this will have no effect on VM security.
> > >
> > > I think the "ovs-ofctl: br-int is not a bridge or a socket" error is because the datapath_type on the bridge is not currently set to netdev.
> > >
> > > Looking at your previous error, I noticed that in the following command the datapath_type is being set to system:
> > >   ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', '--may-exist', 'add-br', 'br-int', '--', 'set', 'Bridge', 'br-int', 'datapath_type=system']
> > >
> > > I think the reason the bridge does not exist is because it has the wrong datapath type.
> > > You can validate this by running:
> > >   sudo ovs-vsctl list Bridge br-int
> > >
> > > When using kernel ovs the output will be similar to this:
> > >
> > > _uuid               : 50c64fb3-3c0c-40f9-8e45-6384a99687aa
> > > controller          : []
> > > datapath_id         : "0000b24fc650f940"
> > > datapath_type       : system
> > > external_ids        : {bridge-id=br-int}
> > > fail_mode           : secure
> > > flood_vlans         : []
> > > flow_tables         : {}
> > > ipfix               : []
> > > mirrors             : []
> > > name                : br-int
> > > netflow             : []
> > > other_config        : {}
> > > ports               : [1e2aeb85-01a2-4f1d-a0ef-f4954acab3f2, b8c7a19e-c510-47c1-874c-9d99ec6085f5, c370d053-6aea-4f16-a6bc-f54f1aabfcb5, e1a7ef91-53a4-4222-9bf7-249888ad48ef, e917d939-44db-4661-9311-bb68a6c132c4]
> > > protocols           : ["OpenFlow10"]
> > > sflow               : []
> > > status              : {}
> > > stp_enable          : false
> > >
> > > When using ovs with dpdk the datapath_type should be netdev.
> > >
> > > If it is currently system, and assuming you have OVS_DATAPATH_TYPE=netdev set in your local.conf, this could be caused by one of my patches to neutron that merged Saturday; we have not yet updated our deployment code to add the appropriate configuration:
> > >
> > > https://github.com/openstack/neutron/commit/5b708d5f0e9a5ddb46751489a52665e673a5cb0b
> > >
> > > If you add
> > >   [ovs]
> > >   datapath_type = netdev
> > > to ml2_conf.ini, the q-agt will configure the dpdk netdev datapath instead of the system kernel datapath.
> > >
> > > For your running system you can run the following command to set the datapath type:
> > >
> > >   sudo ovs-vsctl set bridge <bridge name> datapath_type=netdev
> > >
> > > This should be run for all ovs bridges.
> > >
> > > On hugepage memory: as the system is left running, main memory gets fragmented.
> > > When you start ovs with our service file, it allocates hugepage memory in the kernel from the current free memory.
> > >
> > > When you compile dpdk there is a hugepage segment limit of 256 segments.
> > > If the dpdk application cannot allocate its hugepage memory in fewer segments than this limit, the application will exit.
> > > You can adjust this limit by setting OVS_DPDK_MEM_SEGMENTS=<value> in your local.conf. The more often you stack and unstack between reboots, the more fragmented your memory will be, so for development I tend to set OVS_DPDK_MEM_SEGMENTS quite high.
> > >
> > > Regards
> > > Sean.
> > >
> > > -----Original Message-----
> > > From: Gabe Black [mailto:gabe.bl...@viavisolutions.com]
> > > Sent: Tuesday, August 25, 2015 9:45 PM
> > > To: Gabe Black; Mooney, Sean K; b...@openvswitch.org
> > > Subject: RE: duplicate option: of_interface
> > >
> > > Sorry, that was premature. Apparently hugepage memory gets fragmented or leaks or something, as the ovs-dpdk failed to start. Rebooting fixed that.
> > >
> > > I still get many errors in the q-agt log file similar to this:
> > >
> > > ERROR neutron.agent.linux.utils [-]
> > > Command: ['ovs-ofctl', 'add-flows', 'br-int', '-']
> > > Exit code: 1
> > > Stdin: hard_timeout=0,idle_timeout=0,priority=0,table=0,cookie=0x0,actions=normal
> > > Stdout:
> > > Stderr: ovs-ofctl: br-int is not a bridge or a socket
> > >
> > > and
> > >
> > > ERROR neutron.agent.common.ovs_lib [req-ab1e1f65-d14c-4209-ac20-d29e450eda38 None None] Unable to execute ['ovs-ofctl', 'dump-flows', 'br-int', 'table=23']. Exception:
> > > Command: ['ovs-ofctl', 'dump-flows', 'br-int', 'table=23']
> > > Exit code: 1
> > > Stdin:
> > > Stdout:
> > > Stderr: ovs-ofctl: br-int is not a bridge or a socket
> > >
> > > But at least q-agt isn't dying. I remember why I hate Fedora though... Even though horizon says it is running and we turned the firewall to permissive, I can never access the website. It seems like something else is protecting or acting as a firewall.
> > >
> > > > -----Original Message-----
> > > > From: discuss [mailto:discuss-boun...@openvswitch.org] On Behalf Of Gabe Black
> > > > Sent: Tuesday, August 25, 2015 2:06 PM
> > > > To: Mooney, Sean K; b...@openvswitch.org
> > > > Subject: Re: [ovs-discuss] duplicate option: of_interface
> > > >
> > > > Thanks for the reply and hints on changes... Armed with that I undid the changes from git commit 053bfc5a (neutron), and that allowed the q-agt process to at least not die there with that callstack. The commit message seemed to indicate it was for better restarting/cleanup, so I hope it is relatively low impact to back out.
> > > >
> > > > However, it now dies because it is still unable to create the br-int bridge:
> > > >
> > > > Unable to execute ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', '--may-exist', 'add-br', 'br-int', '--', 'set', 'Bridge', 'br-int', 'datapath_type=system'].
> > > > 2015-08-25 12:59:44.777 TRACE neutron.agent.ovsdb.impl_vsctl Traceback (most recent call last):
> > > > 2015-08-25 12:59:44.777 TRACE neutron.agent.ovsdb.impl_vsctl   File "/opt/stack/neutron/neutron/agent/ovsdb/impl_vsctl.py", line 63, in run_vsctl
> > > > 2015-08-25 12:59:44.777 TRACE neutron.agent.ovsdb.impl_vsctl     log_fail_as_error=False).rstrip()
> > > > 2015-08-25 12:59:44.777 TRACE neutron.agent.ovsdb.impl_vsctl   File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 153, in execute
> > > > 2015-08-25 12:59:44.777 TRACE neutron.agent.ovsdb.impl_vsctl     raise RuntimeError(m)
> > > > 2015-08-25 12:59:44.777 TRACE neutron.agent.ovsdb.impl_vsctl RuntimeError:
> > > > 2015-08-25 12:59:44.777 TRACE neutron.agent.ovsdb.impl_vsctl Command: ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', '--may-exist', 'add-br', 'br-int', '--', 'set', 'Bridge', 'br-int', 'datapath_type=system']
> > > > 2015-08-25 12:59:44.777 TRACE neutron.agent.ovsdb.impl_vsctl Exit code: -14
> > > > 2015-08-25 12:59:44.777 TRACE neutron.agent.ovsdb.impl_vsctl Stdin:
> > > > 2015-08-25 12:59:44.777 TRACE neutron.agent.ovsdb.impl_vsctl Stdout:
> > > > 2015-08-25 12:59:44.777 TRACE neutron.agent.ovsdb.impl_vsctl Stderr: 2015-08-25T19:59:44Z|00002|fatal_signal|WARN|terminating with signal 14 (Alarm clock)
> > > > 2015-08-25 12:59:44.777 TRACE neutron.agent.ovsdb.impl_vsctl
> > > > 2015-08-25 12:59:44.777 TRACE neutron.agent.ovsdb.impl_vsctl
> > > >
> > > > I noticed a difference between a multi-node setup and a single node, where the multi-node controller has both openvswitch and ovsdpdk for Q_ML2_PLUGIN_MECHANISM_DRIVERS. Is it perhaps related to that? Or should it be trying to create br-int with datapath_type netdev instead? Just throwing things out there because I'm ignorant :-) I'll try variations of that in hopes I get lucky.
> > > >
> > > > Gabe
> > > >
> > > > > -----Original Message-----
> > > > > From: Mooney, Sean K [mailto:sean.k.moo...@intel.com]
> > > > > Sent: Tuesday, August 25, 2015 1:30 PM
> > > > > To: Gabe Black; b...@openvswitch.org
> > > > > Subject: RE: duplicate option: of_interface
> > > > >
> > > > > Hi Gabe,
> > > > > We have started to see that message in our CI since this weekend.
> > > > > We are currently investigating it, but I believe a change has merged to neutron that we need to back-port to our agent.
> > > > >
> > > > > A lot of code has merged in the last 2 weeks, as the code freeze for the liberty release is Monday.
> > > > > The stable kilo branch should be unaffected, but we are actively looking into this at present.
> > > > >
> > > > > Regards
> > > > > Sean.
> > > > >
> > > > > -----Original Message-----
> > > > > From: Gabe Black [mailto:gabe.bl...@viavisolutions.com]
> > > > > Sent: Tuesday, August 25, 2015 7:50 PM
> > > > > To: b...@openvswitch.org
> > > > > Cc: Mooney, Sean K
> > > > > Subject: duplicate option: of_interface
> > > > >
> > > > > I have followed the getting started guide (http://git.openstack.org/cgit/stackforge/networking-ovs-dpdk/tree/doc/source/getstarted.rst) on both Fedora 21 and Ubuntu 15.04 to get a single node set up with dpdk ovs.
> > > > >
> > > > > My local.conf file is identical to the one provided as the single-node template:
> > > > > http://git.openstack.org/cgit/stackforge/networking-ovs-dpdk/tree/doc/source/_downloads/local.conf.single_node
> > > > >
> > > > > I set HOST_IP_IFACE=eno1, HOST_IP=10.3.73.124, OVS_BRIDGE_MAPPINGS="default:br-enp4s0f0", and ML2_VLAN_RANGES=default:1000:1010.
> > > > >
> > > > > eno1 and its associated IP are the interface/IP address of the server (i.e. what we use to ssh to the box). enp4s0f0 is the 10G Intel NIC interface that will eventually be used for the data interface in a multi-node setup. Finally, the VLAN range was just arbitrarily chosen.
> > > > >
> > > > > Other than that, nothing else is modified beyond following the instructions of the getting started guide. However, both Fedora 21 and Ubuntu 15.04 (Ubuntu needed some mods such as disabling apparmor, symlinking /var/run/openstack, and fixing the ovs-dpdk-init script) result in the following error message in q-agt:
> > > > >
> > > > > Traceback (most recent call last):
> > > > >   File "/usr/bin/networking-ovs-dpdk-agent", line 10, in <module>
> > > > >     sys.exit(main())
> > > > >   File "/usr/lib/python2.7/site-packages/networking_ovs_dpdk/eventlet/ovs_dpdk_neutron_agent.py", line 20, in main
> > > > >     agent_main.main()
> > > > >   File "/usr/lib/python2.7/site-packages/networking_ovs_dpdk/agent/main.py", line 43, in main
> > > > >     mod = importutils.import_module(mod_name)
> > > > >   File "/usr/lib/python2.7/site-packages/oslo_utils/importutils.py", line 57, in import_module
> > > > >     __import__(import_str)
> > > > >   File "/usr/lib/python2.7/site-packages/networking_ovs_dpdk/agent/openflow/ovsdpdk_ofctl/main.py", line 17, in <module>
> > > > >     from networking_ovs_dpdk.agent import ovs_dpdk_neutron_agent
> > > > >   File "/usr/lib/python2.7/site-packages/networking_ovs_dpdk/agent/ovs_dpdk_neutron_agent.py", line 47, in <module>
> > > > >     from neutron.plugins.ml2.drivers.openvswitch.agent import ovs_dvr_neutron_agent
> > > > >   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py", line 29, in <module>
> > > > >     cfg.CONF.import_group('AGENT', 'neutron.plugins.ml2.drivers.openvswitch.'
> > > > >   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2088, in import_group
> > > > >     __import__(module_str)
> > > > >   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/common/config.py", line 111, in <module>
> > > > >     cfg.CONF.register_opts(ovs_opts, "OVS")
> > > > >   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1824, in __inner
> > > > >     result = f(self, *args, **kwargs)
> > > > >   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1983, in register_opts
> > > > >     self.register_opt(opt, group, clear_cache=False)
> > > > >   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1828, in __inner
> > > > >     return f(self, *args, **kwargs)
> > > > >   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1967, in register_opt
> > > > >     return group._register_opt(opt, cli)
> > > > >   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1345, in _register_opt
> > > > >     if _is_opt_registered(self._opts, opt):
> > > > >   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 574, in _is_opt_registered
> > > > >     raise DuplicateOptError(opt.name)
> > > > > oslo_config.cfg.DuplicateOptError: duplicate option: of_interface
> > > > >
> > > > > q-agt failed to start
> > > > >
> > > > > I thought this error message was just because Ubuntu testing/support hasn't been fleshed out yet with ovs-dpdk, but then I got the exact same error on Fedora 21. I tried editing both /opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/common/config.py and /opt/stack/networking-ovs-dpdk/networking_ovs_dpdk/common/config.py to get past the error, but then there are complaints about not finding br-int... So I'm guessing that isn't the correct workaround. Anyone have any suggestions of what I might have misconfigured?
> > > > >
> > > > > Thank you for your help!
> > > > > Gabriel Black

_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss