Hello, All:
I've just installed OVS on my Xen host and I'm wondering if there's a
recommended configuration for bonding [physical] interfaces with OVS? e.g., my
host system has six [physical] interfaces. I've combined those [physical]
interfaces to create three bonded interfaces: 'bond0', 'bond1
After reading "Red Hat network scripts integration"
[https://github.com/openvswitch/ovs/blob/master/rhel/README.RHEL] I realized
that I had a few typos in my configuration script so I decided to start all
over.
I disabled the bonded [physical] interface 'bond2':
> [root@xen-2 ~]# ifenslave -d bon
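For reference, OVS can manage a bond itself instead of layering a bridge on top of a kernel bond created with ifenslave. A minimal sketch, assuming example interface names eth0/eth1 and an example bridge name ovsbr0 (neither is from the original setup):

```shell
# Create an OVS bridge and bond two physical NICs under OVS control.
# Names below are placeholders, not from the poster's configuration.
ovs-vsctl add-br ovsbr0
ovs-vsctl add-bond ovsbr0 bond0 eth0 eth1
# Optionally pick a bond mode, e.g. balance-slb or active-backup.
ovs-vsctl set port bond0 bond_mode=balance-slb
```

Letting OVS own the bond avoids mixing two bonding implementations (Linux bonding driver plus OVS) on the same interfaces.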
I have a question concerning linking the POX controller with Open vSwitch. I
link bridge br0 of Open vSwitch to two qemu VM guests.
The command that attaches a qemu VM to the Open vSwitch bridge is: sudo
qemu-system-x86_64 -m 1028 -net nic,macaddr=00:00:00:00:cc:10 -net
tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifd
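For context, a typical /etc/ovs-ifup script simply adds the tap device qemu creates to the bridge. This is a sketch, assuming the bridge name br0 from the message above; the script body itself is not from the original mail:

```shell
#!/bin/sh
# /etc/ovs-ifup: qemu invokes this with the tap interface name as $1.
switch='br0'                 # assumed bridge name from the message above
ip link set "$1" up          # bring the tap device up
ovs-vsctl add-port "$switch" "$1"   # attach it to the OVS bridge
```

The bridge can then be pointed at a POX instance with something like `ovs-vsctl set-controller br0 tcp:127.0.0.1:6633` (the address is an assumption; 6633 is POX's default listen port).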
Hi Ben,
Thank you for your mail. We haven't built any package; we downloaded
openvswitch-2.3.90 from GitHub and are running the bundle (OpenFlow version 1.4)
related scripts.
Open vSwitch 2.3.90 supports wire protocol up to 0x04. The bundle feature is in
OpenFlow version 1.4, which is wire p
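Wire version 0x04 corresponds to OpenFlow 1.3, so a switch negotiating only up to 0x04 will reject OF1.4 bundle messages. As a sketch (bridge name br0 is an assumption), the versions a bridge is willing to negotiate can be inspected and set with:

```shell
# Show which OpenFlow versions the bridge currently advertises.
ovs-vsctl get bridge br0 protocols
# Explicitly allow OpenFlow 1.4 alongside older versions
# (only useful on an OVS build that actually implements OF1.4 features).
ovs-vsctl set bridge br0 protocols=OpenFlow10,OpenFlow13,OpenFlow14
```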
Hi Ben,
We are working with the OpenFlow 1.4 bundle oftest suite from GitHub, and the
test cases are failing against Open vSwitch for the OFP messages below.
bundle_ctrl_type=ofp.OFPBCT_OPEN_REQUEST
self.assertEqual(response.bundle_ctrl_type, ofp.OFPBCT_OPEN_REPLY)
bundle_ctrl_type=ofp.OFPBCT_COMMIT_REQUEST
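One way to check, independently of the oftest harness, whether the switch speaks OF1.4 at all is to force that version on the command line (bridge name br0 is an assumption):

```shell
# Force OpenFlow 1.4 on the connection. A version-negotiation failure here
# means the switch build does not accept OF1.4, so bundle open/commit
# requests like the ones above cannot succeed.
ovs-ofctl -O OpenFlow14 show br0
```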
Hi,
I've been running some load tests on Openstack Kilo with CentOS 7.1 and OVS
2.3.1.
The out-of-the-box performance seems poor: I can only manage around 21000
TPS (web requests).
That equates to roughly
239309.20 txpck/s or 276117.34 txkB/s
This post indicates that with OVS >