(These comments are from work we did with nova-network a while ago, and they focus only on the underlying KVM performance, not the gymnastics of getting Neutron to build out the right configuration.)
We've seen similar performance (around 3 Gbps) for GRE tunnels on machines that can easily flatten 10GE with more efficient configurations and the right tuning. With VLAN or untagged bridges, we could easily saturate a 10GE link from a single VM with multiple streams. The main difference we saw was a drop in single-stream TCP performance, as you'd expect: we were only able to get about 4 Gbit/s out of one stream, where we could get upwards of 9 on bare metal.

I get that GRE is easy to test with, and it is probably easier to set up, but I don't think it makes sense as a default configuration choice. The performance implications of that choice are pretty serious.

Incidentally, you'll need to do more than just tune the MTU if you want good performance; you'll also need to increase your buffers, window size, etc. (sketches of what we mean by that follow below the quoted message). Full details of what we did are here:

- http://buriedlede.blogspot.com/2012/11/driving-100-gigabit-network-with.html

Much of the tuning was cribbed from here:

- http://fasterdata.es.net/host-tuning/linux/

hth
-nld

On Mon, Jan 27, 2014 at 1:39 AM, Li, Chen <chen...@intel.com> wrote:
> Hi list,
>
> I'm working with CentOS 6.4 + Havana + Neutron + OVS + GRE, and I'm
> testing GRE performance.
>
> I have a 10 Gb/s NIC on the compute node, but the maximum bandwidth I
> can get is less than 3 Gb/s, even when I have enough instances. I
> noticed the bandwidth can't go higher because one CPU core is already
> at 100% utilization.
>
> So, I want to see whether I can get higher bandwidth with a bigger
> MTU, because the default MTU is 1500.
>
> But after I set network_device_mtu=8500 in /etc/nova/nova.conf,
> restart the openstack-nova-compute service, and re-create an
> instance, the MTU of the devices is still 1500:
>
> 202: qbr053ac004-d6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
>      qdisc noqueue state UNKNOWN
>     link/ether da:c0:8d:c2:d5:1c brd ff:ff:ff:ff:ff:ff
> 203: qvo053ac004-d6: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500
>      qdisc pfifo_fast state UP qlen 1000
>     link/ether f6:0b:04:3f:9d:41 brd ff:ff:ff:ff:ff:ff
> 204: qvb053ac004-d6: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500
>      qdisc pfifo_fast state UP qlen 1000
>     link/ether da:c0:8d:c2:d5:1c brd ff:ff:ff:ff:ff:ff
> 205: tap053ac004-d6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
>      qdisc htb state UNKNOWN qlen 500
>     link/ether fe:18:3e:c2:e9:84 brd ff:ff:ff:ff:ff:ff
>
> Does anyone know why this happens? How can I solve it?
>
> Thanks.
> -chen
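To make the buffers/window-size point above concrete, here is a minimal sysctl sketch in the spirit of the fasterdata.es.net host-tuning page linked above. The sysctl keys are real, but the sizes are illustrative assumptions; pick values appropriate to your own hardware and kernel:

    # Illustrative /etc/sysctl.conf additions for a 10GE host
    # (values are assumptions -- see fasterdata.es.net/host-tuning/linux/)
    net.core.rmem_max = 67108864              # max receive socket buffer, bytes
    net.core.wmem_max = 67108864              # max send socket buffer, bytes
    net.ipv4.tcp_rmem = 4096 87380 33554432   # TCP receive autotuning: min/default/max
    net.ipv4.tcp_wmem = 4096 65536 33554432   # TCP send autotuning: min/default/max
    net.core.netdev_max_backlog = 30000       # input queue depth for high packet rates
    # Apply with: sysctl -p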
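The single-stream vs. multi-stream numbers above are straightforward to reproduce with iperf. A sketch, assuming two hosts and a placeholder receiver address of 10.0.0.2:

    # On the receiver (VM or bare metal):
    iperf -s

    # Single TCP stream -- where the GRE penalty shows up most:
    iperf -c 10.0.0.2 -t 30

    # Eight parallel streams (-P), which is how we could saturate
    # 10GE over vlan/untagged bridges:
    iperf -c 10.0.0.2 -t 30 -P 8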
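On the MTU question in the quoted mail: the 100%-CPU symptom is consistent with GRE defeating the NIC's segmentation and checksum offloads on kernels of that era, which pushes per-packet work onto a single core; a bigger MTU helps precisely because it cuts per-packet overhead. With Neutron handling the plumbing, setting network_device_mtu in nova.conf alone is typically not enough, because the qbr/qvb/qvo/tap chain is built by several components and the guest learns its MTU over DHCP. Below is a hedged sketch of the Havana-era knobs (these option names existed then, but verify against your packaging). Keep in mind GRE encapsulation costs roughly 42 bytes (outer IP + GRE header + inner Ethernet, more with a tunnel key), so an 8500-byte instance MTU needs a physical MTU of at least ~8542:

    # /etc/neutron/neutron.conf -- MTU for Neutron-created devices
    # (assumption: Havana-era option name)
    [DEFAULT]
    network_device_mtu = 8500

    # /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini --
    # MTU for the qvb/qvo veth pair created by the OVS agent
    [AGENT]
    veth_mtu = 8500

    # /etc/neutron/dhcp_agent.ini -- hand dnsmasq an extra config file
    [DEFAULT]
    dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

    # /etc/neutron/dnsmasq-neutron.conf -- push the MTU to guests
    # via DHCP option 26 (interface-mtu)
    dhcp-option-force=26,8500

    # The physical NIC must carry jumbo frames as well
    # (eth2 is a placeholder for your 10GE interface):
    ip link set eth2 mtu 9000

After changing these, the agents need a restart, and instances need to renew their DHCP lease (or be rebuilt) before the new MTU shows up on the qbr/qvb/qvo/tap devices.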
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack