We're using VXLAN as transport with nova-network. We have a technical need
to use VXLAN, but weren't happy with the state of Neutron for a production
deploy yet.
We basically set up a VXLAN interface as the master interface for
VlanManager, and then used dot1q tagging for tenant isolation.
(These comments are from work we did with nova-network a while ago; they
are only about the underlying KVM performance, not the gymnastics to get
Neutron to build out the right configuration.)
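The setup described above can be sketched with iproute2 commands. This is an illustrative sketch, not our exact deployment: the interface names, VNI, multicast group, and VLAN ID below are assumed values.

```shell
# Create a VXLAN interface over the physical NIC to act as the transport
# (vxlan42, VNI 42, and the multicast group are assumed example values)
ip link add vxlan42 type vxlan id 42 dev eth0 dstport 4789 group 239.1.1.1
ip link set vxlan42 up

# VlanManager then layers 802.1Q (dot1q) tags on top of that interface for
# tenant isolation, roughly equivalent to:
ip link add link vxlan42 name vxlan42.100 type vlan id 100
ip link set vxlan42.100 up
```

The effect is that each tenant VLAN rides inside the VXLAN tunnel, so the physical fabric only ever sees VXLAN-encapsulated UDP.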
We've seen similar performance (around 3 Gbps) for GRE tunnels on machines
that can easily
We are just using virtio.
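Since the question of virtio vs. vhost_net keeps coming up: a quick way to check on the host whether vhost_net is actually in play. The guest name is a placeholder; the libvirt XML shape is the standard one.

```shell
# Check that the vhost_net kernel module is loaded on the host
lsmod | grep vhost_net

# With libvirt, a guest NIC using virtio + vhost_net looks roughly like:
#   <interface type='bridge'>
#     <model type='virtio'/>
#     <driver name='vhost'/>
#   </interface>
# Inspect a running guest's definition (replace <guest> with the domain name):
virsh dumpxml <guest> | grep -A2 "model type='virtio'"
```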
>
> thanks!
>
> On Wed, Jan 15, 2014 at 1:32 PM, Narayan Desai wrote:
>
>> Are you using virtio, and vhost_net?
>>
>> Also, where are you tuning those parameters, host or guest? The
>> Ethernet-level ones will
> the delay is on the RX side, which means the server is the one responding.
> So, we were thinking about going higher with the ring or txqueuelen
> settings.
>
> Any idea ?
>
> *Alejandro Comisario #melicloud CloudBuilders*
> Arias 3751, Piso 7 (C1430CRG)
> Ciudad de Buenos Aires
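For reference, the ring and txqueuelen knobs mentioned above are typically raised on the host like this. The device name and sizes are examples only; check `ethtool -g` for your NIC's actual maximums before raising anything.

```shell
# Show current and maximum RX/TX hardware ring sizes for the NIC
ethtool -g eth0

# Raise the hardware ring buffers toward their maximums (example values)
ethtool -G eth0 rx 4096 tx 4096

# Raise the software transmit queue length (default is usually 1000)
ip link set dev eth0 txqueuelen 10000
```

Larger rings absorb bursts at the cost of a bit more latency and memory, so it's worth re-running the benchmark after each change rather than maxing everything out blindly.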
We don't have a workload remotely like that (generally, we have a lot more
demand for bandwidth, but we also generally run faster networks than that
as well), but 1k pps sounds awfully low. Like low by several orders of
magnitude.
I didn't measure pps in our benchmarking, but did manage to saturat
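For anyone who wants to measure pps without a dedicated tool, the kernel's per-interface counters are enough for a rough number. This is a generic sketch (not the benchmark we used); the interface name is an assumption, so set IFACE to whatever device you're testing.

```shell
# Sample the RX packet counter twice, one second apart, and report packets/sec
IFACE=eth0   # assumed interface name; change to your device
rx1=$(cat /sys/class/net/$IFACE/statistics/rx_packets)
sleep 1
rx2=$(cat /sys/class/net/$IFACE/statistics/rx_packets)
echo "RX pps: $((rx2 - rx1))"
```

The same statistics directory has tx_packets, rx_dropped, and friends, so the identical two-sample trick works for TX rates and drop rates.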