Hi

On Thu, Nov 6, 2014 at 7:27 AM, Jesse Gross <je...@nicira.com> wrote:

> On Wed, Nov 5, 2014 at 10:58 PM, FengYu LeiDian
> <fengyuleidian0...@gmail.com> wrote:
> > On 2014-11-06 00:08, Jesse Gross wrote:
> >
> >> On Tue, Nov 4, 2014 at 11:03 PM, FengYu LeiDian
> >> <fengyuleidian0...@gmail.com> wrote:
> >>>
> >>> Hi
> >>>
> >>> Env: redhat 6.4, OpenvSwitch-2.1.2, using native
> >>> datapath/linux/openvswitch.ko module
> >>>
> >>> VM1 on host1, VM2 on host2; host1 and host2 are connected by a switch.
> >>> Both VMs have virtio/vhost enabled when launched.
> >>>
> >>> case1:
> >>> VM -> tap -> ovs-bridge -> eth1
> >>>
> >>> case2:
> >>> VM -> tap -> ovs-bridge -> vxlan -> eth1
> >>>
> >>> When using VXLAN in case 2, iperf performance drops by 60%.
> >>
> >>
> >> Your NIC probably doesn't support offloads (checksum, TSO, etc.) in
> >> the presence of VXLAN.
> >
> >
> > These are the default features supported by my 82599 NIC, in both case 1
> > and case 2.
>
> I can assure you that this NIC does not support VXLAN and the stack is
> being forced to do segmentation in software.
>
>
I have seen similar numbers (2.3 Gbps) with bare-metal Linux and OVS 2.0
with VXLAN encapsulation, using an Intel 82599 on Ubuntu 14.04.
The MTU of the bridge with the VXLAN port was set to 1400 bytes.
Do we have reference data showing that the 2+ Gbps number is the expected
performance for a NIC that does not support VXLAN offloads?
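For what it's worth, a quick way to see whether the kernel/driver exposes
tunnel segmentation offload is ethtool (a sketch; the tx-udp_tnl-segmentation
feature flag assumes a reasonably recent kernel, and eth1 stands in for the
uplink NIC):

    # If the feature is reported "off [fixed]" or missing, VXLAN-encapsulated
    # traffic is segmented and checksummed in software.
    ethtool -k eth1 | grep -E 'tx-udp_tnl-segmentation|tcp-segmentation-offload|tx-checksumming'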
This older link shows 4+ Gbps with a 1500-byte MTU for bare-metal VXLAN
performance with OVS (1.8.9):
http://networkstatic.net/configuring-vxlan-and-gre-tunnels-on-openvswitch/
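
For reference, the tunnel setup I used is essentially the standard one
described there (a sketch; the bridge/port names and remote IP are
placeholders, not my actual configuration):

    # Attach a VXLAN tunnel port to the OVS bridge, pointing at the peer host.
    ovs-vsctl add-br br0
    ovs-vsctl add-port br0 vxlan0 -- set interface vxlan0 \
        type=vxlan options:remote_ip=192.168.1.11
    # Lower the bridge MTU so the VXLAN overhead fits inside the physical
    # 1500-byte MTU (I used 1400 in the numbers above).
    ip link set br0 mtu 1400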

Thanks
_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
