Hi Yinpeijun,

On Tue, 2014-05-13 at 02:13 +0000, Yinpeijun wrote:
> >Date: Thu, 19 Dec 2013 14:45:33 +0100
> >From: Igor Sever <i...@xorops.com>
> >Subject: Re: [ovs-discuss] VXLAN problems
> >To: Jesse Gross <je...@nicira.com>
> >Cc: "discuss@openvswitch.org" <discuss@openvswitch.org>
> >Message-ID:
> >     <cafiynd8qd7eu4sv2eoljxka0k05k-ns+ag0reaeudgacejk...@mail.gmail.com>
> >Content-Type: text/plain; charset="iso-8859-1"
> >
> >I managed to solve this by setting VM NIC MTU to 1400, 1450 wasn't enough.
> >
> >Thanks.
> 
> Hello, I use ovs-2.0.0 and tested scp over VXLAN between two VMs, and I
> hit the same problem.
> 
> But I think changing the VM NIC MTU only works around the issue rather
> than solving it. What I want to know is: do we have to change the VM NIC
> MTU or the physical NIC MTU when we use VXLAN for communication?
> 

This is due to the tunnel's outer headers, which add VXLAN + UDP + IP + ETH
= 50 bytes to each inner packet. When a TCP stream is tested between VMs,
the TCP payload is segmented into 1500-byte packets, but once the outer
headers are added each packet grows to 1550 bytes. That slightly exceeds
the physical NIC's MTU, so the hypervisor's IP stack fragments each
encapsulated packet into a big + small packet pair.
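The arithmetic above can be sketched as follows (assuming an IPv4 underlay
with no IP options; the per-header sizes are the standard ones, not taken
from any specific capture):

```shell
# VXLAN encapsulation overhead relative to the inner IP packet:
#   inner Ethernet (14) + outer IPv4 (20) + outer UDP (8) + VXLAN (8)
overhead=$((14 + 20 + 8 + 8))
echo "overhead: $overhead bytes"            # 50

phy_mtu=1500
# An inner packet sized to a 1500-byte VM MTU becomes this on the wire:
echo "outer size: $((phy_mtu + overhead))"  # 1550, exceeds the physical MTU

# Largest VM MTU that keeps the encapsulated packet within the physical MTU:
echo "safe VM MTU: $((phy_mtu - overhead))" # 1450
```

This is why 1450 is the usual VM MTU for VXLAN over a 1500-byte physical
network; an IPv6 underlay would cost a further 20 bytes.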

In my environment, configuring the VM MTU to be the same as the physical
NIC MTU (1500) leads to a 40% performance drop due to this fragmentation.
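For reference, lowering the MTU inside the guest can be done with iproute2
(the interface name here is just a placeholder; use the VM's actual NIC):

```shell
# Inside the VM; eth0 is a hypothetical interface name.
ip link set dev eth0 mtu 1450
```

Note this is a runtime change; to make it persistent you would set it in
the distribution's network configuration as well.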

Does this answer your question?

Best regards,
Han
_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss