>> >Subject: Re: [ovs-discuss] VXLAN problems
>> >To: Jesse Gross <je...@nicira.com>
>> >Cc: "discuss@openvswitch.org" <discuss@openvswitch.org>
>> >Message-ID:
>> > <cafiynd8qd7eu4sv2eoljxka0k05k-ns+ag0reaeudgacejk...@mail.gmail.com>
>> >Content-Type: text/plain; charset="iso-8859-1"
>> >
>> >I managed to solve this by setting the VM NIC MTU to 1400; 1450 wasn't enough.
>> >
>> >Thanks.
>>
>> Hello, I use ovs-2.0.0 and tested scp over VXLAN between two VMs, and I hit the
>> same problem. But I think changing the VM NIC MTU only works around the issue
>> rather than solving it. I want to know: do we have to change the VM NIC MTU or
>> the PHY NIC MTU when we use VXLAN for communication?
>
>This is due to the tunnel outer header, which adds VXLAN + UDP + IP + ETH = 50
>bytes to each inner packet. When a TCP stream is tested between VMs, the TCP
>buffer is segmented into 1500-byte packets, but with the outer header added they
>become 1550 bytes, which slightly exceeds the PHY NIC MTU and results in
>additional IP fragmentation into big + small packet pairs in the hypervisor's
>IP stack.
>
>In my environment, configuring the VM MTU to the same value as the PHY NIC MTU
>(1500) leads to a 40% performance drop.
>
>Does this answer your question?
>
>Best regards,
>Han
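
[Editor's note] Han's 50-byte accounting can be spelled out with a short sketch. The Python snippet below is not part of the original thread; it assumes an IPv4 outer header and no VLAN tags, and simply derives the largest VM (inner) MTU that still fits a given PHY NIC MTU after encapsulation.

# Hypothetical illustration (not from the thread) of the arithmetic in Han's
# reply; header sizes assume an IPv4 outer header and no VLAN tags.

ETH_HDR = 14     # Ethernet header of the encapsulated inner frame
OUTER_IPV4 = 20  # outer IPv4 header, no options
OUTER_UDP = 8    # outer UDP header
VXLAN_HDR = 8    # VXLAN header

VXLAN_OVERHEAD = ETH_HDR + OUTER_IPV4 + OUTER_UDP + VXLAN_HDR  # = 50 bytes

def max_vm_mtu(phy_nic_mtu: int) -> int:
    """Largest VM (inner) MTU whose encapsulated packet still fits the PHY NIC MTU."""
    return phy_nic_mtu - VXLAN_OVERHEAD

if __name__ == "__main__":
    # A 1500-byte inner packet becomes 1500 + 50 = 1550 bytes after
    # encapsulation and is fragmented when the PHY NIC MTU is 1500;
    # the largest inner MTU that avoids this is 1450.
    print(max_vm_mtu(1500))  # -> 1450

Note that the first poster in the thread reported needing an MTU of 1400 rather than 1450 in practice; the sketch only reproduces the 50-byte accounting from Han's reply.
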
Thank you for your reply, Han. I agree with your analysis and understand the
process. My remaining question is whether there is clear documentation, or a
rule, saying that the PHY or VM NIC MTU needs to be modified when we use OVS
VXLAN for communication. And if a change is needed, should we increase the PHY
NIC MTU or decrease the VM NIC MTU?

_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
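
[Editor's note] Purely as an illustration (again, not from the thread), the sketch below restates the two options in the follow-up question above using Han's 50-byte figure: either raise the PHY NIC MTU so the encapsulated packet fits, or lower the VM NIC MTU so the encapsulated packet fits the existing PHY NIC MTU.

# Hypothetical helper restating the two options in the follow-up question,
# using the 50-byte VXLAN overhead from Han's reply.

VXLAN_OVERHEAD = 50  # bytes: VXLAN + UDP + IP + ETH, per the reply above

def check_mtus(vm_mtu: int, phy_mtu: int) -> None:
    """Report whether VXLAN-encapsulated VM traffic fits the PHY NIC MTU."""
    if vm_mtu + VXLAN_OVERHEAD <= phy_mtu:
        print(f"OK: {vm_mtu} + {VXLAN_OVERHEAD} <= {phy_mtu}; no fragmentation expected")
    else:
        print(f"Encapsulated packets exceed the PHY NIC MTU of {phy_mtu}; either")
        print(f"  - raise the PHY NIC MTU to at least {vm_mtu + VXLAN_OVERHEAD}, or")
        print(f"  - lower the VM NIC MTU to at most {phy_mtu - VXLAN_OVERHEAD}")

if __name__ == "__main__":
    check_mtus(vm_mtu=1500, phy_mtu=1500)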