>> >This is due to the tunnel outer header, which adds VXLAN + UDP + IP + ETH = 
>> >50 bytes to each inner packet. When a TCP stream is tested between VMs, the 
>> >TCP buffer is segmented into 1500-byte packets, but with the outer header 
>> >added each becomes 1550 bytes, which slightly exceeds the PHY NIC MTU and 
>> >results in additional IP fragmentation into big + small packet pairs in the 
>> >hypervisor's IP stack.
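For reference, the arithmetic quoted above can be sketched as a few lines of Python (the header sizes are the standard ones for an Ethernet-in-VXLAN encapsulation):

```python
# Outer-header sizes in bytes for VXLAN encapsulation:
# outer Ethernet (14) + outer IPv4 (20) + UDP (8) + VXLAN (8).
ETH, IP, UDP, VXLAN = 14, 20, 8, 8
overhead = ETH + IP + UDP + VXLAN   # 50 bytes total

inner = 1500                        # inner packet, sized to the VM MTU
outer = inner + overhead            # size on the physical wire
phy_mtu = 1500

# The encapsulated packet exceeds the PHY NIC MTU, so the
# hypervisor's IP stack must fragment it.
print(overhead, outer, outer > phy_mtu)  # 50 1550 True
```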
>> >
>> >In my environment, configuring the VM MTU the same as the PHY NIC MTU 
>> >(1500) leads to a 40% performance drop.
>> >
>> >Does this answer your question?
>> >
>> >Best regards,
>> >Han
>> 
>> Thank you for your reply, Han. I agree with your analysis and understand the 
>> process. My confusion is whether there is clear documentation or a rule 
>> stating that the PHY or VM NIC MTU needs to be modified when we use OVS 
>> VXLAN for communication?
>> 
>> If needed, should we increase the PHY NIC MTU or decrease the VM NIC MTU?
>>  
>>  
>
>This problem also exists with other tunnel protocols such as GRE. We should 
>always increase the PHY NIC MTU when possible.
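As a minimal sketch of the two options, using iproute2 (the interface names `eth0` here are placeholders for your actual PHY and VM NICs, and 1550/1450 assume the 50-byte VXLAN overhead discussed above):

```shell
# Preferred: raise the PHY NIC MTU on the hypervisor so a
# 1500-byte inner packet plus 50 bytes of VXLAN overhead fits.
ip link set dev eth0 mtu 1550

# Alternative (run inside the VM): lower the VM MTU so the
# encapsulated packet stays within a 1500-byte physical MTU.
ip link set dev eth0 mtu 1450
```

Raising the PHY MTU requires that every device on the underlay path (switches, the peer hypervisor's NIC) also supports the larger frame size.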
>
>In fact, the MTU specified by the VM doesn't make much sense in a virtualized 
>environment. Maybe you can try this patch if you are interested:
>
>http://openvswitch.org/pipermail/dev/2014-May/040027.html
>
>You don't need to care about the VM MTU setting with this patch, and the best 
>part is that it will be much faster even compared with a properly adjusted MTU.


Thanks again. It seems that the patch can solve my problem; I will read the 
patch and test it in my environment.

Looking forward to further exchange.





_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
