Hi folks,

I'm trying to figure out why there is no documentation on the need to
configure the MTU inside the VM when using VXLAN as the guest isolation method.


Right now, by default/design, traffic/MTU flows like this:

eth0 inside the VM is 1500 bytes by default --> vnetY (MTU 1450) --> virbrX
(MTU 1450) --> vxlan (MTU 1450) --> ethX (MTU 1500) --> physical network (in
this case I use ethX as the traffic label instead of a bridge; the vxlan
interface is created on top of the ethX interface)
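For reference, the 1450 figure is just the standard 1500-byte Ethernet MTU minus the VXLAN-over-IPv4 encapsulation overhead (outer IPv4 + UDP + VXLAN header, plus the inner Ethernet header carried inside the tunnel) — a quick sketch of the arithmetic:

```shell
# VXLAN-over-IPv4 overhead per packet (RFC 7348 framing):
OUTER_IPV4=20   # outer IPv4 header
OUTER_UDP=8     # outer UDP header
VXLAN_HDR=8     # VXLAN header (flags + VNI)
INNER_ETH=14    # inner Ethernet header, carried inside the tunnel

OVERHEAD=$(( OUTER_IPV4 + OUTER_UDP + VXLAN_HDR + INNER_ETH ))  # 50 bytes

# Largest inner MTU that fits through the tunnel without fragmentation:
echo $(( 1500 - OVERHEAD ))   # 1450 -> why vnet/vxlan default to MTU 1450
echo $(( 1600 - OVERHEAD ))   # 1550 -> a 1600-byte uplink MTU leaves room for 1500 inside
```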

Inside the VM, I can get an IP address via DHCP and use ping, because those
generate packets small enough to fit the 1450-byte tunnel MTU.
From within the VM, SSH/SCP login works, but SCP data transfer fails, yum
update fails, etc.
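A quick way to confirm that this is an MTU problem from inside the guest (assuming a Linux guest with iputils ping; the gateway address here is just an example):

```shell
# Send a non-fragmentable (DF-set) ICMP echo.
# Payload 1422 + 8 (ICMP) + 20 (IP) = 1450 bytes on the wire,
# so this should pass through the 1450-byte tunnel:
ping -c 1 -M do -s 1422 192.168.122.1

# Payload 1472 + 28 = 1500 bytes on the wire; while the guest MTU is
# still 1500, this should fail with "Frag needed" / "Message too long":
ping -c 1 -M do -s 1472 192.168.122.1
```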

Any other traffic from the VM to the outside does not work — no other
connectivity at all — until I configure the MTU inside the VM to be less
than 1500...
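For completeness, the per-VM workaround I'm describing is just this (run inside the guest; a non-persistent sketch — the interface name is whatever the guest uses):

```shell
# Lower the guest interface MTU to the 1450-byte tunnel MTU.
# This is lost on reboot; to persist it, set MTU=1450 in the distro's
# network configuration (e.g. the interface config file) instead.
ip link set dev eth0 mtu 1450
```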

What is the recommended way to configure VXLAN? The documentation only asks
for supported kernel and iproute2 versions, says to use ethX or bridgeX as
the traffic label and give it an IP — and that's it.

There must be some clear decision on how to make this work:

1) either don't bother the client with configuring the MTU inside the
VM/template, and keep the MTU on the vxlan and vnet interfaces at 1500
bytes — but ask the administrator to increase the MTU to 1600 on the
physical interface ethX or bridgeX

2) or, as is currently the case, use a 1450-byte MTU on vnet/vxlan, and
leave the user with the trouble of configuring the MTU for each of his
VMs/templates.
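As a sketch of option 1 (assuming an ethX uplink whose switches accept ~1600-byte frames; the interface names, VNI, and multicast group below are just examples):

```shell
# 1) Raise the physical interface MTU so the 50-byte VXLAN overhead
#    fits on top of a full 1500-byte inner frame:
ip link set dev eth0 mtu 1600

# 2) Create the VXLAN interface on top of it with a full 1500-byte MTU,
#    so guests can keep their default MTU untouched:
ip link add vxlan100 type vxlan id 100 dev eth0 dstport 4789 group 239.1.1.1
ip link set dev vxlan100 mtu 1500 up
```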


Am I missing something here, perhaps?
Is there any more complete documentation on this?

Best
-- 

Andrija Panić
