Ihar Hrachyshka wrote:
> Actually, we already have 1450 for network_device_mtu for the job since:
> 
> https://review.openstack.org/#/c/267847/4/devstack-vm-gate.sh
> 

Ah! Forgot about that one. Cool.

> Also, I added an interface state dump to worlddump, and here is what the
> main node's networking setup looks like:
> 
> http://logs.openstack.org/59/265759/20/experimental/gate-grenade-dsvm-neutron-multinode/d64a6e6/logs/worlddump-2016-01-30-164508.txt.gz
> 
> br-ex: mtu = 1450
> inside router: qg mtu = 1450, qr = 1450
> 
> So we should be fine in this regard. I also set up devstack locally to
> enforce network_device_mtu, and it seems to pass packets of size 1450
> through. So it's probably the tunneling of packets to the subnode that
> fails for us, not the local router-to-tap bits.
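> 
> (A quick way to sanity-check that a path really passes a given MTU is to
> ping with the don't-fragment bit set, sizing the ICMP payload as the target
> MTU minus the IPv4 and ICMP headers. A sketch, with a placeholder subnode
> address:
> 
> ```shell
> # ICMP payload = target MTU - 20 (IPv4 header) - 8 (ICMP header).
> mtu=1450
> payload=$((mtu - 20 - 8))
> # <subnode-ip> is a placeholder for the actual subnode address.
> echo "ping -M do -s ${payload} -c 3 <subnode-ip>"
> ```
> 
> With DF set, anything over the path MTU fails outright instead of being
> silently fragmented.)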

Yeah! That's right. So is it the case that we need to do 1500 less the
GRE overhead less the VXLAN overhead? So 1446? Since the traffic gets
encapsulated in VXLAN, then encapsulated in GRE (yo dawg, I heard u like
tunneling).

http://baturin.org/tools/encapcalc/
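
For the record, a back-of-the-envelope sketch of the arithmetic (the
per-layer overheads here are assumptions, the usual IPv4 header sizes; with
a 24-byte GRE overhead that includes its outer IP header the stacked figure
comes out to 1426 rather than 1446, while charging only the 4-byte base GRE
header gives 1446, so it depends which headers you count against the MTU):

```shell
# Per-layer overheads (assumed; outer Ethernet not charged against the MTU):
vxlan=50   # 14 inner Ethernet + 20 IP + 8 UDP + 8 VXLAN
gre=24     # 20 outer IP + 4 base GRE header
echo "VXLAN only:    $((1500 - vxlan))"         # 1450
echo "VXLAN-in-GRE:  $((1500 - gre - vxlan))"   # 1426
```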


> 
> I also see br-tun having an MTU of 1500. Is that a problem? Probably not,
> but I admit I'm still missing a lot on this topic.

Dunno. Maybe?

> I also see a qg-2c68fb65-21 device in the worlddump output above, in the
> global namespace. The device has mtu = 1500. Which router does that device
> belong to?

Good question.

-- 
Sean M. Collins

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
