On Tue, Dec 2, 2014 at 9:41 AM, Thomas Graf <tg...@suug.ch> wrote:
> On 12/02/14 at 07:34pm, Michael S. Tsirkin wrote:
>> On Tue, Dec 02, 2014 at 05:09:27PM +0000, Thomas Graf wrote:
>> > On 12/02/14 at 01:48pm, Flavio Leitner wrote:
>> > > What about containers or any other virtualization environment that
>> > > doesn't use Virtio?
>> >
>> > The host can dictate the MTU in that case for both veth or OVS
>> > internal which would be primary container plumbing techniques.
>>
>> It typically can't do this easily for VMs with emulated devices:
>> real ethernet uses a fixed MTU.
>>
>> IMHO it's confusing to suggest MTU as a fix for this bug, it's
>> an unrelated optimization.
>> ICMP_DEST_UNREACH/ICMP_FRAG_NEEDED is the right fix here.
>
> PMTU discovery only resolves the issue if an actual IP stack is
> running inside the VM. This may not be the case at all.
It's also only really a correct thing to do if the ICMP packet is coming
from an L3 node. If you are doing straight bridging, then you have to
resort to hacks like the ones OVS had before, which I agree are not
particularly desirable.

> I agree that exposing an MTU towards the guest is not applicable
> in all situations, in particular because it is difficult to decide
> what MTU to expose. It is a relatively elegant solution in a lot
> of virtualization host cases hooked up to an orchestration system
> though.

I also think this is the right thing to do as a common case optimization,
and I know other platforms (such as Hyper-V) do it. It's not a complete
solution, so we still need the original patch in this thread to handle
things transparently.
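
For anyone following the thread who is less familiar with the PMTU mechanics
referenced above, below is a minimal, self-contained userspace sketch (purely
illustrative, not taken from the patch in this thread) of the ICMP
"Destination Unreachable / Fragmentation Needed" message an L3 hop sends back
when a DF-marked packet exceeds the egress MTU; this is the signal a guest
needs a running IP stack to act on. The 1450-byte next-hop MTU is a made-up
value (1500 minus some assumed tunnel overhead).

/* Illustrative only: build the ICMP type 3 / code 4 ("Fragmentation
 * Needed, DF set") header that an L3 hop sends back for PMTU discovery.
 * A pure L2 bridge has no address to source such a message from, which
 * is the limitation discussed above. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>   /* htons/ntohs */

struct icmp_frag_needed {
    uint8_t  type;      /* 3 = Destination Unreachable */
    uint8_t  code;      /* 4 = Fragmentation Needed and DF set */
    uint16_t checksum;
    uint16_t unused;
    uint16_t next_mtu;  /* next-hop MTU, RFC 1191 */
    /* on the wire this is followed by the original IP header
     * plus the first 8 bytes of the offending payload */
};

/* Standard Internet checksum over the ICMP message. */
static uint16_t icmp_cksum(const void *data, size_t len)
{
    const uint16_t *p = data;
    uint32_t sum = 0;

    for (; len > 1; len -= 2)
        sum += *p++;
    if (len)
        sum += *(const uint8_t *)p;
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

int main(void)
{
    struct icmp_frag_needed msg;

    memset(&msg, 0, sizeof(msg));
    msg.type     = 3;            /* ICMP_DEST_UNREACH */
    msg.code     = 4;            /* ICMP_FRAG_NEEDED */
    msg.next_mtu = htons(1450);  /* hypothetical: 1500 minus tunnel overhead */
    msg.checksum = icmp_cksum(&msg, sizeof(msg));

    printf("type %u code %u next-hop mtu %u checksum 0x%04x\n",
           (unsigned)msg.type, (unsigned)msg.code,
           (unsigned)ntohs(msg.next_mtu), (unsigned)msg.checksum);
    return 0;
}

In the kernel's IPv4 forwarding path this is roughly what
icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu)) produces;
a guest without an IP stack never sees or acts on it, which is why the
transparent handling in the original patch is still needed even where an
MTU can be advertised.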