For me it definitely appears to be something internal to Open vSwitch, KVM,
or the tap driver, as I can send MTU 1500 pings to external addresses from
inside the VM, which to my limited understanding indicates the extra bytes
are not present when the traffic leaves the hypervisor node.
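
For anyone who wants to reproduce that check, something along these lines
should confirm whether full-sized frames make it out of the node (the
destination address is just a placeholder):

  # From inside the VM: send a 1500-byte IP packet (1472 bytes of ICMP
  # payload + 28 bytes of IP/ICMP headers) with the Don't Fragment bit set.
  ping -M do -s 1472 -c 4 192.0.2.1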

I worked around this issue by setting the MTU for my VM interfaces on the
hypervisor to 1503. Reducing the MTU on the interfaces inside the VMs had
no effect, i.e. I was still seeing MTU 1501 packets even when I set the
guest MTU as low as 1000.
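
In case it helps anyone, the workaround boils down to something like this on
the hypervisor (the interface name is hypothetical; substitute your own
vnet/tap device):

  # Raise the MTU of the tap/vnet interface backing the VM so the
  # slightly-oversized frames are no longer dropped.
  ip link set dev vnet0 mtu 1503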

My setup, as detailed in my other post, is essentially:
Bridge for external traffic with an OVS bond (MTU 9000) - patch port -
bridge for VMs - a port for each VM with a tagged VLAN
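
Roughly the equivalent ovs-vsctl commands, in case the topology matters (all
bridge, bond, and port names here are made up for illustration, and VLAN 100
is just an example):

  # External bridge with a bonded uplink running at MTU 9000
  ovs-vsctl add-br br-ext
  ovs-vsctl add-bond br-ext bond0 eth0 eth1
  ip link set dev eth0 mtu 9000
  ip link set dev eth1 mtu 9000

  # Internal bridge for the VMs, connected to br-ext with a patch port pair
  ovs-vsctl add-br br-vm
  ovs-vsctl add-port br-ext patch-ext -- set interface patch-ext type=patch options:peer=patch-vm
  ovs-vsctl add-port br-vm patch-vm -- set interface patch-vm type=patch options:peer=patch-ext

  # One access port per VM, tagged with that VM's VLAN
  ovs-vsctl add-port br-vm vnet0 tag=100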

2015-06-24 5:18 GMT+02:00 Johnson Wu <john...@snoopy.org>:

> It's sheer coincidence that the posts are only 10 minutes apart.
>
> In my setup all interfaces have MTU defined to be 1500 before I applied
> the workaround.
>
> I tried running tcpdump on the client to see if the received frames are
> over 1516 bytes and the IP packets over 1500... Nope.
>
> Will try a simple Linux bridge and see.
>
> Thanks.
>
>
>
>
> Sent from my iPhone
>
>
> > On Jun 23, 2015, at 7:16 PM, Jesse Gross <je...@nicira.com> wrote:
> >
> >> On Fri, Jun 19, 2015 at 1:52 PM, Johnson L. Wu <john...@snoopy.org>
> >> wrote:
> >>
> >>
> >> Greetings,
> >>
> >>
> >>
> >> I am currently running
> >>
> >> 3.16.0-4-amd64
> >>
> >> No LSB modules are available.
> >>
> >> Distributor ID: Debian
> >>
> >> Description:    Debian GNU/Linux 8.1 (jessie)
> >>
> >> Release:        8.1
> >>
> >> Codename:       jessie
> >>
> >> ovs-vsctl (Open vSwitch) 2.3.1
> >>
> >> Compiled Jun 15 2015 19:30:36
> >>
> >> DB Schema 7.6.2
> >>
> >> libvirtd (libvirt) 1.2.9
> >>
> >>
> >>
> >> and I am seeing an MTU issue where packets coming in from the physical
> >> side are vanilla 1500 bytes.
> >>
> >> Once it gets to the virtual NIC I see the following in dmesg:
> >>
> >>
> >>
> >> [ 6754.472356] openvswitch: vnet5: dropped over-mtu packet: 1502 > 1500
> >>
> >> [ 6754.472363] openvswitch: inter-1tom: dropped over-mtu packet: 1502 > 1500
> >>
> >>
> >>
> >> If I set BOTH the vnet5 vNIC MTU to 1502 AND the guest OS MTU to 1502,
> >> things will flow.
> >>
> >> Otherwise many protocols with full-sized packets will see timeouts.
> >>
> >>
> >>
> >> Wireshark loaded on the client didn’t help, as a trace done on the
> >> client itself sees the IP packets coming in (AFTER adjusting MTU to
> >> 1502) as 1500.
> >>
> >>
> >>
> >> I think OVS must be doing something that adds a tag to the packets.
> >>
> >> Anyone with the same observation?
> >
> > I've never heard of this problem before, and there were two threads
> > started on it within 10 minutes, so I'm combining them on the
> > assumption that they are somehow related.
> >
> > It's not really obvious to me how or why OVS would be increasing the
> > size of the packet by 1 or 2 bytes, especially if the packet is just
> > flowing through. As I said, I've never heard this reported before. In
> > the other message, it looks like OVS should be removing a VLAN tag. Is
> > that true in both cases?
> >
> > Similarly, in the other thread, it looks like the MTUs of the physical
> > interfaces are 9000, which makes it seem possible that these packets
> > are actually coming across the wire. What happens if the MTUs are all
> > the same, as required by Ethernet specs?
> >
> > Finally, it would be helpful to try this with the Linux bridge instead
> > of OVS, if possible, since it seems likely that the cause may be
> > another component.
>
_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
