Hey Tobias,

Thanks for providing more info.  For now I'll only comment on the
'dump-flows' output.  Here is what I found 'loop-like':

"""
skb_priority(0),in_port(p258p2),skb_mark(0/0),eth(src=00:00:5e:00:02:05,dst=33:33:00:00:00:12),eth_type(0x8100),vlan(vid=303,pcp=0),encap(eth_type(0x86dd),ipv6(src=fe80::d6ca:6dff:fe01:ab2b/::,dst=ff02::12/::,label=0/0,proto=112/0,tclass=0/0,hlimit=255/0,frag=no/0xff)),
packets:217320, bytes:20428080, used:0.548s,
actions:pop_vlan,24,29,push_vlan(vid=303,pcp=0),1

skb_priority(0),in_port(p258p2),skb_mark(0/0),eth(src=00:00:5e:00:02:06,dst=33:33:00:00:00:12),eth_type(0x8100),vlan(vid=701,pcp=0),encap(eth_type(0x86dd),ipv6(src=fe80::d6ca:6dff:fe01:ab2b/::,dst=ff02::12/::,label=0/0,proto=112/0,tclass=0/0,hlimit=255/0,frag=no/0xff)),
packets:217320, bytes:20428080, used:0.548s,
actions:pop_vlan,27,13,18,push_vlan(vid=701,pcp=0),1

skb_priority(0),in_port(p258p1),skb_mark(0/0),eth(src=00:00:5e:00:02:06,dst=33:33:00:00:00:12),eth_type(0x8100),vlan(vid=701,pcp=0),encap(eth_type(0x86dd),ipv6(src=fe80::d6ca:6dff:fe01:ab2b/::,dst=ff02::12/::,label=0/0,proto=112/0,tclass=0/0,hlimit=255/0,frag=no/0xff)),
packets:217320, bytes:20428080, used:0.548s,
actions:pop_vlan,27,13,18,push_vlan(vid=701,pcp=0),1

skb_priority(0),in_port(p258p1),skb_mark(0/0),eth(src=00:00:5e:00:02:05,dst=33:33:00:00:00:12),eth_type(0x8100),vlan(vid=303,pcp=0),encap(eth_type(0x86dd),ipv6(src=fe80::d6ca:6dff:fe01:ab2b/::,dst=ff02::12/::,label=0/0,proto=112/0,tclass=0/0,hlimit=255/0,frag=no/0xff)),
packets:217320, bytes:20428080, used:0.548s,
actions:pop_vlan,24,29,push_vlan(vid=303,pcp=0),1

"""
Could you verify the output ports using 'ovs-dpctl show'?
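On a typical system that output looks roughly like this (the datapath
name and port mappings below are illustrative, not from your host):

"""
$ ovs-dpctl show
system@ovs-system:
        lookups: hit:1234 missed:56 lost:0
        flows: 42
        port 0: ovs-system (internal)
        port 1: eth0
        port 24: vnet0
        ...
"""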

Thanks,
Alex Wang

On Fri, Mar 6, 2015 at 2:08 AM, Tobias Brunner <tobias.brun...@vshn.ch>
wrote:

> Hi everyone,
>
> Thanks a lot for taking care of this issue!
>
> > This is a good point.  I suspect there is a loop in the setup causing the
> > bounce-back.
>
> Yes, it really looks like there is a loop somewhere.
>
> > > I cannot yet reconcile this with the previous message of the sending
> > > VM receiving the traffic with a hairpin.
> >
> > Besides checking the VM, I'd suggest you check on the HV and run
> > 'ovs-dpctl dump-flows -m' (this will show you how the packets are
> > forwarded in the kernel) right after you configure IPv6 on the VM.
> > 'ovs-dpctl show' will give you the mapping between port number and
> > port name.  So, we can see how the other HVs process the IPv6
> > message and check if there is a loop.
>
> I did that, but I can't find any usable information in the "dump-flows"
> output.  I've attached the file.
>
> > > Are you using OVS tunnels?  If so, one idea could be to connect VM
> > > vifs to an OVS bridge but use Linux bonding to carry the tunneled
> > > traffic out of the machine.  Another option to try out is not to use
> > > LACP bonds but rather something like active-backup (man
> > > ovs-vswitchd.conf.db -> Bonding Configuration).
> >
> > This is a good suggestion.
>
> No, we are not using any tunnels.  And I'm not able to change from LACP
> to active-backup on the affected systems because they're running in
> production.  We also don't have any lab devices available to experiment
> with; this is too bad, I know =(
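>
> (For reference, my understanding is that the switch itself would be a
> one-liner like the following, where 'bond0' stands for the actual OVS
> bond port name; we just can't risk it in production:
>
>   ovs-vsctl set port bond0 bond_mode=active-backup
> )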
>
> Some more information about the setup: each host is connected with two
> network interfaces, one to each of two Brocade ICX6650 switches which
> form an MCT cluster.  This lets us use LACP to bond the two network
> connections.
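>
> (For reference, the OVS side of such a bond is typically created with
> something like the following; the bridge name is illustrative, and I'm
> assuming p258p1/p258p2 are indeed the bond members:
>
>   ovs-vsctl add-bond br0 bond0 p258p1 p258p2 lacp=active
> )
>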
> We have some other servers connected in exactly the same way to the same
> switches; they don't run any VMs and don't have OVS installed.  They use
> Linux bonding to form the LACP channel and don't suffer from this DAD
> trouble.  Here is the bonding status from one of them:
>
> Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
>
> Bonding Mode: IEEE 802.3ad Dynamic link aggregation
> Transmit Hash Policy: layer2 (0)
> MII Status: up
> MII Polling Interval (ms): 100
> Up Delay (ms): 0
> Down Delay (ms): 0
>
> 802.3ad info
> LACP rate: fast
> Min links: 0
> Aggregator selection policy (ad_select): stable
> Active Aggregator Info:
>         Aggregator ID: 1
>         Number of ports: 2
>         Actor Key: 33
>         Partner Key: 30203
>         Partner Mac Address: 01:80:c2:xx:xx:xx
>
> Slave Interface: p262p1
> MII Status: up
> Speed: 10000 Mbps
> Duplex: full
> Link Failure Count: 0
> Permanent HW addr: 90:e2:ba:xx:xx:xx
> Aggregator ID: 1
> Slave queue ID: 0
>
> Slave Interface: p262p2
> MII Status: up
> Speed: 10000 Mbps
> Duplex: full
> Link Failure Count: 0
> Permanent HW addr: 90:e2:ba:xx:xx:xx
> Aggregator ID: 1
> Slave queue ID: 0
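>
> (For completeness, that status corresponds to standard Linux bonding
> driver options along these lines; the exact configuration mechanism
> varies by distro:
>
>   mode=802.3ad miimon=100 lacp_rate=fast
> )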
>
> Cheers,
> Tobias
>
> --
> Tobias Brunner
> Linux and Network Engineer
>