On 22 Mar 2023, at 14:42, Alex Zetaeffesse wrote:

> I will do my best in describing the goal and the issue
> The ports connected to the bridge (I shrunk the output for readability) are
> the following
>
> root@pve:~# ovs-ofctl show vmbr1
> [CUT]
>  2(sv_z1ad0101): addr:1e:9a:25:9b:22:0a
>  3(sv_z1ad0102): addr:ba:46:f5:b8:4a:63
>  4(sv_z1ad0103): addr:56:75:a3:a8:f7:89
>  5(sv_z1ad0104): addr:9e:a9:30:0f:a2:53
>  6(sv_z1ad4064): addr:26:de:29:e0:8a:12
>  7(enp6s0f0): addr:00:1c:c4:47:63:31
>  8(enp7s0f0): addr:00:1c:c4:47:63:33
>  9(bond0): addr:0a:aa:d5:40:6d:9a
>  LOCAL(vmbr1): addr:00:1c:c4:47:63:31
> OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
>
> First of all, the ports enp6s0f0 and enp7s0f0 (the physical members of
> bond0) should not be there.

I would suggest creating a kernel bond with enp6s0f0 and enp7s0f0 as members,
and adding only that bond to the bridge.
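
In case it helps, here is a rough, untested sketch of what that could look
like (interface names are taken from your output; the 802.3ad mode is just an
assumption, use whatever matches your switch configuration):

  # remove the physical NICs from the OVS bridge
  ovs-vsctl del-port vmbr1 enp6s0f0
  ovs-vsctl del-port vmbr1 enp7s0f0

  # if the kernel bond does not exist yet, create it and enslave the NICs
  ip link add bond0 type bond mode 802.3ad
  ip link set enp6s0f0 down && ip link set enp6s0f0 master bond0
  ip link set enp7s0f0 down && ip link set enp7s0f0 master bond0
  ip link set bond0 up

  # add only the bond to the bridge (skip this if it is already a port,
  # as port 9 is in your ovs-ofctl output)
  ovs-vsctl add-port vmbr1 bond0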

> The goal is to have just sv_xxxxx and bond0. Bond0 is supposed to receive
> 802.1ad traffic, i.e. with ethertype 0x88a8.
> Then frames should be forwarded to each interface sv_xxxxx based on the
> 802.1ad tag, i.e. frames tagged with tag 101 should be forwarded to the
> port q1ad0101, frames with tag 102 should be forwarded to the port
> q1ad0102, and so on.

Your sv_xx ports should be native tagged ports; for details, see the VLAN FAQ
at https://docs.openvswitch.org/en/latest/faq/vlan/.

Your bond port should be a trunk port.
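
For reference, a minimal, untested sketch of those two settings (the VLAN
numbers are only guesses based on your port names, and the 802.1ad/0x88a8 part
of your setup may need more than the plain vlan_mode settings shown here):

  ovs-vsctl set port sv_z1ad0101 vlan_mode=native-tagged tag=101
  ovs-vsctl set port sv_z1ad0102 vlan_mode=native-tagged tag=102
  # ...and so on for the remaining sv_ ports...
  ovs-vsctl set port bond0 vlan_mode=trunk trunks=101,102,103,104,4064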

If you use the default NORMAL rule in OpenFlow (check with ovs-ofctl dump-flows
vmbr1), this should work just fine. You can see the FDB table with the learned
VLAN tags (see the ovs-appctl fdb/* commands, e.g. fdb/show).
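
For example, with the bridge name from your output:

  ovs-ofctl dump-flows vmbr1    # the default setup shows a single actions=NORMAL flow
  ovs-appctl fdb/show vmbr1     # learned MAC addresses with the VLAN they were seen on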

> I think I have to remove the physical ones from vmbr1 and possibly add
> the right flow. I don't think this is the right way to do it, since
> everything should be accomplished through the Proxmox interface, but at
> least I will have a working scenario to discuss in the Proxmox forum.
>
> Thanks,
>
> Alex
>
>
>
> On Wed, Mar 22, 2023 at 10:54 AM Eelco Chaudron <echau...@redhat.com> wrote:
>
>>
>>
>> On 22 Mar 2023, at 10:49, Alex Zetaeffesse wrote:
>>
>>> Thank you Eelco,
>>>
>>> the thing is that I'm troubleshooting connectivity issues, and for the
>>> moment the problem seems to be right in vmbr1.
>>> Hence I'm trying to understand whether vmbr1 works as expected.
>>
>> For OVS to work the bridge does not need to be up. Traffic should just
>> work :)
>>
>> What are you trying to do, and what does not work?
>>
>> You can check the following things:
>>
>> - Are the ports configured as needed? (ovs-vsctl show)
>> - Are your OpenFlow rules as needed? (ovs-ofctl dump-flows <br>)
>> - If traffic is not working, check the dp rules with
>> ovs-appctl dpctl/dump-flows (while you are actively trying to send
>> traffic, as these rules time out after 10 seconds).
>>
>> Cheers,
>>
>> Eelco
>>
>>
>>
>>> On Wed, Mar 22, 2023 at 10:42 AM Eelco Chaudron <echau...@redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On 21 Mar 2023, at 23:42, Alex Zetaeffesse via discuss wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> First of all, let me be clear on the fact that I'm the first who doesn't
>>>>> want to run a mixed environment :-)
>>>>>
>>>>> In my ProxMox environment I have the following:
>>>>>
>>>>> root@pve:~# ip link show dev vmbr1
>>>>> 11: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
>>>>> UNKNOWN mode DEFAULT group default qlen 1000
>>>>>     link/ether 00:1c:c4:47:63:31 brd ff:ff:ff:ff:ff:ff
>>>>
>>>> This is the same on my Fedora and RHEL systems. The bridge shows DOWN if I
>>>> bring it down, and is in UNKNOWN state when it’s up.
>>>>
>>>> [vagrant@f35 ~]$ sudo ovs-vsctl add-br br1
>>>> [vagrant@f35 ~]$ ip link show br1
>>>> 6: br1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT
>>>> group default qlen 1000
>>>>     link/ether c6:c4:18:57:ba:48 brd ff:ff:ff:ff:ff:ff
>>>> [vagrant@f35 ~]$ sudo ip link set br1 up
>>>> [vagrant@f35 ~]$ ip link show br1
>>>> 6: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
>>>> UNKNOWN mode DEFAULT group default qlen 1000
>>>>     link/ether c6:c4:18:57:ba:48 brd ff:ff:ff:ff:ff:ff
>>>>
>>>> This might be a bug no one ever noticed :)
>>>>
>>>> //Eelco
>>>>
>>>>> root@pve:~# ovs-ofctl show vmbr1
>>>>> OFPT_FEATURES_REPLY (xid=0x2): dpid:0000001cc4476331
>>>>> n_tables:254, n_buffers:0
>>>>> capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
>>>>> actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src
>>>>> mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
>>>>>  2(sv_z1ad0101): addr:1e:9a:25:9b:22:0a
>>>>>      config:     0
>>>>>      state:      0
>>>>>      speed: 0 Mbps now, 0 Mbps max
>>>>>  3(sv_z1ad0102): addr:ba:46:f5:b8:4a:63
>>>>>      config:     0
>>>>>      state:      0
>>>>>      speed: 0 Mbps now, 0 Mbps max
>>>>>  4(sv_z1ad0103): addr:56:75:a3:a8:f7:89
>>>>>      config:     0
>>>>>      state:      0
>>>>>      speed: 0 Mbps now, 0 Mbps max
>>>>>  5(sv_z1ad0104): addr:9e:a9:30:0f:a2:53
>>>>>      config:     0
>>>>>      state:      0
>>>>>      speed: 0 Mbps now, 0 Mbps max
>>>>>  6(sv_z1ad4064): addr:26:de:29:e0:8a:12
>>>>>      config:     0
>>>>>      state:      0
>>>>>      speed: 0 Mbps now, 0 Mbps max
>>>>>  7(enp6s0f0): addr:00:1c:c4:47:63:31
>>>>>      config:     0
>>>>>      state:      0
>>>>>      current:    1GB-FD COPPER AUTO_NEG
>>>>>      advertised: 10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD COPPER AUTO_NEG
>>>>>      supported:  10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD COPPER AUTO_NEG
>>>>>      speed: 1000 Mbps now, 1000 Mbps max
>>>>>  8(enp7s0f0): addr:00:1c:c4:47:63:33
>>>>>      config:     0
>>>>>      state:      0
>>>>>      current:    1GB-FD COPPER AUTO_NEG
>>>>>      advertised: 10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD COPPER AUTO_NEG
>>>>>      supported:  10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD COPPER AUTO_NEG
>>>>>      speed: 1000 Mbps now, 1000 Mbps max
>>>>>  9(bond0): addr:0a:aa:d5:40:6d:9a
>>>>>      config:     0
>>>>>      state:      0
>>>>>      speed: 0 Mbps now, 0 Mbps max
>>>>>  LOCAL(vmbr1): addr:00:1c:c4:47:63:31
>>>>>      config:     0
>>>>>      state:      0
>>>>>      speed: 0 Mbps now, 0 Mbps max
>>>>> OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
>>>>> root@pve:~# ovs-vsctl --version
>>>>> ovs-vsctl (Open vSwitch) 2.15.0
>>>>> DB Schema 8.2.0
>>>>>
>>>>> Is there any significant information in the ovs-dpctl output to see where
>>>>> the problem might be?
>>>>> Or what other command may I use to spot the issue?
>>>>>
>>>>> What surprises me is that two ports of the bridge vmbr1 have ovs-system as
>>>>> master, which is in DOWN state.
>>>>>
>>>>> root@pve:~# ip link show dev enp6s0f0
>>>>> 4: enp6s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>>>>> master ovs-system state UP mode DEFAULT group default qlen 1000
>>>>>     link/ether 00:1c:c4:47:63:31 brd ff:ff:ff:ff:ff:ff
>>>>> root@pve:~# ip link show dev enp7s0f0
>>>>> 6: enp7s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>>>>> master ovs-system state UP mode DEFAULT group default qlen 1000
>>>>>     link/ether 00:1c:c4:47:63:33 brd ff:ff:ff:ff:ff:ff
>>>>> root@pve:~# ip link show dev ovs-system
>>>>> 8: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode
>>>>> DEFAULT group default qlen 1000
>>>>>     link/ether f6:c6:45:e5:39:98 brd ff:ff:ff:ff:ff:ff
>>>>>
>>>>> Alex
>>>>> _______________________________________________
>>>>> discuss mailing list
>>>>> disc...@openvswitch.org
>>>>> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>>>>
>>>>
>>
>>
