Is it expected that the gateway MAC address ages out while there is ongoing
traffic when hardware offloading is active?
# ovs-appctl fdb/show br-ex | grep 00:1c:73:aa:bb:cc
 1   111  00:1c:73:aa:bb:cc   889
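As a side note, the aging and table-size settings in effect on br-ex can be
checked like this (a sketch; an empty other_config map means the 300-second
default mac-aging-time applies, and the last column of the fdb/show output
above is the entry age in seconds):

# ovs-vsctl get bridge br-ex other_config
# ovs-appctl fdb/show br-ex | head -n 5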
Thanks,
Serhat
On Sun, May 25, 2025 at 6:56 PM Numan Siddique wrote:
On Sat, May 24, 2025, 7:46 AM Serhat Rıfat Demircan <demircan.ser...@gmail.com> wrote:
Finally, I can confirm and reproduce it: when the gateway MAC ages out of the
MAC address table (or the table is flushed), OVS starts to flood all ports and
the flows switch to the software path. When it learns the gateway MAC address
again from any ARP request, hardware offload works again.
It seems increasing mac-aging on br-ex can be an option.
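A minimal sketch of raising those limits on br-ex (the values are arbitrary
examples; changes to bridge other_config should be picked up by ovs-vswitchd
without a restart):

# ovs-vsctl set bridge br-ex other_config:mac-aging-time=3600
# ovs-vsctl set bridge br-ex other_config:mac-table-size=8192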
No OVN, it is only OVS. ofproto/trace says it is flooding because of "no
learned MAC for destination". Is this expected while there is existing
traffic? And another question: can I prevent the flooding? :)
# ovs-appctl ofproto/trace br-int "in_port=4,ip,eth_src=fa:16:7e:e1:50:af,eth_dst=00:1c:73:aa:bb:cc,nw_
On Thu, May 22, 2025 at 11:33 AM Serhat Rıfat Demircan via discuss wrote:
I think this is the actual flow change on the OVS side. When br-ex (br-ex is
the external bridge, backed by a bond interface on the OpenStack side) is
added as an output interface, it breaks offloading. I still do not know why it
changes :) Do you have any idea why OVS decides to redirect traffic to the
external bridge?
*Offloaded flow:*
After a little more digging, I see miss upcalls and flow modify logs like the
ones below while the tc flow switches to not_in_hw. I also see a similar
upcall while it switches back to in_hw.
2025-05-21T08:14:27.189Z|19748|dpif(handler3)|DBG|system@ovs-system: miss
upcall:
recirc_id(0),dp_hash(0),skb_priority(0),in
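For anyone chasing the same symptom: the in_hw / not_in_hw state of the tc
rules can be watched directly on the representor, and the dpif debug messages
above come from raising the log level (eth-rep0 is a placeholder for the
actual VF representor name):

# tc -s filter show dev eth-rep0 ingress
# ovs-appctl vlog/set dpif:file:dbg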
On 5/20/25 4:32 PM, Serhat Rıfat Demircan wrote:
Yes, statistics are updated.
# ovs-appctl dpctl/dump-flows type=offloaded | grep dst=00:1c:73:aa:bb:cc
recirc_id(0x5),in_port(4),ct_state(+est-rel+rpl-inv+trk),ct_zone(0x1),ct_mark(0),eth(src=fa:16:7e:e1:50:af,dst=00:1c:73:aa:bb:cc),eth_type(0x0800),ipv4(frag=no),
packets:222, bytes:30340, used:0.
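For comparison, the same flow can be looked up in the non-offloaded kernel
datapath dump; if it appears here instead of under type=offloaded, it has
fallen back to the software path:

# ovs-appctl dpctl/dump-flows type=ovs | grep dst=00:1c:73:aa:bb:cc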
On 5/20/25 2:20 PM, Serhat Rıfat Demircan wrote:
Redirecting to br-ex happens even if there is only a single hardware-offloaded
port on the hypervisor. Still can't find the actual reason.
Increasing mac-aging-time and mac-table-size did not help either.
By the way, it is bidirectional storage (Ceph) traffic.
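If the aging knobs really do not help, newer OVS releases (2.17 and later, if
I remember correctly) also accept static FDB entries, which would pin the
gateway MAC so it never ages out. This is only a sketch, bond0 is a
placeholder for the uplink port and the VLAN is taken from the fdb/show output
above, so please confirm the command exists on your build first:

# ovs-appctl list-commands | grep fdb
# ovs-appctl fdb/add br-ex bond0 111 00:1c:73:aa:bb:cc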
On Wed, May 14, 2025 at 1:39 PM Ilya Maximets wrote:
On 5/8/25 4:15 PM, Serhat Rıfat Demircan via discuss wrote:
> Hello Everyone,
>
> I've been testing OVS hardware-offloaded ports with OpenStack for a while.
> I have ConnectX-5 and ConnectX-6 cards in my lab. iperf tests are looking
> promising, but when I try to use hardware-offloaded ports for k
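For readers landing on this thread later, hardware offload in this kind of
setup is normally enabled with the usual switch (a sketch; the service name
varies by distribution, e.g. openvswitch on RHEL-based systems):

# ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
# systemctl restart openvswitch-switch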