From: wenxu
ip l add tun type gretap external
ip r a 10.0.0.2 encap ip id 1000 dst 172.168.0.2 key dev tun
ip a a 10.0.0.1/24 dev tun
The peer sends an ARP request to 10.0.0.1 with a tunnel_id, but the ARP reply
only sets tun_id and not the TUNNEL_KEY bit in tun_flags. The ARP
reply packet doesn't co
From: wenxu
ip rule add from all iif gretap tun_id 2000 lookup 200
Signed-off-by: wenxu
---
ip/iprule.c        | 33 +
man/man8/ip-rule.8 |  4 +++-
2 files changed, 36 insertions(+), 1 deletion(-)
diff --git a/ip/iprule.c b/ip/iprule.c
index 0f8fc6d..d28f151
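To illustrate the new selector, a minimal policy-routing sketch (the device
name, table number, and the route in table 200 are illustrative assumptions):
  ip link add gretap0 type gretap external
  ip rule add from all iif gretap0 tun_id 2000 lookup 200
  ip route add default via 10.0.0.254 dev eth1 table 200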
From: wenxu
In ip_rcv the skb goes through the PREROUTING hook first,
then jumps into the vrf device and goes through the same hook again.
When conntrack works with vrf, there can be conflicts between rules,
because the packet goes through the hook twice with different nf status.
ip link add user1 type vrf
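For context, a minimal sketch of the VRF setup under discussion (the table
number and the slave device are assumptions):
  ip link add user1 type vrf table 1
  ip link set dev eth0 master user1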
From: wenxu
ip l add dev tun type gretap key 1000
ip a a dev tun 10.0.0.1/24
Packets with tun-id 1000 can be received by the tun dev, but packets can't
be sent through dev tun because there is no tunnel dst.
With this patch the tunnel dst can be set through lwtunnel as below:
ip r a 10.0.0.7 encap ip id 100
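A hedged sketch of the complete route, since the example above is truncated
(the id value, dst address, and key flag are assumptions modeled on the
companion snippets):
  ip route add 10.0.0.7 encap ip id 1000 dst 172.168.0.7 key dev tun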
From: wenxu
For a NAT example:
client 1.1.1.7 ---> 2.2.2.7, which is DNATed to server 10.0.0.7
When the syn_rcv pkt comes from the server, the peer (client->server) route
is looked up through daddr = ct->tuplehash[!dir].tuple.dst.u3.ip; the value
2.2.2.7 is not correct in this situation, it should be 10.0.0.7.
ct->
From: wenxu
In the forward chain the iif is changed from the slave device to the master
vrf device. This leads the offload to not match on the lower slave device.
This patch makes the following example work correctly:
ip addr add dev eth0 1.1.1.1/24
ip addr add dev eth1 10.0.0.1/24
ip link add user1 type
From: wenxu
ip l add dev tun type gretap external
ip r a 10.0.0.1 encap ip dst 192.168.152.171 id 1000 dev gretap
In this gretap example the command sets the id but does not set the
TUNNEL_KEY flag, so there is no key field in the sent packet.
The user can set flags with key, csum, seq:
ip r a 10.
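A hedged sketch of the flagged variant, since the example above is truncated
(the exact keyword placement is an assumption):
  ip route add 10.0.0.1 encap ip dst 192.168.152.171 id 1000 key csum dev gretap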
From: wenxu
ip l add dev tun type gretap external
ip r a 10.0.0.1 encap ip dst 192.168.152.171 id 1000 dev gretap
In this gretap example the command sets the id but does not set the
TUNNEL_KEY flag, so there is no key field in the sent packet.
The user can set flags with key, csum, seq:
ip r a 10.
Hi Stephen,
I see the state of this patch is Accepted. I wonder why it wasn't merged into
the iproute2 master?
BR
wenxu
On 1/2/2019 11:57 AM, we...@ucloud.cn wrote:
> From: wenxu
>
> ip l add dev tun type gretap external
> ip r a 10.0.0.1 encap ip dst 192.168.152.171 i
From: wenxu
For a NAT example:
client 1.1.1.7 ---> 2.2.2.7, which is DNATed to server 10.0.0.7
When the syn_rcv pkt comes from the server, the peer (client->server) route
is looked up through daddr = ct->tuplehash[!dir].tuple.dst.u3.ip; the value
2.2.2.7 is not correct in this situation, it should be 10.0.0.7.
ct->
Hi Pablo,
What is the status of this patch?
On 12/21/2018 6:12 PM, we...@ucloud.cn wrote:
> From: wenxu
>
> This patch allows us to match on the tunnel metadata that is available
> on the packet. We can use this to validate whether the packet comes from/goes
> to a tunnel and th
Hi Pablo,
What is the status of this patch?
On 12/29/2018 6:10 PM, we...@ucloud.cn wrote:
> From: wenxu
>
> In the forward chain the iif is changed from the slave device to the master
> vrf device. This leads the offload to not match on the lower slave device.
>
> This patch makes the
On 1/10/2019 12:41 AM, Pablo Neira Ayuso wrote:
> On Fri, Dec 21, 2018 at 06:12:24PM +0800, we...@ucloud.cn wrote:
> [...]
>> +static struct xt_match tunnel_mt_reg __read_mostly = {
>> +	.name = "tunnel",
>> +	.revision = 0,
>> +	.family = NFPROTO_UNSPEC,
>> +
On 1/10/2019 12:05 PM, wenxu wrote:
> On 1/10/2019 12:41 AM, Pablo Neira Ayuso wrote:
>> On Fri, Dec 21, 2018 at 06:12:24PM +0800, we...@ucloud.cn wrote:
>> [...]
>>> +static struct xt_match tunnel_mt_reg __read_mostly = {
>>> +	.name = "
From: wenxu
This patch allows us to match on the tunnel metadata that is available
on the packet. We can use this to validate whether the packet comes from/goes
to a tunnel, and to match the corresponding tunnel ID in iptables.
Signed-off-by: wenxu
---
include/uapi/linux/netfilter/xt_tunnel.h | 12
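For illustration, a hypothetical invocation of the proposed match (the
--tun-id option name is an assumption; only the match name "tunnel" is
visible in the patch):
  iptables -A FORWARD -m tunnel --tun-id 1000 -j ACCEPT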
From: wenxu
BUG report in selftests: bpf: test_tunnel.sh
Testing IPIP tunnel...
BUG: unable to handle kernel NULL pointer dereference at
PGD 0 P4D 0
Oops: 0010 [#1] SMP PTI
CPU: 0 PID: 16822 Comm: ping Not tainted 5.0.0-rc3-00352-gc8b34e6 #1
Hardware name: QEMU Standard PC
On 2019/2/17 12:34 AM, Alexei Starovoitov wrote:
> On Sat, Feb 16, 2019 at 2:11 AM wrote:
>> From: wenxu
>>
>> BUG report in selftests: bpf: test_tunnel.sh
>>
>> Testing IPIP tunnel...
>> BUG: unable to handle kernel NULL pointer dereference at 00
On 2019/2/17 11:35 AM, wenxu wrote:
> On 2019/2/17 12:34 AM, Alexei Starovoitov wrote:
>> On Sat, Feb 16, 2019 at 2:11 AM wrote:
>>> From: wenxu
>>>
>>> BUG report in selftests: bpf: test_tunnel.sh
>>>
>>> Testing IPIP tunnel...
>
On 2019/2/15 5:38 PM, Alan Maguire wrote:
> Naresh Kamboju noted the following oops during execution of selftest
> tools/testing/selftests/bpf/test_tunnel.sh on x86_64:
>
> [ 274.120445] BUG: unable to handle kernel NULL pointer dereference
> at
> [ 274.128285] #PF error: [INSTR]
From: wenxu
Fragmented packets are defragmented in tcf_ct_handle_fragments,
which clears skb->cb; this clears the qdisc_skb_cb too and sets
the pkt_len to 0, so the byte counter is always 0 when dumping
the filter. This patch also updates the pkt_len after all the
fragments finish the defrag into one packet and m
From: wenxu
When tcf_ct_act executes, tcf_lastuse_update should be
called, or the 'used' stats are never updated:
filter protocol ip pref 3 flower chain 0
filter protocol ip pref 3 flower chain 0 handle 0x1
eth_type ipv4
dst_ip 1.1.1.1
ip_flags frag/firstfrag
skip_hw
not_in_hw
action order
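A hedged reconstruction of a filter that would produce a dump like the one
above (the device and the exact flower options are assumptions):
  tc filter add dev eth0 ingress protocol ip pref 3 flower \
      dst_ip 1.1.1.1 ip_flags frag,firstfrag skip_hw \
      action ct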
From: wenxu
Fragmented packets are defragmented in the act_ct module. The reassembled
packet can be over the MTU in act_mirred; this big packet should be
fragmented again to be sent out.
Fixes: b57dc7c13ea9 ("net/sched: Introduce action ct")
Signed-off-by: wenxu
---
This patch is bas
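For context, a sketch of the kind of chain that hits this path, with defrag
in ct followed by a mirred redirect (device names illustrative):
  tc filter add dev eth0 ingress protocol ip flower ip_flags frag \
      action ct action mirred egress redirect dev eth1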
be changed to 8M
through the following?
ESW_POOLS[] = { 8 * 1024 * 1024,
                1 * 1024 * 1024,
                64 * 1024,
                128 };
BR
wenxu
From: wenxu
Currently all the conntrack entry offload rules are added
to both the ct and ct_nat flow tables in the mlx5e driver. This
does not make sense.
This series provides a nat attribute in the ct_metadata action which
tells the driver whether the rule should be added to the ct or ct_nat flow table.
wenxu (2):
net
From: wenxu
Add a nat attribute in the ct_metadata action. This tells the driver whether
the offloaded conntrack entry is a NAT one or not.
Signed-off-by: wenxu
---
include/net/flow_offload.h | 1 +
net/sched/act_ct.c | 1 +
2 files changed, 2 insertions(+)
diff --git a/include/net/flow_offload.h b
From: wenxu
In ct offload all the conntrack entry offload rules
are added to both the ct ft and the ct_nat ft, i.e. twice.
This does not make sense. The ct_metadata.nat attribute tells the driver
whether the rule should be added to the ct or the ct_nat flow table.
Signed-off-by: wenxu
---
drivers/net/ethernet/mellanox/mlx5/core/en
On 5/28/2020 7:35 PM, Edward Cree wrote:
> On 28/05/2020 08:15, we...@ucloud.cn wrote:
>> From: wenxu
>>
>> Currently all the conntrack entry offload rules are added
>> to both the ct and ct_nat flow tables in the mlx5e driver. This
>> does not make sense.
>>
From: wenxu
Currently the NAT mangle action is added by comparing the inverted and
original tuples. It is better to check the IPS_NAT_MASK flags first to avoid
an unnecessary memcmp for non-NAT conntrack.
Signed-off-by: wenxu
---
net/sched/act_ct.c | 19 +--
1 file changed, 13 insertions(+), 6
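For reference, a ct action that sets up the NAT mangling this check guards,
using the tc-ct syntax (addresses and devices illustrative):
  tc filter add dev eth0 ingress protocol ip flower \
      action ct commit nat dst addr 10.0.0.7 pipe \
      action mirred egress redirect dev eth1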
On 2020/5/30 1:56, Marcelo Ricardo Leitner wrote:
> On Fri, May 29, 2020 at 12:07:45PM +0800, we...@ucloud.cn wrote:
>> From: wenxu
>>
>> Currently the NAT mangle action is added by comparing the inverted and original tuples.
>> It is better to check the IPS_NAT_MASK flags first to avoid un
From: wenxu
Currently the NAT mangle action is added by comparing the inverted and
original tuples. It is better to check the IPS_NAT_MASK flags first to avoid
an unnecessary memcmp for non-NAT conntrack.
Signed-off-by: wenxu
---
net/sched/act_ct.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/net/sched
On 2020/5/30 8:04, wenxu wrote:
> On 2020/5/30 1:56, Marcelo Ricardo Leitner wrote:
>> On Fri, May 29, 2020 at 12:07:45PM +0800, we...@ucloud.cn wrote:
>>> From: wenxu
>>>
>>> Currently the NAT mangle action is added by comparing the inverted and original tuples.
>>> It i
From: wenxu
In ct offload all the conntrack entry offload rules
are added to both the ct ft and the ct_nat ft, i.e. twice. This does
not make sense.
The driver can distinguish NAT from non-NAT conntrack
through the FLOW_ACTION_MANGLE action.
Signed-off-by: wenxu
---
drivers/net/ethernet/mellanox/mlx5
On 5/31/2020 4:01 PM, Oz Shlomo wrote:
> Hi Wenxu,
>
> On 5/28/2020 10:15 AM, we...@ucloud.cn wrote:
>> From: wenxu
>>
>> In ct offload all the conntrack entry offload rules
>> are added to both the ct ft and the ct_nat ft, i.e. twice.
>> It is not makesen
On 6/30/2020 11:57 PM, Eric Dumazet wrote:
>
> On 6/29/20 7:54 PM, we...@ucloud.cn wrote:
>> From: wenxu
>>
>> Fragmented packets are defragmented in the act_ct module. The reassembled
>> packet can be over the MTU in act_mirred. This big packet should be fragment
On 7/1/2020 3:02 AM, Cong Wang wrote:
> On Mon, Jun 29, 2020 at 7:55 PM wrote:
>> From: wenxu
>>
>> Fragmented packets are defragmented in the act_ct module. The reassembled
>> packet can be over the MTU in act_mirred. This big packet should be fragmented
>> to s
On 7/1/2020 1:52 PM, Cong Wang wrote:
> On Tue, Jun 30, 2020 at 7:36 PM wenxu wrote:
>>
>> On 7/1/2020 3:02 AM, Cong Wang wrote:
>>> On Mon, Jun 29, 2020 at 7:55 PM wrote:
>>>> From: wenxu
>>>>
>>>> The fragment packets do defrag i
On 7/1/2020 2:12 PM, Cong Wang wrote:
> On Tue, Jun 30, 2020 at 11:03 PM wenxu wrote:
>> Only the forwarded packet case needs to be fragmented again, and there is no
>> need to do the defrag explicitly.
> Same question: why act_mirred? You have to explain why act_mirred
> has the responsibility
On 7/1/2020 2:21 PM, wenxu wrote:
> On 7/1/2020 2:12 PM, Cong Wang wrote:
>> On Tue, Jun 30, 2020 at 11:03 PM wenxu wrote:
>>> Only the forwarded packet case needs to be fragmented again, and there is no
>>> need to do the defrag explicitly.
>> Same question: why act_mirred? Y
On 7/2/2020 6:21 AM, David Miller wrote:
> From: we...@ucloud.cn
> Date: Mon, 29 Jun 2020 17:16:17 +0800
>
>> From: wenxu
>>
>> Fragmented packets are defragmented in tcf_ct_handle_fragments,
>> which clears the skb->cb and makes the qdisc_skb_cb clear
>> too
On 7/2/2020 1:33 AM, Cong Wang wrote:
> On Wed, Jul 1, 2020 at 1:21 AM wenxu wrote:
>>
>> On 7/1/2020 2:21 PM, wenxu wrote:
>>> On 7/1/2020 2:12 PM, Cong Wang wrote:
>>>> On Tue, Jun 30, 2020 at 11:03 PM wenxu wrote:
>>>>> Only forward packet
On 7/3/2020 8:47 AM, Marcelo Ricardo Leitner wrote:
> On Thu, Jul 02, 2020 at 02:39:07PM -0700, Cong Wang wrote:
>> On Thu, Jul 2, 2020 at 10:32 AM Marcelo Ricardo Leitner
>> wrote:
>>> On Thu, Jul 02, 2020 at 05:36:38PM +0800, wenxu wrote:
>>>> On 7/2/202
From: wenxu
When tcf_ct_act executes, tcf_lastuse_update should be
called, or the 'used' stats are never updated:
filter protocol ip pref 3 flower chain 0
filter protocol ip pref 3 flower chain 0 handle 0x1
eth_type ipv4
dst_ip 1.1.1.1
ip_flags frag/firstfrag
skip_hw
not_in_hw
action order
From: wenxu
When tcf_ct_act executes, tcf_lastuse_update should be
called, or the 'used' stats are never updated:
filter protocol ip pref 3 flower chain 0
filter protocol ip pref 3 flower chain 0 handle 0x1
eth_type ipv4
dst_ip 1.1.1.1
ip_flags frag/firstfrag
skip_hw
not_in_hw
action order
Please drop this one; it has the wrong tags.
On 2020/7/4 15:42, we...@ucloud.cn wrote:
> From: wenxu
>
> When tcf_ct_act executes, tcf_lastuse_update should be
> called, or the 'used' stats are never updated
>
> filter protocol ip pref 3 flower chain 0
> filter protocol ip pref 3 flow
From: wenxu
Use nf_ct_frag_gather to defrag in act_ct so the CB clear is elided.
This avoids serious crashes and problems in the ct subsystem, because
some packet schedulers store pointers in the qdisc CB private area and
access the SKB in parallel.
Fixes: b57dc7c13ea9 ("net/sched: Introduce acti
From: wenxu
Make nf_ct_frag6_gather elide the CB clear when packets are defragmented
by connection tracking. This lets each subsystem such as br_netfilter,
openvswitch, act_ct do defrag without having to restore the CB, and avoids
serious crashes and problems in the ct subsystem, because some packet
From: wenxu
Add nf_ct_frag_gather and make nf_ct_frag6_gather elide the CB clear
when packets are defragmented by connection tracking. This lets
each subsystem such as br_netfilter, openvswitch, act_ct do defrag
without having to restore the CB.
This also avoids serious crashes and problems in ct
On 2020/7/4 1:50, Marcelo Ricardo Leitner wrote:
> On Fri, Jul 03, 2020 at 06:19:51PM +0800, wenxu wrote:
>> On 7/3/2020 8:47 AM, Marcelo Ricardo Leitner wrote:
>>> On Thu, Jul 02, 2020 at 02:39:07PM -0700, Cong Wang wrote:
>>>> On Thu, Jul 2, 2020 at 10:32 AM Marce
From: wenxu
Add nf_ct_frag_gather for conntrack defrag; it
elides the CB clear when packets are defragmented by
connection tracking.
Signed-off-by: wenxu
---
include/net/netfilter/ipv4/nf_defrag_ipv4.h |   2 +
net/ipv4/netfilter/nf_defrag_ipv4.c         | 314
On 2020/7/6 22:38, Florian Westphal wrote:
> we...@ucloud.cn wrote:
>> From: wenxu
>>
>> Add nf_ct_frag_gather for conntrack defrag and it will
>> elide the CB clear when packets are defragmented by
>> connection tracking
> Why is this patch required?
> Can
On 7/7/2020 12:29 AM, Florian Westphal wrote:
> wenxu wrote:
>> On 2020/7/6 22:38, Florian Westphal wrote:
>>> we...@ucloud.cn wrote:
>>>> From: wenxu
>>>>
>>>> Add nf_ct_frag_gather for conntrack defrag and it will
>>>> elide the
From: wenxu
Add ip_defrag_ignore_cb for conntrack defrag; it
elides the CB clear when packets are defragmented by
connection tracking.
Signed-off-by: wenxu
---
include/net/ip.h       |  2 ++
net/ipv4/ip_fragment.c | 55 ++
2 files
From: wenxu
Add nf_ct_frag_gather and make nf_ct_frag6_gather elide the CB clear
when packets are defragmented by connection tracking. This lets
each subsystem such as br_netfilter, openvswitch, act_ct do defrag
without having to restore the CB.
This also avoids serious crashes and problems in ct
From: wenxu
Use ip_defrag_ignore_cb to defrag in act_ct so the CB clear is elided.
This avoids serious crashes and problems in the ct subsystem, because
some packet schedulers store pointers in the qdisc CB private area and
access the SKB in parallel.
Fixes: b57dc7c13ea9 ("net/sched: Introduce acti
From: wenxu
Make nf_ct_frag6_gather elide the CB clear when packets are defragmented
by connection tracking. This lets each subsystem such as br_netfilter,
openvswitch, act_ct do defrag without having to restore the CB, and avoids
serious crashes and problems in the ct subsystem, because some packet
From: wenxu
When a block binds to a dev which supports the indr block call (a
vxlan/gretap device), the bind can succeed but nooffloaddevcnt is still
incremented. Replacing the hw filter in tc_setup_cb_call with skip_sw
mode then fails, because it checks nooffloaddevcnt together with skip_sw:
if (block->nooffloaddev
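A hedged reproduction of the failure mode (device name and key id
illustrative):
  ip link add vxlan0 type vxlan dstport 4789 external
  tc qdisc add dev vxlan0 ingress
  # fails despite a successful hw bind, because nooffloaddevcnt was incremented
  tc filter add dev vxlan0 ingress protocol ip flower skip_sw \
      enc_key_id 100 enc_dst_port 4789 action drop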
On 2019/9/16 18:28, Jiri Pirko wrote:
> Please use get_maintainers script to get list of ccs.
>
> Mon, Sep 16, 2019 at 12:15:34PM CEST, we...@ucloud.cn wrote:
>> From: wenxu
>>
>> When a block binds to a dev which supports the indr block call (vxlan/gretap
>> devi
From: wenxu
A vxlan or gretap device offloads through the indr block method. Even if the
device successfully binds with real hw through the indr block call, the
nooffloadcnt counter is still incremented. This counter leads the rule add
to fail in fl_hw_replace_filter-->tc_setup_cb_call with the skip_sw flag.
In
From: wenxu
A vxlan or gretap device offloads through the indr block method. Even if the
device successfully binds with real hw through the indr block call, the
nooffloadcnt counter is still incremented. This counter leads the rule add
to fail in fl_hw_replace_filter-->tc_setup_cb_call with the skip_sw flag.
In
Sorry, I forgot to CC Jiri.
On 9/19/2019 4:37 PM, we...@ucloud.cn wrote:
> From: wenxu
>
> A vxlan or gretap device offloads through the indr block method. If the device
> successfully binds with real hw through the indr block call, the
> nooffloadcnt counter is still incremented. This counter will le
Hi John & Jakub
Are there some limitations in how the indirect tc callback works with skip_sw?
BR
wenxu
On 9/19/2019 8:50 PM, Or Gerlitz wrote:
>
>> successfully binds with real hw through the indr block call, the
>> nooffloadcnt counter is still incremented. This counter will lead the
On 2019/9/23 17:42, John Hurley wrote:
> On Mon, Sep 23, 2019 at 5:20 AM wenxu wrote:
>> Hi John & Jakub
>>
>> Are there some limitations in how the indirect tc callback works with skip_sw?
>>
> Hi Wenxu,
> This is not really a limitation.
> As Or points out, i
From: wenxu
Currently indr setup supports both indr_setup_ft_cb and indr_setup_tc_cb,
but __flow_block_indr_cleanup only checks for indr_setup_tc_cb in the
mlx5e driver.
It is better to just check the indr_release_cb; all the setup_cb types
share the same release_cb.
Fixes: 1fac52da5942 ("
From: wenxu
The cb_priv that flow_indr_dev_unregister gets from the driver
is the same as the cb_priv of flow_indr_dev, but it is never
the same as the cb_priv of flow_block_cb, which leads to a missed cleanup
operation. For the mlx5e example, the cb_priv of flow_indr_dev is the mlx5e_rep_priv
which re
From: wenxu
If the representor is removed, then identify the indirect
flow_blocks that need to be removed by the release callback.
Fixes: 1fac52da5942 ("net: flow_offload: consolidate indirect flow_block
infrastructure")
Signed-off-by: wenxu
---
drivers/net/ethernet/broadcom/bnxt
On 2020/6/11 19:05, Pablo Neira Ayuso wrote:
On Thu, Jun 11, 2020 at 06:03:17PM +0800, we...@ucloud.cn wrote:
[...]
diff --git a/net/core/flow_offload.c b/net/core/flow_offload.c
index 0cfc35e..40eaf64 100644
--- a/net/core/flow_offload.c
+++ b/net/core/flow_offload.c
@@ -372,14 +372,13 @@ int flo
From: wenxu
In the function __flow_block_indr_cleanup, the match statement
this->cb_priv == cb_priv is always false; the flow_block_cb->cb_priv
is totally different data from the flow_indr_dev->cb_priv.
Store the representor cb_priv in the flow_block_cb->indr.cb_priv in
the dr
From: wenxu
If the representor is removed, then identify the indirect
flow_blocks that need to be removed by the release callback.
Fixes: 1fac52da5942 ("net: flow_offload: consolidate indirect flow_block
infrastructure")
Signed-off-by: wenxu
---
drivers/net/ethernet/broadcom/bnxt
From: wenxu
When an indr device is added and the offload succeeds, and the representor
then goes away, all the flow_block_cbs are cleaned up but the del from the
driver list is missed.
Fixes: 0fdcf78d5973 ("net: use flow_indr_dev_setup_offload()")
Signed-off-by: wenxu
---
drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
From: wenxu
block->nooffloaddevcnt warns with the following dmesg log:
When an indr device is added and the offload succeeds, the
block->nooffloaddevcnt is always zero. But when all the representors go
away, all the flow_block_cbs are cleaned up. Then, on removing the indr
device, the __tcf_block_pu
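A hedged sketch of steps that could trigger the warning described (device
names illustrative; how the representors are removed depends on the NIC):
  ip link add vxlan0 type vxlan dstport 4789 external
  tc qdisc add dev vxlan0 ingress    # indr offload binds, nooffloaddevcnt stays 0
  echo 0 > /sys/class/net/enp1s0f0/device/sriov_numvfs    # representors go away
  tc qdisc del dev vxlan0 ingress    # UNBIND now hits the warning path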
Please drop this series.
Thank you.
On 2020/6/12 18:08, we...@ucloud.cn wrote:
From: wenxu
block->nooffloaddevcnt warns with the following dmesg log:
When an indr device is added and the offload succeeds, the block->nooffloaddevcnt
is always zero. But when all the representors go away, all the flow_bl
From: wenxu
In the function __flow_block_indr_cleanup, the match statement
this->cb_priv == cb_priv is always false; the flow_block_cb->cb_priv
is totally different data from the flow_indr_dev->cb_priv.
Store the representor cb_priv in the flow_block_cb->indr.cb_priv in
the dr
From: wenxu
The block->nooffloaddevcnt should always count for an indr block,
even if the indr block offload is successful. The representor may be
gone while the ingress qdisc can still work in software mode.
block->nooffloaddevcnt warns with the following dmesg log:
[ 760.
From: wenxu
When an indr device is added and the offload succeeds, and the representor
then goes away, all the flow_block_cbs are cleaned up but the del from the
driver list is missed.
Fixes: 0fdcf78d5973 ("net: use flow_indr_dev_setup_offload()")
Signed-off-by: wenxu
---
net/netfilter/nf_flow_table_offload.c | 1 +
net
From: wenxu
If the representor is removed, then identify the indirect
flow_blocks that need to be removed by the release callback.
Fixes: 1fac52da5942 ("net: flow_offload: consolidate indirect flow_block
infrastructure")
Signed-off-by: wenxu
---
drivers/net/ethernet/broadcom/bnxt
From: wenxu
v2:
patch2: store the cb_priv of the representor in the flow_block_cb->indr.cb_priv
in the driver, and make the correct check with the statement
this->indr.cb_priv == cb_priv
patch4: del from the driver list only in the indirect cleanup callbacks
v3:
add the cover letter and cha
From: wenxu
In the function __flow_block_indr_cleanup, the match statement
this->cb_priv == cb_priv is always false; the flow_block_cb->cb_priv
is totally different data from the flow_indr_dev->cb_priv.
Store the representor cb_priv in the flow_block_cb->indr.cb_priv in
the dr
From: wenxu
When an indr device is added and the offload succeeds, and the representor
then goes away, all the flow_block_cbs are cleaned up but the del from the
driver list is missed.
Fixes: 0fdcf78d5973 ("net: use flow_indr_dev_setup_offload()")
Signed-off-by: wenxu
---
net/netfilter/nf_flow_table_offload.c | 1 +
net
From: wenxu
The cleanup operation is based on the setup callback, but in the mlx5e
driver there are tc and flowtable indirect setup callbacks that share
the same release callback. So when the representor is removed,
identify the indirect flow_blocks that need to be removed by
the release
From: wenxu
When an indr device is added and the offload succeeds, the
block->nooffloaddevcnt should be 0. After the representor goes away, when
the indr device goes away the flow_block UNBIND operation returns
-EOPNOTSUPP, which leads to the warning dmesg log.
The block->nooffloaddevcnt should always count fo
On 2020/6/16 18:51, Simon Horman wrote:
> On Tue, Jun 16, 2020 at 11:19:38AM +0800, we...@ucloud.cn wrote:
>> From: wenxu
>>
>> In the function __flow_block_indr_cleanup, the match statement
>> this->cb_priv == cb_priv is always false; the flow_block_cb->cb_priv
On 2020/6/16 22:34, Simon Horman wrote:
> On Tue, Jun 16, 2020 at 10:20:46PM +0800, wenxu wrote:
>> On 2020/6/16 18:51, Simon Horman wrote:
>>> On Tue, Jun 16, 2020 at 11:19:38AM +0800, we...@ucloud.cn wrote:
>>>> From: wenxu
>>>>
>>>> In the fu
On 6/17/2020 4:17 AM, Pablo Neira Ayuso wrote:
> On Tue, Jun 16, 2020 at 11:19:39AM +0800, we...@ucloud.cn wrote:
>> From: wenxu
>>
>> When an indr device is added and the offload succeeds, the block->nooffloaddevcnt
>> should be 0. After the representor goes away, when
On 6/17/2020 4:30 AM, Pablo Neira Ayuso wrote:
> On Tue, Jun 16, 2020 at 10:17:50PM +0200, Pablo Neira Ayuso wrote:
>> On Tue, Jun 16, 2020 at 11:19:39AM +0800, we...@ucloud.cn wrote:
>>> From: wenxu
>>>
>>> When an indr device is added and the offload succeeds. The bl
On 6/17/2020 4:13 AM, Pablo Neira Ayuso wrote:
> On Tue, Jun 16, 2020 at 11:19:38AM +0800, we...@ucloud.cn wrote:
>> From: wenxu
>>
>> In the function __flow_block_indr_cleanup, the match statement
>> this->cb_priv == cb_priv is always false; the flow_block_cb->
On 6/16/2020 11:47 PM, Simon Horman wrote:
> On Tue, Jun 16, 2020 at 11:18:16PM +0800, wenxu wrote:
>> On 2020/6/16 22:34, Simon Horman wrote:
>>> On Tue, Jun 16, 2020 at 10:20:46PM +0800, wenxu wrote:
>>>> On 2020/6/16 18:51, Simon Horman wrote:
>>>>>
On 6/17/2020 4:38 AM, Pablo Neira Ayuso wrote:
> On Tue, Jun 16, 2020 at 05:47:17PM +0200, Simon Horman wrote:
>> On Tue, Jun 16, 2020 at 11:18:16PM +0800, wenxu wrote:
>>> On 2020/6/16 22:34, Simon Horman wrote:
>>>> On Tue, Jun 16, 2020 at 10:20:46PM +0800, wenxu
On 6/17/2020 4:38 PM, Pablo Neira Ayuso wrote:
> On Wed, Jun 17, 2020 at 11:36:19AM +0800, wenxu wrote:
>> On 6/17/2020 4:38 AM, Pablo Neira Ayuso wrote:
>>> On Tue, Jun 16, 2020 at 05:47:17PM +0200, Simon Horman wrote:
>>>> On Tue, Jun 16, 2020 at 11:18:16PM +080
From: wenxu
Add the flow_indr_block_cb_alloc/remove functions in preparation for the
bug fix in the third patch.
Signed-off-by: wenxu
---
include/net/flow_offload.h | 13 +
net/core/flow_offload.c    | 43 ---
2 files changed, 45 insertions(+), 11
From: wenxu
If the representor is removed, then identify the indirect flow_blocks
that need to be removed by the release callback and the port representor
structure. To identify the port representor structure, a new
indr.cb_priv field needs to be introduced. The flow_block also needs to
be
From: wenxu
v2:
patch2: store the cb_priv of the representor in the flow_block_cb->indr.cb_priv
in the driver, and make the correct check with the statement
this->indr.cb_priv == cb_priv
patch4: del from the driver list only in the indirect cleanup callbacks
v3:
add the cover letter and changelog
From: wenxu
Prepare to fix the bug in the next patch: use the flow_indr_block_cb_alloc/remove
functions and remove __flow_block_indr_binding.
Signed-off-by: wenxu
---
drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c | 19 ---
.../net/ethernet/mellanox/mlx5/core/en/rep/tc.c
From: wenxu
The block->nooffloaddevcnt should always count for an indr block,
even if the indr block offload is successful. The representor may be
gone while the ingress qdisc can still work in software mode.
block->nooffloaddevcnt warns with the following dmesg log:
[ 760.
From: wenxu
v2:
patch2: store the cb_priv of the representor in the flow_block_cb->indr.cb_priv
in the driver, and make the correct check with the statement
this->indr.cb_priv == cb_priv
patch4: del from the driver list only in the indirect cleanup callbacks
v3:
add the cover letter and changelog
From: wenxu
Add the flow_indr_block_cb_alloc/remove functions for the next fix patch.
Signed-off-by: wenxu
---
include/net/flow_offload.h | 13 +
net/core/flow_offload.c    | 21 +
2 files changed, 34 insertions(+)
diff --git a/include/net/flow_offload.h b/include/net
From: wenxu
If the representor is removed, then identify the indirect flow_blocks
that need to be removed by the release callback and the port representor
structure. To identify the port representor structure, a new
indr.cb_priv field needs to be introduced. The flow_block also needs to
be
From: wenxu
The block->nooffloaddevcnt should always count for an indr block,
even if the indr block offload is successful. The representor may be
gone while the ingress qdisc can still work in software mode.
block->nooffloaddevcnt warns with the following dmesg log:
[ 760.
From: wenxu
Prepare to fix the bug in the next patch: use the flow_indr_block_cb_alloc/remove
functions and remove __flow_block_indr_binding.
Signed-off-by: wenxu
---
drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c | 19 ---
.../net/ethernet/mellanox/mlx5/core/en/rep/tc.c
wrong inner_proto leads to the MAC header not being pulled into the linear area
3. finally it causes a crash in ovs_flow_extract->__skb_pull
Signed-off-by: wenxu
---
net/openvswitch/vport-gre.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/net/openvswitch/vport-gre.c b/net/openvswitch/vport-gre.c
i
tion: this is a fairly common
problem case, so we can delete the conntrack
immediately. --RR */
-	if (th->rst) {
+	if (th->rst && !nf_ct_tcp_rst_no_kill) {
 		nf_ct_kill_acct(ct, ctinfo, skb);
 		return NF_ACCEPT;
 	}
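If this knob were exposed as a sysctl, usage might look like the following
(the sysctl name is purely an assumption derived from the variable name; the
excerpt above does not show how it is exposed):
  sysctl -w net.netfilter.nf_conntrack_tcp_rst_no_kill=1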
BR
wenxu