[PATCH] iptunnel: Set tun_flags in the iptunnel_metadata_reply from src

2018-12-23 Thread wenxu
From: wenxu ip l add tun type gretap external ip r a 10.0.0.2 encap ip id 1000 dst 172.168.0.2 key dev tun ip a a 10.0.0.1/24 dev tun The peer sends an ARP request to 10.0.0.1 with a tunnel_id, but the ARP reply only sets tun_id and not the TUNNEL_KEY tun_flags. The ARP reply packet doesn't co

[PATCH iproute2] iprule: Add tun_id field in the selector

2018-12-24 Thread wenxu
From: wenxu ip rule add from all iif gretap tun_id 2000 lookup 200 Signed-off-by: wenxu --- ip/iprule.c| 33 + man/man8/ip-rule.8 | 4 +++- 2 files changed, 36 insertions(+), 1 deletion(-) diff --git a/ip/iprule.c b/ip/iprule.c index 0f8fc6d..d28f151

[PATCH net-next] vrf: Add VRF_F_BYPASS_RCV_NF flag to vrf device

2018-12-26 Thread wenxu
From: wenxu In ip_rcv the skb goes through the PREROUTING hook first, then jumps into the vrf device and goes through the same hook again. When conntrack works with vrf, there will be some conflicts for rules, because the packet goes through the hook twice with different nf status ip link add user1 type vrf

[PATCH net-next] ip_gre: Support lwtunnel for none-tunnel-dst gre port

2018-12-27 Thread wenxu
From: wenxu ip l add dev tun type gretap key 1000 ip a a dev tun 10.0.0.1/24 Packets with tun-id 1000 can be received by dev tun, but packets can't be sent through dev tun for a non-tunnel-dst. With this patch, tunnel-dst can be obtained through lwtunnel like below: ip r a 10.0.0.7 encap ip id 100

[PATCH] nft_flow_offload: Fix the peer route get from wrong daddr

2018-12-27 Thread wenxu
From: wenxu For a nat example: client 1.1.1.7 ---> 2.2.2.7 which is dnat'ed to server 10.0.0.7. When a syn_rcv pkt comes from the server, it gets the peer (client->server) route through daddr = ct->tuplehash[!dir].tuple.dst.u3.ip; the value 2.2.2.7 is not correct in this situation. It should be 10.0.0.7 ct-&

[PATCH] nft_flow_offload: Make flow offload work with vrf slave device correct

2018-12-29 Thread wenxu
From: wenxu In the forward chain the iif is changed from the slave device to the master vrf device. This causes the offload rule not to match on the lower slave device. This patch makes the following example work correctly ip addr add dev eth0 1.1.1.1/24 ip addr add dev eth1 10.0.0.1/24 ip link add user1 type

[PATCH iproute2 v4] iproute: Set ip/ip6 lwtunnel flags

2019-01-01 Thread wenxu
From: wenxu ip l add dev tun type gretap external ip r a 10.0.0.1 encap ip dst 192.168.152.171 id 1000 dev gretap In this gretap example, the command sets the id but doesn't set the TUNNEL_KEY flag, so there is no key field in the sent packet. The user can set flags with key, csum, seq ip r a 10.

[PATCH iproute2 v5] iproute: Set ip/ip6 lwtunnel flags

2019-01-01 Thread wenxu
From: wenxu ip l add dev tun type gretap external ip r a 10.0.0.1 encap ip dst 192.168.152.171 id 1000 dev gretap In this gretap example, the command sets the id but doesn't set the TUNNEL_KEY flag, so there is no key field in the sent packet. The user can set flags with key, csum, seq ip r a 10.

Re: [PATCH iproute2 v5] iproute: Set ip/ip6 lwtunnel flags

2019-01-08 Thread wenxu
Hi Stephen, I found the state of this patch is Accepted. I wonder why it wasn't merged into the iproute2 master? BR wenxu On 1/2/2019 11:57 AM, we...@ucloud.cn wrote: > From: wenxu > > ip l add dev tun type gretap external > ip r a 10.0.0.1 encap ip dst 192.168.152.171 i

[PATCH RESEND] nft_flow_offload: Fix the peer route get from wrong daddr

2019-01-08 Thread wenxu
From: wenxu For nat example: client 1.1.1.7 ---> 2.2.2.7 which dnat to 10.0.0.7 server When syn_rcv pkt from server it get the peer(client->server) route through daddr = ct->tuplehash[!dir].tuple.dst.u3.ip, the value 2.2.2.7 is not correct in this situation. it should be 10.0.0.7 ct-&

Re: [PATCH] netfilter: x_tables: add xt_tunnel match

2019-01-08 Thread wenxu
Hi Pablo, what is the state of this patch? On 12/21/2018 6:12 PM, we...@ucloud.cn wrote: > From: wenxu > > This patch allows us to match on the tunnel metadata that is available > of the packet. We can use this to validate if the packet comes from/goes > to tunnel and th

Re: [PATCH] nft_flow_offload: Make flow offload work with vrf slave device correct

2019-01-08 Thread wenxu
Hi Pablo, what is the status of this patch? On 12/29/2018 6:10 PM, we...@ucloud.cn wrote: > From: wenxu > > In the forward chain the iif is changed from slave device to master vrf > device. It will lead the offload not match on lower slave device. > > This patch make the

Re: [PATCH] netfilter: x_tables: add xt_tunnel match

2019-01-09 Thread wenxu
On 1/10/2019 12:41 AM, Pablo Neira Ayuso wrote: > On Fri, Dec 21, 2018 at 06:12:24PM +0800, we...@ucloud.cn wrote: > [...] >> +static struct xt_match tunnel_mt_reg __read_mostly = { >> +.name = "tunnel", >> +.revision = 0, >> +.family = NFPROTO_UNSPEC, >> +

Re: [PATCH] netfilter: x_tables: add xt_tunnel match

2019-01-09 Thread wenxu
On 1/10/2019 12:05 PM, wenxu wrote: > On 1/10/2019 12:41 AM, Pablo Neira Ayuso wrote: >> On Fri, Dec 21, 2018 at 06:12:24PM +0800, we...@ucloud.cn wrote: >> [...] >>> +static struct xt_match tunnel_mt_reg __read_mostly = { >>> + .name = &quo

[PATCH v2] netfilter: x_tables: add xt_tunnel match

2019-01-09 Thread wenxu
From: wenxu This patch allows us to match on the tunnel metadata that is available on the packet. We can use this to validate whether the packet comes from/goes to a tunnel and the corresponding tunnel ID in iptables. Signed-off-by: wenxu --- include/uapi/linux/netfilter/xt_tunnel.h | 12

[PATCH net-next] ip_tunnel: Fix DST_METADATA dst_entry handle in tnl_update_pmtu

2019-02-15 Thread wenxu
From: wenxu BUG report in selftests: bpf: test_tunnel.sh Testing IPIP tunnel... BUG: unable to handle kernel NULL pointer dereference at PGD 0 P4D 0 Oops: 0010 [#1] SMP PTI CPU: 0 PID: 16822 Comm: ping Not tainted 5.0.0-rc3-00352-gc8b34e6 #1 Hardware name: QEMU Standard PC

Re: [PATCH net-next] ip_tunnel: Fix DST_METADATA dst_entry handle in tnl_update_pmtu

2019-02-16 Thread wenxu
On 2019/2/17 12:34 AM, Alexei Starovoitov wrote: > On Sat, Feb 16, 2019 at 2:11 AM wrote: >> From: wenxu >> >> BUG report in selftests: bpf: test_tunnel.sh >> >> Testing IPIP tunnel... >> BUG: unable to handle kernel NULL pointer dereference at 00

Re: [PATCH net-next] ip_tunnel: Fix DST_METADATA dst_entry handle in tnl_update_pmtu

2019-02-16 Thread wenxu
On 2019/2/17 11:35 AM, wenxu wrote: > On 2019/2/17 12:34 AM, Alexei Starovoitov wrote: >> On Sat, Feb 16, 2019 at 2:11 AM wrote: >>> From: wenxu >>> >>> BUG report in selftests: bpf: test_tunnel.sh >>> >>> Testing IPIP tunnel... >&g

Re: [PATCH net-next] iptunnel: NULL pointer deref for ip_md_tunnel_xmit

2019-02-16 Thread wenxu
On 2019/2/15 5:38 PM, Alan Maguire wrote: > Naresh Kamboju noted the following oops during execution of selftest > tools/testing/selftests/bpf/test_tunnel.sh on x86_64: > > [ 274.120445] BUG: unable to handle kernel NULL pointer dereference > at > [ 274.128285] #PF error: [INSTR]

[PATCH net 1/2] net/sched: act_ct: fix restore the qdisc_skb_cb after defrag

2020-06-29 Thread wenxu
From: wenxu Fragmented packets are defragmented in tcf_ct_handle_fragments, which clears skb->cb; this clears the qdisc_skb_cb too and sets pkt_len to 0, so the bytes counter is always 0 when dumping the filter. It also updates the pkt_len after all the fragments finish the defrag into one packet and m

[PATCH net 2/2] net/sched: act_ct: add miss tcf_lastuse_update.

2020-06-29 Thread wenxu
From: wenxu When tcf_ct_act executes, tcf_lastuse_update should be called, or the used stats never update filter protocol ip pref 3 flower chain 0 filter protocol ip pref 3 flower chain 0 handle 0x1 eth_type ipv4 dst_ip 1.1.1.1 ip_flags frag/firstfrag skip_hw not_in_hw action order

[PATCH net] net/sched: act_mirred: fix fragment the packet after defrag in act_ct

2020-06-29 Thread wenxu
From: wenxu Fragmented packets are defragmented in the act_ct module. The reassembled packet can exceed the mtu in act_mirred. This big packet should be fragmented to be sent out. Fixes: b57dc7c13ea9 ("net/sched: Introduce action ct") Signed-off-by: wenxu --- This patch is bas

The size of the ct offload mlx5_flow_table in the mlx5e driver

2020-05-27 Thread wenxu
be changed to 8M through the following? ESW_POOLS[] = { 8 * 1024 * 1024, 1 * 1024 * 1024, 64 * 1024, 128 }; BR wenxu

[PATCH net-next 0/2] net/mlx5e: add nat support in ct_metadata

2020-05-28 Thread wenxu
From: wenxu Currently all the conntrack entry offload rules will be added to both the ct and ct_nat flow tables in the mlx5e driver. This does not make sense. This series provides a nat attribute in the ct_metadata action which tells the driver whether the rule should be added to the ct or ct_nat flow table wenxu (2): net

[PATCH net-next 1/2] net/sched: act_ct: add nat attribute in ct_metadata

2020-05-28 Thread wenxu
From: wenxu Add a nat attribute in the ct_metadata action. This tells the driver whether the offloaded conntrack entry is a nat one or not. Signed-off-by: wenxu --- include/net/flow_offload.h | 1 + net/sched/act_ct.c | 1 + 2 files changed, 2 insertions(+) diff --git a/include/net/flow_offload.h b

[PATCH net-next 2/2] net/mlx5e: add ct_metadata.nat support in ct offload

2020-05-28 Thread wenxu
From: wenxu In the ct offload all the conntrack entry offload rules will be added to both the ct ft and the ct_nat ft. This does not make sense. The ct_metadata.nat field tells the driver whether the rule should be added to the ct or ct_nat flow table Signed-off-by: wenxu --- drivers/net/ethernet/mellanox/mlx5/core/en

Re: [PATCH net-next 0/2] net/mlx5e: add nat support in ct_metadata

2020-05-28 Thread wenxu
On 5/28/2020 7:35 PM, Edward Cree wrote: > On 28/05/2020 08:15, we...@ucloud.cn wrote: >> From: wenxu >> >> Currently all the conntrack entry offfload rules will be add >> in both ct and ct_nat flow table in the mlx5e driver. It is >> not makesense. >>

[PATCH] net/sched: act_ct: add nat mangle action only for NAT-conntrack

2020-05-28 Thread wenxu
From: wenxu Currently the nat mangle action is added by comparing the inverted and original tuples. It is better to check the IPS_NAT_MASK flags first to avoid an unnecessary memcmp for non-NAT conntrack. Signed-off-by: wenxu --- net/sched/act_ct.c | 19 +-- 1 file changed, 13 insertions(+), 6

Re: [PATCH] net/sched: act_ct: add nat mangle action only for NAT-conntrack

2020-05-29 Thread wenxu
On 2020/5/30 1:56, Marcelo Ricardo Leitner wrote: > On Fri, May 29, 2020 at 12:07:45PM +0800, we...@ucloud.cn wrote: >> From: wenxu >> >> Currently add nat mangle action with comparing invert and ori tuple. >> It is better to check IPS_NAT_MASK flags first to avoid non

[PATCH v2] net/sched: act_ct: add nat mangle action only for NAT-conntrack

2020-05-29 Thread wenxu
From: wenxu Currently the nat mangle action is added by comparing the inverted and original tuples. It is better to check the IPS_NAT_MASK flags first to avoid an unnecessary memcmp for non-NAT conntrack. Signed-off-by: wenxu --- net/sched/act_ct.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/net/sched

Re: [PATCH] net/sched: act_ct: add nat mangle action only for NAT-conntrack

2020-05-29 Thread wenxu
On 2020/5/30 8:04, wenxu wrote: > On 2020/5/30 1:56, Marcelo Ricardo Leitner wrote: >> On Fri, May 29, 2020 at 12:07:45PM +0800, we...@ucloud.cn wrote: >>> From: wenxu >>> >>> Currently add nat mangle action with comparing invert and ori tuple. >>> It i

[PATCH net-next v2] net/mlx5e: add conntrack offload rules only in ct or ct_nat flow table

2020-05-29 Thread wenxu
From: wenxu In the ct offload all the conntrack entry offload rules will be added to both the ct ft and the ct_nat ft. This does not make sense. The driver can distinguish NAT from non-NAT conntrack through the FLOW_ACTION_MANGLE action. Signed-off-by: wenxu --- drivers/net/ethernet/mellanox/mlx5

Re: [PATCH net-next 2/2] net/mlx5e: add ct_metadata.nat support in ct offload

2020-05-31 Thread wenxu
On 5/31/2020 4:01 PM, Oz Shlomo wrote: > Hi Wenxu, > > On 5/28/2020 10:15 AM, we...@ucloud.cn wrote: >> From: wenxu >> >> In the ct offload all the conntrack entry offload  rules >> will be add to both ct ft and ct_nat ft twice. >> It is not makesen

Re: [PATCH net] net/sched: act_mirred: fix fragment the packet after defrag in act_ct

2020-06-30 Thread wenxu
On 6/30/2020 11:57 PM, Eric Dumazet wrote: > > On 6/29/20 7:54 PM, we...@ucloud.cn wrote: >> From: wenxu >> >> The fragment packets do defrag in act_ct module. The reassembled packet >> over the mtu in the act_mirred. This big packet should be fragment

Re: [PATCH net] net/sched: act_mirred: fix fragment the packet after defrag in act_ct

2020-06-30 Thread wenxu
On 7/1/2020 3:02 AM, Cong Wang wrote: > On Mon, Jun 29, 2020 at 7:55 PM wrote: >> From: wenxu >> >> The fragment packets do defrag in act_ct module. The reassembled packet >> over the mtu in the act_mirred. This big packet should be fragmented >> to s

Re: [PATCH net] net/sched: act_mirred: fix fragment the packet after defrag in act_ct

2020-06-30 Thread wenxu
On 7/1/2020 1:52 PM, Cong Wang wrote: > On Tue, Jun 30, 2020 at 7:36 PM wenxu wrote: >> >> On 7/1/2020 3:02 AM, Cong Wang wrote: >>> On Mon, Jun 29, 2020 at 7:55 PM wrote: >>>> From: wenxu >>>> >>>> The fragment packets do defrag i

Re: [PATCH net] net/sched: act_mirred: fix fragment the packet after defrag in act_ct

2020-06-30 Thread wenxu
On 7/1/2020 2:12 PM, Cong Wang wrote: > On Tue, Jun 30, 2020 at 11:03 PM wenxu wrote: >> Only the forward packet case needs to be fragmented again and there is no need to do >> defrag explicitly. > Same question: why act_mirred? You have to explain why act_mirred > has the responsibility

Re: [PATCH net] net/sched: act_mirred: fix fragment the packet after defrag in act_ct

2020-07-01 Thread wenxu
On 7/1/2020 2:21 PM, wenxu wrote: > On 7/1/2020 2:12 PM, Cong Wang wrote: >> On Tue, Jun 30, 2020 at 11:03 PM wenxu wrote: >>> Only forward packet case need do fragment again and there is no need do >>> defrag explicit. >> Same question: why act_mirred? Y

Re: [PATCH net 1/2] net/sched: act_ct: fix restore the qdisc_skb_cb after defrag

2020-07-02 Thread wenxu
On 7/2/2020 6:21 AM, David Miller wrote: > From: we...@ucloud.cn > Date: Mon, 29 Jun 2020 17:16:17 +0800 > >> From: wenxu >> >> The fragment packets do defrag in tcf_ct_handle_fragments >> will clear the skb->cb which make the qdisc_skb_cb clear >> too

Re: [PATCH net] net/sched: act_mirred: fix fragment the packet after defrag in act_ct

2020-07-02 Thread wenxu
On 7/2/2020 1:33 AM, Cong Wang wrote: > On Wed, Jul 1, 2020 at 1:21 AM wenxu wrote: >> >> On 7/1/2020 2:21 PM, wenxu wrote: >>> On 7/1/2020 2:12 PM, Cong Wang wrote: >>>> On Tue, Jun 30, 2020 at 11:03 PM wenxu wrote: >>>>> Only forward packet

Re: [PATCH net] net/sched: act_mirred: fix fragment the packet after defrag in act_ct

2020-07-03 Thread wenxu
On 7/3/2020 8:47 AM, Marcelo Ricardo Leitner wrote: > On Thu, Jul 02, 2020 at 02:39:07PM -0700, Cong Wang wrote: >> On Thu, Jul 2, 2020 at 10:32 AM Marcelo Ricardo Leitner >> wrote: >>> On Thu, Jul 02, 2020 at 05:36:38PM +0800, wenxu wrote: >>>> On 7/2/202

[PATCH net 2/2] net/sched: act_ct: add miss tcf_lastuse_update.

2020-07-04 Thread wenxu
From: wenxu When tcf_ct_act executes, tcf_lastuse_update should be called, or the used stats never update filter protocol ip pref 3 flower chain 0 filter protocol ip pref 3 flower chain 0 handle 0x1 eth_type ipv4 dst_ip 1.1.1.1 ip_flags frag/firstfrag skip_hw not_in_hw action order

[PATCH net] net/sched: act_ct: add miss tcf_lastuse_update.

2020-07-04 Thread wenxu
From: wenxu When tcf_ct_act executes, tcf_lastuse_update should be called, or the used stats never update filter protocol ip pref 3 flower chain 0 filter protocol ip pref 3 flower chain 0 handle 0x1 eth_type ipv4 dst_ip 1.1.1.1 ip_flags frag/firstfrag skip_hw not_in_hw action order

Re: [PATCH net 2/2] net/sched: act_ct: add miss tcf_lastuse_update.

2020-07-04 Thread wenxu
Please drop this one, wrong tags. On 2020/7/4 15:42, we...@ucloud.cn wrote: > From: wenxu > > When tcf_ct_act execute the tcf_lastuse_update should > be update or the used stats never update > > filter protocol ip pref 3 flower chain 0 > filter protocol ip pref 3 flow

[PATCH net-next 3/3] net/sched: act_ct: fix clobber qdisc_skb_cb in defrag

2020-07-05 Thread wenxu
From: wenxu Use nf_ct_frag_gather to defrag in act_ct to elide the CB clear. This avoids serious crashes and problems in the ct subsystem, because some packet schedulers store pointers in the qdisc CB private area and there are parallel accesses to the SKB. Fixes: b57dc7c13ea9 ("net/sched: Introduce acti

[PATCH net-next 2/3] netfilter: nf_conntrack_reasm: make nf_ct_frag6_gather elide the CB clear

2020-07-05 Thread wenxu
From: wenxu Make nf_ct_frag6_gather elide the CB clear when packets are defragmented by connection tracking. This lets each subsystem such as br_netfilter, openvswitch, and act_ct do defrag without restoring the CB, and avoids serious crashes and problems in the ct subsystem. Because some packet

[PATCH net-next 0/3] make nf_ct_frag/6_gather elide the skb CB clear

2020-07-05 Thread wenxu
From: wenxu Add nf_ct_frag_gather and make nf_ct_frag6_gather elide the CB clear when packets are defragmented by connection tracking. This lets each subsystem such as br_netfilter, openvswitch, and act_ct do defrag without restoring the CB. This also avoids serious crashes and problems in ct

Re: [PATCH net] net/sched: act_mirred: fix fragment the packet after defrag in act_ct

2020-07-05 Thread wenxu
On 2020/7/4 1:50, Marcelo Ricardo Leitner wrote: > On Fri, Jul 03, 2020 at 06:19:51PM +0800, wenxu wrote: >> On 7/3/2020 8:47 AM, Marcelo Ricardo Leitner wrote: >>> On Thu, Jul 02, 2020 at 02:39:07PM -0700, Cong Wang wrote: >>>> On Thu, Jul 2, 2020 at 10:32 AM Marce

[PATCH net-next 1/3] netfilter: nf_defrag_ipv4: Add nf_ct_frag_gather support

2020-07-05 Thread wenxu
From: wenxu Add nf_ct_frag_gather for conntrack defrag; it elides the CB clear when packets are defragmented by connection tracking. Signed-off-by: wenxu --- include/net/netfilter/ipv4/nf_defrag_ipv4.h | 2 + net/ipv4/netfilter/nf_defrag_ipv4.c | 314

Re: [PATCH net-next 1/3] netfilter: nf_defrag_ipv4: Add nf_ct_frag_gather support

2020-07-06 Thread wenxu
On 2020/7/6 22:38, Florian Westphal wrote: > we...@ucloud.cn wrote: >> From: wenxu >> >> Add nf_ct_frag_gather for conntrack defrag and it will >> elide the CB clear when packets are defragmented by >> connection tracking > Why is this patch required? > Can

Re: [PATCH net-next 1/3] netfilter: nf_defrag_ipv4: Add nf_ct_frag_gather support

2020-07-06 Thread wenxu
On 7/7/2020 12:29 AM, Florian Westphal wrote: > wenxu wrote: >> On 2020/7/6 22:38, Florian Westphal wrote: >>> we...@ucloud.cn wrote: >>>> From: wenxu >>>> >>>> Add nf_ct_frag_gather for conntrack defrag and it will >>>> elide the

[PATCH net-next v2 1/3] net: ip_fragment: Add ip_defrag_ignore_cb support

2020-07-06 Thread wenxu
From: wenxu Add ip_defrag_ignore_cb for conntrack defrag; it elides the CB clear when packets are defragmented by connection tracking. Signed-off-by: wenxu --- include/net/ip.h | 2 ++ net/ipv4/ip_fragment.c | 55 ++ 2 files

[PATCH net-next v2 0/3] make nf_ct_frag/6_gather elide the skb CB clear

2020-07-06 Thread wenxu
From: wenxu Add nf_ct_frag_gather and make nf_ct_frag6_gather elide the CB clear when packets are defragmented by connection tracking. This lets each subsystem such as br_netfilter, openvswitch, and act_ct do defrag without restoring the CB. This also avoids serious crashes and problems in ct

[PATCH net-next v2 3/3] net/sched: act_ct: fix clobber qdisc_skb_cb in defrag

2020-07-06 Thread wenxu
From: wenxu Use ip_defrag_ignore_cb to defrag in act_ct to elide the CB clear. This avoids serious crashes and problems in the ct subsystem, because some packet schedulers store pointers in the qdisc CB private area and there are parallel accesses to the SKB. Fixes: b57dc7c13ea9 ("net/sched: Introduce acti

[PATCH net-next v2 2/3] netfilter: nf_conntrack_reasm: make nf_ct_frag6_gather elide the CB clear

2020-07-06 Thread wenxu
From: wenxu Make nf_ct_frag6_gather elide the CB clear when packets are defragmented by connection tracking. This lets each subsystem such as br_netfilter, openvswitch, and act_ct do defrag without restoring the CB, and avoids serious crashes and problems in the ct subsystem. Because some packet

[PATCH net] net/sched: cls_api: Fix nooffloaddevcnt counter in indr block call success

2019-09-16 Thread wenxu
From: wenxu When a block binds with a dev which supports the indr block call (vxlan/gretap device), it can bind successfully but with nooffloaddevcnt++. Replacing the hw filter in tc_setup_cb_call with skip_sw mode will then fail because of the check on nooffloaddevcnt and skip_sw: if (block->nooffloaddev

Re: [PATCH net] net/sched: cls_api: Fix nooffloaddevcnt counter in indr block call success

2019-09-16 Thread wenxu
On 2019/9/16 18:28, Jiri Pirko wrote: > Please use get_maintainers script to get list of ccs. > > Mon, Sep 16, 2019 at 12:15:34PM CEST, we...@ucloud.cn wrote: >> From: wenxu >> >> When a block bind with a dev which support indr block call(vxlan/gretap >> devi

[PATCH net v2] net/sched: cls_api: Fix nooffloaddevcnt counter when indr block call success

2019-09-16 Thread wenxu
From: wenxu A vxlan or gretap device offloads through the indr block method. If the device successfully binds with a real hw through the indr block call, it also increments the nooffloadcnt counter. This counter causes rule adds to fail in fl_hw_replace_filter-->tc_setup_cb_call with the skip_sw flag. In

[PATCH net v3] net/sched: cls_api: Fix nooffloaddevcnt counter when indr block call success

2019-09-19 Thread wenxu
From: wenxu A vxlan or gretap device offloads through the indr block method. If the device successfully binds with a real hw through the indr block call, it also increments the nooffloadcnt counter. This counter causes rule adds to fail in fl_hw_replace_filter-->tc_setup_cb_call with the skip_sw flag. In

Re: [PATCH net v3] net/sched: cls_api: Fix nooffloaddevcnt counter when indr block call success

2019-09-19 Thread wenxu
Sorry, forgot to cc Jiri. On 9/19/2019 4:37 PM, we...@ucloud.cn wrote: > From: wenxu > > A vxlan or gretap device offload through indr block methord. If the device > successfully bind with a real hw through indr block call, It also add > nooffloadcnt counter. This counter will le

Re: [PATCH net v3] net/sched: cls_api: Fix nooffloaddevcnt counter when indr block call success

2019-09-22 Thread wenxu
Hi John & Jakub, are there some limitations for the indirect tc callback working with skip_sw? BR wenxu On 9/19/2019 8:50 PM, Or Gerlitz wrote: > >> successfully bind with a real hw through indr block call, It also add >> nooffloadcnt counter. This counter will lead the

Re: [PATCH net v3] net/sched: cls_api: Fix nooffloaddevcnt counter when indr block call success

2019-09-23 Thread wenxu
On 2019/9/23 17:42, John Hurley wrote: > On Mon, Sep 23, 2019 at 5:20 AM wenxu wrote: >> Hi John & Jakub >> >> There are some limitations for indirect tc callback work with skip_sw ? >> > Hi Wenxu, > This is not really a limitation. > As Or points out, i

[PATCH net 2/2] flow_offload: fix miss cleanup flow_block_cb of indr_setup_ft_cb type

2020-06-10 Thread wenxu
From: wenxu Currently indr setup supports both indr_setup_ft_cb and indr_setup_tc_cb, but __flow_block_indr_cleanup only checks indr_setup_tc_cb in the mlx5e driver. It is better to just check indr_release_cb; all the setup_cb types share the same release_cb. Fixes: 1fac52da5942 (&quo

[PATCH net 1/2] flow_offload: fix incorrect cb_priv check for flow_block_cb

2020-06-10 Thread wenxu
From: wenxu The cb_priv that flow_indr_dev_unregister gets from the driver is the same as the cb_priv of flow_indr_dev, but it is never the same as the cb_priv of flow_block_cb, which leads to a missed cleanup operation. For the mlx5e example, the cb_priv of flow_indr_dev is the mlx5e_rep_priv which re

[PATCH net v2] flow_offload: fix incorrect cleanup for indirect flow_blocks

2020-06-11 Thread wenxu
From: wenxu If the representor is removed, then identify the indirect flow_blocks that need to be removed by the release callback. Fixes: 1fac52da5942 ("net: flow_offload: consolidate indirect flow_block infrastructure") Signed-off-by: wenxu --- drivers/net/ethernet/broadcom/bnxt

Re: [PATCH net v2] flow_offload: fix incorrect cleanup for indirect flow_blocks

2020-06-11 Thread wenxu
On 2020/6/11 19:05, Pablo Neira Ayuso wrote: On Thu, Jun 11, 2020 at 06:03:17PM +0800, we...@ucloud.cn wrote: [...] diff --git a/net/core/flow_offload.c b/net/core/flow_offload.c index 0cfc35e..40eaf64 100644 --- a/net/core/flow_offload.c +++ b/net/core/flow_offload.c @@ -372,14 +372,13 @@ int flo

[PATCH net v3 2/2] flow_offload: fix incorrect cb_priv check for flow_block_cb

2020-06-11 Thread wenxu
From: wenxu In the function __flow_block_indr_cleanup, the match statement this->cb_priv == cb_priv is always false; the flow_block_cb->cb_priv is totally different data from the flow_indr_dev->cb_priv. Store the representor cb_priv in the flow_block_cb->indr.cb_priv in the dr

[PATCH net v3 1/2] flow_offload: fix incorrect cleanup for indirect flow_blocks

2020-06-11 Thread wenxu
From: wenxu If the representor is removed, then identify the indirect flow_blocks that need to be removed by the release callback. Fixes: 1fac52da5942 ("net: flow_offload: consolidate indirect flow_block infrastructure") Signed-off-by: wenxu --- drivers/net/ethernet/broadcom/bnxt

[PATCH net 2/2] flow_offload: fix the list_del corruption in the driver list

2020-06-12 Thread wenxu
From: wenxu When an indr device is added and offload succeeds, then after the representor goes away all the flow_block_cb are cleaned up, but the del from the driver list is missed. Fixes: 0fdcf78d5973 ("net: use flow_indr_dev_setup_offload()") Signed-off-by: wenxu --- drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c

[PATCH net 1/2] flow_offload: return zero for FLOW_BLOCK_UNBIND type flow_indr_dev_setup_offload

2020-06-12 Thread wenxu
From: wenxu block->nooffloaddevcnt warning with the following dmesg log: When an indr device is added and offload succeeds, block->nooffloaddevcnt is always zero. But when all the representors go away, all the flow_block_cb are cleaned up. Then remove the indr device, The __tcf_block_pu

Re: [PATCH net 1/2] flow_offload: return zero for FLOW_BLOCK_UNBIND type flow_indr_dev_setup_offload

2020-06-12 Thread wenxu
Please drop this series. Thank you. On 2020/6/12 18:08, we...@ucloud.cn wrote: From: wenxu block->nooffloaddevcnt warning with following dmesg log: When a indr device add in offload success. The block->nooffloaddevcnt always zero. But When all the representors go away. All the flow_bl

[PATCH net v2 2/4] flow_offload: fix incorrect cb_priv check for flow_block_cb

2020-06-13 Thread wenxu
From: wenxu In the function __flow_block_indr_cleanup, the match statement this->cb_priv == cb_priv is always false; the flow_block_cb->cb_priv is totally different data from the flow_indr_dev->cb_priv. Store the representor cb_priv in the flow_block_cb->indr.cb_priv in the dr

[PATCH net v2 3/4] net/sched: cls_api: fix nooffloaddevcnt warning dmesg log

2020-06-13 Thread wenxu
From: wenxu The block->nooffloaddevcnt should always count for the indr block, even when the indr block offload is successful. The representor may have gone away, and the ingress qdisc can work in software mode. block->nooffloaddevcnt warning with the following dmesg log: [ 760.

[PATCH net v2 4/4] flow_offload: fix the list_del corruption in the driver list

2020-06-13 Thread wenxu
From: wenxu When an indr device is added and offload succeeds, then after the representor goes away all the flow_block_cb are cleaned up, but the del from the driver list is missed. Fixes: 0fdcf78d5973 ("net: use flow_indr_dev_setup_offload()") Signed-off-by: wenxu --- net/netfilter/nf_flow_table_offload.c | 1 + net

[PATCH net v2 1/4] flow_offload: fix incorrect cleanup for indirect flow_blocks

2020-06-13 Thread wenxu
From: wenxu If the representor is removed, then identify the indirect flow_blocks that need to be removed by the release callback. Fixes: 1fac52da5942 ("net: flow_offload: consolidate indirect flow_block infrastructure") Signed-off-by: wenxu --- drivers/net/ethernet/broadcom/bnxt

[PATCH net v3 0/4] several fixes for indirect flow_blocks offload

2020-06-15 Thread wenxu
From: wenxu v2: patch2: store the cb_priv of the representor in the flow_block_cb->indr.cb_priv in the driver, and make the correct check with the statement this->indr.cb_priv == cb_priv patch4: del from the driver list only in the indirect cleanup callbacks v3: add the cover letter and cha

[PATCH net v3 2/4] flow_offload: fix incorrect cb_priv check for flow_block_cb

2020-06-15 Thread wenxu
From: wenxu In the function __flow_block_indr_cleanup, the match statement this->cb_priv == cb_priv is always false; the flow_block_cb->cb_priv is totally different data from the flow_indr_dev->cb_priv. Store the representor cb_priv in the flow_block_cb->indr.cb_priv in the dr

[PATCH net v3 4/4] flow_offload: fix the list_del corruption in the driver list

2020-06-15 Thread wenxu
From: wenxu When an indr device is added and offload succeeds, then after the representor goes away all the flow_block_cb are cleaned up, but the del from the driver list is missed. Fixes: 0fdcf78d5973 ("net: use flow_indr_dev_setup_offload()") Signed-off-by: wenxu --- net/netfilter/nf_flow_table_offload.c | 1 + net

[PATCH net v3 1/4] flow_offload: fix incorrect cleanup for flowtable indirect flow_blocks

2020-06-15 Thread wenxu
From: wenxu The cleanup operation is based on the setup callback, but in the mlx5e driver there are tc and flowtable indirect setup callbacks sharing the same release callback. So when the representor is removed, then identify the indirect flow_blocks that need to be removed by the release

[PATCH net v3 3/4] net/sched: cls_api: fix nooffloaddevcnt warning dmesg log

2020-06-15 Thread wenxu
From: wenxu When an indr device is added and offload succeeds, the block->nooffloaddevcnt should be 0. After the representor goes away, when the indr device goes away the flow_block UNBIND operation returns -EOPNOTSUPP, which leads to the warning dmesg log. The block->nooffloaddevcnt should always count fo

Re: [PATCH net v3 2/4] flow_offload: fix incorrect cb_priv check for flow_block_cb

2020-06-16 Thread wenxu
On 2020/6/16 18:51, Simon Horman wrote: > On Tue, Jun 16, 2020 at 11:19:38AM +0800, we...@ucloud.cn wrote: >> From: wenxu >> >> In the function __flow_block_indr_cleanup, The match stataments >> this->cb_priv == cb_priv is always false, the flow_block_cb->cb_priv

Re: [PATCH net v3 2/4] flow_offload: fix incorrect cb_priv check for flow_block_cb

2020-06-16 Thread wenxu
On 2020/6/16 22:34, Simon Horman wrote: > On Tue, Jun 16, 2020 at 10:20:46PM +0800, wenxu wrote: >> On 2020/6/16 18:51, Simon Horman wrote: >>> On Tue, Jun 16, 2020 at 11:19:38AM +0800, we...@ucloud.cn wrote: >>>> From: wenxu >>>> >>>> In the fu

Re: [PATCH net v3 3/4] net/sched: cls_api: fix nooffloaddevcnt warning dmesg log

2020-06-16 Thread wenxu
On 6/17/2020 4:17 AM, Pablo Neira Ayuso wrote: > On Tue, Jun 16, 2020 at 11:19:39AM +0800, we...@ucloud.cn wrote: >> From: wenxu >> >> When a indr device add in offload success. The block->nooffloaddevcnt >> should be 0. After the representor go away. When

Re: [PATCH net v3 3/4] net/sched: cls_api: fix nooffloaddevcnt warning dmesg log

2020-06-16 Thread wenxu
On 6/17/2020 4:30 AM, Pablo Neira Ayuso wrote: > On Tue, Jun 16, 2020 at 10:17:50PM +0200, Pablo Neira Ayuso wrote: >> On Tue, Jun 16, 2020 at 11:19:39AM +0800, we...@ucloud.cn wrote: >>> From: wenxu >>> >>> When a indr device add in offload success. The bl

Re: [PATCH net v3 2/4] flow_offload: fix incorrect cb_priv check for flow_block_cb

2020-06-16 Thread wenxu
On 6/17/2020 4:13 AM, Pablo Neira Ayuso wrote: > On Tue, Jun 16, 2020 at 11:19:38AM +0800, we...@ucloud.cn wrote: >> From: wenxu >> >> In the function __flow_block_indr_cleanup, The match stataments >> this->cb_priv == cb_priv is always false, the flow_block_cb-&g

Re: [PATCH net v3 2/4] flow_offload: fix incorrect cb_priv check for flow_block_cb

2020-06-16 Thread wenxu
On 6/16/2020 11:47 PM, Simon Horman wrote: > On Tue, Jun 16, 2020 at 11:18:16PM +0800, wenxu wrote: >> On 2020/6/16 22:34, Simon Horman wrote: >>> On Tue, Jun 16, 2020 at 10:20:46PM +0800, wenxu wrote: >>>> On 2020/6/16 18:51, Simon Horman wrote: >>>>>

Re: [PATCH net v3 2/4] flow_offload: fix incorrect cb_priv check for flow_block_cb

2020-06-16 Thread wenxu
On 6/17/2020 4:38 AM, Pablo Neira Ayuso wrote: > On Tue, Jun 16, 2020 at 05:47:17PM +0200, Simon Horman wrote: >> On Tue, Jun 16, 2020 at 11:18:16PM +0800, wenxu wrote: >>> On 2020/6/16 22:34, Simon Horman wrote: >>>> On Tue, Jun 16, 2020 at 10:20:46PM +0800, wenxu

Re: [PATCH net v3 2/4] flow_offload: fix incorrect cb_priv check for flow_block_cb

2020-06-17 Thread wenxu
On 6/17/2020 4:38 PM, Pablo Neira Ayuso wrote: > On Wed, Jun 17, 2020 at 11:36:19AM +0800, wenxu wrote: >> On 6/17/2020 4:38 AM, Pablo Neira Ayuso wrote: >>> On Tue, Jun 16, 2020 at 05:47:17PM +0200, Simon Horman wrote: >>>> On Tue, Jun 16, 2020 at 11:18:16PM +080

[PATCH net v4 1/4] flow_offload: add flow_indr_block_cb_alloc/remove function

2020-06-17 Thread wenxu
From: wenxu Add the flow_indr_block_cb_alloc/remove functions to prepare for the bug fix in the third patch. Signed-off-by: wenxu --- include/net/flow_offload.h | 13 + net/core/flow_offload.c| 43 --- 2 files changed, 45 insertions(+), 11
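An alloc/remove helper pair like the one this patch introduces can be sketched in userland C as a list-management pair. The names and the singly-linked list are illustrative assumptions (the real kernel helpers take a callback, an identifier, and a release function, and use list_head); the point is that allocation and removal are matched through the same cb_priv key.

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal stand-in for an indirect block callback entry. */
struct indr_cb {
	void *cb_priv;           /* key used to find the entry again */
	struct indr_cb *next;
};

/* Allocate an entry keyed by cb_priv and push it onto the list. */
static struct indr_cb *indr_cb_alloc(struct indr_cb **head, void *cb_priv)
{
	struct indr_cb *cb = calloc(1, sizeof(*cb));

	if (!cb)
		return NULL;
	cb->cb_priv = cb_priv;
	cb->next = *head;
	*head = cb;
	return cb;
}

/* Find the entry registered with cb_priv, unlink it, and free it. */
static void indr_cb_remove(struct indr_cb **head, void *cb_priv)
{
	for (struct indr_cb **p = head; *p; p = &(*p)->next) {
		if ((*p)->cb_priv == cb_priv) {
			struct indr_cb *dead = *p;

			*p = dead->next;
			free(dead);
			return;
		}
	}
}
```

The pointer-to-pointer walk in indr_cb_remove avoids a special case for removing the list head, which keeps the helper usable from any cleanup path.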

[PATCH net v4 3/4] net: flow_offload: fix flow_indr_dev_unregister path

2020-06-17 Thread wenxu
From: wenxu If the representor is removed, then identify the indirect flow_blocks that need to be removed by the release callback and the port representor structure. To identify the port representor structure, a new indr.cb_priv field needs to be introduced. The flow_block also needs to be

[PATCH net v4 0/4] several fixes for indirect flow_blocks offload

2020-06-17 Thread wenxu
From: wenxu v2: patch2: store the cb_priv of the representor in flow_block_cb->indr.cb_priv in the driver, and make the correct check with the statement this->indr.cb_priv == cb_priv patch4: delete from the driver list only in the indirect cleanup callbacks v3: add the cover letter and changelog

[PATCH net v4 2/4] flow_offload: use flow_indr_block_cb_alloc/remove function

2020-06-17 Thread wenxu
From: wenxu Prepare to fix the bug in the next patch: use the flow_indr_block_cb_alloc/remove functions and remove __flow_block_indr_binding. Signed-off-by: wenxu --- drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c | 19 --- .../net/ethernet/mellanox/mlx5/core/en/rep/tc.c

[PATCH net v4 4/4] net/sched: cls_api: fix nooffloaddevcnt warning dmesg log

2020-06-17 Thread wenxu
From: wenxu The block->nooffloaddevcnt should always be counted for an indr block, even when the indr block offload succeeds. The representor may go away, and the ingress qdisc can then work in software mode. block->nooffloaddevcnt warns with the following dmesg log: [ 760.
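The counter invariant behind the dmesg warning can be illustrated with a hedged userland sketch. The struct and function names are assumptions made for the example (only nooffloaddevcnt comes from the thread): every indirect bind must bump the counter, even when hardware offload succeeded, because the representor can disappear and the qdisc must fall back to software; otherwise the matching unbind drives the counter negative, which is the warning case.

```c
#include <assert.h>

/* Reduced stand-in for the tc block state in question. */
struct tcf_block {
	int nooffloaddevcnt;     /* bindings that may run in software */
};

/* Indirect blocks always count as "no offload device", even on
 * successful hardware offload, so software fallback stays balanced. */
static void indr_block_bind(struct tcf_block *block)
{
	block->nooffloaddevcnt++;
}

/* Returns -1 when the unbind is unbalanced, i.e. the decrement
 * would underflow -- the condition the kernel WARNs about. */
static int indr_block_unbind(struct tcf_block *block)
{
	if (block->nooffloaddevcnt == 0)
		return -1;
	block->nooffloaddevcnt--;
	return 0;
}
```

If the bind path skips the increment on successful offload but the unbind path still decrements, the second function hits the underflow branch, mirroring the warning this patch removes.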

[PATCH net v5 0/4] several fixes for indirect flow_blocks offload

2020-06-18 Thread wenxu
From: wenxu v2: patch2: store the cb_priv of the representor in flow_block_cb->indr.cb_priv in the driver, and make the correct check with the statement this->indr.cb_priv == cb_priv patch4: delete from the driver list only in the indirect cleanup callbacks v3: add the cover letter and changelog

[PATCH net v5 1/4] flow_offload: add flow_indr_block_cb_alloc/remove function

2020-06-18 Thread wenxu
From: wenxu Add the flow_indr_block_cb_alloc/remove functions for the next fix patch. Signed-off-by: wenxu --- include/net/flow_offload.h | 13 + net/core/flow_offload.c| 21 + 2 files changed, 34 insertions(+) diff --git a/include/net/flow_offload.h b/include/net

[PATCH net v5 3/4] net: flow_offload: fix flow_indr_dev_unregister path

2020-06-18 Thread wenxu
From: wenxu If the representor is removed, then identify the indirect flow_blocks that need to be removed by the release callback and the port representor structure. To identify the port representor structure, a new indr.cb_priv field needs to be introduced. The flow_block also needs to be

[PATCH net v5 4/4] net/sched: cls_api: fix nooffloaddevcnt warning dmesg log

2020-06-18 Thread wenxu
From: wenxu The block->nooffloaddevcnt should always be counted for an indr block, even when the indr block offload succeeds. The representor may go away, and the ingress qdisc can then work in software mode. block->nooffloaddevcnt warns with the following dmesg log: [ 760.

[PATCH net v5 2/4] flow_offload: use flow_indr_block_cb_alloc/remove function

2020-06-18 Thread wenxu
From: wenxu Prepare to fix the bug in the next patch: use the flow_indr_block_cb_alloc/remove functions and remove __flow_block_indr_binding. Signed-off-by: wenxu --- drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c | 19 --- .../net/ethernet/mellanox/mlx5/core/en/rep/tc.c

[PATCH] [stable 4.1.y PATCH] openvswitch: fix crash caused by non-nvgre packet

2015-12-22 Thread wenxu
A wrong inner_proto leads to the MAC header not being pulled into the linear data area. 3. Finally it caused a crash in ovs_flow_extract->__skb_pull Signed-off-by: wenxu --- net/openvswitch/vport-gre.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/net/openvswitch/vport-gre.c b/net/openvswitch/vport-gre.c i

openvswitch conntrack and nat problem in first packet reply with RST

2017-03-13 Thread wenxu
tion: this is a fairly common problem case, so we can delete the conntrack immediately. --RR */ -if (th->rst ) { +if (th->rst && !nf_ct_tcp_rst_no_kill) { nf_ct_kill_acct(ct, ctinfo, skb); return NF_ACCEPT; } BR wenxu
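The one-line change quoted above can be isolated as a predicate. Note that nf_ct_tcp_rst_no_kill is a toggle proposed in this email, not a mainline sysctl; the sketch below only shows the control flow being suggested: keep the conntrack entry alive on RST when the toggle is set, so that a NATed RST reply to the very first packet does not destroy state the reverse direction still needs.

```c
#include <assert.h>
#include <stdbool.h>

/* Decide whether the conntrack entry should be killed on a TCP RST.
 * th_rst models th->rst from the TCP header; rst_no_kill models the
 * hypothetical nf_ct_tcp_rst_no_kill toggle from the quoted diff. */
static bool should_kill_ct_on_rst(bool th_rst, bool rst_no_kill)
{
	/* Upstream behavior: any RST kills the entry. The proposed
	 * toggle suppresses the kill: th->rst && !nf_ct_tcp_rst_no_kill. */
	return th_rst && !rst_no_kill;
}
```

With the toggle clear the predicate reproduces the upstream "RST kills the entry" behavior; with it set, an RST leaves the entry in place, which is the behavior the poster wants for the NATed first-packet-reply case.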
