ng a NULL skb to avoid calling the reuseport's bpf_prog.
Fixes: a6024562ffd7 ("udp: Add GRO functions to UDP socket")
Cc: Tom Herbert
Signed-off-by: Martin KaFai Lau
---
net/ipv4/udp.c | 6 +-
net/ipv6/udp.c | 2 +-
2 files changed, 6 insertions(+), 2 deletions(-)
diff --git
ic
to running bpf_prog, so bpf tag is used for this series.
Please refer to the individual commit message for details.
Martin KaFai Lau (2):
bpf: udp: ipv6: Avoid running reuseport's bpf_prog from __udp6_lib_err
bpf: udp: Avoid calling reuseport's bpf_prog from udp_gro
net/ipv4/udp
PF")
Cc: Craig Gallek
Signed-off-by: Martin KaFai Lau
---
net/ipv6/udp.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index 07fa579dfb96..133e6370f89c 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -515,7 +515,7 @@ int __udp6_l
patch 1 make more sense
to merge it with this patch. With this change, for patch 1 and 2:
Acked-by: Martin KaFai Lau
.kallsyms] [k]
> __cgroup_bpf_run_filter_getsockopt
> |
> --3.30%--__cgroup_bpf_run_filter_getsockopt
> |
> --0.81%--__kmalloc
>
> Signed-off-by: Stanislav Fomichev
> Cc: Martin KaFai Lau
> Cc: So
On Thu, Jan 21, 2021 at 09:40:19PM +0100, Shanti Lombard wrote:
> Le 2021-01-21 12:14, Jakub Sitnicki a écrit :
> > On Wed, Jan 20, 2021 at 10:06 PM CET, Alexei Starovoitov wrote:
> >
> > There is also documentation in the kernel:
> >
> > https://www.kernel.org/doc/html/latest/bpf/prog_sk_lookup.
On Wed, Jan 20, 2021 at 05:22:41PM -0800, Stanislav Fomichev wrote:
> BPF rewrites from 111 to 111, but it still should mark the port as
> "changed".
> We also verify that if port isn't touched by BPF, it's still prohibited.
>
> Signed-off-by: Stanislav Fomichev
> ---
> .../selftests/bpf/prog_te
On Thu, Jan 21, 2021 at 02:57:44PM -0800, s...@google.com wrote:
> On 01/21, Martin KaFai Lau wrote:
> > On Wed, Jan 20, 2021 at 05:22:41PM -0800, Stanislav Fomichev wrote:
> > > BPF rewrites from 111 to 111, but it still should mark the port as
> > > "changed&quo
On Thu, Jan 21, 2021 at 04:30:08PM -0800, s...@google.com wrote:
> On 01/21, Martin KaFai Lau wrote:
> > On Thu, Jan 21, 2021 at 02:57:44PM -0800, s...@google.com wrote:
> > > On 01/21, Martin KaFai Lau wrote:
> > > > On Wed, Jan 20, 2021 at 05:22:41PM -
On Fri, Jan 22, 2021 at 08:16:40AM -0800, s...@google.com wrote:
> On 01/21, Martin KaFai Lau wrote:
> > On Thu, Jan 21, 2021 at 04:30:08PM -0800, s...@google.com wrote:
> > > On 01/21, Martin KaFai Lau wrote:
> > > > On Thu, Jan 21, 2021 at 02:57:44PM -0800, s...@goo
On Tue, Dec 01, 2020 at 11:44:12PM +0900, Kuniyuki Iwashima wrote:
> This patch renames reuseport_select_sock() to __reuseport_select_sock() and
> adds two wrapper function of it to pass the migration type defined in the
> previous commit.
>
> reuseport_select_sock : BPF_SK_REUSEPORT_MI
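The rename described in this excerpt is a thin-wrapper pattern: the old entry point keeps its name and behavior, while migration-aware callers get their own wrapper that passes a different type. A minimal sketch of that shape follows; the enum values, the second wrapper's name, and the __reuseport_select_sock() signature are assumptions for illustration, not the exact code from the series.
```
/* Sketch of the wrapper pattern; enum names and the internal signature
 * are illustrative, not taken verbatim from the quoted patch.
 */
enum sk_reuseport_migration {
	BPF_SK_REUSEPORT_MIGRATE_NO,
	BPF_SK_REUSEPORT_MIGRATE_REQUEST,
};

static struct sock *
__reuseport_select_sock(struct sock *sk, u32 hash, struct sk_buff *skb,
			int hdr_len, enum sk_reuseport_migration migration);

/* Existing lookup path: behaves exactly as before the rename. */
struct sock *reuseport_select_sock(struct sock *sk, u32 hash,
				   struct sk_buff *skb, int hdr_len)
{
	return __reuseport_select_sock(sk, hash, skb, hdr_len,
				       BPF_SK_REUSEPORT_MIGRATE_NO);
}

/* Migration path: pick a new listener for a child of a closing listener. */
struct sock *reuseport_select_migrated_sock(struct sock *sk, u32 hash,
					    struct sk_buff *skb)
{
	return __reuseport_select_sock(sk, hash, skb, 0,
				       BPF_SK_REUSEPORT_MIGRATE_REQUEST);
}
```
Keeping the original function name as a wrapper means existing lookup callers do not change; only the new migration call sites pass a non-default type.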
On Thu, Dec 10, 2020 at 01:57:19AM +0900, Kuniyuki Iwashima wrote:
[ ... ]
> > > > I think it is a bit complex to pass the new listener from
> > > > reuseport_detach_sock() to inet_csk_listen_stop().
> > > >
> > > > __tcp_close/tcp_disconnect/tcp_abort
> > > > |-tcp_set_state
> > > > | |-unhas
On Thu, Dec 10, 2020 at 02:15:38PM +0900, Kuniyuki Iwashima wrote:
> From: Martin KaFai Lau
> Date: Wed, 9 Dec 2020 16:07:07 -0800
> > On Tue, Dec 01, 2020 at 11:44:12PM +0900, Kuniyuki Iwashima wrote:
> > > This patch renames reuseport_select_sock() to __reuseport_sel
On Thu, Dec 10, 2020 at 02:58:10PM +0900, Kuniyuki Iwashima wrote:
[ ... ]
> > > I've implemented one-by-one migration only for the accept queue for now.
> > > In addition to the concern about TFO queue,
> > You meant this queue: queue->fastopenq.rskq_rst_head?
>
> Yes.
>
>
> > Can "req" be p
On Tue, Dec 15, 2020 at 02:03:13AM +0900, Kuniyuki Iwashima wrote:
> From: Martin KaFai Lau
> Date: Thu, 10 Dec 2020 10:49:15 -0800
> > On Thu, Dec 10, 2020 at 02:15:38PM +0900, Kuniyuki Iwashima wrote:
> > > From: Martin KaFai Lau
> > > Date: Wed, 9 Dec
On Thu, Dec 17, 2020 at 01:41:58AM +0900, Kuniyuki Iwashima wrote:
[ ... ]
> > There may also be places assuming that the req->rsk_listener will never
> > change once it is assigned. not sure. have not looked closely yet.
>
> I have checked this again. There are no functions that expect explici
On Thu, Dec 17, 2020 at 09:23:23AM -0800, Stanislav Fomichev wrote:
> When we attach a bpf program to cgroup/getsockopt any other getsockopt()
> syscall starts incurring kzalloc/kfree cost. While, in general, it's
> not an issue, sometimes it is, like in the case of TCP_ZEROCOPY_RECEIVE.
> TCP_ZERO
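For context on why the per-call cost matters, here is a hedged userspace sketch of the getsockopt() call being discussed; the function name and the mmap() setup of the receive region are assumptions, while TCP_ZEROCOPY_RECEIVE and struct tcp_zerocopy_receive are the uapi names.
```
/* Userspace view of the call being optimized; minimal usage sketch with
 * error handling trimmed.  The mmap()'d region over the socket is assumed
 * to exist already.
 */
#include <linux/tcp.h>		/* TCP_ZEROCOPY_RECEIVE, struct tcp_zerocopy_receive */
#include <netinet/in.h>		/* IPPROTO_TCP */
#include <string.h>
#include <sys/socket.h>

int zerocopy_receive(int fd, void *mapped_addr, unsigned int len)
{
	struct tcp_zerocopy_receive zc;
	socklen_t zc_len = sizeof(zc);

	memset(&zc, 0, sizeof(zc));
	zc.address = (__u64)(unsigned long)mapped_addr;	/* region mmap()'d over fd */
	zc.length = len;

	/* With a cgroup/getsockopt program attached, every one of these
	 * calls also pays the generic hook's kzalloc/kfree cost.
	 */
	return getsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE, &zc, &zc_len);
}
```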
> > - SK_PASS with selected_sk, select it as a new listener
> > - SK_PASS with selected_sk NULL, fall back to the random selection
> > - SK_DROP, cancel the migration
> >
> > Link:
> > https://lore.kernel.org/netdev/20201123003828.xjpjdtk4ygl6t...@kafai-mbp.dhcp.thefacebook.com/
> > Su
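A minimal sk_reuseport program sketch following the return-value contract quoted above; the section name, map name/layout, and key choice are illustrative rather than taken from the series.
```
// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Candidate listeners to migrate to; populated from userspace with fds. */
struct {
	__uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
	__uint(max_entries, 16);
	__type(key, __u32);
	__type(value, __u64);
} new_listeners SEC(".maps");

SEC("sk_reuseport/migrate")
int select_or_migrate(struct sk_reuseport_md *md)
{
	__u32 key = 0;

	/* On success, selected_sk is set and SK_PASS migrates to it. */
	if (!bpf_sk_select_reuseport(md, &new_listeners, &key, 0))
		return SK_PASS;

	/* SK_PASS without a selected socket falls back to the random
	 * selection; SK_DROP would cancel the migration instead.
	 */
	return SK_PASS;
}

char _license[] SEC("license") = "GPL";
```
Userspace would fill new_listeners with listener fds via bpf_map_update_elem() before closing the old listener.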
On Wed, Dec 02, 2020 at 11:19:02AM -0800, Martin KaFai Lau wrote:
> On Tue, Dec 01, 2020 at 06:04:50PM -0800, Andrii Nakryiko wrote:
> > On Tue, Dec 1, 2020 at 6:49 AM Kuniyuki Iwashima
> > wrote:
> > >
> > > This commit adds new bpf_attach_type for BPF_PROG_TY
or duplicated code (review comment addressed)
>
> v4: Removing logic to pass struct sock and struct tcp_sock together (review
> comment addressed)
nit. A short line even for cover letter.
Acked-by: Martin KaFai Lau
socket
> similar to what's done by lsof, ss, netstat or fuser. Potentially, this
> information could be used from a cgroup_skb/*gress hook to try to
> associate network traffic with processes.
>
> The test makes sure that a socket it created is tagged with prog_tests's
> pid.
Acked-by: Martin KaFai Lau
> @@ -1007,6 +1013,13 @@ static void test_bpf_sk_storage_get(void)
> "map value wasn't set correctly (expected %d, got %d, err=%d)\n",
> getpid(), val, err);
The failure of this CHECK here should "goto close_socket;" now.
Others LGTM.
Ack
On Thu, Dec 03, 2020 at 11:16:08PM +0900, Kuniyuki Iwashima wrote:
> From: Martin KaFai Lau
> Date: Wed, 2 Dec 2020 20:24:02 -0800
> > On Wed, Dec 02, 2020 at 11:19:02AM -0800, Martin KaFai Lau wrote:
> > > On Tue, Dec 01, 2020 at 06:04:50PM -0800, Andrii Nakryiko wrote:
> https://lore.kernel.org/netdev/20201119001154.kapwihc2plp4f...@kafai-mbp.dhcp.thefacebook.com/
> Suggested-by: Martin KaFai Lau
> Signed-off-by: Kuniyuki Iwashima
> ---
> include/uapi/linux/bpf.h | 8
> net/core/filter.c | 12 +++-
> tools/include
On Tue, Dec 01, 2020 at 11:44:08PM +0900, Kuniyuki Iwashima wrote:
> This patch is a preparation patch to migrate incoming connections in the
> later commits and adds a field (num_closed_socks) to the struct
> sock_reuseport to keep TCP_CLOSE sockets in the reuseport group.
>
> When we close a lis
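A hedged sketch of the data-structure change being described: TCP_CLOSE sockets stay in the reuseport group, tracked by the new counter. The field layout and comments are illustrative (the sketch assumes closed sockets share socks[] and are packed from the tail, with listeners as a contiguous prefix), not the exact patch.
```
struct sock_reuseport {
	struct rcu_head		rcu;

	u16			max_socks;		/* length of socks[] */
	u16			num_socks;		/* listening sockets, packed from index 0 */
	u16			num_closed_socks;	/* TCP_CLOSE sockets kept for migration,
							 * packed from the tail of socks[]
							 */
	unsigned int		reuseport_id;
	unsigned int		bind_inany:1;
	struct bpf_prog __rcu	*prog;			/* optional BPF sock selector */
	struct sock		*socks[];		/* all sockets in the group */
};
```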
On Tue, Dec 01, 2020 at 11:44:10PM +0900, Kuniyuki Iwashima wrote:
[ ... ]
> diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
> index fd133516ac0e..60d7c1f28809 100644
> --- a/net/core/sock_reuseport.c
> +++ b/net/core/sock_reuseport.c
> @@ -216,9 +216,11 @@ int reuseport_add_sock
On Tue, Dec 01, 2020 at 11:44:18PM +0900, Kuniyuki Iwashima wrote:
> This patch adds a test for BPF_SK_REUSEPORT_SELECT_OR_MIGRATE.
>
> Reviewed-by: Benjamin Herrenschmidt
> Signed-off-by: Kuniyuki Iwashima
> ---
> .../bpf/prog_tests/migrate_reuseport.c| 164 ++
> .../bp
On Sun, Dec 06, 2020 at 01:03:07AM +0900, Kuniyuki Iwashima wrote:
> From: Martin KaFai Lau
> Date: Fri, 4 Dec 2020 17:42:41 -0800
> > On Tue, Dec 01, 2020 at 11:44:10PM +0900, Kuniyuki Iwashima wrote:
> > [ ... ]
> > > diff --git a/net/core/sock_reuseport.c
On Thu, Dec 03, 2020 at 11:14:24PM +0900, Kuniyuki Iwashima wrote:
> From: Eric Dumazet
> Date: Tue, 1 Dec 2020 16:25:51 +0100
> > On 12/1/20 3:44 PM, Kuniyuki Iwashima wrote:
> > > This patch lets reuseport_detach_sock() return a pointer of struct sock,
> > > which is used only by inet_unhash
On Tue, Dec 01, 2020 at 11:44:10PM +0900, Kuniyuki Iwashima wrote:
> @@ -242,8 +244,12 @@ void reuseport_detach_sock(struct sock *sk)
>
> reuse->num_socks--;
> reuse->socks[i] = reuse->socks[reuse->num_socks];
> + prog = rcu_dereference(reuse->prog);
>
>
On Tue, Dec 08, 2020 at 03:31:34PM +0900, Kuniyuki Iwashima wrote:
> From: Martin KaFai Lau
> Date: Mon, 7 Dec 2020 12:33:15 -0800
> > On Thu, Dec 03, 2020 at 11:14:24PM +0900, Kuniyuki Iwashima wrote:
> > > From: Eric Dumazet
> > > Date: Tue, 1 Dec 2020 16:
On Tue, Dec 08, 2020 at 03:27:14PM +0900, Kuniyuki Iwashima wrote:
> From: Martin KaFai Lau
> Date: Mon, 7 Dec 2020 12:14:38 -0800
> > On Sun, Dec 06, 2020 at 01:03:07AM +0900, Kuniyuki Iwashima wrote:
> > > From: Martin KaFai Lau
> > > Date: Fri, 4 Dec 2020
On Tue, Dec 08, 2020 at 05:17:48PM +0900, Kuniyuki Iwashima wrote:
> From: Martin KaFai Lau
> Date: Mon, 7 Dec 2020 23:34:41 -0800
> > On Tue, Dec 08, 2020 at 03:31:34PM +0900, Kuniyuki Iwashima wrote:
> > > From: Martin KaFai Lau
> > > Date: Mon, 7 Dec 2020
On Tue, Nov 17, 2020 at 06:40:18PM +0900, Kuniyuki Iwashima wrote:
> This patch lets reuseport_detach_sock() return a pointer of struct sock,
> which is used only by inet_unhash(). If it is not NULL,
> inet_csk_reqsk_queue_migrate() migrates TCP_ESTABLISHED/TCP_SYN_RECV
> sockets from the closing l
On Tue, Nov 17, 2020 at 06:40:21PM +0900, Kuniyuki Iwashima wrote:
> We will call sock_reuseport.prog for socket migration in the next commit,
> so the eBPF program has to know which listener is closing in order to
> select the new listener.
>
> Currently, we can get a unique ID for each listener
On Tue, Nov 17, 2020 at 06:40:22PM +0900, Kuniyuki Iwashima wrote:
> This patch makes it possible to select a new listener for socket migration
> by eBPF.
>
> The noteworthy point is that we select a listening socket in
> reuseport_detach_sock() and reuseport_select_sock(), but we do not have
> st
On Tue, Nov 17, 2020 at 06:40:15PM +0900, Kuniyuki Iwashima wrote:
> The SO_REUSEPORT option allows sockets to listen on the same port and to
> accept connections evenly. However, there is a defect in the current
> implementation. When a SYN packet is received, the connection is tied to a
> listeni
> In order to do this a new helper
> wrapping sock_from_file is added.
>
> This is useful to tracing programs but also other program types
> inheriting this set of helpers such as iterators or LSM programs.
Acked-by: Martin KaFai Lau
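A hedged sketch of how a sock_from_file-style helper can be consumed from a task/file iterator to tag each socket with an owning tgid, in the spirit of the use case described in this thread. The map name, value layout, and program name are assumptions for illustration.
```
// SPDX-License-Identifier: GPL-2.0
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* Per-socket storage holding the tgid of a task that owns the socket. */
struct {
	__uint(type, BPF_MAP_TYPE_SK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, __u64);
} sk_owner_tgid SEC(".maps");

SEC("iter/task_file")
int tag_socket_owner(struct bpf_iter__task_file *ctx)
{
	struct task_struct *task = ctx->task;
	struct file *file = ctx->file;
	struct socket *sock;
	struct sock *sk;
	__u64 *tgid;

	if (!task || !file)
		return 0;

	sock = bpf_sock_from_file(file);	/* NULL if not a socket file */
	if (!sock)
		return 0;

	sk = sock->sk;
	if (!sk)
		return 0;

	tgid = bpf_sk_storage_get(&sk_owner_tgid, sk, 0,
				  BPF_SK_STORAGE_GET_F_CREATE);
	if (tgid)
		*tgid = task->tgid;

	return 0;
}

char _license[] SEC("license") = "GPL";
```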
least one more test on the tcp iter is needed.
Other than that,
Acked-by: Martin KaFai Lau
re many entries and seq->op->stop() is called (due to
seq_has_overflowed()). It is possible that not all of the entries will be
iterated (and deleted). However, I think it is a more generic issue in
resuming the iteration and not specific to this series.
Acked-by: Martin KaFai Lau
On Thu, Nov 19, 2020 at 05:26:54PM +0100, Florent Revest wrote:
> From: Florent Revest
>
> The eBPF program iterates over all files and tasks. For all socket
> files, it stores the tgid of the last task it encountered with a handle
> to that socket. This is a heuristic for finding the "owner" of
On Fri, Nov 20, 2020 at 07:09:22AM +0900, Kuniyuki Iwashima wrote:
> From: Martin KaFai Lau
> Date: Wed, 18 Nov 2020 15:50:17 -0800
> > On Tue, Nov 17, 2020 at 06:40:18PM +0900, Kuniyuki Iwashima wrote:
> > > This patch lets reuseport_detach_sock() return a pointer of struct
On Fri, Nov 20, 2020 at 07:17:49AM +0900, Kuniyuki Iwashima wrote:
> From: Martin KaFai Lau
> Date: Wed, 18 Nov 2020 17:49:13 -0800
> > On Tue, Nov 17, 2020 at 06:40:15PM +0900, Kuniyuki Iwashima wrote:
> > > The SO_REUSEPORT option allows sockets to listen on
On Thu, Nov 19, 2020 at 03:22:39PM -0800, Andrii Nakryiko wrote:
> __module_address() needs to be called with preemption disabled or with
> module_mutex taken. preempt_disable() is enough for read-only uses, which is
> what this fix does.
Acked-by: Martin KaFai Lau
On Thu, Nov 19, 2020 at 03:22:40PM -0800, Andrii Nakryiko wrote:
[ ... ]
> +int btf__get_from_id(__u32 id, struct btf **btf)
> +{
> + struct btf *res;
> + int btf_fd;
> +
> + *btf = NULL;
> + btf_fd = bpf_btf_get_fd_by_id(id);
> + if (btf_fd < 0)
> + return 0;
It sh
Fs.
Acked-by: Martin KaFai Lau
On Thu, Nov 19, 2020 at 03:22:42PM -0800, Andrii Nakryiko wrote:
[ ... ]
> +static int load_module_btfs(struct bpf_object *obj)
> +{
> + struct bpf_btf_info info;
> + struct module_btf *mod_btf;
> + struct btf *btf;
> + char name[64];
> + __u32 id, len;
> + int err, fd;
> +
> will be auto-loaded by
> test_progs test runner and expected by some of selftests to be present and
> loaded.
Acked-by: Martin KaFai Lau
On Thu, Nov 19, 2020 at 03:22:44PM -0800, Andrii Nakryiko wrote:
> Add a self-tests validating libbpf is able to perform CO-RE relocations
> against the type defined in kernel module BTF.
Acked-by: Martin KaFai Lau
On Thu, Nov 19, 2020 at 03:06:11PM +, Daniel T. Lee wrote:
[ ... ]
> static int run_bpf_prog(char *prog, int cg_id)
> {
> - int map_fd;
> - int rc = 0;
> + struct hbm_queue_stats qstats = {0};
> + char cg_dir[100], cg_pin_path[100];
> + struct bpf_link *link = NULL;
>
On Fri, Nov 20, 2020 at 06:34:05PM -0800, Martin KaFai Lau wrote:
> On Thu, Nov 19, 2020 at 03:06:11PM +, Daniel T. Lee wrote:
> [ ... ]
>
> > static int run_bpf_prog(char *prog, int cg_id)
> > {
> > - int map_fd;
> > - int rc = 0;
> >
On Sat, Nov 21, 2020 at 07:13:22PM +0900, Kuniyuki Iwashima wrote:
> From: Martin KaFai Lau
> Date: Thu, 19 Nov 2020 17:53:46 -0800
> > On Fri, Nov 20, 2020 at 07:09:22AM +0900, Kuniyuki Iwashima wrote:
> > > From: Martin KaFai Lau
> > > Date: Wed, 18 Nov 2020
On Mon, Nov 09, 2020 at 01:00:21PM -0800, Andrii Nakryiko wrote:
> Allocate ID for vmlinux BTF. This makes it visible when iterating over all BTF
> objects in the system. To allow distinguishing vmlinux BTF (and later kernel
> module BTF) from user-provided BTFs, expose extra kernel_btf flag, as we
On Mon, Nov 09, 2020 at 11:00:15AM +0800, Hangbin Liu wrote:
> On Fri, Nov 06, 2020 at 06:15:44PM -0800, Martin KaFai Lau wrote:
> > > - if (iph->nexthdr == 58 /* NEXTHDR_ICMP */) {
> > Same here. Can this check be kept?
>
> Hi Martin,
>
> I'm OK
statistics ---
> 3 packets transmitted, 3 received, 0% packet loss, time 47ms
> rtt min/avg/max/mdev = 0.030/0.048/0.067/0.016 ms
> PASS: ip6ip6tnl
> ```
>
> v3:
> Add back ICMP check as Martin suggested.
>
> v2: Keep ip6ip6 section in test_tunnel_kern.c.
This should be for bpf-next.
Acked-by: Martin KaFai Lau
On Tue, Nov 10, 2020 at 11:01:12PM +0100, KP Singh wrote:
> On Mon, Nov 9, 2020 at 9:32 PM John Fastabend
> wrote:
> >
> > Andrii Nakryiko wrote:
> > > On Fri, Nov 6, 2020 at 5:52 PM Martin KaFai Lau wrote:
> > > >
> > > > On Fri, Nov
On Tue, Nov 10, 2020 at 03:53:13PM -0800, Andrii Nakryiko wrote:
> On Tue, Nov 10, 2020 at 3:43 PM Martin KaFai Lau wrote:
> >
> > On Tue, Nov 10, 2020 at 11:01:12PM +0100, KP Singh wrote:
> > > On Mon, Nov 9, 2020 at 9:32 PM John Fastabend
> > > wrote:
>
On Tue, Nov 10, 2020 at 04:17:06PM -0800, Andrii Nakryiko wrote:
> On Tue, Nov 10, 2020 at 4:07 PM Martin KaFai Lau wrote:
> >
> > On Tue, Nov 10, 2020 at 03:53:13PM -0800, Andrii Nakryiko wrote:
> > > On Tue, Nov 10, 2020 at 3:43 PM Martin KaFai Lau wrote:
> > &g
some of the function prefix from sk_storage to bpf_sk_storage
- Use prefix check instead of substr check
Martin KaFai Lau (4):
bpf: Folding omem_charge() into sk_storage_charge()
bpf: Rename some functions in bpf_sk_storage
bpf: Allow using bpf_sk_storage in FENTRY/FEXIT/RAW_TP
bpf: selftest
allows tracing a kernel function.
Acked-by: Song Liu
Signed-off-by: Martin KaFai Lau
---
include/net/bpf_sk_storage.h | 2 +
kernel/trace/bpf_trace.c | 5 +++
net/core/bpf_sk_storage.c| 74
3 files changed, 81 insertions(+)
diff --git a/include/net/bpf
sk_storage_charge() is the only user of omem_charge().
This patch simplifies it by folding omem_charge() into
sk_storage_charge().
Acked-by: Song Liu
Signed-off-by: Martin KaFai Lau
---
net/core/bpf_sk_storage.c | 23 ++-
1 file changed, 10 insertions(+), 13 deletions
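A hedged sketch of what the folded charge path looks like, following the sock_kmalloc()-style optmem check; the function name comes from the patch title, but the body here only approximates the quoted diff.
```
static int sk_storage_charge(struct bpf_local_storage_map *smap,
			     void *owner, u32 size)
{
	struct sock *sk = (struct sock *)owner;

	/* Same bound as sock_kmalloc(): the optmem accounting is done
	 * inline now that sk_storage_charge() is omem_charge()'s only caller.
	 */
	if (size <= sysctl_optmem_max &&
	    atomic_read(&sk->sk_omem_alloc) + size < sysctl_optmem_max) {
		atomic_add(size, &sk->sk_omem_alloc);
		return 0;
	}

	return -ENOMEM;
}
```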
Rename some of the functions currently prefixed with sk_storage
to bpf_sk_storage. That will make the next patch have fewer
prefix checks and also bring bpf_sk_storage.c to a more
consistent function naming.
Signed-off-by: Martin KaFai Lau
---
net/core/bpf_sk_storage.c | 38
. It ensures
this bpf program cannot load.
Signed-off-by: Martin KaFai Lau
---
.../bpf/prog_tests/sk_storage_tracing.c | 135 ++
.../bpf/progs/test_sk_storage_trace_itself.c | 29
.../bpf/progs/test_sk_storage_tracing.c | 95
3 files changed, 259 inser
On Sat, Nov 14, 2020 at 05:17:20PM -0800, Jakub Kicinski wrote:
> On Thu, 12 Nov 2020 13:13:13 -0800 Martin KaFai Lau wrote:
> > This patch adds bpf_sk_storage_get_tracing_proto and
> > bpf_sk_storage_delete_tracing_proto. They will check
> > in runtime that the helpers ca
On Mon, Nov 16, 2020 at 10:00:04AM -0800, Jakub Kicinski wrote:
> On Mon, 16 Nov 2020 09:37:34 -0800 Martin KaFai Lau wrote:
> > On Sat, Nov 14, 2020 at 05:17:20PM -0800, Jakub Kicinski wrote:
> > > On Thu, 12 Nov 2020 13:13:13 -0800 Martin KaFai Lau wrote:
>
On Mon, Nov 16, 2020 at 10:43:40AM -0800, Jakub Kicinski wrote:
> On Mon, 16 Nov 2020 10:37:49 -0800 Martin KaFai Lau wrote:
> > On Mon, Nov 16, 2020 at 10:00:04AM -0800, Jakub Kicinski wrote:
> > > Locks that can run in any context but preempt disabled or softirq
> > >
orage in FENTRY/FEXIT/RAW_TP")
Suggested-by: Jakub Kicinski
Signed-off-by: Martin KaFai Lau
---
net/core/bpf_sk_storage.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
index 359908a7d3c1..a32037daa933 100644
---
On Mon, Nov 16, 2020 at 07:37:52PM +0100, Florian Lehner wrote:
> bpf handlers for perf events other than tracepoints, kprobes or uprobes
> are attached to the overflow_handler of the perf event.
>
> Perf events of type software/dummy are placeholder events. So when
> attaching a bpf handle to an
ns():
> EgressLogByRemoteEndpoint 95.40ns 10.48M
Acked-by: Martin KaFai Lau
On Tue, Nov 17, 2020 at 02:56:36PM +, Daniel T. Lee wrote:
> Under the samples/bpf directory, similar tracing helpers are
> fragmented around. To keep consistent of tracing programs, this commit
> moves the helper and define locations to increase the reuse of each
> helper function.
>
> Signed
On Tue, Nov 17, 2020 at 02:56:37PM +, Daniel T. Lee wrote:
[ ... ]
> diff --git a/samples/bpf/hbm.c b/samples/bpf/hbm.c
> index b9f9f771dd81..008bc635ad9b 100644
> --- a/samples/bpf/hbm.c
> +++ b/samples/bpf/hbm.c
> @@ -46,7 +46,6 @@
> #include
> #include
>
> -#include "bpf_load.h"
> #i
On Tue, Nov 17, 2020 at 02:56:38PM +, Daniel T. Lee wrote:
[ ... ]
> diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
> index 01449d767122..7a643595ac6c 100644
> --- a/samples/bpf/Makefile
> +++ b/samples/bpf/Makefile
> @@ -82,7 +82,7 @@ test_overhead-objs := bpf_load.o test_overhead_u
On Tue, Nov 17, 2020 at 02:56:39PM +, Daniel T. Lee wrote:
> This commit refactors the existing kprobe program with libbpf bpf
> loader. To attach bpf program, this uses generic bpf_program__attach()
> approach rather than using bpf_load's load_bpf_file().
>
> To attach bpf to perf_event, inst
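A hedged sketch of the generic libbpf open/load/attach flow that such a conversion moves to, replacing bpf_load's load_bpf_file(); the object file name is a placeholder.
```
#include <stdio.h>
#include <bpf/libbpf.h>

int main(void)
{
	struct bpf_program *prog;
	struct bpf_object *obj;
	struct bpf_link *link;
	int err;

	obj = bpf_object__open_file("kprobe_example.bpf.o", NULL);
	if (libbpf_get_error(obj))
		return 1;

	err = bpf_object__load(obj);
	if (err)
		goto out;

	/* The SEC() name of each program tells libbpf how to attach it. */
	bpf_object__for_each_program(prog, obj) {
		link = bpf_program__attach(prog);
		if (libbpf_get_error(link)) {
			fprintf(stderr, "failed to attach %s\n",
				bpf_program__name(prog));
			err = -1;
			goto out;
		}
	}

	/* ... run the workload, read maps / trace pipe, etc. ... */
out:
	bpf_object__close(obj);
	return err ? 1 : 0;
}
```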
On Mon, Dec 21, 2020 at 02:22:41PM -0800, Song Liu wrote:
> On Thu, Dec 17, 2020 at 9:24 AM Stanislav Fomichev wrote:
> >
> > When we attach a bpf program to cgroup/getsockopt any other getsockopt()
> > syscall starts incurring kzalloc/kfree cost. While, in general, it's
> > not an issue, sometime
On Tue, Dec 22, 2020 at 07:09:33PM -0800, s...@google.com wrote:
> On 12/22, Martin KaFai Lau wrote:
> > On Thu, Dec 17, 2020 at 09:23:23AM -0800, Stanislav Fomichev wrote:
> > > When we attach a bpf program to cgroup/getsockopt any other getsockopt()
> > > syscall sta
On Thu, Dec 31, 2020 at 12:14:13PM -0800, s...@google.com wrote:
> On 12/30, Martin KaFai Lau wrote:
> > On Mon, Dec 21, 2020 at 02:22:41PM -0800, Song Liu wrote:
> > > On Thu, Dec 17, 2020 at 9:24 AM Stanislav Fomichev
> > wrote:
> > > >
> > > > W
On Mon, Jan 04, 2021 at 02:14:53PM -0800, Stanislav Fomichev wrote:
> When we attach a bpf program to cgroup/getsockopt any other getsockopt()
> syscall starts incurring kzalloc/kfree cost. While, in general, it's
> not an issue, sometimes it is, like in the case of TCP_ZEROCOPY_RECEIVE.
> TCP_ZERO
On Tue, Jan 05, 2021 at 02:45:30PM +, Sean Young wrote:
> clang supports arbitrary length ints using the _ExtInt extension. This
> can be useful to hold very large values, e.g. 256 bit or 512 bit types.
>
> Larger types (e.g. 1024 bits) are possible but I am unaware of a use
> case for these.
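A tiny illustration of the extension itself; the 256-bit width is just an example, and newer compilers spell the same feature _BitInt (C23).
```
/* Minimal illustration of clang's _ExtInt: a fixed 256-bit unsigned type. */
typedef unsigned _ExtInt(256) u256;

static u256 add_mod_2_256(u256 a, u256 b)
{
	/* Wraps modulo 2^256, like any fixed-width unsigned type. */
	return a + b;
}
```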
On Tue, Jan 05, 2021 at 07:20:47AM -0800, menglong8.d...@gmail.com wrote:
> From: Menglong Dong
>
> 'unistd.h' included in 'selftests/bpf/prog_tests/test_lsm.c' is
> duplicated.
It is for bpf-next. Please put a proper tag next time.
Acked-by: Martin KaFai Lau
On Tue, Jan 05, 2021 at 01:43:50PM -0800, Stanislav Fomichev wrote:
> Add custom implementation of getsockopt hook for TCP_ZEROCOPY_RECEIVE.
> We skip generic hooks for TCP_ZEROCOPY_RECEIVE and have a custom
> call in do_tcp_getsockopt using the on-stack data. This removes
> 3% overhead for locking
> but since compilation
> of net.c picks up system headers the problem can recur.
>
> Dropping #include resolves the issue and it is
> not needed for compilation anyhow.
Acked-by: Martin KaFai Lau
On Wed, Jan 06, 2021 at 02:45:56PM -0800, s...@google.com wrote:
> On 01/06, Martin KaFai Lau wrote:
> > On Tue, Jan 05, 2021 at 01:43:50PM -0800, Stanislav Fomichev wrote:
> > > Add custom implementation of getsockopt hook for TCP_ZEROCOPY_RECEIVE.
> > >
ch applied:
> 0.52% 0.12% tcp_mmap [kernel.kallsyms] [k]
> __cgroup_bpf_run_filter_getsockopt_kern
>
> Signed-off-by: Stanislav Fomichev
> Cc: Martin KaFai Lau
> Cc: Song Liu
> Cc: Eric Dumazet
> ---
> include/linux/bpf-cgroup.h
--0.81%--__kmalloc
>
> With the patch applied:
> 0.52% 0.12% tcp_mmap [kernel.kallsyms] [k]
> __cgroup_bpf_run_filter_getsockopt_kern
>
> Signed-off-by: Stanislav Fomichev
> Cc: Martin KaFai Lau
> Cc: Song Liu
> Cc: Eric Dumaz
.kallsyms] [k]
> __cgroup_bpf_run_filter_getsockopt
> |
> --3.30%--__cgroup_bpf_run_filter_getsockopt
> |
> --0.81%--__kmalloc
>
> Signed-off-by: Stanislav Fomichev
> Cc: Martin KaFai Lau
> Cc: So
On Fri, Jan 08, 2021 at 03:19:47PM -0800, Song Liu wrote:
[ ... ]
> diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
> index dd5aedee99e73..9bd47ad2b26f1 100644
> --- a/kernel/bpf/bpf_local_storage.c
> +++ b/kernel/bpf/bpf_local_storage.c
> @@ -140,17 +140,18 @@ static
On Mon, Jan 11, 2021 at 10:35:43PM +0100, KP Singh wrote:
> On Mon, Jan 11, 2021 at 7:57 PM Martin KaFai Lau wrote:
> >
> > On Fri, Jan 08, 2021 at 03:19:47PM -0800, Song Liu wrote:
> >
> > [ ... ]
> >
> > > diff --git a/kernel/bpf/bpf_local_storage
On Mon, Jan 11, 2021 at 11:47:38AM -0800, Stanislav Fomichev wrote:
> optlen == 0 indicates that the kernel should ignore BPF buffer
> and use the original one from the user. We, however, forget
> to free the temporary buffer that we've allocated for BPF.
>
> Reported-
On Mon, Jan 11, 2021 at 02:38:02PM -0800, Stanislav Fomichev wrote:
> On Mon, Jan 11, 2021 at 2:32 PM Martin KaFai Lau wrote:
> >
> > On Mon, Jan 11, 2021 at 11:47:38AM -0800, Stanislav Fomichev wrote:
> > > optlen == 0 indicates that the kernel should ignore BPF buffer
&g
On Tue, Jan 12, 2021 at 08:28:29AM -0800, Stanislav Fomichev wrote:
> optlen == 0 indicates that the kernel should ignore BPF buffer
> and use the original one from the user. We, however, forget
> to free the temporary buffer that we've allocated for BPF.
Acked-by: Martin KaFai Lau
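A hedged sketch of the control-flow shape of this class of fix: the optlen == 0 early exit has to share the same cleanup path as every other exit. The function and helper names here are illustrative, not the exact kernel code.
```
static int run_getsockopt_hook(void __user *user_optval, int *optlen)
{
	void *kbuf;
	int ret;

	kbuf = kzalloc(*optlen, GFP_USER);	/* temporary BPF buffer */
	if (!kbuf)
		return -ENOMEM;

	ret = run_bpf_progs(kbuf, optlen);	/* may set *optlen to 0 */
	if (ret < 0)
		goto out;

	if (*optlen == 0)
		/* BPF asked us to keep the user's original buffer, but the
		 * temporary buffer must still be released on this path.
		 */
		goto out;

	if (copy_to_user(user_optval, kbuf, *optlen))
		ret = -EFAULT;
out:
	kfree(kbuf);
	return ret;
}
```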
On Mon, Jan 11, 2021 at 03:41:26PM -0800, Song Liu wrote:
>
>
> > On Jan 11, 2021, at 10:56 AM, Martin Lau wrote:
> >
> > On Fri, Jan 08, 2021 at 03:19:47PM -0800, Song Liu wrote:
> >
> > [ ... ]
> >
> >> diff --git a/kernel/bpf/bpf_local_storage.c
> >> b/kernel/bpf/bpf_local_storage.c
> >>
This set enforces NULL check on the new helper return types,
RET_PTR_TO_BTF_ID_OR_NULL and RET_PTR_TO_MEM_OR_BTF_ID_OR_NULL.
Martin KaFai Lau (3):
bpf: Enforce id generation for all may-be-null register type
bpf: selftest: Ensure the return value of bpf_skc_to helpers must be
checked
(sk);
if (!req_sk)
return 0;
/* !tp has not been tested, so verifier should reject. */
return *(__u8 *)tp;
}
Signed-off-by: Martin KaFai Lau
---
tools/testing/selftests/bpf/verifier/sock.c | 25 +
1 file changed, 25 insertions(+)
diff --git a/to
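A hedged reconstruction of the full shape of the pattern the selftest fragment above enforces, as a standalone tc program; the program type and the field read are illustrative. The point is that the bpf_skc_to_*() helpers return may-be-NULL BTF pointers, so the verifier must see a NULL check before any dereference.
```
// SPDX-License-Identifier: GPL-2.0
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

SEC("tc")
int skc_null_check(struct __sk_buff *skb)
{
	struct bpf_sock *sk = skb->sk;
	struct tcp_sock *tp;

	if (!sk)
		return 0;

	tp = bpf_skc_to_tcp_sock(sk);
	if (!tp)	/* required: the helper may return NULL */
		return 0;

	/* Without the check above, this load must be rejected by the verifier. */
	bpf_printk("snd_cwnd=%u", tp->snd_cwnd);
	return 0;
}

char _license[] SEC("license") = "GPL";
```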
This patch tests all pointers returned by bpf_per_cpu_ptr() must be
tested for NULL first before it can be accessed.
This patch adds a subtest "null_check", so it moves the ".data..percpu"
existence check to the very beginning and before doing any subtest.
Signed-off-
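A hedged example of the rule the subtest covers: a pointer from bpf_per_cpu_ptr() may be NULL (e.g. for an invalid CPU) and must be checked before use. The per-CPU ksym and the attach point are examples.
```
// SPDX-License-Identifier: GPL-2.0
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* A per-CPU variable taken from vmlinux BTF (".data..percpu"). */
extern const struct rq runqueues __ksym;

SEC("raw_tp/sys_enter")
int read_percpu(const void *ctx)
{
	const struct rq *rq;

	rq = bpf_per_cpu_ptr(&runqueues, 0);
	if (!rq)	/* required: the helper may return NULL */
		return 0;

	bpf_printk("cpu0 nr_running=%u", rq->nr_running);
	return 0;
}

char _license[] SEC("license") = "GPL";
```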
ixes: af7ec1383361 ("bpf: Add bpf_skc_to_tcp6_sock() helper")
Fixes: eaa6bcb71ef6 ("bpf: Introduce bpf_per_cpu_ptr()")
Cc: Yonghong Song
Cc: Hao Luo
Signed-off-by: Martin KaFai Lau
---
kernel/bpf/verifier.c | 11 +--
1 file changed, 5 insertions(+), 6 deletions(-)
On Tue, Oct 27, 2020 at 06:47:13PM -0700, Alexander Duyck wrote:
> From: Alexander Duyck
>
> Drop the tcp_client/server.py files in favor of using a client and server
> thread within the test case. Specifically we spawn a new thread to play the
> role of the server, and the main testing thread pl
On Thu, Oct 29, 2020 at 09:58:15AM -0700, Alexander Duyck wrote:
[ ... ]
> > > @@ -43,7 +94,9 @@ int verify_result(const struct tcpbpf_globals *result)
> > > EXPECT_EQ(0x80, result->bad_cb_test_rv, PRIu32);
> > > EXPECT_EQ(0, result->good_cb_test_rv, PRIu32);
> > > EXPECT_EQ(1, r
On Sat, Oct 31, 2020 at 11:52:18AM -0700, Alexander Duyck wrote:
> From: Alexander Duyck
>
> Drop the tcp_client/server.py files in favor of using a client and server
> thread within the test case. Specifically we spawn a new thread to play the
The thread comment may be outdated in v2.
> role of
ation becomes the last step of the
> call and then immediately following is the tear down of the test setup.
>
> Signed-off-by: Alexander Duyck
Acked-by: Martin KaFai Lau
> ---
> .../testing/selftests/bpf/prog_tests/tcpbpf_user.c | 114
>
> 1 file
> Then we can clean up the remaining bits such as the one remaining
> CHECK_FAIL at the end of test_tcpbpf_user so that the function only makes
> use of CHECK as needed.
>
> Acked-by: Andrii Nakryiko
> Signed-off-by: Alexander Duyck
Acked-by: Martin KaFai Lau
> ---
> .../testing/sel