ss_callbacks+0x202/0x7c0
[ 57.078962] [] __do_softirq+0xf7/0x3f0
[ 57.085373] [] run_ksoftirqd+0x35/0x70
cannot reuse the filter memory, since it's read-only, so we have to
extend sk_filter with a work_struct
Signed-off-by: Alexei Starovoitov
---
arch/x86/net/bpf_jit_comp.c | 17 -
in
On Wed, Oct 2, 2013 at 9:23 PM, Eric Dumazet wrote:
> On Wed, 2013-10-02 at 20:50 -0700, Alexei Starovoitov wrote:
>> on x86 system with net.core.bpf_jit_enable = 1
>
>> diff --git a/include/linux/filter.h b/include/linux/filter.h
>> index a6ac848..378fa03 100644
>>
On Wed, Oct 2, 2013 at 9:57 PM, Eric Dumazet wrote:
> On Wed, 2013-10-02 at 21:53 -0700, Eric Dumazet wrote:
>> On Wed, 2013-10-02 at 21:44 -0700, Alexei Starovoitov wrote:
>>
>> > I think ifdef CONFIG_X86 is a bit ugly inside struct sk_filter, but
>> > don
cannot reuse the JITed filter memory, since it's read-only,
so use the original bpf insns memory to hold the work_struct;
defer the kfree of sk_filter until the JIT has completed freeing.
Tested on x86_64 and i386.
S
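A minimal kernel-style sketch of the deferral idea above; the struct, field, and function names are illustrative assumptions, not the actual patch:

#include <linux/workqueue.h>
#include <linux/moduleloader.h>
#include <linux/slab.h>

/* Sketch: overlay a work_struct on the writable original-insns area so
 * the read-only JIT image can be freed from process context instead of
 * the softirq path shown in the trace above. */
struct sk_filter_deferred {
	struct work_struct work;	/* held in the old insns memory */
	void *jit_image;		/* read-only JIT image to release */
};

static void bpf_jit_free_deferred(struct work_struct *work)
{
	struct sk_filter_deferred *d =
		container_of(work, struct sk_filter_deferred, work);

	/* process context: safe to make the pages writable again and
	 * release them, e.g. set_memory_rw() + module_memfree() */
	module_memfree(d->jit_image);
	kfree(d);
}

static void sk_filter_free_deferred(struct sk_filter_deferred *d)
{
	INIT_WORK(&d->work, bpf_jit_free_deferred);
	schedule_work(&d->work);	/* kfree runs after JIT teardown */
}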
On Thu, Oct 3, 2013 at 4:02 PM, Eric Dumazet wrote:
> On Thu, 2013-10-03 at 15:47 -0700, Alexei Starovoitov wrote:
>> on x86 system with net.core.bpf_jit_enable = 1
>>
>
>> --- a/net/core/filter.c
>> +++ b/net/core/filter.c
>> @@ -644,7 +644,9 @@ void sk_fil
On Thu, Oct 3, 2013 at 4:07 PM, Eric Dumazet wrote:
> On Thu, 2013-10-03 at 15:47 -0700, Alexei Starovoitov wrote:
>
>> @@ -722,7 +725,8 @@ EXPORT_SYMBOL_GPL(sk_unattached_filter_destroy);
>> int sk_attach_filter(struct sock_fprog *fprog, struct sock *sk)
>> {
>&
On Thu, Oct 3, 2013 at 4:11 PM, Alexei Starovoitov wrote:
> On Thu, Oct 3, 2013 at 4:07 PM, Eric Dumazet wrote:
>> On Thu, 2013-10-03 at 15:47 -0700, Alexei Starovoitov wrote:
>>
>>> @@ -722,7 +725,8 @@ EXPORT_SYMBOL_GPL(sk_unattached_filter_destroy);
>>> int s
On Wed, Jul 20, 2016 at 01:19:51AM +0200, Daniel Borkmann wrote:
> On 07/19/2016 06:34 PM, Alexei Starovoitov wrote:
> >On Tue, Jul 19, 2016 at 01:17:53PM +0200, Daniel Borkmann wrote:
> >>>+ return -EINVAL;
> >>>+
> >>>+ /* Is this a use
hing
> the system, we print a warning on invocation.
>
> It was tested with the tracex7 program on x86-64.
>
> Signed-off-by: Sargun Dhillon
> Cc: Alexei Starovoitov
> Cc: Daniel Borkmann
> ---
> include/uapi/linux/bpf.h | 12
> kern
On Fri, Jul 22, 2016 at 11:53:52AM +0200, Daniel Borkmann wrote:
> On 07/22/2016 04:14 AM, Alexei Starovoitov wrote:
> >On Thu, Jul 21, 2016 at 06:09:17PM -0700, Sargun Dhillon wrote:
> >>This allows user memory to be written to during the course of a kprobe.
> >>It sho
On Fri, Jul 22, 2016 at 05:05:27PM -0700, Sargun Dhillon wrote:
> It was tested with the tracex7 program on x86-64.
it's my fault for starting the tracexN tradition, which turned out to be
cumbersome; let's not continue it. Instead, could you rename it
to something meaningful, like test_probe_write_user?
Ri
memory!",
> + current->comm, task_pid_nr(current));
I think checkpatch should have complained here:
the current->comm line should start under the opening ".
No other nits for this patch :)
Once fixed, feel free to add my Acked-by: Alexei Starovoitov
On Sat, Jul 23, 2016 at 05:44:11PM -0700, Sargun Dhillon wrote:
> This example shows using a kprobe to act as a dnat mechanism to divert
> traffic for arbitrary endpoints. It rewrites the arguments to a syscall
> while they're still in userspace, and before the syscall has a chance
> to copy the arg
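A rough BPF-C sketch of that diversion idea in samples-tree style; the kprobe target, port numbers, and the bpf_htons shim are illustrative assumptions, and bpf_probe_write_user is the helper introduced in this set:

#include <linux/ptrace.h>
#include <linux/in.h>
#include <uapi/linux/bpf.h>
#include "bpf_helpers.h"

#define bpf_htons(x) __builtin_bswap16(x)	/* little-endian host assumed */

/* On entry to sys_connect, rewrite the destination while the sockaddr
 * is still in user memory, before the kernel copies it in. */
SEC("kprobe/sys_connect")
int divert_connect(struct pt_regs *ctx)
{
	struct sockaddr_in sa = {};
	void *uaddr = (void *)PT_REGS_PARM2(ctx);

	if (bpf_probe_read(&sa, sizeof(sa), uaddr))
		return 0;
	if (sa.sin_family != 2 /* AF_INET */ || sa.sin_port != bpf_htons(80))
		return 0;

	sa.sin_port = bpf_htons(8080);			/* divert 80 -> 8080 */
	bpf_probe_write_user(uaddr, &sa, sizeof(sa));	/* write it back */
	return 0;
}

char _license[] SEC("license") = "GPL";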
On Sat, Jul 23, 2016 at 05:39:42PM -0700, Sargun Dhillon wrote:
> The example has been modified to act like a test in the follow-up set. It tests
> for the positive case (did the helper work or not) as opposed to the negative
> case (is the helper able to violate the safety constraints we set
at
> uses it, in one of the intended ways to divert execution.
>
> Thanks to Alexei Starovoitov, and Daniel Borkmann for review, I've made
> changes based on their recommendations.
>
> This helper should be considered experimental, so we print a warning
> to dmesg when it i
On Thu, Sep 15, 2016 at 11:25:10PM +0200, Mickaël Salaün wrote:
> >> Agreed. With this RFC, the Checmate features (i.e. network helpers)
> >> should be able to sit on top of Landlock.
> >
> > I think neither of them should be called fancy names for no technical
> > reason.
> > We will have only o
On Tue, Sep 20, 2016 at 12:49:13AM +0200, Mickaël Salaün wrote:
> Add security access check for cgroup backed FD. The "cgroup.procs" file
> of the corresponding cgroup should be readable to identify the cgroup,
> and writable to prove that the current process can manage this cgroup
> (e.g. through
On Mon, Jun 20, 2016 at 11:38:18AM -0300, Arnaldo Carvalho de Melo wrote:
> On Mon, Jun 20, 2016 at 11:29:13AM +0800, Wangnan (F) wrote:
> > On 2016/6/17 0:48, Arnaldo Carvalho de Melo wrote:
> > >On Thu, Jun 16, 2016 at 08:02:41AM +, Wang Nan wrote:
> > >>With '--dry-run', 'perf record'
On 6/21/16 7:47 AM, Thadeu Lima de Souza Cascardo wrote:
The calling convention is different with ABIv2 and so we'll need changes
in bpf_slow_path_common() and sk_negative_common().
How big would those changes be? Do we know?
How come no one reported this was broken previously? This is the fi
> Signed-off-by: Martin KaFai Lau
> Cc: Alexei Starovoitov
> Cc: Daniel Borkmann
> Cc: Tejun Heo
Acked-by: Alexei Starovoitov
and
> give enough debug info if things did not go well.
>
> Signed-off-by: Martin KaFai Lau
> Cc: Alexei Starovoitov
> Cc: Daniel Borkmann
> Cc: Tejun Heo
> ---
> samples/bpf/Makefile | 3 +
> samples/bpf/bpf_helpers.h |
;sk_cgrp_data), cgrp);
if you'd need to respin the patch for other reasons, please add kdoc
to bpf.h for this new helper, similar to the other helpers,
saying that return values 0 and 1 indicate the cg2 descendant relation
and < 0 indicates an error.
Acked-by: Alexei Starovoitov
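Roughly what such a kdoc could look like, modeled on the neighbouring helper descriptions in uapi/linux/bpf.h (wording is a suggestion, not the committed text):

/* int bpf_skb_under_cgroup(skb, map, index)
 *     Check cgroup2 membership of skb
 *     @skb: pointer to skb
 *     @map: pointer to bpf_map in BPF_MAP_TYPE_CGROUP_ARRAY type
 *     @index: index of the cgroup in the bpf_map
 *     Return:
 *       == 0 skb failed the cgroup2 descendant test
 *       == 1 skb succeeded the cgroup2 descendant test
 *        < 0 error
 */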
On Thu, Sep 22, 2016 at 09:56:47PM +0200, Mickaël Salaün wrote:
> This fixes a pointer leak when an unprivileged eBPF program reads a pointer
> value from the context. Even if is_valid_access() returns a pointer
> type, the eBPF verifier replaces it with UNKNOWN_VALUE. The register
> value containing a
Thanks for the fix.
Acked-by: Alexei Starovoitov
On Sat, Sep 24, 2016 at 02:10:05AM +0530, Naveen N. Rao wrote:
> seccomp_phase1() does not exist anymore. Instead, update sample to use
> __seccomp_filter(). While at it, set max locked memory to unlimited.
>
> Signed-off-by: Naveen N. Rao
Acked-by: Alexei Starovoitov
On Sat, Sep 24, 2016 at 12:33:54AM +0200, Daniel Borkmann wrote:
> On 09/23/2016 10:35 PM, Naveen N. Rao wrote:
> >Tail calls allow JIT'ed eBPF programs to call into other JIT'ed eBPF
> >programs. This can be achieved either by:
> >(1) retaining the stack setup by the first eBPF program and having
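For orientation, a minimal tail-call sketch in samples-tree style (map size, kprobe target, and index are arbitrary):

#include <linux/ptrace.h>
#include <uapi/linux/bpf.h>
#include "bpf_helpers.h"

struct bpf_map_def SEC("maps") jmp_table = {
	.type = BPF_MAP_TYPE_PROG_ARRAY,
	.key_size = sizeof(unsigned int),
	.value_size = sizeof(unsigned int),
	.max_entries = 8,
};

SEC("kprobe/sys_write")
int entry(struct pt_regs *ctx)
{
	/* jump to the program stored at index 0; on success execution
	 * never returns here, which is why the JIT must either retain the
	 * caller's stack setup or tear it down first, as discussed above */
	bpf_tail_call(ctx, &jmp_table, 0);

	/* reached only if slot 0 is empty or the tail call fails */
	return 0;
}

char _license[] SEC("license") = "GPL";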
On Fri, Sep 23, 2016 at 12:49:47PM +, Wang Nan wrote:
> This patch set is the first step to implement features I announced
> in LinuxCon NA 2016. See page 31 of:
>
>
> http://events.linuxfoundation.org/sites/events/files/slides/Performance%20Monitoring%20and%20Analysis%20Using%20perf%20and%2
VALUE.
> >However, this fix is important for future unprivileged eBPF programs
> >which could use pointers in their context.
> >
> >Signed-off-by: Mickaël Salaün
> >Cc: Alexei Starovoitov
> >Cc: Daniel Borkmann
>
> Seems okay to me:
>
> Acked-by: Daniel Borkmann
Acked-by: Alexei Starovoitov
Mickael, please mention [PATCH net-next] in subject next time.
Thanks
On Mon, Sep 26, 2016 at 09:49:30AM +0800, Wangnan (F) wrote:
>
>
> On 2016/9/24 23:16, Alexei Starovoitov wrote:
> >On Fri, Sep 23, 2016 at 12:49:47PM +, Wang Nan wrote:
> >>This patch set is the first step to implement features I announced
> >>in
On Mon, Sep 26, 2016 at 11:14:50AM -0700, Shaohua Li wrote:
> put_cpu_var takes the percpu data, not the data returned from
> get_cpu_var.
>
> This doesn't change the behavior.
>
> Cc: Tejun Heo
> Cc: Alexei Starovoitov
> Signed-off-by: Shaohua Li
Looks good. Nic
On Tue, Sep 27, 2016 at 08:42:41AM -0700, Shaohua Li wrote:
> put_cpu_var takes the percpu data, not the data returned from
> get_cpu_var.
>
> This doesn't change the behavior.
>
> Cc: Tejun Heo
> Cc: Alexei Starovoitov
> Signed-off-by: Shaohua Li
Acked-by: Alexei Starovoitov
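The API detail being fixed, as a small sketch (the percpu variable is made up): get_cpu_var() evaluates to the this-CPU instance, while put_cpu_var() must be handed the percpu variable itself.

#include <linux/percpu.h>

static DEFINE_PER_CPU(int, scratch);

static void use_scratch(void)
{
	int *p = &get_cpu_var(scratch);	/* disables preemption */

	(*p)++;
	put_cpu_var(scratch);	/* pass the variable, not p or *p */
}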
On Sun, Jul 24, 2016 at 06:50:47PM +0100, Colin King wrote:
> From: Colin Ian King
>
> file f needs to be closed, fixes resource leak.
>
> Signed-off-by: Colin Ian King
have been travelling. sorry for delay.
Acked-by: Alexei Starovoitov
On Mon, Aug 01, 2016 at 12:33:30AM -0400, Valdis Kletnieks wrote:
> Building with W=1 generates some 350 lines of warnings of the form:
>
> kernel/bpf/core.c: In function '__bpf_prog_run':
> kernel/bpf/core.c:476:33: warning: initialized field overwritten
> [-Woverride-init]
>[BPF_ALU | BPF_A
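A self-contained illustration of this warning class (not the kernel's actual table): GCC range designators fill in a default that a later entry deliberately overrides, and -Woverride-init flags every such override. Compile with gcc -Woverride-init -c to reproduce.

/* mirrors the shape of the interpreter's goto table */
static const int jumptable[4] = {
	[0 ... 3] = 0,	/* default every slot */
	[2] = 42,	/* warning: initialized field overwritten */
};

int lookup(int op)
{
	return jumptable[op & 3];
}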
On Mon, Aug 01, 2016 at 01:18:43AM -0400, valdis.kletni...@vt.edu wrote:
> On Sun, 31 Jul 2016 21:42:22 -0700, Alexei Starovoitov said:
>
> > and at least 2 other such patches for other files...
> > Is there a single warning where -Woverride-init was useful?
> > May
On Wed, Sep 07, 2016 at 01:27:35PM +0300, Yauheni Kaliuta wrote:
> The patch instruments the various resource limit checks with
> reporting using the infrastructure from the previous patch.
>
> Signed-off-by: Yauheni Kaliuta
> ---
> arch/ia64/kernel/perfmon.c | 4 +++-
>
On 9/7/16 4:46 PM, Omar Sandoval wrote:
From: Omar Sandoval
This is a generally useful data structure, so make it available to
anyone else who might want to use it. It's also a nice cleanup
separating the allocation logic from the rest of the tag handling logic.
The code is behind a new Kconfi
On 9/7/16 5:38 PM, Omar Sandoval wrote:
On Wed, Sep 07, 2016 at 05:01:56PM -0700, Alexei Starovoitov wrote:
On 9/7/16 4:46 PM, Omar Sandoval wrote:
From: Omar Sandoval
This is a generally useful data structure, so make it available to
anyone else who might want to use it. It's also a
place custom checker groups
> * simpler userland API
>
> Signed-off-by: Mickaël Salaün
> Cc: Alexei Starovoitov
> Cc: Andy Lutomirski
> Cc: Daniel Borkmann
> Cc: David S. Miller
> Cc: Kees Cook
> Link:
> https://lkml.kernel.org/r/calcetrwwtiz3kztkegow24-dvhq
onymous inode)
> * replace struct file* with struct path* in map_landlock_handle
> * add BPF protos
> * fix bpf_landlock_cmp_fs_prop_with_struct_file()
>
> Signed-off-by: Mickaël Salaün
> Cc: Alexei Starovoitov
> Cc: Andy Lutomirski
> Cc: Daniel Borkmann
> Cc: David
On Wed, Sep 14, 2016 at 09:24:07AM +0200, Mickaël Salaün wrote:
> This will be useful to support Landlock for the next commits.
>
> Signed-off-by: Mickaël Salaün
> Cc: Alexei Starovoitov
> Cc: Daniel Borkmann
> Cc: Daniel Mack
> Cc: David S. Miller
> Cc: Tejun Heo
On Wed, Sep 14, 2016 at 09:24:14AM +0200, Mickaël Salaün wrote:
> This is a proof of concept to expose optional values that could depend
> on the process's access rights.
>
> There are two dedicated flags: LANDLOCK_FLAG_ACCESS_SKB_READ and
> LANDLOCK_FLAG_ACCESS_SKB_WRITE. Each of them can be activat
On Wed, Sep 14, 2016 at 09:24:15AM +0200, Mickaël Salaün wrote:
> Add a basic sandbox tool to create a process isolated from some part of
> the system. This can depend on the current cgroup.
>
> Example with the current process hierarchy (seccomp):
>
> $ ls /home
> user1
> $ LANDLOCK_ALLOWE
On Thu, Sep 15, 2016 at 01:02:22AM +0200, Mickaël Salaün wrote:
> >
> > I would suggest for the next RFC to do minimal 7 patches up to this point
> > with simple example that demonstrates the use case.
> > I would avoid all unpriv stuff and all of seccomp for the next RFC as well,
> > otherwise I
On Thu, Sep 15, 2016 at 01:22:49AM +0200, Mickaël Salaün wrote:
>
> On 14/09/2016 20:51, Alexei Starovoitov wrote:
> > On Wed, Sep 14, 2016 at 09:23:56AM +0200, Mickaël Salaün wrote:
> >> This new arraymap looks like a set and brings new properties:
> >> * stro
On Wed, Sep 14, 2016 at 06:25:07PM -0700, Andy Lutomirski wrote:
> On Wed, Sep 14, 2016 at 3:11 PM, Mickaël Salaün wrote:
> >
> > On 14/09/2016 20:27, Andy Lutomirski wrote:
> >> On Wed, Sep 14, 2016 at 12:24 AM, Mickaël Salaün wrote:
> >>> Add a new flag CGRP_NO_NEW_PRIVS for each cgroup. This f
On Wed, Sep 14, 2016 at 07:27:08PM -0700, Andy Lutomirski wrote:
> >> >
> >> > This RFC handles both cgroup and seccomp approaches in a similar way. I
> >> > don't see why building on top of cgroup v2 is a problem. Is there
> >> > security issues with delegation?
> >>
> >> What I mean is: cgroup v2
On Wed, Sep 14, 2016 at 09:08:57PM -0700, Andy Lutomirski wrote:
> On Wed, Sep 14, 2016 at 9:00 PM, Alexei Starovoitov
> wrote:
> > On Wed, Sep 14, 2016 at 07:27:08PM -0700, Andy Lutomirski wrote:
> >> >> >
> >> >> > This RFC handle both c
On Wed, Sep 14, 2016 at 09:38:16PM -0700, Andy Lutomirski wrote:
> On Wed, Sep 14, 2016 at 9:31 PM, Alexei Starovoitov
> wrote:
> > On Wed, Sep 14, 2016 at 09:08:57PM -0700, Andy Lutomirski wrote:
> >> On Wed, Sep 14, 2016 at 9:00 PM, Alexei Starovoitov
> >> wrote:
On Mon, Aug 29, 2016 at 02:17:18PM +0200, Peter Zijlstra wrote:
> On Fri, Aug 26, 2016 at 07:31:22PM -0700, Alexei Starovoitov wrote:
> > +static int perf_event_set_bpf_handler(struct perf_event *event, u32
> > prog_fd)
> > +{
> > + struct bpf_prog *pr
an overflow_handler to sw and hw perf_events.
Peter, please review.
Patches 5 and 6 are examples from myself and Brendan.
v1-v2: fixed issues spotted by Peter and Daniel.
Thanks!
Alexei Starovoitov (5):
bpf: support 8-byte metafield access
bpf: introduce BPF_PROG_TYPE_PERF_EVENT program
>prog, since it's
assigned only once before it's accessed.
Signed-off-by: Alexei Starovoitov
---
include/linux/bpf.h| 4 +++
include/linux/perf_event.h | 2 ++
kernel/events/core.c | 85 +-
3 files changed, 90 inserti
e_data without affecting bpf programs.
New fields can be added to the end of struct bpf_perf_event_data
in the future.
Signed-off-by: Alexei Starovoitov
Acked-by: Daniel Borkmann
---
include/linux/perf_event.h | 5
include/uapi/linux/Kbuild | 1 +
include/uapi/linux
YCLES for current process and inherited perf_events to
children
- PERF_COUNT_SW_CPU_CLOCK on all cpus
- PERF_COUNT_SW_CPU_CLOCK for current process
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile | 4 +
samples/bpf/bpf_helpers.h | 2 +
samples/bpf/bpf_load.c
Make sure that BPF_PROG_TYPE_PERF_EVENT programs only use
preallocated hash maps, since doing memory allocation
in overflow_handler can crash depending on where nmi got triggered.
Signed-off-by: Alexei Starovoitov
Acked-by: Daniel Borkmann
---
kernel/bpf/verifier.c | 22
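To make the constraint concrete, a sketch of a map such a program may use, in samples-tree style (names and sizes arbitrary); hash maps are preallocated unless BPF_F_NO_PREALLOC is set:

#include <uapi/linux/bpf.h>
#include "bpf_helpers.h"

/* Preallocated hash map: safe to update from a perf_event program in
 * NMI context, since no allocation happens at update time. */
struct bpf_map_def SEC("maps") counts = {
	.type = BPF_MAP_TYPE_HASH,
	.key_size = sizeof(__u64),
	.value_size = sizeof(__u64),
	.max_entries = 10000,
	.map_flags = 0,	/* BPF_F_NO_PREALLOC here would now be rejected */
};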
From: Brendan Gregg
sample instruction pointer and frequency count in a BPF map
Signed-off-by: Brendan Gregg
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile| 4 +
samples/bpf/sampleip_kern.c | 38 +
samples/bpf/sampleip_user.c | 196
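For context, a user-space sketch of how such samples wire a program to a sampling event; error handling is trimmed and prog_fd is assumed to come from a prior BPF_PROG_LOAD:

#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Open a 99 Hz cpu-clock sampling event on one cpu and attach a
 * BPF_PROG_TYPE_PERF_EVENT program to it. */
static int attach_sample_prog(int prog_fd, int cpu)
{
	struct perf_event_attr attr = {
		.type		= PERF_TYPE_SOFTWARE,
		.size		= sizeof(attr),
		.config		= PERF_COUNT_SW_CPU_CLOCK,
		.sample_freq	= 99,
		.freq		= 1,
	};
	int fd = syscall(__NR_perf_event_open, &attr, -1 /* pid */,
			 cpu, -1 /* group_fd */, 0);

	if (fd < 0)
		return -1;
	if (ioctl(fd, PERF_EVENT_IOC_SET_BPF, prog_fd) ||
	    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0)) {
		close(fd);
		return -1;
	}
	return fd;
}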
d xdp programs.
They check for 4-byte only ctx access before these conditions are hit.
Signed-off-by: Alexei Starovoitov
Acked-by: Daniel Borkmann
---
kernel/bpf/verifier.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
On Thu, Sep 01, 2016 at 10:12:51AM +0200, Peter Zijlstra wrote:
> On Wed, Aug 31, 2016 at 02:50:41PM -0700, Alexei Starovoitov wrote:
> > diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> > index 97bfe62f30d7..dcaaaf3ec8e6 100644
> > --- a/include/linux/p
.
v2->v3: fixed few more minor issues
v1->v2: fixed issues spotted by Peter and Daniel.
Thanks!
Alexei Starovoitov (5):
bpf: support 8-byte metafield access
bpf: introduce BPF_PROG_TYPE_PERF_EVENT program type
bpf: perf_event progs should only use preallocated maps
perf, bpf: add perf
e_data without affecting bpf programs.
New fields can be added to the end of struct bpf_perf_event_data
in the future.
Signed-off-by: Alexei Starovoitov
Acked-by: Daniel Borkmann
---
include/linux/perf_event.h | 5 +++
include/uapi/linux/Kbuild | 1 +
include/uapi/linux
>prog, since it's
assigned only once before it's accessed.
Signed-off-by: Alexei Starovoitov
---
include/linux/bpf.h| 4 +++
include/linux/perf_event.h | 4 +++
kernel/events/core.c | 89 +-
3 files changed, 96 inserti
On Wed, Dec 31, 2014 at 08:38:49PM -0500, kan.li...@intel.com wrote:
>
> Changes since V1:
> - Using work queue to set Rx network flow classification rules and search
>available NET policy object asynchronously.
> - Using RCU lock to replace read-write lock
> - Redo performance test and upd
On Thu, Aug 04, 2016 at 04:28:53PM +0200, Peter Zijlstra wrote:
> On Wed, Aug 03, 2016 at 11:57:05AM -0700, Brendan Gregg wrote:
>
> > As for pmu tracepoints: if I were to instrument it (although I wasn't
> > planning to), I'd put a tracepoint in perf_event_overflow() called
> > "perf:perf_overflo
On Thu, Aug 04, 2016 at 09:13:16PM -0700, Brendan Gregg wrote:
> On Thu, Aug 4, 2016 at 6:43 PM, Alexei Starovoitov
> wrote:
> > On Thu, Aug 04, 2016 at 04:28:53PM +0200, Peter Zijlstra wrote:
> >> On Wed, Aug 03, 2016 at 11:57:05AM -0700, Brendan Gregg wrote:
> >>
On Fri, Aug 05, 2016 at 12:52:09PM +0200, Peter Zijlstra wrote:
> > > > Currently overflow_handler is set at event alloc time. If we start
> > > > changing it on the fly with atomic xchg(), afaik things shouldn't
> > > > break, since each overflow_handler is run to completion and doesn't
> > > > ch
On Thu, Aug 25, 2016 at 12:32:44PM +0200, Mickaël Salaün wrote:
> Add an eBPF function bpf_landlock_cmp_cgroup_beneath(opt, map, map_op)
> to compare the current process cgroup with a cgroup handle, The handle
> can match the current cgroup if it is the same or a child. This allows
> to make condit
On 8/15/17 12:34 PM, Edward Cree wrote:
State of a register doesn't matter if it wasn't read in reaching an exit;
a write screens off all reads downstream of it from all explored_states
upstream of it.
This allows us to prune many more branches; here are some processed insn
counts for some Cil
On Mon, Oct 16, 2017 at 11:18 AM, Richard Weinberger wrote:
> current is never NULL.
>
> Signed-off-by: Richard Weinberger
> ---
> kernel/bpf/helpers.c | 12
> 1 file changed, 12 deletions(-)
>
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 3d24e238221e..e8845adc
On Mon, Oct 16, 2017 at 2:10 PM, Richard Weinberger wrote:
> Am Montag, 16. Oktober 2017, 23:02:06 CEST schrieb Daniel Borkmann:
>> On 10/16/2017 10:55 PM, Richard Weinberger wrote:
>> > Am Montag, 16. Oktober 2017, 22:50:43 CEST schrieb Daniel Borkmann:
>> >>> struct task_struct *task =
On Tue, Oct 17, 2017 at 12:23:13AM +0200, Richard Weinberger wrote:
> Alexei,
>
> Am Dienstag, 17. Oktober 2017, 00:06:08 CEST schrieb Alexei Starovoitov:
> > On Mon, Oct 16, 2017 at 11:18 AM, Richard Weinberger wrote:
> > > current is never NULL.
> > >
> &
for __GFP_NOWARN and using it in bpf is a much cleaner
fix that avoids layering violations.
Acked-by: Alexei Starovoitov
ed-by: Mark Rutland
> Reported-by: Shankara Pailoor
> Reported-by: Richard Weinberger
> Signed-off-by: Daniel Borkmann
> Cc: John Fastabend
Acked-by: Alexei Starovoitov
enly assumed __GFP_NOWARN would work, so
> no changes needed to their actual __alloc_percpu_gfp() calls
> which use the flag already.
>
> Signed-off-by: Daniel Borkmann
Acked-by: Alexei Starovoitov
On Thu, May 25, 2017 at 05:38:26PM -0700, David Daney wrote:
> Since the eBPF machine has 64-bit registers, we only support this in
> 64-bit kernels. As of the writing of this commit log test-bpf is showing:
>
> test_bpf: Summary: 316 PASSED, 0 FAILED, [308/308 JIT'ed]
>
> All current test cas
: Alexei Starovoitov
---
kernel/bpf/arraymap.c| 26 +++---
kernel/events/core.c | 6 +-
kernel/trace/bpf_trace.c | 4 ++--
3 files changed, 14 insertions(+), 22 deletions(-)
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index 5e00b2333c26..55ffa9949128
From: Teng Qin
This commit updates documentation of the bpf_perf_event_output and
bpf_perf_event_read helpers to match their implementation.
Signed-off-by: Teng Qin
Signed-off-by: Alexei Starovoitov
---
include/uapi/linux/bpf.h | 11 +++
tools/include/uapi/linux/bpf.h | 11
v1->v2: address Peter's feedback. Refactor patch 1 to allow attaching
bpf programs to all event types and reading counters from all of them as well
patch 2 - more tests
patch 3 - address Dave's feedback and document bpf_perf_event_read()
and bpf_perf_event_output() properly
Teng Qin (3):
perf, b
(). Refactored the
existing sample to fork individual task on each CPU, attach kprobe to
more controllable function, and more accurately check if each read on
every CPU returned with good value.
Signed-off-by: Teng Qin
Signed-off-by: Alexei Starovoitov
---
samples/bpf/bpf_helpers.h | 3
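A small sketch of the documented read path, samples-style (kprobe target and map size arbitrary); the program reads the counter slot for the current cpu from a BPF_MAP_TYPE_PERF_EVENT_ARRAY that user space populated with perf event fds:

#include <linux/ptrace.h>
#include <uapi/linux/bpf.h>
#include "bpf_helpers.h"

struct bpf_map_def SEC("maps") counters = {
	.type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
	.key_size = sizeof(int),
	.value_size = sizeof(unsigned int),
	.max_entries = 64,	/* one slot per possible cpu */
};

SEC("kprobe/htab_map_get_next_key")
int read_counter(struct pt_regs *ctx)
{
	char fmt[] = "counter %llu\n";
	unsigned long long count =
		bpf_perf_event_read(&counters, bpf_get_smp_processor_id());

	/* per the updated docs, errors come back as negative errno */
	if ((long long)count < 0)
		return 0;
	bpf_trace_printk(fmt, sizeof(fmt), count);
	return 0;
}

char _license[] SEC("license") = "GPL";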
On 5/29/17 2:39 AM, Peter Zijlstra wrote:
Do we want something like the below to replace much of the above?
if (!perf_event_valid_local(event, NULL, cpu))
goto err_out;
Seems to be roughly what you're after, although I suppose @cpu might be
hard to determine a priori,
On 5/30/17 9:51 AM, Peter Zijlstra wrote:
On Tue, May 30, 2017 at 08:52:14AM -0700, Alexei Starovoitov wrote:
+ if (!(event->attach_state & PERF_ATTACH_TASK) &&
+ event->cpu != cpu)
+ return false;
we do if (unlikely(event->oncpu != cpu))
as
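A sketch of the kind of validity check being discussed (the function name comes from the quote above; the body is a guess at the intended semantics):

/* Illustrative only: can this event be read on this cpu, right now? */
static bool perf_event_valid_local(struct perf_event *event, int cpu)
{
	if (event->attach_state & PERF_ATTACH_TASK)
		return event->hw.target == current;

	/* cpu-bound event: only readable on the cpu it runs on */
	return event->oncpu == cpu;
}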
On 10/31/17 6:55 PM, David Miller wrote:
From: Josef Bacik
Date: Tue, 31 Oct 2017 11:45:55 -0400
v1->v2:
- moved things around to make sure that bpf_override_return could really only be
used for an ftrace kprobe.
- killed the special return values from trace_call_bpf.
- renamed pc_modified t
On 10/31/17 8:45 AM, Josef Bacik wrote:
From: Josef Bacik
Error injection is sloppy and very ad-hoc. BPF could fill this niche
perfectly with its kprobe functionality. We could make sure errors are
only triggered in specific call chains that we care about with very
specific situations. Acco
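The shape of such a program, following the tracex7 sample in this series (kprobe target and errno as in that sample):

#include <linux/ptrace.h>
#include <uapi/linux/bpf.h>
#include "bpf_helpers.h"

/* Force open_ctree(), annotated for error injection, to fail with
 * -ENOMEM: the kprobe program overrides the return value and the
 * function body never runs. */
SEC("kprobe/open_ctree")
int override_open_ctree(struct pt_regs *ctx)
{
	unsigned long rc = -12;	/* -ENOMEM */

	bpf_override_return(ctx, rc);
	return 0;
}

char _license[] SEC("license") = "GPL";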
On Wed, Nov 01, 2017 at 09:55:24AM +0100, Peter Zijlstra wrote:
> On Wed, Nov 01, 2017 at 09:27:43AM +0100, Ingo Molnar wrote:
> >
> > * Peter Zijlstra wrote:
> >
> > > On Wed, Nov 01, 2017 at 06:15:54PM +1100, Stephen Rothwell wrote:
> > > > Hi all,
> > > >
> > > > Today's linux-next merge of
injection for all of our code
paths.
Signed-off-by: Josef Bacik
Both bpf and tracing bits look great to me.
Acked-by: Alexei Starovoitov
Bacik
Acked-by: Alexei Starovoitov
+++ b/samples/bpf/test_override_return.sh
@@ -0,0 +1,15 @@
+#!/bin/bash
+
+rm -f testfile.img
+dd if=/dev/zero of=testfile.img bs=1M seek=1000 count=1
+DEVICE=$(losetup --show -f testfile.img)
+mkfs.btrfs -f $DEVICE
+mkdir tmpmnt
+./tracex7 $DEVICE
+if [ $?
On Wed, Nov 01, 2017 at 03:59:48PM +0200, Michael S. Tsirkin wrote:
> On Wed, Nov 01, 2017 at 09:02:03PM +0800, Jason Wang wrote:
> >
> >
> > On 2017年11月01日 00:45, Michael S. Tsirkin wrote:
> > > > +static void __tun_set_steering_ebpf(struct tun_struct *tun,
> > > > +
On 11/2/17 7:54 AM, Roman Gushchin wrote:
+#define DEV_BPF_ACC_MKNOD (1ULL << 0)
+#define DEV_BPF_ACC_READ (1ULL << 1)
+#define DEV_BPF_ACC_WRITE (1ULL << 2)
+
+#define DEV_BPF_DEV_BLOCK (1ULL << 0)
+#define DEV_BPF_DEV_CHAR (1ULL << 1)
+
all macros in bpf.h start with the BPF_ prefix
On Thu, Nov 02, 2017 at 12:05:52PM +0100, Arnd Bergmann wrote:
> The bpf_verifier_ops array is generated dynamically and may be
> empty depending on configuration, which then causes an out
> of bounds access:
>
> kernel/bpf/verifier.c: In function 'bpf_check':
> kernel/bpf/verifier.c:4320:29: error
On Thu, Nov 02, 2017 at 05:14:00PM +0100, Arnd Bergmann wrote:
> On Thu, Nov 2, 2017 at 4:59 PM, Alexei Starovoitov
> wrote:
> > On Thu, Nov 02, 2017 at 12:05:52PM +0100, Arnd Bergmann wrote:
> >> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> >> in
18aaf8a4 ("bpf: move knowledge about post-translation offsets
> > out of verifier")
> > Signed-off-by: Arnd Bergmann
>
> Thanks Arnd! I was hoping to nuke this code before build bots catch up
> to me, didn't work out :)
yeah. Jakub's patches may not make it in time for net-next closing.
so let's use this fix for now.
Acked-by: Alexei Starovoitov
On 11/11/17 4:14 PM, Ingo Molnar wrote:
* Josef Bacik wrote:
On Fri, Nov 10, 2017 at 10:34:59AM +0100, Ingo Molnar wrote:
* Josef Bacik wrote:
@@ -551,6 +578,10 @@ static const struct bpf_func_proto
*kprobe_prog_func_proto(enum bpf_func_id func
return &bpf_get_stackid_pr
On Sun, Nov 12, 2017 at 07:28:24AM +, yupeng0...@gmail.com wrote:
> Add a new type BPF_PROG_TYPE_FTRACE to bpf, let bpf can be attached to
> ftrace. Ftrace passes the function parameters to the bpf prog; the bpf prog
> returns 1 or 0 to indicate whether ftrace can trace this function. The
> major purpose
On Fri, Nov 03, 2017 at 05:52:22PM +0100, Daniel Borkmann wrote:
> On 11/03/2017 03:31 PM, Josef Bacik wrote:
> > On Fri, Nov 03, 2017 at 12:12:13AM +0100, Daniel Borkmann wrote:
> > > Hi Josef,
> > >
> > > one more issue I just noticed, see comment below:
> > >
> > > On 11/02/2017 03:37 PM, Jose
On 11/3/17 3:58 PM, Sandipan Das wrote:
For added security, the layout of some structures can be
randomized by enabling CONFIG_GCC_PLUGIN_RANDSTRUCT. One
such structure is task_struct. To build BPF programs, we
use Clang which does not support this feature. So, if we
attempt to read a field of a
On 11/5/17 2:31 AM, Naveen N. Rao wrote:
Hi Alexei,
Alexei Starovoitov wrote:
On 11/3/17 3:58 PM, Sandipan Das wrote:
For added security, the layout of some structures can be
randomized by enabling CONFIG_GCC_PLUGIN_RANDSTRUCT. One
such structure is task_struct. To build BPF programs, we
use
On Wed, Oct 18, 2017 at 7:22 AM, Daniel Borkmann wrote:
>
> Higher prio imo would be to make the allocation itself faster
> though, I remember we talked about this back in May wrt hashtable,
> but I kind of lost track whether there was an update on this in
> the mean time. ;-)
new percpu allocato
On Thu, Oct 19, 2017 at 03:52:49PM +0100, David Howells wrote:
> From: Chun-Yi Lee
>
> There are some bpf functions that can be used to read kernel memory:
> bpf_probe_read, bpf_probe_write_user and bpf_trace_printk. These allow
> private keys in kernel memory (e.g. the hibernation image signing key)