On Wed, Dec 12, 2018 at 04:45:26PM -0800, David Miller wrote:
> From: Alexei Starovoitov
> Date: Wed, 12 Dec 2018 15:39:10 -0800
>
> > But this approach doesn't scale.
> > We do rebase our trees when we need to fixup or drop patches and
> > at any given p
On Thu, Dec 13, 2018 at 04:37:28PM +1100, Stephen Rothwell wrote:
> Hi Alexei,
>
> On Wed, 12 Dec 2018 20:33:41 -0800 Alexei Starovoitov
> wrote:
> >
> > If Stephen's scripts can recognize SOB anywhere in the log then
> > --signoff can theoretically solve it
On Wed, Dec 12, 2018 at 04:42:37PM -0800, Matt Mullins wrote:
> Distributions build drivers as modules, including network and filesystem
> drivers which export numerous tracepoints. This enables
> bpf(BPF_RAW_TRACEPOINT_OPEN) to attach to those tracepoints.
>
> Signed-off-by: Matt Mullins
> ---
On Thu, Dec 20, 2018 at 01:45:56PM -0600, Kangjie Lu wrote:
> check_reg_arg() may fail and not mark correct data in "env". This
> fix inserts a check that ensures check_reg_arg() is successful, and
> if it is not, the fix stops further operations and returns an error
> upstream.
>
> Signed-off-by
On Sat, Dec 22, 2018 at 03:07:22PM -0800, David Miller wrote:
> From: "Gustavo A. R. Silva"
> Date: Fri, 21 Dec 2018 14:49:01 -0600
>
> > flen is indirectly controlled by user-space, hence leading to
> > a potential exploitation of the Spectre variant 1 vulnerability.
> >
> > This issue was dete
On Sat, Dec 22, 2018 at 08:53:40PM -0600, Gustavo A. R. Silva wrote:
> Hi,
>
> On 12/22/18 8:40 PM, David Miller wrote:
> > From: Alexei Starovoitov
> > Date: Sat, 22 Dec 2018 15:59:54 -0800
> >
> > > On Sat, Dec 22, 2018 at 03:07:22PM -0800, David Mille
On Sat, Dec 22, 2018 at 09:37:02PM -0600, Gustavo A. R. Silva wrote:
>
> Can't we have the case in which the code can be "trained" to read
> perfectly valid values for prog->len for quite a while, making the
> microcode come into place and speculate about:
>
> 1013 if (flen == 0 || flen >
On Sat, Dec 22, 2018 at 11:03:31PM -0600, Gustavo A. R. Silva wrote:
> Alexei,
>
> On 12/22/18 10:12 PM, Alexei Starovoitov wrote:
> > On Sat, Dec 22, 2018 at 09:37:02PM -0600, Gustavo A. R. Silva wrote:
> > >
> > > Can't we have the case in w
On Fri, Dec 21, 2018 at 09:44:44AM -0800, Tim Chen wrote:
> +
> +4. Kernel sandbox attacking kernel
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +The kernel has support for running user-supplied programs within the
> +kernel. Specific rules (such as bounds checking) are enforced on these
> +program
On Tue, Dec 25, 2018 at 01:17:10AM -0600, Kangjie Lu wrote:
> check_reg_arg() may fail. This fix inserts checks for its return value.
> If check_reg_arg() fails, an error is issued.
>
> Signed-off-by: Kangjie Lu
> ---
> kernel/bpf/verifier.c | 15 ---
> 1 file changed, 12 insert
>
> >
> > Another example is __BPF_PROG_RUN_ARRAY(), which also uses
> > preempt_enable_no_resched().
>
> Alexei, I think this code is just wrong.
why 'just wrong'?
> Do you know why it uses
> preempt_enable_no_resched()?
don't recall precisely.
we could be preemptable at the point where macro
On Fri, Oct 19, 2018 at 1:22 AM Peter Zijlstra wrote:
>
> On Thu, Oct 18, 2018 at 10:00:53PM -0700, Alexei Starovoitov wrote:
> > >
> > > >
> > > > Another example is __BPF_PROG_RUN_ARRAY(), which also uses
> > > > preempt_enable_no_resched().
Hi Andrii,
syzbot found UAF in raw_tp cookie series in bpf-next.
Reverting the whole merge
2e244a72cd48 ("Merge branch 'bpf-raw-tracepoint-support-for-bpf-cookie'")
fixes the issue.
Pls take a look.
See C reproducer below. It splats consistently with CONFIG_KASAN=y
Thanks.
On Sun, Mar 24, 2024
ICK_MMAP_LAYOUT)
> void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
> {
> mm->mmap_base = TASK_UNMAPPED_BASE;
> - mm->get_unmapped_area = arch_get_unmapped_area;
> + clear_bit(MMF_TOPDOWN, &mm->flags);
> }
> #endif
Makes sense to me.
Acked-by: Alexei Starovoitov
for the idea and for bpf bits.
On Tue, May 14, 2024 at 12:33 AM Ubisectech Sirius
wrote:
>
> Hello.
> We are the Ubisectech Sirius Team, the vulnerability lab of China ValiantSec.
> Recently, our team discovered an issue in Linux kernel 6.7. Attached to
> the email is a PoC file for the issue.
Jiri,
please take a look.
>
On Thu, Sep 28, 2023 at 6:21 PM Masami Hiramatsu wrote:
>
>
> Thus, what I need is to make fprobe to use function-graph tracer's shadow
> stack and trampoline instead of rethook. This may need to generalize its
> interface so that we can share it between fprobe and function-graph tracer,
> but we
On Tue, May 21, 2024 at 1:49 PM Deepak Gupta wrote:
>
> On Tue, May 21, 2024 at 12:48:16PM +0200, Jiri Olsa wrote:
> >hi,
> >as part of the effort on speeding up uprobes [0], this comes with a
> >return uprobe optimization that uses a syscall instead of the trap
> >on the uretprobe trampoline.
>
> I und
> might_alloc(flags);
>
> - if (unlikely(should_failslab(s, flags)))
> - return NULL;
> + if (static_branch_unlikely(&should_failslab_active)) {
> + if (should_failslab(s, flags))
> + return NULL;
> + }
makes sense.
Acked-by: Alexei Starovoitov
Do you have any microbenchmark numbers before/after this optimization?
On Sat, Jun 1, 2024 at 1:57 PM Vlastimil Babka wrote:
>
> On 5/31/24 6:43 PM, Alexei Starovoitov wrote:
> > On Fri, May 31, 2024 at 2:33 AM Vlastimil Babka wrote:
> >> might_alloc(flags);
> >>
> >> - if (unlikely(should_failslab(s, fl
On Wed, Jun 19, 2024 at 3:49 PM Vlastimil Babka wrote:
>
> When CONFIG_FUNCTION_ERROR_INJECTION is disabled,
> within_error_injection_list() will return false for any address and the
> result of check_non_sleepable_error_inject() denylist is thus redundant.
> The bpf_non_sleepable_error_inject lis
On Tue, Jun 25, 2024 at 7:24 AM Vlastimil Babka wrote:
>
> On 6/20/24 12:49 AM, Vlastimil Babka wrote:
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3874,13 +3874,37 @@ static __always_inline void
> > maybe_wipe_obj_freeptr(struct kmem_cache *s,
> > 0, sizeof(void *));
> >
On Wed, Jun 19, 2024 at 3:49 PM Vlastimil Babka wrote:
>
> Functions marked for error injection can have an associated static key
> that guards the callsite(s) to avoid overhead of calling an empty
> function when no error injection is in progress.
>
> Outside of the error injection framework itse
On Wed, Jul 24, 2024 at 4:40 AM Puranjay Mohan wrote:
>
> Implement bpf_send_signal_pid and bpf_send_signal_tgid helpers which are
> similar to bpf_send_signal_thread and bpf_send_signal helpers
> respectively but can be used to send signals to other threads and
> processes.
Thanks for working on
On Tue, Sep 3, 2024 at 9:33 AM Paul E. McKenney wrote:
>
> diff --git a/include/linux/srcu.h b/include/linux/srcu.h
> index 84daaa33ea0ab..4ba96e2cfa405 100644
> --- a/include/linux/srcu.h
> +++ b/include/linux/srcu.h
...
> +static inline int srcu_read_lock_lite(struct srcu_struct *ssp)
> __acqu
On Mon, Sep 9, 2024 at 5:55 PM Daniel Xu wrote:
>
> Right now there exists prog produce / userspace consume and userspace
> produce / prog consume support. But it is also useful to have prog
> produce / prog consume.
>
> For example, we want to track the latency overhead of cpumap in
> production.
On Tue, Apr 20, 2021 at 5:35 AM Florent Revest wrote:
>
> On Tue, Apr 20, 2021 at 12:54 AM Alexei Starovoitov
> wrote:
> >
> > On Mon, Apr 19, 2021 at 05:52:39PM +0200, Florent Revest wrote:
> > > This type provides the guarantee that an argument is going to be a co
On Wed, Dec 10, 2014 at 3:29 PM, Fengguang Wu wrote:
> Greetings,
>
> 0day kernel testing robot got the below dmesg and the first bad commit is
>
> net: sock: allow eBPF programs to be attached to sockets
> [init] Kernel was tainted on startup. Will ignore flags that are already set.
> [init]
("net: sock: allow eBPF programs to be attached to sockets")
Signed-off-by: Alexei Starovoitov
---
Silly mistake. I was sure I'd checked this error path. Apparently not :(
net/core/filter.c |4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/core/filter.c
On Thu, Dec 4, 2014 at 1:26 AM, Joe Perches wrote:
> On Thu, 2014-11-27 at 10:49 -0800, Joe Perches wrote:
>> On Thu, 2014-11-27 at 12:25 +, David Laight wrote:
>> > Why the change in data?
>>
>> btw: without gcov and using -O2
>>
>> $ size arch/x86/net/bpf_jit_comp.o*
>>text data
On Thu, Dec 4, 2014 at 10:05 AM, Joe Perches wrote:
> On Thu, 2014-12-04 at 07:56 -0800, Alexei Starovoitov wrote:
>> On Thu, Dec 4, 2014 at 1:26 AM, Joe Perches wrote:
>> > On Thu, 2014-11-27 at 10:49 -0800, Joe Perches wrote:
>> >> On Thu, 2014-11-27 at 1
0 1068329bb arch/x86/net/bpf_jit_comp.o.4.9.old
>
> Signed-off-by: Joe Perches
probably it was worth noting in comment that
reg is 4-bit value and AUX_REG==12, so it won't overflow.
Dave, it's for net-next of course.
Suggested-by: Alexei Starovoitov
Tested-by: Alexei St
s:
>
>>
>> Signed-off-by: Joe Perches
>> ---
>>
>> compiled, untested by me, but per Alexei Starovoitov this passes
>> the test_bpf suite
>
> Really, the root cause of this is the 'inline' abuse in non fast paths
> for non trivial functions.
well, it is a tri
On Thu, Dec 4, 2014 at 5:01 PM, Joe Perches wrote:
> Let the compiler decide instead.
>
> No change in object size x86-64 -O2 no profiling
>
> Signed-off-by: Joe Perches
> Suggested-by: Eric Dumazet
Acked-by: Alexei Starovoitov
Dave, this is on top of pr
On Thu, Jan 22, 2015 at 7:57 AM, Michael Holzheu
wrote:
> We must not hold locks when calling copy_to_user():
>
> BUG: sleeping function called from invalid context at mm/memory.c:3732
> in_atomic(): 0, irqs_disabled(): 0, pid: 671, name: test_maps
> 1 lock held by test_maps/671:
> #0: (rcu_read
On Thu, Jan 22, 2015 at 8:01 AM, Michael Holzheu
wrote:
> Looks like the "test_maps" test case expects to get the keys in
> the wrong order when iterating over the elements:
>
> test_maps: samples/bpf/test_maps.c:79: test_hashmap_sanity: Assertion
> `bpf_get_next_key(map_fd, &key, &next_key) == 0
On Thu, Jan 22, 2015 at 9:54 AM, Michael Holzheu
wrote:
>> > So call rcu_read_unlock() before copy_to_user(). We can
>> > release the lock earlier because it is not needed for copy_to_user().
>>
>> we cannot move the rcu unlock this way, since it protects the value.
>> So we need to copy the value
Michael Holzheu caught two issues (in bpf syscall and in the test).
Fix them. Details in corresponding patches.
Alexei Starovoitov (2):
bpf: rcu lock must not be held when calling copy_to_user()
samples: bpf: relax test_maps check
kernel/bpf/syscall.c| 25
6>] SyS_bpf+0x20e/0x840
Fix it by allocating temporary buffer to store map element value.
Fixes: db20fd2b0108 ("bpf: add lookup/update/delete/iterate methods to BPF
maps")
Reported-by: Michael Holzheu
Signed-off-by: Alexei Starovoitov
---
kernel/bpf/syscall.c | 25 ++
hash map is unordered, so the get_next_key() iterator shouldn't
rely on a particular order of elements. So relax this test.
Fixes: ffb65f27a155 ("bpf: add a testsuite for eBPF maps")
Reported-by: Michael Holzheu
Signed-off-by: Alexei Starovoitov
---
samples/bpf/test_maps.c |
debug info would probably want to use kprobe attachment point, since kprobe
can be inserted anywhere and all registers are available in the program.
tracepoint attachments are useful without debug info, so standalone tools
like iosnoop will use them.
The main difference vs existing perf_pro
|
256 -> 511 : 0| |
512 -> 1023 : 2214734 |***** |
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile |4 +++
samples/bpf/bpf_load.c |3 ++
samples/bpf/tracex4_ker
bpf_ktime_get_ns() is used by programs to compute time delta between events
or as a timestamp
Signed-off-by: Alexei Starovoitov
---
include/uapi/linux/bpf.h |1 +
kernel/trace/bpf_trace.c | 10 ++
2 files changed, 11 insertions(+)
diff --git a/include/uapi/linux/bpf.h b/include
-off-by: Alexei Starovoitov
---
include/linux/ftrace_event.h |2 ++
include/uapi/linux/bpf.h |1 +
kernel/trace/bpf_trace.c | 39
kernel/trace/trace_events_filter.c | 10 ++---
kernel/trace/trace_kprobe.c| 11
: ./bld_x64/../net/unix/af_unix.c:1231
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile |2 +
samples/bpf/dropmon.c | 129 +
2 files changed, 131 insertions(+)
create mode 100644 samples/bpf/dropmon.c
diff --git a/samples/bpf/Makefile b
n be done for network transmit latencies, syscalls, etc
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile |4 ++
samples/bpf/tracex3_kern.c | 96 +
samples/bpf/tracex3_user.c | 146
3 files changed,
struct module *mod, void *module_region)
> +void __weak module_memfree(void *module_region)
> {
> vfree(module_region);
> }
Looks obviously correct.
Acked-by: Alexei Starovoitov
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body
On Mon, Jan 5, 2015 at 11:42 AM, Christoph Hellwig wrote:
> On Mon, Jan 05, 2015 at 12:38:04PM -0700, Jens Axboe wrote:
>> That was true in earlier kernels as well, going back a few versions at
>> least, preempt was disabled on calling __blk_mq_run_hw_queue(). Just
>> checked, and 3.16 and later h
On Thu, Jan 8, 2015 at 1:16 AM, Fam Zheng wrote:
> + if (!timeout || (timeout->tv_nsec == 0 && timeout->tv_sec == 0)) {
..
> + } else if (timeout->tv_nsec >= 0 && timeout->tv_sec >= 0) {
the check for tv_nsec is not enough, which points
to the fragility of passing user timespec around
On Thu, Jan 8, 2015 at 10:42 AM, wrote:
>> I'd like to see a more ambitious change, since the timer isn't the
>> only problem like this. Specifically, I'd like a syscall that does a
>> list of epoll-related things and then waits. The list of things could
>> include, at least:
>>
>> - EPOLL_CTL
On Thu, Jan 8, 2015 at 1:29 AM, Christoph Hellwig wrote:
> On Wed, Jan 07, 2015 at 07:55:42PM -0800, Alexei Starovoitov wrote:
>> I'm seeing the same splats... what tree I can pull the fix from ?
>
> None so far. I'll still need a review to apply it to the scsi-queue t
|
Ctrl-C at any time. Kernel will auto cleanup maps and programs
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile |4 ++
samples/bpf/tracex2_kern.c | 71 +
samples/bpf/tracex2_user.c | 95 +
Debugging of eBPF programs needs some form of printk from the program,
so let programs call limited trace_printk() with %d %u %x %p modifiers only.
Signed-off-by: Alexei Starovoitov
---
include/uapi/linux/bpf.h|1 +
kernel/trace/bpf_trace.c| 61
receive_skb: dev=lo
skbaddr=88000dfcc900 len=84
Ctrl-C at any time, kernel will auto cleanup
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile |4 +++
samples/bpf/bpf_helpers.h | 18 ++
samples/bpf/bpf_load.c | 59 ++
from unsafe address via probe_kernel_read(),
so that eBPF program can walk any kernel data structures
Signed-off-by: Alexei Starovoitov
---
include/linux/ftrace_event.h |4 ++
include/trace/bpf_trace.h | 25 +++
include/trace/ftrace.h | 30
incl
On Fri, Jan 16, 2015 at 7:02 AM, Steven Rostedt wrote:
> On Thu, 15 Jan 2015 20:16:01 -0800
> Alexei Starovoitov wrote:
>
>> Hi Ingo, Steven,
>>
>> This patch set is based on tip/master.
>
> Note, the tracing code isn't maintained in tip/master, but perf
On Mon, Jan 19, 2015 at 1:52 AM, Masami Hiramatsu
wrote:
> If we can write the script as
>
> int bpf_prog4(s64 write_size)
> {
>...
> }
>
> This will be much easier to play with.
yes. that's the intent for user space to do.
>> The example of this arbitrary pointer walking is tracex1_kern.c
On Mon, Jan 19, 2015 at 6:58 PM, Masami Hiramatsu
wrote:
>>
>> it's done already... one can do the same skb->dev->name logic
>> in kprobe attached program... so from bpf program point of view,
>> tracepoints and kprobes feature-wise are exactly the same.
>> Only input is different.
>
> No, I meant
llvm trunk now has BPF backend:
https://twitter.com/llvmweekly/status/559076464973594625
It's the one used to build samples/bpf/*_kern.c examples.
Compiler just emits extended BPF instructions. It's not
aware whether they are used for tracing or networking,
or how they're loaded into the kernel.
On Tue, Jan 27, 2015 at 3:13 PM, Karl Beldan wrote:
> On Tue, Jan 27, 2015 at 10:03:32PM +, Al Viro wrote:
>> On Tue, Jan 27, 2015 at 04:25:16PM +0100, Karl Beldan wrote:
>> > The carry from the 64->32bits folding was dropped, e.g with:
>> > saddr=0x daddr=0xFFFF len=0x proto=0
bpf_ktime_get_ns() is used by programs to compute time delta between events
or as a timestamp
Signed-off-by: Alexei Starovoitov
---
include/uapi/linux/bpf.h |1 +
kernel/trace/bpf_trace.c | 10 ++
2 files changed, 11 insertions(+)
diff --git a/include/uapi/linux/bpf.h b/include
m unsafe address via probe_kernel_read(),
so that eBPF program can walk any kernel data structures
Signed-off-by: Alexei Starovoitov
---
include/linux/ftrace_event.h |4 ++
include/trace/bpf_trace.h | 25 +++
include/trace/ftrace.h | 29
incl
: ./bld_x64/../net/unix/af_unix.c:1231
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile |2 +
samples/bpf/dropmon.c | 129 +
2 files changed, 131 insertions(+)
create mode 100644 samples/bpf/dropmon.c
diff --git a/samples/bpf/Makefile b
|
256 -> 511 : 0| |
512 -> 1023 : 2214734 |***** |
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile |4 +++
samples/bpf/bpf_load.c |3 ++
samples/bpf/tracex4_ker
n be done for network transmit latencies, syscalls, etc
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile |4 ++
samples/bpf/tracex3_kern.c | 92 +++
samples/bpf/tracex3_user.c | 150
3 files changed,
y want to use kprobe attachment point, since kprobe
can be inserted anywhere and all registers are available in the program.
tracepoint attachments are useful without debug info, so standalone tools
like iosnoop will use them.
The main difference vs existing perf_probe/ftrace infra is in kernel agg
-off-by: Alexei Starovoitov
---
include/linux/ftrace_event.h |2 ++
include/uapi/linux/bpf.h |1 +
kernel/trace/bpf_trace.c | 39
kernel/trace/trace_events_filter.c | 10 ++---
kernel/trace/trace_kprobe.c| 11
receive_skb: dev=lo
skbaddr=88000dfcc900 len=84
Ctrl-C at any time, kernel will auto cleanup
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile |4 +++
samples/bpf/bpf_helpers.h | 14 +++
samples/bpf/bpf_load.c | 59 ++
|
Ctrl-C at any time. Kernel will auto cleanup maps and programs
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile |4 ++
samples/bpf/tracex2_kern.c | 71 +
samples/bpf/tracex2_user.c | 95 +
On Wed, Jan 28, 2015 at 8:25 AM, Arnaldo Carvalho de Melo
wrote:
> Em Wed, Jan 28, 2015 at 01:24:15PM -0300, Arnaldo Carvalho de Melo escreveu:
>> Em Tue, Jan 27, 2015 at 08:06:09PM -0800, Alexei Starovoitov escreveu:
>> > diff --git a/samples/bpf/tracex1_kern.c b/samples/
On 3/19/15 8:07 AM, Steven Rostedt wrote:
struct trace_print_flags {
unsigned long mask;
@@ -252,6 +253,7 @@ enum {
TRACE_EVENT_FL_WAS_ENABLED_BIT,
TRACE_EVENT_FL_USE_CALL_FILTER_BIT,
TRACE_EVENT_FL_TRACEPOINT_BIT,
+ TRACE_EVENT_FL_KPROBE_BIT,
I
On 3/19/15 8:11 AM, Steven Rostedt wrote:
On Mon, 16 Mar 2015 14:49:39 -0700
Alexei Starovoitov wrote:
bpf_ktime_get_ns() is used by programs to compue time delta between events
"compute"
ok :)
+ [BPF_FUNC_ktime_get_ns] = {
+ .func = bpf_kt
On 3/19/15 8:29 AM, Steven Rostedt wrote:
+ /* check format string for allowed specifiers */
+ for (i = 0; i < fmt_size; i++)
Even though there's only a single "if" statement after the "for", it is
usually considered proper to add the brackets if the next line is
complex (more than
On 3/19/15 8:50 AM, Steven Rostedt wrote:
I'm not going to review the sample code, as I'm a bit strapped for
time, and that's more userspace oriented anyway. I'm much more concerned
that the kernel modifications are correct.
sure. thanks a lot for thorough review!
bpf_ktime_get_ns() is used by programs to compute time delta between events
or as a timestamp
Signed-off-by: Alexei Starovoitov
Reviewed-by: Steven Rostedt
---
include/uapi/linux/bpf.h |1 +
kernel/trace/bpf_trace.c | 14 ++
2 files changed, 15 insertions(+)
diff --git a
# 50
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile |4 ++
samples/bpf/tracex3_kern.c | 89 ++
samples/bpf/tracex3_user.c | 150
3 files changed, 243 insertions(+)
create mode 100644
0x8804338bc280 is 15sec old was allocated at ip 8105dc32
$ addr2line -fispe vmlinux 8105dc32
do_fork at fork.c:1665
As soon as processes exit the memory is reclaimed and tracex4 prints nothing.
Similar experiment can be done with __kmalloc/kfree pair.
Signed-off-by: Alexei
*' as an input
('struct pt_regs' is architecture dependent)
Note, kprobes are _not_ a stable kernel ABI, so bpf programs attached to
kprobes must be recompiled for every kernel version and user must supply correct
LINUX_VERSION_CODE in attr.kern_version during bpf_prog_load() call
ld_x64/../net/ipv4/icmp.c:1038
0x816d0da9: ./bld_x64/../net/unix/af_unix.c:1231
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile |4 ++
samples/bpf/tracex2_kern.c | 86 +++
samples/bpf/tracex2_user.c | 95 +
ping-19826 [000] d.s2 63103.382684: : skb 880466b1d300 len 84
ping-19826 [000] d.s2 63104.382533: : skb 880466b1ca00 len 84
ping-19826 [000] d.s2 63104.382594: : skb 880466b1d300 len 84
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile|
add TRACE_EVENT_FL_KPROBE flag to differentiate kprobe type of tracepoints,
since bpf programs can only be attached to kprobe type of
PERF_TYPE_TRACEPOINT perf events.
Signed-off-by: Alexei Starovoitov
---
include/linux/ftrace_event.h |3 +++
kernel/trace/trace_kprobe.c |2 +-
2 files
esume one day the default setting
of it might change, though), but code making use of it should not care if
it's actually enabled or not.
Instead, hide this via header files and let the rest deal with it.
Signed-off-by: Daniel Borkmann
Signed-off-by: Alexei Starovoitov
---
include/linux/bp
buffers
and emits big 'this is debug only' banner.
Signed-off-by: Alexei Starovoitov
---
include/uapi/linux/bpf.h |1 +
kernel/trace/bpf_trace.c | 75 ++
2 files changed, 76 insertions(+)
diff --git a/include/uapi/linux/bpf.h b/include/
e TCP stack instrumentation (like web10g) using
bpf+kprobe, but without adding any new code to the TCP stack.
Though kprobes are slow compared to tracepoints, they are good enough
for prototyping and trace_marker/debug_tracepoint ideas can accelerate
them in the future.
Alexei Starovoitov (8):
trac
On 3/20/15 2:09 PM, Steven Rostedt wrote:
+/**
+ * trace_call_bpf - invoke BPF program
+ * @prog - BPF program
+ * @ctx - opaque context pointer
+ *
+ * kprobe handlers execute BPF programs via this helper.
+ * Can be used from static tracepoints in the future.
Should also state what the expe
On 3/20/15 2:22 PM, Steven Rostedt wrote:
+/* limited trace_printk()
+ * only %d %u %x %ld %lu %lx %lld %llu %llx %p conversion specifiers allowed
+ */
Ah! Again, don't contaminate the rest of the kernel with net comment
styles! :-)
ok :)
+ } else if (fmt[i] == 'p') {
+
esume one day the default setting
of it might change, though), but code making use of it should not care if
it's actually enabled or not.
Instead, hide this via header files and let the rest deal with it.
Signed-off-by: Daniel Borkmann
Signed-off-by: Alexei Starovoitov
---
include/linux/bp
buffers
and emits big 'this is debug only' banner.
Signed-off-by: Alexei Starovoitov
Reviewed-by: Steven Rostedt
---
include/uapi/linux/bpf.h |1 +
kernel/trace/bpf_trace.c | 78 ++
2 files changed, 79 insertions(+)
diff --git a/in
0x8804338bc280 is 15sec old was allocated at ip 8105dc32
$ addr2line -fispe vmlinux 8105dc32
do_fork at fork.c:1665
As soon as processes exit the memory is reclaimed and tracex4 prints nothing.
Similar experiment can be done with __kmalloc/kfree pair.
Signed-off-by: Alexei
# 50
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile |4 ++
samples/bpf/tracex3_kern.c | 89 ++
samples/bpf/tracex3_user.c | 150
3 files changed, 243 insertions(+)
create mode 100644
ping-19826 [000] d.s2 63103.382684: : skb 880466b1d300 len 84
ping-19826 [000] d.s2 63104.382533: : skb 880466b1ca00 len 84
ping-19826 [000] d.s2 63104.382594: : skb 880466b1d300 len 84
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile|
ld_x64/../net/ipv4/icmp.c:1038
0x816d0da9: ./bld_x64/../net/unix/af_unix.c:1231
Signed-off-by: Alexei Starovoitov
---
samples/bpf/Makefile |4 ++
samples/bpf/tracex2_kern.c | 86 +++
samples/bpf/tracex2_user.c | 95 +
t_fd, PERF_EVENT_IOC_SET_BPF, prog_fd);
Next step is to prototype TCP stack instrumentation (like web10g) using
bpf+kprobe, but without adding any new code to the TCP stack.
Though kprobes are slow compared to tracepoints, they are good enough
for prototyping and trace_marker/debug_tracepoint ideas can
user must supply correct
LINUX_VERSION_CODE in attr.kern_version during bpf_prog_load() call.
Signed-off-by: Alexei Starovoitov
Reviewed-by: Steven Rostedt
---
include/linux/ftrace_event.h| 11
include/uapi/linux/bpf.h|3 +
include/uapi/linux/perf_event.h |1 +
kernel/bpf/sys
bpf_ktime_get_ns() is used by programs to compute time delta between events
or as a timestamp
Signed-off-by: Alexei Starovoitov
Reviewed-by: Steven Rostedt
---
include/uapi/linux/bpf.h |1 +
kernel/trace/bpf_trace.c | 14 ++
2 files changed, 15 insertions(+)
diff --git a
add TRACE_EVENT_FL_KPROBE flag to differentiate kprobe type of tracepoints,
since bpf programs can only be attached to kprobe type of
PERF_TYPE_TRACEPOINT perf events.
Signed-off-by: Alexei Starovoitov
Reviewed-by: Steven Rostedt
---
include/linux/ftrace_event.h |3 +++
kernel/trace
On 3/21/15 5:14 AM, Masami Hiramatsu wrote:
> (2015/03/21 8:30), Alexei Starovoitov wrote:
>>
>> Note, kprobes are _not_ a stable kernel ABI, so bpf programs attached to
>> kprobes must be recompiled for every kernel version and user must supply
>> corre
On 3/22/15 3:06 AM, Masami Hiramatsu wrote:
> (2015/03/22 1:02), Alexei Starovoitov wrote:
>> On 3/21/15 5:14 AM, Masami Hiramatsu wrote:
>>> (2015/03/21 8:30), Alexei Starovoitov wrote:
>>>>
>>>> Note, kprobes are _not_ a stable kernel ABI, so bpf
On 3/22/15 4:10 AM, Ingo Molnar wrote:
* Alexei Starovoitov wrote:
+static const struct bpf_func_proto bpf_trace_printk_proto = {
+ .func = bpf_trace_printk,
+ .gpl_only = true,
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_PTR_TO_STACK,
+ .arg2_type
ts, they are good enough
for prototyping and trace_marker/debug_tracepoint ideas can accelerate
them in the future.
Alexei Starovoitov (8):
tracing: add kprobe flag
tracing: attach BPF programs to kprobes
tracing: allow BPF programs to call bpf_ktime_get_ns()
tracing: allow BPF programs to c
user must supply correct
LINUX_VERSION_CODE in attr.kern_version during bpf_prog_load() call.
Signed-off-by: Alexei Starovoitov
Reviewed-by: Steven Rostedt
---
include/linux/ftrace_event.h| 11
include/uapi/linux/bpf.h|3 +
include/uapi/linux/perf_event.h |1 +
kernel/bpf/sys