nts_sysctl(const struct ctl_table *table, int write,
> return ret;
> }
>
> -static struct ctl_table user_event_sysctls[] = {
> +static const struct ctl_table user_event_sysctls[] = {
> {
> .procname = "user_events_max",
> .data = &max_user_events,
Acked-by: Steven Rostedt (Google) # for kernel/trace/
-- Steve
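The hunk above only shows the array gaining a const qualifier. As a hedged illustration of what a const-qualified sysctl table looks like end to end (the names my_max, my_sysctls, and "my_feature_max" are made up; this is not the actual kernel/trace code):

	/* Illustrative sketch only -- not the user_events code above. */
	static int my_max = 1024;

	static const struct ctl_table my_sysctls[] = {
		{
			.procname	= "my_feature_max",
			.data		= &my_max,
			.maxlen		= sizeof(my_max),
			.mode		= 0644,
			.proc_handler	= proc_dointvec,
		},
	};

	static int __init my_feature_sysctl_init(void)
	{
		/* Assumes a tree where register_sysctl_init() accepts a const
		 * table, which is what this constification series enables. */
		register_sysctl_init("kernel", my_sysctls);
		return 0;
	}
	subsys_initcall(my_feature_sysctl_init);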
On Wed, 13 Sep 2023 13:38:27 +0200
Juergen Gross wrote:
> diff --git a/include/trace/events/xen.h b/include/trace/events/xen.h
> index 44a3f565264d..0577f0cdd231 100644
> --- a/include/trace/events/xen.h
> +++ b/include/trace/events/xen.h
> @@ -6,26 +6,26 @@
> #define _TRACE_XEN_H
>
> #includ
07f9595 100644
> --- a/arch/x86/include/asm/ftrace.h
> +++ b/arch/x86/include/asm/ftrace.h
Acked-by: Steven Rostedt (Google)
-- Steve
On Wed, 7 Sep 2022 09:04:28 -0400
Kent Overstreet wrote:
> On Wed, Sep 07, 2022 at 01:00:09PM +0200, Michal Hocko wrote:
> > Hmm, it seems that further discussion doesn't really make much sense
> > here. I know how to use my time better.
>
> Just a thought, but I generally find it more product
On Mon, 5 Sep 2022 16:42:29 -0400
Kent Overstreet wrote:
> > Haven't tried that yet but will do. Thanks for the reference code!
>
> Is it really worth the effort of benchmarking tracing API overhead here?
>
> The main cost of a tracing-based approach is going to be the data structure
> for
On Mon, 5 Sep 2022 11:44:55 -0700
Nadav Amit wrote:
> I would note that I have a solution in the making (which pretty much works)
> for this matter, and does not require any kernel changes. It produces a
> call stack that leads to the code that led to the syscall failure.
>
> The way it works is by
On Sun, 4 Sep 2022 18:32:58 -0700
Suren Baghdasaryan wrote:
> Page allocations (overheads are compared to get_free_pages() duration):
> 6.8% Codetag counter manipulations (__lazy_percpu_counter_add +
> __alloc_tag_add)
> 8.8% lookup_page_ext
> 1237% call stack capture
> 139% tracepoint with atta
On Thu, 1 Sep 2022 21:35:32 -0400
Kent Overstreet wrote:
> On Thu, Sep 01, 2022 at 08:23:11PM -0400, Steven Rostedt wrote:
> > If ftrace, perf, bpf can't do what you want, take a harder look to see if
> > you can modify them to do so.
>
> Maybe we can use this ex
On Thu, 1 Sep 2022 18:55:15 -0400
Kent Overstreet wrote:
> On Thu, Sep 01, 2022 at 06:34:30PM -0400, Steven Rostedt wrote:
> > On Thu, 1 Sep 2022 17:54:38 -0400
> > Kent Overstreet wrote:
> > >
> > > So this looks like it's gotten better since I
On Thu, 1 Sep 2022 17:54:38 -0400
Kent Overstreet wrote:
>
> So this looks like it's gotten better since I last looked, but it's still not
> there yet.
>
> Part of the problem is that the tracepoints themselves are in the wrong place:
> your end event is when a task is woken up, but that means s
On Thu, 1 Sep 2022 17:38:44 -0400
Steven Rostedt wrote:
> # echo 'hist:keys=comm,prio,delta.buckets=10:sort=delta' > /sys/kernel/tracing/events/synthetic/wakeup_lat/trigger
The above could almost be done with sqlhist (but I haven't implemented
"buckets=10"
On Tue, 30 Aug 2022 14:49:16 -0700
Suren Baghdasaryan wrote:
> From: Kent Overstreet
>
> This adds the ability to easily instrument code for measuring latency.
> To use, add the following calls to your code, at the start and end of
> the event you wish to measure:
>
> code_tag_time_stats_
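The macro name is cut off above, so the sketch below is not the proposed codetag API; it only illustrates the general start/end instrumentation pattern being described, with made-up names (my_latency_stats, my_measure_start/end):

	#include <linux/ktime.h>
	#include <linux/types.h>

	/* Illustrative only: accumulate latency between a start and end call. */
	struct my_latency_stats {
		u64 count;
		u64 total_ns;
	};

	static struct my_latency_stats my_stats;

	static inline u64 my_measure_start(void)
	{
		return ktime_get_ns();
	}

	static inline void my_measure_end(u64 start)
	{
		my_stats.count++;
		my_stats.total_ns += ktime_get_ns() - start;
	}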
On Thu, 1 Sep 2022 10:32:19 -0400
Kent Overstreet wrote:
> On Thu, Sep 01, 2022 at 08:51:31AM +0200, Peter Zijlstra wrote:
> > On Tue, Aug 30, 2022 at 02:48:52PM -0700, Suren Baghdasaryan wrote:
> > > +static void lazy_percpu_counter_switch_to_pcpu(struct raw_lazy_percpu_counter *c)
> >
On Thu, 9 Jun 2022 15:02:20 +0200
Petr Mladek wrote:
> > I'm somewhat curious whether we can actually remove that trace event.
>
> Good question.
>
> Well, I think that it might be useful. It allows one to see trace and
> printk messages together.
Yes, people still use it. I was just asked about
On Wed, 27 Apr 2022 19:49:17 -0300
"Guilherme G. Piccoli" wrote:
> Currently we don't have a way to check whether any dumpers are set,
> except maybe by counting the list members. This patch introduces a very
> simple helper to provide this information, by just keeping track of
> registered/unregistere
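A hedged sketch of what such a helper could look like; the counter and the kmsg_has_dumpers() name are made up here and are not necessarily the patch's implementation:

	/* Illustrative only: track registrations so callers can cheaply ask
	 * "is any dumper registered?". Incremented in kmsg_dump_register(),
	 * decremented in kmsg_dump_unregister(). */
	static atomic_t dumpers_registered = ATOMIC_INIT(0);

	bool kmsg_has_dumpers(void)	/* made-up name */
	{
		return atomic_read(&dumpers_registered) > 0;
	}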
On Thu, 28 Apr 2022 09:01:13 +0800
Xiaoming Ni wrote:
> > +#ifdef CONFIG_DEBUG_NOTIFIERS
> > + {
> > + char sym_name[KSYM_NAME_LEN];
> > +
> > + pr_info("notifiers: registered %s()\n",
> > + notifier_name(n, sym_name));
> > + }
>
> Duplicate Code.
>
>
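Since the same debug block presumably appears in more than one place, one way to address the "Duplicate Code" comment would be a tiny helper. A sketch only: notifier_name() is from the patch under discussion, and log_notifier() is a made-up name:

	#ifdef CONFIG_DEBUG_NOTIFIERS
	static void log_notifier(const char *action, struct notifier_block *n)
	{
		char sym_name[KSYM_NAME_LEN];

		pr_info("notifiers: %s %s()\n", action,
			notifier_name(n, sym_name));
	}
	#else
	static inline void log_notifier(const char *action,
					struct notifier_block *n) { }
	#endif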
On Tue, 10 May 2022 13:38:39 +0200
Petr Mladek wrote:
> As already mentioned in the other reply, panic() sometimes stops
> the other CPUs using NMI, for example, see kdump_nmi_shootdown_cpus().
>
> Another situation is when the CPU using the lock ends up in some
> infinite loop because something we
On Fri, 29 Apr 2022 10:46:35 -0300
"Guilherme G. Piccoli" wrote:
> Thanks Sergei and Steven, good idea! I thought about the switch change
> you propose, but I confess I got a bit confused by the "fallthrough"
> keyword - do I need to use it?
No. The fallthrough keyword is only needed when there'
Why not:
>
> 	case DIE_OOPS:
> 	case PANIC_NOTIFIER:
> 		do_dump = 1;
> 		break;
Agreed.
Other than that.
Acked-by: Steven Rostedt (Google)
-- Steve
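To make the answer concrete, a short sketch of when the fallthrough annotation is and is not needed; SOME_OTHER_CASE and prepare_dump() are made up for illustration:

	switch (val) {
	case DIE_OOPS:
	case PANIC_NOTIFIER:	/* stacked labels with no code between them:
				 * no fallthrough annotation needed */
		do_dump = 1;
		break;
	case SOME_OTHER_CASE:	/* made-up label */
		prepare_dump();	/* made-up helper */
		fallthrough;	/* needed: statements precede the next label */
	default:
		break;
	}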
On Mon, 8 Nov 2021 15:35:50 +0100
Borislav Petkov wrote:
> On Mon, Nov 08, 2021 at 03:24:39PM +0100, Borislav Petkov wrote:
> > I guess I can add another indirection to notifier_chain_register() and
> > avoid touching all the call sites.
>
> IOW, something like this below.
>
> This way I won'
On Fri, 30 Apr 2021 09:15:51 +0200
Paolo Bonzini wrote:
> > Nit, but in change logs, please avoid stating "next patch", as when searching git
> > history (via git blame or whatever) there is no such thing as a "next patch".
> >
>
> Interesting, I use next patch(es) relatively often, though you're rig
On Tue, 27 Apr 2021 07:09:46 +0800
Lai Jiangshan wrote:
> From: Lai Jiangshan
>
> There is no functional change intended. Just rename it and
> move it to arch/x86/kernel/nmi.c so that we can reuse it later in
> next patch for early NMI and kvm.
Nit, but in change logs, please avoid sta
On Thu, 15 Apr 2021 02:50:53 +0200
Dario Faggioli wrote:
> On Wed, 2021-04-14 at 15:07 -0400, Steven Rostedt wrote:
> > On Wed, 14 Apr 2021 19:11:19 +0100
> > Andrew Cooper wrote:
> >
> > > Where the plugin (ought to) live depends heavily on whether we
> >
On Thu, 15 Apr 2021 00:11:32 +0200
Dario Faggioli wrote:
> Yes, basically, we can say that a Xen system has "its own trace-cmd".
> It's called `xentrace`, you run it from Dom0 and you get a (binary)
> file which contains a bunch of events.
>
> Not that differently from a trace-cmd's "trace.dat
On Wed, 14 Apr 2021 19:11:19 +0100
Andrew Cooper wrote:
> Where the plugin (ought to) live depends heavily on whether we consider
> the trace format a stable ABI or not.
Agreed. Like the VMware plugin to handle ESX traces. It's internal and not
published, as the API is not stable.
But if it ever
On Wed, 14 Apr 2021 11:07:33 +0100
Andrew Cooper wrote:
> On 13/04/2021 16:46, Steven Rostedt wrote:
> > Hi Giuseppe,
> >
> > On Tue, 13 Apr 2021 16:28:36 +0200
> > Giuseppe Eletto wrote:
> >
> >> Hello,
> >> I want to share with you a new pl
Hi Giuseppe,
On Tue, 13 Apr 2021 16:28:36 +0200
Giuseppe Eletto wrote:
> Hello,
> I want to share with you a new plugin developed by me, under the
> supervision of Dario Faggioli, which allows the new version of KernelShark
> (the v2-beta) to open and view the Xen traces created using the "xen
On Thu, 24 May 2018 13:40:24 +0200
Petr Mladek wrote:
> On Wed 2018-05-23 12:54:15, Thomas Garnier wrote:
> > When using -fPIE/PIC with function tracing, the compiler generates a
> > call through the GOT (call *__fentry__@GOTPCREL). This instruction
> > takes 6 bytes instead of 5 on the usual rel
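For reference, the size difference being discussed, shown as the two call forms (byte counts per the x86-64 encoding; the displacement bytes are placeholders):

	call __fentry__			# e8 xx xx xx xx     -- 5 bytes, direct rel32 call
	call *__fentry__@GOTPCREL(%rip)	# ff 15 xx xx xx xx  -- 6 bytes, indirect call via the GOT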
latest trace-v4.17-rc4-2 tree, which can be found at:
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
trace-v4.17-rc4-2
Tag SHA1: 580faa7b1b80b1332683dc869732a4db8a506b9c
Head SHA1: 45dd9b0666a162f8e4be76096716670cf1741f0e
Steven Rostedt (VMware) (1):
tracing/x86/xen