Hi Andrew,
On 30/05/24 19:46, Andrew Davis wrote:
On 5/30/24 4:07 AM, Beleswar Padhi wrote:
Acquire the mailbox handle during device probe and do not release the handle
in the stop/detach routine or error paths. This removes the redundant
requests for the mbox handle later during rproc start/attach. This also
Acquire the mailbox handle during device probe and do not release the handle
in the stop/detach routine or error paths. This removes the redundant
requests for the mbox handle later during rproc start/attach. This also
allows the remoteproc driver's probe to be deferred if the mailbox has not
been probed yet.
Signed-off-by: Bele
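A minimal sketch of the probe-time acquisition described above, assuming the
generic mailbox client API (mbox_request_channel()); all names here are
illustrative, not the actual patch:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/mailbox_client.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

struct example_rproc_priv {
        struct mbox_client client;
        struct mbox_chan *mbox;
};

static void example_mbox_rx(struct mbox_client *cl, void *data)
{
        /* handle an incoming mailbox message (left empty here) */
}

static int example_rproc_probe(struct platform_device *pdev)
{
        struct device *dev = &pdev->dev;
        struct example_rproc_priv *priv;

        priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
        if (!priv)
                return -ENOMEM;

        priv->client.dev = dev;
        priv->client.rx_callback = example_mbox_rx;

        /* Acquire the mailbox once, at probe time. If the mailbox driver
         * has not probed yet this returns -EPROBE_DEFER, which
         * dev_err_probe() propagates so the probe is retried later. */
        priv->mbox = mbox_request_channel(&priv->client, 0);
        if (IS_ERR(priv->mbox))
                return dev_err_probe(dev, PTR_ERR(priv->mbox),
                                     "failed to request mailbox\n");

        /* The handle lives for the whole device lifetime; it is not
         * released in stop/detach or in error paths. */
        platform_set_drvdata(pdev, priv);
        return 0;
}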
Use the device lifecycle managed allocation function. This helps prevent
mistakes like freeing out of order in cleanup functions and forgetting
to free on error paths.
Signed-off-by: Beleswar Padhi
---
drivers/remoteproc/ti_k3_r5_remoteproc.c | 6 ++
1 file changed, 2 insertions(+), 4 deleti
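For reference, a hedged illustration of the device-managed allocation pattern
referred to here (not the actual diff; the function and field names are made
up):

#include <linux/device.h>
#include <linux/slab.h>

static int example_setup(struct device *dev, unsigned int nr_entries)
{
        u32 *table;

        /* devm_kcalloc() ties the buffer's lifetime to the device, so no
         * kfree() is needed in remove(), in cleanup paths, or on error. */
        table = devm_kcalloc(dev, nr_entries, sizeof(*table), GFP_KERNEL);
        if (!table)
                return -ENOMEM;

        dev_set_drvdata(dev, table);
        return 0;
}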
Hello All,
This series adds deferred probe functionality to TI's remoteproc
drivers. The remoteproc drivers depend on the omap-mailbox driver
for mbox functionality. Sometimes, the remoteproc driver could be
probed before the mailbox driver, leading to rproc boot failures. Thus,
defer
Don't allocate devs again when it is already a valid pointer that points to
the memory allocated above with size (count + 2 * sizeof(dev)).
kmemleak reports:
unreferenced object 0x88800dda1980 (size 16):
comm "kworker/u10:5", pid 69, jiffies 4294671781
hex dump (first 16 bytes):
00 00 00
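A generic illustration of the leak pattern being reported (not the driver's
actual code; names and sizes are placeholders):

#include <linux/slab.h>

static void *devs_example;

static int example_get_devs(size_t count, size_t elem_size)
{
        /* Allocating again while devs_example already points at a live
         * buffer would orphan the first allocation, which is exactly
         * what kmemleak flags. Allocate only when the pointer is not
         * already valid. */
        if (!devs_example)
                devs_example = kcalloc(count + 2, elem_size, GFP_KERNEL);

        return devs_example ? 0 : -ENOMEM;
}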
On Mon, 3 Jun 2024 15:50:55 -0400
Mathieu Desnoyers wrote:
> On 2024-06-01 04:22, Masami Hiramatsu (Google) wrote:
> > From: Masami Hiramatsu (Google)
> >
> > Support raw tracepoint event on module by fprobe events.
> > Since it only uses for_each_kernel_tracepoint() to find a tracepoint,
> > t
On Mon, 3 Jun 2024 10:52:50 -0400
Steven Rostedt wrote:
> On Mon, 3 Jun 2024 11:37:23 +0900
> Masami Hiramatsu (Google) wrote:
>
> > On Sat, 01 Jun 2024 23:37:55 -0400
> > Steven Rostedt wrote:
> >
> > [...]
> > >
> > > +static int ftrace_hash_move_and_update_subops(struct ftrace_ops *subop
On Mon, 3 Jun 2024 11:07:52 -0400
Steven Rostedt wrote:
> On Mon, 3 Jun 2024 11:00:18 -0400
> Steven Rostedt wrote:
>
> > Yes, but that gets a bit complex, and requires the changing of all archs.
> > If it starts to become a problem, I'd rather add that as a feature. That is,
> > we can always go
On Mon, 3 Jun 2024 15:50:55 -0400
Mathieu Desnoyers wrote:
> Hi Masami,
>
> Why prevent module unload when a fprobe tracepoint is attached to a
> module? This changes the kernel's behavior significantly just for the
> sake of instrumentation.
>
> As an alternative, LTTng-modules attach/detach
It is possible that the remote processor is already running before
Linux boots or before the remoteproc platform driver probes. Implement the
required remoteproc framework ops to provide the resource table address and
to connect to or disconnect from the remote processor in such cases.
Signed-off-by: Tanmay Shah
---
Changes in v4:
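A rough sketch of the kind of ops being implemented, assuming the existing
remoteproc framework hooks (.attach, .detach, .get_loaded_rsc_table); the
structure and field names below are hypothetical, not the actual driver's:

#include <linux/remoteproc.h>

struct example_priv {
        void *rsc_tbl_va;       /* hypothetical: resource table mapping */
        size_t rsc_tbl_size;    /* hypothetical: resource table size */
};

/* Report where the already-loaded resource table lives. */
static struct resource_table *
example_get_loaded_rsc_table(struct rproc *rproc, size_t *table_sz)
{
        struct example_priv *priv = rproc->priv;

        *table_sz = priv->rsc_tbl_size;
        return priv->rsc_tbl_va;
}

/* Connect to a remote processor that is already running. */
static int example_attach(struct rproc *rproc)
{
        return 0;       /* e.g. set up IPC with the running firmware */
}

/* Disconnect without stopping the remote processor. */
static int example_detach(struct rproc *rproc)
{
        return 0;
}

static const struct rproc_ops example_attach_ops = {
        .attach                 = example_attach,
        .detach                 = example_detach,
        .get_loaded_rsc_table   = example_get_loaded_rsc_table,
};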
On 2024-06-01 04:22, Masami Hiramatsu (Google) wrote:
From: Masami Hiramatsu (Google)
Support raw tracepoint events on modules by fprobe events.
Since it only uses for_each_kernel_tracepoint() to find a tracepoint,
the tracepoints on modules are not handled. Thus, if a user specifies a
tracepoint on
From: "Steven Rostedt (Google)"
Add a test that creates 3 instances and enables the function_graph tracer in
each, as well as in the top instance, where each will enable a filter (but one
that traces all functions) and check that they are filtering properly.
Signed-off-by: Steven Rostedt (Google)
---
From: "Steven Rostedt (Google)"
The function tracer is tested to see if pid filtering works. Add a test to
also exercise the function_graph tracer, but only if the function_graph tracer
is enabled for the top level or instance.
Signed-off-by: Steven Rostedt (Google)
---
.../ftrace/test.d/ftrace/fun
From: "Steven Rostedt (Google)"
In most cases function graph is used by a single user. Instead of looping
over the function graph callbacks in this case, call the function
return callback directly.
Use the static_key that is set when the function graph tracer has fewer
than 2 callbacks regi
From: "Steven Rostedt (Google)"
In most cases function graph is used by a single user. Instead of looping
over the function graph callbacks in this case, call the function
entry callback directly.
Add a static_key that will be used to set the function graph logic to
either do the loop (whe
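A generic sketch of the static-branch pattern these two patches describe (this
is not the fgraph code itself; the callback table and names are invented):

#include <linux/jump_label.h>

typedef void (*example_cb_t)(unsigned long ip);

static example_cb_t example_cbs[16];
static int example_nr_cbs;

/* Key stays true while at most one callback is registered. */
static DEFINE_STATIC_KEY_TRUE(example_single_user);

static void example_dispatch(unsigned long ip)
{
        int i;

        if (static_branch_likely(&example_single_user)) {
                /* Fast path: a single user, call it directly. */
                if (example_cbs[0])
                        example_cbs[0](ip);
                return;
        }

        /* Slow path: loop over every registered callback. */
        for (i = 0; i < example_nr_cbs; i++)
                example_cbs[i](ip);
}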
From: "Steven Rostedt (Google)"
Instead of looping through all the elements of fgraph_array[] to see if
there's a gops attached to one and then calling its gops->func(), create
a fgraph_array_bitmask that sets bits when an index in the array is
reserved (via the simple LRU algorithm). Then only
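A generic illustration of reserving array slots in a bitmask so dispatch can
skip unused entries (names invented; this shows the pattern, not the fgraph
implementation):

#include <linux/bitmap.h>
#include <linux/errno.h>

#define EXAMPLE_ARRAY_SIZE 16

static DECLARE_BITMAP(example_bitmask, EXAMPLE_ARRAY_SIZE);

static int example_reserve_index(void)
{
        unsigned long bit;

        bit = find_first_zero_bit(example_bitmask, EXAMPLE_ARRAY_SIZE);
        if (bit >= EXAMPLE_ARRAY_SIZE)
                return -EBUSY;  /* all 16 slots are taken */
        set_bit(bit, example_bitmask);
        return bit;
}

static void example_release_index(int idx)
{
        clear_bit(idx, example_bitmask);
}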
From: "Steven Rostedt (VMware)"
Added functions that can be called by a fgraph_ops entryfunc and retfunc to
store state between the entry of the function being traced and the exit of
the same function. The fgraph_ops entryfunc() may call
fgraph_reserve_data() to store up to 32 words onto the task'
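A rough sketch of how an entry/return pair might use this, assuming the
fgraph_reserve_data()/fgraph_retrieve_data() interface and the reworked
callback signatures described in this series (treat the exact prototypes
and the gops->idx field as assumptions):

#include <linux/ftrace.h>
#include <linux/printk.h>
#include <linux/trace_clock.h>

static int example_entry(struct ftrace_graph_ent *trace,
                         struct fgraph_ops *gops)
{
        u64 *ts;

        /* Assumed API: reserve per-call storage (up to 32 words) on the
         * task's shadow stack for this fgraph_ops instance. */
        ts = fgraph_reserve_data(gops->idx, sizeof(*ts));
        if (!ts)
                return 0;       /* no room: skip tracing this call */

        *ts = trace_clock_local();
        return 1;
}

static void example_return(struct ftrace_graph_ret *trace,
                           struct fgraph_ops *gops)
{
        int size;
        u64 *ts;

        /* Assumed API: read back what the entry handler stored. */
        ts = fgraph_retrieve_data(gops->idx, &size);
        if (ts)
                pr_info("function took %llu ns\n",
                        trace_clock_local() - *ts);
}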
From: "Masami Hiramatsu (Google)"
Add a selftest for multiple function graph tracers with storage on the same
function. In this case, the shadow stack entry will be shared among those
fgraph users with different data storage, so this ensures that fgraph will
not mix up their stored data.
Link:
https:
From: "Steven Rostedt (Google)"
Instead of iterating through the entire fgraph_array[] and seeing if one
of the bitmap bits is set to know whether to call the array's retfunc() function,
use for_each_set_bit() on the bitmap itself. This will iterate only over
the set bits.
Reviewed-by: Masami
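A small sketch of the for_each_set_bit() walk described here, reusing the
invented bitmask and callback table from the sketches above (not the actual
fgraph retfunc dispatch):

static void example_dispatch_set_bits(unsigned long ip)
{
        unsigned long i;

        /* Visit only the reserved slots instead of all 16 entries. */
        for_each_set_bit(i, example_bitmask, EXAMPLE_ARRAY_SIZE) {
                if (example_cbs[i])
                        example_cbs[i](ip);
        }
}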
From: "Steven Rostedt (VMware)"
Add a boot-up selftest that passes variables from a function entry to a
function exit, and make sure that they do get passed around.
Co-developed with Masami Hiramatsu:
Link:
https://lore.kernel.org/linux-trace-kernel/171509110271.162236.11047551496319744627.stgit@
From: "Steven Rostedt (VMware)"
The use of the task->trace_recursion for the logic used for the function
graph no-trace was a bit of an abuse of that variable. Now that there
exist per-stack variables for registered graph tracers, use
those instead.
Link:
https://lore.kernel.org/linux
From: "Steven Rostedt (VMware)"
The use of the task->trace_recursion for the logic used for the function
graph depth was a bit of an abuse of that variable. Now that there
exist per-stack variables for registered graph tracers, use those
instead.
Link:
https://lore.kernel.org/linux-tr
From: "Masami Hiramatsu (Google)"
Since the fgraph_array index is used for the bitmap on the shadow
stack, it may leave some entries after a function_graph instance is
removed. Thus, if another instance reuses the fgraph_array index soon
after it is released, fgraph may be confused and call the newer
From: "Steven Rostedt (VMware)"
Allow instances to have their own ftrace_ops as part of the fgraph_ops
that makes the function_graph tracer filter on the set_ftrace_filter file
of the instance and not the top instance.
This uses the new ftrace_startup_subops(), by using graph_ops as the
"manager
From: "Steven Rostedt (VMware)"
The use of the task->trace_recursion for the logic used for the
set_graph_function was a bit of an abuse of that variable. Now that there
exist per-stack variables for registered graph tracers, use those
instead.
Link:
https://lore.kernel.org/linux-trac
From: "Steven Rostedt (VMware)"
Add a "task variables" array on the tasks shadow ret_stack that is the
size of longs for each possible registered fgraph_ops. That's a total
of 16, taking up 8 * 16 = 128 bytes (out of a page size 4k).
This will allow for fgraph_ops to do specific features on a pe
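The sizing arithmetic spelled out as a compile-time check (illustrative only;
the 16-entry figure is taken from the text above):

#include <linux/build_bug.h>

#define EXAMPLE_FGRAPH_ARRAY_SIZE       16

/* One long per possible fgraph_ops: 16 * 8 = 128 bytes on a 64-bit
 * kernel, a small fraction of a 4k page. */
static_assert(EXAMPLE_FGRAPH_ARRAY_SIZE * sizeof(long) <= 4096,
              "per-task fgraph variables must fit comfortably in one page");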
From: "Steven Rostedt (Google)"
There are cases where a single system will use a single function callback
to handle multiple users. For example, to allow the function_graph tracer to
have multiple users where each can trace its own set of functions, it is
useful to only have one ftrace_ops register
From: "Steven Rostedt (Google)"
The subops filters use a "manager" ops to enable and disable its filters.
The manager ops can handle more than one subops, and its filter is what
controls what functions get set. Add a ftrace_hash_move_and_update_subops()
function that will update the manager ops w
From: "Steven Rostedt (Google)"
Now that the function_graph has a main callback that handles the function
graph subops tracing, it no longer honors the pid filtering of ftrace. Add
back this logic in the function_graph code to update the gops callback for
the entry function to test if it should t
From: "Steven Rostedt (VMware)"
Pass the fgraph_ops structure to the function graph callbacks. This will
allow callbacks to add a descriptor to a fgraph_ops private field that will
be added in the future and use it for the callbacks. This will be useful
when more than one callback can be registere
From: "Steven Rostedt (VMware)"
Now that function graph tracing can handle more than one user, allow it to
be enabled in the ftrace instances. Note, the filtering of the functions is
still joined by the top level set_ftrace_filter and friends, as well as the
graph and nograph files.
Co-developed
From: "Steven Rostedt (VMware)"
Some of the flags for ftrace_startup() may be exposed even when
CONFIG_DYNAMIC_FTRACE is not configured in. This is fine as the difference
between dynamic ftrace and static ftrace is handled within the internals of
ftrace itself. No need to have use cases fail to comp
From: "Steven Rostedt (VMware)"
The function pointers ftrace_graph_entry and ftrace_graph_return are no
longer called via the function_graph tracer. Instead, an array structure is
now used that will allow for multiple users of the function_graph
infrastructure. The variables are still used by the
From: "Steven Rostedt (VMware)"
Allow multiple users to attach to the function graph tracer at the same
time. Only 16 simultaneous users can attach to the tracer. This is because
there's an array that stores the pointers to the attached fgraph_ops. When
a function being traced is entered, each of
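A sketch of what attaching two users could look like with this applied,
assuming the reworked fgraph_ops callbacks that receive the gops pointer (an
assumption based on the descriptions in this series; names are invented):

#include <linux/ftrace.h>
#include <linux/init.h>

static int example_graph_entry(struct ftrace_graph_ent *trace,
                               struct fgraph_ops *gops)
{
        return 1;       /* trace this function */
}

static void example_graph_return(struct ftrace_graph_ret *trace,
                                 struct fgraph_ops *gops)
{
}

static struct fgraph_ops example_user_a = {
        .entryfunc = example_graph_entry,
        .retfunc   = example_graph_return,
};

static struct fgraph_ops example_user_b = {
        .entryfunc = example_graph_entry,       /* shared for brevity */
        .retfunc   = example_graph_return,
};

static int __init example_init(void)
{
        int ret;

        ret = register_ftrace_graph(&example_user_a);
        if (ret)
                return ret;

        /* A second, independent user; up to 16 can attach at once. */
        return register_ftrace_graph(&example_user_b);
}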
From: "Masami Hiramatsu (Google)"
For the tail-call, there would be 2 or more ftrace_ret_stacks on the
ret_stack, which records "return_to_handler" as the return address except
for the last one. But on the real stack, there should be 1 entry because
tail-call reuses the return address on the sta
From: "Steven Rostedt (VMware)"
Add an array structure that will eventually allow the function graph tracer
to have up to 16 simultaneous callbacks attached. It's an array of 16
fgraph_ops pointers, that is assigned when one is registered. On entry of a
function the entry of the first item in the
From: "Steven Rostedt (VMware)"
Instead of using "ALIGN()", use BUILD_BUG_ON() as the structures should
always be divisible by sizeof(long).
Co-developed with Masami Hiramatsu:
Link:
https://lore.kernel.org/linux-trace-kernel/171509093949.162236.14518699447151894536.stgit@devnote2
Link:
http:/
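The check described here, written out against a hypothetical structure with
static_assert() as the compile-time check (the real patch uses BUILD_BUG_ON()
on the ret_stack structures):

#include <linux/build_bug.h>

struct example_ret_stack {
        unsigned long ret;
        unsigned long func;
        unsigned long long calltime;
};

/* Fail the build if the structure stored on the long-word shadow stack
 * is not a whole number of longs. */
static_assert(sizeof(struct example_ret_stack) % sizeof(long) == 0,
              "example_ret_stack must be a multiple of sizeof(long)");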
This is a continuation of the function graph multi user code.
I wrote a proof of concept of this code back in 2019[1], and
Masami started cleaning it up. I started from v10 of Masami's work,
which can be found here:
https://lore.kernel.org/linux-trace-kernel/171509088006.162236.7227326999861366050.stg
From: "Steven Rostedt (VMware)"
In order to make it possible to have multiple callbacks registered with the
function_graph tracer, the retstack needs to be converted from an array of
ftrace_ret_stack structures to an array of longs. This will allow storing
the list of callbacks on the stack for
On Tue, 28 May 2024 11:23:13 -0500, Dave Hansen
wrote:
On 5/17/24 04:06, Dmitrii Kuvaiskii wrote:
...
First, why is SGX so special here? How is the SGX problem different
than what the core mm code does?
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -25,6 +25
It is possible that the remote processor is already running before
Linux boots or before the remoteproc platform driver probes. Implement the
required remoteproc framework ops to provide the resource table address and
to connect to or disconnect from the remote processor in such cases.
Signed-off-by: Tanmay Shah
Changes in v3:
-
On Mon, 3 Jun 2024 11:46:36 +0900
Masami Hiramatsu (Google) wrote:
> > > at the beginning of the loop.
> > > Also, at the end of the loop,
> > >
> > > if (ftrace_hash_empty(new_hash)) {
> > > free_ftrace_hash(new_hash);
> > > new_hash = EMPTY_HASH;
> > > break;
> > > }
>
> And we still
On Mon, 3 Jun 2024 11:00:18 -0400
Steven Rostedt wrote:
> Yes, but that gets a bit complex, and requires the changing of all archs.
> If it starts to become a problem, I'd rather add that as a feature. That is,
> we can always go back to it. But for now, let's keep the complexity down.
And if we we
On Mon, 3 Jun 2024 12:11:07 +0900
Masami Hiramatsu (Google) wrote:
> > From: "Steven Rostedt (Google)"
> >
> > In most cases function graph is used by a single user. Instead of calling
> > a loop to call function graph callbacks in this case, call the function
> > entry callback directly.
> >
On Mon, 3 Jun 2024 11:46:36 +0900
Masami Hiramatsu (Google) wrote:
> > > if (ftrace_hash_empty(new_hash)) {
> > > free_ftrace_hash(new_hash);
> > > new_hash = EMPTY_HASH;
> > > break;
> > > }
>
> And we still need this (I think this should be done in intersect_hash(), we
> just
> need t
On Mon, 3 Jun 2024 11:37:23 +0900
Masami Hiramatsu (Google) wrote:
> On Sat, 01 Jun 2024 23:37:55 -0400
> Steven Rostedt wrote:
>
> [...]
> >
> > +static int ftrace_hash_move_and_update_subops(struct ftrace_ops *subops,
> > + struct ftrace_hash **orig_s
On Tue, May 21, 2024 at 02:24:54PM +0200, Arnaud Pouliquen wrote:
> Add the "st,proc-id" property, which allows identifying the remote processor.
> This ID is used to define a unique ID, common between Linux, U-Boot and
> OP-TEE, to identify a coprocessor.
> This ID will be used in request to OP-TEE rem
On Mon, 3 Jun 2024 at 02:22, Arnaud POULIQUEN
wrote:
>
> Hello Mathieu,
>
> On 5/31/24 19:28, Mathieu Poirier wrote:
> > On Thu, May 30, 2024 at 09:42:26AM +0200, Arnaud POULIQUEN wrote:
> >> Hello Mathieu,
> >>
> >> On 5/29/24 22:35, Mathieu Poirier wrote:
> >>> On Wed, May 29, 2024 at 09:13:26AM
MODULE_DESCRIPTION("ACPI NVDIMM Firmware Interface Table (NFIT) module");
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Intel Corporation");
---
base-commit: a693b9c95abd4947c2d06e05733de5d470ab6586
change-id: 20240603-md-drivers-acpi-nfit-e032bad0b189
Hi Deepak,
On 03/06/2024 09:36, Deepak Kumar Singh wrote:
There are certain use cases which require the glink interrupt to be
wakeup capable. For example, if the handset is in a sleep state and a
USB charger is plugged in, the DSP wakes up and sends a glink interrupt
to the host for glink PMIC channel communication. Gl
Hello Mathieu,
On 5/31/24 19:28, Mathieu Poirier wrote:
> On Thu, May 30, 2024 at 09:42:26AM +0200, Arnaud POULIQUEN wrote:
>> Hello Mathieu,
>>
>> On 5/29/24 22:35, Mathieu Poirier wrote:
>>> On Wed, May 29, 2024 at 09:13:26AM +0200, Arnaud POULIQUEN wrote:
Hello Mathieu,
On 5/28/2
There are certain use cases which require the glink interrupt to be
wakeup capable. For example, if the handset is in a sleep state and a
USB charger is plugged in, the DSP wakes up and sends a glink interrupt
to the host for glink PMIC channel communication. Glink is supposed to
wake up the host processor completely for furthe
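A hedged sketch of how an interrupt is commonly made wakeup capable
(illustrative only; the actual glink change may wire this up differently):

#include <linux/device.h>
#include <linux/interrupt.h>
#include <linux/pm_wakeup.h>

static int example_enable_wakeup(struct device *dev, int irq)
{
        int ret;

        /* Mark the device as wakeup capable ... */
        device_init_wakeup(dev, true);

        /* ... and arm the interrupt as a wakeup source so it can bring
         * the host processor out of sleep. */
        ret = enable_irq_wake(irq);
        if (ret)
                dev_warn(dev, "failed to enable wakeup irq: %d\n", ret);

        return ret;
}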
On 01/06/2024 10:33 pm, Qais Yousef wrote:
The {rt, realtime, dl}_{task, prio}() functions' return value is actually
a bool. Convert their return type to reflect that.
Suggested-by: Steven Rostedt (Google)
Signed-off-by: Qais Yousef
---
include/linux/sched/deadline.h | 8
include/linu
On 6/2/24 14:13, g...@luigi311.com wrote:
> From: Luis Garcia
>
> v6:
> - Drop powerdown-gpio patches
> - per Sakari Ailus' request, as it is not part of,
>   nor used by, the sensor
> - I tested without it and PPP still works
> - Dave mentioned it's not part of the datasheet
>