On Mon Dec 4, 2023 at 1:52 PM CET, Conor Dooley wrote:
> On Mon, Dec 04, 2023 at 02:40:44PM +0200, Markuss Broks wrote:
> > On 12/3/23 13:20, Conor Dooley wrote:
> > > On Sat, Dec 02, 2023 at 01:48:33PM +0100, Karel Balej wrote:
> > > > From: Markuss Broks
> > > >
> > > > Imagis IST3038B is a var
On Fri, Dec 08, 2023 at 12:41:38PM +0100, Tobias Huschle wrote:
> On Fri, Dec 08, 2023 at 05:31:18AM -0500, Michael S. Tsirkin wrote:
> > On Fri, Dec 08, 2023 at 10:24:16AM +0100, Tobias Huschle wrote:
> > > On Thu, Dec 07, 2023 at 01:48:40AM -0500, Michael S. Tsirkin wrote:
> > > > On Thu, Dec 07,
On Sat, Dec 09, 2023 at 10:05:27AM +0100, Karel Balej wrote:
> On Mon Dec 4, 2023 at 1:52 PM CET, Conor Dooley wrote:
> > On Mon, Dec 04, 2023 at 02:40:44PM +0200, Markuss Broks wrote:
> > > On 12/3/23 13:20, Conor Dooley wrote:
> > > > On Sat, Dec 02, 2023 at 01:48:33PM +0100, Karel Balej wrote:
>
> dinghao.liu@ wrote:
> > > Dave Jiang wrote:
>
> [snip]
>
> > > That said, this patch does not completely prevent the freelist from
> > > leaking in the following error path.
> > >
> > > discover_arenas()
> > > btt_freelist_init() -> ok (memory allocated)
> > > btt_rtt_init() -> f
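A minimal sketch of the kind of cleanup being discussed, reusing only the
function names from the quoted call chain; the field names and labels are
hypothetical and this is not the actual drivers/nvdimm/btt.c code:

/* Sketch only: free the freelist when a later init step fails. */
static int discover_arenas_sketch(struct arena_info *arena)
{
        int ret;

        ret = btt_freelist_init(arena);         /* allocates arena->freelist */
        if (ret)
                return ret;

        ret = btt_rtt_init(arena);
        if (ret)
                goto out_free_freelist;         /* without this, freelist leaks */

        return 0;

out_free_freelist:
        kfree(arena->freelist);
        arena->freelist = NULL;
        return ret;
}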
On 8.12.2023 16:04, Neil Armstrong wrote:
> The current memory region assignment only supports a single
> memory region.
>
> But new platforms introduce more regions to make the
> memory requirements more flexible for various use cases.
> Those new platforms also share the memory region between the
On Fri, Dec 8, 2023 at 8:14 PM Tom Cook wrote:
>
> I'm trying to build a signed .deb kernel package of
> https://github.com/torvalds/linux/tree/v6.6. I've copied
> certs/default_x509.genkey to certs/x509.genkey. The .config is the
> one from Ubuntu 23.10's default kernel with all new options acc
On Tue, Dec 5, 2023 at 2:54 PM Luis Chamberlain wrote:
>
> On Sun, Nov 26, 2023 at 04:19:14PM +0900, Masahiro Yamada wrote:
> > Commit f50169324df4 ("module.h: split out the EXPORT_SYMBOL into
> > export.h") appropriately separated EXPORT_SYMBOL into <linux/export.h>
> > because modules and EXPORT_SYMBOL are orth
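For illustration only (the exported helper below is made up): since that
split, a file that only exports a symbol can include <linux/export.h>
without pulling in all of <linux/module.h>:

#include <linux/export.h>

int my_helper(int x)            /* hypothetical exported helper */
{
        return x + 1;
}
EXPORT_SYMBOL(my_helper);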
Hi Karel,
On 12/8/23 23:59, Karel Balej wrote:
Markuss,
thank you for the review.
diff --git a/drivers/input/touchscreen/imagis.c b/drivers/input/touchscreen/imagis.c
index 84a02672ac47..41f28e6e9cb1 100644
--- a/drivers/input/touchscreen/imagis.c
+++ b/drivers/input/touchscreen/imagis.c
@@
From: "Steven Rostedt (Google)"
The maximum ring buffer data size is the maximum size of data that can be
recorded on the ring buffer. Events must be smaller than the sub buffer
data size minus any meta data. This size is checked before trying to
allocate from the ring buffer because the allocati
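A self-contained sketch of that check, with made-up sizes (the real sub
buffer and meta data sizes come from the ring buffer code itself):

#include <stdbool.h>
#include <stddef.h>

#define SUBBUF_DATA_SIZE 4096   /* assumed sub buffer data size */
#define META_DATA_SIZE     32   /* assumed meta data overhead   */

/* An event only fits if it is smaller than the sub buffer data size
 * minus the meta data; checking this before trying to allocate avoids
 * an allocation that can never succeed. */
static bool event_fits(size_t event_len)
{
        return event_len < SUBBUF_DATA_SIZE - META_DATA_SIZE;
}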
On Sat, 9 Dec 2023 17:01:39 -0500
Steven Rostedt wrote:
> From: "Steven Rostedt (Google)"
>
> The maximum ring buffer data size is the maximum size of data that can be
> recorded on the ring buffer. Events must be smaller than the sub buffer
> data size minus any meta data. This size is checked
From: "Steven Rostedt (Google)"
If a large event was added to the ring buffer that is larger than what the
trace_seq can handle, it just drops the output:
~# cat /sys/kernel/tracing/trace
# tracer: nop
#
# entries-in-buffer/entries-written: 2/2 #P:8
#
#_--
From: "Steven Rostedt (Google)"
Allow a trace write to be as big as the ring buffer tracing data will
allow. Currently, it only allows writes of 1KB in size, but there's no
reason that it cannot allow what the ring buffer can hold.
Signed-off-by: Steven Rostedt (Google)
---
[
Depends on:
ht
From: "Steven Rostedt (Google)"
Now that trace_marker can hold more than a 1KB string, and can write as much
as the ring buffer can hold, the trace_seq is not big enough to hold such
writes:
~# a="1234567890"
~# cnt=4080
~# s=""
~# while [ $cnt -gt 10 ]; do
~# s="${s}${a}"
~# cnt=$((cnt-1
From: "Steven Rostedt (Google)"
There's no reason to impose an arbitrary limit on the size of a raw trace
marker. Just let it be as big as the ring buffer itself allows.
And there's also no reason to artificially break up the write to
TRACE_BUF_SIZE, as that's not even used.
From: "Tzvetomir Stoyanov (VMware)"
There are two approaches when changing the size of the ring buffer
sub page:
1. Destroying all pages and allocating new pages with the new size.
2. Allocating new pages, copying the content of the old pages before
destroying them.
The first approach is ea
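A userspace-style sketch of approach 2, with hypothetical names; the point
is only that the old page is copied before it is destroyed, so its content
survives the resize:

#include <stdlib.h>
#include <string.h>

/* Allocate a page of the new size, copy what fits from the old page,
 * then destroy the old one.  On allocation failure the old page is
 * left untouched. */
static void *resize_page(void *old, size_t old_size, size_t new_size)
{
        void *new_page = calloc(1, new_size);

        if (!new_page)
                return NULL;

        memcpy(new_page, old, old_size < new_size ? old_size : new_size);
        free(old);
        return new_page;
}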
From: "Steven Rostedt (Google)"
On failure to allocate ring buffer pages, the pointer to the CPU buffer
pages is freed, but the pages that were allocated previously were not.
Make sure they are freed too.
Fixes: TBD ("tracing: Set new size of the ring buffer sub page")
Signed-off-by: Steven Rost
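A generic sketch of the error path being fixed (hypothetical names): when
one allocation in the loop fails, the pages allocated before it must be
freed as well, not just the array holding their pointers:

#include <stdlib.h>

/* Allocate nr pages; if one fails, free the previously allocated pages
 * and then the pointer array itself, so nothing leaks. */
static void **alloc_pages_array(size_t nr, size_t size)
{
        void **pages = calloc(nr, sizeof(*pages));
        size_t i;

        if (!pages)
                return NULL;

        for (i = 0; i < nr; i++) {
                pages[i] = malloc(size);
                if (!pages[i])
                        goto err;
        }
        return pages;

err:
        while (i--)
                free(pages[i]);         /* free the pages allocated so far */
        free(pages);                    /* then the pointer array itself   */
        return NULL;
}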
From: "Steven Rostedt (Google)"
Add to the documentation how to use the buffer_subbuf_order file to change
the size and how it affects what events can be added to the ring buffer.
Signed-off-by: Steven Rostedt (Google)
---
Documentation/trace/ftrace.rst | 27 +++
1 file
From: "Steven Rostedt (Google)"
The function ring_buffer_subbuf_order_set() just updated the sub-buffers
to the new size, but in doing so it also changes the size of the buffer,
since the total size is determined by nr_pages * subbuf_size. If the
subbuf_size is increased without decreasing the nr_pages,
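A small worked example of that relationship (the numbers are made up): if
the total size is nr_pages * subbuf_size, doubling subbuf_size without
adjusting nr_pages doubles the buffer, so nr_pages has to shrink to keep
the total size stable:

#include <stdio.h>

int main(void)
{
        unsigned long nr_pages = 128, subbuf_size = 4096;       /* order 0 */
        unsigned long total = nr_pages * subbuf_size;           /* 512 KiB */
        unsigned long new_subbuf = 8192;                        /* order 1 */
        unsigned long new_nr_pages = (total + new_subbuf - 1) / new_subbuf;

        printf("old: %lu x %lu = %lu bytes\n", nr_pages, subbuf_size, total);
        printf("new: %lu x %lu = %lu bytes\n",
               new_nr_pages, new_subbuf, new_nr_pages * new_subbuf);
        return 0;
}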
From: "Tzvetomir Stoyanov (VMware)"
In order to introduce sub-buffer size per ring buffer, some internal
refactoring is needed. As ring_buffer_print_page_header() will depend on
the trace_buffer structure, it is moved after the structure definition.
Link:
https://lore.kernel.org/linux-trace-dev
From: "Steven Rostedt (Google)"
As all the subbuffer orders (subbuffer sizes) must be the same throughout
the ring buffer, check the order of the buffers that are doing a CPU
buffer swap in ring_buffer_swap_cpu() to make sure they are the same.
If they are not the same, then fail to do the swap, o
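A minimal sketch of such a check, with a hypothetical structure standing
in for the real ring buffer:

#include <errno.h>

struct rb_sketch {
        int subbuf_order;       /* stand-in for the real buffer's order */
};

static int swap_cpu_sketch(struct rb_sketch *a, struct rb_sketch *b)
{
        if (a->subbuf_order != b->subbuf_order)
                return -EINVAL; /* orders differ: refuse the swap */
        /* ... perform the actual swap ... */
        return 0;
}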
Note, this has been on my todo list since the ring buffer was created back
in 2008.
Tzvetomir last worked on this in 2020 and I need to finally get it in.
His last series was:
https://lore.kernel.org/linux-trace-devel/20211213094825.61876-1-tz.stoya...@gmail.com/
With the description of:
From: "Tzvetomir Stoyanov (VMware)"
Currently the size of one sub buffer page is global for all buffers and
it is hard coded to one system page. In order to introduce configurable
ring buffer sub page size, the internal logic should be refactored to
work with sub page size per ring buffer.
Link:
From: "Steven Rostedt (Google)"
Add a self test that will write into the trace buffer with different
trace sub buffer order sizes.
Signed-off-by: Steven Rostedt (Google)
---
.../ftrace/test.d/00basic/ringbuffer_order.tc | 46 +++
1 file changed, 46 insertions(+)
create mode 10064
From: "Steven Rostedt (Google)"
The ring_buffer_subbuf_order_set() was creating ring_buffer_per_cpu
cpu_buffers with the new subbuffers with the updated order, and if they
were all successfully created, then the ring_buffer's per_cpu buffers
would be freed and replaced by them.
The problem
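A userspace-style sketch of that update strategy (hypothetical names): all
replacement buffers are allocated first, and the old buffers are only
freed once every allocation has succeeded:

#include <stdlib.h>

/* Build all replacement buffers first; only when every allocation has
 * succeeded are the old buffers freed and swapped out. */
static int replace_all(void **bufs, int nr_cpus, size_t new_size)
{
        void **tmp = calloc(nr_cpus, sizeof(*tmp));
        int cpu;

        if (!tmp)
                return -1;

        for (cpu = 0; cpu < nr_cpus; cpu++) {
                tmp[cpu] = malloc(new_size);
                if (!tmp[cpu]) {
                        while (cpu--)
                                free(tmp[cpu]); /* roll back, keep old buffers */
                        free(tmp);
                        return -1;
                }
        }

        for (cpu = 0; cpu < nr_cpus; cpu++) {
                free(bufs[cpu]);                /* replace old with new */
                bufs[cpu] = tmp[cpu];
        }
        free(tmp);
        return 0;
}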
From: "Steven Rostedt (Google)"
Now that the ring buffer specifies the size of its sub buffers, they all
need to be the same size. When doing a read, a swap is done with a spare
page. Make sure they are the same size before doing the swap, otherwise
the read will fail.
Signed-off-by: Steven Rost
From: "Tzvetomir Stoyanov (VMware)"
The trace ring buffer sub page size can be configured, per trace
instance. A new ftrace file "buffer_subbuf_order" is added to get and
set the size of the ring buffer sub page for the current trace instance.
The size must be an order of system page size, that's why
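For illustration, an "order" here is a power-of-two multiple of the system
page size (4096 bytes assumed below), so the resulting sub buffer sizes
look like this:

#include <stdio.h>

int main(void)
{
        unsigned long page_size = 4096; /* assumed system page size */

        for (int order = 0; order <= 3; order++)
                printf("order %d -> sub buffer size %lu bytes\n",
                       order, page_size << order);
        return 0;
}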
From: "Tzvetomir Stoyanov (VMware)"
As the size of the ring buffer sub page can be changed dynamically,
the logic that reads and writes to the buffer should be fixed to take
that into account. Some internal ring buffer APIs are changed:
ring_buffer_alloc_read_page()
ring_buffer_free_read_page()
From: "Steven Rostedt (Google)"
When updating the order of the sub buffers for the main buffer, make sure
that if the snapshot buffer exists, that it gets its order updated as
well.
Signed-off-by: Steven Rostedt (Google)
---
kernel/trace/trace.c | 45 ++-
From: "Steven Rostedt (Google)"
Because the main buffer and the snapshot buffer need to be the same size
for some tracers (otherwise they will fail and disable all tracing), the
tracers need to be stopped while updating the sub buffer sizes so that
the tracers see the main and snapshot buffers with the s
This is a small series getting rid of paravirt patching by switching
completely to alternative patching for the same functionality.
The basic idea is to add the capability to switch from indirect to
direct calls via a special alternative patching option.
This removes _some_ of the paravirt macro
Introduce the macro ALT_NOT_XEN as a short form of
ALT_NOT(X86_FEATURE_XENPV).
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Juergen Gross
---
V3:
- split off from next patch
V5:
- move patch to the start of the series (Boris Petkov)
---
arch/x86/include/asm/paravirt.h | 42
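As the description states, the shorthand boils down to a single define
(ALT_NOT() and X86_FEATURE_XENPV come from the existing x86
alternative/cpufeature headers):

#define ALT_NOT_XEN     ALT_NOT(X86_FEATURE_XENPV)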
As a preparation for replacing paravirt patching completely by
alternative patching, move some backend functions and #defines to
alternative code and header.
Signed-off-by: Juergen Gross
---
V4:
- rename x86_nop() to nop_func() and x86_BUG() to BUG_func() (Boris
Petkov)
---
arch/x86/include/as
Instead of stacking alternative and paravirt patching, use the new
ALT_FLAG_CALL flag to switch those mixed calls to pure alternative
handling.
This eliminates the need to be careful regarding the sequence of
alternative and paravirt patching.
Signed-off-by: Juergen Gross
Acked-by: Peter Zijlstr
Now that paravirt is using the alternatives patching infrastructure,
remove the paravirt patching code.
Signed-off-by: Juergen Gross
Acked-by: Peter Zijlstra (Intel)
---
arch/x86/include/asm/paravirt.h | 13 --
arch/x86/include/asm/paravirt_types.h | 38 ---
arch/x86/inclu