On Wed, 13 Dec 2023 at 08:17, Viresh Kumar wrote:
>
> On 12-12-23, 15:27, Vincent Guittot wrote:
> > Provide the scheduler with feedback about the temporary max available
> > capacity. Unlike arch_update_thermal_pressure, this doesn't need to be
> > filtered as the pressure will happen for dozens of ms or more.
On Tue, Dec 12, 2023 at 08:43:07PM +0300, Arseniy Krasnov wrote:
On 12.12.2023 19:12, Michael S. Tsirkin wrote:
On Tue, Dec 12, 2023 at 06:59:03PM +0300, Arseniy Krasnov wrote:
On 12.12.2023 18:54, Michael S. Tsirkin wrote:
On Tue, Dec 12, 2023 at 12:16:54AM +0300, Arseniy Krasnov wrote:
On 11.12.23 12:43, David Stevens wrote:
From: David Stevens
Hi David,
Add a wakeup event for when the balloon is inflating or deflating.
Userspace can enable this wakeup event to prevent the system from
suspending while the balloon is being adjusted. This allows
/sys/power/wakeup_count to b
On 13.12.2023 11:43, Stefano Garzarella wrote:
> On Tue, Dec 12, 2023 at 08:43:07PM +0300, Arseniy Krasnov wrote:
>>
>>
>> On 12.12.2023 19:12, Michael S. Tsirkin wrote:
>>> On Tue, Dec 12, 2023 at 06:59:03PM +0300, Arseniy Krasnov wrote:
On 12.12.2023 18:54, Michael S. Tsirkin wr
On Wed, Dec 13, 2023 at 12:08:27PM +0300, Arseniy Krasnov wrote:
On 13.12.2023 11:43, Stefano Garzarella wrote:
On Tue, Dec 12, 2023 at 08:43:07PM +0300, Arseniy Krasnov wrote:
On 12.12.2023 19:12, Michael S. Tsirkin wrote:
On Tue, Dec 12, 2023 at 06:59:03PM +0300, Arseniy Krasnov wrote:
On 13.12.2023 12:41, Stefano Garzarella wrote:
> On Wed, Dec 13, 2023 at 12:08:27PM +0300, Arseniy Krasnov wrote:
>>
>>
>> On 13.12.2023 11:43, Stefano Garzarella wrote:
>>> On Tue, Dec 12, 2023 at 08:43:07PM +0300, Arseniy Krasnov wrote:
On 12.12.2023 19:12, Michael S. Tsirkin wr
On Tue, Dec 12, 2023 at 11:15:01AM -0500, Michael S. Tsirkin wrote:
> On Tue, Dec 12, 2023 at 11:00:12AM +0800, Jason Wang wrote:
> > On Tue, Dec 12, 2023 at 12:54 AM Michael S. Tsirkin wrote:
We played around with the suggestions and some other ideas.
I would like to share some initial results.
On Mon, 2023-12-11 at 22:04 -0600, Haitao Huang wrote:
> Hi Kai
>
> On Mon, 27 Nov 2023 03:57:03 -0600, Huang, Kai wrote:
>
> > On Mon, 2023-11-27 at 00:27 +0800, Haitao Huang wrote:
> > > On Mon, 20 Nov 2023 11:45:46 +0800, Huang, Kai
> > > wrote:
> > >
> > > > On Mon, 2023-10-30 at 11:20 -
Hi Jason,
On 12/13/23 05:52, Jason Wang wrote:
On Tue, Dec 12, 2023 at 9:17 PM Maxime Coquelin
wrote:
Virtio-net driver control queue implementation is not safe
when used with VDUSE. If the VDUSE application does not
reply to control queue messages, it currently ends up
hanging the kernel thr
On Wed, 2023-12-13 at 02:31 +0100, Ilya Leoshkevich wrote:
> On Fri, 2023-12-08 at 16:25 +0100, Alexander Potapenko wrote:
> > > A problem with __memset() is that, at least for me, it always ends
> > > up being a call. There is a use case where we need to write only
> > > 1 byte, so I t
On Wed, Dec 13, 2023 at 11:37:23AM +0100, Tobias Huschle wrote:
> On Tue, Dec 12, 2023 at 11:15:01AM -0500, Michael S. Tsirkin wrote:
> > On Tue, Dec 12, 2023 at 11:00:12AM +0800, Jason Wang wrote:
> > > On Tue, Dec 12, 2023 at 12:54 AM Michael S. Tsirkin
> > > wrote:
>
> We played around with t
On Wed, Dec 13, 2023 at 07:00:53AM -0500, Michael S. Tsirkin wrote:
> On Wed, Dec 13, 2023 at 11:37:23AM +0100, Tobias Huschle wrote:
> > On Tue, Dec 12, 2023 at 11:15:01AM -0500, Michael S. Tsirkin wrote:
> > > On Tue, Dec 12, 2023 at 11:00:12AM +0800, Jason Wang wrote:
> > > > On Tue, Dec 12, 202
On Wed, 13 Dec 2023 09:51:38 +0800
Zheng Yejian wrote:
> ---
> kernel/trace/trace_events_hist.c | 18 ++
> 1 file changed, 14 insertions(+), 4 deletions(-)
>
> Steve, thanks for your review!
>
> v2:
> - Introduce tracing_single_release_file_tr() to add the missing call for
>
Hi Rob,
On Tue, Dec 12, 2023 at 12:44:06PM -0600, Rob Herring wrote:
> On Tue, Dec 12, 2023 at 10:38 AM Alexandru Elisei
> wrote:
> >
> > Hi Rob,
> >
> > Thank you so much for the feedback, I'm not very familiar with device tree,
> > and any comments are very useful.
> >
> > On Mon, Dec 11, 2023
On Wed, Dec 13, 2023 at 7:05 AM Alexandru Elisei
wrote:
>
> Hi Rob,
>
> On Tue, Dec 12, 2023 at 12:44:06PM -0600, Rob Herring wrote:
> > On Tue, Dec 12, 2023 at 10:38 AM Alexandru Elisei
> > wrote:
> > >
> > > Hi Rob,
> > >
> > > Thank you so much for the feedback, I'm not very familiar with devi
From: "Steven Rostedt (Google)"
A trace instance may only need to enable specific events. As the eventfs
directory of an instance currently creates all events, which adds overhead,
allow internal instances to be created with just the events in the systems
that they care about. This currently only deals
Trying to probe update_sd_lb_stats() using perf results in the below
message in the kernel log:
trace_kprobe: Could not probe notrace function _text
This is because 'perf probe' specifies the kprobe location as an offset
from '_text':
$ sudo perf probe -D update_sd_lb_stats
p:probe/update_sd
On Wed, Dec 13, 2023 at 01:45:35PM +0100, Tobias Huschle wrote:
> On Wed, Dec 13, 2023 at 07:00:53AM -0500, Michael S. Tsirkin wrote:
> > On Wed, Dec 13, 2023 at 11:37:23AM +0100, Tobias Huschle wrote:
> > > On Tue, Dec 12, 2023 at 11:15:01AM -0500, Michael S. Tsirkin wrote:
> > > > On Tue, Dec 12,
Hi,
On Wed, Dec 13, 2023 at 08:06:44AM -0600, Rob Herring wrote:
> On Wed, Dec 13, 2023 at 7:05 AM Alexandru Elisei
> wrote:
> >
> > Hi Rob,
> >
> > On Tue, Dec 12, 2023 at 12:44:06PM -0600, Rob Herring wrote:
> > > On Tue, Dec 12, 2023 at 10:38 AM Alexandru Elisei
> > > wrote:
> > > >
> > > > H
On Wed, Dec 13, 2023 at 01:45:35PM +0100, Tobias Huschle wrote:
> On Wed, Dec 13, 2023 at 07:00:53AM -0500, Michael S. Tsirkin wrote:
> > On Wed, Dec 13, 2023 at 11:37:23AM +0100, Tobias Huschle wrote:
> > > On Tue, Dec 12, 2023 at 11:15:01AM -0500, Michael S. Tsirkin wrote:
> > > > On Tue, Dec 12,
On Mon, 2023-12-11 at 12:50 +0100, Alexander Potapenko wrote:
> On Tue, Nov 21, 2023 at 11:06 PM Ilya Leoshkevich
> wrote:
> >
> > Like for KASAN, it's useful to temporarily disable KMSAN checks
> > around,
> > e.g., redzone accesses. Introduce kmsan_disable_current() and
> > kmsan_enable_current
On Wed, Dec 13, 2023 at 12:08:27PM +0300, Arseniy Krasnov wrote:
>
>
> On 13.12.2023 11:43, Stefano Garzarella wrote:
> > On Tue, Dec 12, 2023 at 08:43:07PM +0300, Arseniy Krasnov wrote:
> >>
> >>
> >> On 12.12.2023 19:12, Michael S. Tsirkin wrote:
> >>> On Tue, Dec 12, 2023 at 06:59:03PM +0300,
On Wed, Dec 13, 2023 at 10:05:44AM -0500, Michael S. Tsirkin wrote:
> On Wed, Dec 13, 2023 at 12:08:27PM +0300, Arseniy Krasnov wrote:
> >
> >
> > On 13.12.2023 11:43, Stefano Garzarella wrote:
> > > On Tue, Dec 12, 2023 at 08:43:07PM +0300, Arseniy Krasnov wrote:
> > >>
> > >>
> > >> On 12.12.20
From: "Steven Rostedt (Google)"
There's no reason to give an arbitrary limit to the size of a raw trace
marker. Just let it be as big as the size that is allowed by the ring
buffer itself.
And there's also no reason to artificially break up the write to
TRACE_BUF_SIZE, as that's not even used.
On 12/12/23 20:12, dinghao@zju.edu.cn wrote:
>>
>> On 12/10/23 03:27, Dinghao Liu wrote:
>>> Use the scope based resource management (defined in
>>> linux/cleanup.h) to automate resource lifetime
>>> control on struct btt_sb *super in discover_arenas().
>>>
>>> Signed-off-by: Dinghao Liu
>>
On Tue, 12 Dec 2023 12:08:31 -0700
Vishal Verma wrote:
> Introduce a guard(device) macro to lock a 'struct device', and unlock it
> automatically when going out of scope using Scope Based Resource
> Management semantics. A lot of the sysfs attribute writes in
> drivers/dax/bus.c benefit from a cl
On Tue, 12 Dec 2023 12:08:30 -0700
Vishal Verma wrote:
> Add the missing sysfs ABI documentation for the device DAX subsystem.
> Various ABI attributes under this have been present since v5.1, and more
> have been added over time. In preparation for adding a new attribute,
> add this file with th
On 13.12.2023 18:13, Michael S. Tsirkin wrote:
> On Wed, Dec 13, 2023 at 10:05:44AM -0500, Michael S. Tsirkin wrote:
>> On Wed, Dec 13, 2023 at 12:08:27PM +0300, Arseniy Krasnov wrote:
>>>
>>>
>>> On 13.12.2023 11:43, Stefano Garzarella wrote:
On Tue, Dec 12, 2023 at 08:43:07PM +0300, Arsen
On Wed, Dec 13, 2023 at 8:51 AM Alexandru Elisei
wrote:
>
> Hi,
>
> On Wed, Dec 13, 2023 at 08:06:44AM -0600, Rob Herring wrote:
> > On Wed, Dec 13, 2023 at 7:05 AM Alexandru Elisei
> > wrote:
> > >
> > > Hi Rob,
> > >
> > > On Tue, Dec 12, 2023 at 12:44:06PM -0600, Rob Herring wrote:
> > > > On
On Wed, Dec 13, 2023 at 11:22:17AM -0600, Rob Herring wrote:
> On Wed, Dec 13, 2023 at 8:51 AM Alexandru Elisei
> wrote:
> >
> > Hi,
> >
> > On Wed, Dec 13, 2023 at 08:06:44AM -0600, Rob Herring wrote:
> > > On Wed, Dec 13, 2023 at 7:05 AM Alexandru Elisei
> > > wrote:
> > > >
> > > > Hi Rob,
> >
On Wed, Dec 13, 2023 at 08:11:57PM +0300, Arseniy Krasnov wrote:
>
>
> On 13.12.2023 18:13, Michael S. Tsirkin wrote:
> > On Wed, Dec 13, 2023 at 10:05:44AM -0500, Michael S. Tsirkin wrote:
> >> On Wed, Dec 13, 2023 at 12:08:27PM +0300, Arseniy Krasnov wrote:
> >>>
> >>>
> >>> On 13.12.2023 11:43
IPQ9574 and IPQ5332 can have multiple reserved memory regions that need
to be collected as part of the coredump.
Loop through all the regions defined under reserved-memory node in the
devicetree for the remoteproc and add each as a custom segment for
coredump.
Also, update the ELF info for cored
From: "Tzvetomir Stoyanov (VMware)"
In order to introduce sub-buffer size per ring buffer, some internal
refactoring is needed. As ring_buffer_print_page_header() will depend on
the trace_buffer structure, it is moved after the structure definition.
Link:
https://lore.kernel.org/linux-trace-dev
From: "Steven Rostedt (Google)"
On failure to allocate ring buffer pages, the pointer to the CPU buffer
pages is freed, but the pages that were allocated previously were not.
Make sure they are freed too.
Fixes: TBD ("tracing: Set new size of the ring buffer sub page")
Signed-off-by: Steven Rost
From: "Steven Rostedt (Google)"
As all the sub-buffer orders (sub-buffer sizes) must be the same throughout
the ring buffer, check the order of the buffers that are doing a CPU
buffer swap in ring_buffer_swap_cpu() to make sure they are the same.
If they are not the same, then fail to do the swap, o
From: "Tzvetomir Stoyanov (VMware)"
Currently the size of one sub buffer page is global for all buffers and
it is hard coded to one system page. In order to introduce configurable
ring buffer sub page size, the internal logic should be refactored to
work with sub page size per ring buffer.
Link:
From: "Steven Rostedt (Google)"
Add to the documentation how to use the buffer_subbuf_order file to change
the size and how it affects what events can be added to the ring buffer.
Signed-off-by: Steven Rostedt (Google)
---
Documentation/trace/ftrace.rst | 27 +++
1 file
From: "Steven Rostedt (Google)"
Because the main buffer and the snapshot buffer need to be the same for
some tracers (otherwise tracing will fail and be disabled), the tracers
need to be stopped while updating the sub-buffer sizes so that they
see the main and snapshot buffers with the s
From: "Steven Rostedt (Google)"
The ring_buffer_subbuf_order_set() function was creating ring_buffer_per_cpu
cpu_buffers with new sub-buffers of the updated order, and if they were
all successfully created, then the ring_buffer's per_cpu buffers
would be freed and replaced by them.
The problem
From: "Steven Rostedt (Google)"
The function ring_buffer_subbuf_order_set() just updated the sub-buffers
to the new size, but in doing so it also changed the total size of the
buffer, as that size is determined by nr_pages * subbuf_size. If subbuf_size
is increased without decreasing nr_pages,
From: "Tzvetomir Stoyanov (VMware)"
The trace ring buffer sub-page size can be configured per trace
instance. A new ftrace file "buffer_subbuf_order" is added to get and
set the size of the ring buffer sub-page for the current trace instance.
The size must be an order of the system page size, that's why
From: "Steven Rostedt (Google)"
Now that the ring buffer specifies the size of its sub buffers, they all
need to be the same size. When doing a read, a swap is done with a spare
page. Make sure they are the same size before doing the swap, otherwise
the read will fail.
Signed-off-by: Steven Rost
From: "Tzvetomir Stoyanov (VMware)"
As the size of the ring sub buffer page can be changed dynamically,
the logic that reads and writes to the buffer should be fixed to take
that into account. Some internal ring buffer APIs are changed:
ring_buffer_alloc_read_page()
ring_buffer_free_read_page()
From: "Steven Rostedt (Google)"
Using the page order to decide the size of the ring buffer sub-buffers
exposes a bit too much of the implementation. Although the sub-buffers
are only allocated in orders of pages, allow the user to specify the
minimum size of each sub-buffer via kilobytes
From: "Steven Rostedt (Google)"
When updating the order of the sub buffers for the main buffer, make sure
that if the snapshot buffer exists, that it gets its order updated as
well.
Signed-off-by: Steven Rostedt (Google)
---
kernel/trace/trace.c | 45 ++-
From: "Steven Rostedt (Google)"
Add a self test that will write into the trace buffer with differing trace
sub-buffer order sizes.
Signed-off-by: Steven Rostedt (Google)
---
.../ftrace/test.d/00basic/ringbuffer_order.tc | 95 +++
1 file changed, 95 insertions(+)
create mode 10064
From: "Tzvetomir Stoyanov (VMware)"
There are two approaches when changing the size of the ring buffer
sub page:
1. Destroying all pages and allocating new pages with the new size.
2. Allocating new pages, copying the content of the old pages before
destroying them.
The first approach is ea
[
This is hopefully the last version before I push this to Linux next.
It's now going through the final phases of my testing.
]
Note, this has been on my todo list since the ring buffer was created back
in 2008.
Tzvetomir last worked on this in 2021 and I need to finally get it in.
His last
On Wed, Dec 13, 2023 at 11:44 AM Alexandru Elisei
wrote:
>
> On Wed, Dec 13, 2023 at 11:22:17AM -0600, Rob Herring wrote:
> > On Wed, Dec 13, 2023 at 8:51 AM Alexandru Elisei
> > wrote:
> > >
> > > Hi,
> > >
> > > On Wed, Dec 13, 2023 at 08:06:44AM -0600, Rob Herring wrote:
> > > > On Wed, Dec 13
Document the compatible for the MSM8926-based Motorola Moto G 4G smartphone.
Signed-off-by: André Apitzsch
---
Documentation/devicetree/bindings/arm/qcom.yaml | 1 +
1 file changed, 1 insertion(+)
diff --git a/Documentation/devicetree/bindings/arm/qcom.yaml
b/Documentation/devicetree/bindings/
On 12/13/23 21:33, André Apitzsch wrote:
This dts adds support for Motorola Moto G 4G released in 2013.
I have a similar one in my drawer.. not the 4g kind, titan IIRC?
Wasn't this one codenamed thea?
Konrad
This dts adds support for Motorola Moto G 4G released in 2013.
Add a device tree with initial support for:
- GPIO keys
- Hall sensor
- SDHCI
- Vibrator
Signed-off-by: André Apitzsch
---
André Apitzsch (2):
dt-bindings: arm: qcom: Add Motorola Moto G 4G
ARM: dts: qcom: msm8926-motoro
From: "Steven Rostedt (Google)"
As the ring buffer recording requires cmpxchg() to work, if the
architecture does not support cmpxchg in NMI, then do not do any recording
within an NMI.
Signed-off-by: Steven Rostedt (Google)
---
kernel/trace/ring_buffer.c | 6 ++
1 file changed, 6 insertio
On Wed, Dec 13, 2023 at 03:15:15PM +, Hardevsinh Palaniya wrote:
>Hello @Bjorn Andersson,
Please use appropriate mail list etiquette, and avoid HTML and
top-posting in your responses.
>
>"strscpy_pad" itself takes care of null-terminated strings. So, there
>will be no leak.
Y
On Wed, 13 Dec 2023 20:09:14 +0530
Naveen N Rao wrote:
> Trying to probe update_sd_lb_stats() using perf results in the below
> message in the kernel log:
> trace_kprobe: Could not probe notrace function _text
>
> This is because 'perf probe' specifies the kprobe location as an offset
> from '
At present there are ~200 usages of device_lock() in the kernel. Some of
those usages lead to "goto unlock;" patterns which have proven to be
error prone. Define a "device" guard() definition to allow for those to
be cleaned up and prevent new ones from appearing.
Link:
http://lore.kernel.org/r/6
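As a rough illustration of what such a guard enables, here is a minimal
sketch assuming the DEFINE_GUARD() helper from linux/cleanup.h; the
attribute callback is hypothetical and the exact definition in the patch
may differ:

#include <linux/cleanup.h>
#include <linux/device.h>

DEFINE_GUARD(device, struct device *, device_lock(_T), device_unlock(_T))

static ssize_t example_show(struct device *dev,
			    struct device_attribute *attr, char *buf)
{
	guard(device)(dev);	/* device_unlock() runs on every return path */

	return sysfs_emit(buf, "%d\n", dev->offline);
}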
KMSAN relies on memblock returning all available pages to it
(see kmsan_memblock_free_pages()). It partitions these pages into 3
categories: pages available to the buddy allocator, shadow pages and
origin pages. This partitioning is static.
If new pages appear after kmsan_init_runtime(), it is con
Architectures use assembly code to initialize ftrace_regs and call
ftrace_ops_list_func(). Therefore, from KMSAN's point of view,
ftrace_regs is poisoned on ftrace_ops_list_func() entry. This causes
KMSAN warnings when running the ftrace testsuite.
Fix by trusting the architecture-specific assembly
v2: https://lore.kernel.org/lkml/20231121220155.1217090-1-...@linux.ibm.com/
v2 -> v3: Drop kmsan_memmove_metadata() and strlcpy() patches;
Remove kmsan_get_metadata() stub;
Move kmsan_enable_current() and kmsan_disable_current() to
include/linux/kmsan.h, explain why a
The inline assembly block in s390's chsc() stores that much.
Reviewed-by: Alexander Potapenko
Signed-off-by: Ilya Leoshkevich
---
mm/kmsan/instrumentation.c | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/mm/kmsan/instrumentation.c b/mm/kmsan/instrumentation.c
index c
Comparing pointers with TASK_SIZE does not make sense when kernel and
userspace overlap. Assume that we are handling user memory access in
this case.
Reported-by: Alexander Gordeev
Reviewed-by: Alexander Potapenko
Signed-off-by: Ilya Leoshkevich
---
mm/kmsan/hooks.c | 3 ++-
1 file changed, 2
Even though the KMSAN warnings generated by memchr_inv() are suppressed
by metadata_access_enable(), its return value may still be poisoned.
The reason is that the last iteration of memchr_inv() returns
`*start != value ? start : NULL`, where *start is poisoned. Because of
this, somewhat counterin
Building the kernel with CONFIG_SLUB_DEBUG and CONFIG_KMSAN causes
KMSAN to complain about touching redzones in kfree().
Fix by extending the existing KASAN-related metadata_access_enable()
and metadata_access_disable() functions to KMSAN.
Acked-by: Vlastimil Babka
Signed-off-by: Ilya Leoshkevic
The value assigned to prot is immediately overwritten on the next line
with PAGE_KERNEL. The right hand side of the assignment has no
side-effects.
Fixes: b073d7f8aee4 ("mm: kmsan: maintain KMSAN metadata for page operations")
Suggested-by: Alexander Gordeev
Reviewed-by: Alexander Potapenko
Sign
KMSAN warns about check_canary() accessing the canary.
The reason is that, even though set_canary() is properly instrumented
and sets shadow, slub explicitly poisons the canary's address range
afterwards.
Unpoisoning the canary is not the right thing to do: only
check_canary() is supposed to ever
The constraints of the DFLTCC inline assembly are not precise: they
do not communicate the size of the output buffers to the compiler, so
it cannot automatically instrument it.
Add the manual kmsan_unpoison_memory() calls for the output buffers.
The logic is the same as in [1].
[1]
https://githu
Like for KASAN, it's useful to temporarily disable KMSAN checks around,
e.g., redzone accesses. Introduce kmsan_disable_current() and
kmsan_enable_current(), which are similar to their KASAN counterparts.
Make them reentrant in order to handle memory allocations in interrupt
context. Repurpose the
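A minimal sketch of what a reentrant disable/enable pair could look like,
assuming a per-task depth counter (the field name and its placement are
assumptions, not the actual patch):

#include <linux/sched.h>

static inline void kmsan_disable_current(void)
{
	/* Nesting-safe: checks stay disabled until the depth returns to 0. */
	current->kmsan_ctx.depth++;
}

static inline void kmsan_enable_current(void)
{
	current->kmsan_ctx.depth--;
}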
All other sanitizers are disabled for these components as well.
While at it, add a comment to boot and purgatory.
Reviewed-by: Alexander Gordeev
Reviewed-by: Alexander Potapenko
Signed-off-by: Ilya Leoshkevich
---
arch/s390/boot/Makefile | 2 ++
arch/s390/kernel/vdso32/Makefile | 3 ++
On s390 the virtual address 0 is valid (current CPU's lowcore is mapped
there), therefore KMSAN should not complain about it.
Disable the respective check on s390. There doesn't seem to be a
Kconfig option to describe this situation, so explicitly check for
s390.
Reviewed-by: Alexander Potapenko
Add a KMSAN check to the CKSM inline assembly, similar to how it was
done for ASAN in commit e42ac7789df6 ("s390/checksum: always use cksm
instruction").
Acked-by: Alexander Gordeev
Reviewed-by: Alexander Potapenko
Signed-off-by: Ilya Leoshkevich
---
arch/s390/include/asm/checksum.h | 2 ++
1
It's useful to have both tests and kmsan.panic=1 during development,
but right now the warnings that the tests cause lead to kernel panics.
Temporarily set kmsan.panic=0 for the duration of the KMSAN testing.
Reviewed-by: Alexander Potapenko
Signed-off-by: Ilya Leoshkevich
---
mm/kmsan/kmsan
Prevent KMSAN from complaining about buffers filled by cpacf_trng()
being uninitialized.
Tested-by: Alexander Gordeev
Reviewed-by: Alexander Potapenko
Signed-off-by: Ilya Leoshkevich
---
arch/s390/include/asm/cpacf.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/s390/include/asm/
Diagnose 224 stores 4k bytes, which cannot be deduced from the inline
assembly constraints. This leads to KMSAN false positives.
Unpoison the output buffer manually with kmsan_unpoison_memory().
Signed-off-by: Ilya Leoshkevich
---
arch/s390/kernel/diag.c | 2 ++
1 file changed, 2 insertions(+)
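The general pattern described in the last few patches (telling KMSAN that a
buffer filled by inline assembly is initialized) looks roughly like this;
the wrapper below is illustrative rather than the exact s390 code:

#include <linux/errno.h>
#include <linux/kmsan-checks.h>
#include <linux/mm.h>

static int example_diag224(void *ptr)
{
	int rc = -EOPNOTSUPP;

	/* ... inline assembly that stores 4k bytes at ptr and sets rc ... */

	/* The asm constraints do not describe the store, so unpoison the
	 * output buffer by hand. */
	kmsan_unpoison_memory(ptr, PAGE_SIZE);
	return rc;
}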
s390 uses assembly code to initialize ftrace_regs and call
kprobe_ftrace_handler(). Therefore, from KMSAN's point of view,
ftrace_regs is poisoned on kprobe_ftrace_handler() entry. This causes
KMSAN warnings when running the ftrace testsuite.
Fix by trusting the assembly code and always unpois
Replace the x86-specific asm/pgtable_64_types.h #include with the
linux/pgtable.h one, which all architectures have.
While at it, sort the headers alphabetically for the sake of
consistency with other KMSAN code.
Fixes: f80be4571b19 ("kmsan: add KMSAN runtime core")
Suggested-by: Heiko Carstens
Each s390 CPU has lowcore pages associated with it. Each CPU sees its
own lowcore at virtual address 0 through a hardware mechanism called
prefixing. Additionally, all lowcores are mapped to non-0 virtual
addresses stored in the lowcore_ptr[] array.
When lowcore is accessed through virtual address
Add KMSAN support for the s390 implementations of the string functions.
Do this similar to how it's already done for KASAN, except that the
optimized memset{16,32,64}() functions need to be disabled: it's
important for KMSAN to know that they initialized something.
The way boot code is built with
When building the kmsan test as a module, modpost fails with the
following error message:
ERROR: modpost: "panic_on_kmsan" [mm/kmsan/kmsan_test.ko] undefined!
Export panic_on_kmsan in order to improve the KMSAN usability for
modules.
Reviewed-by: Alexander Potapenko
Signed-off-by: Ilya Leos
stcctm() uses the "Q" constraint for dest, therefore KMSAN does not
understand that it fills multiple doublewords pointed to by dest, not
just one. This results in false positives.
Unpoison the whole dest manually with kmsan_unpoison_memory().
Reported-by: Alexander Gordeev
Signed-off-by: Ilya L
This is normally done by the generic entry code, but the
kernel_stack_overflow() flow bypasses it.
Reviewed-by: Alexander Potapenko
Signed-off-by: Ilya Leoshkevich
---
arch/s390/kernel/traps.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/arch/s390/kernel/traps.c b/arch/s390/kernel/
Avoid false KMSAN negatives with SLUB_DEBUG by allowing
kmsan_slab_free() to poison the freed memory, and by preventing
init_object() from unpoisoning new allocations by using __memset().
There are two alternatives to this approach. First, init_object()
can be marked with __no_sanitize_memory. Thi
The unwind code can read uninitialized frames. Furthermore, even in
the good case, KMSAN does not emit shadow for backchains. Therefore
disable it for the unwinding functions.
Reviewed-by: Alexander Potapenko
Signed-off-by: Ilya Leoshkevich
---
arch/s390/kernel/unwind_bc.c | 4
1 file chan
KMSAN generates the following false positives on s390x:
[6.063666] DEBUG_LOCKS_WARN_ON(lockdep_hardirqs_enabled())
[ ...]
[6.577050] Call Trace:
[6.619637] [<0690d2de>] check_flags+0x1fe/0x210
[6.665411] ([<0690d2da>] check_flags+0x1fa/0x210)
[6.707478]
arch_kmsan_get_meta_or_null() finds the lowcore shadow by querying the
prefix and calling kmsan_get_metadata() again.
kmsan_virt_addr_valid() delegates to virt_addr_valid().
Signed-off-by: Ilya Leoshkevich
---
arch/s390/include/asm/kmsan.h | 43 +++
1 file change
put_user() uses inline assembly with precise constraints, so Clang is
in principle capable of instrumenting it automatically. Unfortunately,
one of the constraints contains a dereferenced user pointer, and Clang
does not currently distinguish user and kernel pointers. Therefore
KMSAN attempts to ac
Improve the readability by replacing the custom aligning logic with
ALIGN_DOWN(). Unlike other places where a similar sequence is used,
there is no size parameter that needs to be adjusted, so the standard
macro fits.
Reviewed-by: Alexander Potapenko
Signed-off-by: Ilya Leoshkevich
---
mm/kmsan
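As a quick illustration of the kind of rewrite meant here (the function name
and the alignment constant are hypothetical):

#include <linux/align.h>

#define EXAMPLE_ORIGIN_SIZE 4

static unsigned long origin_align(unsigned long addr)
{
	/* Replaces the open-coded form: addr & ~(EXAMPLE_ORIGIN_SIZE - 1UL) */
	return ALIGN_DOWN(addr, EXAMPLE_ORIGIN_SIZE);
}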
The pages for the KMSAN metadata associated with most kernel mappings
are taken from memblock by the common code. However, vmalloc and module
metadata needs to be defined by the architectures.
Be a little bit more careful than x86: allocate exactly MODULES_LEN
for the module shadow and origins, an
It should be possible to have inline functions in the s390 header
files that call kmsan_unpoison_memory(). The problem is that these
header files might be included by the decompressor, which does not
contain the KMSAN runtime, causing linker errors.
Not compiling these calls if __SANITIZE_MEMORY__ i
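A sketch of the header-side guard this describes; the helper name is
hypothetical and only the #ifdef condition is taken from the text:

#include <linux/kmsan-checks.h>
#include <linux/types.h>

static inline void example_unpoison(void *buf, size_t len)
{
#ifdef __SANITIZE_MEMORY__
	/* Only compiled when this translation unit is instrumented by KMSAN,
	 * so the decompressor, which lacks the KMSAN runtime, never
	 * references the symbol. */
	kmsan_unpoison_memory(buf, len);
#endif
}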
Comparing pointers with TASK_SIZE does not make sense when kernel and
userspace overlap. Skip the comparison when this is the case.
Reviewed-by: Alexander Potapenko
Signed-off-by: Ilya Leoshkevich
---
mm/kmsan/instrumentation.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git
Adjust the stack size for the KMSAN-enabled kernel like it was done
for the KASAN-enabled one in commit 7fef92ccadd7 ("s390/kasan: double
the stack size"). Both tools have similar requirements.
Reviewed-by: Alexander Gordeev
Reviewed-by: Alexander Potapenko
Signed-off-by: Ilya Leoshkevich
---
Now that everything else is in place, enable KMSAN in Kconfig.
Signed-off-by: Ilya Leoshkevich
---
arch/s390/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 3bec98d20283..160ad2220c53 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@
Dan Williams wrote:
> At present there are ~200 usages of device_lock() in the kernel. Some of
> those usages lead to "goto unlock;" patterns which have proven to be
> error prone. Define a "device" guard() definition to allow for those to
> be cleaned up and prevent new ones from appearing.
>
> L
On Wed, 2023-12-13 at 15:02 -0800, Dan Williams wrote:
> At present there are ~200 usages of device_lock() in the kernel. Some of
> those usages lead to "goto unlock;" patterns which have proven to be
> error prone. Define a "device" guard() definition to allow for those to
"Define a definition" s
On Tue, 2023-12-12 at 15:27 +0100, Vincent Guittot wrote:
> Provide the scheduler with feedback about the temporary max available
> capacity. Unlike arch_update_thermal_pressure, this doesn't need to be
> filtered as the pressure will happen for dozens of ms or more.
>
> Signed-off-by: Vincent Guitto
KASAN reports the following issue. The root cause is that the 'hist'
file of an instance can be opened and its 'trace_event_file' accessed in
hist_show() after 'trace_event_file' has been freed because the instance
was removed. The 'hist_debug' file has the same problem. To fix it, call
tracing_{open,release}_file_tr()
From: "Steven Rostedt (Google)"
Each event has a 27 bit timestamp delta that is used to hold the delta
from the last event. If the time between events is greater than 2^27, then
a timestamp is added that holds a 59 bit absolute timestamp.
Until a389d86f7fd09 ("ring-buffer: Have nested events sti
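Schematically, the rule described above boils down to the following check
(the names and the helper are illustrative, not the ring buffer's actual
internals):

#include <linux/types.h>

#define TS_DELTA_BITS	27

static bool needs_absolute_timestamp(u64 now_ts, u64 last_ts)
{
	u64 delta = now_ts - last_ts;

	/* A delta that no longer fits in 27 bits forces a separate event
	 * carrying a 59-bit absolute timestamp. */
	return delta >= (1ULL << TS_DELTA_BITS);
}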
Hello:
This patch was applied to netdev/net.git (main)
by Jakub Kicinski :
On Mon, 11 Dec 2023 19:23:17 +0300 you wrote:
> We need to do signed arithmetic if we expect condition
> `if (bytes < 0)` to be possible
>
> Found by Linux Verification Center (linuxtesting.org) with SVACE
>
> Fixes: 06a
Linus,
Looking for some advice on this.
tl;dr: The ring-buffer timestamp requires a 64-bit cmpxchg to keep the
timestamps in sync (only in the slow paths). I was told that 64-bit cmpxchg
can be extremely slow on 32-bit architectures. So I created a rb_time_t
that on 64-bit was a normal local64_t
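For context, a schematic of the rb_time_t idea mentioned above: the 64-bit
form follows the text (a plain local64_t), while the 32-bit form shown here
is only a rough sketch of splitting the timestamp into pieces that 32-bit
local ops can update:

#include <asm/local.h>
#include <asm/local64.h>

#ifdef CONFIG_64BIT
typedef struct {
	local64_t	time;	/* full timestamp, updated with 64-bit local ops */
} rb_time_t;
#else
typedef struct {
	local_t	cnt;	/* sequence counter used to detect torn updates */
	local_t	top;	/* upper bits of the timestamp */
	local_t	bottom;	/* lower bits of the timestamp */
} rb_time_t;
#endif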
On Wed, 13 Dec 2023 21:11:26 -0500
Steven Rostedt wrote:
> @@ -3720,6 +3517,7 @@ rb_reserve_next_event(struct trace_buffer *buffer,
> struct rb_event_info info;
> int nr_loops = 0;
> int add_ts_default;
> + static int once;
>
> /* ring buffer does cmpxchg, make sure
On 2023-12-13 21:11, Steven Rostedt wrote:
From: "Steven Rostedt (Google)"
Each event has a 27 bit timestamp delta that is used to hold the delta
from the last event. If the time between events is greater than 2^27, then
a timestamp is added that holds a 59 bit absolute timestamp.
Until a389d8