On Tue, Feb 11, 2025 at 7:31 PM Randy Dunlap wrote:
>
>
>
> On 2/11/25 7:21 PM, jef...@chromium.org wrote:
> > From: Jeff Xu
> >
>
> > ---
> > include/linux/userprocess.h | 18 ++
> > init/Kconfig                | 18 ++
> > security/Kconfig            | 18 ++
From: Jeff Xu
Provide infrastructure to mseal system mappings. Establish
two kernel configs (CONFIG_MSEAL_SYSTEM_MAPPINGS,
ARCH_HAS_MSEAL_SYSTEM_MAPPINGS) and a header file (userprocess.h)
for future patches.
As discussed during the mseal() upstreaming process [1], mseal() protects
the VMAs of a given
From: Jeff Xu
The commit message in the first patch contains the full description of
this series.
--
History:
V5
- Remove kernel cmd line (Lorenzo Stoakes)
- Add test info (Lorenzo Stoakes)
- Add threat model info (Lorenzo Stoakes)
- Fix x86 selftest: test_mremap_vdso
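For readers skimming the series, a minimal sketch of what the userprocess.h infrastructure could provide is below. The helper name user_mapping_seal_flags() is illustrative, not taken from the posted patch, and it assumes the existing VM_SEALED vma flag is reused; the real header may look different.

#ifndef _LINUX_USERPROCESS_H
#define _LINUX_USERPROCESS_H

#include <linux/mm.h>

/*
 * Illustrative helper (name is hypothetical): return the vma flag that
 * seals a system mapping when CONFIG_MSEAL_SYSTEM_MAPPINGS is enabled,
 * so callers can OR it into the flags they pass to
 * _install_special_mapping().
 */
static inline unsigned long user_mapping_seal_flags(void)
{
#ifdef CONFIG_MSEAL_SYSTEM_MAPPINGS
	return VM_SEALED;
#else
	return 0;
#endif
}

#endif /* _LINUX_USERPROCESS_H */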
On 2/11/25 7:21 PM, jef...@chromium.org wrote:
> From: Jeff Xu
>
> ---
> include/linux/userprocess.h | 18 ++
> init/Kconfig                | 18 ++
> security/Kconfig            | 18 ++
> 3 files changed, 54 insertions(+)
> create mode 10064
From: Jeff Xu
Provide support to mseal the uprobe mapping.
Unlike the other system mappings, the uprobe mapping is not
established during program startup. However, its lifetime matches the
process's lifetime, so it can be sealed from creation.
Signed-off-by: Jeff Xu
---
kernel/events/uprobe
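A rough illustration of how such a long-lived special mapping could be sealed at creation time; the function below and the flag helper are hypothetical (the helper is the one sketched for the first patch above), and the actual uprobe change may differ. The same pattern applies at the vdso/vvar install sites in the arm64, UML, and x86 patches that follow.

#include <linux/err.h>
#include <linux/mm.h>

/*
 * Hypothetical example: OR the sealing flag into vm_flags when a
 * long-lived special mapping is installed, so the vma is sealed from
 * the moment it exists.
 */
static int example_install_sealed_mapping(struct mm_struct *mm,
					  unsigned long addr,
					  const struct vm_special_mapping *spec)
{
	unsigned long flags = VM_EXEC | VM_MAYEXEC | VM_DONTCOPY | VM_IO;
	struct vm_area_struct *vma;

	flags |= user_mapping_seal_flags();	/* VM_SEALED when configured */

	vma = _install_special_mapping(mm, addr, PAGE_SIZE, flags, spec);
	return IS_ERR(vma) ? PTR_ERR(vma) : 0;
}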
From: Jeff Xu
Provide support for CONFIG_MSEAL_SYSTEM_MAPPINGS on arm64, covering
the vdso and vvar mappings, as well as the compat-mode vectors and
sigpage mappings.
Production release testing passes on Android and Chrome OS.
Signed-off-by: Jeff Xu
---
arch/arm64/Kconfig | 1 +
arch/arm64/kernel/vdso.c | 23
From: Jeff Xu
Provide support for CONFIG_MSEAL_SYSTEM_MAPPINGS on UML, covering
the vdso.
Testing passes on UML.
Signed-off-by: Jeff Xu
Tested-by: Benjamin Berg
---
arch/um/Kconfig        | 1 +
arch/x86/um/vdso/vma.c | 7 +--
2 files changed, 6 insertions(+), 2 deletions(-)
diff --git
From: Jeff Xu
Provide support for CONFIG_MSEAL_SYSTEM_MAPPINGS on x86-64,
covering the vdso, vvar, and vvar_vclock mappings.
Production release testing passes on Android and Chrome OS.
Signed-off-by: Jeff Xu
---
arch/x86/Kconfig | 1 +
arch/x86/entry/vdso/vma.c | 17 +++--
2 files ch
From: Jeff Xu
Add code to detect whether the vdso is memory sealed and skip the
test if it is.
Signed-off-by: Jeff Xu
---
.../testing/selftests/x86/test_mremap_vdso.c | 38 +++
1 file changed, 38 insertions(+)
diff --git a/tools/testing/selftests/x86/test_mremap_vdso.c
b/tools/testi
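One plausible way for a test to make that check, assuming the sealed state is reported as the "sl" flag in the VmFlags line of /proc/self/smaps; this is a sketch, not the actual selftest code.

#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Return true if the [vdso] mapping's VmFlags line contains "sl". */
static bool vdso_is_sealed(void)
{
	FILE *f = fopen("/proc/self/smaps", "r");
	char line[1024];
	bool in_vdso = false, sealed = false;

	if (!f)
		return false;

	while (fgets(line, sizeof(line), f)) {
		if (isxdigit((unsigned char)line[0]))
			/* Mapping header lines start with a hex address. */
			in_vdso = strstr(line, "[vdso]") != NULL;
		else if (in_vdso && !strncmp(line, "VmFlags:", 8)) {
			sealed = strstr(line, " sl ") != NULL;
			break;
		}
	}
	fclose(f);
	return sealed;
}

int main(void)
{
	printf("vdso sealed: %s\n", vdso_is_sealed() ? "yes" : "no");
	return 0;
}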
On Tue, 11 Feb 2025 20:59:24 +0200 Gal Pressman wrote:
> > Everything else looks very good, though, yes, I would agree with the
> > alignment comments made down-thread. This was "accidentally correct"
> > before in the sense that the end of the struct would be padded for
> > alignment, but isn't gu
On 11/02/2025 19:49, Kees Cook wrote:
>> @@ -659,7 +654,7 @@ static inline void ip_tunnel_info_opts_set(struct
>> ip_tunnel_info *info,
>> {
>> 	info->options_len = len;
>> 	if (len > 0) {
>> -		memcpy(ip_tunnel_info_opts(info), from, len);
>> +		memcpy(info->options,
On 2/11/25 07:13, Mark Brown wrote:
> On Mon, Feb 10, 2025 at 11:08:27PM -0500, Ethan Carter Edwards wrote:
>> There is a possibility for an uninitialized *ret* variable to be
>> returned in some code paths.
>>
>> Setting to 0 prevents a random value from being returned.
>
> That'll shut up the wa
On Sun, Feb 09, 2025 at 12:18:53PM +0200, Gal Pressman wrote:
> Remove the hidden assumption that options are allocated at the end of
> the struct, and teach the compiler about them using a flexible array.
>
> With this, we can revert the unsafe_memcpy() call we have in
> tun_dst_unclone() [1], an
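A simplified illustration of the change being discussed; the struct below is made up, not the real ip_tunnel_info. With a trailing flexible array the compiler can see where the options live, instead of relying on pointer arithmetic past the end of the struct, which is what forced the unsafe_memcpy() workaround.

#include <string.h>

struct tunnel_info_example {
	unsigned short	options_len;
	unsigned char	options[];	/* previously implied to live after the struct */
};

/*
 * Old pattern the thread is moving away from (sketch):
 *	#define tunnel_info_opts(info)	((void *)((info) + 1))
 *	memcpy(tunnel_info_opts(info), from, len);
 *
 * With the flexible array the destination is a real member, so
 * FORTIFY/-Warray-bounds style checks can reason about it:
 */
static void set_options_example(struct tunnel_info_example *info,
				const void *from, unsigned short len)
{
	info->options_len = len;
	if (len > 0)
		memcpy(info->options, from, len);
}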
On 2/11/25 08:19, Ethan Carter Edwards wrote:
There is a possibility for an uninitialized *ret* variable to be
returned in some code paths.
This explicitly returns 0 when there is no error. It also removes the
goto that returned *ret* and simply returns in place.
Closes: https://scan5.scan.coverity.com/#/proje
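The shape of the issue, with hypothetical functions for illustration; the open question raised in the thread is whether simply initializing ret hides a path that should be handled explicitly.

#include <linux/errno.h>

struct foo;
int foo_setup_a(struct foo *f);
int foo_setup_b(struct foo *f);

enum { FOO_MODE_A, FOO_MODE_B, FOO_MODE_C };

/* Buggy shape: ret is returned uninitialized for FOO_MODE_C. */
static int foo_configure(struct foo *f, int mode)
{
	int ret;

	if (mode == FOO_MODE_A)
		ret = foo_setup_a(f);
	else if (mode == FOO_MODE_B)
		ret = foo_setup_b(f);

	return ret;
}

/*
 * "int ret = 0;" silences the warning; the alternative is to make the
 * untaken path explicit, e.g.:
 */
static int foo_configure_explicit(struct foo *f, int mode)
{
	switch (mode) {
	case FOO_MODE_A:
		return foo_setup_a(f);
	case FOO_MODE_B:
		return foo_setup_b(f);
	default:
		return -EINVAL;
	}
}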
On 11/02/25 06:22, Dave Hansen wrote:
> On 2/11/25 05:33, Valentin Schneider wrote:
>>> 2. It's wrong to assume that TLB entries are only populated for
>>> addresses you access - thanks to speculative execution, you have to
>>> assume that the CPU might be populating random TLB entries all over
>>>
On 11/02/25 14:03, Mark Rutland wrote:
> On Tue, Feb 11, 2025 at 02:33:51PM +0100, Valentin Schneider wrote:
>> On 10/02/25 23:08, Jann Horn wrote:
>> > 2. It's wrong to assume that TLB entries are only populated for
>> > addresses you access - thanks to speculative execution, you have to
>> > assu
On 09/02/2025 22:16, Ilya Maximets wrote:
> Ideally we would have a proper union with all the potential option types
> instead of this hacky construct. But if that's not the way to go, then
> 8 bytes may indeed be the way, as it is the maximum guaranteed alignment
> for allocations and the cur
On 11/02/2025 2:01, Justin Stitt wrote:
On Mon, Feb 10, 2025 at 09:45:05AM -0800, Kees Cook wrote:
GCC can see that the value range for "order" is capped, but this leads
it to consider that it might be negative, leading to a false positive
warning (with GCC 15 with -Warray-bounds -fdiagnostic
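An illustrative, made-up reduction of that kind of false positive, and the usual way to give the compiler the missing lower bound; this is not the code from the thread.

#define EXAMPLE_MAX_ORDER	11

static unsigned long example_table[EXAMPLE_MAX_ORDER];

/*
 * GCC's value-range tracking may decide a signed "order" could be
 * negative and warn under -Warray-bounds, even if callers never pass a
 * negative value.  Making the valid range explicit (or switching to an
 * unsigned type) typically silences it.
 */
static unsigned long example_lookup(int order)
{
	if (order < 0 || order >= EXAMPLE_MAX_ORDER)
		return 0;

	return example_table[order];
}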
On 2/11/25 05:33, Valentin Schneider wrote:
>> 2. It's wrong to assume that TLB entries are only populated for
>> addresses you access - thanks to speculative execution, you have to
>> assume that the CPU might be populating random TLB entries all over
>> the place.
> Gotta love speculation. Now it
On Tue, Feb 11, 2025 at 02:33:51PM +0100, Valentin Schneider wrote:
> On 10/02/25 23:08, Jann Horn wrote:
> > On Mon, Feb 10, 2025 at 7:36 PM Valentin Schneider
> > wrote:
> >> What if isolated CPUs unconditionally did a TLBi as late as possible in
> >> the stack right before returning to userspa
On 10/02/25 23:08, Jann Horn wrote:
> On Mon, Feb 10, 2025 at 7:36 PM Valentin Schneider
> wrote:
>> What if isolated CPUs unconditionally did a TLBi as late as possible in
>> the stack right before returning to userspace? This would mean that upon
>> re-entering the kernel, an isolated CPU's TLB
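A rough sketch of the idea being debated across these messages, with every name below hypothetical; this is not a proposed patch. The objection in the thread is that speculation can repopulate TLB entries at any point, so a late flush only helps if nothing stale can be used afterwards.

#include <linux/percpu.h>
#include <linux/smp.h>

/* Hypothetical helpers standing in for arch/isolation specifics. */
bool cpu_is_isolated_example(int cpu);
void local_tlb_flush_all_example(void);

static DEFINE_PER_CPU(bool, tlb_flush_deferred_example);

/* Hypothetical hook run as late as possible before returning to userspace. */
static void example_return_to_user_flush(void)
{
	if (cpu_is_isolated_example(smp_processor_id()) &&
	    this_cpu_read(tlb_flush_deferred_example)) {
		local_tlb_flush_all_example();
		this_cpu_write(tlb_flush_deferred_example, false);
	}
}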
On 4. Feb 2025, at 17:44, Thorsten Blum wrote:
> On 14. Jan 2025, at 22:49, Thorsten Blum wrote:
>> Add the __counted_by compiler attribute to the flexible array member
>> attrs to improve access bounds-checking via CONFIG_UBSAN_BOUNDS and
>> CONFIG_FORTIFY_SOURCE.
>>
>> Increment num before addin
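A simplified illustration of the pattern in the patch (the struct is made up): annotate the flexible array with __counted_by() and update the counter before touching the new element, so FORTIFY_SOURCE/UBSAN bounds checks see a valid count.

#include <linux/overflow.h>
#include <linux/slab.h>
#include <linux/types.h>

struct attr_list_example {
	unsigned int	num;
	u32		attrs[] __counted_by(num);
};

/*
 * Grow the list by one element; "list" is assumed non-NULL and error
 * handling is simplified (the original allocation leaks on failure).
 */
static struct attr_list_example *example_add_attr(struct attr_list_example *list,
						  u32 value)
{
	list = krealloc(list, struct_size(list, attrs, list->num + 1),
			GFP_KERNEL);
	if (!list)
		return NULL;

	/* Increment num before writing attrs[num - 1]. */
	list->num++;
	list->attrs[list->num - 1] = value;

	return list;
}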
On Mon, Feb 10, 2025 at 11:08:27PM -0500, Ethan Carter Edwards wrote:
> There is a possibility for an uninitialized *ret* variable to be
> returned in some code paths.
>
> Setting to 0 prevents a random value from being returned.
That'll shut up the warning but is the warning trying to tell us th
On Mon, Feb 10, 2025 at 01:35:52PM -0800, Jeff Xu wrote:
> Hi Lorenzo,
>
> Gentle ping for my clarification questions.
>
> I also tried the new ioctl PROCMAP_QUERY, please see below for details.
>
Hi Jeff,
Sorry, I thought you'd be sending a new version, which is why I didn't reply. I will
take a look th
-Wflex-array-member-not-at-end was introduced in GCC-14, and we are
getting ready to enable it, globally.
So, in order to avoid ending up with flexible-array members in the
middle of other structs, we use the `__struct_group()` helper to
separate the flexible arrays from the rest of the members in
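A simplified example of the __struct_group()/struct_group_tagged() approach, with made-up struct names: wrap the fixed members in a tagged group so other structures can embed just that header part, keeping the flexible array at the true end.

#include <linux/stddef.h>
#include <linux/types.h>

struct msg_example {
	/* Fixed members, also addressable as a whole via "hdr". */
	struct_group_tagged(msg_example_hdr, hdr,
		__le16	type;
		__le16	len;
	);
	u8	payload[];	/* flexible array stays at the real end */
};

/* Another struct can now embed only the header, with no mid-struct flex array. */
struct msg_wrapper_example {
	struct msg_example_hdr	hdr;
	u32			extra;
};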
On Mon, Feb 3, 2025 at 1:18 PM Andy Shevchenko
wrote:
> Switch to use dev_err_probe() to simplify the error path and
> unify a message template.
>
> Signed-off-by: Andy Shevchenko
Reviewed-by: Linus Walleij
Yours,
Linus Walleij
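For context, the dev_err_probe() pattern the series converts to looks roughly like this (driver details made up): it folds the error message and the return value together and stays quiet for -EPROBE_DEFER.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/gpio/consumer.h>

static int example_probe(struct device *dev)
{
	struct gpio_desc *reset;

	reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
	if (IS_ERR(reset))
		/* Logs (except for -EPROBE_DEFER) and returns the error. */
		return dev_err_probe(dev, PTR_ERR(reset),
				     "failed to get reset GPIO\n");

	return 0;
}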
On 07/02/2025 05:52, Kees Cook wrote:
> On Mon, Feb 03, 2025 at 10:28:09AM +, Kevin Brodsky wrote:
>> Add basic tests for the kpkeys_hardened_pgtables feature: try to
>> perform a direct write to current->{cred,real_cred} and ensure it
>> fails.
>>
>> Signed-off-by: Kevin Brodsky
>> ---
>> mm
From: Bartosz Golaszewski
On Fri, 07 Feb 2025 17:17:07 +0200, Andy Shevchenko wrote:
> Seems like I have had a cleanup series for 74x164, but forgot to send it
> last year, here it is.
>
> Changelog v2:
> - remove ->remove() leftover (Bart)
> - collected tags (Geert, Gustavo)
>
> [...]
Applie