Vishal!
On Wed, Jul 30 2025 at 23:35, Vishal Parmar wrote:
Please do not top-post and trim your replies.
> The intent behind this change is to make the output useful as-is,
> for example, to provide a performance report in case of regression.
The point John was making:
>> So it might be worth look
On Mon, Jul 07 2025 at 09:17, Thomas Weißschuh wrote:
> On Sun, Jul 06, 2025 at 10:26:31PM +0200, Thomas Gleixner wrote:
>> On Tue, Jul 01 2025 at 10:58, Thomas Weißschuh wrote:
>>
>> > Extend the auxclock test to also cover the vDSO.
>>
>> I'm not reall
On Mon, Jul 07 2025 at 13:34, Arnd Bergmann wrote:
> On Mon, Jul 7, 2025, at 08:57, Thomas Gleixner wrote:
>> On Tue, Jul 01 2025 at 10:58, Thomas Weißschuh wrote:
>>>
>>> +#if defined(CONFIG_GENERIC_TIME_VSYSCALL) && defined(CONFI
On Tue, Jul 01 2025 at 10:58, Thomas Weißschuh wrote:
>
> +#if defined(CONFIG_GENERIC_TIME_VSYSCALL) && defined(CONFIG_GENERIC_GETTIMEOFDAY) && \
> + defined(CONFIG_POSIX_AUX_CLOCKS)
CONFIG_GENERIC_GETTIMEOFDAY requires CONFIG_GENERIC_TIME_VSYSCALL, but
that's not expressed anywhere. This
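Since the second symbol already implies the first, expressing that dependency
would let the guard shrink; a minimal sketch of the reduced check (assuming
the Kconfig dependency gets added, not part of the actual patch):

	/* GENERIC_GETTIMEOFDAY implies GENERIC_TIME_VSYSCALL, so one check suffices */
	#if defined(CONFIG_GENERIC_GETTIMEOFDAY) && defined(CONFIG_POSIX_AUX_CLOCKS)
	/* ... vDSO auxclock test code ... */
	#endif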
On Tue, Jul 01 2025 at 10:58, Thomas Weißschuh wrote:
> This reverts commit c9fbaa879508 ("selftests: vDSO: parse_vdso: Use UAPI
> headers instead of libc headers")
>
> The kernel headers were used to make parse_vdso.c compatible with nolibc.
> Unfortunately linux/elf.h is incompatible with glibc'
On Tue, Jul 01 2025 at 10:58, Thomas Weißschuh wrote:
> Extend the auxclock test to also cover the vDSO.
I'm not really convinced that this is the right thing to do. Why can't
this just extend selftests/vDSO instead of creating these
> +#include "../vDSO/parse_vdso.c"
> +#include "../vDSO/vdso_
On Tue, Jul 01 2025 at 10:58, Thomas Weißschuh wrote:
> +static __always_inline
> +bool do_aux(const struct vdso_time_data *vd, clockid_t clock, struct __kernel_timespec *ts)
> +{
> + const struct vdso_clock *vc;
> + u64 sec, ns;
> + u32 seq;
> + u8 idx;
> +
> + if (!IS_ENABL
On Fri, Jun 27 2025 at 17:08, Ben Zong-You Xie wrote:
> glibc does not define SYS_futex for 32-bit architectures using 64-bit
Kinda. The kernel does not provide sys_futex() on 32-bit architectures
which do not support 32-bit time representations. As a consequence glibc
obviously cannot define SY
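For context, the usual userspace fallback for this situation looks roughly
like the following sketch (a common pattern, not necessarily the exact hunk
under review):

	#include <sys/syscall.h>

	/* 32-bit architectures with 64-bit time_t only expose the time64 variant */
	#if !defined(SYS_futex) && defined(SYS_futex_time64)
	# define SYS_futex SYS_futex_time64
	#endif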
On Mon, Jun 09 2025 at 20:09, Terry Tritton wrote:
> futex_numa was never added to the .gitignore file.
> Add it.
>
This lacks a Fixes: tag.
On Wed, Jul 02 2025 at 11:21, Terry Tritton wrote:
> Futex_waitv can not accept old_timespec32 struct, so userspace should
sys_futex_wait()
> convert it from 32bit to 64bit before syscall in 32bit compatible mode.
>
> This fix is based off [1]
>
> Link: https://lore.kernel.org/all/20231203235117
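The conversion the patch describes boils down to widening the timespec before
invoking the syscall; a minimal sketch (helper name, includes and the timeout
handling are illustrative, not the actual patch):

	#include <time.h>
	#include <unistd.h>
	#include <sys/syscall.h>
	#include <linux/futex.h>	/* struct futex_waitv */
	#include <linux/time_types.h>	/* struct __kernel_timespec */

	static long futex_waitv_64(struct futex_waitv *waiters, unsigned int nr,
				   const struct timespec *ts)
	{
		/* futex_waitv() always expects a 64-bit timespec, even on 32-bit */
		struct __kernel_timespec ts64 = {
			.tv_sec	 = ts->tv_sec,
			.tv_nsec = ts->tv_nsec,
		};

		return syscall(__NR_futex_waitv, waiters, nr, 0, &ts64, CLOCK_MONOTONIC);
	}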
On Tue, Jul 01 2025 at 15:23, Terry Tritton wrote:
> Futex_waitv can not accept old_timespec32 struct, so userspace should
> convert it from 32bit to 64bit before syscall in 32bit compatible mode.
>
> This fix is based off [1]
>
> Link: https://lore.kernel.org/all/20231203235117.29677-1-we...@suse.
On Fri, Jun 27 2025 at 17:23, André Almeida wrote:
> Em 26/06/2025 19:07, Thomas Gleixner escreveu:
>> On Thu, Jun 26 2025 at 14:11, André Almeida wrote:
>>
>>> Create ASSERT_{EQ, NE, TRUE, FALSE} macros to make test creation easier.
>>
>> What's so futex
On Thu, Jun 26 2025 at 14:11, André Almeida wrote:
> Expand the current robust list test for the new set_robust_list2
> syscall. Create an option to make it possible to run the same tests
> using the new syscall, and also add two new relevant test: test long
> lists (bigger than ROBUST_LIST_LIMIT)
On Thu, Jun 26 2025 at 14:11, André Almeida wrote:
> Remove the limit of ROBUST_LIST_LIMIT elements that a robust list can
> have, for the ones created with the new interface. This is done by
With which new interface?
> overwritten the list as it's proceeded in a way that we avoid circular
overw
On Thu, Jun 26 2025 at 14:11, André Almeida wrote:
> Create a new robust_list() syscall. The current syscall can't be
> expanded to cover the following use case, so a new one is needed. This
> new syscall allows users to set multiple robust lists per process and to
> have either 32bit or 64bit poin
On Thu, Jun 26 2025 at 14:11, André Almeida wrote:
$subject lacks a () function notation
> There are two functions for handling robust lists during the task
during a task's exit
> exit: exit_robust_list() and compat_exit_robust_list(). The first one
> handles either 64bit or 32bit lists, de
On Thu, Jun 26 2025 at 14:11, André Almeida wrote:
> +
> +int set_robust_list(struct robust_list_head *head, size_t len)
This function and the get() counterpart are global because they can?
> +{
> + return syscall(SYS_set_robust_list, head, len);
> +}
> +/*
> + * Basic lock struct, contains j
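The underlying point: test-local wrappers want to be static unless something
outside the file needs them. A sketch of both wrappers kept file-local (the
get() signature follows the syscall ABI; includes shown for completeness):

	#include <unistd.h>
	#include <sys/syscall.h>
	#include <linux/futex.h>	/* struct robust_list_head */

	static int set_robust_list(struct robust_list_head *head, size_t len)
	{
		return syscall(SYS_set_robust_list, head, len);
	}

	static int get_robust_list(int pid, struct robust_list_head **head, size_t *len)
	{
		return syscall(SYS_get_robust_list, pid, head, len);
	}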
On Fri, Jun 27 2025 at 00:07, Thomas Gleixner wrote:
> On Thu, Jun 26 2025 at 14:11, André Almeida wrote:
>
>> Create ASSERT_{EQ, NE, TRUE, FALSE} macros to make test creation easier.
>
> What's so futex special about this that it can't use the same muck in
On Thu, Jun 26 2025 at 14:11, André Almeida wrote:
> Create ASSERT_{EQ, NE, TRUE, FALSE} macros to make test creation easier.
What's so futex special about this that it can't use the same muck in
tools/testing/selftests/kselftest_harness.h
or at least share the implementation in some way?
Than
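For reference, the shared harness already provides equivalent assertions; a
minimal usage sketch, reusing the set_robust_list() wrapper quoted above (the
include path and test body are illustrative):

	#include "../kselftest_harness.h"

	TEST(robust_list_basic)
	{
		struct robust_list_head head = { };

		head.list.next = &head.list;	/* empty robust list */
		/* ASSERT_EQ(expected, seen) aborts the test on mismatch */
		ASSERT_EQ(0, set_robust_list(&head, sizeof(head)));
	}

	TEST_HARNESS_MAIN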
On Tue, May 27 2025 at 17:35, Ben Zong-You Xie wrote:
> glibc does not define SYS_futex for 32-bit architectures using 64-bit
> time_t e.g. riscv32, therefore this test fails to compile since it does not
> find SYS_futex in C library headers. Define SYS_futex as SYS_futex_time64
> in this situation
On Mon, Jun 09 2025 at 14:10, Terry Tritton wrote:
> Futex_waitv can not accept old_timespec32 struct, so userspace should
> convert it from 32bit to 64bit before syscall in 32bit compatible mode.
>
> This fix is based off [1]
>
> Link: https://lore.kernel.org/all/20231203235117.29677-1-we...@sus
BLE set. Just like
> live-patching, the freezer needs to be able to stop tasks in a safe /
> known state.
>
> Compile tested only.
>
> [bigeasy: use likely() in __klp_sched_try_switch() and update comments]
>
> Signed-off-by: Peter Zijlstra (Intel)
> Signed-off-by: Se
GNU General
> Public License, Version 2. And the comment in crc32.h clearly indicates
> that it's meant to have the same license as crc32.c. Therefore, apply
> SPDX-License-Identifier: GPL-2.0-only to both files.
>
> Signed-off-by: Eric Biggers
Reviewed-by: Thomas Gleixner
On Mon, Apr 14 2025 at 14:30, Frank Li wrote:
> These patches add a new API to pci-epf-core, so any EP driver can use it.
> platform-msi: Add msi_remove_device_irq_domain() in platform_device_msi_free_irqs_all()
> irqdomain: Add IRQ_DOMAIN_FLAG_MSI_IMMUTABLE and irq_domain_is_msi_imm
On Mon, Apr 14 2025 at 14:31, Frank Li wrote:
> Some MSI controllers change the address/data pair when irq_set_affinity()
> is called. The current PCI endpoint code can't support this type of MSI
> controller. So add the flag MSI_FLAG_MUTABLE in include/linux/msi.h and
> check it when allocating the doorbell.
This changelog has no re
On Fri, Apr 18 2025 at 08:37, Thomas Gleixner wrote:
> On Thu, Apr 17 2025 at 17:46, John Stultz wrote:
>> Instead it seems like we should just do:
>> tk->coarse_nsec = tk->tkr_mono.xtime_nsec >> tk->tkr_mono.shift;
>
> You end up with the same probl
On Thu, Apr 17 2025 at 17:46, John Stultz wrote:
> On Sat, Apr 5, 2025 at 2:40 PM Thomas Gleixner wrote:
>> @@ -1831,6 +1847,8 @@ void timekeeping_resume(void)
>> /* Re-base the last cycle value */
>> tks->tkr_mono.cycle_last = cycle_now;
>>
On Wed, Apr 16 2025 at 22:29, John Stultz wrote:
> Looking over the patch, it seems ok to me, but in a test run with it,
> I've seen an error with CLOCK_REALTIME_COARSE during the
> clocksource-switch test (as well as some seemingly unrelated test
> errors, which I need to investigate) so I'm look
Tim!
On Wed, Apr 09 2025 at 17:44, Tim Bird wrote:
>> From: Thomas Gleixner
>> On Tue, Apr 08 2025 at 17:34, Tim Bird wrote:
>> And yes, it ignores not yet tracked files, but if you want to check
>> them, then it's easy enough to commit them temporarily or provide
On Tue, Apr 08 2025 at 17:34, Tim Bird wrote:
> For what it's worth, I've always been a bit skeptical of the use of the
> python git module in spdxcheck.py. Its use makes it impossible to use
> spdxcheck on a kernel source tree from a tarball (i.e., on source not
ck")
> Signed-off-by: Edward Liaw
Reviewed-by: Thomas Gleixner
e a remaining offset.
This leaves the adjtimex() behaviour unmodified and prevents coarse time
from going backwards.
Fixes: da15cfdae033 ("time: Introduce CLOCK_REALTIME_COARSE")
Reported-by: Lei Chen
Signed-off-by: Thomas Gleixner
Closes: https://lore.kernel.org/lkml/20250310030004
On Thu, Mar 20 2025 at 19:01, John Stultz wrote:
> On Sun, Mar 16, 2025 at 9:56 PM Thomas Gleixner wrote:
>> #define TK_CLEAR_NTP        (1 << 0)
>> #define TK_CLOCK_WAS_SET    (1 << 1)
>>
>> So it clears NTP instead. Not really what you want either
On Mon, Mar 31 2025 at 16:53, Miroslav Lichvar wrote:
> On Thu, Mar 27, 2025 at 04:42:49PM +0100, Miroslav Lichvar wrote:
>> Maybe I could simply patch the kernel to force a small clock
>> multiplier to increase the rate at which the error accumulates.
>
> I tried that and it indeed makes the issue
On Thu, Apr 03 2025 at 10:32, Miroslav Lichvar wrote:
> On Tue, Apr 01, 2025 at 08:29:23PM +0200, Thomas Gleixner wrote:
>> > 64 64 0.138
>>
>> That's weird as it only delays the update to the next tick.
>
> Ok, so it's not an
On Tue, Apr 01 2025 at 13:19, Miroslav Lichvar wrote:
> On Tue, Apr 01, 2025 at 08:34:23AM +0200, Thomas Gleixner wrote:
>> On Mon, Mar 31 2025 at 16:53, Miroslav Lichvar wrote:
>> > Mult reduction Updates/sec Skew before Skew after
>> > 16 4
On Thu, Mar 27 2025 at 16:42, Miroslav Lichvar wrote:
> On Thu, Mar 27, 2025 at 10:22:31AM +0100, Thomas Gleixner wrote:
>> The original implementation respected this base period, but John's
>> approach of forwarding, which cures the coarse time getter issue,
>> violate
On Tue, Mar 25 2025 at 12:32, Miroslav Lichvar wrote:
> On Thu, Mar 20, 2025 at 01:03:00PM -0700, John Stultz wrote:
>> +static u64 timekeeping_accumulate(struct timekeeper *tk, u64 offset,
>> + enum timekeeping_adv_mode mode,
>> + unsigned
On Sat, Mar 15 2025 at 16:22, John Stultz wrote:
> On Sat, Mar 15, 2025 at 12:23 PM Thomas Gleixner wrote:
>> > So to fix this, rework the timekeeping_advance() logic a bit
>> > so that when we are called from do_adjtimex() and the offset
>> > is smaller the
On Fri, Mar 14 2025 at 17:37, John Stultz wrote:
> Now, by design, this negative adjustment should be fine, because
> the logic run from timekeeping_adjust() is done after we
> accumulate approx mult*interval_cycles into xtime_nsec.
> The accumulated (mult*interval_cycles) will be larger than the
>
On Sat, Mar 01 2025 at 12:02, Marc Zyngier wrote:
> - This IMMUTABLE thing serves no purpose, because you don't randomly
> plug this end-point block on any MSI controller. They come as part
> of an SoC.
Yes and no. The problem is that the EP implementation is meant to be a
generic library and
hip, then the related code on
> the iommu side is compiled out.
>
> Signed-off-by: Jason Gunthorpe
> Signed-off-by: Nicolin Chen
Reviewed-by: Thomas Gleixner
I don't think I have conflicting changes here, so the MSI/IRQ related
changes can be routed through the IOMMU tree along with the rest.
Thanks,
tglx
mpose_msi_msg() in dma-iommu.c
> as it no longer provides the only iommu_dma_prepare_msi() implementation.
>
> Signed-off-by: Jason Gunthorpe
> Signed-off-by: Nicolin Chen
Reviewed-by: Thomas Gleixner
: Jason Gunthorpe
> Signed-off-by: Nicolin Chen
With that fixed:
Reviewed-by: Thomas Gleixner
On Thu, Feb 20 2025 at 15:01, Frank Li wrote:
> On Tue, Feb 11, 2025 at 02:21:53PM -0500, Frank Li wrote:
>
> Thomas Gleixner and Marc Zyngier:
>
> Do you have any comments about irq/msi part?
I have no objections, but this needs to be acked by Marc as he had
prett
On Sat, Feb 08 2025 at 01:02, Nicolin Chen wrote:
> From: Jason Gunthorpe
>
> The new function is used to take in a u64 MSI address and store it in the
Which new function? The subject claims this is a rename. That's
confusing at best.
> msi_msg. If the iommu has provided an alternative address
On Sat, Feb 08 2025 at 01:02, Nicolin Chen wrote:
> From: Jason Gunthorpe
>
> All the iommu cases simply want to override the MSI page's address with
> the IOVA that was mapped through the iommu. This doesn't need a cookie
> pointer, we just need to store the IOVA and its page size in the
> msi_de
On Fri, Feb 07 2025 at 10:34, Jason Gunthorpe wrote:
> On Fri, Jan 10, 2025 at 07:32:16PM -0800, Nicolin Chen wrote:
>> Though these two approaches feel very different on the surface, they can
>> share some underlying common infrastructure. Currently, only one pair of
>> sw_msi functions (prepare/c
On Fri, Jan 17 2025 at 15:11, Frederic Weisbecker wrote:
> Le Thu, Jan 16, 2025 at 11:59:48AM +0100, Thomas Gleixner a écrit :
>> > + if (enqueue_hrtimer(timer, new_base, mode))
>> > + smp_call_function_single_async(cpu, &new_cpu_base->csd);
>>
On Tue, Dec 31 2024 at 18:07, Frederic Weisbecker wrote:
> hrtimers are migrated away from the dying CPU to any online target at
> the CPUHP_AP_HRTIMERS_DYING stage in order not to delay bandwidth timers
> handling tasks involved in the CPU hotplug forward progress.
>
> However wake ups can still b
On Wed, Nov 06 2024 at 10:22, Nam Cao wrote:
> Commit 0a1eb2d474ed ("fs/proc: Stop reporting eip and esp in
> /proc/PID/stat") disabled stack pointer reading, because it is generally
> dangerous to do so.
>
> Commit fd7d56270b52 ("fs/proc: Report eip/esp in /prod/PID/stat for
> coredumping") made a
On Fri, Nov 22 2024 at 11:54, Bjorn Helgaas wrote:
> On Mon, Nov 11, 2024 at 03:21:36PM +0800, Joseph Jang wrote:
>> We could not detect the duplicated hwirq number (0xc800) in this
>> case.
>
> Again, this is really out of my area, but based on
> Documentation/core-api/irq/irq-domain.rst, I as
On Thu, Oct 31 2024 at 10:56, Frederic Weisbecker wrote:
> On Thu, Oct 31, 2024 at 02:10:14PM +0530, Naresh Kamboju wrote:
>> <4>[ 0.220657] WARNING: CPU: 1 PID: 0 at kernel/time/clockevents.c:455
>> clockevents_register_device (kernel/time/clockevents.c:455
>
> It's possible that I messed up somet
On Thu, Oct 31 2024 at 14:10, Naresh Kamboju wrote:
> The QEMU-ARM64 boot has failed with the Linux next-20241031 tag.
> The boot log shows warnings at clockevents_register_device and followed
> by rcu_preempt detected stalls.
>
> However, the system did not proceed far enough to reach the login pr
Signed-off-by: Shuah Khan
Reviewed-by: Thomas Gleixner
Thank you!
On Wed, Sep 11 2024 at 10:54, Thomas Gleixner wrote:
> On Wed, Sep 11 2024 at 15:44, Leizhen wrote:
>> 2. Member tot_cnt of struct global_pool can be deleted. We can get it
>>simply and quickly through (slot_idx * ODEBUG_BATCH_SIZE). Avoid
>>redundant maintenanc
u selection, at most 256
>> vCPUs are supported for interrupt routing.
> This patch is OK for me now, but it seems to depend on the first two,
> and the first two will go upstream via the loongarch-kvm tree. So is it
> possible to also apply this one to loongarch-kvm with your Acked-by?
Go ahead.
Reviewed-by: Thomas Gleixner
es. Include include/vdso/time64.h instead. This change
> will also make the defines consistent.
Acked-by: Thomas Gleixner
On Wed, Sep 11 2024 at 15:44, Leizhen wrote:
> On 2024/9/10 19:44, Thomas Gleixner wrote:
>> That minimizes the pool lock contention and the cache foot print. The
>> global to free pool must have an extra twist to accomodate non-batch
>> sized drops and to handle the all slo
On Tue, Sep 10 2024 at 12:00, Leizhen wrote:
> On 2024/9/10 2:41, Thomas Gleixner wrote:
>> All related functions have this problem and all of this code is very
>> strict about boundaries. Instead of accurately doing the refill, purge
>> etc. we should look into proper batch
On Wed, Sep 04 2024 at 21:41, Zhen Lei wrote:
> Currently, there are multiple instances where several nodes are extracted
> from one list and added to another list. Extracting one by one and then
> splicing one by one is not only inefficient, the readability is also poor.
> The work can be done well
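The batching being asked for maps directly onto the existing list helpers; a
rough sketch (kernel list API, the list names are illustrative):

	#include <linux/list.h>

	LIST_HEAD(tmp);

	/* detach everything up to and including @cut from @src in one operation ... */
	list_cut_position(&tmp, &src, cut);
	/* ... and append the whole batch to @dst in one operation */
	list_splice_tail_init(&tmp, &dst);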
On Fri, Sep 06 2024 at 10:19, zhangji...@cmss.chinamobile.com wrote:
> @@ -362,6 +363,7 @@ int main(int argc, char *argv[])
> {
> char *test_name;
> int c, ret;
> + bool is_static = false;
What does is_static mean? It's not connected to test_name in any way, and
please use reverse fir
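For reference, "reverse fir tree" ordering means declaring locals with the
longest line first; applied to the quoted hunk it would look roughly like
this (illustrative only):

	bool is_static = false;
	char *test_name;
	int c, ret;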
On Thu, Aug 29 2024 at 20:29, Huacai Chen wrote:
> On Thu, Aug 29, 2024 at 9:46 AM maobibo wrote:
>> > I think qemu hasn't release with v-eiointc? So we still have a chance
>> > to modify qemu and this driver to simplify registers:
>> It is already merged in qemu mainline, code is frozen and qemu
On Mon, Aug 05 2024 at 15:35, Bibo Mao wrote:
> Interrupts can be routed to maximal four virtual CPUs with one external
> hardware interrupt. Add the extioi virt extension support so that
> Interrupts can be routed to 256 vcpus on hypervisor mode.
interrupts 256 vCPUs in hypervisor mode.
>
On Wed, Jul 31 2024 at 23:30, wangy...@uniontech.com wrote:
> When SGX is not supported by the BIOS, the kernel log still outputs
> the error 'SGX disabled by BIOS', which can be confusing since
> there might not be an SGX-related option in the BIOS settings.
>
> As a kernel, it's difficult to disti
On Mon, Jul 29 2024 at 14:52, Chen Yu wrote:
> #ifdef CONFIG_PARAVIRT
> /*
> - * virt_spin_lock_key - enables (by default) the virt_spin_lock() hijack.
> + * virt_spin_lock_key - disables (by default) the virt_spin_lock() hijack.
> *
> * Native (and PV wanting native due to vCPU pinning) shou
On Tue, Jun 25 2024 at 20:01, David Woodhouse wrote:
> From: David Woodhouse
>
> The vmclock "device" provides a shared memory region with precision clock
> information. By using shared memory, it is safe across Live Migration.
>
> Like the KVM PTP clock, this can convert TSC-based cross timestamp
On Wed, May 22 2024 at 15:02, Dongli Zhang wrote:
> The absence of IRQD_MOVE_PCNTXT prevents immediate effectiveness of
> interrupt affinity reconfiguration via procfs. Instead, the change is
> deferred until the next instance of the interrupt being triggered on the
> original CPU.
>
> When the int
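For context, the reconfiguration path in question is the procfs interface; a
minimal userspace sketch (the IRQ number and CPU mask are made up):

	#include <fcntl.h>
	#include <unistd.h>

	static void move_irq30_to_cpu1(void)
	{
		int fd = open("/proc/irq/30/smp_affinity", O_WRONLY);

		if (fd < 0)
			return;
		/* Without IRQD_MOVE_PCNTXT the new mask only takes effect once
		 * the next interrupt fires on the original CPU. */
		write(fd, "2", 1);
		close(fd);
	}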
On Wed, May 15 2024 at 12:51, Dongli Zhang wrote:
> On 5/13/24 3:46 PM, Thomas Gleixner wrote:
>> So yes, moving the invocation of irq_force_complete_move() before the
>> irq_needs_fixup() call makes sense, but it wants this to actually work
>> correctly:
>> @
On Mon, May 13 2024 at 10:43, Dongli Zhang wrote:
> On 5/13/24 5:44 AM, Thomas Gleixner wrote:
>> On Fri, May 10 2024 at 12:06, Dongli Zhang wrote:
>> Any interrupt which is affine to an outgoing CPU is migrated and
>> eventually pending moves are enfo
On Fri, May 10 2024 at 12:06, Dongli Zhang wrote:
> The absence of IRQD_MOVE_PCNTXT prevents immediate effectiveness of
> interrupt affinity reconfiguration via procfs. Instead, the change is
> deferred until the next instance of the interrupt being triggered on the
> original CPU.
>
> When the int
On Mon, Apr 22 2024 at 16:09, Dongli Zhang wrote:
> On 4/22/24 13:58, Thomas Gleixner wrote:
>> On Thu, Apr 18 2024 at 18:33, Dongli Zhang wrote:
> Would you mind suggesting if the below commit message is fine to you?
>
>
> genirq/cpuhotplug: retry with cpu_online_mask whe
On Thu, Apr 18 2024 at 18:33, Dongli Zhang wrote:
> When a CPU is offline, its IRQs may migrate to other CPUs. For managed
> IRQs, they are migrated, or shutdown (if all CPUs of the managed IRQ
> affinity are offline). For regular IRQs, there will only be a
> migration.
Please write out interrupt
Ira,
On Tue, Dec 07 2021 at 16:51, Ira Weiny wrote:
> On Thu, Nov 25, 2021 at 03:25:09PM +0100, Thomas Gleixner wrote:
>
> u32 pkey_update_pkval(u32 pkval, int pkey, u32 accessbits)
> {
> - int shift = pkey * PKR_BITS_PER_PKEY;
> + int shift = PKR_PKEY_SHIFT(pk
Ira,
On Mon, Dec 06 2021 at 17:54, Ira Weiny wrote:
> On Thu, Nov 25, 2021 at 03:12:47PM +0100, Thomas Gleixner wrote:
>> > +.macro __call_ext_ptregs cfunc annotate_retpoline_safe:req
>> > +#ifdef CONFIG_ARCH_ENABLE_SUPERVISOR_PKEYS
>> > + /* add space for ex
On Thu, Nov 25 2021 at 16:15, Thomas Gleixner wrote:
> On Tue, Aug 03 2021 at 21:32, ira weiny wrote:
> Aside of that, the function which sets up the init value is really
> bogus. As you explained in the cover letter a kernel user has to:
>
>1) Claim an index in the enum
&
On Fri, Nov 26 2021 at 11:11, taoyi ty wrote:
> On 11/25/21 11:15 PM, Thomas Gleixner wrote:
>>> +void setup_pks(void)
>>> +{
>>> + if (!cpu_feature_enabled(X86_FEATURE_PKS))
>>> + return;
>>> +
>>> + write_pkrs(pkrs_init_value
On Thu, Nov 25 2021 at 15:25, Thomas Gleixner wrote:
> On Tue, Aug 03 2021 at 21:32, ira weiny wrote:
>> @@ -200,16 +200,14 @@ __setup("init_pkru=", setup_init_pkru);
>> */
>> u32 update_pkey_val(u32 pk_reg, int pkey, unsigned int flags)
>> {
>> -
On Tue, Aug 03 2021 at 21:32, ira weiny wrote:
> @@ -658,6 +659,8 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
> /* Load the Intel cache allocation PQR MSR. */
> resctrl_sched_in();
>
> + pkrs_write_current();
This is invoked from switch_to() and does
On Tue, Aug 03 2021 at 21:32, ira weiny wrote:
> +#ifdef CONFIG_ARCH_ENABLE_SUPERVISOR_PKEYS
> +
> +void setup_pks(void);
pks_setup()
> +#ifdef CONFIG_ARCH_ENABLE_SUPERVISOR_PKEYS
> +
> +static DEFINE_PER_CPU(u32, pkrs_cache);
> +u32 __read_mostly pkrs_init_value;
> +
> +/*
> + * write_pkrs() opt
On Tue, Aug 03 2021 at 21:32, ira weiny wrote:
> @@ -200,16 +200,14 @@ __setup("init_pkru=", setup_init_pkru);
> */
> u32 update_pkey_val(u32 pk_reg, int pkey, unsigned int flags)
> {
> - int pkey_shift = pkey * PKR_BITS_PER_PKEY;
> -
> /* Mask out old bit values */
> - pk_reg &=
On Tue, Aug 03 2021 at 21:32, ira weiny wrote:
> +/*
> + * Replace disable bits for @pkey with values from @flags
> + *
> + * Kernel users use the same flags as user space:
> + * PKEY_DISABLE_ACCESS
> + * PKEY_DISABLE_WRITE
> + */
> +u32 update_pkey_val(u32 pk_reg, int pkey, unsigned int fl
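The rough shape of the helper under review, for orientation (the PKR_* names
and shift details follow the quoted patch series and are assumptions, not
mainline API):

	u32 update_pkey_val(u32 pk_reg, int pkey, unsigned int flags)
	{
		int pkey_shift = pkey * PKR_BITS_PER_PKEY;

		/* Clear both disable bits for @pkey ... */
		pk_reg &= ~(((1u << PKR_BITS_PER_PKEY) - 1) << pkey_shift);

		/* ... then set the requested ones (access-disable is bit 0,
		 * write-disable is bit 1 of the per-key field) */
		if (flags & PKEY_DISABLE_ACCESS)
			pk_reg |= 1u << pkey_shift;
		if (flags & PKEY_DISABLE_WRITE)
			pk_reg |= 2u << pkey_shift;

		return pk_reg;
	}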
Ira,
On Tue, Aug 03 2021 at 21:32, ira weiny wrote:
> +/*
> + * __call_ext_ptregs - Helper macro to call into C with extended pt_regs
> + * @cfunc: C function to be called
> + *
> + * This will ensure that extended_ptregs is added and removed as needed
> + * during a call into C code.
On Fri, Nov 12 2021 at 16:50, Ira Weiny wrote:
> On Tue, Aug 03, 2021 at 09:32:21PM -0700, 'Ira Weiny' wrote:
>> From: Ira Weiny
>>
>> The PKRS MSR is not managed by XSAVE. It is preserved through a context
>> switch but this support leaves exception handling code open to memory
>> accesses duri
On Tue, Apr 20 2021 at 11:08, kernel test robot wrote:
> FYI, we noticed a -3.3% regression of will-it-scale.per_thread_ops due to
> commit:
>
> commit: 4bad58ebc8bc4f20d89cff95417c9b4674769709 ("signal: Allow tasks to
> cache one sigqueue struct")
> https://git.kernel.org/cgit/linux/kernel/git/t
On Tue, Apr 20 2021 at 17:15, Lorenzo Colitti wrote:
> On Fri, Apr 16, 2021 at 1:47 AM Thomas Gleixner wrote:
>> Enable tracing and enable the following tracepoints:
>> [...]
>
> Sorry for the delay. I had to learn a bit about how to use the tracing
> infrastructure. I
On Mon, Apr 19 2021 at 20:12, Maciej Żenczykowski wrote:
> On Thu, Apr 15, 2021 at 9:47 AM Thomas Gleixner wrote:
>> Run the test on kernels with and without that commit and collect trace
>> data for both.
>>
>> That should give me a pretty clear picture what's
On Mon, Apr 19 2021 at 16:39, Marcelo Tosatti wrote:
>
> +static void clock_was_set_force_reprogram_work(struct work_struct *work)
> +{
> + clock_was_set(true);
> +}
> +
> +static DECLARE_WORK(hrtimer_force_reprogram_work, clock_was_set_force_reprogram_work);
> +
> +
> static void clock_w
On Mon, Apr 19 2021 at 03:36, syzbot wrote:
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit:    1216f02e Add linux-next specific files for 20210415
> git tree: linux-next
> console output: https://syzkaller.appspot.com/x/log.txt?x=1032ba29d0
> kernel config: https://sy
On Sat, Apr 10 2021 at 13:00, Marc Zyngier wrote:
> dev_id and percpu_dev_id are mutually exclusive in struct irqaction,
> as they conceptually represent the same thing, only in a per-cpu
> fashion.
>
> Move them into an anonymous union, saving a few bytes on the way.
The reason why they are not i
On Sat, Apr 17 2021 at 17:11, Andy Lutomirski wrote:
> On Sat, Apr 17, 2021 at 4:53 PM Thomas Gleixner wrote:
>> which works for
>>
>> foo = function_nocfi(bar);
>
> I agree in general. But right now, we have, in asm/proto.h:
>
> void entry_SYSCALL_64(vo
On Sat, Apr 17 2021 at 16:19, Andy Lutomirski wrote:
> On Fri, Apr 16, 2021 at 4:40 PM Kees Cook wrote:
>> Okay, you're saying you want __builtin_gimme_body_p() to be a constant
>> expression for the compiler, not inline asm?
>
> Yes.
>
> I admit that, in the trivial case where the asm code is *no
On Sat, Apr 17 2021 at 15:54, Paul E. McKenney wrote:
> On Sat, Apr 17, 2021 at 02:24:23PM +0200, Thomas Gleixner wrote:
>> I so wish we could just delete all of this horror instead of making it
>> more horrible.
>
> Revisit deleting it in five years if there are no issu
Mike!
On Sun, Apr 18 2021 at 00:39, Thomas Gleixner wrote:
> If you can't come up with something sensible anytime soon before the
> merge window opens then I'm simply going to revert 41e2da9b5e67 and you
> can try again for the next cycle.
so I just figured out that Boris w
Mike!
On Thu, Apr 15 2021 at 17:06, Mike Travis wrote:
I'm slowly getting tired of the fact that every patch coming from you
fails to comply with the minimal requirements of the documented
procedures.
$subject: [PATCH] Fix set apic mode from x2apic enabled bit patch
Documentation clearly states
On Sat, Apr 17 2021 at 18:24, Thomas Gleixner wrote:
> On Fri, Apr 16 2021 at 13:13, Peter Xu wrote:
>> On Fri, Apr 16, 2021 at 01:00:23PM -0300, Marcelo Tosatti wrote:
>>>
>>> +#define CLOCK_SET_BASES ((1U << HRTIMER_BASE_REALTIME) |
On Fri, Apr 16 2021 at 13:13, Peter Xu wrote:
> On Fri, Apr 16, 2021 at 01:00:23PM -0300, Marcelo Tosatti wrote:
>>
>> +#define CLOCK_SET_BASES ((1U << HRTIMER_BASE_REALTIME) |\
>> + (1U << HRTIMER_BASE_REALTIME_SOFT) | \
>> + (1U << HRTIMER_BASE_TAI)
On Tue, Apr 13 2021 at 21:36, Paul E. McKenney wrote:
Bah, hit send too quick.
> + cpumask_clear(&cpus_ahead);
> + cpumask_clear(&cpus_behind);
> + preempt_disable();
Daft.
> + testcpu = smp_processor_id();
> + pr_warn("Checking clocksource %s synchronization from CPU %d.\n
On Tue, Apr 13 2021 at 21:36, Paul E. McKenney wrote:
> diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
> index 1fc0962c89c0..97eeaf164296 100644
> --- a/arch/x86/kernel/kvmclock.c
> +++ b/arch/x86/kernel/kvmclock.c
> @@ -169,7 +169,7 @@ struct clocksource kvm_clock = {
>
On Tue, Apr 13 2021 at 21:35, Paul E. McKenney wrote:
> #define WATCHDOG_INTERVAL (HZ >> 1)
> #define WATCHDOG_THRESHOLD (NSEC_PER_SEC >> 4)
> +#define WATCHDOG_MAX_SKEW (NSEC_PER_SEC >> 6)
That's ~15ms which is a tad large I'd say...
> static void clocksource_watchdog_work(struct work_struc
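For scale (simple arithmetic, not from the thread): NSEC_PER_SEC >> 6 is
1,000,000,000 / 64 = 15,625,000 ns, i.e. roughly 15.6 ms, which is where the
~15ms above comes from.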