On Sat, Apr 05 2025 at 20:16, Petr Vaněk wrote:
> Xen PV guests in DomU have APIC disabled by design, which causes
> topology_apply_cmdline_limits_early() to limit the number of possible
> CPUs to 1, regardless of the configured number of vCPUs.
PV guests have an APIC emulation and there is no code
pointer dereference.
Cure this by using the existing pci_msi_domain_supports() helper, which
handles all possible cases correctly.
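A minimal sketch of that approach as used inside the PCI/MSI core (not the
literal committed hunk; MSI_FLAG_NO_MASK is assumed to be the flag the
offending commit introduced):

/*
 * pci_msi_domain_supports() returns false when the domain lacks the
 * feature and, with DENY_LEGACY, also for legacy setups that have no
 * MSI parent domain -- and therefore no msi_domain_info to dereference.
 */
static bool pci_msi_masking_disabled(struct pci_dev *pdev)
{
	return pci_msi_domain_supports(pdev, MSI_FLAG_NO_MASK, DENY_LEGACY);
}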
Fixes: c3164d2e0d18 ("PCI/MSI: Convert pci_msi_ignore_mask to per MSI domain flag")
Reported-by: Daniel Gomez
Reported-by: Borislav Petkov
Signed-off-
On Thu, Mar 27 2025 at 16:29, kernel test robot wrote:
> kernel test robot noticed "Kernel_panic-not_syncing:Fatal_exception" on:
>
> commit: d9f2164238d814d119e8c979a3579d1199e271bb ("PCI/MSI: Convert pci_msi_ignore_mask to per MSI domain flag")
> https://git.kernel.org/cgit/linux/kernel/git/ne
On Wed, Mar 26 2025 at 13:09, Jürgen Groß wrote:
> On 26.03.25 13:05, Thomas Gleixner wrote:
>> The conversion of the XEN specific global variable pci_msi_ignore_mask to an
>> MSI domain flag missed the facts that:
>>
>> 1) Legacy architectures do not provide a
On Tue, Mar 25 2025 at 11:55, Roger Pau Monné wrote:
> On Tue, Mar 25, 2025 at 11:27:51AM +0100, Thomas Gleixner wrote:
>> On Tue, Mar 25 2025 at 11:22, Roger Pau Monné wrote:
>> > On Tue, Mar 25, 2025 at 10:20:43AM +0100, Thomas Gleixner wrote:
>> >
On Tue, Mar 25 2025 at 11:22, Roger Pau Monné wrote:
> On Tue, Mar 25, 2025 at 10:20:43AM +0100, Thomas Gleixner wrote:
> I'm a bit confused by what msi_create_device_irq_domain() does, as it
> does allocate an irq_domain with an associated msi_domain_info
> structure, however t
On Tue, Mar 25 2025 at 09:11, Thomas Gleixner wrote:
> On Mon, Mar 24 2025 at 20:18, Roger Pau Monné wrote:
>> On Mon, Mar 24, 2025 at 07:58:14PM +0100, Daniel Gomez wrote:
>>> The issue is that info appears to be uninitialized. So, this worked for me:
>>
>> Indeed,
On Mon, Mar 24 2025 at 20:18, Roger Pau Monné wrote:
> On Mon, Mar 24, 2025 at 07:58:14PM +0100, Daniel Gomez wrote:
>> The issue is that info appears to be uninitialized. So, this worked for me:
>
> Indeed, irq_domain->host_data is NULL, there's no msi_domain_info. As
> this is x86, I was expecti
> kvmclock resume hook runs before timekeeping_resume()).
>
> Note, there is no evidence that any clocksource supported by the kernel
> depends on a persistent clock.
>
> Signed-off-by: Sean Christopherson
Reviewed-by: Thomas Gleixner
devices behind a VMD bridge do work fine when used from a Linux Xen
>> hardware domain. That's the whole point of the series.
>>
>> Signed-off-by: Roger Pau Monné
>
> Needs an ack from Thomas.
No objections from my side (aside from your change log comments).
Reviewed-by: Thomas Gleixner
> Reviewed-by: Steven Rostedt (Google) # for kernel/trace/
> Reviewed-by: Martin K. Petersen # SCSI
> Reviewed-by: Darrick J. Wong # xfs
> Acked-by: Jani Nikula
> Acked-by: Corey Minyard
> Signed-off-by: Joel Granados
Acked-by: Thomas Gleixner
On Fri, Nov 15 2024 at 14:15, Easwar Hariharan wrote:
> On 11/15/2024 1:41 PM, Jeff Johnson wrote:
>>
>> How do you expect this series to land since it overlaps a large number of
>> maintainer trees? Do you have a maintainer who has volunteered to take the
>> series and the maintainers should just
73edad3f20b30b4d2fff66c1a85.ca...@redhat.com/
>
> Replace the call to pci_intx() with one to the never-managed version
> pci_intx_unmanaged().
>
> Signed-off-by: Philipp Stanner
Reviewed-by: Thomas Gleixner
On Thu, Nov 14 2024 at 10:05, Philipp Stanner wrote:
> On Wed, 2024-11-13 at 17:22 +0100, Thomas Gleixner wrote:
>> On Wed, Nov 13 2024 at 13:41, Philipp Stanner wrote:
>> > pci_intx() is a hybrid function which can sometimes be managed
>> > through
>> > devre
On Wed, Nov 13 2024 at 13:41, Philipp Stanner wrote:
> pci_intx() is a hybrid function which can sometimes be managed through
> devres. This hybrid nature is undesirable.
>
> Since all users of pci_intx() have by now been ported either to
> always-managed pcim_intx() or never-managed pci_intx_unman
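A hedged usage sketch of the two explicit variants this series establishes
(semantics per the changelog, not verified against the merged tree):

#include <linux/pci.h>

static int demo_intx(struct pci_dev *pdev, bool managed)
{
	if (managed)
		return pcim_intx(pdev, 1);	/* devres: auto-disabled on driver unbind */

	pci_intx_unmanaged(pdev, 1);		/* caller owns the matching disable */
	return 0;
}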
On Wed, Nov 13 2024 at 13:41, Philipp Stanner wrote:
> +/**
> + * pci_intx_unmanaged - enables/disables PCI INTx for device dev,
> + * unmanaged version
> + * @pdev: the PCI device to operate on
> + * @enable: boolean: whether to enable or disable PCI INTx
Except that the argument is of type int,
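One way the kernel-doc could be brought in line with the int parameter
(illustrative wording, not taken from the thread):

/**
 * pci_intx_unmanaged - enable/disable PCI INTx for device dev, unmanaged version
 * @pdev: the PCI device to operate on
 * @enable: non-zero to enable, zero to disable PCI INTx
 */
void pci_intx_unmanaged(struct pci_dev *pdev, int enable);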
On Tue, Oct 15 2024 at 20:51, Philipp Stanner wrote:
> +/**
> + * pci_intx - enables/disables PCI INTx for device dev, unmanaged version
Mismatch vs. the actual function name.
> + * @pdev: the PCI device to operate on
> + * @enable: boolean: whether to enable or disable PCI INTx
> + *
> + * Enables/d
On Mon, Oct 28 2024 at 19:05, Thomas Gleixner wrote:
> On Fri, Oct 25 2024 at 18:06, Jinjie Ruan wrote:
>
>> As the front patch 6 ~ 13 did, the arm64_preempt_schedule_irq() is
>
> Once this series is applied nobody knows what 'front patch 6 ~ 13' did.
>
>> sa
On Fri, Oct 25 2024 at 18:06, Jinjie Ruan wrote:
$Subject: Can you please make this simply:
entry: Add arch_pre/post_report_syscall_entry/exit()
> Add some syscall arch functions to support arm64 to use generic syscall
> code, which do not affect existing architectures that use generic entry
On Fri, Oct 25 2024 at 18:06, Jinjie Ruan wrote:
> As the front patch 6 ~ 13 did, the arm64_preempt_schedule_irq() is
Once this series is applied nobody knows what 'front patch 6 ~ 13' did.
> same with the irq preempt schedule code of generic entry besides those
> architecture-related logic call
three drivers overriding it depend on that. They should
> probably also be marked broken, but we can give them a bit of a grace
> period for that.
One week :)
> Signed-off-by: Christoph Hellwig
Reviewed-by: Thomas Gleixner
On Fri, Aug 02 2024 at 16:25, Nikolay Borisov wrote:
> On 2.08.24 г. 11:50 ч., Alexey Dobriyan wrote:
>> If this memcmp() is not inlined then PVH early boot code can call
>> into KASAN-instrumented memcmp() which results in unbootable VMs:
>>
>> pvh_start_xen
>> xen_prepare_pvh
>> x
On Fri, Aug 02 2024 at 11:53, Alexey Dobriyan wrote:
> If this memset() is not inlined then PVH early boot code can call
> into KASAN-instrumented memset() which results in unbootable VMs.
>
> Ubuntu's 22.04.4 LTS gcc version 11.4.0 (Ubuntu 11.4.0-1ubuntu1~22.04)
> doesn't inline this memset but in
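Both reports boil down to the compiler outlining a mem*() call into its
KASAN-instrumented out-of-line version before KASAN is usable. A hedged
sketch of the forced-inline direction; pvh_memset() is a hypothetical name,
not the helper from the posted patch:

#include <linux/types.h>

/* "rep stosb" in inline asm cannot be outlined by the compiler, so no
 * call into the instrumented memset() is emitted from early boot code. */
static __always_inline void *pvh_memset(void *s, int c, size_t n)
{
	void *d = s;

	asm volatile("rep stosb"
		     : "+D" (d), "+c" (n)
		     : "a" (c)
		     : "memory");
	return s;
}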
On Fri, Aug 02 2024 at 11:50, Alexey Dobriyan wrote:
Please amend functions with '()' in the subject line and the change log
consistently.
> diff --git a/arch/x86/include/asm/cpuid.h b/arch/x86/include/asm/cpuid.h
> index 6b122a31da06..3eca7824430e 100644
> --- a/arch/x86/include/asm/cpuid.h
> ++
On Wed, Apr 10 2024 at 15:48, Jason Andryuk wrote:
> ---
> arch/x86/kernel/head_64.S | 22 ++
> arch/x86/kernel/pgtable_64_helpers.h | 28
That's the wrong place as you want to include it from arch/x86/platform.
arch/x86/include/asm/
On Mon, Dec 04 2023 at 13:31, Stefano Stabellini wrote:
> On Mon, 3 Dec 2023, Chen, Jiqian wrote:
>> >> vpci device state when device is reset on dom0 side.
>> >>
>> >> And call that function in pcistub_init_device. Because when
>> >> we use "pci-assignable-add" to assign a passthrough device in
>>
On Fri, Nov 24 2023 at 18:31, Jiqian Chen wrote:
> diff --git a/drivers/xen/xen-pciback/pci_stub.c
> b/drivers/xen/xen-pciback/pci_stub.c
> index 5a96b6c66c07..b83d02bcc76c 100644
> --- a/drivers/xen/xen-pciback/pci_stub.c
> +++ b/drivers/xen/xen-pciback/pci_stub.c
> @@ -357,6 +357,7 @@ static int
On Fri, Nov 24 2023 at 18:31, Jiqian Chen wrote:
> When device on dom0 side has been reset, the vpci on Xen side
> won't get notification, so that the cached state in vpci is
> all out of date with the real device state.
> To solve that problem, this patch adds a function to clear all
Please get ri
On Mon, Sep 25 2023 at 09:07, H. Peter Anvin wrote:
> On September 23, 2023 2:42:10 AM PDT, Xin Li wrote:
>>+/* May not be marked __init: used by software suspend */
>>+void syscall_init(void)
>>+{
>>+ /* The default user and kernel segments */
>>+ wrmsr(MSR_STAR, 0, (__USER32_CS << 16) |
On Fri, Sep 22 2023 at 08:16, Xin3 Li wrote:
>> > > +static __always_inline void __wrmsrns(u32 msr, u32 low, u32 high)
>> >
>> > Shouldn't this be named wrmsrns_safe since it has exception handling,
>> > similar to the current wrmsrl_safe.
>> >
>>
>> Both safe and unsafe versions have exc
On Thu, Sep 21 2023 at 12:48, Nikolay Borisov wrote:
> On 14.09.23 г. 7:47 ч., Xin Li wrote:
>> +
>> +/* INT80 */
>> +case IA32_SYSCALL_VECTOR:
>> +if (likely(IS_ENABLED(CONFIG_IA32_EMULATION))) {
>
> Since future kernels will support boottime toggling of whether 32bit
> syscal
On Wed, Sep 20 2023 at 04:33, Li, Xin3 wrote:
>> > +static inline void fred_syscall_init(void) {
>> > + /*
>> > + * Per FRED spec 5.0, FRED uses the ring 3 FRED entrypoint for SYSCALL
>> > + * and SYSENTER, and ERETU is the only legit instruction to return to
>> > + * ring 3, as a result the
On Wed, Sep 13 2023 at 21:48, Xin Li wrote:
> +static inline void fred_syscall_init(void)
> +{
> + /*
> + * Per FRED spec 5.0, FRED uses the ring 3 FRED entrypoint for SYSCALL
> + * and SYSENTER, and ERETU is the only legit instruction to return to
> + * ring 3, as a result there
On Thu, Sep 14 2023 at 14:15, andrew wrote:
> PV guests are never going to see FRED (or LKGS for that matter) because
> it advertises too much stuff which simply traps because the kernel is in
> CPL3.
>
> That said, the 64bit PV ABI is a whole lot closer to FRED than it is to
> IDT delivery. (Almo
On Fri, Sep 15 2023 at 00:46, andrew wrote:
> On 15/09/2023 12:00 am, Thomas Gleixner wrote:
> What I meant was "there should be the two top-level APIs, and under the
> covers they DTRT". Part of doing the right thing is to wire up paravirt
> for configs where that is specif
On Fri, Sep 15 2023 at 00:46, andrew wrote:
> On 15/09/2023 12:00 am, Thomas Gleixner wrote:
>> So no. I'm fundamentally disagreeing with your recommendation. The way
>> forward is:
>>
>> 1) Provide the native variant for wrmsrns(), i.e. rename the proposed
>
Andrew!
On Thu, Sep 14 2023 at 15:05, andrew wrote:
> On 14/09/2023 5:47 am, Xin Li wrote:
>> +static __always_inline void wrmsrns(u32 msr, u64 val)
>> +{
>> +__wrmsrns(msr, val, val >> 32);
>> +}
>
> This API works in terms of this series where every WRMSRNS is hidden
> behind a FRED check, b
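A hedged sketch of the self-contained native variant argued for elsewhere in
this thread (not the merged implementation; the 0f 01 c6 byte sequence is the
WRMSRNS opcode):

/* ALTERNATIVE falls back to a plain, serializing WRMSR on CPUs without
 * X86_FEATURE_WRMSRNS, so callers need no FRED check at all. */
static __always_inline void wrmsrns(u32 msr, u64 val)
{
	asm volatile(ALTERNATIVE("wrmsr",
				 ".byte 0x0f, 0x01, 0xc6", /* WRMSRNS */
				 X86_FEATURE_WRMSRNS)
		     : : "c" (msr), "a" ((u32)val), "d" ((u32)(val >> 32))
		     : "memory");
}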
On Wed, Sep 13 2023 at 12:02, Andrew Cooper wrote:
> The PSTATE MSRs are entirely model specific, fully read/write, and the
> Enable bit is not an enable bit; it's a "not valid yet" bit that firmware
> is required to adjust to be consistent across the coherency fabric.
>
> Linux is simply wrong with
On Mon, Sep 11 2023 at 19:24, Andrew Cooper wrote:
> Furthermore, cursory testing that Thomas did for the Linux topology work
> demonstrates that it is broken anyway for reasons unrelated to ACPI parsing.
>
> Even furthermore, it's an area of the Xen / dom0 boundary which is
> fundamentally broken
Jan!
On Wed, Aug 30 2023 at 09:20, Jan Beulich wrote:
> On 30.08.2023 00:54, Thomas Gleixner wrote:
>> On Tue, Aug 29 2023 at 16:25, Roger Pau Monné wrote:
>>
>> Correct. These IDs are invalid independent of any flag value.
>
> What we apparently agree on is the
On Tue, Aug 29 2023 at 16:25, Roger Pau Monné wrote:
> On Sun, Aug 27, 2023 at 05:44:15PM +0200, Thomas Gleixner wrote:
>> The APIC/X2APIC description of MADT specifies flags:
>>
>> Enabled If this bit is set the processor is ready for use. If
>>
On Wed, Aug 23 2023 at 14:56, Jan Beulich wrote:
> On 23.08.2023 11:21, Andrew Cooper wrote:
>> In the spec, exactly where you'd expect to find them...
>>
>> "OSPM does not expect the information provided in this table to be
>> updated if the processor information changes during the lifespan of an
__xen_evtchn_do_upcall(), while renaming __xen_evtchn_do_upcall() to
> xen_evtchn_do_upcall()
>
> Signed-off-by: Juergen Gross
Reviewed-by: Thomas Gleixner
Hi!
Something in XEN/PV time management seems to be seriously broken:
timekeeping watchdog on CPU9: Marking clocksource 'tsc' as unstable because the skew is too large:
[  152.557154] clocksource: 'xen' wd_nsec: 511979417 wd_now: 24e4d7625e wd_last: 24c65332c5 mask: ff
On Fri, Aug 04 2023 at 21:01, Peter Zijlstra wrote:
> On Fri, Aug 04, 2023 at 05:35:11PM +, Li, Xin3 wrote:
>> > > The commit d99015b1abbad ("x86: move entry_64.S register saving out of
>> > > the macros") introduced the changes to set orig_ax to -1, but I can't
>> > > see why it's required. Ou
On Thu, Jun 08 2023 at 17:33, Hou Wenlong wrote:
> On Wed, Jun 07, 2023 at 08:49:15PM +0800, Dave Hansen wrote:
>> What problems does this patch set solve? How might that solution be
>> visible to end users? Why is this problem important to you?
>
> We want to build the kernel as PIE and allow th
On Mon, May 15 2023 at 16:19, Hou Wenlong wrote:
> This patchset unifies FIXADDR_TOP as a variable for x86, allowing the
> fixmap area to be movable and relocated with the kernel image in the
> x86/PIE patchset [0]. This enables the kernel image to be relocated in
> the top 512G of the address spa
x86/smpboot/64: Implement arch_cpuhp_init_parallel_bringup() and enable it")
Reported-by: Kirill A. Shutemov
Signed-off-by: Thomas Gleixner
---
arch/x86/coco/tdx/tdx.c | 11 +++
arch/x86/include/asm/x86_init.h | 3 +++
arch/x86/kernel/smpboot.c | 19 ++-
On Tue, May 30 2023 at 15:03, Tom Lendacky wrote:
> On 5/30/23 14:51, Thomas Gleixner wrote:
>> That aside. From a semantical POV making this decision about parallel
>> bootup based on some magic CC encryption attribute is questionable.
>>
>> I'm tending to ju
On Tue, May 30 2023 at 09:56, Sean Christopherson wrote:
> On Tue, May 30, 2023, Thomas Gleixner wrote:
>> On Tue, May 30 2023 at 15:29, Kirill A. Shutemov wrote:
>> > On Tue, May 30, 2023 at 02:09:17PM +0200, Thomas Gleixner wrote:
>> >> The decision to allow para
On Tue, May 30 2023 at 15:29, Kirill A. Shutemov wrote:
> On Tue, May 30, 2023 at 02:09:17PM +0200, Thomas Gleixner wrote:
>> The decision to allow parallel bringup of secondary CPUs checks
>> CC_ATTR_GUEST_STATE_ENCRYPT to detect encrypted guests. Those cannot use
>> pa
: Kirill A. Shutemov
Signed-off-by: Thomas Gleixner
---
arch/x86/kernel/smpboot.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1282,7 +1282,7 @@ bool __init arch_cpuhp_init_parallel_bri
* Intel-TDX has a se
and rename it to LOCK_AND_LOAD_REALMODE_ESP to make it clear what this is about.
Fixes: f6f1ae9128d2 ("x86/smpboot: Implement a bit spinlock to protect the realmode stack")
Reported-by: Kirill A. Shutemov
Signed-off-by: Thomas Gleixner
---
arch/x86/realmode/rm/trampoline_64.S | 12 ++
On Tue, May 30 2023 at 11:26, Thomas Gleixner wrote:
> On Tue, May 30 2023 at 03:54, Kirill A. Shutemov wrote:
>> On Mon, May 29, 2023 at 11:31:29PM +0300, Kirill A. Shutemov wrote:
>>> Disabling parallel bringup helps. I didn't look closer yet. If you have
>>> an
On Tue, May 30 2023 at 03:54, Kirill A. Shutemov wrote:
> On Mon, May 29, 2023 at 11:31:29PM +0300, Kirill A. Shutemov wrote:
>> Disabling parallel bringup helps. I didn't look closer yet. If you have
>> an idea let me know.
>
> Okay, it crashes around .Lread_apicid due to touching MSRs that trigge
On Mon, May 29 2023 at 23:31, Kirill A. Shutemov wrote:
> Aaand the next patch that breaks TDX boot is...
>
> x86/smpboot/64: Implement arch_cpuhp_init_parallel_bringup() and enable
> it
>
> Disabling parallel bringup helps. I didn't look closer yet. If you have
> an idea let me know.
So h
On Mon, May 29 2023 at 05:39, Kirill A. Shutemov wrote:
> On Sat, May 27, 2023 at 03:40:02PM +0200, Thomas Gleixner wrote:
> But it gets broken again on "x86/smpboot: Implement a bit spinlock to
> protect the realmode stack" with
>
> [0.554079] node
On Fri, May 26 2023 at 12:14, Thomas Gleixner wrote:
> On Wed, May 24 2023 at 23:48, Kirill A. Shutemov wrote:
>> This patch causes boot regression on TDX guest. The guest crashes on SMP
>> bring up.
The below should fix that. Sigh...
Thanks,
tglx
Subject: x86/
On Wed, May 24 2023 at 23:48, Kirill A. Shutemov wrote:
> On Mon, May 08, 2023 at 09:44:17PM +0200, Thomas Gleixner wrote:
>> #ifdef CONFIG_SMP
>> -/**
>> - * apic_id_is_primary_thread - Check whether APIC ID belongs to a primary
>> thread
>> - * @apicid:
On Mon, May 22 2023 at 23:27, Mark Brown wrote:
> On Mon, May 22, 2023 at 11:04:17PM +0200, Thomas Gleixner wrote:
>
>> That does not make any sense at all and my tired brain does not help
>> either.
>
>> Can you please apply the below debug patch and provide the ou
On Mon, May 22 2023 at 20:45, Mark Brown wrote:
> On Fri, May 12, 2023 at 11:07:50PM +0200, Thomas Gleixner wrote:
>> From: Thomas Gleixner
>>
>> There is often significant latency in the early stages of CPU bringup, and
>> time is wasted by waking each CPU (e.g. with
From: Thomas Gleixner
The x86 CPU bringup state currently does AP wake-up, waits for the AP to
respond and then releases it for full bringup.
It is safe to split this into a wake-up and a separate wait+release
state.
Provide the required functions and enable the split CPU bringup, which
prepares
From: Thomas Gleixner
For parallel CPU bringup it's required to read the APIC ID in the low level
startup code. The virtual APIC base address is a constant because it's a
fix-mapped address. Exposing that constant which is composed via macros to
assembly code is non-trivial due to h
From: Thomas Gleixner
Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
arch/arm/Kconfig | 1 +
arch/arm/include/asm/smp.h | 2 +-
arch/arm/kernel/smp.c
From: Thomas Gleixner
There is often significant latency in the early stages of CPU bringup, and
time is wasted by waking each CPU (e.g. with SIPI/INIT/INIT on x86) and
then waiting for it to respond before moving on to the next.
Allow a platform to enable parallel setup which brings all to be
This occurs when the CPU to be brought up is
in the CPUHP_OFFLINE state, which should correctly do the cleanup any
time the CPU has been taken down to the point where such is needed.
Signed-off-by: David Woodhouse
Signed-off-by: Thomas Gleixner
Tested-by: Mark Rutland
Tested-by: Michael Kelle
From: Thomas Gleixner
Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
arch/parisc/Kconfig | 1 +
arch/parisc/kernel/process.c | 4 ++--
arch/parisc/kernel
From: Thomas Gleixner
All users converted to the hotplug core mechanism.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
include/linux/cpu.h | 2 -
kernel/smpboot.c | 75
2 files changed, 77 deletions(-)
--- a
From: Thomas Gleixner
Make the primary thread tracking CPU mask based in preparation for simpler
handling of parallel bootup.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
arch/x86/include/asm/apic.h | 2 --
arch/x86/include/asm/topology.h | 19
From: Thomas Gleixner
No more users.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
include/linux/cpu.h | 2 -
kernel/smpboot.c | 90
2 files changed, 92 deletions(-)
--- a/include/linux/cpu.h
+++ b/include/linux
From: Thomas Gleixner
Switch to the CPU hotplug core state tracking and synchronization
mechanism. This unfortunately requires adding dead reporting to the non-CPS
platforms as CPS is the only user, but it allows an overall consolidation
of this functionality.
No functional change intended
From: Thomas Gleixner
Implement the validation function which tells the core code whether
parallel bringup is possible.
The only condition for now is that the kernel does not run in an encrypted
guest as these will trap the RDMSR via #VC, which cannot be handled at that
point in early startup
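A hedged sketch of that validation (function name hypothetical; the condition
is refined later in the discussion):

#include <linux/cc_platform.h>

static bool parallel_bringup_possible(void)
{
	/* Encrypted guests would #VC on the early APIC ID RDMSR */
	return !cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT);
}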
From: Thomas Gleixner
Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
arch/csky/Kconfig | 1 +
arch/csky/include/asm/smp.h | 2 +-
arch/csky/kernel/smp.c
, split the bitlock part out ]
Co-developed-by: Thomas Gleixner
Co-developed-by: Brian Gerst
Signed-off-by: Thomas Gleixner
Signed-off-by: Brian Gerst
Signed-off-by: David Woodhouse
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
V4: Remove the lock prefix in the error path - Peter
From: Thomas Gleixner
The bringup logic of a to-be-onlined CPU consists of several parts, which
are considered to be a single hotplug state:
1) Control CPU issues the wake-up
2) To be onlined CPU starts up, does the minimal initialization,
reports to be alive and waits for release
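A hedged sketch of why the split pays off, with hypothetical helpers
(arch_kick_ap(), ap_is_alive(), release_ap()); the series itself routes this
through two cpuhp states rather than open-coded loops:

extern int arch_kick_ap(unsigned int cpu);	/* hypothetical: send INIT/SIPI */
extern bool ap_is_alive(unsigned int cpu);	/* hypothetical: AP reported in */
extern void release_ap(unsigned int cpu);	/* hypothetical: release for full bringup */

static void bringup_split(const struct cpumask *to_online)
{
	unsigned int cpu;

	/* Part 1 for every CPU first: the wake-up latencies now overlap */
	for_each_cpu(cpu, to_online)
		arch_kick_ap(cpu);

	/* Part 2: wait for each AP and release it for full bringup */
	for_each_cpu(cpu, to_online) {
		while (!ap_is_alive(cpu))
			cpu_relax();
		release_ap(cpu);
	}
}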
From: Thomas Gleixner
Parallel AP bringup requires that the APs can run fully parallel through
the early startup code including the real mode trampoline.
To prepare for this, implement a bit-spinlock to serialize access to the
real mode stack so that parallel upcoming APs are not going to
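In C terms the serialization amounts to the following hedged sketch (the real
series does this in trampoline assembly, where neither C nor these helpers
are available):

#include <linux/bit_spinlock.h>

static unsigned long rm_stack_lock;

static void realmode_stack_enter(void)
{
	/* spins until the bit is clear, then sets it atomically */
	bit_spin_lock(0, &rm_stack_lock);
}

static void realmode_stack_exit(void)
{
	bit_spin_unlock(0, &rm_stack_lock);
}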
From: Thomas Gleixner
Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
Acked-by: Palmer Dabbelt
---
arch/riscv/Kconfig | 1 +
arch/riscv/include/asm/smp.h
From: Thomas Gleixner
Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.
Signed-off-by: Thomas Gleixner
Tested-by: Mark Rutland
Tested-by: Michael Kelley
---
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/smp.h | 2
From: Thomas Gleixner
Now that the core code drops sparse_irq_lock after the idle thread
synchronized, it's pointless to wait for the AP to mark itself online.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
arch/x86/kernel/smpboot.c | 26 ++
1
From: Thomas Gleixner
The new AP state tracking and synchronization mechanism in the CPU hotplug
core code allows the removal of quite a bit of x86-specific code:
1) The AP alive synchronization based on cpumasks
2) The decision whether an AP can be brought up again
Signed-off-by: Thomas Gleixner
From: Thomas Gleixner
The CPU state tracking and synchronization mechanism in smpboot.c is
completely independent of the hotplug code and all logic around it is
implemented in architecture specific code.
Except for the state reporting of the AP there is absolutely nothing
architecture specific
From: Thomas Gleixner
No point in this conditional voodoo. Un-initializing the lock mechanism is
safe to do unconditionally even if it was already invoked when the CPU died.
Remove the invocation of xen_smp_intr_free() as that has been already
cleaned up in xen_cpu_dead_hvm().
Signed
From: Thomas Gleixner
There is no harm in holding the sparse_irq lock until the upcoming CPU completes
in cpuhp_online_idle(). This allows removing the cpu_online() synchronization
from architecture code.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
V4: Amend comment about sparse irq
From: Thomas Gleixner
Now that TSC synchronization is SMP function call based there is no reason
to wait for the AP to be set in smp_callin_mask. The control CPU waits for
the AP to set itself in the online mask anyway.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
V4: Rename
From: Thomas Gleixner
cpu_callout_mask is used for the stop machine based MTRR/PAT init.
In preparation of moving the BP/AP synchronization to the core hotplug
code, use a private CPU mask for cacheinfo and manage it in the
starting/dying hotplug state.
Signed-off-by: Thomas Gleixner
Tested
From: Thomas Gleixner
Now that the core code drops sparse_irq_lock after the idle thread
synchronized, it's pointless to wait for the AP to mark itself online.
Whether the control CPU runs in a wait loop or sleeps in the core code
waiting for the online operation to complete makes no diffe
From: Thomas Gleixner
The usage is in smpboot.c and not in the CPU initialization code.
The XEN_PV usage of cpu_callout_mask is obsolete as cpu_init() no longer
waits and cacheinfo has its own CPU mask now, so cpu_callout_mask can be
made static too.
Signed-off-by: Thomas Gleixner
Tested-by
From: Thomas Gleixner
Spin-waiting on the control CPU until the AP reaches the TSC
synchronization is just a waste, especially in the case that there is no
synchronization required.
As the synchronization has to run with interrupts disabled, the control CPU
part can just be done from an SMP
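A hedged illustration of the mechanism (helper name hypothetical): the
callback of an SMP function call runs in IPI context with interrupts disabled
on the target CPU, which is exactly the environment the TSC check needs, so
no spin-wait handshake is required:

#include <linux/smp.h>

extern void do_tsc_sync_check(void *data);	/* hypothetical counterpart check */

static void run_tsc_sync_on(int peer)
{
	/* wait=1: return only after the callback finished on @peer */
	smp_call_function_single(peer, do_tsc_sync_check, NULL, 1);
}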
From: Thomas Gleixner
The synchronization of the AP with the control CPU is an SMP boot problem
and has nothing to do with cpu_init().
Open code cpu_init_secondary() in start_secondary() and move
wait_for_master_cpu() into the SMP boot code.
No functional change.
Signed-off-by: Thomas Gleixner
From: Thomas Gleixner
Now that the CPU0 hotplug cruft is gone, the only user is AMD SEV.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
arch/x86/kernel/callthunks.c | 2 +-
arch/x86/kernel/head_32.S | 14 --
arch/x86/kernel/head_64.S | 2 +-
3 files
cpu_init().
As the barrier has zero value, remove it.
Reported-by: Peter Zijlstra
Signed-off-by: Thomas Gleixner
Link:
https://lore.kernel.org/r/20230509100421.gu83...@hirez.programming.kicks-ass.net
---
V4: New patch
---
arch/x86/kernel/smpboot.c | 2 --
1 file changed, 2 deletions(-)
--- a
ned-off-by: David Woodhouse
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
arch/x86/kernel/smpboot.c | 184 +-
1 file changed, 119 insertions(+), 65 deletions(-)
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -193
From: Thomas Gleixner
This was introduced together with commit e1c467e69040 ("x86, hotplug: Wake
up CPU0 via NMI instead of INIT, SIPI, SIPI") to eventually support
physical hotplug of CPU0:
"We'll change this code in the future to wake up hard offlined CPU0 if
real plat
From: Thomas Gleixner
This is used in the SEV play_dead() implementation to re-online CPUs. But
that has nothing to do with CPU0.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
arch/x86/include/asm/cpu.h | 2 +-
arch/x86/kernel/callthunks.c | 2 +-
arch/x86/kernel
From: Thomas Gleixner
No point in keeping them around.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
arch/x86/kernel/smpboot.c | 4 ++--
kernel/cpu.c | 2 +-
kernel/smp.c | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
--- a/arch
From: Thomas Gleixner
This was introduced with commit e1c467e69040 ("x86, hotplug: Wake up CPU0
via NMI instead of INIT, SIPI, SIPI") to eventually support physical
hotplug of CPU0:
"We'll change this code in the future to wake up hard offlined CPU0 if
real platform and r
From: Thomas Gleixner
When the TSC is synchronized across sockets, there is no reason to
calibrate the delay for the first CPU which comes up on a socket.
Just reuse the existing calibration value.
This removes 100ms pointlessly wasted time from CPU hotplug per socket.
Signed-off-by: Thomas
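A hedged sketch of the reuse (tsc_is_synchronized() is a hypothetical
predicate; the actual condition in the patch may differ):

#include <linux/delay.h>
#include <asm/processor.h>

extern bool tsc_is_synchronized(void);	/* hypothetical */

static void calibrate_or_reuse(unsigned int cpu)
{
	/* With a synchronized TSC the delay loop scales identically */
	if (tsc_is_synchronized() && cpu_data(0).loops_per_jiffy)
		cpu_data(cpu).loops_per_jiffy = cpu_data(0).loops_per_jiffy;
	else
		calibrate_delay();
}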
From: Thomas Gleixner
Make topology_phys_to_logical_pkg_die() static as it's only used in
smpboot.c and fix up the kernel-doc warnings for both functions.
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
arch/x86/include/asm/topology.h | 3 ---
arch/x86/kernel/smpb
Hi!
This is version 4 of the reworked parallel bringup series. Version 3 can be
found here:
https://lore.kernel.org/lkml/20230508181633.089804...@linutronix.de
This is just a reiteration to address the following details:
1) Address review feedback (Peter Zijlstra)
2) Fix a MIPS related
On Tue, May 09 2023 at 12:04, Peter Zijlstra wrote:
> On Mon, May 08, 2023 at 09:43:39PM +0200, Thomas Gleixner wrote:
> Not to the detriment of this patch, but this barrier() and its comment
> seem weird vs smp_callin(). That function ends with an atomic bitop (it
> has to, at