On 18/11/2025 3:08 pm, Jan Beulich wrote:
> There's no need to invoke CPUID yet another time. However, as the host CPU
> policy is set up only shortly after init_intel() has run on the BSP, defer
> the logic to a pre-SMP initcall. This can't be a (new) initcall in
> cpu/intel.c, though, as that file is linked after acpi/cpu_idle.c (which is
> where we already need the feature set). Since opt_arat is local to the
> cpu/ subtree, introduce a new Intel-specific helper to hold the needed code.
>
> Further, as we assume symmetry anyway, use setup_force_cpu_cap() and hence
> limit the checking to the boot CPU.
>
> Signed-off-by: Jan Beulich <[email protected]>
> ---
> The need to move where cpu_has_arat is checked would go away if we did
> away with opt_arat (as mentioned in the previous patch); we could then
> use cpu_has_arat directly where XEN_ARAT is checked right now.
>
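
(For context, the helper being introduced presumably amounts to something
like the following sketch; opt_arat and the synthetic XEN_ARAT feature are
taken from the series, while the exact shape is assumed:

    void __init intel_init_arat(void)
    {
        /* Sketch only, not the actual patch body: force XEN_ARAT based
         * on the already-established feature set rather than invoking
         * CPUID again. */
        if ( opt_arat && cpu_has_arat )
            setup_force_cpu_cap(X86_FEATURE_XEN_ARAT);
    }
)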
> --- a/xen/arch/x86/acpi/cpu_idle.c
> +++ b/xen/arch/x86/acpi/cpu_idle.c
> @@ -1666,6 +1666,9 @@ static int __init cf_check cpuidle_presm
>  {
>      void *cpu = (void *)(long)smp_processor_id();
>  
> +    if ( boot_cpu_data.vendor == X86_VENDOR_INTEL )
> +        intel_init_arat();

I really would prefer to avoid the need for this.

Now that microcode loading has moved to the start of day, we can drop
most of the order-of-init complexity for CPUID/etc, and I expect that
problems like this will cease to exist as a result.

Notably, we've now got no relevant difference between early_init() and
regular init().  That was a complexity we inherited from Linux.
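
If so, the per-vendor c_early_init/c_init hook pair in cpu/cpu.h could in
principle collapse into a single hook.  As a sketch only, assuming the
current cpu_dev layout:

    struct cpu_dev {
        /* Sketch: a single hook, with the former early work folded in. */
        void (*c_init)(struct cpuinfo_x86 *c);
    };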

~Andrew
