On 06.11.2020 08:13, Jan Beulich wrote:
> --- a/xen/arch/x86/xstate.c
> +++ b/xen/arch/x86/xstate.c
> @@ -541,6 +541,41 @@ int xstate_alloc_save_area(struct vcpu *
>  
>      return 0;
>  }
> +
> +int xstate_update_save_area(struct vcpu *v)
> +{
> +    unsigned int i, size, old;
> +    struct xsave_struct *save_area;
> +    uint64_t xcr0_max = cpuid_policy_xcr0_max(v->domain->arch.cpuid);
> +
> +    ASSERT(!is_idle_vcpu(v));
> +
> +    if ( !cpu_has_xsave )
> +        return 0;
> +
> +    if ( v->arch.xcr0_accum & ~xcr0_max )
> +        return -EBUSY;
> +
> +    for ( size = old = XSTATE_AREA_MIN_SIZE, i = 2; i < xstate_features; ++i )
> +    {
> +        if ( xcr0_max & (1ull << i) )
> +            size = max(size, xstate_offsets[i] + xstate_sizes[i]);
> +        if ( v->arch.xcr0_accum & (1ull << i) )
> +            old = max(old, xstate_offsets[i] + xstate_sizes[i]);
> +    }

This could be shrunk further if we used XSAVEC (or actually used
XSAVES), as the compacted format doesn't need to also cover the
holes between components. But since we currently use neither of
the two in practice, this would require more work than merely
adding the alternative size calculation here.
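For illustration, the alternative (compacted-format) size calculation
alluded to above might look roughly like the sketch below. This is only
a hedged standalone mock-up, not Xen code: the function name and the
separately passed sizes/alignment arrays are hypothetical stand-ins for
xstate_sizes[] and the per-component CPUID.(EAX=0xD,ECX=i):ECX alignment
bit; XSTATE_AREA_MIN_SIZE is assumed to be the 512-byte legacy region
plus the 64-byte XSAVE header.

```c
#include <stdint.h>

#define XSTATE_AREA_MIN_SIZE (512 + 64) /* legacy region + XSAVE header */

/*
 * Hypothetical sketch of a compacted (XSAVEC/XSAVES) area size
 * calculation: sum the sizes of enabled components >= 2, rounding the
 * running offset up to 64 bytes for components whose alignment bit
 * (CPUID.(EAX=0xD,ECX=i):ECX bit 1) is set. Unlike the non-compacted
 * format, holes between components need not be covered.
 */
static unsigned int xstate_compacted_size(uint64_t xstates,
                                          const unsigned int *sizes,
                                          const uint8_t *align64,
                                          unsigned int nr_features)
{
    unsigned int i, size = XSTATE_AREA_MIN_SIZE;

    for ( i = 2; i < nr_features; ++i )
    {
        if ( !(xstates & (1ull << i)) )
            continue;
        if ( align64[i] )
            size = (size + 63) & ~63u; /* round up to 64-byte boundary */
        size += sizes[i];
    }

    return size;
}
```

The loop structure mirrors the one in the patch above, except that
component sizes accumulate instead of taking the max over
offset + size, which is what makes the area smaller.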

Jan
