On 11/03/2019 07:57, Chao Gao wrote:
> Updating microcode is less error prone when caches have been flushed
> beforehand; how much this matters depends on what exactly the microcode
> is updating. For example, some of the issues around certain Broadwell
> parts can be addressed by doing a full cache flush.
> 
> [linux commit: 91df9fdf51492aec9fed6b4cbd33160886740f47]
> Signed-off-by: Chao Gao <chao....@intel.com>
> Cc: Ashok Raj <ashok....@intel.com>
> ---
> Changes in v6:
>  - new
> ---
>  xen/arch/x86/microcode_intel.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/xen/arch/x86/microcode_intel.c b/xen/arch/x86/microcode_intel.c
> index c921ea9..d5ef145 100644
> --- a/xen/arch/x86/microcode_intel.c
> +++ b/xen/arch/x86/microcode_intel.c
> @@ -351,6 +351,12 @@ static int apply_microcode(void)
>      /* serialize access to the physical write to MSR 0x79 */
>      spin_lock_irqsave(&microcode_update_lock, flags);
>  
> +    /*
> +     * Writeback and invalidate caches before updating microcode to avoid
> +     * internal issues depending on what the microcode is updating.
> +     */
> +    wbinvd();

While this is a workaround for a specific Broadwell erratum, does the
wbinvd really need to be performed unconditionally on every part?
High-end Xeons have up to 38.5MB of L3 and 28MB of L2 cache. What is the
upper bound on wbinvd latency for such systems?
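
For the sake of discussion, a model-gated variant might look roughly
like the sketch below (Xen style, using boot_cpu_data; the helper name
and the Broadwell model list are purely illustrative, not a vetted
erratum table):

    static bool flush_needed_before_ucode_load(void)
    {
        const struct cpuinfo_x86 *c = &boot_cpu_data;

        /* Only Intel Family 6 parts are of interest here. */
        if ( c->x86_vendor != X86_VENDOR_INTEL || c->x86 != 6 )
            return false;

        switch ( c->x86_model )
        {
        /* Illustrative Broadwell model numbers. */
        case 0x3d: /* Broadwell client */
        case 0x47: /* Broadwell-H */
        case 0x4f: /* Broadwell-EP/EX */
        case 0x56: /* Broadwell-DE */
            return true;
        }

        return false;
    }

    /* ... and in apply_microcode(), instead of an unconditional flush: */
    if ( flush_needed_before_ucode_load() )
        wbinvd();

That said, if the latency turns out to be acceptable even on
large-cache parts, the unconditional wbinvd() is certainly simpler.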

> +
>      /* write microcode via MSR 0x79 */
>      wrmsrl(MSR_IA32_UCODE_WRITE, (unsigned long)mc_intel->bits);
>      wrmsrl(MSR_IA32_UCODE_REV, 0x0ULL);
> 

Thanks,
Sergey
