On 07.01.2023 23:07, Demi Marie Obenour wrote:
> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -227,6 +227,39 @@ config XEN_ALIGN_2M
>  
>  endchoice
>  
> +config LINUX_PAT
> +     bool "Use Linux's PAT instead of Xen's default"
> +     help
> +       Use Linux's Page Attribute Table instead of the default Xen value.
> +
> +       The Page Attribute Table (PAT) maps three bits in the page table entry
> +       to the actual cacheability used by the processor.  Many Intel
> +       integrated GPUs have errata (bugs) that cause CPU access to GPU memory
> +       to ignore the topmost bit.  When using Xen's default PAT, this results
> +       in caches not being flushed and incorrect images being displayed.  The
> +       default PAT used by Linux does not cause this problem.
> +
> +       If you say Y here, you will be able to use Intel integrated GPUs that
> +       are attached to your Linux dom0 or other Linux PV guests.  However,
> +       you will not be able to use non-Linux OSs in dom0, and attaching a PCI
> +       device to a non-Linux PV guest will result in unpredictable guest
> +       behavior.  If you say N here, you will be able to use a non-Linux
> +       dom0, and will be able to attach PCI devices to non-Linux PV guests.
> +
> +       Note that saving a PV guest with an assigned PCI device on a machine
> +       with one PAT and restoring it on a machine with a different PAT won't
> +       work: the resulting guest may boot and even appear to work, but caches
> +       will not be flushed when needed, with unpredictable results.  HVM
> +       (including PVH and PVHVM) guests and guests without assigned PCI
> +       devices do not care what PAT Xen uses, and migration (even live)
> +       between hypervisors with different PATs will work fine.  Guests using
> +       PV Shim care about the PAT used by the PV Shim firmware, not the
> +       host’s PAT.  Also, non-default PAT values are incompatible with the
> +       (deprecated) qemu-traditional stubdomain.
> +
> +       Say Y if you are building a hypervisor for a Linux distribution that
> +       supports Intel iGPUs.  Say N otherwise.

I'm not convinced we want this; if other maintainers think differently,
I don't mean to stand in the way. If so, however,
- the above likely wants guarding by EXPERT and/or UNSUPPORTED
- the support status of using this setting wants to be made crystal
  clear, perhaps by an addition to ./SUPPORT.md.
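The EXPERT guarding suggested above might look like the following Kconfig sketch (assuming the existing EXPERT symbol in Xen's Kconfig; the exact dependency, and whether UNSUPPORTED is also wanted, is for the maintainers to decide):

```
config LINUX_PAT
	bool "Use Linux's PAT instead of Xen's default" if EXPERT
	help
	  ...
```

The `if EXPERT` on the prompt hides the option from non-expert configurations while leaving the symbol's default in force.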

Jan
