On 10/09/2024 3:39 pm, Jan Beulich wrote:
> As of 68e1183411be ("libxc: introduce a xc_dom_arch for hvm-3.0-x86_32
> guests") caching mode is disabled for HVM domains from start-of-day, due
> the disabling being unconditional in hvm/save.c:arch_hvm_load(). With
> that the field is useless, and can be dropped. Drop the helper functions
> manipulating / checking it as well right away, but leave the use sites of
> stdvga_cache_is_enabled() with the hard-coded result the function would
> have produced, to aid validation of subsequent dropping of further code.
>
> Signed-off-by: Jan Beulich <jbeul...@suse.com>

This only applies to VMs constructed with libxenguest, which isn't all
VMs.  But it's probably close enough to ~100% of VMs to count.

Personally I think it would be clearer to say "Since 68e1183411be, HVM
guests are built using XEN_DOMCTL_sethvmcontext, which intentionally
disables stdvga caching in arch_hvm_load()".

As a minor tangent, this is yet another casualty of nothing being wired
into the migration stream.  Rightly or wrongly, that mindset has caused
an immense amount of problems in Xen.


How does this interact with X86_EMU_VGA?

stdvga_init() is called unconditionally for HVM domains, exiting early
for !EMU_VGA (skipping the pointless re-zeroing of the structure).
Nevertheless, the cache is UNINITIALISED for all configurations at this
point.

And we won't hit the stdvga_try_cache_enable() calls in any of
stdvga_{outb,mem_{write,accept}}() prior to XEN_DOMCTL_sethvmcontext,
which will unconditionally set DISABLED.

So yes, I think this is for-all-intents-and-purposes dead logic.

Reviewed-by: Andrew Cooper <andrew.coop...@citrix.com>
