On Thu, May 18, 2023 at 10:24:10AM +0300, Xenia Ragiadakou wrote:
> 
> On 15/5/23 17:17, Jan Beulich wrote:
> > On 13.05.2023 03:17, Stefano Stabellini wrote:
> > > From: Stefano Stabellini <stefano.stabell...@amd.com>
> > > 
> > > Mapping the ACPI tables to Dom0 PVH 1:1 leads to memory corruptions of
> > > the tables in the guest. Instead, copy the tables to Dom0.
> > Do you really mean "in the guest" (i.e. from Xen's perspective, i.e.
> > ignoring that when running on qemu it is kind of a guest itself)?
> > 
> > I also consider the statement too broad anyway: Various people have
> > run PVH Dom0 without running into such an issue, so it's clearly not
> > just "leads to".
> 
> In my opinion the issue is broader.
> 
> In pvh_setup_acpi(), the code adding the ACPI tables to dom0 memory map does
> not check the return value of pvh_add_mem_range(). If there is an overlap
> and the overlapping region is marked as E820_ACPI, it maps not just the
> allowed tables but the entire overlapping range,

But that's the intended behavior: all ACPI regions will be mapped into
the dom0 physmap.  The filtering of the tables exposed to dom0 is done
in the XSDT, not by filtering the mapped regions.  Note the latter
wouldn't be effective anyway, as the minimum granularity of physmap
entries is 4K, so multiple tables could live in the same 4K region.
Also Xen cannot parse dynamic tables (SSDT) or execute methods, and
hence doesn't know exactly which memory will be used.

Xen relies on the firmware having placed the ACPI tables in ACPI, NVS
or RESERVED regions in order for them to be mapped into the guest
physmap.  The call to pvh_add_mem_range() in pvh_setup_acpi() is just
an attempt to work around broken systems that have tables placed in
memory map holes, and hence ignoring the return value is fine.

> while if the overlapping
> range is marked as E820_RESERVED, it does not map the tables at all (the
> issue that Stefano saw with qemu). Since dom0 memory map is initialized
> based on the native one, the code adding the ACPI table memory ranges will
> naturally fall into one of the two cases above.

Xen does map them, but that's done in arch_iommu_hwdom_init(), which
gets short-circuited by the usage of dom0-iommu=none in your example.
See my reply to Stefano about moving such mappings into
pvh_populate_p2m().

> So even when not running into this issue, pvh_add_mem_range() still fails
> and the memory range mapped is wider than the allowed one.

The intention of that call to pvh_add_mem_range() is not to limit what
gets mapped into the dom0 physmap, but rather to work around bugs in
the firmware if ACPI tables are placed in memory map holes.

Thanks, Roger.
