On Thu, May 18, 2023 at 02:36:41PM +0300, Xenia Ragiadakou wrote:
> 
> On 18/5/23 12:31, Roger Pau Monné wrote:
> > On Thu, May 18, 2023 at 10:24:10AM +0300, Xenia Ragiadakou wrote:
> > > On 15/5/23 17:17, Jan Beulich wrote:
> > > > On 13.05.2023 03:17, Stefano Stabellini wrote:
> > > > > From: Stefano Stabellini <stefano.stabell...@amd.com>
> > > > > 
> > > > > Mapping the ACPI tables to Dom0 PVH 1:1 leads to memory corruptions of
> > > > > the tables in the guest. Instead, copy the tables to Dom0.
> > > > Do you really mean "in the guest" (i.e. from Xen's perspective, i.e.
> > > > ignoring that when running on qemu it is kind of a guest itself)?
> > > > 
> > > > I also consider the statement too broad anyway: Various people have
> > > > run PVH Dom0 without running into such an issue, so it's clearly not
> > > > just "leads to".
> > > In my opinion the issue is broader.
> > > 
> > > In pvh_setup_acpi(), the code adding the ACPI tables to the dom0 memory
> > > map does not check the return value of pvh_add_mem_range(). If there is
> > > an overlap and the overlapping region is marked as E820_ACPI, it maps
> > > not just the allowed tables but the entire overlapping range,
> > But that's the intended behavior: all ACPI regions will be mapped into
> > the dom0 physmap; the filtering of the tables exposed to dom0 is done
> > in the XSDT, not by filtering the mapped regions.  Note that filtering
> > the mapped regions wouldn't be effective anyway, as the minimal
> > granularity of physmap entries is 4K, so multiple tables could live in
> > the same 4K region.  Also Xen cannot parse dynamic tables (SSDT) or
> > execute methods, and hence doesn't know exactly which memory will be
> > used.
> Thanks a lot for the explanation. I checked the code more carefully and it's
> true that Xen does not aim to restrict dom0 access to ACPI tables. I got
> confused by the name of the function pvh_acpi_table_allowed.
> > 
> > Xen relies on the firmware to have the ACPI tables in ACPI, NVS or
> > RESERVED regions in order for them to be mapped into the guest physmap.
> > The call to pvh_add_mem_range() in pvh_setup_acpi() is just an attempt
> > to work around broken systems that have tables placed in memory map
> > holes, and hence ignoring its return value is fine.
> In pvh_setup_acpi(), Xen identity maps E820_ACPI and E820_NVS ranges to
> dom0. Why does it not do the same for E820_RESERVED, since ACPI tables may
> also lie there and since it does not know which memory will be used?

So far I at least hadn't considered that ACPI tables could reside in
RESERVED regions.  Given the behavior exposed by QEMU I think we need
to move the mapping of RESERVED regions from arch_iommu_hwdom_init()
into pvh_populate_p2m() for PVH dom0, thus rendering
arch_iommu_hwdom_init() PV-only.
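
Roughly what I have in mind, as a standalone sketch rather than the actual
Xen implementation (the e820 layout and the identity_map() helper below are
made up for illustration): walk the dom0 memory map while populating the
p2m and identity map every E820_RESERVED range at 4K granularity.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT   12
#define PFN_DOWN(x)  ((x) >> PAGE_SHIFT)
#define PFN_UP(x)    (((x) + (1ULL << PAGE_SHIFT) - 1) >> PAGE_SHIFT)

enum { E820_RAM = 1, E820_RESERVED = 2, E820_ACPI = 3, E820_NVS = 4 };

struct e820entry { uint64_t addr, size; uint32_t type; };

/* Hypothetical stand-in for a p2m identity-mapping primitive. */
static void identity_map(uint64_t pfn, uint64_t nr)
{
    printf("identity map pfns [%#llx, %#llx)\n",
           (unsigned long long)pfn, (unsigned long long)(pfn + nr));
}

static void populate_reserved(const struct e820entry *map, unsigned int nr)
{
    for ( unsigned int i = 0; i < nr; i++ )
    {
        uint64_t s = PFN_DOWN(map[i].addr);
        uint64_t e = PFN_UP(map[i].addr + map[i].size);

        if ( map[i].type == E820_RESERVED && e > s )
            identity_map(s, e - s);
    }
}

int main(void)
{
    /* Made-up memory map, just to exercise the loop above. */
    const struct e820entry map[] = {
        { 0x0,      0x9f000,   E820_RAM      },
        { 0x9f000,  0x61000,   E820_RESERVED },
        { 0x100000, 0x7ff0000, E820_RAM      },
    };

    populate_reserved(map, sizeof(map) / sizeof(map[0]));

    return 0;
}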

> > > while if the overlapping
> > > range is marked as E820_RESERVED, it does not map the tables at all (the
> > > issue that Stefano saw with qemu). Since the dom0 memory map is
> > > initialized based on the native one, the code adding the ACPI table
> > > memory ranges will naturally fall into one of the two cases above.
> > Xen does map them, but that's done in arch_iommu_hwdom_init(), which gets
> > short-circuited by the usage of dom0-iommu=none in your example.  See
> > my reply to Stefano about moving such mappings into pvh_populate_p2m().
> Indeed, if dom0-iommu=none is removed from the Xen cmdline and qemu is
> configured with an IOMMU, the issue is not triggered. That's because
> arch_iommu_hwdom_init() identity maps at least the first 4G to dom0, right?

For PVH dom0 only reserved regions are identity mapped into the
physmap.

Thanks, Roger.
