+Marcel, Laine, Daniel

On 08/21/20 12:30, Igor Mammedov wrote:
> On Tue, 18 Aug 2020 23:52:23 +0200
> Julia Suvorova <jus...@redhat.com> wrote:
>
>> PCIe native hot-plug has numerous problems with racing events and
>> unpredictable guest behaviour (Windows).
> Documenting these mysterious problems, which I asked for in the
> previous review, hasn't been addressed.
> Pls see v1 for comments and add the requested info into the cover
> letter at least, or in a commit message.
Igor, I assume you are referring to

  http://mid.mail-archive.com/20200715153321.3495e62d@redhat.com

and I couldn't agree more. I'd like to understand the specific
motivation for this patch series.

- I'm very concerned that it could regress various hotplug scenarios
  with at least OVMF. So minimally I'm hoping that the work is being
  meticulously tested with OVMF.

- I don't recall testing native PCIe hot-*unplug*, but we had repeatedly
  tested native PCIe plug with both Linux and Windows guests, and in the
  end, it worked fine. (I recall working with at least Marcel on that;
  one historical reference I can find now is
  <https://bugzilla.tianocore.org/show_bug.cgi?id=75>.)

  I remember users confirming that native PCIe hotplug worked even with
  assigned physical devices (e.g. GPUs), assuming they made use of the
  resource reservation capability (e.g. they'd reserve large MMIO64
  areas during initial enumeration).

- I seem to remember that we had tested hotplug on extra root bridges
  (PXB) too; regressing that -- per the pxb-pcie mention in the blurb,
  quoted below -- wouldn't be great. At least, please don't flip the big
  switch so soon (IIUC, there is a big switch being proposed).

- The documentation at "docs/pcie.txt" and "docs/pcie_pci_bridge.txt" is
  chock-full of hotplug references; we had spent days if not weeks
  writing and reviewing those. I hope it's being evaluated how much of
  that is going to need an update.

  In particular, do we know how this work is going to affect the
  resource reservation capability?

    $ qemu-system-x86_64 -device pcie-root-port,\? | grep reserve
      bus-reserve=<uint32>   - (default: 4294967295)
      io-reserve=<size>      - (default: 18446744073709551615)
      mem-reserve=<size>     - (default: 18446744073709551615)
      pref32-reserve=<size>  - (default: 18446744073709551615)
      pref64-reserve=<size>  - (default: 18446744073709551615)

  The OVMF-side code (OvmfPkg/PciHotPlugInitDxe) was tough to write.
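  (For context, a minimal sketch of how those reservation properties get
  used on the command line today -- the machine setup, IDs, and sizes
  below are made-up illustration, only the pcie-root-port property names
  are from the output above:)

```shell
# Hypothetical example: carve out extra resources on a hot-plug-capable
# root port, so that a device hot-plugged later (e.g. a GPU with large
# BARs) fits without guest-side renumbering. IDs/sizes are invented.
qemu-system-x86_64 \
  -M q35 \
  -device pcie-root-port,id=rp1,bus=pcie.0,chassis=1,slot=1,\
bus-reserve=4,mem-reserve=256M,pref64-reserve=8G
```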
  As far as I remember, especially commit fe4049471bdf
  ("OvmfPkg/PciHotPlugInitDxe: translate QEMU's resource reservation
  hints", 2017-10-03) had taken a lot of navel-gazing. So the best
  answer I'm looking for here is "this series does not affect resource
  reservation at all".

- If my message suggests that I'm alarmed: that's an understatement.
  This stuff is a mine-field. A good example is Gerd's (correct!)
  response "Oh no, please don't" to Igor's question in the v1 thread, as
  to whether the piix4 IO port range could be reused:

    http://mid.mail-archive.com/20200715065751.ogchtdqmnn7cxsyi@sirius.home.kraxel.org

  That kind of "reuse" would have been a catastrophe, because for the
  PCI IO port aperture, OVMF uses [0xC000..0xFFFF] on i440fx, but
  [0x6000..0xFFFF] on q35:

    commit bba734ab4c7c9b4386d39420983bf61484f65dda
    Author: Laszlo Ersek <ler...@redhat.com>
    Date:   Mon May 9 22:54:36 2016 +0200

        OvmfPkg/PlatformPei: provide 10 * 4KB of PCI IO Port space on Q35

        This can accommodate 10 bridges (including root bridges, PCIe
        upstream and downstream ports, etc -- see
        <https://bugzilla.redhat.com/show_bug.cgi?id=1333238#c12> for
        more details).

        10 is not a whole lot, but closer to the architectural limit of
        15 than our current 4, so it can be considered a stop-gap
        solution until all guests manage to migrate to virtio-1.0, and
        no longer need PCI IO BARs behind PCIe downstream ports.

        Cc: Gabriel Somlo <so...@cmu.edu>
        Cc: Jordan Justen <jordan.l.jus...@intel.com>
        Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1333238
        Contributed-under: TianoCore Contribution Agreement 1.0
        Signed-off-by: Laszlo Ersek <ler...@redhat.com>
        Reviewed-by: Jordan Justen <jordan.l.jus...@intel.com>
        Tested-by: Gabriel Somlo <so...@cmu.edu>

- If native PCIe hot-unplug is not working well (or at all) right now,
  then I guess I can't just summarily say "we had better drop this like
  a hot potato".
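  (A back-of-the-envelope check of those two apertures -- the range
  boundaries are from the commit message above, the arithmetic itself is
  mine; each PCI bridge IO window is aligned to 4KB, i.e. 0x1000:)

```shell
# i440fx aperture [0xC000..0xFFFF]: number of 4KB bridge windows
echo $(( (0x10000 - 0xC000) / 0x1000 ))
# q35 aperture [0x6000..0xFFFF]: number of 4KB bridge windows
echo $(( (0x10000 - 0x6000) / 0x1000 ))
```

  which prints 4 and 10 respectively -- matching the "current 4" and
  the "10 bridges" figures in the commit message.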
  But then, if we are committed to *juggling* that potato, we should at
  least document the use case / motivation / current issues
  meticulously, please. Do we have a public BZ at least?

- The other work, with regard to *disabling* unplug, which seems to be
  progressing in parallel, is similarly in need of a good explanation,
  in my opinion:

    http://mid.mail-archive.com/20200820092157.17792-1-ani@anisinha.ca

  Yes, I have read Laine's long email, linked from the QEMU cover
  letter:

    https://www.redhat.com/archives/libvir-list/2020-February/msg00110.html

  The whole use case "prevent guest admins from unplugging virtual
  devices" still doesn't make any sense to me. How is "some cloud admins
  don't want that" acceptable at face value, without further
  explanation?

Thanks
Laszlo

>
>> Switching to ACPI hot-plug for now.
>>
>> Tested on RHEL 8 and Windows 2019.
>> pxb-pcie is not yet supported.
>>
>> v2:
>>  * new ioport range for acpiphp [Gerd]
>>  * drop find_pci_host() [Igor]
>>  * explain magic numbers in _OSC [Igor]
>>  * drop build_q35_pci_hotplug() wrapper [Igor]
>>
>> Julia Suvorova (4):
>>   hw/acpi/ich9: Trace ich9_gpe_readb()/writeb()
>>   hw/i386/acpi-build: Add ACPI PCI hot-plug methods to q35
>>   hw/i386/acpi-build: Turn off support of PCIe native hot-plug and SHPC
>>     in _OSC
>>   hw/acpi/ich9: Enable ACPI PCI hot-plug
>>
>>  hw/i386/acpi-build.h    | 12 ++++++++++
>>  include/hw/acpi/ich9.h  |  3 +++
>>  include/hw/acpi/pcihp.h |  3 ++-
>>  hw/acpi/ich9.c          | 52 ++++++++++++++++++++++++++++++++++++++++-
>>  hw/acpi/pcihp.c         | 15 ++++++++----
>>  hw/acpi/piix4.c         |  2 +-
>>  hw/i386/acpi-build.c    | 48 +++++++++++++++++++++++--------------
>>  hw/i386/pc.c            |  1 +
>>  hw/acpi/trace-events    |  4 ++++
>>  9 files changed, 114 insertions(+), 26 deletions(-)
>>
>
>