[Bug 1749393] Re: sbrk() not working under qemu-user with a PIE-compiled binary?
FYI the release of this is slowed down by the slow verification of bug https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1929926

-- You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU. https://bugs.launchpad.net/bugs/1749393

Title: sbrk() not working under qemu-user with a PIE-compiled binary?

Status in QEMU: Fix Released
Status in qemu package in Ubuntu: Fix Released
Status in qemu source package in Focal: Fix Committed

Bug description:

[Impact]

* The space currently reserved can be too small, so we can end up with no space at all for brk. This can happen in any case, but is much more likely with the now-common PIE binaries.

* Backport the upstream fix, which reserves a bit more space while loading and gives it back after the interpreter and stack are loaded.

[Test Plan]

* On x86 run:

  sudo apt install -y qemu-user-static docker.io
  sudo docker run --rm arm64v8/debian:bullseye bash -c 'apt update && apt install -y wget'
  ...
  Running hooks in /etc/ca-certificates/update.d... done.
  Errors were encountered while processing: libc-bin
  E: Sub-process /usr/bin/dpkg returned an error code (1)

* Second test, from bug 1928075:

  $ sudo qemu-debootstrap --arch=arm64 bullseye bullseye-arm64 http://ftp.debian.org/debian

  In the bad case this fails like:

  W: Failure trying to run: /sbin/ldconfig
  W: See //debootstrap/debootstrap.log for detail

  and in that log file you'll see the segfault:

  $ tail -n 2 bullseye-arm64/debootstrap/debootstrap.log
  qemu: uncaught target signal 11 (Segmentation fault) - core dumped
  Segmentation fault (core dumped)

[Where problems could occur]

* Regressions would be around use cases of linux-user, that is, emulation of individual binaries rather than of a whole system. It is commonly used for cross-tests and cross-builds, so that is the space to watch for regressions.

[Other Info]

* n/a

---

In Debian unstable, we recently switched bash to be a PIE-compiled binary (for hardening). Unfortunately this resulted in bash being broken when run under qemu-user (for all target architectures, the host being amd64 for me).

$ sudo chroot /srv/chroots/sid-i386/ qemu-i386-static /bin/bash
bash: xmalloc: .././shell.c:1709: cannot allocate 10 bytes (0 bytes allocated)

bash has its own malloc implementation based on sbrk():
https://git.savannah.gnu.org/cgit/bash.git/tree/lib/malloc/malloc.c

When we disable this internal implementation and rely on glibc's malloc, everything is fine. But it might be that glibc has a fallback for when sbrk() is not working properly, which would hide the underlying problem in qemu-user.

This issue has also been reported to the bash upstream author, and he suggested that the issue might be in qemu-user, so I'm opening a ticket here. Here is the discussion with the bash upstream author:
https://lists.gnu.org/archive/html/bug-bash/2018-02/threads.html#00080

You can find the problematic bash binary in this .deb file:
http://snapshot.debian.org/archive/debian/20180206T154716Z/pool/main/b/bash/bash_4.4.18-1_i386.deb

The version of qemu I have been using is 2.11 (Debian package qemu-user-static version 1:2.11+dfsg-1), but I have had reports that the problem is reproducible with older versions (back to 2.8 at least).
Here are the related Debian bug reports:
https://bugs.debian.org/889869
https://bugs.debian.org/865599

It's worth noting that bash used to have this problem (when compiled as a PIE binary) even when run directly but then something got fixed in the kernel and now the problem only appears when run under qemu-user:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1518483

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1749393/+subscriptions
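For anyone who wants to poke at this outside of bash, here is a minimal stand-alone reproducer sketch (my own illustration, not taken from the bug report) that exercises the program break the same way bash's internal xmalloc does. Build it as a PIE binary, e.g. "gcc -fPIE -pie sbrk-probe.c -o sbrk-probe", and run the result under qemu-user to compare against native execution:

/* sbrk-probe.c: hypothetical reproducer, not part of the bug report. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Ask for 10 bytes via the program break, like the failing xmalloc call. */
    void *p = sbrk(10);

    if (p == (void *) -1) {
        perror("sbrk");   /* the path hit when qemu-user leaves no room for brk */
        return 1;
    }
    printf("sbrk succeeded, previous break at %p\n", p);
    return 0;
}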
Re: [PATCH 10/12] s390x/pci: use I/O Address Translation assist when interpreting
On 12/7/21 22:04, Matthew Rosato wrote: Allow the underlying kvm host to handle the Refresh PCI Translation instruction intercepts. Signed-off-by: Matthew Rosato --- hw/s390x/s390-pci-bus.c | 6 ++-- hw/s390x/s390-pci-inst.c | 51 ++-- hw/s390x/s390-pci-vfio.c | 33 + include/hw/s390x/s390-pci-inst.h | 2 +- include/hw/s390x/s390-pci-vfio.h | 10 +++ 5 files changed, 95 insertions(+), 7 deletions(-) diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c index 1ae8792a26..ab442f17fb 100644 --- a/hw/s390x/s390-pci-bus.c +++ b/hw/s390x/s390-pci-bus.c @@ -196,7 +196,7 @@ void s390_pci_sclp_deconfigure(SCCB *sccb) pci_dereg_irqs(pbdev); } if (pbdev->iommu->enabled) { -pci_dereg_ioat(pbdev->iommu); +pci_dereg_ioat(pbdev); } pbdev->state = ZPCI_FS_STANDBY; rc = SCLP_RC_NORMAL_COMPLETION; @@ -1261,7 +1261,7 @@ static void s390_pcihost_reset(DeviceState *dev) pci_dereg_irqs(pbdev); } if (pbdev->iommu->enabled) { -pci_dereg_ioat(pbdev->iommu); +pci_dereg_ioat(pbdev); } pbdev->state = ZPCI_FS_STANDBY; s390_pci_perform_unplug(pbdev); @@ -1402,7 +1402,7 @@ static void s390_pci_device_reset(DeviceState *dev) pci_dereg_irqs(pbdev); } if (pbdev->iommu->enabled) { -pci_dereg_ioat(pbdev->iommu); +pci_dereg_ioat(pbdev); } fmb_timer_free(pbdev); diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c index 02093e82f9..598e5a5d52 100644 --- a/hw/s390x/s390-pci-inst.c +++ b/hw/s390x/s390-pci-inst.c @@ -978,6 +978,24 @@ int pci_dereg_irqs(S390PCIBusDevice *pbdev) return 0; } +static int reg_ioat_interp(S390PCIBusDevice *pbdev, uint64_t iota) +{ +int rc; + +rc = s390_pci_probe_ioat(pbdev); +if (rc) { +return rc; +} + +rc = s390_pci_set_ioat(pbdev, iota); +if (rc) { +return rc; +} + +pbdev->iommu->enabled = true; +return 0; +} + static int reg_ioat(CPUS390XState *env, S390PCIBusDevice *pbdev, ZpciFib fib, uintptr_t ra) { @@ -995,6 +1013,16 @@ static int reg_ioat(CPUS390XState *env, S390PCIBusDevice *pbdev, ZpciFib fib, return -EINVAL; } +/* If this is an interpreted device, we must use the IOAT assist */ +if (pbdev->interp) { +if (reg_ioat_interp(pbdev, g_iota)) { +error_report("failure starting ioat assist"); +s390_program_interrupt(env, PGM_OPERAND, ra); +return -EINVAL; +} +return 0; +} + /* currently we only support designation type 1 with translation */ if (!(dt == ZPCI_IOTA_RTTO && t)) { error_report("unsupported ioat dt %d t %d", dt, t); @@ -1011,8 +1039,25 @@ static int reg_ioat(CPUS390XState *env, S390PCIBusDevice *pbdev, ZpciFib fib, return 0; } -void pci_dereg_ioat(S390PCIIOMMU *iommu) +static void dereg_ioat_interp(S390PCIBusDevice *pbdev) { +if (s390_pci_probe_ioat(pbdev) != 0) { +return; +} + +s390_pci_set_ioat(pbdev, 0); +pbdev->iommu->enabled = false; +} + +void pci_dereg_ioat(S390PCIBusDevice *pbdev) +{ +S390PCIIOMMU *iommu = pbdev->iommu; + +if (pbdev->interp) { +dereg_ioat_interp(pbdev); +return; +} + s390_pci_iommu_disable(iommu); iommu->pba = 0; iommu->pal = 0; @@ -1251,7 +1296,7 @@ int mpcifc_service_call(S390CPU *cpu, uint8_t r1, uint64_t fiba, uint8_t ar, cc = ZPCI_PCI_LS_ERR; s390_set_status_code(env, r1, ZPCI_MOD_ST_SEQUENCE); } else { -pci_dereg_ioat(pbdev->iommu); +pci_dereg_ioat(pbdev); } break; case ZPCI_MOD_FC_REREG_IOAT: @@ -1262,7 +1307,7 @@ int mpcifc_service_call(S390CPU *cpu, uint8_t r1, uint64_t fiba, uint8_t ar, cc = ZPCI_PCI_LS_ERR; s390_set_status_code(env, r1, ZPCI_MOD_ST_SEQUENCE); } else { -pci_dereg_ioat(pbdev->iommu); +pci_dereg_ioat(pbdev); if (reg_ioat(env, pbdev, fib, ra)) { cc = ZPCI_PCI_LS_ERR; s390_set_status_code(env, r1, ZPCI_MOD_ST_INSUF_RES); diff 
--git a/hw/s390x/s390-pci-vfio.c b/hw/s390x/s390-pci-vfio.c index 6f9271df87..6fc03a858a 100644 --- a/hw/s390x/s390-pci-vfio.c +++ b/hw/s390x/s390-pci-vfio.c @@ -240,6 +240,39 @@ int s390_pci_get_aif(S390PCIBusDevice *pbdev, bool enable, bool assist) return rc; } +int s390_pci_probe_ioat(S390PCIBusDevice *pbdev) +{ +VFIOPCIDevice *vdev = container_of(pbdev->pdev, VFIOPCIDevice, pdev); +struct vfio_device_feature feat = { +.argsz = sizeof(struct vfio_device_feature), +.flags = VFIO_DEVICE_FEATURE_PROBE + VFIO_DEVICE_FEATURE_ZPCI_IOAT +}; + +a
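For context, the new s390_pci_probe_ioat() helper is built on the generic VFIO device-feature ioctl. A rough sketch of that probe pattern follows; note that VFIO_DEVICE_FEATURE_ZPCI_IOAT is the feature bit proposed by this series (not part of the released uapi), the plain file descriptor is an assumption made for illustration, and the flags are combined with '|':

/* Illustrative sketch only: vfio_fd is assumed to be an open VFIO device fd,
 * and VFIO_DEVICE_FEATURE_ZPCI_IOAT is the constant proposed by this series. */
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int probe_zpci_ioat(int vfio_fd)
{
    struct vfio_device_feature feat = {
        .argsz = sizeof(feat),
        .flags = VFIO_DEVICE_FEATURE_PROBE | VFIO_DEVICE_FEATURE_ZPCI_IOAT,
    };

    /* PROBE only asks whether the feature is supported; no data is moved.
     * A return value of 0 means the IOAT assist is available. */
    return ioctl(vfio_fd, VFIO_DEVICE_FEATURE, &feat);
}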
Re: [PATCH 12/12] s390x/pci: let intercept devices have separate PCI groups
On 12/7/21 22:04, Matthew Rosato wrote: Let's use the reserved pool of simulated PCI groups to allow intercept devices to have separate groups from interpreted devices as some group values may be different. If we run out of simulated PCI groups, subsequent intercept devices just get the default group. Furthermore, if we encounter any PCI groups from hostdevs that are marked as simulated, let's just assign them to the default group to avoid conflicts between host simulated groups and our own simulated groups. I have a problem here. We will have the same hardware viewed by two different VFIO implementations (interpretation vs interception) reporting different group IDs. The alternative is to have them report the same group ID with different values. I fear both are wrong. On the other hand, should we have a difference in the QEMU command line between intercepted and interpreted devices for default values? If not, why not give up the host values so that in a hypothetical future migration we are clean with the GID? I am not sure of this, just want to open a little discussion on this. For example, what could go wrong if we keep the host values returned by the CAP? Signed-off-by: Matthew Rosato --- hw/s390x/s390-pci-bus.c | 19 ++-- hw/s390x/s390-pci-vfio.c| 40 ++--- include/hw/s390x/s390-pci-bus.h | 6 - 3 files changed, 59 insertions(+), 6 deletions(-) diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c index ab442f17fb..8b0f3ef120 100644 --- a/hw/s390x/s390-pci-bus.c +++ b/hw/s390x/s390-pci-bus.c @@ -747,13 +747,14 @@ static void s390_pci_iommu_free(S390pciState *s, PCIBus *bus, int32_t devfn) object_unref(OBJECT(iommu)); } -S390PCIGroup *s390_group_create(int id) +S390PCIGroup *s390_group_create(int id, int host_id) { S390PCIGroup *group; S390pciState *s = s390_get_phb(); group = g_new0(S390PCIGroup, 1); group->id = id; +group->host_id = host_id; QTAILQ_INSERT_TAIL(&s->zpci_groups, group, link); return group; } @@ -771,12 +772,25 @@ S390PCIGroup *s390_group_find(int id) return NULL; } +S390PCIGroup *s390_group_find_host_sim(int host_id) +{ +S390PCIGroup *group; +S390pciState *s = s390_get_phb(); + +QTAILQ_FOREACH(group, &s->zpci_groups, link) { +if (group->id >= ZPCI_SIM_GRP_START && group->host_id == host_id) { +return group; +} +} +return NULL; +} + static void s390_pci_init_default_group(void) { S390PCIGroup *group; ClpRspQueryPciGrp *resgrp; -group = s390_group_create(ZPCI_DEFAULT_FN_GRP); +group = s390_group_create(ZPCI_DEFAULT_FN_GRP, ZPCI_DEFAULT_FN_GRP); resgrp = &group->zpci_group; resgrp->fr = 1; resgrp->dasm = 0; @@ -824,6 +838,7 @@ static void s390_pcihost_realize(DeviceState *dev, Error **errp) NULL, g_free); s->zpci_table = g_hash_table_new_full(g_int_hash, g_int_equal, NULL, NULL); s->bus_no = 0; +s->next_sim_grp = ZPCI_SIM_GRP_START; QTAILQ_INIT(&s->pending_sei); QTAILQ_INIT(&s->zpci_devs); QTAILQ_INIT(&s->zpci_dma_limit); diff --git a/hw/s390x/s390-pci-vfio.c b/hw/s390x/s390-pci-vfio.c index c9269683f5..bdc5892287 100644 --- a/hw/s390x/s390-pci-vfio.c +++ b/hw/s390x/s390-pci-vfio.c @@ -305,13 +305,17 @@ static void s390_pci_read_group(S390PCIBusDevice *pbdev, { struct vfio_info_cap_header *hdr; struct vfio_device_info_cap_zpci_group *cap; +S390pciState *s = s390_get_phb(); ClpRspQueryPciGrp *resgrp; VFIOPCIDevice *vpci = container_of(pbdev->pdev, VFIOPCIDevice, pdev); hdr = vfio_get_device_info_cap(info, VFIO_DEVICE_INFO_CAP_ZPCI_GROUP); -/* If capability not provided, just use the default group */ -if (hdr == NULL) { +/* + * If capability not provided or the underlying
hostdev is simulated, just + * use the default group. + */ +if (hdr == NULL || pbdev->zpci_fn.pfgid >= ZPCI_SIM_GRP_START) { trace_s390_pci_clp_cap(vpci->vbasedev.name, VFIO_DEVICE_INFO_CAP_ZPCI_GROUP); pbdev->zpci_fn.pfgid = ZPCI_DEFAULT_FN_GRP; @@ -320,11 +324,41 @@ static void s390_pci_read_group(S390PCIBusDevice *pbdev, } cap = (void *) hdr; +/* + * For an intercept device, let's use an existing simulated group if one + * one was already created for other intercept devices in this group. + * If not, create a new simulated group if any are still available. + * If all else fails, just fall back on the default group. + */ +if (!pbdev->interp) { +pbdev->pci_group = s390_group_find_host_sim(pbdev->zpci_fn.pfgid); +if (pbdev->pci_group) { +/* Use existing simulated group */ +pbdev->zpci_fn.pfgid = pbdev->pci_group->id; +return; +} else { +if (s->next_sim_
Re: [PATCH v2 1/9] hw/intc: sifive_plic: Add a reset function
On 12/16/21 05:54, Alistair Francis wrote: > From: Alistair Francis > > Signed-off-by: Alistair Francis > --- > hw/intc/sifive_plic.c | 18 ++ > 1 file changed, 18 insertions(+) Reviewed-by: Philippe Mathieu-Daudé
Re: [PATCH v2 8/9] hw/riscv: virt: Allow support for 32 cores
On 12/16/21 06:58, Anup Patel wrote: > On Thu, Dec 16, 2021 at 10:27 AM Alistair Francis > wrote: >> >> From: Alistair Francis >> >> Linux supports up to 32 cores for both 32-bit and 64-bit RISC-V, so >> let's set that as the maximum for the virt board. >> >> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/435 >> Signed-off-by: Alistair Francis > IMO, we should keep QEMU VIRT_CPUS_MAX as high as > possible to allow any kind of software: Linux, OpenSBI, FreeBSD, > Xvisor, Xen, etc. Let the guest software decide its own limit (such > as NR_CPUS of Linux). Agreed. Reviewed-by: Philippe Mathieu-Daudé > > Reviewed-by: Anup Patel > > Regards, > Anup
Re: [PATCH v2 2/2] hw/arm: Add Cortex-A5 to virt device
On 12/16/21 07:48, Byron Lathi wrote: > Add the Cortex-A5 to the list of supported CPUs by the virt platform. > > Signed-off-by: Byron Lathi > --- > docs/system/arm/virt.rst | 1 + > hw/arm/virt.c| 1 + > 2 files changed, 2 insertions(+) Reviewed-by: Philippe Mathieu-Daudé
[PATCH] gitlab-ci: Speed up the msys2-64bit job by using --without-default-devices
The new msys2-64bit job often runs for more than 50 minutes, and if the CI is currently loaded, it times out after 60 minutes. The job has been declared with a bigger timeout, but it seems this is being ignored on the shared Gitlab-CI Windows runners, so we're currently seeing a lot of failures with this job. Thus we have to reduce the time it takes to finish this job. Since we want to test compiling the WHPX and HAX accelerator code with this job, switching to another target CPU is not really a good option, so let's reduce the amount of code that we have to compile by using the --without-default-devices switch instead. Signed-off-by: Thomas Huth --- .gitlab-ci.d/windows.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.gitlab-ci.d/windows.yml b/.gitlab-ci.d/windows.yml index 309f7e7fb8..62dd9ed832 100644 --- a/.gitlab-ci.d/windows.yml +++ b/.gitlab-ci.d/windows.yml @@ -58,7 +58,7 @@ msys2-64bit: - $env:CHERE_INVOKING = 'yes' # Preserve the current working directory - $env:MSYSTEM = 'MINGW64' # Start a 64 bit Mingw environment - .\msys64\usr\bin\bash -lc './configure --target-list=x86_64-softmmu - --enable-capstone=system' + --enable-capstone=system --without-default-devices' - .\msys64\usr\bin\bash -lc "sed -i '/^ROMS=/d' build/config-host.mak" - .\msys64\usr\bin\bash -lc 'make -j2' - .\msys64\usr\bin\bash -lc 'make check' -- 2.27.0
Re: [PATCH v5 4/8] tests/unit/test-smp-parse: Add 'smp-generic-invalid' machine type
On 12/16/21 04:20, wangyanan (Y) wrote: > Hi Philippe, > > On 2021/12/16 0:48, Philippe Mathieu-Daudé wrote: >> Avoid modifying the MachineClass internals by adding the >> 'smp-generic-invalid' machine, which inherits from TYPE_MACHINE. >> >> Reviewed-by: Richard Henderson >> Signed-off-by: Philippe Mathieu-Daudé >> --- >> tests/unit/test-smp-parse.c | 25 - >> 1 file changed, 16 insertions(+), 9 deletions(-) >> @@ -606,6 +609,10 @@ static const TypeInfo smp_machine_types[] = { >> .class_init = machine_base_class_init, >> .class_size = sizeof(MachineClass), >> .instance_size = sizeof(MachineState), >> + }, { >> + .name = MACHINE_TYPE_NAME("smp-generic-invalid"), >> + .parent = TYPE_MACHINE, >> + .class_init = machine_without_dies_invalid_class_init, > Maybe it's better to rename "machine_without_dies_invalid_class_init" to > "machine_generic_invalid_class_init" to be consistent with the .name field. Indeed I missed that... Thanks.
[PATCH] ui: fix gtk clipboard clear assertion
From: Marc-André Lureau When closing the QEMU Gtk display window, it can occasionally warn: qemu-system-x86_64: Gtk: gtk_clipboard_set_with_data: assertion 'targets != NULL' failed #3 0x74f02f22 in gtk_clipboard_set_with_data (clipboard=, targets=, n_targets=, get_func=, clear_func=, user_data=) at /usr/src/debug/gtk3-3.24.30-4.fc35.x86_64/gtk/gtkclipboard.c:672 #4 0x7552cd75 in gd_clipboard_update_info (gd=0x579a9e00, info=0x57ba4b50) at ../ui/gtk-clipboard.c:98 #5 0x7552ce00 in gd_clipboard_notify (notifier=0x579aaba8, data=0x7fffd720) at ../ui/gtk-clipboard.c:128 #6 0x5603e0ff in notifier_list_notify (list=0x56657470 , data=0x7fffd720) at ../util/notify.c:39 #7 0x5594e8e0 in qemu_clipboard_update (info=0x57ba4b50) at ../ui/clipboard.c:54 #8 0x5594e840 in qemu_clipboard_peer_release (peer=0x5684a5b0, selection=QEMU_CLIPBOARD_SELECTION_PRIMARY) at ../ui/clipboard.c:40 #9 0x5594e786 in qemu_clipboard_peer_unregister (peer=0x5684a5b0) at ../ui/clipboard.c:19 #10 0x5595f044 in vdagent_disconnect (vd=0x5684a400) at ../ui/vdagent.c:852 #11 0x5595f262 in vdagent_chr_fini (obj=0x5684a400) at ../ui/vdagent.c:908 Signed-off-by: Marc-André Lureau --- ui/gtk-clipboard.c | 21 +++-- 1 file changed, 11 insertions(+), 10 deletions(-) diff --git a/ui/gtk-clipboard.c b/ui/gtk-clipboard.c index 35b7a2c22838..0ed630cf9b68 100644 --- a/ui/gtk-clipboard.c +++ b/ui/gtk-clipboard.c @@ -84,7 +84,7 @@ static void gd_clipboard_notify(Notifier *notifier, void *data) if (info != qemu_clipboard_info(s)) { gd->cbpending[s] = 0; if (!self_update) { -GtkTargetList *list; +g_autoptr(GtkTargetList) list = NULL; GtkTargetEntry *targets; gint n_targets; @@ -95,15 +95,16 @@ static void gd_clipboard_notify(Notifier *notifier, void *data) targets = gtk_target_table_new_from_list(list, &n_targets); gtk_clipboard_clear(gd->gtkcb[s]); -gd->cbowner[s] = true; -gtk_clipboard_set_with_data(gd->gtkcb[s], -targets, n_targets, -gd_clipboard_get_data, -gd_clipboard_clear, -gd); - -gtk_target_table_free(targets, n_targets); -gtk_target_list_unref(list); +if (targets) { +gd->cbowner[s] = true; +gtk_clipboard_set_with_data(gd->gtkcb[s], +targets, n_targets, +gd_clipboard_get_data, +gd_clipboard_clear, +gd); + +gtk_target_table_free(targets, n_targets); +} } return; } -- 2.34.1.8.g35151cf07204
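For reference, the change from a bare GtkTargetList pointer to g_autoptr() means the list is unreferenced automatically when it goes out of scope, which is why the explicit gtk_target_list_unref() call disappears. Here is a minimal sketch of that pattern, separate from the patch (the function name and target-list contents are made up for the example):

/* Hypothetical example of the g_autoptr() pattern used in the patch. */
#include <gtk/gtk.h>

static void fill_clipboard_targets(void)
{
    /* Unreferenced automatically when 'list' goes out of scope. */
    g_autoptr(GtkTargetList) list = gtk_target_list_new(NULL, 0);

    gtk_target_list_add_text_targets(list, 0);
    gtk_target_list_add_image_targets(list, 0, TRUE);

    /* ... convert to a target table and hand it to gtk_clipboard_set_with_data() ... */
}   /* implicit gtk_target_list_unref(list) here */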
[PATCH 0/2] hw/net: Move MV88W8618 network device out of hw/arm/ directory
This series simply extracts the MV88W8618 device from the ARM machine in hw/arm/ and moves it to hw/net/. I was expecting this to get merged before posting a generic network code rework series, then figured I never posted it >_< Philippe Mathieu-Daudé (2): hw/arm/musicpal: Fix coding style of code related to MV88W8618 device hw/net: Move MV88W8618 network device out of hw/arm/ directory include/hw/net/mv88w8618_eth.h | 16 ++ hw/arm/musicpal.c | 380 +- hw/net/mv88w8618_eth.c | 406 + MAINTAINERS| 2 + hw/net/meson.build | 1 + 5 files changed, 426 insertions(+), 379 deletions(-) create mode 100644 include/hw/net/mv88w8618_eth.h create mode 100644 hw/net/mv88w8618_eth.c -- 2.33.1
[PATCH 1/2] hw/arm/musicpal: Fix coding style of code related to MV88W8618 device
We are going to move this code, so fix its style first to avoid: ERROR: spaces required around that '/' (ctx:VxV) Signed-off-by: Philippe Mathieu-Daudé --- hw/arm/musicpal.c | 14 +++--- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/hw/arm/musicpal.c b/hw/arm/musicpal.c index 2d612cc0c9b..6b5310117b8 100644 --- a/hw/arm/musicpal.c +++ b/hw/arm/musicpal.c @@ -309,13 +309,13 @@ static uint64_t mv88w8618_eth_read(void *opaque, hwaddr offset, return s->imr; case MP_ETH_FRDP0 ... MP_ETH_FRDP3: -return s->frx_queue[(offset - MP_ETH_FRDP0)/4]; +return s->frx_queue[(offset - MP_ETH_FRDP0) / 4]; case MP_ETH_CRDP0 ... MP_ETH_CRDP3: -return s->rx_queue[(offset - MP_ETH_CRDP0)/4]; +return s->rx_queue[(offset - MP_ETH_CRDP0) / 4]; case MP_ETH_CTDP0 ... MP_ETH_CTDP1: -return s->tx_queue[(offset - MP_ETH_CTDP0)/4]; +return s->tx_queue[(offset - MP_ETH_CTDP0) / 4]; default: return 0; @@ -360,16 +360,16 @@ static void mv88w8618_eth_write(void *opaque, hwaddr offset, break; case MP_ETH_FRDP0 ... MP_ETH_FRDP3: -s->frx_queue[(offset - MP_ETH_FRDP0)/4] = value; +s->frx_queue[(offset - MP_ETH_FRDP0) / 4] = value; break; case MP_ETH_CRDP0 ... MP_ETH_CRDP3: -s->rx_queue[(offset - MP_ETH_CRDP0)/4] = -s->cur_rx[(offset - MP_ETH_CRDP0)/4] = value; +s->rx_queue[(offset - MP_ETH_CRDP0) / 4] = +s->cur_rx[(offset - MP_ETH_CRDP0) / 4] = value; break; case MP_ETH_CTDP0 ... MP_ETH_CTDP1: -s->tx_queue[(offset - MP_ETH_CTDP0)/4] = value; +s->tx_queue[(offset - MP_ETH_CTDP0) / 4] = value; break; } } -- 2.33.1
Re: [PATCH v2 4/4] intel_iommu: Fix irqchip / X2APIC configuration checks
Hi, David, On Thu, Dec 09, 2021 at 10:08:40PM +, David Woodhouse wrote: > We don't need to check kvm_enable_x2apic(). It's perfectly OK to support > interrupt remapping even if we can't address CPUs above 254. Kind of > pointless, but still functional. We only check kvm_enable_x2apic() if eim=on is set, right? I mean, we can still enable IR without x2apic even with the current code? Could you elaborate on the use scenario for this patch? Thanks in advance. > > The check on kvm_enable_x2apic() needs to happen *anyway* in order to > allow CPUs above 254 even without an IOMMU, so allow that to happen > elsewhere. > > However, we do require the *split* irqchip in order to rewrite I/OAPIC > destinations. So fix that check while we're here. > > Signed-off-by: David Woodhouse > Reviewed-by: Peter Xu > Acked-by: Jason Wang I think the r-b and a-b should be for patch 2, not this one? :) > --- > hw/i386/intel_iommu.c | 7 +-- > 1 file changed, 1 insertion(+), 6 deletions(-) > > diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c > index bd288d45bb..0d1c72f08e 100644 > --- a/hw/i386/intel_iommu.c > +++ b/hw/i386/intel_iommu.c > @@ -3760,15 +3760,10 @@ static bool vtd_decide_config(IntelIOMMUState *s, > Error **errp) >ON_OFF_AUTO_ON : > ON_OFF_AUTO_OFF; > } > if (s->intr_eim == ON_OFF_AUTO_ON && !s->buggy_eim) { > -if (!kvm_irqchip_in_kernel()) { > +if (!kvm_irqchip_is_split()) { I think this is okay, but note that we'll already fail if !split in x86_iommu_realize(): bool irq_all_kernel = kvm_irqchip_in_kernel() && !kvm_irqchip_is_split(); /* Both Intel and AMD IOMMU IR only support "kernel-irqchip={off|split}" */ if (x86_iommu_ir_supported(x86_iommu) && irq_all_kernel) { error_setg(errp, "Interrupt Remapping cannot work with " "kernel-irqchip=on, please use 'split|off'."); return; } > error_setg(errp, "eim=on requires > accel=kvm,kernel-irqchip=split"); > return false; > } > -if (!kvm_enable_x2apic()) { > -error_setg(errp, "eim=on requires support on the KVM side" > - "(X2APIC_API, first shipped in v4.7)"); > -return false; > -} > } > > /* Currently only address widths supported are 39 and 48 bits */ > -- > 2.31.1 > -- Peter Xu
[PATCH 2/2] hw/net: Move MV88W8618 network device out of hw/arm/ directory
The Marvell 88W8618 network device is hidden in the Musicpal machine. Move it into a new unit file under the hw/net/ directory. Signed-off-by: Philippe Mathieu-Daudé --- include/hw/net/mv88w8618_eth.h | 16 ++ hw/arm/musicpal.c | 380 +- hw/net/mv88w8618_eth.c | 406 + MAINTAINERS| 2 + hw/net/meson.build | 1 + 5 files changed, 426 insertions(+), 379 deletions(-) create mode 100644 include/hw/net/mv88w8618_eth.h create mode 100644 hw/net/mv88w8618_eth.c diff --git a/include/hw/net/mv88w8618_eth.h b/include/hw/net/mv88w8618_eth.h new file mode 100644 index 000..65e880e0eb7 --- /dev/null +++ b/include/hw/net/mv88w8618_eth.h @@ -0,0 +1,16 @@ +/* + * Marvell MV88W8618 / Freecom MusicPal emulation. + * + * Copyright (c) 2008 Jan Kiszka + * + * This code is licensed under the GNU GPL v2. + * + * Contributions after 2012-01-13 are licensed under the terms of the + * GNU GPL, version 2 or (at your option) any later version. + */ +#ifndef HW_NET_MV88W8618_H +#define HW_NET_MV88W8618_H + +#define TYPE_MV88W8618_ETH "mv88w8618_eth" + +#endif diff --git a/hw/arm/musicpal.c b/hw/arm/musicpal.c index 6b5310117b8..7c840fb4283 100644 --- a/hw/arm/musicpal.c +++ b/hw/arm/musicpal.c @@ -34,12 +34,12 @@ #include "ui/pixel_ops.h" #include "qemu/cutils.h" #include "qom/object.h" +#include "hw/net/mv88w8618_eth.h" #define MP_MISC_BASE0x80002000 #define MP_MISC_SIZE0x1000 #define MP_ETH_BASE 0x80008000 -#define MP_ETH_SIZE 0x1000 #define MP_WLAN_BASE0x8000C000 #define MP_WLAN_SIZE0x0800 @@ -84,383 +84,6 @@ /* Wolfson 8750 I2C address */ #define MP_WM_ADDR 0x1A -/* Ethernet register offsets */ -#define MP_ETH_SMIR 0x010 -#define MP_ETH_PCXR 0x408 -#define MP_ETH_SDCMR0x448 -#define MP_ETH_ICR 0x450 -#define MP_ETH_IMR 0x458 -#define MP_ETH_FRDP00x480 -#define MP_ETH_FRDP10x484 -#define MP_ETH_FRDP20x488 -#define MP_ETH_FRDP30x48C -#define MP_ETH_CRDP00x4A0 -#define MP_ETH_CRDP10x4A4 -#define MP_ETH_CRDP20x4A8 -#define MP_ETH_CRDP30x4AC -#define MP_ETH_CTDP00x4E0 -#define MP_ETH_CTDP10x4E4 - -/* MII PHY access */ -#define MP_ETH_SMIR_DATA0x -#define MP_ETH_SMIR_ADDR0x03FF -#define MP_ETH_SMIR_OPCODE (1 << 26) /* Read value */ -#define MP_ETH_SMIR_RDVALID (1 << 27) - -/* PHY registers */ -#define MP_ETH_PHY1_BMSR0x0021 -#define MP_ETH_PHY1_PHYSID1 0x0041 -#define MP_ETH_PHY1_PHYSID2 0x0061 - -#define MP_PHY_BMSR_LINK0x0004 -#define MP_PHY_BMSR_AUTONEG 0x0008 - -#define MP_PHY_88E3015 0x01410E20 - -/* TX descriptor status */ -#define MP_ETH_TX_OWN (1U << 31) - -/* RX descriptor status */ -#define MP_ETH_RX_OWN (1U << 31) - -/* Interrupt cause/mask bits */ -#define MP_ETH_IRQ_RX_BIT 0 -#define MP_ETH_IRQ_RX (1 << MP_ETH_IRQ_RX_BIT) -#define MP_ETH_IRQ_TXHI_BIT 2 -#define MP_ETH_IRQ_TXLO_BIT 3 - -/* Port config bits */ -#define MP_ETH_PCXR_2BSM_BIT28 /* 2-byte incoming suffix */ - -/* SDMA command bits */ -#define MP_ETH_CMD_TXHI (1 << 23) -#define MP_ETH_CMD_TXLO (1 << 22) - -typedef struct mv88w8618_tx_desc { -uint32_t cmdstat; -uint16_t res; -uint16_t bytes; -uint32_t buffer; -uint32_t next; -} mv88w8618_tx_desc; - -typedef struct mv88w8618_rx_desc { -uint32_t cmdstat; -uint16_t bytes; -uint16_t buffer_size; -uint32_t buffer; -uint32_t next; -} mv88w8618_rx_desc; - -#define TYPE_MV88W8618_ETH "mv88w8618_eth" -OBJECT_DECLARE_SIMPLE_TYPE(mv88w8618_eth_state, MV88W8618_ETH) - -struct mv88w8618_eth_state { -/*< private >*/ -SysBusDevice parent_obj; -/*< public >*/ - -MemoryRegion iomem; -qemu_irq irq; -MemoryRegion *dma_mr; -AddressSpace dma_as; -uint32_t smir; -uint32_t icr; -uint32_t imr; -int mmio_index; -uint32_t 
vlan_header; -uint32_t tx_queue[2]; -uint32_t rx_queue[4]; -uint32_t frx_queue[4]; -uint32_t cur_rx[4]; -NICState *nic; -NICConf conf; -}; - -static void eth_rx_desc_put(AddressSpace *dma_as, uint32_t addr, -mv88w8618_rx_desc *desc) -{ -cpu_to_le32s(&desc->cmdstat); -cpu_to_le16s(&desc->bytes); -cpu_to_le16s(&desc->buffer_size); -cpu_to_le32s(&desc->buffer); -cpu_to_le32s(&desc->next); -dma_memory_write(dma_as, addr, desc, sizeof(*desc)); -} - -static void eth_rx_desc_get(AddressSpace *dma_as, uint32_t addr, -mv88w8618_rx_desc *desc) -{ -dma_memory_read(dma_as, addr, desc, sizeof(*desc)); -le32_to_cpus(&desc->cmdstat); -le16_to_cpus(&desc->bytes); -l
[PATCH 02/10] configure: make $targetos lowercase, use windows instead of MINGW32
targetos is already mostly the same as Meson host_machine.system(), just in CamelCase. Adjust Windows, which is different, and switch to lowercase to match Meson. Signed-off-by: Paolo Bonzini --- configure | 58 --- 1 file changed, 25 insertions(+), 33 deletions(-) diff --git a/configure b/configure index ba7ab435a6..63438c1900 100755 --- a/configure +++ b/configure @@ -502,30 +502,30 @@ EOF } if check_define __linux__ ; then - targetos="Linux" + targetos=linux elif check_define _WIN32 ; then - targetos='MINGW32' + targetos=windows elif check_define __OpenBSD__ ; then - targetos='OpenBSD' + targetos=openbsd elif check_define __sun__ ; then - targetos='SunOS' + targetos=sunos elif check_define __HAIKU__ ; then - targetos='Haiku' + targetos=haiku elif check_define __FreeBSD__ ; then - targetos='FreeBSD' + targetos=freebsd elif check_define __FreeBSD_kernel__ && check_define __GLIBC__; then - targetos='GNU/kFreeBSD' + targetos=gnu/kfreebsd elif check_define __DragonFly__ ; then - targetos='DragonFly' + targetos=dragonfly elif check_define __NetBSD__; then - targetos='NetBSD' + targetos=netbsd elif check_define __APPLE__; then - targetos='Darwin' + targetos=darwin else # This is a fatal error, but don't report it yet, because we # might be going to just print the --help text, or it might # be the result of a missing compiler. - targetos='bogus' + targetos=bogus fi # Some host OSes need non-standard checks for which CPU to use. @@ -533,7 +533,7 @@ fi # cross-compiling to one of these OSes then you'll need to specify # the correct CPU with the --cpu option. case $targetos in -SunOS) +sunos) # $(uname -m) returns i86pc even on an x86_64 box, so default based on isainfo if test -z "$cpu" && test "$(isainfo -k)" = "amd64"; then cpu="x86_64" @@ -624,40 +624,40 @@ fi # OS specific case $targetos in -MINGW32*) +windows) mingw32="yes" plugins="no" pie="no" ;; -GNU/kFreeBSD) +gnu/kfreebsd) bsd="yes" ;; -FreeBSD) +freebsd) bsd="yes" bsd_user="yes" make="${MAKE-gmake}" # needed for kinfo_getvmmap(3) in libutil.h ;; -DragonFly) +dragonfly) bsd="yes" make="${MAKE-gmake}" ;; -NetBSD) +netbsd) bsd="yes" make="${MAKE-gmake}" ;; -OpenBSD) +openbsd) bsd="yes" make="${MAKE-gmake}" ;; -Darwin) +darwin) bsd="yes" darwin="yes" # Disable attempts to use ObjectiveC features in os/object.h since they # won't work when we're compiling with gcc as a C compiler. QEMU_CFLAGS="-DOS_OBJECT_USE_OBJC=0 $QEMU_CFLAGS" ;; -SunOS) +sunos) solaris="yes" make="${MAKE-gmake}" smbd="${SMBD-/usr/sfw/sbin/smbd}" @@ -666,11 +666,11 @@ SunOS) # needed for TIOCWIN* defines in termios.h QEMU_CFLAGS="-D__EXTENSIONS__ $QEMU_CFLAGS" ;; -Haiku) +haiku) pie="no" QEMU_CFLAGS="-DB_USE_POSITIVE_POSIX_ERRORS -D_BSD_SOURCE -fPIC $QEMU_CFLAGS" ;; -Linux) +linux) linux="yes" linux_user="yes" vhost_user=${default_feature:-yes} @@ -3355,8 +3355,8 @@ QEMU_GA_MSI_MINGW_DLL_PATH="$($pkg_config --variable=prefix glib-2.0)/bin" # Mac OS X ships with a broken assembler roms= if { test "$cpu" = "i386" || test "$cpu" = "x86_64"; } && \ -test "$targetos" != "Darwin" && test "$targetos" != "SunOS" && \ -test "$targetos" != "Haiku" && test "$softmmu" = yes ; then +test "$targetos" != "darwin" && test "$targetos" != "sunos" && \ +test "$targetos" != "haiku" && test "$softmmu" = yes ; then # Different host OS linkers have different ideas about the name of the ELF # emulation. Linux and OpenBSD/amd64 use 'elf_i386'; FreeBSD uses the _fbsd # variant; OpenBSD/i386 uses the _obsd variant; and Windows uses i386pe. 
@@ -3914,15 +3914,7 @@ if test "$skip_meson" = no; then if test "$cross_compile" = "yes"; then cross_arg="--cross-file config-meson.cross" echo "[host_machine]" >> $cross -if test "$mingw32" = "yes" ; then -echo "system = 'windows'" >> $cross -fi -if test "$linux" = "yes" ; then -echo "system = 'linux'" >> $cross -fi -if test "$darwin" = "yes" ; then -echo "system = 'darwin'" >> $cross -fi +echo "system = '$targetos'" >> $cross case "$ARCH" in i386) echo "cpu_family = 'x86'" >> $cross -- 2.33.1
Re: [PATCH v2 2/4] intel_iommu: Support IR-only mode without DMA translation
On Thu, Dec 09, 2021 at 10:08:38PM +, David Woodhouse wrote: > From: David Woodhouse > > By setting none of the SAGAW bits we can indicate to a guest that DMA > translation isn't supported. Tested by booting Windows 10, as well as > Linux guests with the fix at https://git.kernel.org/torvalds/c/c40c10 > > Signed-off-by: David Woodhouse > Acked-by: Claudio Fontana Reviewed-by: Peter Xu -- Peter Xu
[PATCH 01/10] configure: simplify creation of plugin symbol list
--dynamic-list is present on all supported ELF (not Windows or Darwin) platforms, since it dates back to 2006; -exported_symbols_list is likewise present on all supported versions of macOS. Do not bother doing a functional test in configure. Since stuff is being moved to meson, move the creation of the Darwin-formatted symbols list there, reducing the transform to a single sed command. This also requires using -Xlinker instead of -Wl, in order to support weird paths that include a comma. Signed-off-by: Paolo Bonzini --- configure | 16 plugins/meson.build | 13 + 2 files changed, 9 insertions(+), 20 deletions(-) diff --git a/configure b/configure index d3aac031a5..ba7ab435a6 100755 --- a/configure +++ b/configure @@ -3689,22 +3689,6 @@ fi if test "$plugins" = "yes" ; then echo "CONFIG_PLUGIN=y" >> $config_host_mak -# Copy the export object list to the build dir -if test "$ld_dynamic_list" = "yes" ; then - echo "CONFIG_HAS_LD_DYNAMIC_LIST=yes" >> $config_host_mak - ld_symbols=qemu-plugins-ld.symbols - cp "$source_path/plugins/qemu-plugins.symbols" $ld_symbols -elif test "$ld_exported_symbols_list" = "yes" ; then - echo "CONFIG_HAS_LD_EXPORTED_SYMBOLS_LIST=yes" >> $config_host_mak - ld64_symbols=qemu-plugins-ld64.symbols - echo "# Automatically generated by configure - do not modify" > $ld64_symbols - grep 'qemu_' "$source_path/plugins/qemu-plugins.symbols" | sed 's/;//g' | \ - sed -E 's/^[[:space:]]*(.*)/_\1/' >> $ld64_symbols -else - error_exit \ - "If \$plugins=yes, either \$ld_dynamic_list or " \ - "\$ld_exported_symbols_list should have been set to 'yes'." -fi fi if test -n "$gdb_bin"; then diff --git a/plugins/meson.build b/plugins/meson.build index b3de57853b..eec10283c5 100644 --- a/plugins/meson.build +++ b/plugins/meson.build @@ -1,10 +1,15 @@ plugin_ldflags = [] # Modules need more symbols than just those in plugins/qemu-plugins.symbols if not enable_modules - if 'CONFIG_HAS_LD_DYNAMIC_LIST' in config_host -plugin_ldflags = ['-Wl,--dynamic-list=qemu-plugins-ld.symbols'] - elif 'CONFIG_HAS_LD_EXPORTED_SYMBOLS_LIST' in config_host -plugin_ldflags = ['-Wl,-exported_symbols_list,qemu-plugins-ld64.symbols'] + if targetos == 'darwin' +qemu_plugins_symbols_list = configure_file( + input: files('qemu-plugins.symbols'), + output: 'qemu-plugins-ld64.symbols', + capture: true, + command: ['sed', '-ne', 's/^[[:space:]]*\\(qemu_.*\\);/_\\1/p', '@INPUT@']) +plugin_ldflags = ['-Xlinker', '-exported_symbols_list', '-Xlinker', qemu_plugins_symbols_list] + else +plugin_ldflags = ['-Xlinker', '--dynamic-list=' + (meson.project_source_root() / 'plugins/qemu-plugins.symbols')] endif endif -- 2.33.1
[PATCH 00/10] configure cleanups, mostly wrt $cpu and $targetos
The bulk is in patches 2 to 9; they unify the names of architectures between configure and meson, and remove the ARCH symbol from configure. It is only used during QEMU build, so it can be derived from host_machine.cpu_family(), instead of having to pass it in config-host.mak. Paolo Bonzini (10): configure: simplify creation of plugin symbol list configure: make $targetos lowercase, use windows instead of MINGW32 configure: move target detection before CPU detection configure: do not set bsd_user/linux_user early configure: unify two case statements on $cpu configure: unify ppc64 and ppc64le configure: unify x86_64 and x32 meson: rename "arch" variable configure, meson: move ARCH to meson.build configure: remove unnecessary symlinks configure | 283 + meson.build| 39 +++--- pc-bios/meson.build| 2 +- plugins/meson.build| 13 +- tests/tcg/configure.sh | 4 +- 5 files changed, 155 insertions(+), 186 deletions(-) -- 2.33.1
[PATCH 09/10] configure, meson: move ARCH to meson.build
$ARCH and the HOST_* symbols are only used by the QEMU build; configure uses $cpu instead. Remove it from config-host.mak. Signed-off-by: Paolo Bonzini --- configure | 21 - meson.build | 26 +- 2 files changed, 21 insertions(+), 26 deletions(-) diff --git a/configure b/configure index 741c33c7ad..2cecbd4c66 100755 --- a/configure +++ b/configure @@ -634,11 +634,9 @@ else cpu=$(uname -m) fi -ARCH= -# Normalise host CPU name, set ARCH and multilib cflags +# Normalise host CPU name, set multilib cflags # Note that this case should only have supported host CPUs, not guests. case "$cpu" in - aarch64|riscv) ;; armv*b|armv*l|arm) cpu="arm" ;; @@ -667,8 +665,7 @@ case "$cpu" in CPU_CFLAGS="-m64 -mlittle" ;; s390) -CPU_CFLAGS="-m31" -ARCH=unknown ;; +CPU_CFLAGS="-m31" ;; s390x) CPU_CFLAGS="-m64" ;; @@ -677,15 +674,7 @@ case "$cpu" in CPU_CFLAGS="-m32 -mv8plus -mcpu=ultrasparc" ;; sparc64) CPU_CFLAGS="-m64 -mcpu=ultrasparc" ;; - - *) -# This will result in either an error or falling back to TCI later -ARCH=unknown - ;; esac -if test -z "$ARCH"; then - ARCH="$cpu" -fi : ${make=${MAKE-make}} @@ -3432,8 +3421,6 @@ echo "GIT=$git" >> $config_host_mak echo "GIT_SUBMODULES=$git_submodules" >> $config_host_mak echo "GIT_SUBMODULES_ACTION=$git_submodules_action" >> $config_host_mak -echo "ARCH=$ARCH" >> $config_host_mak - if test "$debug_tcg" = "yes" ; then echo "CONFIG_DEBUG_TCG=y" >> $config_host_mak fi @@ -3914,12 +3901,12 @@ if test "$skip_meson" = no; then cross_arg="--cross-file config-meson.cross" echo "[host_machine]" >> $cross echo "system = '$targetos'" >> $cross -case "$ARCH" in +case "$cpu" in i386) echo "cpu_family = 'x86'" >> $cross ;; *) -echo "cpu_family = '$ARCH'" >> $cross +echo "cpu_family = '$cpu'" >> $cross ;; esac echo "cpu = '$cpu'" >> $cross diff --git a/meson.build b/meson.build index 677ab1b57d..93f296e283 100644 --- a/meson.build +++ b/meson.build @@ -67,6 +67,14 @@ endif targetos = host_machine.system() +if cpu not in supported_cpus + host_arch = 'unknown' +elif cpu == 'x86' + host_arch = 'i386' +else + host_arch = cpu +endif + if cpu in ['x86', 'x86_64'] kvm_targets = ['i386-softmmu', 'x86_64-softmmu'] elif cpu == 'aarch64' @@ -335,9 +343,9 @@ if targetos == 'netbsd' endif endif -tcg_arch = config_host['ARCH'] +tcg_arch = host_arch if not get_option('tcg').disabled() - if cpu not in supported_cpus + if host_arch == 'unknown' if get_option('tcg_interpreter') warning('Unsupported CPU @0@, will use TCG with TCI (slow)'.format(cpu)) else @@ -353,11 +361,11 @@ if not get_option('tcg').disabled() endif if get_option('tcg_interpreter') tcg_arch = 'tci' - elif config_host['ARCH'] == 'sparc64' + elif host_arch == 'sparc64' tcg_arch = 'sparc' - elif config_host['ARCH'] == 'x86_64' + elif host_arch == 'x86_64' tcg_arch = 'i386' - elif config_host['ARCH'] == 'ppc64' + elif host_arch == 'ppc64' tcg_arch = 'ppc' endif add_project_arguments('-iquote', meson.current_source_dir() / 'tcg' / tcg_arch, @@ -1422,6 +1430,8 @@ config_host_data.set_quoted('CONFIG_QEMU_LOCALSTATEDIR', get_option('prefix') / config_host_data.set_quoted('CONFIG_QEMU_MODDIR', get_option('prefix') / qemu_moddir) config_host_data.set_quoted('CONFIG_SYSCONFDIR', get_option('prefix') / get_option('sysconfdir')) +config_host_data.set('HOST_' + host_arch.to_upper(), 1) + config_host_data.set('CONFIG_ATTR', libattr.found()) config_host_data.set('CONFIG_BRLAPI', brlapi.found()) config_host_data.set('CONFIG_COCOA', cocoa.found()) @@ -1770,8 +1780,6 @@ foreach k, v: config_host v = '"' + '", "'.join(v.split()) + '", ' endif 
config_host_data.set(k, v) - elif k == 'ARCH' -config_host_data.set('HOST_' + v.to_upper(), 1) elif strings.contains(k) config_host_data.set_quoted(k, v) elif k.startswith('CONFIG_') @@ -1914,7 +1922,7 @@ foreach target : target_dirs endif foreach k, v: disassemblers -if config_host['ARCH'].startswith(k) or config_target['TARGET_BASE_ARCH'].startswith(k) +if host_arch.startswith(k) or config_target['TARGET_BASE_ARCH'].startswith(k) foreach sym: v config_target += { sym: 'y' } config_all_disas += { sym: 'y' } @@ -2883,7 +2891,7 @@ foreach target : target_dirs endif if 'CONFIG_LINUX_USER' in config_target base_dir = 'linux-user' - target_inc += include_directories('linux-user/host/' / config_host['ARCH']) + target_inc += include_directories('linux-user/host/' / host_arch) endif if 'CONFIG_BSD_USER' in config_target base_dir = 'bsd-user' -- 2.33.1
[PATCH 03/10] configure: move target detection before CPU detection
This makes more sense, since target detection can affect CPU detection on Solaris. Signed-off-by: Paolo Bonzini --- configure | 115 ++ 1 file changed, 55 insertions(+), 60 deletions(-) diff --git a/configure b/configure index 63438c1900..157691d99e 100755 --- a/configure +++ b/configure @@ -528,16 +528,67 @@ else targetos=bogus fi -# Some host OSes need non-standard checks for which CPU to use. -# Note that these checks are broken for cross-compilation: if you're -# cross-compiling to one of these OSes then you'll need to specify -# the correct CPU with the --cpu option. +# OS specific + case $targetos in +windows) + mingw32="yes" + plugins="no" + pie="no" +;; +gnu/kfreebsd) + bsd="yes" +;; +freebsd) + bsd="yes" + bsd_user="yes" + make="${MAKE-gmake}" + # needed for kinfo_getvmmap(3) in libutil.h +;; +dragonfly) + bsd="yes" + make="${MAKE-gmake}" +;; +netbsd) + bsd="yes" + make="${MAKE-gmake}" +;; +openbsd) + bsd="yes" + make="${MAKE-gmake}" +;; +darwin) + bsd="yes" + darwin="yes" + # Disable attempts to use ObjectiveC features in os/object.h since they + # won't work when we're compiling with gcc as a C compiler. + QEMU_CFLAGS="-DOS_OBJECT_USE_OBJC=0 $QEMU_CFLAGS" +;; sunos) + solaris="yes" + make="${MAKE-gmake}" + smbd="${SMBD-/usr/sfw/sbin/smbd}" +# needed for CMSG_ macros in sys/socket.h + QEMU_CFLAGS="-D_XOPEN_SOURCE=600 $QEMU_CFLAGS" +# needed for TIOCWIN* defines in termios.h + QEMU_CFLAGS="-D__EXTENSIONS__ $QEMU_CFLAGS" # $(uname -m) returns i86pc even on an x86_64 box, so default based on isainfo + # Note that this check is broken for cross-compilation: if you're + # cross-compiling to one of these OSes then you'll need to specify + # the correct CPU with the --cpu option. if test -z "$cpu" && test "$(isainfo -k)" = "amd64"; then cpu="x86_64" fi +;; +haiku) + pie="no" + QEMU_CFLAGS="-DB_USE_POSITIVE_POSIX_ERRORS -D_BSD_SOURCE -fPIC $QEMU_CFLAGS" +;; +linux) + linux="yes" + linux_user="yes" + vhost_user=${default_feature:-yes} +;; esac if test ! -z "$cpu" ; then @@ -621,62 +672,6 @@ if test -z "$ARCH"; then ARCH="$cpu" fi -# OS specific - -case $targetos in -windows) - mingw32="yes" - plugins="no" - pie="no" -;; -gnu/kfreebsd) - bsd="yes" -;; -freebsd) - bsd="yes" - bsd_user="yes" - make="${MAKE-gmake}" - # needed for kinfo_getvmmap(3) in libutil.h -;; -dragonfly) - bsd="yes" - make="${MAKE-gmake}" -;; -netbsd) - bsd="yes" - make="${MAKE-gmake}" -;; -openbsd) - bsd="yes" - make="${MAKE-gmake}" -;; -darwin) - bsd="yes" - darwin="yes" - # Disable attempts to use ObjectiveC features in os/object.h since they - # won't work when we're compiling with gcc as a C compiler. - QEMU_CFLAGS="-DOS_OBJECT_USE_OBJC=0 $QEMU_CFLAGS" -;; -sunos) - solaris="yes" - make="${MAKE-gmake}" - smbd="${SMBD-/usr/sfw/sbin/smbd}" -# needed for CMSG_ macros in sys/socket.h - QEMU_CFLAGS="-D_XOPEN_SOURCE=600 $QEMU_CFLAGS" -# needed for TIOCWIN* defines in termios.h - QEMU_CFLAGS="-D__EXTENSIONS__ $QEMU_CFLAGS" -;; -haiku) - pie="no" - QEMU_CFLAGS="-DB_USE_POSITIVE_POSIX_ERRORS -D_BSD_SOURCE -fPIC $QEMU_CFLAGS" -;; -linux) - linux="yes" - linux_user="yes" - vhost_user=${default_feature:-yes} -;; -esac - : ${make=${MAKE-make}} # We prefer python 3.x. A bare 'python' is traditionally -- 2.33.1
[PATCH 10/10] configure: remove unnecessary symlinks
Make pc-bios/meson.build use the files in the source tree as inputs to bzip2. Signed-off-by: Paolo Bonzini --- configure | 1 - pc-bios/meson.build | 2 +- 2 files changed, 1 insertion(+), 2 deletions(-) diff --git a/configure b/configure index 2cecbd4c66..7751cc31b8 100755 --- a/configure +++ b/configure @@ -3817,7 +3817,6 @@ for bios_file in \ $source_path/pc-bios/*.img \ $source_path/pc-bios/openbios-* \ $source_path/pc-bios/u-boot.* \ -$source_path/pc-bios/edk2-*.fd.bz2 \ $source_path/pc-bios/palcode-* \ $source_path/pc-bios/qemu_vga.ndrv diff --git a/pc-bios/meson.build b/pc-bios/meson.build index b40ff3f2bd..1812a4084f 100644 --- a/pc-bios/meson.build +++ b/pc-bios/meson.build @@ -15,7 +15,7 @@ if unpack_edk2_blobs roms += custom_target(f, build_by_default: have_system, output: f, - input: '@0@.bz2'.format(f), + input: files('@0@.bz2'.format(f)), capture: true, install: get_option('install_blobs'), install_dir: qemu_datadir, -- 2.33.1
[PATCH 07/10] configure: unify x86_64 and x32
The only difference between the two, as far as either configure or Meson are concerned, is in the multilib flags passed to the compiler. For QEMU, this fixes the handling of TYPE_OLDDEVT in include/exec/user/thunk.h and enables testing of dirty ring buffer, because both are using HOST_X86_64. For tests/tcg, this means that on a hypothetical x32 host the cross compiler will not be used to build the tests. Signed-off-by: Paolo Bonzini --- configure | 6 ++ meson.build | 3 +-- 2 files changed, 3 insertions(+), 6 deletions(-) diff --git a/configure b/configure index 741ff99f4f..741c33c7ad 100755 --- a/configure +++ b/configure @@ -646,6 +646,7 @@ case "$cpu" in cpu="i386" CPU_CFLAGS="-m32" ;; x32) +cpu="x86_64" CPU_CFLAGS="-mx32" ;; x86_64|amd64) cpu="x86_64" @@ -3735,7 +3736,7 @@ fi if test "$linux" = "yes" ; then mkdir -p linux-headers case "$cpu" in - i386|x86_64|x32) + i386|x86_64) linux_arch=x86 ;; ppc|ppc64) @@ -3917,9 +3918,6 @@ if test "$skip_meson" = no; then i386) echo "cpu_family = 'x86'" >> $cross ;; -x86_64|x32) -echo "cpu_family = 'x86_64'" >> $cross -;; *) echo "cpu_family = '$ARCH'" >> $cross ;; diff --git a/meson.build b/meson.build index 96de1a6ef9..903d4f3b10 100644 --- a/meson.build +++ b/meson.build @@ -355,7 +355,7 @@ if not get_option('tcg').disabled() tcg_arch = 'tci' elif config_host['ARCH'] == 'sparc64' tcg_arch = 'sparc' - elif config_host['ARCH'] in ['x86_64', 'x32'] + elif config_host['ARCH'] == 'x86_64' tcg_arch = 'i386' elif config_host['ARCH'] == 'ppc64' tcg_arch = 'ppc' @@ -1801,7 +1801,6 @@ disassemblers = { 'hppa' : ['CONFIG_HPPA_DIS'], 'i386' : ['CONFIG_I386_DIS'], 'x86_64' : ['CONFIG_I386_DIS'], - 'x32' : ['CONFIG_I386_DIS'], 'm68k' : ['CONFIG_M68K_DIS'], 'microblaze' : ['CONFIG_MICROBLAZE_DIS'], 'mips' : ['CONFIG_MIPS_DIS'], -- 2.33.1
[PATCH 04/10] configure: do not set bsd_user/linux_user early
Similar to other optional features, leave the variables empty and compute the actual value later. Use the existence of include or source directories to detect whether an OS or CPU supports respectively bsd-user and linux-user. For now, BSD user-mode emulation is buildable even on TCI-only architectures. This probably will change once safe signals are brought over from linux-user. Signed-off-by: Paolo Bonzini --- configure | 27 --- 1 file changed, 16 insertions(+), 11 deletions(-) diff --git a/configure b/configure index 157691d99e..a774086891 100755 --- a/configure +++ b/configure @@ -322,8 +322,8 @@ linux="no" solaris="no" profiler="no" softmmu="yes" -linux_user="no" -bsd_user="no" +linux_user="" +bsd_user="" pkgversion="" pie="" qom_cast_debug="yes" @@ -541,7 +541,6 @@ gnu/kfreebsd) ;; freebsd) bsd="yes" - bsd_user="yes" make="${MAKE-gmake}" # needed for kinfo_getvmmap(3) in libutil.h ;; @@ -586,7 +585,6 @@ haiku) ;; linux) linux="yes" - linux_user="yes" vhost_user=${default_feature:-yes} ;; esac @@ -1280,18 +1278,25 @@ if eval test -z "\${cross_cc_$cpu}"; then cross_cc_vars="$cross_cc_vars cross_cc_${cpu}" fi -# For user-mode emulation the host arch has to be one we explicitly -# support, even if we're using TCI. -if [ "$ARCH" = "unknown" ]; then - bsd_user="no" - linux_user="no" -fi - default_target_list="" deprecated_targets_list=ppc64abi32-linux-user deprecated_features="" mak_wilds="" +if [ "$linux_user" != no ]; then +if [ "$targetos" = linux ] && [ -d $source_path/linux-user/host/$cpu ]; then +linux_user=yes +elif [ "$linux_user" = yes ]; then +error_exit "linux-user not supported on this architecture" +fi +fi +if [ "$bsd_user" != no ]; then +if [ -d $source_path/bsd-user/$targetos ]; then +bsd_user=yes +elif [ "$bsd_user" = yes ]; then +error_exit "bsd-user not supported on this host OS" +fi +fi if [ "$softmmu" = "yes" ]; then mak_wilds="${mak_wilds} $source_path/configs/targets/*-softmmu.mak" fi -- 2.33.1
[PATCH 05/10] configure: unify two case statements on $cpu
Signed-off-by: Paolo Bonzini --- configure | 67 ++- 1 file changed, 32 insertions(+), 35 deletions(-) diff --git a/configure b/configure index a774086891..65df8d02d9 100755 --- a/configure +++ b/configure @@ -635,32 +635,47 @@ else fi ARCH= -# Normalise host CPU name and set ARCH. +# Normalise host CPU name, set ARCH and multilib cflags # Note that this case should only have supported host CPUs, not guests. case "$cpu" in - ppc|ppc64|s390x|sparc64|x32|riscv) - ;; - ppc64le) -ARCH="ppc64" - ;; + aarch64|riscv) ;; + armv*b|armv*l|arm) +cpu="arm" ;; + i386|i486|i586|i686|i86pc|BePC) cpu="i386" - ;; +CPU_CFLAGS="-m32" ;; + x32) +CPU_CFLAGS="-mx32" ;; x86_64|amd64) cpu="x86_64" - ;; - armv*b|armv*l|arm) -cpu="arm" - ;; - aarch64) -cpu="aarch64" - ;; +# ??? Only extremely old AMD cpus do not have cmpxchg16b. +# If we truly care, we should simply detect this case at +# runtime and generate the fallback to serial emulation. +CPU_CFLAGS="-m64 -mcx16" ;; + mips*) -cpu="mips" - ;; +cpu="mips" ;; + + ppc) +CPU_CFLAGS="-m32" ;; + ppc64) +CPU_CFLAGS="-m64" ;; + ppc64le) +ARCH="ppc64" ;; + + s390) +CPU_CFLAGS="-m31" +ARCH=unknown ;; + s390x) +CPU_CFLAGS="-m64" ;; + sparc|sun4[cdmuv]) cpu="sparc" - ;; +CPU_CFLAGS="-m32 -mv8plus -mcpu=ultrasparc" ;; + sparc64) +CPU_CFLAGS="-m64 -mcpu=ultrasparc" ;; + *) # This will result in either an error or falling back to TCI later ARCH=unknown @@ -1255,24 +1270,6 @@ local_statedir="${local_statedir:-$prefix/var}" firmwarepath="${firmwarepath:-$datadir/qemu-firmware}" localedir="${localedir:-$datadir/locale}" -case "$cpu" in -ppc) CPU_CFLAGS="-m32" ;; -ppc64) CPU_CFLAGS="-m64" ;; -sparc) CPU_CFLAGS="-m32 -mv8plus -mcpu=ultrasparc" ;; -sparc64) CPU_CFLAGS="-m64 -mcpu=ultrasparc" ;; -s390) CPU_CFLAGS="-m31" ;; -s390x) CPU_CFLAGS="-m64" ;; -i386) CPU_CFLAGS="-m32" ;; -x32) CPU_CFLAGS="-mx32" ;; - -# ??? Only extremely old AMD cpus do not have cmpxchg16b. -# If we truly care, we should simply detect this case at -# runtime and generate the fallback to serial emulation. -x86_64) CPU_CFLAGS="-m64 -mcx16" ;; - -# No special flags required for other host CPUs -esac - if eval test -z "\${cross_cc_$cpu}"; then eval "cross_cc_${cpu}=\$cc" cross_cc_vars="$cross_cc_vars cross_cc_${cpu}" -- 2.33.1
[PATCH 08/10] meson: rename "arch" variable
Avoid confusion between the ARCH variable of configure/config-host.mak and the same-named variable of meson.build. Signed-off-by: Paolo Bonzini --- meson.build | 12 ++-- 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/meson.build b/meson.build index 903d4f3b10..677ab1b57d 100644 --- a/meson.build +++ b/meson.build @@ -2845,7 +2845,7 @@ emulators = {} foreach target : target_dirs config_target = config_target_mak[target] target_name = config_target['TARGET_NAME'] - arch = config_target['TARGET_BASE_ARCH'] + target_base_arch = config_target['TARGET_BASE_ARCH'] arch_srcs = [config_target_h[target]] arch_deps = [] c_args = ['-DNEED_CPU_H', @@ -2861,11 +2861,11 @@ foreach target : target_dirs if target.endswith('-softmmu') qemu_target_name = 'qemu-system-' + target_name target_type='system' -t = target_softmmu_arch[arch].apply(config_target, strict: false) +t = target_softmmu_arch[target_base_arch].apply(config_target, strict: false) arch_srcs += t.sources() arch_deps += t.dependencies() -hw_dir = target_name == 'sparc64' ? 'sparc64' : arch +hw_dir = target_name == 'sparc64' ? 'sparc64' : target_base_arch hw = hw_arch[hw_dir].apply(config_target, strict: false) arch_srcs += hw.sources() arch_deps += hw.dependencies() @@ -2876,8 +2876,8 @@ foreach target : target_dirs abi = config_target['TARGET_ABI_DIR'] target_type='user' qemu_target_name = 'qemu-' + target_name -if arch in target_user_arch - t = target_user_arch[arch].apply(config_target, strict: false) +if target_base_arch in target_user_arch + t = target_user_arch[target_base_arch].apply(config_target, strict: false) arch_srcs += t.sources() arch_deps += t.dependencies() endif @@ -2915,7 +2915,7 @@ foreach target : target_dirs arch_srcs += gdbstub_xml endif - t = target_arch[arch].apply(config_target, strict: false) + t = target_arch[target_base_arch].apply(config_target, strict: false) arch_srcs += t.sources() arch_deps += t.dependencies() -- 2.33.1
[PATCH 06/10] configure: unify ppc64 and ppc64le
The only difference between the two, as far as either configure or Meson are concerned, is the default endianness of the compiler. For tests/tcg, specify the endianness explicitly on the command line; for configure, do the same so that it is possible to have --cpu=ppc64le on a bigendian system or vice versa. Apart from this, cpu=ppc64le can be normalized to ppc64 also in configure and not just in the meson cross file. Signed-off-by: Paolo Bonzini --- configure | 10 -- tests/tcg/configure.sh | 4 +++- 2 files changed, 7 insertions(+), 7 deletions(-) diff --git a/configure b/configure index 65df8d02d9..741ff99f4f 100755 --- a/configure +++ b/configure @@ -660,9 +660,10 @@ case "$cpu" in ppc) CPU_CFLAGS="-m32" ;; ppc64) -CPU_CFLAGS="-m64" ;; +CPU_CFLAGS="-m64 -mbig" ;; ppc64le) -ARCH="ppc64" ;; +cpu="ppc64" +CPU_CFLAGS="-m64 -mlittle" ;; s390) CPU_CFLAGS="-m31" @@ -3737,7 +3738,7 @@ if test "$linux" = "yes" ; then i386|x86_64|x32) linux_arch=x86 ;; - ppc|ppc64|ppc64le) + ppc|ppc64) linux_arch=powerpc ;; s390x) @@ -3919,9 +3920,6 @@ if test "$skip_meson" = no; then x86_64|x32) echo "cpu_family = 'x86_64'" >> $cross ;; -ppc64le) -echo "cpu_family = 'ppc64'" >> $cross -;; *) echo "cpu_family = '$ARCH'" >> $cross ;; diff --git a/tests/tcg/configure.sh b/tests/tcg/configure.sh index 9b76f58258..9ef913df5b 100755 --- a/tests/tcg/configure.sh +++ b/tests/tcg/configure.sh @@ -64,7 +64,9 @@ fi : ${cross_cc_ppc="powerpc-linux-gnu-gcc"} : ${cross_cc_cflags_ppc="-m32"} : ${cross_cc_ppc64="powerpc64-linux-gnu-gcc"} -: ${cross_cc_ppc64le="powerpc64le-linux-gnu-gcc"} +: ${cross_cc_cflags_ppc64="-m64 -mbig"} +: ${cross_cc_ppc64le="$cross_cc_ppc64"} +: ${cross_cc_cflags_ppc64le="-m64 -mlittle"} : ${cross_cc_riscv64="riscv64-linux-gnu-gcc"} : ${cross_cc_s390x="s390x-linux-gnu-gcc"} : ${cross_cc_sh4="sh4-linux-gnu-gcc"} -- 2.33.1
Re: [PATCH 2/2] hw/net: Move MV88W8618 network device out of hw/arm/ directory
On 12/16/21 09:45, Philippe Mathieu-Daudé wrote: > The Marvell 88W8618 network device is hidden in the Musicpal > machine. Move it into a new unit file under the hw/net/ directory. > > Signed-off-by: Philippe Mathieu-Daudé > --- > include/hw/net/mv88w8618_eth.h | 16 ++ > hw/arm/musicpal.c | 380 +- > hw/net/mv88w8618_eth.c | 406 + > MAINTAINERS| 2 + > hw/net/meson.build | 1 + > 5 files changed, 426 insertions(+), 379 deletions(-) > create mode 100644 include/hw/net/mv88w8618_eth.h > create mode 100644 hw/net/mv88w8618_eth.c > > diff --git a/include/hw/net/mv88w8618_eth.h b/include/hw/net/mv88w8618_eth.h > new file mode 100644 > index 000..65e880e0eb7 > --- /dev/null > +++ b/include/hw/net/mv88w8618_eth.h > @@ -0,0 +1,16 @@ > +/* > + * Marvell MV88W8618 / Freecom MusicPal emulation. > + * > + * Copyright (c) 2008 Jan Kiszka > + * > + * This code is licensed under the GNU GPL v2. > + * > + * Contributions after 2012-01-13 are licensed under the terms of the > + * GNU GPL, version 2 or (at your option) any later version. These 2 lines are probably irrelevant here. > + */ > +#ifndef HW_NET_MV88W8618_H > +#define HW_NET_MV88W8618_H > + > +#define TYPE_MV88W8618_ETH "mv88w8618_eth" > + > +#endif
Re: [PATCH 06/10] configure: unify ppc64 and ppc64le
On 12/16/21 09:51, Paolo Bonzini wrote: > The only difference between the two, as far as either configure or > Meson are concerned, is the default endianness of the compiler. > > For tests/tcg, specify the endianness explicitly on the command line; > for configure, do the same so that it is possible to have --cpu=ppc64le > on a bigendian system or vice versa. Apart from this, cpu=ppc64le can > be normalized to ppc64 also in configure and not just in the meson > cross file. > > Signed-off-by: Paolo Bonzini > --- > configure | 10 -- > tests/tcg/configure.sh | 4 +++- > 2 files changed, 7 insertions(+), 7 deletions(-) Reviewed-by: Philippe Mathieu-Daudé
Re: [PATCH 07/10] configure: unify x86_64 and x32
On 12/16/21 09:51, Paolo Bonzini wrote: > The only difference between the two, as far as either configure or > Meson are concerned, is in the multilib flags passed to the compiler. > > For QEMU, this fixes the handling of TYPE_OLDDEVT in > include/exec/user/thunk.h and enables testing of dirty ring buffer, > because both are using HOST_X86_64. > > For tests/tcg, this means that on a hypothetical x32 host the > cross compiler will not be used to build the tests. Why not add the cross compiler definitions to tests/tcg? +: ${cross_cc_x32="$cross_cc_x86_64"} +: ${cross_cc_cflags_x32="-mx32"} > Signed-off-by: Paolo Bonzini > --- > configure | 6 ++ > meson.build | 3 +-- > 2 files changed, 3 insertions(+), 6 deletions(-)
Re: [RFC] vhost-vdpa-net: add vhost-vdpa-net host device support
On Thu, Dec 16, 2021 at 11:01:40AM +0800, Jason Wang wrote: > On Wed, Dec 15, 2021 at 6:07 PM Stefan Hajnoczi wrote: > > > > On Wed, Dec 15, 2021 at 11:18:05AM +0800, Jason Wang wrote: > > > On Tue, Dec 14, 2021 at 9:11 PM Stefan Hajnoczi > > > wrote: > > > > > > > > On Tue, Dec 14, 2021 at 10:22:53AM +0800, Jason Wang wrote: > > > > > On Mon, Dec 13, 2021 at 11:14 PM Stefan Hajnoczi > > > > > wrote: > > > > > > > > > > > > On Mon, Dec 13, 2021 at 10:47:00AM +0800, Jason Wang wrote: > > > > > > > On Sun, Dec 12, 2021 at 5:30 PM Michael S. Tsirkin > > > > > > > wrote: > > > > > > > > > > > > > > > > On Sat, Dec 11, 2021 at 03:00:27AM +, Longpeng (Mike, Cloud > > > > > > > > Infrastructure Service Product Dept.) wrote: > > > > > > > > > > > > > > > > > > > > > > > > > > > > -Original Message- > > > > > > > > > > From: Stefan Hajnoczi [mailto:stefa...@redhat.com] > > > > > > > > > > Sent: Thursday, December 9, 2021 5:17 PM > > > > > > > > > > To: Longpeng (Mike, Cloud Infrastructure Service Product > > > > > > > > > > Dept.) > > > > > > > > > > > > > > > > > > > > Cc: jasow...@redhat.com; m...@redhat.com; pa...@nvidia.com; > > > > > > > > > > xieyon...@bytedance.com; sgarz...@redhat.com; Yechuan > > > > > > > > > > ; > > > > > > > > > > Gonglei (Arei) ; > > > > > > > > > > qemu-devel@nongnu.org > > > > > > > > > > Subject: Re: [RFC] vhost-vdpa-net: add vhost-vdpa-net host > > > > > > > > > > device support > > > > > > > > > > > > > > > > > > > > On Wed, Dec 08, 2021 at 01:20:10PM +0800, Longpeng(Mike) > > > > > > > > > > wrote: > > > > > > > > > > > From: Longpeng > > > > > > > > > > > > > > > > > > > > > > Hi guys, > > > > > > > > > > > > > > > > > > > > > > This patch introduces vhost-vdpa-net device, which is > > > > > > > > > > > inspired > > > > > > > > > > > by vhost-user-blk and the proposal of vhost-vdpa-blk > > > > > > > > > > > device [1]. > > > > > > > > > > > > > > > > > > > > > > I've tested this patch on Huawei's offload card: > > > > > > > > > > > ./x86_64-softmmu/qemu-system-x86_64 \ > > > > > > > > > > > -device vhost-vdpa-net-pci,vdpa-dev=/dev/vhost-vdpa-0 > > > > > > > > > > > > > > > > > > > > > > For virtio hardware offloading, the most important > > > > > > > > > > > requirement for us > > > > > > > > > > > is to support live migration between offloading cards > > > > > > > > > > > from different > > > > > > > > > > > vendors, the combination of netdev and virtio-net seems > > > > > > > > > > > too heavy, we > > > > > > > > > > > prefer a lightweight way. > > > > > > > > > > > > > > > > > > > > > > Maybe we could support both in the future ? Such as: > > > > > > > > > > > > > > > > > > > > > > * Lightweight > > > > > > > > > > > Net: vhost-vdpa-net > > > > > > > > > > > Storage: vhost-vdpa-blk > > > > > > > > > > > > > > > > > > > > > > * Heavy but more powerful > > > > > > > > > > > Net: netdev + virtio-net + vhost-vdpa > > > > > > > > > > > Storage: bdrv + virtio-blk + vhost-vdpa > > > > > > > > > > > > > > > > > > > > > > [1] > > > > > > > > > > > https://www.mail-archive.com/qemu-devel@nongnu.org/msg797569.html > > > > > > > > > > > > > > > > > > > > Stefano presented a plan for vdpa-blk at KVM Forum 2021: > > > > > > > > > > https://kvmforum2021.sched.com/event/ke3a/vdpa-blk-unified-hardware-and-sof > > > > > > > > > > tware-offload-for-virtio-blk-stefano-garzarella-red-hat > > > > > > > > > > > > > > > > > > > > It's closer to today's virtio-net + vhost-net approach than > > > > > > > > > > the > > > > > > > > > > vhost-vdpa-blk device you have mentioned. 
The idea is to > > > > > > > > > > treat vDPA as > > > > > > > > > > an offload feature rather than a completely separate code > > > > > > > > > > path that > > > > > > > > > > needs to be maintained and tested. That way QEMU's block > > > > > > > > > > layer features > > > > > > > > > > and live migration work with vDPA devices and re-use the > > > > > > > > > > virtio-blk > > > > > > > > > > code. The key functionality that has not been implemented > > > > > > > > > > yet is a "fast > > > > > > > > > > path" mechanism that allows the QEMU virtio-blk device's > > > > > > > > > > virtqueue to be > > > > > > > > > > offloaded to vDPA. > > > > > > > > > > > > > > > > > > > > The unified vdpa-blk architecture should deliver the same > > > > > > > > > > performance > > > > > > > > > > as the vhost-vdpa-blk device you mentioned but with more > > > > > > > > > > features, so I > > > > > > > > > > wonder what aspects of the vhost-vdpa-blk idea are > > > > > > > > > > important to you? > > > > > > > > > > > > > > > > > > > > QEMU already has vhost-user-blk, which takes a similar > > > > > > > > > > approach as the > > > > > > > > > > vhost-vdpa-blk device you are proposing. I'm not against the > > > > > > > > > > vhost-vdpa-blk approach in priciple, but would like to > > > > > > > > > > understand your > > > > > > > > > > requirements and see if there is a way to collaborate on > > > >
Re: [PATCH] gitlab-ci: Speed up the msys2-64bit job by using --without-default-devices
On 12/16/21 09:22, Thomas Huth wrote: > The new msys2-64bit job is often running for more than 50 minutes - and > if the CI is currently loaded, it times out after 60 minutes. The job > has been declared with a bigger timeout, but seems like this is getting > ignored on the shared Gitlab-CI Windows runners, so we're currently > seeing a lot of failures with this job. Thus we have to reduce the time > it takes to finish this job. Since we want to test compiling the WHPX > and HAX accelerator code with this job, switching to another target CPU > is not really a good option, so let's reduce the amount of code that we > have to compile with the --without-default-devices switch instead. Good idea! Reviewed-by: Philippe Mathieu-Daudé > > Signed-off-by: Thomas Huth > --- > .gitlab-ci.d/windows.yml | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-)
Re: [PATCH 08/10] meson: rename "arch" variable
On 12/16/21 09:51, Paolo Bonzini wrote: > Avoid confusion between the ARCH variable of configure/config-host.mak > and the same-named variable of meson.build. > > Signed-off-by: Paolo Bonzini > --- > meson.build | 12 ++-- > 1 file changed, 6 insertions(+), 6 deletions(-) Reviewed-by: Philippe Mathieu-Daudé
[PATCH 3/5] migration: ram_release_pages() always receive 1 page as argument
Remove the pages argument. And s/pages/page/ Signed-off-by: Juan Quintela --- migration/ram.c | 8 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/migration/ram.c b/migration/ram.c index 7223b0d8ca..8d29d64b35 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -1204,13 +1204,13 @@ static int save_zero_page(RAMState *rs, RAMBlock *block, ram_addr_t offset) return -1; } -static void ram_release_pages(const char *rbname, uint64_t offset, int pages) +static void ram_release_page(const char *rbname, uint64_t offset) { if (!migrate_release_ram() || !migration_in_postcopy()) { return; } -ram_discard_range(rbname, offset, ((ram_addr_t)pages) << TARGET_PAGE_BITS); +ram_discard_range(rbname, offset, ((ram_addr_t)1) << TARGET_PAGE_BITS); } /* @@ -1365,7 +1365,7 @@ static bool do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block, } exit: -ram_release_pages(block->idstr, offset & TARGET_PAGE_MASK, 1); +ram_release_page(block->idstr, offset & TARGET_PAGE_MASK); return zero_page; } @@ -2153,7 +2153,7 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss) xbzrle_cache_zero_page(rs, block->offset + offset); XBZRLE_cache_unlock(); } -ram_release_pages(block->idstr, offset, res); +ram_release_page(block->idstr, offset); return res; } -- 2.33.1
[PATCH 5/5] migration: Move ram_release_pages() call to save_zero_page_to_file()
We always need to call it when we find a zero page, so put it in a single place. Signed-off-by: Juan Quintela --- migration/ram.c | 21 ++--- 1 file changed, 10 insertions(+), 11 deletions(-) diff --git a/migration/ram.c b/migration/ram.c index 0911811ed9..b0b888 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -1158,6 +1158,15 @@ static void migration_bitmap_sync_precopy(RAMState *rs) } } +static void ram_release_page(const char *rbname, uint64_t offset) +{ +if (!migrate_release_ram() || !migration_in_postcopy()) { +return; +} + +ram_discard_range(rbname, offset, ((ram_addr_t)1) << TARGET_PAGE_BITS); +} + /** * save_zero_page_to_file: send the zero page to the file * @@ -1179,6 +1188,7 @@ static int save_zero_page_to_file(RAMState *rs, QEMUFile *file, len += save_page_header(rs, file, block, offset | RAM_SAVE_FLAG_ZERO); qemu_put_byte(file, 0); len += 1; +ram_release_page(block->idstr, offset & TARGET_PAGE_MASK); } return len; } @@ -1204,15 +1214,6 @@ static int save_zero_page(RAMState *rs, RAMBlock *block, ram_addr_t offset) return -1; } -static void ram_release_page(const char *rbname, uint64_t offset) -{ -if (!migrate_release_ram() || !migration_in_postcopy()) { -return; -} - -ram_discard_range(rbname, offset, ((ram_addr_t)1) << TARGET_PAGE_BITS); -} - /* * @pages: the number of pages written by the control path, *< 0 - error @@ -1344,7 +1345,6 @@ static bool do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block, int ret; if (save_zero_page_to_file(rs, f, block, offset)) { -ram_release_page(block->idstr, offset & TARGET_PAGE_MASK); return true; } @@ -2148,7 +2148,6 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss) xbzrle_cache_zero_page(rs, block->offset + offset); XBZRLE_cache_unlock(); } -ram_release_page(block->idstr, offset); return res; } -- 2.33.1
[PATCH 0/5] migration: misc cleanups
Hi This series does several cleanups: - Dave found that I was using "%d" for unsigned, fix all uses. - We pass last_stage left and right, but we only use it in two places. Just move it to RAMState. - do_compress_page() used a goto when not needed. - ram_release_pages() was always used with one page. - And move the call to where we detect zero pages, instead of everywhere we find a zero page. Please, review. Juan Quintela (5): migration: All this fields are unsigned migration: We only need last_stage in two places migration: ram_release_pages() always receive 1 page as argument migration: simplify do_compress_ram_page migration: Move ram_release_pages() call to save_zero_page_to_file() migration/multifd-zlib.c | 20 +-- migration/multifd-zstd.c | 24 +++--- migration/multifd.c | 16 - migration/ram.c | 71 +--- migration/trace-events | 26 +++ 5 files changed, 73 insertions(+), 84 deletions(-) -- 2.33.1
[PATCH 2/5] migration: We only need last_stage in two places
And we are passing it all around. Just add a field to RAMState that passes it. Signed-off-by: Juan Quintela --- migration/ram.c | 41 ++--- 1 file changed, 18 insertions(+), 23 deletions(-) diff --git a/migration/ram.c b/migration/ram.c index 57efa67f20..7223b0d8ca 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -325,7 +325,8 @@ struct RAMState { uint64_t xbzrle_bytes_prev; /* Start using XBZRLE (e.g., after the first round). */ bool xbzrle_enabled; - +/* Are we on the last stage of migration */ +bool last_stage; /* compression statistics since the beginning of the period */ /* amount of count that no free thread to compress data */ uint64_t compress_thread_busy_prev; @@ -683,11 +684,10 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr) * @current_addr: addr of the page * @block: block that contains the page we want to send * @offset: offset inside the block for the page - * @last_stage: if we are at the completion stage */ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data, ram_addr_t current_addr, RAMBlock *block, -ram_addr_t offset, bool last_stage) +ram_addr_t offset) { int encoded_len = 0, bytes_xbzrle; uint8_t *prev_cached_page; @@ -695,7 +695,7 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data, if (!cache_is_cached(XBZRLE.cache, current_addr, ram_counters.dirty_sync_count)) { xbzrle_counters.cache_miss++; -if (!last_stage) { +if (!rs->last_stage) { if (cache_insert(XBZRLE.cache, current_addr, *current_data, ram_counters.dirty_sync_count) == -1) { return -1; @@ -734,7 +734,7 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data, * Update the cache contents, so that it corresponds to the data * sent, in all cases except where we skip the page. */ -if (!last_stage && encoded_len != 0) { +if (!rs->last_stage && encoded_len != 0) { memcpy(prev_cached_page, XBZRLE.current_buf, TARGET_PAGE_SIZE); /* * In the case where we couldn't compress, ensure that the caller @@ -1290,9 +1290,8 @@ static int save_normal_page(RAMState *rs, RAMBlock *block, ram_addr_t offset, * @rs: current RAM state * @block: block that contains the page we want to send * @offset: offset inside the block for the page - * @last_stage: if we are at the completion stage */ -static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage) +static int ram_save_page(RAMState *rs, PageSearchStatus *pss) { int pages = -1; uint8_t *p; @@ -1307,8 +1306,8 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage) XBZRLE_cache_lock(); if (rs->xbzrle_enabled && !migration_in_postcopy()) { pages = save_xbzrle_page(rs, &p, current_addr, block, - offset, last_stage); -if (!last_stage) { + offset); +if (!rs->last_stage) { /* Can't send this cached data async, since the cache page * might get updated before it gets to the wire */ @@ -2129,10 +2128,8 @@ static bool save_compress_page(RAMState *rs, RAMBlock *block, ram_addr_t offset) * * @rs: current RAM state * @pss: data about the page we want to send - * @last_stage: if we are at the completion stage */ -static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss, -bool last_stage) +static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss) { RAMBlock *block = pss->block; ram_addr_t offset = ((ram_addr_t)pss->page) << TARGET_PAGE_BITS; @@ -2171,7 +2168,7 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss, return ram_save_multifd_page(rs, block, offset); } -return ram_save_page(rs, pss, last_stage); +return ram_save_page(rs, pss); } /** 
@@ -2190,10 +2187,8 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss, * @rs: current RAM state * @ms: current migration state * @pss: data about the page we want to send - * @last_stage: if we are at the completion stage */ -static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss, - bool last_stage) +static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss) { int tmppages, pages = 0; size_t pagesize_bits = @@ -2211,7 +2206,7 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss, do { /* Check the pages is dirty and if it is send it */ if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) { -tmppages = ram_save_target_page(rs, pss, last_stage); +tmppages = ram_save_target_page(r
Re: [PATCH v10 3/3] cpus-common: implement dirty page limit on virtual CPU
On 2021/12/16 14:22, Peter Xu wrote: On Wed, Dec 15, 2021 at 02:41:32PM +0100, Markus Armbruster wrote: Peter Xu writes: On Wed, Dec 15, 2021 at 03:56:55PM +0800, Hyman Huang wrote: +{ 'command': 'vcpu-dirty-limit', + 'data': { 'enable': 'bool', +'*cpu-index': 'uint64', +'*dirty-rate': 'uint64'} } Drop @enable, please. If @dirty-rate is present, set the limit to its value. If it's absent, cancel the limit. Ok. Indeed, this is the simplest style. :) So the final qmp format should be like: case 1: setup vcpu 0 dirty page limit 100MB/s vcpu-dirty-limit cpu-index=0 dirty-rate=100MB/s case 2: cancel vcpu 0 dirty page limit vcpu-dirty-limit cpu-index=0 I actually agree with what you said... for human beings no one will read it as "disable vcpu throttling", instead people could consider it enables vcpu throttle with a default dirty rate from a gut feeling. I think what Markus suggested is the simplest solution for computers, but it can confuse human beings. So it turns out to be a general question of QMP scheme design: should we always assume the QMP client to be a piece of software, or should we still consider the feeling of human beings operating on QMP interfaces using qmp-shell? IMHO we should still consider the latter, if we don't lose much, anyway. But I don't have a strong opinion. If you want a more explicit interface, then I'd recommend to go right back to v7: {"execute": "set-vcpu-dirty-limit", "arguments": {"cpu-index": 0, "dirtyrate": 200}} {"execute": "cancel-vcpu-dirty-limit", "arguments": {"cpu-index": 0}} Bonus: it already has my Acked-by. Fair enough. :) That looks good to me too. Yong, please hold off a bit on reposting (if there's a plan) - I'll read the other parts soon. Ok, I'm not going to repost the next version until the consensus is achieved. So the final QMP format we concluded on is: case 1: setup vcpu 0 dirty page limit 100MB/s set-vcpu-dirty-limit cpu-index=0 dirty-rate=100 case 2: setup all vcpu dirty page limit 100MB/s set-vcpu-dirty-limit dirty-rate=100 case 3: cancel vcpu 0 dirty page limit cancel-vcpu-dirty-limit cpu-index=0 case 4: cancel all vcpu dirty page limit cancel-vcpu-dirty-limit case 5: query limit information of all enabled vcpus query-vcpu-dirty-limit And the corresponding HMP format keeps the same style. Is there any advice? :) Thanks, -- Best regards Hyman Huang (黄勇)
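To make the agreed interface a bit more concrete, here is a rough C sketch of how QAPI-generated handlers for these commands could be wired up once the schema lands. Everything below is illustrative only: the generated prototypes depend on the final schema, and dirtylimit_set_vcpu()/dirtylimit_cancel_vcpu() are placeholder helpers, not functions from the series.

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "hw/core/cpu.h"

/* Placeholder helpers -- stand-ins for the series' real throttling code. */
static void dirtylimit_set_vcpu(int cpu_index, uint64_t dirty_rate) { }
static void dirtylimit_cancel_vcpu(int cpu_index) { }

/* Optional 'cpu-index' becomes a has_cpu_index flag in the generated C;
 * when it is absent, apply the operation to every vCPU. */
void qmp_set_vcpu_dirty_limit(bool has_cpu_index, int64_t cpu_index,
                              uint64_t dirty_rate, Error **errp)
{
    CPUState *cpu;

    CPU_FOREACH(cpu) {
        if (!has_cpu_index || cpu->cpu_index == cpu_index) {
            dirtylimit_set_vcpu(cpu->cpu_index, dirty_rate);
        }
    }
    /* validation (e.g. cpu-index out of range) omitted in this sketch */
}

void qmp_cancel_vcpu_dirty_limit(bool has_cpu_index, int64_t cpu_index,
                                 Error **errp)
{
    CPUState *cpu;

    CPU_FOREACH(cpu) {
        if (!has_cpu_index || cpu->cpu_index == cpu_index) {
            dirtylimit_cancel_vcpu(cpu->cpu_index);
        }
    }
}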
[PATCH 1/5] migration: All this fields are unsigned
So printing it as %d is wrong. Notice that for the channel id, that is an uint8_t, but I changed it anyways for consistency. Signed-off-by: Juan Quintela --- migration/multifd-zlib.c | 20 ++-- migration/multifd-zstd.c | 24 migration/multifd.c | 16 migration/trace-events | 26 +- 4 files changed, 43 insertions(+), 43 deletions(-) diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c index da6201704c..9f6ebf1076 100644 --- a/migration/multifd-zlib.c +++ b/migration/multifd-zlib.c @@ -51,7 +51,7 @@ static int zlib_send_setup(MultiFDSendParams *p, Error **errp) zs->opaque = Z_NULL; if (deflateInit(zs, migrate_multifd_zlib_level()) != Z_OK) { g_free(z); -error_setg(errp, "multifd %d: deflate init failed", p->id); +error_setg(errp, "multifd %u: deflate init failed", p->id); return -1; } /* To be safe, we reserve twice the size of the packet */ @@ -60,7 +60,7 @@ static int zlib_send_setup(MultiFDSendParams *p, Error **errp) if (!z->zbuff) { deflateEnd(&z->zs); g_free(z); -error_setg(errp, "multifd %d: out of memory for zbuff", p->id); +error_setg(errp, "multifd %u: out of memory for zbuff", p->id); return -1; } p->data = z; @@ -132,12 +132,12 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp) ret = deflate(zs, flush); } while (ret == Z_OK && zs->avail_in && zs->avail_out); if (ret == Z_OK && zs->avail_in) { -error_setg(errp, "multifd %d: deflate failed to compress all input", +error_setg(errp, "multifd %u: deflate failed to compress all input", p->id); return -1; } if (ret != Z_OK) { -error_setg(errp, "multifd %d: deflate returned %d instead of Z_OK", +error_setg(errp, "multifd %u: deflate returned %d instead of Z_OK", p->id, ret); return -1; } @@ -190,7 +190,7 @@ static int zlib_recv_setup(MultiFDRecvParams *p, Error **errp) zs->avail_in = 0; zs->next_in = Z_NULL; if (inflateInit(zs) != Z_OK) { -error_setg(errp, "multifd %d: inflate init failed", p->id); +error_setg(errp, "multifd %u: inflate init failed", p->id); return -1; } /* To be safe, we reserve twice the size of the packet */ @@ -198,7 +198,7 @@ static int zlib_recv_setup(MultiFDRecvParams *p, Error **errp) z->zbuff = g_try_malloc(z->zbuff_len); if (!z->zbuff) { inflateEnd(zs); -error_setg(errp, "multifd %d: out of memory for zbuff", p->id); +error_setg(errp, "multifd %u: out of memory for zbuff", p->id); return -1; } return 0; @@ -247,7 +247,7 @@ static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp) int i; if (flags != MULTIFD_FLAG_ZLIB) { -error_setg(errp, "multifd %d: flags received %x flags expected %x", +error_setg(errp, "multifd %u: flags received %x flags expected %x", p->id, flags, MULTIFD_FLAG_ZLIB); return -1; } @@ -284,19 +284,19 @@ static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp) } while (ret == Z_OK && zs->avail_in && (zs->total_out - start) < page_size); if (ret == Z_OK && (zs->total_out - start) < page_size) { -error_setg(errp, "multifd %d: inflate generated too few output", +error_setg(errp, "multifd %u: inflate generated too few output", p->id); return -1; } if (ret != Z_OK) { -error_setg(errp, "multifd %d: inflate returned %d instead of Z_OK", +error_setg(errp, "multifd %u: inflate returned %d instead of Z_OK", p->id, ret); return -1; } } out_size = zs->total_out - out_size; if (out_size != expected_size) { -error_setg(errp, "multifd %d: packet size received %d size expected %d", +error_setg(errp, "multifd %u: packet size received %u size expected %u", p->id, out_size, expected_size); return -1; } diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c index 
2d5b61106c..cc4e991724 100644 --- a/migration/multifd-zstd.c +++ b/migration/multifd-zstd.c @@ -55,7 +55,7 @@ static int zstd_send_setup(MultiFDSendParams *p, Error **errp) z->zcs = ZSTD_createCStream(); if (!z->zcs) { g_free(z); -error_setg(errp, "multifd %d: zstd createCStream failed", p->id); +error_setg(errp, "multifd %u: zstd createCStream failed", p->id); return -1; } @@ -63,7 +63,7 @@ static int zstd_send_setup(MultiFDSendParams *p, Error **errp) if (ZSTD_isError(res)) { ZSTD_freeCStream(z->zcs); g_free(z); -error_setg(errp, "
[PATCH 4/5] migration: simplify do_compress_ram_page
The goto is not needed at all. Signed-off-by: Juan Quintela --- migration/ram.c | 11 +++ 1 file changed, 3 insertions(+), 8 deletions(-) diff --git a/migration/ram.c b/migration/ram.c index 8d29d64b35..0911811ed9 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -1341,12 +1341,11 @@ static bool do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block, { RAMState *rs = ram_state; uint8_t *p = block->host + (offset & TARGET_PAGE_MASK); -bool zero_page = false; int ret; if (save_zero_page_to_file(rs, f, block, offset)) { -zero_page = true; -goto exit; +ram_release_page(block->idstr, offset & TARGET_PAGE_MASK); +return true; } save_page_header(rs, f, block, offset | RAM_SAVE_FLAG_COMPRESS_PAGE); @@ -1361,12 +1360,8 @@ static bool do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block, if (ret < 0) { qemu_file_set_error(migrate_get_current()->to_dst_file, ret); error_report("compressed data failed!"); -return false; } - -exit: -ram_release_page(block->idstr, offset & TARGET_PAGE_MASK); -return zero_page; +return false; } static void -- 2.33.1
Re: [RFC PATCH 0/6] Introduce GPIO transmitter and connect to NPCM7xx
+Marc-André (chardev) & Dave On 12/16/21 02:54, Joe Komlodi wrote: > Hi all, > > This series introduces a GPIO transmitter, which allows the transmission > of GPIO controller pin state over chardev, and attaches it to the NPCM7xx > GPIO controller. > > The GPIO transmitter takes in a GPIO controller number and a bitfield > containing the GPIO state of that controller, then formats a packet and > sends it via chardev to whomever is listening for it. > The purpose of this is for external software to receive the packet and > interpret it so it can do any actions it needs to, based on GPIO state. > > For example, in our use case, we have a VM manager managing an x86 guest > and an ARM (NPCM7xx) guest. On real hardware, the ARM SoC is a BMC which > has some power control over the x86 SoC. > Because of this, we need to relay GPIO power events from the BMC to the > x86 machine (i.e. reset, power off, etc), so we have software read in > the GPIO transmitter packets, keep track of what power state the x86 > machine is in based on the GPIO state of the BMC, and notify the VM > manager of any important changes. > The VM manager can then power up/down and reset the x86 machine as > needed. > > Thanks! > Joe > > Joe Komlodi (6): > hw/gpio/gpio_transmitter: Add Device > hw/gpio/gpio_transmitter: Add allowlist > hw/gpio/npcm7xx: Number controllers > hw/arm/npcm7xx: gpio: Add GPIO transmitter > hw/gpio/npcm7xx: init GPIO transmitter allowlist > qtests/gpio_transmitter: Add test > > hw/arm/Kconfig| 1 + > hw/arm/npcm7xx.c | 8 + > hw/gpio/Kconfig | 3 + > hw/gpio/google_gpio_transmitter.c | 249 ++ > hw/gpio/meson.build | 1 + > hw/gpio/npcm7xx_gpio.c| 25 +++ > include/hw/arm/npcm7xx.h | 2 + > include/hw/gpio/google_gpio_transmitter.h | 66 ++ > include/hw/gpio/npcm7xx_gpio.h| 4 + > tests/qtest/google_gpio_tx-test.c | 216 +++ > tests/qtest/meson.build | 1 + > 11 files changed, 576 insertions(+) > create mode 100644 hw/gpio/google_gpio_transmitter.c > create mode 100644 include/hw/gpio/google_gpio_transmitter.h > create mode 100644 tests/qtest/google_gpio_tx-test.c >
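For readers who want a mental model of what "formats a packet and sends it via chardev" means here, the sketch below shows the shape of the idea: a small header identifying the GPIO controller plus a bitfield of pin state, pushed out through the chardev backend. It is purely illustrative -- the real wire format is defined in the patches themselves, which are not quoted in this cover letter.

#include "qemu/osdep.h"
#include "qemu/bswap.h"
#include "chardev/char-fe.h"

/* Illustrative only -- not the series' actual packet layout. */
typedef struct QEMU_PACKED GpioTxPacketExample {
    uint8_t  controller;   /* which NPCM7xx GPIO controller changed */
    uint32_t state;        /* one bit per pin: current output level */
} GpioTxPacketExample;

static void gpio_tx_send_example(CharBackend *be, uint8_t controller,
                                 uint32_t state)
{
    GpioTxPacketExample pkt = {
        .controller = controller,
        .state = cpu_to_le32(state),   /* fix the wire endianness */
    };

    /* qemu_chr_fe_write_all() keeps writing until the whole packet is out */
    qemu_chr_fe_write_all(be, (const uint8_t *)&pkt, sizeof(pkt));
}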
Re: [PATCH 1/5] migration: All this fields are unsigned
On 12/16/21 10:13, Juan Quintela wrote: > So printing it as %d is wrong. Notice that for the channel id, that > is an uint8_t, but I changed it anyways for consistency. > > Signed-off-by: Juan Quintela > --- > migration/multifd-zlib.c | 20 ++-- > migration/multifd-zstd.c | 24 > migration/multifd.c | 16 > migration/trace-events | 26 +- > 4 files changed, 43 insertions(+), 43 deletions(-) > # multifd.c > -multifd_new_send_channel_async(uint8_t id) "channel %d" > -multifd_recv(uint8_t id, uint64_t packet_num, uint32_t used, uint32_t flags, > uint32_t next_packet_size) "channel %d packet_num %" PRIu64 " pages %d flags > 0x%x next packet size %d" > -multifd_recv_new_channel(uint8_t id) "channel %d" > +multifd_new_send_channel_async(uint8_t id) "channel %u" > +multifd_recv(uint8_t id, uint64_t packet_num, uint32_t used, uint32_t flags, > uint32_t next_packet_size) "channel %u packet_num %" PRIu64 " pages %u flags > 0x%x next packet size %u" > +multifd_recv_new_channel(uint8_t id) "channel %u" > multifd_recv_sync_main(long packet_num) "packet num %ld" > -multifd_recv_sync_main_signal(uint8_t id) "channel %d" > -multifd_recv_sync_main_wait(uint8_t id) "channel %d" > +multifd_recv_sync_main_signal(uint8_t id) "channel %u" > +multifd_recv_sync_main_wait(uint8_t id) "channel %u" > multifd_recv_terminate_threads(bool error) "error %d" > -multifd_recv_thread_end(uint8_t id, uint64_t packets, uint64_t pages) > "channel %d packets %" PRIu64 " pages %" PRIu64 > -multifd_recv_thread_start(uint8_t id) "%d" > -multifd_send(uint8_t id, uint64_t packet_num, uint32_t used, uint32_t flags, > uint32_t next_packet_size) "channel %d packet_num %" PRIu64 " pages %d flags > 0x%x next packet size %d" > -multifd_send_error(uint8_t id) "channel %d" > +multifd_recv_thread_end(uint8_t id, uint64_t packets, uint64_t pages) > "channel %u packets %" PRIu64 " pages %" PRIu64 > +multifd_recv_thread_start(uint8_t id) "%u" > +multifd_send(uint8_t id, uint64_t packet_num, uint32_t used, uint32_t flags, > uint32_t next_packet_size) "channel %u packet_num %" PRIu64 " pages %u flags > 0x%x next packet size %u" > +multifd_send_error(uint8_t id) "channel %u" > multifd_send_sync_main(long packet_num) "packet num %ld" > -multifd_send_sync_main_signal(uint8_t id) "channel %d" > -multifd_send_sync_main_wait(uint8_t id) "channel %d" > +multifd_send_sync_main_signal(uint8_t id) "channel %u" > +multifd_send_sync_main_wait(uint8_t id) "channel %u" > multifd_send_terminate_threads(bool error) "error %d" Nitpicking: bool is unsigned :) Reviewed-by: Philippe Mathieu-Daudé
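The practical effect of the %d -> %u change can be seen with a plain standalone C snippet; the same format-specifier rules apply to the trace-events format strings above. (And the bool nitpick really is only a nitpick: bool, like uint8_t, promotes to int before reaching the varargs, so %d happens to work for it.)

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t pages = 3000000000u;  /* valid uint32_t, larger than INT32_MAX */
    uint8_t channel = 200;

    printf("%d\n", pages);    /* wrong specifier: typically prints a negative number */
    printf("%u\n", pages);    /* 3000000000: matches the unsigned type */
    printf("%u\n", channel);  /* uint8_t promotes to int; still prints 200 */
    return 0;
}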
Re: [PATCH 3/5] migration: ram_release_pages() always receive 1 page as argument
On 12/16/21 10:13, Juan Quintela wrote: > Remove the pages argument. And s/pages/page/ > > Signed-off-by: Juan Quintela > --- > migration/ram.c | 8 > 1 file changed, 4 insertions(+), 4 deletions(-) > -static void ram_release_pages(const char *rbname, uint64_t offset, int pages) > +static void ram_release_page(const char *rbname, uint64_t offset) > { > if (!migrate_release_ram() || !migration_in_postcopy()) { > return; > } > > -ram_discard_range(rbname, offset, ((ram_addr_t)pages) << > TARGET_PAGE_BITS); > +ram_discard_range(rbname, offset, ((ram_addr_t)1) << TARGET_PAGE_BITS); 1ULL? Otherwise, Reviewed-by: Philippe Mathieu-Daudé
Re: [PATCH 2/5] migration: We only need last_stage in two places
On 12/16/21 10:13, Juan Quintela wrote: > And we are passing it all around. Just add a field to RAMState that > passes it. Splitting half of the description into the subject makes it hard to read. Better to repeat the subject as the first line of the description. Matter of taste. > Signed-off-by: Juan Quintela > --- > migration/ram.c | 41 ++--- > 1 file changed, 18 insertions(+), 23 deletions(-)
Re: [PATCH v4 04/14] vfio-user: define vfio-user-server object
On Wed, Dec 15, 2021 at 10:35:28AM -0500, Jagannathan Raman wrote: > diff --git a/qapi/qom.json b/qapi/qom.json > index ccd1167808..6001a9b8f0 100644 > --- a/qapi/qom.json > +++ b/qapi/qom.json > @@ -703,6 +703,20 @@ > { 'struct': 'RemoteObjectProperties', >'data': { 'fd': 'str', 'devid': 'str' } } > > +## > +# @VfioUserServerProperties: > +# > +# Properties for x-vfio-user-server objects. > +# > +# @socket: socket to be used by the libvfiouser library > +# > +# @device: the id of the device to be emulated at the server > +# > +# Since: 6.2 6.2 has been released so the version number needs to be incremented. > +struct VfuObjectClass { > +ObjectClass parent_class; > + > +unsigned int nr_devs; > + > +bool daemon; I was wondering what this means. auto_shutdown might be a clearer name. > +static void vfu_object_set_socket(Object *obj, Visitor *v, const char *name, > + void *opaque, Error **errp) > +{ > +VfuObject *o = VFU_OBJECT(obj); > + > +qapi_free_SocketAddress(o->socket); > + > +o->socket = NULL; > + > +visit_type_SocketAddress(v, name, &o->socket, errp); > + > +if (o->socket->type != SOCKET_ADDRESS_TYPE_UNIX) { > +qapi_free_SocketAddress(o->socket); > +o->socket = NULL; > +error_setg(errp, "vfu: Unsupported socket type - %s", > + o->socket->u.q_unix.path); s/o->socket->u.q_unix.path/SocketAddressType_str(o->socket->type)/ signature.asc Description: PGP signature
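Putting the review comments together, a fixed-up setter might look roughly like the sketch below (a suggestion for the respin, not the actual code). Besides using SocketAddressType_str(), it also checks the visitor's return value and only frees o->socket after formatting the error, since the quoted code dereferences o->socket both when the visitor fails and right after freeing it in the error path.

static void vfu_object_set_socket(Object *obj, Visitor *v, const char *name,
                                  void *opaque, Error **errp)
{
    VfuObject *o = VFU_OBJECT(obj);

    qapi_free_SocketAddress(o->socket);
    o->socket = NULL;

    if (!visit_type_SocketAddress(v, name, &o->socket, errp)) {
        return;                 /* visitor failed, o->socket stays NULL */
    }

    if (o->socket->type != SOCKET_ADDRESS_TYPE_UNIX) {
        error_setg(errp, "vfu: Unsupported socket type - %s",
                   SocketAddressType_str(o->socket->type));
        qapi_free_SocketAddress(o->socket);
        o->socket = NULL;
        return;
    }

    trace_vfu_prop("socket", o->socket->u.q_unix.path);
}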
Re: [PATCH] e1000: fix tx re-entrancy problem
Hi Jon, On 10/21/21 18:10, Jon Maloy wrote: > The fact that the MMIO handler is not re-entrant causes an infinite > loop under certain conditions: > > Guest write to TDT -> Loopback -> RX (DMA to TDT) -> TX > > We now eliminate the effect of this problem locally in e1000, by adding > a boolean in struct E1000State indicating when the TX side is busy. This > will cause any entering new call to return early instead of interfering > with the ongoing work, and eliminates any risk of looping. > > This is intended to address CVE-2021-20257. > > Signed-off-by: Jon Maloy > --- > hw/net/e1000.c | 7 +++ > 1 file changed, 7 insertions(+) I cannot find the reproducer in the repository; have you sent one?
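The guard pattern the commit message describes is simple enough to sketch. The field and function names below are illustrative, not necessarily the patch's exact ones:

#include <stdbool.h>

typedef struct E1000StateSketch {
    /* ... existing device state ... */
    bool tx_busy;               /* true while the TX path is running */
} E1000StateSketch;

static void start_xmit_sketch(E1000StateSketch *s)
{
    if (s->tx_busy) {
        /* Re-entered via loopback -> RX DMA -> TDT write; bail out. */
        return;
    }
    s->tx_busy = true;

    /* ... walk the TX descriptors; loopback may re-trigger this path ... */

    s->tx_busy = false;
}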
Re: [PATCH v2 01/25] python/aqmp: add __del__ method to legacy interface
15.12.2021 22:39, John Snow wrote: asyncio can complain *very* loudly if you forget to back out of things gracefully before the garbage collector starts destroying objects that contain live references to asyncio Tasks. The usual fix is just to remember to call aqmp.disconnect(), but for the sake of the legacy wrapper and quick, one-off scripts where a graceful shutdown is not necessarily of paramount imporance, add a courtesy cleanup that will trigger prior to seeing screenfuls of confusing asyncio tracebacks. Note that we can't *always* save you from yourself; depending on when the GC runs, you might just seriously be out of luck. The best we can do in this case is to gently remind you to clean up after yourself. (Still much better than multiple pages of incomprehensible python warnings for the crime of forgetting to put your toys away.) Signed-off-by: John Snow --- python/qemu/aqmp/legacy.py | 18 ++ 1 file changed, 18 insertions(+) diff --git a/python/qemu/aqmp/legacy.py b/python/qemu/aqmp/legacy.py index 9e7b9fb80b..2ccb136b02 100644 --- a/python/qemu/aqmp/legacy.py +++ b/python/qemu/aqmp/legacy.py @@ -16,6 +16,8 @@ import qemu.qmp from qemu.qmp import QMPMessage, QMPReturnValue, SocketAddrT +from .error import AQMPError +from .protocol import Runstate from .qmp_client import QMPClient @@ -136,3 +138,19 @@ def settimeout(self, timeout: Optional[float]) -> None: def send_fd_scm(self, fd: int) -> None: self._aqmp.send_fd_scm(fd) + +def __del__(self) -> None: +if self._aqmp.runstate == Runstate.IDLE: +return + +if not self._aloop.is_running(): +self.close() As I understand, if close() was called by hand, runstate should already be IDLE, and we don't call close() twice here? +else: +# Garbage collection ran while the event loop was running. +# Nothing we can do about it now, but if we don't raise our +# own error, the user will be treated to a lot of traceback +# they might not understand. +raise AQMPError( +"QEMUMonitorProtocol.close()" +" was not called before object was garbage collected" +) weak, as I'm far from understanding aqmp library: Reviewed-by: Vladimir Sementsov-Ogievskiy -- Best regards, Vladimir
[PULL 1/8] s390: kvm: adjust diag318 resets to retain data
From: Collin Walling The CPNC portion of the diag318 data is erroneously reset during an initial CPU reset caused by SIGP. Let's go ahead and relocate the diag318_info field within the CPUS390XState struct such that it is only zeroed during a clear reset. This way, the CPNC will be retained for each VCPU in the configuration after the diag318 instruction has been invoked. The s390_machine_reset code already takes care of zeroing the diag318 data on VM resets, which also cover resets caused by diag308. Fixes: fabdada9357b ("s390: guest support for diagnose 0x318") Reported-by: Christian Borntraeger Signed-off-by: Collin Walling Reviewed-by: Janosch Frank Reviewed-by: Christian Borntraeger Message-Id: <2027152303.627969-1-wall...@linux.ibm.com> Signed-off-by: Thomas Huth --- target/s390x/cpu.h | 4 ++-- target/s390x/kvm/kvm.c | 4 2 files changed, 6 insertions(+), 2 deletions(-) diff --git a/target/s390x/cpu.h b/target/s390x/cpu.h index ca3845d023..a75e559134 100644 --- a/target/s390x/cpu.h +++ b/target/s390x/cpu.h @@ -63,6 +63,8 @@ struct CPUS390XState { uint64_t etoken; /* etoken */ uint64_t etoken_extension; /* etoken extension */ +uint64_t diag318_info; + /* Fields up to this point are not cleared by initial CPU reset */ struct {} start_initial_reset_fields; @@ -118,8 +120,6 @@ struct CPUS390XState { uint16_t external_call_addr; DECLARE_BITMAP(emergency_signals, S390_MAX_CPUS); -uint64_t diag318_info; - #if !defined(CONFIG_USER_ONLY) uint64_t tlb_fill_tec; /* translation exception code during tlb_fill */ int tlb_fill_exc;/* exception number seen during tlb_fill */ diff --git a/target/s390x/kvm/kvm.c b/target/s390x/kvm/kvm.c index 5b1fdb55c4..6acf14d5ec 100644 --- a/target/s390x/kvm/kvm.c +++ b/target/s390x/kvm/kvm.c @@ -1585,6 +1585,10 @@ void kvm_s390_set_diag318(CPUState *cs, uint64_t diag318_info) env->diag318_info = diag318_info; cs->kvm_run->s.regs.diag318 = diag318_info; cs->kvm_run->kvm_dirty_regs |= KVM_SYNC_DIAG318; +/* + * diag 318 info is zeroed during a clear reset and + * diag 308 IPL subcodes. + */ } } -- 2.27.0
[PULL 3/8] s390x/pci: use a reserved ID for the default PCI group
From: Matthew Rosato The current default PCI group being used can technically collide with a real group ID passed from a hostdev. Let's instead use a group ID that comes from a special pool (0xF0-0xFF) that is architected to be reserved for simulated devices. Fixes: 28dc86a072 ("s390x/pci: use a PCI Group structure") Signed-off-by: Matthew Rosato Reviewed-by: Eric Farman Reviewed-by: Pierre Morel Message-Id: <20211203142706.427279-2-mjros...@linux.ibm.com> Signed-off-by: Thomas Huth --- include/hw/s390x/s390-pci-bus.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/hw/s390x/s390-pci-bus.h b/include/hw/s390x/s390-pci-bus.h index aa891c178d..2727e7bdef 100644 --- a/include/hw/s390x/s390-pci-bus.h +++ b/include/hw/s390x/s390-pci-bus.h @@ -313,7 +313,7 @@ typedef struct ZpciFmb { } ZpciFmb; QEMU_BUILD_BUG_MSG(offsetof(ZpciFmb, fmt0) != 48, "padding in ZpciFmb"); -#define ZPCI_DEFAULT_FN_GRP 0x20 +#define ZPCI_DEFAULT_FN_GRP 0xFF typedef struct S390PCIGroup { ClpRspQueryPciGrp zpci_group; int id; -- 2.27.0
[PULL 2/8] MAINTAINERS: update email address of Christian Borntraeger
From: Christian Borntraeger My borntrae...@de.ibm.com email is just a forwarder to the linux.ibm.com address. Let us remove the extra hop to avoid a potential source of errors. While at it, add the relevant email addresses to mailmap. Signed-off-by: Christian Borntraeger Message-Id: <20211126102449.287524-1-borntrae...@linux.ibm.com> Signed-off-by: Thomas Huth --- .mailmap| 1 + MAINTAINERS | 6 +++--- 2 files changed, 4 insertions(+), 3 deletions(-) diff --git a/.mailmap b/.mailmap index 8beb2f95ae..c45d1c5301 100644 --- a/.mailmap +++ b/.mailmap @@ -50,6 +50,7 @@ Aleksandar Rikalo Aleksandar Rikalo Alexander Graf Anthony Liguori Anthony Liguori +Christian Borntraeger Filip Bozuta Frederic Konrad Greg Kurz diff --git a/MAINTAINERS b/MAINTAINERS index 7543eb4d59..0644ba2b4a 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -393,7 +393,7 @@ F: target/ppc/kvm.c S390 KVM CPUs M: Halil Pasic -M: Christian Borntraeger +M: Christian Borntraeger S: Supported F: target/s390x/kvm/ F: target/s390x/ioinst.[ch] @@ -1527,7 +1527,7 @@ S390 Machines - S390 Virtio-ccw M: Halil Pasic -M: Christian Borntraeger +M: Christian Borntraeger S: Supported F: hw/char/sclp*.[hc] F: hw/char/terminal3270.c @@ -1541,7 +1541,7 @@ T: git https://github.com/borntraeger/qemu.git s390-next L: qemu-s3...@nongnu.org S390-ccw boot -M: Christian Borntraeger +M: Christian Borntraeger M: Thomas Huth S: Supported F: hw/s390x/ipl.* -- 2.27.0
[PULL 0/8] s390x patches (and one gitlab-CI fix)
Hi! The following changes since commit e630bc7ec9dda656a452ed28cac4d1e9ed605d71: Merge tag 'pull-block-2021-12-15' of git://repo.or.cz/qemu/armbru into staging (2021-12-15 12:14:44 -0800) are available in the Git repository at: https://gitlab.com/thuth/qemu.git tags/s390x-2021-12-16 for you to fetch changes up to 79e42726050b7be6c2104a9af678ce911897dfaa: gitlab-ci: Speed up the msys2-64bit job by using --without-default-devices (2021-12-16 10:22:39 +0100) * Small fixes for the s390x PCI code * Fix reset handling of the diag318 data * Add compat machines for 7.0 (all architectures) * Ease timeout problem of the new msys2-64bit job Christian Borntraeger (1): MAINTAINERS: update email address of Christian Borntraeger Collin L. Walling (1): s390: kvm: adjust diag318 resets to retain data Cornelia Huck (1): hw: Add compat machines for 7.0 Matthew Rosato (4): s390x/pci: use a reserved ID for the default PCI group s390x/pci: don't use hard-coded dma range in reg_ioat s390x/pci: use the passthrough measurement update interval s390x/pci: add supported DT information to clp response Thomas Huth (1): gitlab-ci: Speed up the msys2-64bit job by using --without-default-devices .gitlab-ci.d/windows.yml| 2 +- .mailmap| 1 + MAINTAINERS | 6 +++--- hw/arm/virt.c | 9 - hw/core/machine.c | 3 +++ hw/i386/pc.c| 3 +++ hw/i386/pc_piix.c | 14 +- hw/i386/pc_q35.c| 13 - hw/ppc/spapr.c | 15 +-- hw/s390x/s390-pci-bus.c | 1 + hw/s390x/s390-pci-inst.c| 15 +-- hw/s390x/s390-pci-vfio.c| 1 + hw/s390x/s390-virtio-ccw.c | 14 +- include/hw/boards.h | 3 +++ include/hw/i386/pc.h| 3 +++ include/hw/s390x/s390-pci-bus.h | 3 ++- include/hw/s390x/s390-pci-clp.h | 3 ++- target/s390x/cpu.h | 4 ++-- target/s390x/kvm/kvm.c | 4 19 files changed, 97 insertions(+), 20 deletions(-)
[PULL 8/8] gitlab-ci: Speed up the msys2-64bit job by using --without-default-devices
The new msys2-64bit job is often running for more than 50 minutes - and if the CI is currently loaded, it times out after 60 minutes. The job has been declared with a bigger timeout, but seems like this is getting ignored on the shared Gitlab-CI Windows runners, so we're currently seeing a lot of failures with this job. Thus we have to reduce the time it takes to finish this job. Since we want to test compiling the WHPX and HAX accelerator code with this job, switching to another target CPU is not really a good option, so let's reduce the amount of code that we have to compile with the --without-default-devices switch instead. Message-Id: <20211216082253.43899-1-th...@redhat.com> Reviewed-by: Philippe Mathieu-Daudé Signed-off-by: Thomas Huth --- .gitlab-ci.d/windows.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.gitlab-ci.d/windows.yml b/.gitlab-ci.d/windows.yml index 309f7e7fb8..62dd9ed832 100644 --- a/.gitlab-ci.d/windows.yml +++ b/.gitlab-ci.d/windows.yml @@ -58,7 +58,7 @@ msys2-64bit: - $env:CHERE_INVOKING = 'yes' # Preserve the current working directory - $env:MSYSTEM = 'MINGW64' # Start a 64 bit Mingw environment - .\msys64\usr\bin\bash -lc './configure --target-list=x86_64-softmmu - --enable-capstone=system' + --enable-capstone=system --without-default-devices' - .\msys64\usr\bin\bash -lc "sed -i '/^ROMS=/d' build/config-host.mak" - .\msys64\usr\bin\bash -lc 'make -j2' - .\msys64\usr\bin\bash -lc 'make check' -- 2.27.0
[PULL 4/8] s390x/pci: don't use hard-coded dma range in reg_ioat
From: Matthew Rosato Instead use the values from clp info, they will either be the hard-coded values or what came from the host driver via vfio. Fixes: 9670ee752727 ("s390x/pci: use a PCI Function structure") Signed-off-by: Matthew Rosato Reviewed-by: Eric Farman Reviewed-by: Pierre Morel Message-Id: <20211203142706.427279-3-mjros...@linux.ibm.com> Signed-off-by: Thomas Huth --- hw/s390x/s390-pci-inst.c | 9 + 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c index 1c8ad91175..11b7f6bfa1 100644 --- a/hw/s390x/s390-pci-inst.c +++ b/hw/s390x/s390-pci-inst.c @@ -916,9 +916,10 @@ int pci_dereg_irqs(S390PCIBusDevice *pbdev) return 0; } -static int reg_ioat(CPUS390XState *env, S390PCIIOMMU *iommu, ZpciFib fib, +static int reg_ioat(CPUS390XState *env, S390PCIBusDevice *pbdev, ZpciFib fib, uintptr_t ra) { +S390PCIIOMMU *iommu = pbdev->iommu; uint64_t pba = ldq_p(&fib.pba); uint64_t pal = ldq_p(&fib.pal); uint64_t g_iota = ldq_p(&fib.iota); @@ -927,7 +928,7 @@ static int reg_ioat(CPUS390XState *env, S390PCIIOMMU *iommu, ZpciFib fib, pba &= ~0xfff; pal |= 0xfff; -if (pba > pal || pba < ZPCI_SDMA_ADDR || pal > ZPCI_EDMA_ADDR) { +if (pba > pal || pba < pbdev->zpci_fn.sdma || pal > pbdev->zpci_fn.edma) { s390_program_interrupt(env, PGM_OPERAND, ra); return -EINVAL; } @@ -1125,7 +1126,7 @@ int mpcifc_service_call(S390CPU *cpu, uint8_t r1, uint64_t fiba, uint8_t ar, } else if (pbdev->iommu->enabled) { cc = ZPCI_PCI_LS_ERR; s390_set_status_code(env, r1, ZPCI_MOD_ST_SEQUENCE); -} else if (reg_ioat(env, pbdev->iommu, fib, ra)) { +} else if (reg_ioat(env, pbdev, fib, ra)) { cc = ZPCI_PCI_LS_ERR; s390_set_status_code(env, r1, ZPCI_MOD_ST_INSUF_RES); } @@ -1150,7 +1151,7 @@ int mpcifc_service_call(S390CPU *cpu, uint8_t r1, uint64_t fiba, uint8_t ar, s390_set_status_code(env, r1, ZPCI_MOD_ST_SEQUENCE); } else { pci_dereg_ioat(pbdev->iommu); -if (reg_ioat(env, pbdev->iommu, fib, ra)) { +if (reg_ioat(env, pbdev, fib, ra)) { cc = ZPCI_PCI_LS_ERR; s390_set_status_code(env, r1, ZPCI_MOD_ST_INSUF_RES); } -- 2.27.0
[PULL 6/8] s390x/pci: add supported DT information to clp response
From: Matthew Rosato The DTSM is a mask that specifies which I/O Address Translation designation types are supported. Today QEMU only supports DT=1. Signed-off-by: Matthew Rosato Reviewed-by: Eric Farman Reviewed-by: Pierre Morel Message-Id: <20211203142706.427279-5-mjros...@linux.ibm.com> Signed-off-by: Thomas Huth --- hw/s390x/s390-pci-bus.c | 1 + hw/s390x/s390-pci-inst.c| 1 + hw/s390x/s390-pci-vfio.c| 1 + include/hw/s390x/s390-pci-bus.h | 1 + include/hw/s390x/s390-pci-clp.h | 3 ++- 5 files changed, 6 insertions(+), 1 deletion(-) diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c index 1b51a72838..01b58ebc70 100644 --- a/hw/s390x/s390-pci-bus.c +++ b/hw/s390x/s390-pci-bus.c @@ -782,6 +782,7 @@ static void s390_pci_init_default_group(void) resgrp->i = 128; resgrp->maxstbl = 128; resgrp->version = 0; +resgrp->dtsm = ZPCI_DTSM; } static void set_pbdev_info(S390PCIBusDevice *pbdev) diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c index 07bab85ce5..6d400d4147 100644 --- a/hw/s390x/s390-pci-inst.c +++ b/hw/s390x/s390-pci-inst.c @@ -329,6 +329,7 @@ int clp_service_call(S390CPU *cpu, uint8_t r2, uintptr_t ra) stw_p(&resgrp->i, group->zpci_group.i); stw_p(&resgrp->maxstbl, group->zpci_group.maxstbl); resgrp->version = group->zpci_group.version; +resgrp->dtsm = group->zpci_group.dtsm; stw_p(&resgrp->hdr.rsp, CLP_RC_OK); break; } diff --git a/hw/s390x/s390-pci-vfio.c b/hw/s390x/s390-pci-vfio.c index 2a153fa8c9..6f80a47e29 100644 --- a/hw/s390x/s390-pci-vfio.c +++ b/hw/s390x/s390-pci-vfio.c @@ -160,6 +160,7 @@ static void s390_pci_read_group(S390PCIBusDevice *pbdev, resgrp->i = cap->noi; resgrp->maxstbl = cap->maxstbl; resgrp->version = cap->version; +resgrp->dtsm = ZPCI_DTSM; } } diff --git a/include/hw/s390x/s390-pci-bus.h b/include/hw/s390x/s390-pci-bus.h index 2727e7bdef..da3cde2bb4 100644 --- a/include/hw/s390x/s390-pci-bus.h +++ b/include/hw/s390x/s390-pci-bus.h @@ -37,6 +37,7 @@ #define ZPCI_MAX_UID 0x #define UID_UNDEFINED 0 #define UID_CHECKING_ENABLED 0x01 +#define ZPCI_DTSM 0x40 OBJECT_DECLARE_SIMPLE_TYPE(S390pciState, S390_PCI_HOST_BRIDGE) OBJECT_DECLARE_SIMPLE_TYPE(S390PCIBus, S390_PCI_BUS) diff --git a/include/hw/s390x/s390-pci-clp.h b/include/hw/s390x/s390-pci-clp.h index 96b8e3f133..cc8c8662b8 100644 --- a/include/hw/s390x/s390-pci-clp.h +++ b/include/hw/s390x/s390-pci-clp.h @@ -163,7 +163,8 @@ typedef struct ClpRspQueryPciGrp { uint8_t fr; uint16_t maxstbl; uint16_t mui; -uint64_t reserved3; +uint8_t dtsm; +uint8_t reserved3[7]; uint64_t dasm; /* dma address space mask */ uint64_t msia; /* MSI address */ uint64_t reserved4; -- 2.27.0
[PULL 7/8] hw: Add compat machines for 7.0
From: Cornelia Huck Add 7.0 machine types for arm/i440fx/q35/s390x/spapr. Signed-off-by: Cornelia Huck Message-Id: <20211208170241.110551-1-coh...@redhat.com> Acked-by: Cédric Le Goater Reviewed-by: Juan Quintela Signed-off-by: Thomas Huth --- hw/arm/virt.c | 9 - hw/core/machine.c | 3 +++ hw/i386/pc.c | 3 +++ hw/i386/pc_piix.c | 14 +- hw/i386/pc_q35.c | 13 - hw/ppc/spapr.c | 15 +-- hw/s390x/s390-virtio-ccw.c | 14 +- include/hw/boards.h| 3 +++ include/hw/i386/pc.h | 3 +++ 9 files changed, 71 insertions(+), 6 deletions(-) diff --git a/hw/arm/virt.c b/hw/arm/virt.c index 6bce595aba..4593fea1ce 100644 --- a/hw/arm/virt.c +++ b/hw/arm/virt.c @@ -2856,10 +2856,17 @@ static void machvirt_machine_init(void) } type_init(machvirt_machine_init); +static void virt_machine_7_0_options(MachineClass *mc) +{ +} +DEFINE_VIRT_MACHINE_AS_LATEST(7, 0) + static void virt_machine_6_2_options(MachineClass *mc) { +virt_machine_7_0_options(mc); +compat_props_add(mc->compat_props, hw_compat_6_2, hw_compat_6_2_len); } -DEFINE_VIRT_MACHINE_AS_LATEST(6, 2) +DEFINE_VIRT_MACHINE(6, 2) static void virt_machine_6_1_options(MachineClass *mc) { diff --git a/hw/core/machine.c b/hw/core/machine.c index 53a99abc56..a9c15479fe 100644 --- a/hw/core/machine.c +++ b/hw/core/machine.c @@ -37,6 +37,9 @@ #include "hw/virtio/virtio.h" #include "hw/virtio/virtio-pci.h" +GlobalProperty hw_compat_6_2[] = {}; +const size_t hw_compat_6_2_len = G_N_ELEMENTS(hw_compat_6_2); + GlobalProperty hw_compat_6_1[] = { { "vhost-user-vsock-device", "seqpacket", "off" }, { "nvme-ns", "shared", "off" }, diff --git a/hw/i386/pc.c b/hw/i386/pc.c index a2ef40ecbc..fccde2ef39 100644 --- a/hw/i386/pc.c +++ b/hw/i386/pc.c @@ -94,6 +94,9 @@ #include "trace.h" #include CONFIG_DEVICES +GlobalProperty pc_compat_6_2[] = {}; +const size_t pc_compat_6_2_len = G_N_ELEMENTS(pc_compat_6_2); + GlobalProperty pc_compat_6_1[] = { { TYPE_X86_CPU, "hv-version-id-build", "0x1bbc" }, { TYPE_X86_CPU, "hv-version-id-major", "0x0006" }, diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c index 223dd3e05d..b03026bf06 100644 --- a/hw/i386/pc_piix.c +++ b/hw/i386/pc_piix.c @@ -413,7 +413,7 @@ static void pc_i440fx_machine_options(MachineClass *m) machine_class_allow_dynamic_sysbus_dev(m, TYPE_VMBUS_BRIDGE); } -static void pc_i440fx_6_2_machine_options(MachineClass *m) +static void pc_i440fx_7_0_machine_options(MachineClass *m) { PCMachineClass *pcmc = PC_MACHINE_CLASS(m); pc_i440fx_machine_options(m); @@ -422,6 +422,18 @@ static void pc_i440fx_6_2_machine_options(MachineClass *m) pcmc->default_cpu_version = 1; } +DEFINE_I440FX_MACHINE(v7_0, "pc-i440fx-7.0", NULL, + pc_i440fx_7_0_machine_options); + +static void pc_i440fx_6_2_machine_options(MachineClass *m) +{ +pc_i440fx_machine_options(m); +m->alias = NULL; +m->is_default = false; +compat_props_add(m->compat_props, hw_compat_6_2, hw_compat_6_2_len); +compat_props_add(m->compat_props, pc_compat_6_2, pc_compat_6_2_len); +} + DEFINE_I440FX_MACHINE(v6_2, "pc-i440fx-6.2", NULL, pc_i440fx_6_2_machine_options); diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c index e1e100316d..6b66eb16bb 100644 --- a/hw/i386/pc_q35.c +++ b/hw/i386/pc_q35.c @@ -360,7 +360,7 @@ static void pc_q35_machine_options(MachineClass *m) m->max_cpus = 288; } -static void pc_q35_6_2_machine_options(MachineClass *m) +static void pc_q35_7_0_machine_options(MachineClass *m) { PCMachineClass *pcmc = PC_MACHINE_CLASS(m); pc_q35_machine_options(m); @@ -368,6 +368,17 @@ static void pc_q35_6_2_machine_options(MachineClass *m) pcmc->default_cpu_version = 1; } 
+DEFINE_Q35_MACHINE(v7_0, "pc-q35-7.0", NULL, + pc_q35_7_0_machine_options); + +static void pc_q35_6_2_machine_options(MachineClass *m) +{ +pc_q35_machine_options(m); +m->alias = NULL; +compat_props_add(m->compat_props, hw_compat_6_2, hw_compat_6_2_len); +compat_props_add(m->compat_props, pc_compat_6_2, pc_compat_6_2_len); +} + DEFINE_Q35_MACHINE(v6_2, "pc-q35-6.2", NULL, pc_q35_6_2_machine_options); diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c index 3b5fd749be..8373429325 100644 --- a/hw/ppc/spapr.c +++ b/hw/ppc/spapr.c @@ -4665,15 +4665,26 @@ static void spapr_machine_latest_class_options(MachineClass *mc) }\ type_init(spapr_machine_register_##suffix) +/* + * pseries-7.0 + */ +static void spapr_machine_7_0_class_options(MachineClass *mc) +{ +/* Defaults for the latest behaviour inherited from the base class */ +} + +DEFINE_SPAPR_MACHINE(7_0, "7.0", true); + /* * pseries-6.2 */ static void spapr_machine_6_2_class_options(MachineClass *mc) { -/* Defaults for the lates
[PULL 5/8] s390x/pci: use the passthrough measurement update interval
From: Matthew Rosato We may have gotten a measurement update interval from the underlying host via vfio -- Use it to set the interval via which we update the function measurement block. Fixes: 28dc86a072 ("s390x/pci: use a PCI Group structure") Signed-off-by: Matthew Rosato Reviewed-by: Eric Farman Reviewed-by: Pierre Morel Message-Id: <20211203142706.427279-4-mjros...@linux.ibm.com> Signed-off-by: Thomas Huth --- hw/s390x/s390-pci-inst.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c index 11b7f6bfa1..07bab85ce5 100644 --- a/hw/s390x/s390-pci-inst.c +++ b/hw/s390x/s390-pci-inst.c @@ -1046,7 +1046,7 @@ static void fmb_update(void *opaque) sizeof(pbdev->fmb.last_update))) { return; } -timer_mod(pbdev->fmb_timer, t + DEFAULT_MUI); +timer_mod(pbdev->fmb_timer, t + pbdev->pci_group->zpci_group.mui); } int mpcifc_service_call(S390CPU *cpu, uint8_t r1, uint64_t fiba, uint8_t ar, @@ -1204,7 +1204,8 @@ int mpcifc_service_call(S390CPU *cpu, uint8_t r1, uint64_t fiba, uint8_t ar, } pbdev->fmb_addr = fmb_addr; timer_mod(pbdev->fmb_timer, - qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + DEFAULT_MUI); + qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + +pbdev->pci_group->zpci_group.mui); break; } default: -- 2.27.0
Re: [PATCH v4 0/3] hw/block/fdc: Fix CVE-2021-20196
On 12/10/21 14:42, Kevin Wolf wrote: > Am 24.11.2021 um 17:15 hat Philippe Mathieu-Daudé geschrieben: >> Since v3: >> - Preliminary extract blk_create_empty_drive() >> - qtest checks qtest_check_clang_sanitizer() enabled >> - qtest uses null-co:// driver instead of file >> >> Philippe Mathieu-Daudé (3): >> hw/block/fdc: Extract blk_create_empty_drive() >> hw/block/fdc: Kludge missing floppy drive to fix CVE-2021-20196 >> tests/qtest/fdc-test: Add a regression test for CVE-2021-20196 > > If I may ask a meta question: No doubt that this is a bug and it's good > that we fixed it, but why was it assigned a CVE? No clue, I suppose this is audited and handled by qemu-security@ team members. Cc'ing them. > Any guest can legitimately shut down and we don't consider that a denial > of service. This bug was essentially just another undocumented way for > the guest kernel to shut down, as unprivileged users in the guest can't > normally access the I/O ports of the floppy controller. I don't think we > generally consider guests killing themselves a security problem as long > as it requires kernel or root privileges in the guest. Agreed.
Re: [PATCH v4 05/14] vfio-user: instantiate vfio-user context
On Wed, Dec 15, 2021 at 10:35:29AM -0500, Jagannathan Raman wrote: > +static void vfu_object_init_ctx(VfuObject *o, Error **errp) > +{ > +ERRP_GUARD(); > + > +if (o->vfu_ctx || !o->socket || !o->device || > +!phase_check(PHASE_MACHINE_READY)) { > +return; > +} > + > +if (o->err) { > +error_propagate(errp, o->err); Missing o->err = NULL because ownership has been passed to errp.
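Concretely, the missing line would sit right after the propagation, along these lines:

if (o->err) {
    error_propagate(errp, o->err);
    /* errp owns the error now; clear the field so it is not reused or freed again */
    o->err = NULL;
}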
Re: [PULL 31/33] tests/acpi: add test case for VIOT
On Wed, Dec 15, 2021 at 04:59:10PM -0800, Richard Henderson wrote: > On 12/15/21 2:40 AM, Peter Maydell wrote: > > From: Jean-Philippe Brucker > > > > Add two test cases for VIOT, one on the q35 machine and the other on > > virt. To test complex topologies the q35 test has two PCIe buses that > > bypass the IOMMU (and are therefore not described by VIOT), and two > > buses that are translated by virtio-iommu. > > > > Reviewed-by: Eric Auger > > Reviewed-by: Igor Mammedov > > Signed-off-by: Jean-Philippe Brucker > > Message-id: 20211210170415.583179-7-jean-phili...@linaro.org > > Signed-off-by: Peter Maydell > > --- > > tests/qtest/bios-tables-test.c | 38 ++ > > 1 file changed, 38 insertions(+) > > I should have been more careful while applying. The aarch64 host failure > for this is not transient as I first assumed: > > PASS 5 qtest-aarch64/bios-tables-test /aarch64/acpi/virt/oem-fields > qemu-system-aarch64: kvm_init_vcpu: kvm_arch_init_vcpu failed (0): Invalid > argument > Broken pipe > ERROR qtest-aarch64/bios-tables-test - too few tests run (expected 6, got 5) > make: *** [Makefile.mtest:312: run-test-37] Error 1 I'm guessing adding "tcg_only = true", like all other virt machine tests in this file, should work around this, but I don't really understand the problem because I can't reproduce it on my aarch64 host (I'm running "configure --target-list=aarch64-softmmu" followed by "make -j10 check-qtest V=1" in a loop) Does the attached patch fix it for you? Thanks, Jean --- 8< --- >From 6da0e4d98022d173c79e3caab273b226617d5943 Mon Sep 17 00:00:00 2001 From: Jean-Philippe Brucker Date: Thu, 16 Dec 2021 09:15:06 + Subject: [PATCH] tests/qtest/bios-tables-test: Only run VIOT test on TCG The VIOT test does not always work under KVM on the virt machine: PASS 5 qtest-aarch64/bios-tables-test /aarch64/acpi/virt/oem-fields qemu-system-aarch64: kvm_init_vcpu: kvm_arch_init_vcpu failed (0): Invalid argument Broken pipe Make it TCG only. Reported-by: Richard Henderson Signed-off-by: Jean-Philippe Brucker --- tests/qtest/bios-tables-test.c | 1 + 1 file changed, 1 insertion(+) diff --git a/tests/qtest/bios-tables-test.c b/tests/qtest/bios-tables-test.c index 58df53b15b..9a468e29eb 100644 --- a/tests/qtest/bios-tables-test.c +++ b/tests/qtest/bios-tables-test.c @@ -1489,6 +1489,7 @@ static void test_acpi_virt_viot(void) { test_data data = { .machine = "virt", +.tcg_only = true, .uefi_fl1 = "pc-bios/edk2-aarch64-code.fd", .uefi_fl2 = "pc-bios/edk2-arm-vars.fd", .cd = "tests/data/uefi-boot-images/bios-tables-test.aarch64.iso.qcow2", -- 2.34.1
Re: [PATCH v4 04/14] vfio-user: define vfio-user-server object
On Wed, Dec 15, 2021 at 10:35:28AM -0500, Jagannathan Raman wrote: > +static void vfu_object_set_socket(Object *obj, Visitor *v, const char *name, > + void *opaque, Error **errp) > +{ > +VfuObject *o = VFU_OBJECT(obj); > + > +qapi_free_SocketAddress(o->socket); > + > +o->socket = NULL; > + > +visit_type_SocketAddress(v, name, &o->socket, errp); > + > +if (o->socket->type != SOCKET_ADDRESS_TYPE_UNIX) { > +qapi_free_SocketAddress(o->socket); > +o->socket = NULL; > +error_setg(errp, "vfu: Unsupported socket type - %s", > + o->socket->u.q_unix.path); > +return; > +} > + > +trace_vfu_prop("socket", o->socket->u.q_unix.path); > +} > + > +static void vfu_object_set_device(Object *obj, const char *str, Error **errp) > +{ > +VfuObject *o = VFU_OBJECT(obj); > + > +g_free(o->device); > + > +o->device = g_strdup(str); > + > +trace_vfu_prop("device", str); > +} It appears "socket" and "device" can be changed after the vfio-user server has started. In the best case it just means the properties contain values that do not reflect the actual socket/device currently in use, which is confusing. It's safer to refuse changing these properties once the vfio-user server has started. Stefan signature.asc Description: PGP signature
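One way to implement that refusal -- a sketch only, using the vfu_ctx field from the earlier patch as the "server already started" indicator -- is to reject the write at the top of each setter, e.g. for the device property quoted above:

static void vfu_object_set_device(Object *obj, const char *str, Error **errp)
{
    VfuObject *o = VFU_OBJECT(obj);

    if (o->vfu_ctx) {
        error_setg(errp, "Property 'device' cannot be changed once the "
                   "vfio-user server is running");
        return;
    }

    g_free(o->device);
    o->device = g_strdup(str);
    trace_vfu_prop("device", str);
}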
Re: [PATCH] block/file-posix: Simplify the XFS_IOC_DIOINFO handling
Am 15.12.2021 um 13:58 hat Thomas Huth geschrieben: > The handling for the XFS_IOC_DIOINFO ioctl is currently quite excessive: > This is not a "real" feature like the other features that we provide with > the "--enable-xxx" and "--disable-xxx" switches for the configure script, > since this does not influence lots of code (it's only about one call to > xfsctl() in file-posix.c), so people don't gain much with the ability to > disable this with "--disable-xfsctl". > It's also unfortunate that the ioctl will be disabled on Linux in case > the user did not install the right xfsprogs-devel package before running > configure. Thus let's simplify this by providing the ioctl definition > on our own, so we can completely get rid of the header dependency and > thus the related code in the configure script. > > Suggested-by: Paolo Bonzini > Signed-off-by: Thomas Huth Thanks, applied to the block branch. Kevin
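The self-contained replacement boils down to defining the dioattr structure and the ioctl request locally instead of including the xfsprogs header. Roughly (illustrative sketch; the values must match the kernel's xfs_fs.h, and the names are not necessarily those used in the actual patch):

#include <sys/ioctl.h>
#include <stdint.h>

/* Local equivalent of struct dioattr and XFS_IOC_DIOINFO from xfs_fs.h. */
struct xfs_dioattr {
    uint32_t d_mem;      /* memory alignment for direct I/O */
    uint32_t d_miniosz;  /* minimum direct I/O size */
    uint32_t d_maxiosz;  /* maximum direct I/O size */
};
#define XFS_IOC_DIOINFO _IOR('X', 30, struct xfs_dioattr)

/* Usage sketch: query direct I/O limits of an fd on an XFS file system. */
static int xfs_dio_info(int fd, struct xfs_dioattr *da)
{
    return ioctl(fd, XFS_IOC_DIOINFO, da);
}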
Re: [PATCH v2 04/25] python/aqmp: add SocketAddrT to package root
15.12.2021 22:39, John Snow wrote: It's a commonly needed definition; it can be re-exported by the root. Signed-off-by: John Snow Reviewed-by: Vladimir Sementsov-Ogievskiy -- Best regards, Vladimir
[PATCH] thunk: do not use HOST_* defines
Just use sizeof, avoiding the need to write down all the ABIs twice. Cc: Laurent Vivier Signed-off-by: Paolo Bonzini --- include/exec/user/thunk.h | 12 ++-- 1 file changed, 2 insertions(+), 10 deletions(-) diff --git a/include/exec/user/thunk.h b/include/exec/user/thunk.h index 300a840d58..c50ba17317 100644 --- a/include/exec/user/thunk.h +++ b/include/exec/user/thunk.h @@ -22,6 +22,7 @@ #include "cpu.h" #include "exec/user/abitypes.h" +#include /* types enums definitions */ @@ -109,16 +110,7 @@ static inline int thunk_type_size(const argtype *type_ptr, int is_host) break; case TYPE_OLDDEVT: if (is_host) { -#if defined(HOST_X86_64) -return 8; -#elif defined(HOST_ALPHA) || defined(HOST_IA64) || defined(HOST_MIPS) || \ - defined(HOST_PARISC) || defined(HOST_SPARC64) -return 4; -#elif defined(HOST_PPC) -return sizeof(void *); -#else -return 2; -#endif +return sizeof(__kernel_old_dev_t); } else { #if defined(TARGET_X86_64) return 8; -- 2.33.1
Re: [PATCH v2 02/25] python/aqmp: handle asyncio.TimeoutError on execute()
15.12.2021 22:39, John Snow wrote: This exception can be injected into any await statement. If we are canceled via timeout, we want to clear the pending execution record on our way out. Hmm, but there are more await statements in the file, shouldn't we care about them too ? Signed-off-by: John Snow --- python/qemu/aqmp/qmp_client.py | 8 ++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/python/qemu/aqmp/qmp_client.py b/python/qemu/aqmp/qmp_client.py index 8105e29fa8..6a985ffe30 100644 --- a/python/qemu/aqmp/qmp_client.py +++ b/python/qemu/aqmp/qmp_client.py @@ -435,7 +435,11 @@ async def _issue(self, msg: Message) -> Union[None, str]: msg_id = msg['id'] self._pending[msg_id] = asyncio.Queue(maxsize=1) -await self._outgoing.put(msg) +try: +await self._outgoing.put(msg) +except: Doesn't pylint and others complain about plain "except". Do we really need to catch any exception here? As far as I know that's not a good practice. +del self._pending[msg_id] +raise return msg_id @@ -452,9 +456,9 @@ async def _reply(self, msg_id: Union[str, None]) -> Message: was lost, or some other problem. """ queue = self._pending[msg_id] -result = await queue.get() try: +result = await queue.get() if isinstance(result, ExecInterruptedError): raise result return result This one looks good, just include it into existing try-finally Hmm. _issue() and _reply() are used only in one place, as a pair. It looks like both "awaits" should be better under one try-finally block. For example, move "self._pending[msg_id] = asyncio.Queue(maxsize=1)" to _execute, and just do try-finally in _execute() around _issue and _reply. Or may be just merge the whole logic in _execute, it doesn't seem too much. What do you think? -- Best regards, Vladimir
Re: [PATCH v2 05/25] python/aqmp: rename AQMPError to QMPError
15.12.2021 22:39, John Snow wrote: This is in preparation for renaming qemu.aqmp to qemu.qmp. I should have done this from this from the very beginning, but it's a convenient time to make sure this churn is taken care of. Signed-off-by: John Snow Honestly, I don't want to check how it intersects with exising QMPError from old library, as it's going to be removed anyway. Hope, everything is OK: Reviewed-by: Vladimir Sementsov-Ogievskiy --- python/qemu/aqmp/__init__.py | 6 +++--- python/qemu/aqmp/error.py | 12 ++-- python/qemu/aqmp/events.py | 4 ++-- python/qemu/aqmp/legacy.py | 4 ++-- python/qemu/aqmp/protocol.py | 8 python/qemu/aqmp/qmp_client.py | 8 6 files changed, 21 insertions(+), 21 deletions(-) diff --git a/python/qemu/aqmp/__init__.py b/python/qemu/aqmp/__init__.py index c6fa2dda58..e1efab00af 100644 --- a/python/qemu/aqmp/__init__.py +++ b/python/qemu/aqmp/__init__.py @@ -6,7 +6,7 @@ QEMU Guest Agent, and the QEMU Storage Daemon. `QMPClient` provides the main functionality of this package. All errors -raised by this library dervive from `AQMPError`, see `aqmp.error` for +raised by this library dervive from `QMPError`, see `aqmp.error` for preexisting: s/dervive/derive/ -- Best regards, Vladimir
Re: [PATCH v2 13/25] scripts/cpu-x86-uarch-abi: fix CLI parsing
On Wed, Dec 15, 2021 at 02:39:27PM -0500, John Snow wrote: > Signed-off-by: John Snow > --- > scripts/cpu-x86-uarch-abi.py | 3 +-- > 1 file changed, 1 insertion(+), 2 deletions(-) Reviewed-by: Daniel P. Berrangé Regards, Daniel -- |: https://berrange.com -o-https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o-https://fstop138.berrange.com :| |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
Re: [PATCH v2 06/25] python/qemu-ga-client: update instructions to newer CLI syntax
I had to search a bit through the history to check this ) The commit ccd3b3b8112b670f "qemu-option: warn for short-form boolean options" may be noted here. And may be subject changed to "don't use deprecated syntax in comment" or something like this. server=on is not a *new* syntax I think. 15.12.2021 22:39, John Snow wrote: Signed-off-by: John Snow Reviewed-by: Vladimir Sementsov-Ogievskiy --- python/qemu/qmp/qemu_ga_client.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/python/qemu/qmp/qemu_ga_client.py b/python/qemu/qmp/qemu_ga_client.py index 67ac0b4211..b3e1d98c9e 100644 --- a/python/qemu/qmp/qemu_ga_client.py +++ b/python/qemu/qmp/qemu_ga_client.py @@ -5,7 +5,7 @@ Start QEMU with: -# qemu [...] -chardev socket,path=/tmp/qga.sock,server,wait=off,id=qga0 \ +# qemu [...] -chardev socket,path=/tmp/qga.sock,server=on,wait=off,id=qga0 \ -device virtio-serial \ -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0 -- Best regards, Vladimir
Re: [PATCH v2 14/25] scripts/cpu-x86-uarch-abi: switch to AQMP
On Wed, Dec 15, 2021 at 02:39:28PM -0500, John Snow wrote: > Signed-off-by: John Snow > --- > scripts/cpu-x86-uarch-abi.py | 4 ++-- > 1 file changed, 2 insertions(+), 2 deletions(-) Reviewed-by: Daniel P. Berrangé Regards, Daniel -- |: https://berrange.com -o-https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o-https://fstop138.berrange.com :| |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
Re: [PATCH v2 03/25] python/aqmp: copy type definitions from qmp
15.12.2021 22:39, John Snow wrote: Copy the remaining type definitions from QMP into the qemu.aqmp.legacy module. Now, most users don't need to import anything else but qemu.aqmp.legacy. Signed-off-by: John Snow Reviewed-by: Vladimir Sementsov-Ogievskiy -- Best regards, Vladimir
Re: [PATCH v2 06/25] python/qemu-ga-client: update instructions to newer CLI syntax
On Wed, Dec 15, 2021 at 02:39:20PM -0500, John Snow wrote: > Signed-off-by: John Snow > --- > python/qemu/qmp/qemu_ga_client.py | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) Reviewed-by: Daniel P. Berrangé Regards, Daniel -- |: https://berrange.com -o-https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o-https://fstop138.berrange.com :| |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
Re: [PATCH v2 03/25] python/aqmp: copy type definitions from qmp
On Wed, Dec 15, 2021 at 02:39:17PM -0500, John Snow wrote: > Copy the remaining type definitions from QMP into the qemu.aqmp.legacy > module. Now, most users don't need to import anything else but > qemu.aqmp.legacy. I'm probably missing the historical discussion but it feels very wierd to be saying "most users don't need anything except legacy" Naively, I'd expect most users to want something *not* legacy. Regards, Daniel -- |: https://berrange.com -o-https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o-https://fstop138.berrange.com :| |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
Re: [PATCH v10 3/3] cpus-common: implement dirty page limit on virtual CPU
Hyman Huang writes: [...] > So the final qmp format we concluded on is: > > case 1: setup vcpu 0 dirty page limit 100MB/s > set-vcpu-dirty-limit cpu-index=0 dirty-rate=100 > > case 2: setup all vcpu dirty page limit 100MB/s > set-vcpu-dirty-limit dirty-rate=100 > > case 3: cancel vcpu 0 dirty page limit > cancel-vcpu-dirty-limit cpu-index=0 > > case 4: cancel all vcpu dirty page limit > cancel-vcpu-dirty-limit > > case 5: query limit information of all enabled vcpus > query-vcpu-dirty-limit > > And the corresponding hmp format keeps the same style: > > Is there any advice? :) Looks okay to me.
Re: Redesign of QEMU startup & initial configuration
Paolo Bonzini writes: > On 12/14/21 12:48, Markus Armbruster wrote: >> Let's start with where we (hopefully) agree: > > More or less I do agree with this, except for a couple points below > where I think we disagree. > >> * We need a single, cohesive, low-level interface suitable for >> management applications. >> >> * The existing interface is specified in QAPI. Its concrete transport >> is QMP. >> >> * The existing interface is not complete: certain things can only be >> done with the CLI. >> >> * The existing transport is not available early enough to permit >> completing the interface. > > So far so good. > >> * Fixing that involves a rework of startup. >> >> * Reworking the existing startup and managing incompatible changes is >> impractical, and likely to make the mess we have on our hands worse. > > Not really, in particular the startup has been mostly reworked already > and I disagree that it is messy. softmmu/vl.c is messy, sure: it has > N different command line parser for command line options, magic > related to default devices, and complicated ordering of -object > creation. > > But the building of emulator data structures is not messy; only the > code that transforms the user's instructions into startup commands. > The messy parts are almost entirely contained within softmmu/vl.c. In my opinion, the worst mess is the reordering and the (commonly unstated, sometimes unknown) dependencies that govern it. The reordering is part of the stable interface. Its finer points basically nobody understands, at least not without staring at the code. When we mess with the order, we commonly break things. This is what leads me to my "still a mess" and "impractical" verdicts. > The one (and important, but fixable) exception is backends for > on-board devices: serial_hd, drive_get, and nd_table. Practical ideas on fixing it? >> * A new binary sidesteps the need to manage incompatible change. > > More precisely, a new binary sidesteps the need to integrate an > existing mechanism with a new transport, and deal with the > incompatibilities that arise. I'm not sure I got this. >> Any objections so far? >> >> Now let me make a few more points: >> >> * Single, cohesive interface does not require single transport. In >> fact, we already have two: QMP and the (internal) C interface. >> >> * QMP encodes the abstract interface in JSON, and offers the result on a >> Unix domain socket[1]. >> >> * The (internal) C interface encodes the abstract interface as a set of >> C data types and functions. >> >> * Consider a configuration file transport that encodes the abstract >> interface in JSON. The only wart this adds is syntax that is >> arguiably ill-suited to the purpose. More suitable syntax exists. >> >> * Similar for CLI. >> >> * To get a "a second set of warts layered on top", we actually have to >> layer something on top that isn't utterly trivial. Like a >> higher-level interface. The "second set of warts" objection does not >> apply to (sane) transports. > > The problem is that CLI and HMP, being targeted to humans (and as you > say below humans matter), are not necessarily trivial transports. If > people find the trivial transport unusable, we will not be able to > retire the old CLI. Yes, a trivial CLI transport alone may not suffice to retire the old CLI. By itself, that doesn't mean trivial transports must be avoided. Do I have to argue the benefits of a trivial configuration file transport? 
Do I have to argue the benefits of a trivial CLI transport for use by relatively unsophisticated programs / relatively sophisticated humans (such as developers)? Can we agree these benefits are not zero? If yes, we can move on to debate whether such trivial transports are worth their (modest) keep. > Bad CLI is also very hard to deprecate because, unlike QMP (for which > you can delegate the workarounds to Libvirt & friends) and HMP (for > which people can just learn the new thing and type it), it is baked in > countless scripts. People hate it when scripts break. I assure you that bad QMP is plenty hard to deprecate, even when libvirt can be updated to cope. I think HMP is easier to change than QMP and CLI mostly because we've spent years guiding people in need of a stable interface towards QMP. >> * The old CLI is partly layered on QMP, partly on HMP, and partly on >> internal C interfaces. It's full of warts. > > I've worked on moving it more towards QMP or at least QOM, and much > less on internal C interfaces. It still has warts but they are > self-contained and due to the baroque ordering of options. My point > is that we can continue this work to the point that having separate > entry points (one CLI-centered, one QMP-only) is not a problem. > > The issues with the CLI then can be completely self-contained within > softmmu/vl.c, and will not influence the innards of QEMU. The issues with the CLI will still influence its users. Can we agree that human
Re: [PATCH] hw/acpi: Set memory regions to native endian as a work around
ping On Mon, Nov 08, 2021 at 08:33:01AM -0500, Michael S. Tsirkin wrote: > On Mon, Nov 08, 2021 at 02:05:42PM +0100, BALATON Zoltan wrote: > > When using ACPI on big endian machine (such as ppc/pegasos2 which has > > a VT8231 south bridge with ACPI) writes to ACPI registers come out > > byte swapped. This may be caused by a bug in memory subsystem but > > until that is fixed setting the ACPI memory regions to native endian > > makes it usable for big endian machines. This fixes ACPI shutdown with > > pegasos2 when using the board firmware for now. > > This could be reverted when the memory layer is fixed. > > > > Signed-off-by: BALATON Zoltan > > > Paolo, could you weight in on whether we can fix it properly > in the memory core? I suspect it's not a good idea to switch > to native without adding a bunch of byteswaps all > over the place ... > > > --- > > hw/acpi/core.c | 6 +++--- > > 1 file changed, 3 insertions(+), 3 deletions(-) > > > > diff --git a/hw/acpi/core.c b/hw/acpi/core.c > > index 1e004d0078..543e4a7875 100644 > > --- a/hw/acpi/core.c > > +++ b/hw/acpi/core.c > > @@ -461,7 +461,7 @@ static const MemoryRegionOps acpi_pm_evt_ops = { > > .impl.min_access_size = 2, > > .valid.min_access_size = 1, > > .valid.max_access_size = 2, > > -.endianness = DEVICE_LITTLE_ENDIAN, > > +.endianness = DEVICE_NATIVE_ENDIAN, > > }; > > > > void acpi_pm1_evt_init(ACPIREGS *ar, acpi_update_sci_fn update_sci, > > @@ -531,7 +531,7 @@ static const MemoryRegionOps acpi_pm_tmr_ops = { > > .impl.min_access_size = 4, > > .valid.min_access_size = 1, > > .valid.max_access_size = 4, > > -.endianness = DEVICE_LITTLE_ENDIAN, > > +.endianness = DEVICE_NATIVE_ENDIAN, > > }; > > > > void acpi_pm_tmr_init(ACPIREGS *ar, acpi_update_sci_fn update_sci, > > @@ -608,7 +608,7 @@ static const MemoryRegionOps acpi_pm_cnt_ops = { > > .impl.min_access_size = 2, > > .valid.min_access_size = 1, > > .valid.max_access_size = 2, > > -.endianness = DEVICE_LITTLE_ENDIAN, > > +.endianness = DEVICE_NATIVE_ENDIAN, > > }; > > > > void acpi_pm1_cnt_init(ACPIREGS *ar, MemoryRegion *parent, > > -- > > 2.30.2
Re: [PATCH v2 07/25] python/qmp: switch qemu-ga-client to AQMP
15.12.2021 22:39, John Snow wrote: Signed-off-by: John Snow Not simple to check, how much new behavior is equal to the old one.. And impossible to check, is everything updated that should be ) --- python/qemu/qmp/qemu_ga_client.py | 22 +++--- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/python/qemu/qmp/qemu_ga_client.py b/python/qemu/qmp/qemu_ga_client.py index b3e1d98c9e..15ed430c61 100644 --- a/python/qemu/qmp/qemu_ga_client.py +++ b/python/qemu/qmp/qemu_ga_client.py [..] try: client = QemuGuestAgentClient(address) -except OSError as err: +except ConnectError as err: print(err) -if err.errno == errno.ECONNREFUSED: -print('Hint: qemu is not running?') +if isinstance(err.exc, ConnectionError): +print('(Is QEMU running?)') It at least a bit changed from checking errno to checking the class, I'd note it in commit message. And anyway commit message may be more informative. Still, I just don't care too much about testing framework. Nothing seems wrong to me, so weak: Reviewed-by: Vladimir Sementsov-Ogievskiy -- Best regards, Vladimir
Re: [RFC PATCH 0/6] Removal of Aiocontext lock and usage of subtree drains in aborted transactions
Am 15.12.2021 um 13:34 hat Hanna Reitz geschrieben: > On 14.12.21 19:10, Emanuele Giuseppe Esposito wrote: > > > > > > On 13/12/2021 15:52, Stefan Hajnoczi wrote: > > > Off-topic: I don't understand the difference between the effects of > > > bdrv_drained_begin() and bdrv_subtree_drained_begin(). Both call > > > aio_disable_external(aio_context) and aio_poll(). bdrv_drained_begin() > > > only polls parents and itself, while bdrv_subtree_drained_begin() also > > > polls children. But why does that distinction matter? I wouldn't know > > > when to use one over the other. > > > > Good point. Now I am wondering the same, so it would be great if anyone > > could clarify it. > > As far as I understand, bdrv_drained_begin() is used to drain and stop > requests on a single BDS, whereas bdrv_subtree_drained_begin() drains the > BDS and all of its children. So when you don’t care about lingering > requests in child nodes, then bdrv_drained_begin() suffices. Right. This is different in practice when a child node has multiple parents. Usually, when you want to quiesce one parent, the other parent can keep using the child without being in the way. For example, two qcow2 overlays based on a single template: vda vdb | | v v qcow2 qcow2 (vda.qcow2) (vdb.qcow2) | | +-+ +-+ | | v v qcow2 (template.qcow2) If you drain vdb.qcow2 because you want to safely modify something in its BlockDriverState, there is nothing that should stop vda.qcow2 from processing requests. If you're not sure which one to use, bdrv_drained_begin() is what you want. If you want bdrv_subtree_drained_begin(), you'll know. (It's currently only used by reopen and by drop_intermediates, which both operate on more than one node.) Kevin
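In code the distinction is simply (illustrative usage; bs is the node being quiesced):

/* Quiesce requests to bs itself (and its parents) -- enough when
 * lingering requests in bs's children don't matter: */
bdrv_drained_begin(bs);
/* ... safely modify bs here ... */
bdrv_drained_end(bs);

/* Quiesce bs and all of its children -- for operations spanning
 * several nodes, such as reopen or drop_intermediates: */
bdrv_subtree_drained_begin(bs);
/* ... */
bdrv_subtree_drained_end(bs);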
Re: [PATCH] docs: Add measurement calculation details to amd-memory-encryption.txt
On 14/12/2021 20:39, Daniel P. Berrangé wrote: > On Tue, Dec 14, 2021 at 01:59:10PM +, Dov Murik wrote: >> Add a section explaining how the Guest Owner should calculate the >> expected guest launch measurement for SEV and SEV-ES. >> >> Also update the name and link to the SEV API Spec document. >> >> Signed-off-by: Dov Murik >> Suggested-by: Daniel P. Berrangé >> --- >> docs/amd-memory-encryption.txt | 50 +++--- >> 1 file changed, 46 insertions(+), 4 deletions(-) >> >> diff --git a/docs/amd-memory-encryption.txt b/docs/amd-memory-encryption.txt >> index ffca382b5f..f97727482f 100644 >> --- a/docs/amd-memory-encryption.txt >> +++ b/docs/amd-memory-encryption.txt >> @@ -43,7 +43,7 @@ The guest policy is passed as plaintext. A hypervisor may >> choose to read it, >> but should not modify it (any modification of the policy bits will result >> in bad measurement). The guest policy is a 4-byte data structure containing >> several flags that restricts what can be done on a running SEV guest. >> -See KM Spec section 3 and 6.2 for more details. >> +See SEV API Spec [1] section 3 and 6.2 for more details. >> >> The guest policy can be provided via the 'policy' property (see below) >> >> @@ -88,7 +88,7 @@ expects. >> LAUNCH_FINISH finalizes the guest launch and destroys the cryptographic >> context. >> >> -See SEV KM API Spec [1] 'Launching a guest' usage flow (Appendix A) for the >> +See SEV API Spec [1] 'Launching a guest' usage flow (Appendix A) for the >> complete flow chart. >> >> To launch a SEV guest >> @@ -113,6 +113,45 @@ a SEV-ES guest: >> - Requires in-kernel irqchip - the burden is placed on the hypervisor to >> manage booting APs. >> >> +Calculating expected guest launch measurement >> +- >> +In order to verify the guest launch measurement, The Guest Owner must >> compute >> +it in the exact same way as it is calculated by the AMD-SP. SEV API Spec >> [1] >> +section 6.5.1 describes the AMD-SP operations: >> + >> +GCTX.LD is finalized, producing the hash digest of all plaintext data >> +imported into the guest. >> + >> +The launch measurement is calculated as: >> + >> +HMAC(0x04 || API_MAJOR || API_MINOR || BUILD || GCTX.POLICY || GCTX.LD >> || MNONCE; GCTX.TIK) >> + >> +where "||" represents concatenation. >> + >> +The values of API_MAJOR, API_MINOR, BUILD, and GCTX.POLICY can be obtained >> +from the 'query-sev' qmp command. >> + >> +The value of MNONCE is part of the response of 'query-sev-launch-measure': >> it >> +is the last 16 bytes of the base64-decoded data field (see SEV API Spec [1] >> +section 6.5.2 Table 52: LAUNCH_MEASURE Measurement Buffer). >> + >> +The value of GCTX.LD is SHA256(firmware_blob || kernel_hashes_blob || >> vmsas_blob), >> +where: >> + >> +* firmware_blob is the content of the entire firmware flash file (for >> example, >> + OVMF.fd). > > Lets add a caveat that the firmware flash should be built to be stateless > ie that it is not secure to attempt to measure a guest where the firmware > uses an NVRAM store. > * firmware_blob is the content of the entire firmware flash file (for example, OVMF.fd). Note that you must build a stateless firmware file which doesn't use an NVRAM store, because the NVRAM area is not measured, and therefore it is not secure to use a firmware which uses state from an NVRAM store. >> +* if kernel is used, and kernel-hashes=on, then kernel_hashes_blob is the >> + content of PaddedSevHashTable (including the zero padding), which itself >> + includes the hashes of kernel, initrd, and cmdline that are passed to the >> + guest. 
The PaddedSevHashTable struct is defined in target/i386/sev.c . >> +* if SEV-ES is enabled (policy & 0x4 != 0), vmsas_blob is the concatenation >> of >> + all VMSAs of the guest vcpus. Each VMSA is 4096 bytes long; its content >> is >> + defined inside Linux kernel code as struct vmcb_save_area, or in AMD APM >> + Volume 2 [2] Table B-2: VMCB Layout, State Save Area. > > Is there any practical guidance we can give apps on the way the VMSAs > can be expected to be initialized ? eg can they assume essentially > all fields in vmcb_save_area are 0 initialized except for certain > ones ? Is initialization likely to vary at all across KVM or EDK2 > vesions or something ? >From my own experience, the VMSA of vcpu0 doesn't change; it is basically what >QEMU sets up in x86_cpu_reset() (which is mostly zeros but not all). I don't know if it may change in newer QEMU (machine types?) or kvm. As for vcpu1+, in SEV-ES the CS:EIP for the APs is taken from a GUIDed table at the end of the OVMF image, and has actually changed a few months ago when the memory layout changed to support both TDX and SEV. Here are the VMSAs for my 2-vcpu SEV-ES VM: $ hd vmsa/vmsa_cpu0.bin 00 00 93 00 ff ff 00 00 00 00 00 00 00 00 00 00 || 0010 00 f0 9b 00 ff f
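Putting the calculation together, a Guest Owner could compute the expected measurement roughly as follows (sketch using OpenSSL's libcrypto; it assumes the usual little-endian packing of the 4-byte policy and 1-byte API_MAJOR/API_MINOR/BUILD fields, a 16-byte TIK taken from the launch session blob the Guest Owner generated, and a comparison against the first 32 bytes of the base64-decoded 'query-sev-launch-measure' data):

#include <stdint.h>
#include <string.h>
#include <openssl/sha.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

/*
 * measurement = HMAC-SHA256(0x04 || api_major || api_minor || build ||
 *                           policy || GCTX.LD || mnonce, key = TIK)
 * GCTX.LD     = SHA256(firmware_blob || kernel_hashes_blob || vmsas_blob)
 */
static void expected_measurement(uint8_t api_major, uint8_t api_minor,
                                 uint8_t build, uint32_t policy,
                                 const uint8_t *fw, size_t fw_len,
                                 const uint8_t *hashes, size_t hashes_len,
                                 const uint8_t *vmsas, size_t vmsas_len,
                                 const uint8_t mnonce[16],
                                 const uint8_t tik[16],
                                 uint8_t out[32])
{
    uint8_t ld[SHA256_DIGEST_LENGTH];
    uint8_t msg[1 + 1 + 1 + 1 + 4 + 32 + 16];
    unsigned int out_len = 32;
    SHA256_CTX sha;

    SHA256_Init(&sha);
    SHA256_Update(&sha, fw, fw_len);
    if (hashes_len) {                       /* only with kernel-hashes=on */
        SHA256_Update(&sha, hashes, hashes_len);
    }
    if (vmsas_len) {                        /* only for SEV-ES guests */
        SHA256_Update(&sha, vmsas, vmsas_len);
    }
    SHA256_Final(ld, &sha);

    msg[0] = 0x04;
    msg[1] = api_major;
    msg[2] = api_minor;
    msg[3] = build;
    msg[4] = policy & 0xff;                 /* 4-byte policy, little-endian */
    msg[5] = (policy >> 8) & 0xff;
    msg[6] = (policy >> 16) & 0xff;
    msg[7] = (policy >> 24) & 0xff;
    memcpy(msg + 8, ld, 32);
    memcpy(msg + 40, mnonce, 16);

    HMAC(EVP_sha256(), tik, 16, msg, sizeof(msg), out, &out_len);
}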
Re: [PATCH v4 06/14] vfio-user: find and init PCI device
On Wed, Dec 15, 2021 at 10:35:30AM -0500, Jagannathan Raman wrote: > @@ -150,6 +157,38 @@ static void vfu_object_init_ctx(VfuObject *o, Error > **errp) > +o->pci_dev = PCI_DEVICE(dev); ... > @@ -190,6 +229,8 @@ static void vfu_object_finalize(Object *obj) > > o->device = NULL; > > +o->pci_dev = NULL; We need to consider how this interacts with device_del hot unplug. o->pci_dev is a pointer and we don't hold a refcount, so I think o->pci_dev could disappear at any moment. A pair of object_ref/unref(OBJECT(o->pci_dev)) calls would not be enough because device_del will still unrealize the device that's in use by the vfio-user server. I suggest adding a check to qdev_unplug() that prevents unplug when the device is in use by the vfio-user server. That's similar to the code in that function for preventing unplug during migration. One way to do that is by adding a new API: /* * Register an Error that is raised when unplug is attempted on a * device. If another blocker has already been registered then that * Error may be raised during unplug instead. * * qdev_del_unplug_blocker() must be called to remove this blocker. */ void qdev_add_unplug_blocker(DeviceState *dev, Error *blocker); /* * Deregister an Error that was previously registered with * qdev_add_unplug_blocker(). */ void qdev_del_unplug_blocker(DeviceState *dev, Error *blocker); The vfio-user server then needs to add an Error *unplug_blocker field to VfuObject and call qdev_add/del_unplug_blocker() on the PCI device. >From a user perspective this means that device_del fails with "Device currently in use by vfio-user server '%s'". Stefan signature.asc Description: PGP signature
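A sketch of how VfuObject could use the proposed (not yet existing) API; the unplug_blocker field follows the suggestion above, everything else is illustrative:

/* In vfu_object_init_ctx(), once o->pci_dev has been resolved: */
error_setg(&o->unplug_blocker,
           "Device '%s' is in use by vfio-user server", o->device);
qdev_add_unplug_blocker(DEVICE(o->pci_dev), o->unplug_blocker);

/* In vfu_object_finalize(), or wherever the server lets go of the device: */
if (o->unplug_blocker && o->pci_dev) {
    qdev_del_unplug_blocker(DEVICE(o->pci_dev), o->unplug_blocker);
    error_free(o->unplug_blocker);
    o->unplug_blocker = NULL;
}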
Re: [RFC qemu.qmp PATCH 00/24] Python: Fork qemu.qmp Python lib into independent repo
On Wed, Dec 15, 2021 at 04:06:10PM -0500, John Snow wrote: > Hi, this series is part of an effort to publish the qemu.qmp package on > PyPI. It is the second of three series to complete this work: > > (1) Switch the new Async QMP library in to python/qemu/qmp > --> (2) Fork python/qemu/qmp out into its own repository, > with updated GitLab CI/CD targets to build packages. > (3) Update qemu.git to install qemu.qmp from PyPI, > and then delete python/qemu/qmp. > > This series is not meant to apply to qemu.git, rather -- it's the series > that performs the split and would apply to a brand new repository. > > I am submitting it to the QEMU mailing list for these reasons: > > (1) To more broadly announce my intentions, and as reference alongside > series #1 and #3 detailed above. > > (2) To ask for permission to become the maintainer of a > 'qemu-project/qemu.qmp' repository, where I would like to host this > subproject. I'd say we need 3 designated maintainers as a minimum for redundancy. > (3) To ask for review on the README.rst file which details my intended > contribution guidelines for this subproject. > > (4) To ask for review on the .gitlab-ci.d/ files and other repo-level > CI/CD ephemera, including and especially the docs-building process. I > think the generated docs are still ugly, and I'd like to upload them to > readthedocs, among other things -- hence the RFC quality of this series. > Some review/RFC notes: > > - I use jsnow/qemu.qmp as the repo name throughout the series; that will > have to be changed eventually, but for the purposes of prototyping, it > was nicer to have a fully working series. > > - I'm planning on using gitlab issues and MRs for the subproject. Great ! > - I plan to version this lib independently, starting at 0.0.1 for the > initial public release and bumping only the micro version for every > last release. I plan to bump the minor version once it hits a "beta" > state. There will be no cross-versioning against QEMU. I don't plan to > publish new releases during QEMU freezes. IMHO if we're saying that QEMU is going to use this library straight from PyPI from the start, then we're defacto considering it staable from the start too. We can't accept changes published to PyPI that are going to be incompatible with existing QEMU. If that isn't acceptable, then QEMU is going to have to be pinned to a very specific version from PyPi, and explicitly not pull the latest. > - Docs are not yet uploaded anywhere (GitLab pages, readthedocs?) Since we're already using gitlab, personally I'd just setup a 'pages' job and assign a qemu.org sub-domain to gitlab pages service. > - Tags on a commit trigger two pipelines; this causes one of the package > builds to fail as the version number will be duplicated in this > case. Not entirely sure how I want to fix this yet ... If you dont have any 'rules:' stanza gitlab creates a pipeline for any 'push' event - this means pushes of branch commits or pushes of tags. To remove the duplicates we need to filter based on certain standard variables - CI_COMMIT_BRANCH or CI_COMMIT_TAG rules: - if '$CI_PIPELINE_SOURCE != "push"' when: never - if '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH' when: never - if '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH' when: on_success - when: never will cull jobs for pushes of branch commit, leaving pipelines for tag pushes. It can get arbitrarily more complicated depending on what you need to achieve. 
Since we're going to use merge requests, we should be aiming to *NOT* run pipelines on branch commit pushes for forks. We only want pipelines attached to the merge request. You'll potentially need pipelines on pushes of tags for the post-merge publishing jobs, unless you want to do that on a nightly schedule. Regards, Daniel -- |: https://berrange.com -o-https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o-https://fstop138.berrange.com :| |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
Re: [PATCH v2 08/25] python/qmp: switch qom tools to AQMP
15.12.2021 22:39, John Snow wrote: Signed-off-by: John Snow Reviewed-by: Vladimir Sementsov-Ogievskiy -- Best regards, Vladimir
Re: [RFC qemu.qmp PATCH 03/24] Update maintainer metadata
On Wed, Dec 15, 2021 at 04:06:13PM -0500, John Snow wrote: > I'm the primary author of this particular component; update the metadata > accordingly. > > Signed-off-by: John Snow > --- > setup.cfg | 4 +++- > 1 file changed, 3 insertions(+), 1 deletion(-) > > diff --git a/setup.cfg b/setup.cfg > index bca..7cd8470 100644 > --- a/setup.cfg > +++ b/setup.cfg > @@ -1,7 +1,9 @@ > [metadata] > name = qemu.qmp > version = file:VERSION > -maintainer = QEMU Developer Team > +author = John Snow > +author_email = js...@redhat.com Isn't the authorship of this more of a collaborative effort ? IOW, shouldn't it just be "The QEMU Project" as for the maintainer. > +maintainer = QEMU Project > maintainer_email = qemu-devel@nongnu.org > url = https://www.qemu.org/ > download_url = https://www.qemu.org/download/ Regards, Daniel -- |: https://berrange.com -o-https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o-https://fstop138.berrange.com :| |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
Re: [PATCH v2 09/25] python/qmp: switch qmp-shell to AQMP
15.12.2021 22:39, John Snow wrote: We have a replacement for async QMP, but it doesn't have feature parity yet. For now, then, port the old tool onto the new backend. Signed-off-by: John Snow Reviewed-by: Vladimir Sementsov-Ogievskiy -- Best regards, Vladimir
Re: [PATCH v2 10/25] python: move qmp utilities to python/qemu/utils
15.12.2021 22:39, John Snow wrote: In order to upload a QMP package to PyPI, I want to remove any scripts that I am not 100% confident I want to support upstream, beyond our castle walls. Move most of our QMP utilities into the utils package so we can split them out from the PyPI upload. Signed-off-by: John Snow Reviewed-by: Vladimir Sementsov-Ogievskiy -- Best regards, Vladimir
[PATCH v7] isa-applesmc: provide OSK forwarding on Apple hosts
On Apple hosts we can read AppleSMC OSK key directly from host's SMC and forward this value to QEMU Guest. New 'hostosk' property is added: `-device isa-applesmc,hostosk=on` The property is set to 'on' by default for machine version > 6.2 Apple licence allows use and run up to two additional copies or instances of macOS operating system within virtual operating system environments on each Apple-branded computer that is already running the Apple Software, for purposes of: * software development * testing during software development * using macOS Server * personal, non-commercial use Guest macOS requires AppleSMC with correct OSK. The most legal way to pass it to the Guest is to forward the key from host SMC without any value exposion. Based on https://web.archive.org/web/20200103161737/osxbook.com/book/bonus/chapter7/tpmdrmmyth/ Signed-off-by: Vladislav Yaroshchuk --- hw/arm/virt.c | 1 + hw/core/machine.c | 5 ++ hw/i386/pc_piix.c | 1 + hw/i386/pc_q35.c | 1 + hw/m68k/virt.c | 1 + hw/misc/applesmc.c | 123 - hw/ppc/spapr.c | 1 + hw/s390x/s390-virtio-ccw.c | 1 + include/hw/boards.h| 3 + 9 files changed, 135 insertions(+), 2 deletions(-) diff --git a/hw/arm/virt.c b/hw/arm/virt.c index 6bce595aba..4d6b72fe6f 100644 --- a/hw/arm/virt.c +++ b/hw/arm/virt.c @@ -2858,6 +2858,7 @@ type_init(machvirt_machine_init); static void virt_machine_6_2_options(MachineClass *mc) { +compat_props_add(mc->compat_props, hw_compat_6_2, hw_compat_6_2_len); } DEFINE_VIRT_MACHINE_AS_LATEST(6, 2) diff --git a/hw/core/machine.c b/hw/core/machine.c index 53a99abc56..afa871378e 100644 --- a/hw/core/machine.c +++ b/hw/core/machine.c @@ -37,6 +37,11 @@ #include "hw/virtio/virtio.h" #include "hw/virtio/virtio-pci.h" +GlobalProperty hw_compat_6_2[] = { +{ "isa-applesmc", "hostosk", "off" } +}; +const size_t hw_compat_6_2_len = G_N_ELEMENTS(hw_compat_6_2); + GlobalProperty hw_compat_6_1[] = { { "vhost-user-vsock-device", "seqpacket", "off" }, { "nvme-ns", "shared", "off" }, diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c index 223dd3e05d..1c2ab5222e 100644 --- a/hw/i386/pc_piix.c +++ b/hw/i386/pc_piix.c @@ -420,6 +420,7 @@ static void pc_i440fx_6_2_machine_options(MachineClass *m) m->alias = "pc"; m->is_default = true; pcmc->default_cpu_version = 1; +compat_props_add(m->compat_props, hw_compat_6_2, hw_compat_6_2_len); } DEFINE_I440FX_MACHINE(v6_2, "pc-i440fx-6.2", NULL, diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c index e1e100316d..7186d736d3 100644 --- a/hw/i386/pc_q35.c +++ b/hw/i386/pc_q35.c @@ -366,6 +366,7 @@ static void pc_q35_6_2_machine_options(MachineClass *m) pc_q35_machine_options(m); m->alias = "q35"; pcmc->default_cpu_version = 1; +compat_props_add(m->compat_props, hw_compat_6_2, hw_compat_6_2_len); } DEFINE_Q35_MACHINE(v6_2, "pc-q35-6.2", NULL, diff --git a/hw/m68k/virt.c b/hw/m68k/virt.c index 0efa4a45c7..10281a81c5 100644 --- a/hw/m68k/virt.c +++ b/hw/m68k/virt.c @@ -306,6 +306,7 @@ type_init(virt_machine_register_types) static void virt_machine_6_2_options(MachineClass *mc) { +compat_props_add(mc->compat_props, hw_compat_6_2, hw_compat_6_2_len); } DEFINE_VIRT_MACHINE(6, 2, true) diff --git a/hw/misc/applesmc.c b/hw/misc/applesmc.c index 1b9acaf1d3..72755db380 100644 --- a/hw/misc/applesmc.c +++ b/hw/misc/applesmc.c @@ -37,6 +37,11 @@ #include "qemu/module.h" #include "qemu/timer.h" #include "qom/object.h" +#include "qapi/error.h" + +#if defined(__APPLE__) && defined(__MACH__) +#include +#endif /* #define DEBUG_SMC */ @@ -109,6 +114,7 @@ struct AppleSMCState { uint8_t data_pos; uint8_t data[255]; char 
*osk; +bool hostosk; QLIST_HEAD(, AppleSMCData) data_def; }; @@ -312,6 +318,101 @@ static const MemoryRegionOps applesmc_err_io_ops = { }, }; +#if defined(__APPLE__) && defined(__MACH__) +/* + * Based on + * https://web.archive.org/web/20200103161737/osxbook.com/book/bonus/chapter7/tpmdrmmyth/ + */ +enum { +SMC_HANDLE_EVENT = 2, +SMC_READ_KEY = 5 +}; + +struct AppleSMCParam { +uint32_t key; +uint8_t pad0[22]; +IOByteCount data_size; +uint8_t pad1[10]; +uint8_t command; +uint32_t pad2; +uint8_t bytes[32]; +}; + +static bool applesmc_read_host_osk(char *host_osk, Error **errp) +{ +assert(host_osk != NULL); + +io_service_t hostsmc_service = IO_OBJECT_NULL; +io_connect_t hostsmc_connect = IO_OBJECT_NULL; +size_t smc_param_size = sizeof(struct AppleSMCParam); +IOReturn status = kIOReturnError; +int i; + +struct AppleSMCParam smc_param[2] = { + { + .key = ('OSK0'), + .data_size = sizeof(smc_param[0].bytes), + .command = SMC_READ_KEY, + }, { + .key = ('OSK1'), + .data_size = sizeof(smc_param[0
Re: [RFC qemu.qmp PATCH 17/24] Makefile: add build and publish targets
On Wed, Dec 15, 2021 at 04:06:27PM -0500, John Snow wrote: > Signed-off-by: John Snow > --- > Makefile | 32 > 1 file changed, 32 insertions(+) > > diff --git a/Makefile b/Makefile > index 97d737a..81bfca8 100644 > --- a/Makefile > +++ b/Makefile > @@ -110,3 +110,35 @@ distclean: clean > rm -f .coverage .coverage.* > rm -rf htmlcov/ > rm -rf test-results/ > + > +.PHONY: pristine > +pristine: > + @git diff-files --quiet --ignore-submodules -- || \ > + (echo "You have unstaged changes."; exit 1) > + @git diff-index --cached --quiet HEAD --ignore-submodules -- || \ > + (echo "Your index contains uncommitted changes."; exit 1) > + @[ -z "$(shell git ls-files -o)" ] || \ > + (echo "You have untracked files: $(shell git ls-files -o)"; > exit 1) > + > +dist: setup.cfg setup.py Makefile README.rst > + python3 -m build > + @touch dist > + > +.PHONY: pre-publish > +pre-publish: pristine dist > + @git describe --exact-match 2>/dev/null || \ > + (echo -e "\033[0;31mThere is no annotated tag for this > commit.\033[0m"; exit 1) > + python3 -m twine check --strict dist/* > + git push -v --atomic --follow-tags --dry-run > + > +.PHONY: publish > +publish: pre-publish > + # Set the username via TWINE_USERNAME. > + # Set the password via TWINE_PASSWORD. > + # Set the pkg repository via TWINE_REPOSITORY. > + python3 -m twine upload --verbose dist/* > + git push -v --atomic --follow-tags > + > +.PHONY: publish-test > +publish-test: pre-publish > + python3 -m twine upload --verbose -r testpypi dist/* It doesn't feel very pythonic to have a makefile in the project. If we want some helpers for publishing releases, I would have expected to see a python script eg scripts/publish.py Regards, Daniel -- |: https://berrange.com -o-https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o-https://fstop138.berrange.com :| |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
Re: [PATCH v2 11/25] python: move qmp-shell under the AQMP package
15.12.2021 22:39, John Snow wrote: Signed-off-by: John Snow Reviewed-by: Vladimir Sementsov-Ogievskiy -- Best regards, Vladimir
Re: [PATCH v2 00/25] Python: delete synchronous qemu.qmp package
On Wed, Dec 15, 2021 at 02:39:14PM -0500, John Snow wrote: > GitLab: https://gitlab.com/jsnow/qemu/-/commits/python-qmp-legacy-switch > CI: https://gitlab.com/jsnow/qemu/-/pipelines/430491195 > > Hi, this series is part of an effort to publish the qemu.qmp package on > PyPI. It is the first of three series to complete this work: > > --> (1) Switch the new Async QMP library in to python/qemu/qmp > (2) Fork python/qemu/qmp out into its own repository, > with updated GitLab CI/CD targets to build packages. > (3) Update qemu.git to install qemu.qmp from PyPI, > and then delete python/qemu/qmp. What timeframe are you suggesting step (3) for ? In the series for (2) you're calling it version 0.0.1 indicating it is liable to have API incompatible changes. For step (3), either we're going to have to fetch a precise version number to avoid risk of API breakage, or we're going to have to call it stable in (2) and commit to the API. Regards, Daniel -- |: https://berrange.com -o-https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o-https://fstop138.berrange.com :| |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
Re: [PATCH v2 12/25] python/machine: permanently switch to AQMP
15.12.2021 22:39, John Snow wrote: Remove the QEMU_PYTHON_LEGACY_QMP environment variable, making the switch permanent. Update Exceptions and import paths as necessary. Signed-off-by: John Snow --- python/qemu/machine/machine.py | 18 +++--- python/qemu/machine/qtest.py | 2 +- 2 files changed, 8 insertions(+), 12 deletions(-) diff --git a/python/qemu/machine/machine.py b/python/qemu/machine/machine.py index 67ab06ca2b..21fb4a4f30 100644 --- a/python/qemu/machine/machine.py +++ b/python/qemu/machine/machine.py @@ -40,21 +40,16 @@ TypeVar, ) -from qemu.qmp import ( # pylint: disable=import-error +from qemu.aqmp import SocketAddrT +from qemu.aqmp.legacy import ( +QEMUMonitorProtocol, QMPMessage, QMPReturnValue, -SocketAddrT, ) from . import console_socket -if os.environ.get('QEMU_PYTHON_LEGACY_QMP'): -from qemu.qmp import QEMUMonitorProtocol -else: -from qemu.aqmp.legacy import QEMUMonitorProtocol - - LOG = logging.getLogger(__name__) @@ -710,8 +705,9 @@ def events_wait(self, :param timeout: Optional timeout, in seconds. See QEMUMonitorProtocol.pull_event. -:raise QMPTimeoutError: If timeout was non-zero and no matching events -were found. +:raise asyncio.TimeoutError: +If timeout was non-zero and no matching events were found. + :return: A QMP event matching the filter criteria. If timeout was 0 and no event matched, None. """ @@ -734,7 +730,7 @@ def _match(event: QMPMessage) -> bool: event = self._qmp.pull_event(wait=timeout) if event is None: # NB: None is only returned when timeout is false-ish. -# Timeouts raise QMPTimeoutError instead! +# Timeouts raise asyncio.TimeoutError instead! break if _match(event): return event diff --git a/python/qemu/machine/qtest.py b/python/qemu/machine/qtest.py index f2f9aaa5e5..817c8a5425 100644 --- a/python/qemu/machine/qtest.py +++ b/python/qemu/machine/qtest.py @@ -26,7 +26,7 @@ TextIO, ) -from qemu.qmp import SocketAddrT # pylint: disable=import-error +from qemu.aqmp.protocol import SocketAddrT You can also import it simply from qemu.aqmp from .machine import QEMUMachine Reviewed-by: Vladimir Sementsov-Ogievskiy -- Best regards, Vladimir
Re: [PATCH v11 08/10] ACPI ERST: qtest for ERST
On Wed, Dec 15, 2021 at 9:08 PM Eric DeVolder wrote: > > This change provides a qtest that locates and then does a simple > interrogation of the ERST feature within the guest. Other than couple of minor comments, > > Signed-off-by: Eric DeVolder Reviewed-by: Ani Sinha > --- > tests/qtest/erst-test.c | 167 > > tests/qtest/meson.build | 2 + > 2 files changed, 169 insertions(+) > create mode 100644 tests/qtest/erst-test.c > > diff --git a/tests/qtest/erst-test.c b/tests/qtest/erst-test.c > new file mode 100644 > index 000..370c119 > --- /dev/null > +++ b/tests/qtest/erst-test.c > @@ -0,0 +1,167 @@ > +/* > + * QTest testcase for acpi-erst > + * > + * Copyright (c) 2021 Oracle > + * > + * This work is licensed under the terms of the GNU GPL, version 2 or later. > + * See the COPYING file in the top-level directory. > + */ > + > +#include "qemu/osdep.h" > +#include > +#include "libqos/libqos-pc.h" > +#include "libqos/libqtest.h" > +#include "qemu-common.h" > + > +static void save_fn(QPCIDevice *dev, int devfn, void *data) > +{ > +QPCIDevice **pdev = (QPCIDevice **) data; > + > +*pdev = dev; > +} > + > +static QPCIDevice *get_device(QPCIBus *pcibus) get_erst_device? > +{ > +QPCIDevice *dev; > + > +dev = NULL; > +qpci_device_foreach(pcibus, 0x1b36, 0x0012, save_fn, &dev); Maybe #define the vendor and device ID here for clarity. > +g_assert(dev != NULL); > + > +return dev; > +} > + > +typedef struct _ERSTState { > +QOSState *qs; > +QPCIBar reg_bar, mem_bar; > +uint64_t reg_barsize, mem_barsize; > +QPCIDevice *dev; > +} ERSTState; > + > +#define ACTION 0 > +#define VALUE 8 > + > +static const char *reg2str(unsigned reg) > +{ > +switch (reg) { > +case 0: > +return "ACTION"; > +case 8: > +return "VALUE"; > +default: > +return NULL; > +} > +} > + > +static inline uint32_t in_reg32(ERSTState *s, unsigned reg) > +{ > +const char *name = reg2str(reg); > +uint32_t res; > + > +res = qpci_io_readl(s->dev, s->reg_bar, reg); > +g_test_message("*%s -> %08x", name, res); > + > +return res; > +} > + > +static inline uint64_t in_reg64(ERSTState *s, unsigned reg) > +{ > +const char *name = reg2str(reg); > +uint64_t res; > + > +res = qpci_io_readq(s->dev, s->reg_bar, reg); > +g_test_message("*%s -> %016llx", name, (unsigned long long)res); > + > +return res; > +} > + > +static inline void out_reg32(ERSTState *s, unsigned reg, uint32_t v) > +{ > +const char *name = reg2str(reg); > + > +g_test_message("%08x -> *%s", v, name); > +qpci_io_writel(s->dev, s->reg_bar, reg, v); > +} > + > +static inline void out_reg64(ERSTState *s, unsigned reg, uint64_t v) > +{ > +const char *name = reg2str(reg); > + > +g_test_message("%016llx -> *%s", (unsigned long long)v, name); > +qpci_io_writeq(s->dev, s->reg_bar, reg, v); > +} > + > +static void cleanup_vm(ERSTState *s) > +{ > +g_free(s->dev); > +qtest_shutdown(s->qs); > +} > + > +static void setup_vm_cmd(ERSTState *s, const char *cmd) > +{ > +const char *arch = qtest_get_arch(); > + > +if (strcmp(arch, "i386") == 0 || strcmp(arch, "x86_64") == 0) { > +s->qs = qtest_pc_boot(cmd); > +} else { > +g_printerr("erst-test tests are only available on x86\n"); > +exit(EXIT_FAILURE); > +} > +s->dev = get_device(s->qs->pcibus); > + > +s->reg_bar = qpci_iomap(s->dev, 0, &s->reg_barsize); > +g_assert_cmpuint(s->reg_barsize, ==, 16); > + > +s->mem_bar = qpci_iomap(s->dev, 1, &s->mem_barsize); > +g_assert_cmpuint(s->mem_barsize, ==, 0x2000); > + > +qpci_device_enable(s->dev); > +} > + > +static void test_acpi_erst_basic(void) > +{ > +ERSTState state; > +uint64_t log_address_range; > +uint64_t 
log_address_length; > +uint32_t log_address_attr; > + > +setup_vm_cmd(&state, > +"-object memory-backend-file," > +"mem-path=acpi-erst.XX," > +"size=64K," > +"share=on," > +"id=nvram " > +"-device acpi-erst," > +"memdev=nvram"); > + > +out_reg32(&state, ACTION, 0xD); > +log_address_range = in_reg64(&state, VALUE); > +out_reg32(&state, ACTION, 0xE); > +log_address_length = in_reg64(&state, VALUE); > +out_reg32(&state, ACTION, 0xF); > +log_address_attr = in_reg32(&state, VALUE); > + > +/* Check log_address_range is not 0, ~0 or base */ > +g_assert_cmpuint(log_address_range, !=, 0ULL); > +g_assert_cmpuint(log_address_range, !=, ~0ULL); > +g_assert_cmpuint(log_address_range, !=, state.reg_bar.addr); > +g_assert_cmpuint(log_address_range, ==, state.mem_bar.addr); > + > +/* Check log_address_length is bar1_size */ > +g_assert_cmpuint(log_address_length, ==, state.mem_barsize); > + > +/* Check log_address_attr is 0 */ > +g_assert_cmpuint(log_address_attr, ==, 0); > + > +cleanup_vm
Re: [PATCH v2 13/25] scripts/cpu-x86-uarch-abi: fix CLI parsing
15.12.2021 22:39, John Snow wrote: Signed-off-by: John Snow Reviewed-by: Vladimir Sementsov-Ogievskiy -- Best regards, Vladimir
Re: [PATCH v2 14/25] scripts/cpu-x86-uarch-abi: switch to AQMP
15.12.2021 22:39, John Snow wrote: Signed-off-by: John Snow Reviewed-by: Vladimir Sementsov-Ogievskiy -- Best regards, Vladimir
Re: [PATCH v2 15/25] scripts/render-block-graph: switch to AQMP
15.12.2021 22:39, John Snow wrote: Creating an instance of qemu.aqmp.ExecuteError is too involved here, so just drop the specificity down to a generic AQMPError. s/AQMPError/QMPError/ ? Signed-off-by: John Snow Reviewed-by: Vladimir Sementsov-Ogievskiy --- scripts/render_block_graph.py | 8 +++- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/scripts/render_block_graph.py b/scripts/render_block_graph.py index da6acf050d..97778927f3 100755 --- a/scripts/render_block_graph.py +++ b/scripts/render_block_graph.py @@ -25,10 +25,8 @@ from graphviz import Digraph sys.path.append(os.path.join(os.path.dirname(__file__), '..', 'python')) -from qemu.qmp import ( -QEMUMonitorProtocol, -QMPResponseError, -) +from qemu.aqmp import QMPError +from qemu.aqmp.legacy import QEMUMonitorProtocol def perm(arr): @@ -105,7 +103,7 @@ def command(self, cmd): reply = json.loads(subprocess.check_output(ar)) if 'error' in reply: -raise QMPResponseError(reply) +raise QMPError(reply) return reply['return'] -- Best regards, Vladimir
Re: [PATCH v2 16/25] scripts/bench-block-job: switch to AQMP
15.12.2021 22:39, John Snow wrote: For this commit, we only need to remove accommodations for the synchronous QMP library. Signed-off-by: John Snow Reviewed-by: Vladimir Sementsov-Ogievskiy -- Best regards, Vladimir
Re: [PATCH v2 17/25] iotests/mirror-top-perms: switch to AQMP
15.12.2021 22:39, John Snow wrote: Signed-off-by: John Snow Reviewed-by: Vladimir Sementsov-Ogievskiy --- Note: I still need to adjust the logging. The problem now is that the logging messages include the PID of the test process, so they need to be filtered out. I'll investigate that for a follow-up, or for v2. I could just add yet another filtering function somewhere, but I think it's getting out of hand with how many filters and loggers there are, so I want to give it a slightly more serious treatment instead of a hackjob. Signed-off-by: John Snow Hmm, something is wrong with your scripts or git-notes practices ) --- tests/qemu-iotests/tests/mirror-top-perms | 7 +++ 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/tests/qemu-iotests/tests/mirror-top-perms b/tests/qemu-iotests/tests/mirror-top-perms index 0a51a613f3..f394931a00 100755 --- a/tests/qemu-iotests/tests/mirror-top-perms +++ b/tests/qemu-iotests/tests/mirror-top-perms @@ -23,7 +23,6 @@ import os from qemu.aqmp import ConnectError from qemu.machine import machine -from qemu.qmp import QMPConnectError import iotests from iotests import change_log_level, qemu_img @@ -101,13 +100,13 @@ class TestMirrorTopPerms(iotests.QMPTestCase): self.vm_b.add_device('virtio-blk,drive=drive0,share-rw=on') try: # Silence AQMP errors temporarily. -# TODO: Remove this and just allow the errors to be logged when -# AQMP fully replaces QMP. +# TODO: Remove change_log_level and allow the errors to be logged. +# This necessitates a PID filter on *all* logging output. with change_log_level('qemu.aqmp'): self.vm_b.launch() print('ERROR: VM B launched successfully, ' 'this should not have happened') -except (QMPConnectError, ConnectError): +except ConnectError: assert 'Is another process using the image' in self.vm_b.get_log() result = self.vm.qmp('block-job-cancel', -- Best regards, Vladimir
Re: [PATCH v2 18/25] iotests: switch to AQMP
15.12.2021 22:39, John Snow wrote: Simply import the type defition from the new location. Signed-off-by: John Snow Reviewed-by: Vladimir Sementsov-Ogievskiy --- tests/qemu-iotests/iotests.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py index 83bfedb902..cb21aebe36 100644 --- a/tests/qemu-iotests/iotests.py +++ b/tests/qemu-iotests/iotests.py @@ -37,7 +37,7 @@ from contextlib import contextmanager from qemu.machine import qtest -from qemu.qmp import QMPMessage +from qemu.aqmp.legacy import QMPMessage # Use this logger for logging messages directly from the iotests module logger = logging.getLogger('qemu.iotests') -- Best regards, Vladimir
Re: [PATCH v2 19/25] python: temporarily silence pylint duplicate-code warnings
15.12.2021 22:39, John Snow wrote: The next several commits copy some code from qemu.qmp to qemu.aqmp, then delete qemu.qmp. In the interim, to prevent test failures, the duplicate code detection needs to be silenced to prevent bisect problems with CI testing. Signed-off-by: John Snow Reviewed-by: Vladimir Sementsov-Ogievskiy -- Best regards, Vladimir
Re: [PATCH v2 20/25] python/aqmp: take QMPBadPortError and parse_address from qemu.qmp
15.12.2021 22:39, John Snow wrote: Shift these definitions over from the qmp package to the async qmp package. Signed-off-by: John Snow Reviewed-by: Vladimir Sementsov-Ogievskiy -- Best regards, Vladimir
Re: [PATCH v2 21/25] python/aqmp: fully separate from qmp.QEMUMonitorProtocol
15.12.2021 22:39, John Snow wrote: After this patch, qemu.aqmp.legacy.QEMUMonitorProtocol no longer inherits from qemu.qmp.QEMUMonitorProtocol. To do this, several inherited methods need to be explicitly re-defined. Signed-off-by: John Snow Reviewed-by: Vladimir Sementsov-Ogievskiy -- Best regards, Vladimir
Re: [PATCH v4 07/14] vfio-user: run vfio-user context
On Wed, Dec 15, 2021 at 10:35:31AM -0500, Jagannathan Raman wrote: > @@ -114,6 +118,62 @@ static void vfu_object_set_device(Object *obj, const > char *str, Error **errp) > vfu_object_init_ctx(o, errp); > } > > +static void vfu_object_ctx_run(void *opaque) > +{ > +VfuObject *o = opaque; > +int ret = -1; > + > +while (ret != 0) { > +ret = vfu_run_ctx(o->vfu_ctx); > +if (ret < 0) { > +if (errno == EINTR) { > +continue; > +} else if (errno == ENOTCONN) { > +qemu_set_fd_handler(o->vfu_poll_fd, NULL, NULL, NULL); > +o->vfu_poll_fd = -1; > +object_unparent(OBJECT(o)); > +break; If nothing else logs a message then I think that should be done here so users know why their vfio-user server object disappeared. > +} else { > +error_setg(&error_abort, "vfu: Failed to run device %s - %s", > + o->device, strerror(errno)); error_abort is equivalent to assuming !o->daemon. In the case where the user doesn't want to automatically shut down the process we need to log a message without aborting. > + break; Indentation is off. > +} > +} > +} > +} > + > +static void vfu_object_attach_ctx(void *opaque) > +{ > +VfuObject *o = opaque; > +GPollFD pfds[1]; > +int ret; > + > +qemu_set_fd_handler(o->vfu_poll_fd, NULL, NULL, NULL); > + > +pfds[0].fd = o->vfu_poll_fd; > +pfds[0].events = G_IO_IN | G_IO_HUP | G_IO_ERR; > + > +retry_attach: > +ret = vfu_attach_ctx(o->vfu_ctx); > +if (ret < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) { > +qemu_poll_ns(pfds, 1, 500 * (int64_t)SCALE_MS); > +goto retry_attach; This can block the thread indefinitely. Other events like monitor commands are not handled in this loop. Please make this asynchronous (set an fd handler and return from this function so we can try again later). The vfu_attach_ctx() implementation synchronously negotiates the vfio-user connection :(. That's a shame because even if accept(2) is handled asynchronously, the negotiation can still block. It would be cleanest to have a fully async libvfio-user's vfu_attach_ctx() API to avoid blocking. Is that possible? If libvfio-user can't make vfu_attach_ctx() fully async then it may be possible to spawn a thread just for the blocking vfu_attach_ctx() call and report the result back to the event loop (e.g. using EventNotifier). That adds a bunch of code to work around a blocking API though, so I guess we can leave the blocking part if necessary. At the very minimum, please make EAGAIN/EWOULDBLOCK async as mentioned above and add a comment explaining the situation with the partially-async vfu_attach_ctx() API so it's clear that this can block (that way it's clear that you're aware of the issue and this isn't by accident). > +} else if (ret < 0) { > +error_setg(&error_abort, > + "vfu: Failed to attach device %s to context - %s", > + o->device, strerror(errno)); error_abort assumes !o->daemon. Please handle the o->daemon == true case by logging an error without aborting. > +return; > +} > + > +o->vfu_poll_fd = vfu_get_poll_fd(o->vfu_ctx); > +if (o->vfu_poll_fd < 0) { > +error_setg(&error_abort, "vfu: Failed to get poll fd %s", o->device); Same here. > @@ -208,6 +284,8 @@ static void vfu_object_init(Object *obj) > TYPE_VFU_OBJECT, TYPE_REMOTE_MACHINE); > return; > } > + > +o->vfu_poll_fd = -1; > } This must call qemu_set_fd_handler(o->vfu_poll_fd, NULL, NULL, NULL) when o->vfu_poll_fd != -1 to avoid leaving a dangling fd handler callback registered. signature.asc Description: PGP signature
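For the EAGAIN/EWOULDBLOCK case, the suggested asynchronous variant amounts to re-arming the fd handler and returning to the event loop instead of polling (sketch only; error handling and the switch to vfu_get_poll_fd() stay as in the patch, and the blocking negotiation inside vfu_attach_ctx() remains a separate issue):

static void vfu_object_attach_ctx(void *opaque)
{
    VfuObject *o = opaque;
    int ret;

    qemu_set_fd_handler(o->vfu_poll_fd, NULL, NULL, NULL);

    ret = vfu_attach_ctx(o->vfu_ctx);
    if (ret < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        /* No client yet: wait for the listening fd to become readable
         * again rather than blocking the event loop here. */
        qemu_set_fd_handler(o->vfu_poll_fd, vfu_object_attach_ctx, NULL, o);
        return;
    }

    /* ... handle errors and switch to vfu_get_poll_fd() as before ... */
}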
Re: [PATCH v2 22/25] python/aqmp: copy qmp docstrings to qemu.aqmp.legacy
15.12.2021 22:39, John Snow wrote: Copy the docstrings out of qemu.qmp, adjusting them as necessary to more accurately reflect the current state of this class. Signed-off-by: John Snow Reviewed-by: Vladimir Sementsov-Ogievskiy -- Best regards, Vladimir
Re: [PULL 31/33] tests/acpi: add test case for VIOT
On Thu, 16 Dec 2021 at 09:58, Jean-Philippe Brucker wrote: > > On Wed, Dec 15, 2021 at 04:59:10PM -0800, Richard Henderson wrote: > > On 12/15/21 2:40 AM, Peter Maydell wrote: > > > From: Jean-Philippe Brucker > > > > > > Add two test cases for VIOT, one on the q35 machine and the other on > > > virt. To test complex topologies the q35 test has two PCIe buses that > > > bypass the IOMMU (and are therefore not described by VIOT), and two > > > buses that are translated by virtio-iommu. > > > > > > Reviewed-by: Eric Auger > > > Reviewed-by: Igor Mammedov > > > Signed-off-by: Jean-Philippe Brucker > > > Message-id: 20211210170415.583179-7-jean-phili...@linaro.org > > > Signed-off-by: Peter Maydell > > > --- > > > tests/qtest/bios-tables-test.c | 38 ++ > > > 1 file changed, 38 insertions(+) > > > > I should have been more careful while applying. The aarch64 host failure > > for this is not transient as I first assumed: > > > > PASS 5 qtest-aarch64/bios-tables-test /aarch64/acpi/virt/oem-fields > > qemu-system-aarch64: kvm_init_vcpu: kvm_arch_init_vcpu failed (0): Invalid > > argument > > Broken pipe > > ERROR qtest-aarch64/bios-tables-test - too few tests run (expected 6, got 5) > > make: *** [Makefile.mtest:312: run-test-37] Error 1 > > I'm guessing adding "tcg_only = true", like all other virt machine tests > in this file, should work around this, but I don't really understand the > problem because I can't reproduce it on my aarch64 host (I'm running > "configure --target-list=aarch64-softmmu" followed by "make -j10 > check-qtest V=1" in a loop) What host are you testing on? The test case explicitly asks for "-cpu cortex-a57", so it is only going to work with KVM on hosts with a Cortex-A57. Richard: given I'm off work for Christmas now, if Jean-Philippe's suggested fix fixes this are you OK with just applying it directly to un-break the CI ? thanks -- PMM
Re: [PATCH v2 23/25] python: remove the old QMP package
15.12.2021 22:39, John Snow wrote: Thank you for your service! Signed-off-by: John Snow Reviewed-by: Vladimir Sementsov-Ogievskiy -- Best regards, Vladimir