Hi Eduardo
Could you review this patch?
Tao Xu
On 3/24/2020 1:10 PM, Xu, Tao3 wrote:
Add which features are added or removed in this version.
Signed-off-by: Tao Xu
---
The output is as follows:
qemu-system-x86_64 -cpu help | grep "\["
x86 Cascadelake-Server-v2 Intel Xeon Processor (Cascade
On 5/20/20 7:30 AM, Cornelia Huck wrote:
> On Fri, 15 May 2020 18:20:32 -0400
> Collin Walling wrote:
>
>> DIAGNOSE 0x318 (diag 318) allows the storage of diagnostic data that
>> is collected by the firmware in the case of hardware/firmware service
>> events.
>>
>> The instruction is invoked in t
On 14/05/2020 14.37, Janosch Frank wrote:
> Let's make it a bit more clear that we check the full 64 bits to fit
> into the 32 we return.
>
> Signed-off-by: Janosch Frank
> Suggested-by: David Hildenbrand
> Reviewed-by: David Hildenbrand
> ---
> pc-bios/s390-ccw/helper.h | 2 +-
> 1 file chang
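Not the actual helper.h change, but the kind of check the commit message describes can be sketched as follows (function names are hypothetical):

```c
#include <stdbool.h>
#include <stdint.h>

/* Return true if the full 64-bit value can be returned in 32 bits
 * without truncation; hypothetical stand-in for the helper.h check. */
static inline bool fits_in_32(uint64_t v)
{
    return v <= UINT32_MAX;
}

/* Truncate only after the caller has verified the value fits. */
static inline uint32_t as_u32(uint64_t v)
{
    return (uint32_t)v;
}
```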
On 14/05/2020 14.37, Janosch Frank wrote:
> 0x00 looks odd, time to replace it with 0 or 0x0 (for pointers).
>
> Signed-off-by: Janosch Frank
> ---
> pc-bios/s390-ccw/dasd-ipl.c | 14 +++---
> 1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/pc-bios/s390-ccw/dasd-ipl.c b
On 14/05/2020 14.37, Janosch Frank wrote:
> Why should we do conversion of a ebcdic value if we have a handy table
> where we coul look up the ascii value instead?
s/coul/could/
> Signed-off-by: Janosch Frank
> Reviewed-by: David Hildenbrand
> ---
> pc-bios/s390-ccw/bootmap.c | 4 +---
> 1 fil
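The table-lookup approach the patch describes, sketched with a toy table (only a handful of entries filled in; the real s390-ccw table covers all 256 values):

```c
#include <stdint.h>

/* Tiny illustrative EBCDIC->ASCII lookup table; only a few digits
 * and letters are mapped here for demonstration purposes. */
static const uint8_t ebc2asc[256] = {
    [0xF0] = '0', [0xF1] = '1', [0xF2] = '2', [0xF3] = '3',
    [0xC1] = 'A', [0xC2] = 'B', [0xC3] = 'C',
};

/* Look up the ASCII value directly instead of converting. */
static inline uint8_t ebcdic_to_ascii(uint8_t e)
{
    return ebc2asc[e]; /* 0 for entries this sketch leaves unmapped */
}
```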
On 14/05/2020 14.37, Janosch Frank wrote:
> panic() was defined for the ccw and net bios, i.e. twice, so it's
> cleaner to rather put it into the header.
>
> Also let's add an infinite loop into the assembly of disabled_wait() so
> the caller doesn't need to take care of it.
>
> Signed-off-by: Ja
On 14/05/2020 14.37, Janosch Frank wrote:
> Let's move some of the PSW mask defines into s390-arch.h and use them
> in jump2ipl.c
>
> Signed-off-by: Janosch Frank
> Reviewed-by: David Hildenbrand
> ---
> pc-bios/s390-ccw/jump2ipl.c | 10 --
> pc-bios/s390-ccw/s390-arch.h | 2 ++
> 2 f
On 21/05/2020 07.44, Thomas Huth wrote:
> On 14/05/2020 14.37, Janosch Frank wrote:
>> ZMODE has a lot of ambiguity with the ESAME architecture mode, but is
>> actually 64 bit addressing.
>>
>> Signed-off-by: Janosch Frank
>> Reviewed-by: Pierre Morel
>> Reviewed-by: David Hildenbrand
>> ---
>>
On 14/05/2020 14.37, Janosch Frank wrote:
> ZMODE has a lot of ambiguity with the ESAME architecture mode, but is
> actually 64 bit addressing.
>
> Signed-off-by: Janosch Frank
> Reviewed-by: Pierre Morel
> Reviewed-by: David Hildenbrand
> ---
> pc-bios/s390-ccw/dasd-ipl.c | 3 +--
> pc-bios/
On Wed, May 20, 2020 at 10:46:12AM -0600, Alex Williamson wrote:
> On Wed, 20 May 2020 19:10:07 +0530
> Kirti Wankhede wrote:
>
> > On 5/20/2020 8:25 AM, Yan Zhao wrote:
> > > On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:
> > >> Hi folks,
> > >>
> > >> My impression is that w
On Thu, May 21, 2020 at 01:34:37AM +0200, Greg Kurz wrote:
> On Mon, 18 May 2020 16:44:17 -0500
> Reza Arbab wrote:
>
> > Make the number of NUMA associativity reference points a
> > machine-specific value, using the currently assumed default (two
> > reference points). This preps the next patch
On Thu, May 21, 2020 at 01:36:16AM +0200, Greg Kurz wrote:
> On Mon, 18 May 2020 16:44:18 -0500
> Reza Arbab wrote:
>
> > NUMA nodes corresponding to GPU memory currently have the same
> > affinity/distance as normal memory nodes. Add a third NUMA associativity
> > reference point enabling us to
When the VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS protocol feature is
enabled, qemu will transmit memory regions to a backend individually
using the new message
VHOST_USER_ADD_MEM_REG. With this change vhost-user backends built with
libvhost-user can now map in new memory regions when VHOST_USER_ADD_MEM_REG
messag
Historically, VMs with vhost-user devices could hot-add memory a maximum
of 8 times. Now that the VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS
protocol feature has been added, VMs with vhost-user backends which
support this new feature can support a configurable number of ram slots
up to the maximum s
The VHOST_USER_GET_MAX_MEM_SLOTS message allows a vhost-user backend to
specify a maximum number of ram slots it is willing to support. This
change adds support for libvhost-user to process this message. For now
the backend will reply with 8 as the maximum number of regions
supported.
libvhost-use
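A minimal sketch of what such a reply handler might compute, assuming a hypothetical helper name and the 8-region limit mentioned above:

```c
#include <stdint.h>

/* The 8-region ceiling libvhost-user supports today (value taken
 * from the description above; the macro name is illustrative). */
#define VHOST_MEMORY_MAX_NREGIONS 8

/* Hypothetical sketch of answering VHOST_USER_GET_MAX_MEM_SLOTS:
 * report the slots the backend can handle, never more than the
 * library's own limit. */
static uint64_t vu_get_max_mem_slots(uint64_t backend_limit)
{
    return backend_limit < VHOST_MEMORY_MAX_NREGIONS
               ? backend_limit
               : VHOST_MEMORY_MAX_NREGIONS;
}
```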
Historically, sending all memory regions to vhost-user backends in a
single message imposed a limitation on the number of times memory
could be hot-added to a VM with a vhost-user device. Now that backends
which support the VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS send memory
regions individually, we
When the VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS protocol feature is
enabled, on memory hot-unplug qemu will transmit memory regions to
remove individually using the new VHOST_USER_REM_MEM_REG message.
With this change, vhost-user backends built with libvhost-user
can now unmap individual
In libvhost-user, the incoming postcopy migration path for setting the
backend's memory tables has become convoluted. In particular, the
logic which starts generating faults once the final ACK has been
received from qemu can be moved to a separate function. This
simplifies the code substantially.
In QEMU today, a VM with a vhost-user device can hot add memory a
maximum of 8 times. See these threads, among others:
[1] https://lists.gnu.org/archive/html/qemu-devel/2019-07/msg01046.html
https://lists.gnu.org/archive/html/qemu-devel/2019-07/msg01236.html
[2] https://lists.gnu.org/archive/
When setting vhost-user memory tables, memory region descriptors must be
copied from the vhost_dev struct to the vhost-user message. To avoid
duplicating code in setting the memory tables, we should use a helper to
populate this field. This change adds this helper.
Signed-off-by: Raphael Norwitz
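A sketch of what such a helper could look like; the struct layouts and names here are illustrative, not QEMU's actual definitions:

```c
#include <stdint.h>

/* Hypothetical descriptor layout mirroring what a vhost-user memory
 * region message carries; field names are illustrative. */
typedef struct {
    uint64_t guest_phys_addr;
    uint64_t memory_size;
    uint64_t userspace_addr;
    uint64_t mmap_offset;
} VhostUserMemRegion;

/* Stand-in for the vhost_dev-side record of one memory region. */
typedef struct {
    uint64_t gpa, size, uaddr, offset;
} DevRegion;

/* The helper the commit message describes: fill one message region
 * from the device-side record instead of open-coding the copies at
 * every call site. */
static void fill_mem_region(VhostUserMemRegion *dst, const DevRegion *src)
{
    dst->guest_phys_addr = src->gpa;
    dst->memory_size     = src->size;
    dst->userspace_addr  = src->uaddr;
    dst->mmap_offset     = src->offset;
}
```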
This change introduces a new feature to the vhost-user protocol allowing
a backend device to specify the maximum number of ram slots it supports.
At this point, the value returned by the backend will be capped at the
maximum number of ram slots which can be supported by vhost-user, which
is curren
With this change, when the VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS
protocol feature has been negotiated, QEMU no longer sends the backend
all the memory regions in a single message. Rather, when the memory
tables are set or updated, a series of VHOST_USER_ADD_MEM_REG and
VHOST_USER_REM_MEM_REG me
When setting the memory tables, qemu uses a memory region's userspace
address to look up the region's MemoryRegion struct. Among other things,
the MemoryRegion contains the region's offset and associated file
descriptor, all of which need to be sent to the backend.
With VHOST_USER_PROTOCOL_F_CONFI
On 13/05/2020 19.51, Alex Bennée wrote:
> First we ensure all guest space initialisation logic comes through
> probe_guest_base once we understand the nature of the binary we are
> loading. The convoluted init_guest_space routine is removed and
> replaced with a number of pgb_* helpers which are ca
Patchew URL: https://patchew.org/QEMU/20200521033631.1605-1-miaoy...@huawei.com/
Hi,
This series seems to have some coding style problems. See output below for
more information:
Message-id: 20200521033631.1605-1-miaoy...@huawei.com
Subject: [PATCH v8 0/8] pci_expander_brdige:acpi: Support pxb-
Patchew URL: https://patchew.org/QEMU/20200521033631.1605-1-miaoy...@huawei.com/
Hi,
This series failed the docker-quick@centos7 build test. Please find the testing
commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.
=== TEST SCRIPT BEGIN ===
The default behaviour for virtio devices is not to use the platform's normal
DMA paths, but instead to use the fact that it's running in a hypervisor
to directly access guest memory. That doesn't work if the guest's memory
is protected from hypervisor access, such as with AMD's SEV or POWER's PEF.
The kvm_memcrypt_enabled() and kvm_memcrypt_encrypt_data() helper functions
don't conceptually have any connection to KVM (although it's not possible
in practice to use them without it).
They also rely on looking at the global KVMState. But the same information
is available from the machine, and
Some upcoming POWER machines have a system called PEF (Protected
Execution Framework) which uses a small ultravisor to allow guests to
run in a way that they can't be eavesdropped by the hypervisor. The
effect is roughly similar to AMD SEV, although the mechanisms are
quite different.
Most of the
This allows failures to be reported richly and idiomatically.
Signed-off-by: David Gibson
---
accel/kvm/kvm-all.c| 4 +++-
include/exec/guest-memory-protection.h | 2 +-
target/i386/sev.c | 31 +-
3 files changed, 19 insertions(+
When the "memory-encryption" property is set, we also disable KSM
merging for the guest, since it won't accomplish anything.
We want that, but doing it in the property set function itself is
theoretically incorrect, in the unlikely event of some configuration
environment that set the property th
Currently the "memory-encryption" property is only looked at once we get to
kvm_init(). Although protection of guest memory from the hypervisor isn't
something that could really ever work with TCG, it's not conceptually tied
to the KVM accelerator.
In addition, the way the string property is reso
SEVState is contained within SevGuestState. We've now fixed redundancies
and name conflicts, so there's no real point to the nested structure. Just
move all the fields of SEVState into SevGuestState.
This eliminates the SEVState structure, which as a bonus removes the
confusion with the SevState e
Several architectures have mechanisms which are designed to protect guest
memory from interference or eavesdropping by a compromised hypervisor. AMD
SEV does this with in-chip memory encryption and Intel has a similar
mechanism. POWER's Protected Execution Framework (PEF) accomplishes a
similar g
The user can explicitly specify a handle via the "handle" property wired
to SevGuestState::handle. That gets passed to the KVM_SEV_LAUNCH_START
ioctl() which may update it, the final value being copied back to both
SevGuestState::handle and SEVState::handle.
AFAICT, nothing will be looking SEVSta
A number of hardware platforms are implementing mechanisms whereby the
hypervisor does not have unfettered access to guest memory, in order
to mitigate the security impact of a compromised hypervisor.
AMD's SEV implements this with in-cpu memory encryption, and Intel has
its own memory encryption
Currently the "memory-encryption" machine option is notionally generic,
but in fact is only used for AMD SEV setups. Make another step towards it
being actually generic by using the GuestMemoryProtection QOM
interface to dispatch the initial setup, rather than directly calling
sev_guest_i
The SEVState structure has cbitpos and reduced_phys_bits fields which are
simply copied from the SevGuestState structure and never changed. Now that
SEVState is embedded in SevGuestState we can just access the original copy
directly.
Signed-off-by: David Gibson
---
target/i386/sev.c | 19 ++
Neither QSevGuestInfo nor SEVState (not to be confused with SevState) is
used anywhere outside target/i386/sev.c, so they might as well live in
there rather than in a (somewhat) exposed header.
Signed-off-by: David Gibson
---
target/i386/sev.c | 44 ++
At the moment AMD SEV sets a special function pointer, plus an opaque
handle in KVMState to let things know how to encrypt guest memory.
Now that we have a QOM interface for handling things related to guest
memory protection, use a QOM method on that interface, rather than a bare
function pointer
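The dispatch change can be illustrated with a heavily simplified, hypothetical interface table in place of the real QOM machinery (all type names here are illustrative, and the toy XOR "encryption" merely exercises the dispatch; real SEV calls into the kernel):

```c
#include <stddef.h>

/* Interface-style class table replacing a bare function pointer. */
typedef struct GuestMemoryProtectionClass {
    int (*encrypt_data)(void *obj, unsigned char *ptr, size_t len);
} GuestMemoryProtectionClass;

/* An object implementing the interface, with some toy state. */
typedef struct GuestProtector {
    const GuestMemoryProtectionClass *klass;
    unsigned char key;
} GuestProtector;

/* Toy implementation: XOR each byte with the key. */
static int toy_encrypt(void *obj, unsigned char *ptr, size_t len)
{
    GuestProtector *p = obj;
    for (size_t i = 0; i < len; i++) {
        ptr[i] ^= p->key;
    }
    return 0;
}

static const GuestMemoryProtectionClass toy_class = {
    .encrypt_data = toy_encrypt,
};

/* Callers dispatch through the interface method, not a global hook. */
static int guest_memory_encrypt(GuestProtector *p, unsigned char *ptr,
                                size_t len)
{
    return p->klass->encrypt_data(p, ptr, len);
}
```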
The SEV code uses a pretty ugly global to access its internal state. Now
that SEVState is embedded in SevGuestState, we can avoid accessing it via
the global in some cases. In the remaining cases use a new global
referencing the containing SevGuestState which will simplify some future
transformat
At the moment this is a purely passive object which is just a container for
information used elsewhere, hence the name. I'm going to change that
though, so as a preliminary rename it to SevGuestState.
That name risks confusion with both SEVState and SevState, but I'll be
working on that in follow
Currently SevGuestState contains only configuration information. For
runtime state another non-QOM struct SEVState is allocated separately.
Simplify things by instead embedding the SEVState structure in
SevGuestState.
Signed-off-by: David Gibson
---
target/i386/sev.c | 54 +
SEVState::policy is set from the final value of the policy field in the
parameter structure for the KVM_SEV_LAUNCH_START ioctl(). But, AFAICT
that ioctl() won't ever change it from the original supplied value which
comes from SevGuestState::policy.
So, remove this field and just use SevGuestState
This structure is nothing but an empty wrapper around the parent class,
which by QOM conventions means we don't need it at all.
Signed-off-by: David Gibson
---
target/i386/sev.c | 1 -
target/i386/sev_i386.h | 5 -
2 files changed, 6 deletions(-)
diff --git a/target/i386/sev.c b/target
If the table size is changed between virt_acpi_build and
virt_acpi_build_update, the table size would not be updated for
UEFI; therefore, just align the size to 128KB, which is enough and
the same as x86. It would warn if 64KB is not enough, and the align
size should then be updated.
Signed-off-by: Yubo Miao
-
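The power-of-two alignment such a change relies on can be sketched as follows (the macro name and the 128KB value are taken from the description above, not from the patch itself):

```c
#include <stdint.h>

/* Fixed blob size assumed from the commit message: 128KB. */
#define ACPI_BUILD_TABLE_SIZE (128 * 1024)

/* Round size up to the next multiple of align (align must be a
 * power of two). */
static uint64_t acpi_align_size(uint64_t size, uint64_t align)
{
    return (size + align - 1) & ~(align - 1);
}
```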
The resources of pxbs are obtained by crs_build, and the resources
used by pxbs are removed from the resources defined for the host bridge.
The resources for a pxb are composed of the following two parts:
1. The bar space of the pci-bridge/pcie-root-port behind it
2. The config space of devices behind it
The unit-test is separated into three patches:
1. The files changed and listed in bios-tables-test-allowed-diff.h
2. The unit-test
3. The binary file and clearing bios-tables-test-allowed-diff.h
The ASL diff would also be listed.
Since there is a 1000+ line diff, some changes would be omitted.
* Origi
Extract crs build from acpi_build.c; the function could also be used
to build the crs for pxbs for arm. The resources are composed of two parts:
1. The bar space of pci-bridge/pcie-root-ports
2. The resources needed by devices behind PXBs.
The base and limit of memory/io are obtained from the confi
Add the binary file DSDT.pxb and clear bios-tables-test-allowed-diff.h
Signed-off-by: Yubo Miao
---
tests/data/acpi/virt/DSDT.pxb | Bin 0 -> 7802 bytes
tests/qtest/bios-tables-test-allowed-diff.h | 1 -
2 files changed, 1 deletion(-)
create mode 100644 tests/data/acpi/virt/DSDT
Add a testcase for pxb to make sure the ACPI table is correct for the guest.
Signed-off-by: Yubo Miao
---
tests/qtest/bios-tables-test.c | 58 ++
1 file changed, 52 insertions(+), 6 deletions(-)
diff --git a/tests/qtest/bios-tables-test.c b/tests/qtest/bios-tables-test
Write the extra roots into the fw_cfg so that the UEFI can
discover them. Only if the UEFI knows there are extra roots
can the config space of devices behind those roots be obtained.
Signed-off-by: Yubo Miao
---
hw/arm/virt.c | 8
hw/i386/pc.c | 18
Extract two APIs acpi_dsdt_add_pci_route_table and
acpi_dsdt_add_pci_osc from acpi_dsdt_add_pci. The first
API is used to specify the pci route table and the second
API is used to declare the operation system capabilities.
These two APIs would be used to specify the pxb-pcie in DSDT.
Signed-off-by
Changes since v7
v7->v8:
Fix the error: no member named 'fw_cfg' in 'struct PCMachineState'
I have one question for patch
[PATCH v8 8/8] unit-test: Add the binary file and clear diff.
I followed the instructions in tests/qtest/bios-tables-test.c
to update the golden master binaries and empty
tests/qtest/b
On Wed, May 20, 2020 at 5:40 AM Alistair Francis
wrote:
>
> Signed-off-by: Alistair Francis
> ---
> target/riscv/pmp.c | 14 +-
> 1 file changed, 9 insertions(+), 5 deletions(-)
>
> diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
> index 0e6b640fbd..607a991260 100644
> --- a/tar
On Wed, May 20, 2020 at 5:39 AM Alistair Francis
wrote:
>
> The reset vector is set in the init function don't set it again in
> realize.
>
> Signed-off-by: Alistair Francis
> ---
> target/riscv/cpu.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
Reviewed-by: Bin Meng
On Wed, 2020-05-20 at 10:17 +0100, Daniel P. Berrangé wrote:
> On Wed, May 20, 2020 at 10:10:07AM +0800, Chenyi Qiang wrote:
> > There are no Icelake Desktop products in the market. Remove the
> > Icelake-Client CPU model.
>
> QEMU has been shipping this CPU model for 2 years now. Regardless
> of
Hi Alistair,
On Thu, May 7, 2020 at 5:02 AM Alistair Francis wrote:
>
> On Tue, May 5, 2020 at 6:34 PM Bin Meng wrote:
> >
> > Hi Alistair,
> >
> > On Wed, May 6, 2020 at 6:37 AM Alistair Francis
> > wrote:
> > >
> > > On Tue, May 5, 2020 at 1:34 PM Alistair Francis
> > > wrote:
> > > >
> >
> -Original Message-
> From: Jason Wang
> Sent: Wednesday, May 20, 2020 8:23 PM
> To: Zhang, Chen ; qemu-devel@nongnu.org; Lukas
> Straub
> Cc: zhangc...@gmail.com
> Subject: Re: [PATCH 0/7] Latest COLO tree queued patches
>
>
> On 2020/5/20 下午5:07, Zhang, Chen wrote:
> > It looks ASa
On Fri, May 8, 2020 at 3:22 AM Alistair Francis
wrote:
>
> The RISC-V ISA spec version 1.09.1 has been deprecated in QEMU since
> 4.1. It's not commonly used so let's remove support for it.
>
> Signed-off-by: Alistair Francis
> ---
> target/riscv/cpu.c| 2 -
> target
On Fri, May 8, 2020 at 3:19 AM Alistair Francis
wrote:
>
> Signed-off-by: Alistair Francis
> ---
> target/riscv/cpu.c | 28
> target/riscv/cpu.h | 7 ---
> tests/qtest/machine-none-test.c | 4 ++--
> 3 files changed, 2 insertions(+), 3
On Fri, May 8, 2020 at 3:21 AM Alistair Francis
wrote:
>
> The ISA specific Spike machines have been deprecated in QEMU since 4.1,
nits: there are 2 spaces between 'have' and 'been'
> let's finally remove them.
>
> Signed-off-by: Alistair Francis
> Reviewed-by: Philippe Mathieu-Daudé
> ---
>
Patchew URL: https://patchew.org/QEMU/20200520235349.21215-1-pauld...@gmail.com/
Hi,
This series seems to have some coding style problems. See output below for
more information:
Message-id: 20200520235349.21215-1-pauld...@gmail.com
Subject: [PATCH v6 0/7] dwc-hsotg (aka dwc2) USB host controll
> I'm confused by VFIO_USER_ADD_MEMORY_REGION vs VFIO_USER_IOMMU_MAP_DMA.
> The former seems intended to provide the server with access to the
> entire GPA space, while the latter indicates an IOVA to GPA mapping of
> those regions. Doesn't this break the basic isolation of a vIOMMU?
> This ess
The dwc-hsotg (dwc2) USB host depends on a short packet to
indicate the end of an IN transfer. The usb-storage driver
currently doesn't provide this, so fix it.
I have tested this change rather extensively using a PC
emulation with xhci, ehci, and uhci controllers, and have
not observed any regres
Wire the dwc-hsotg (dwc2) emulation into QEMU
Signed-off-by: Paul Zimmerman
Reviewed-by: Philippe Mathieu-Daude
---
hw/arm/bcm2835_peripherals.c | 21 -
include/hw/arm/bcm2835_peripherals.h | 3 ++-
2 files changed, 22 insertions(+), 2 deletions(-)
diff --git a/hw/
Add a check for functional dwc-hsotg (dwc2) USB host emulation to
the Raspi 2 acceptance test
Signed-off-by: Paul Zimmerman
Reviewed-by: Philippe Mathieu-Daude
---
tests/acceptance/boot_linux_console.py | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/tests/acceptanc
Import the dwc-hsotg (dwc2) register definitions file from the
Linux kernel. This is a copy of drivers/usb/dwc2/hw.h from the
mainline Linux kernel, the only changes being to the header, and
two instances of 'u32' changed to 'uint32_t' to allow it to
compile. Checkpatch throws a boatload of errors
Add the dwc-hsotg (dwc2) USB host controller emulation code.
Based on hw/usb/hcd-ehci.c and hw/usb/hcd-ohci.c.
Note that to use this with the dwc-otg driver in the Raspbian
kernel, you must pass the option "dwc_otg.fiq_fsm_enable=0" on
the kernel command line.
Emulation of slave mode and of descr
Add the dwc-hsotg (dwc2) USB host controller state definitions.
Mostly based on hw/usb/hcd-ehci.h.
Signed-off-by: Paul Zimmerman
---
hw/usb/hcd-dwc2.h | 190 ++
1 file changed, 190 insertions(+)
create mode 100644 hw/usb/hcd-dwc2.h
diff --git a/hw/us
This version fixes a few things pointed out by Peter, and one by
Felippe.
This patch series adds emulation for the dwc-hsotg USB controller,
which is used on the Raspberry Pi 3 and earlier, as well as a number
of other development boards. The main benefit for Raspberry Pi is that
this enables netwo
Add BCM2835 SOC MPHI (Message-based Parallel Host Interface)
emulation. It is very basic, only providing the FIQ interrupt
needed to allow the dwc-otg USB host controller driver in the
Raspbian kernel to function.
Signed-off-by: Paul Zimmerman
Acked-by: Philippe Mathieu-Daude
Reviewed-by: Peter
On Mon, 18 May 2020 16:44:18 -0500
Reza Arbab wrote:
> NUMA nodes corresponding to GPU memory currently have the same
> affinity/distance as normal memory nodes. Add a third NUMA associativity
> reference point enabling us to give GPU nodes more distance.
>
> This is guest visible information, w
This is the CPU cache layout as shown by lscpu -a -e:
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ
0   0    0      0    0:0:0:0       yes    3800.  2200.
1   0    0      1    1:1:1:0       yes    3800.  2200.
2   0    0      2    2:2:2:0       yes    3800.  2200.00
On Mon, 18 May 2020 16:44:17 -0500
Reza Arbab wrote:
> Make the number of NUMA associativity reference points a
> machine-specific value, using the currently assumed default (two
> reference points). This preps the next patch to conditionally change it.
>
> Signed-off-by: Reza Arbab
> ---
> hw
Patchew URL: https://patchew.org/QEMU/cover.1590008051.git.lukasstra...@web.de/
Hi,
This series failed the asan build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.
=== TEST SCRIPT BEGIN ===
#!/bin/bash
export
Patchew URL: https://patchew.org/QEMU/cover.1590008051.git.lukasstra...@web.de/
Hi,
This series failed the docker-quick@centos7 build test. Please find the testing
commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.
=== TEST SCRIPT BEGIN ===
#
On Thu, 14 May 2020 13:47:07 PDT (-0700), Alistair Francis wrote:
Signed-off-by: Alistair Francis
---
hw/riscv/sifive_e.c | 41 +++--
include/hw/riscv/sifive_e.h | 4
2 files changed, 34 insertions(+), 11 deletions(-)
diff --git a/hw/riscv/sifive_e
On Thu, 14 May 2020 13:47:10 PDT (-0700), Alistair Francis wrote:
Signed-off-by: Alistair Francis
---
hw/riscv/sifive_e.c | 35 +++
include/hw/riscv/sifive_e.h | 1 +
2 files changed, 32 insertions(+), 4 deletions(-)
diff --git a/hw/riscv/sifive_e.c b/h
On Thu, May 21, 2020 at 1:01 AM Eric Blake wrote:
>
> It's useful to know how much space can be occupied by qcow2 persistent
> bitmaps, even though such metadata is unrelated to the guest-visible
> data. Report this value as an additional QMP field, present when
> measuring an existing image and
On 20/05/20 23:05, Lukas Straub wrote:
> +
> +void yank_init(void)
> +{
> +qemu_mutex_init(&lock);
> +}
You can use __constructor__ for this to avoid the call in vl.c. See
job.c for an example.
Thanks,
Paolo
The next patch will add another client that wants to merge dirty
bitmaps; it will be easier to refactor the code to construct the QAPI
struct correctly into a helper function.
Signed-off-by: Eric Blake
---
qemu-img.c | 33 -
1 file changed, 20 insertions(+), 13 de
Make it easier to copy all the persistent bitmaps of (the top layer
of) a source image along with its guest-visible contents, by adding a
boolean flag for use with qemu-img convert. This is basically
shorthand, as the same effect could be accomplished with a series of
'qemu-img bitmap --add' and '
v4 was here:
https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg03182.html
original cover letter here:
https://lists.gnu.org/archive/html/qemu-devel/2020-04/msg03464.html
Based-on: <20200519175707.815782-1-ebl...@redhat.com>
[pull v3 bitmaps patches for 2020-05-18]
Since then:
- patch 1 is
It's useful to know how much space can be occupied by qcow2 persistent
bitmaps, even though such metadata is unrelated to the guest-visible
data. Report this value as an additional QMP field, present when
measuring an existing image and output format that both support
bitmaps. Update iotest 178 a
A recent change to qemu-img changed expected error message output, but
178 takes long enough to execute that it does not get run by 'make
check' or './check -g quick'.
Fixes: 43d589b074
Signed-off-by: Eric Blake
---
tests/qemu-iotests/178.out.qcow2 | 2 +-
tests/qemu-iotests/178.out.raw | 2 +-
Add a new test covering the 'qemu-img bitmap' subcommand, as well as
'qemu-img convert --bitmaps', both added in recent patches.
Signed-off-by: Eric Blake
Reviewed-by: Max Reitz
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
tests/qemu-iotests/291 | 112 +
Jan, I tried your suggestion but it didn't make a difference. Here is my
current setup:
h/w: AMD Ryzen 9 3900X
kernel: 5.4
QEMU: 5.0.0-6
Chipset selection: Q35-5.0
Configuration: host-passthrough, cache enabled
Use CoreInfo.exe inside Windows. The problem is this:
Logical Processor to Cache Map
On Wed, May 20, 2020 at 05:54:44PM +0200, Kevin Wolf wrote:
> Am 20.05.2020 um 10:06 hat Roman Kagan geschrieben:
> > Devices (virtio-blk, scsi, etc.) and the block layer are happy to use
> > 32-bit for logical_block_size, physical_block_size, and min_io_size.
> > However, the properties in BlockCo
Robert Foley writes:
A brief rationale wouldn't go amiss in the commit message. e.g. "We will
shortly need to pass more parameters to the class so lets just pass args
rather than growing the parameter list."
Otherwise:
Reviewed-by: Alex Bennée
> Signed-off-by: Robert Foley
> ---
> tests/
On Wed, May 20, 2020 at 11:04:44AM +0200, Philippe Mathieu-Daudé wrote:
> On 5/20/20 10:06 AM, Roman Kagan wrote:
> > Devices (virtio-blk, scsi, etc.) and the block layer are happy to use
> > 32-bit for logical_block_size, physical_block_size, and min_io_size.
> > However, the properties in BlockCo
On 5/18/20 11:32 AM, Eric Blake wrote:
From: Eyal Moscovici
All calls to cvtnum check the return value and print the same error
message more or less. And so error reporting moved to cvtnum_full to
reduce code duplication and provide a single error
message. Additionally, cvtnum now wraps cvtnum_
> Using the proprietary firmware for this would be ideal. It would also
> provide reliable access to the kernel debugger which would be
> extremely
> helpful for diagnosing what's going wrong with the console. I'm not
> sure how I would go about making progress on this though. I know there
> are bi
On Wed, May 20, 2020 at 10:57:00AM +0200, Philippe Mathieu-Daudé wrote:
> On 5/20/20 10:06 AM, Roman Kagan wrote:
> > Several block device properties related to blocksize configuration must
> > be in certain relationship WRT each other: physical block must be no
> > smaller than logical block; min_
On Wed, May 20, 2020 at 6:18 AM Peter Maydell
wrote:
> On Wed, 20 May 2020 at 06:49, Paul Zimmerman wrote:
> > Is there a tree somewhere that has a working example of a
> > three-phase reset? I did a 'git grep' on the master branch and didn't
> > find any code that is actually using it. I tried
On Wed, May 20, 2020 at 06:44:44AM -0400, Michael S. Tsirkin wrote:
> On Wed, May 20, 2020 at 11:06:55AM +0300, Roman Kagan wrote:
> > The width of opt_io_size in virtio_blk_topology is 32bit.
> >
> > Use the appropriate accessor to store it.
> >
> > Signed-off-by: Roman Kagan
>
>
> Thanks for
Register yank functions on sockets to shut them down.
Signed-off-by: Lukas Straub
---
migration/migration.c | 9 +
migration/qemu-file-channel.c | 6 ++
migration/socket.c| 11 +++
3 files changed, 26 insertions(+)
diff --git a/migration/migration.c b/m
Register a yank function which shuts down the socket and sets
s->state = NBD_CLIENT_QUIT. This is the same behaviour as if an
error occurred.
Signed-off-by: Lukas Straub
---
Makefile.objs | 1 +
block/nbd.c | 101 --
2 files changed, 65 insertio
Register a yank function to shutdown the socket on yank.
Signed-off-by: Lukas Straub
---
chardev/char-socket.c | 24
1 file changed, 24 insertions(+)
diff --git a/chardev/char-socket.c b/chardev/char-socket.c
index 185fe38dda..d5c6cd2153 100644
--- a/chardev/char-socket
The yank feature allows recovering from a hanging qemu by "yanking"
at various parts. Other qemu subsystems can register themselves and
multiple yank functions. Then all yank functions for selected
instances can be called by the 'yank' out-of-band qmp command.
Available instances can be queried by a 'qu
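A toy sketch of the registration scheme described above (all names hypothetical; the real series presumably ties registrations to named instances):

```c
#include <stddef.h>

/* Callback type a subsystem registers for later "yanking". */
typedef void (*YankFn)(void *opaque);

#define MAX_YANK 16
static struct { YankFn fn; void *opaque; } yank_table[MAX_YANK];
static int yank_count;

/* Register one yank function together with its opaque argument. */
static void yank_register(YankFn fn, void *opaque)
{
    if (yank_count < MAX_YANK) {
        yank_table[yank_count].fn = fn;
        yank_table[yank_count].opaque = opaque;
        yank_count++;
    }
}

/* Invoke every registered yank function, as the out-of-band
 * command would do for the selected instances. */
static void yank_call_all(void)
{
    for (int i = 0; i < yank_count; i++) {
        yank_table[i].fn(yank_table[i].opaque);
    }
}
```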