On 10.05.2018 09:18, Su Hang wrote:
Hi,
this will be my first comment on devel as part of my GSoC participation
this year.
> +
> +QTestState *s = qtest_startf(
> +"-M versatilepb -m 128M -nographic -kernel
> ../tests/hex-loader-check-data/test.hex");
> +
The test binary "test.hex" i
On the POWER9 processor, the XIVE interrupt controller can control
interrupt sources using MMIO to trigger events, to EOI or to turn off
the sources. Priority management and interrupt acknowledgment are also
controlled by MMIO in the presenter sub-engine.
These MMIO regions are exposed to guests in
From: zhanghailiang
The COLO thread may sleep at qemu_sem_wait(&s->colo_checkpoint_sem)
while failover work begins; it's better to wake it up to speed up
the process.
Signed-off-by: zhanghailiang
---
migration/colo.c | 8
1 file changed, 8 insertions(+)
diff --git a/migration/colo.c b/migra
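A minimal sketch of the idea (the function name is hypothetical; qemu_sem_post()
and the colo_checkpoint_sem field are taken from the description above):

/* Hypothetical illustration, not the patch itself: wake the COLO thread that
 * may be blocked in qemu_sem_wait(&s->colo_checkpoint_sem), so that failover
 * does not have to wait for the next checkpoint interval to expire. */
static void colo_failover_wakeup(MigrationState *s)
{
    qemu_sem_post(&s->colo_checkpoint_sem);
}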
From: zhanghailiang
If some errors happen during the VM's COLO FT stage, it's important to
notify the users of this event. Together with 'x-colo-lost-heartbeat',
users can intervene in COLO's failover work immediately.
If users don't want to get involved in COLO's failover verdict,
it is still necess
> > This patch adds Intel Hexadecimal Object File format support to
> > the loader. The file format specification is available here:
> > http://www.piclist.com/techref/fileext/hex/intel.htm
> >
> > The file format is mainly intended for embedded systems
> > and microcontrollers, such as Micro:bit
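The record format itself is simple. As a self-contained illustration of the
specification linked above (this is not the loader code from the patch), one
record of the form ":llaaaatt<data>cc" can be parsed like this:

#include <stdint.h>
#include <string.h>

static int hex_nibble(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    return -1;
}

static int hex_byte(const char *p)
{
    int hi = hex_nibble(p[0]);
    int lo = hi < 0 ? -1 : hex_nibble(p[1]);
    return (hi < 0 || lo < 0) ? -1 : (hi << 4) | lo;
}

/* Parse one record; returns the data length, or -1 if malformed. */
static int parse_hex_record(const char *line, uint16_t *addr,
                            uint8_t *type, uint8_t *data, size_t max)
{
    if (line[0] != ':' || strlen(line) < 11) {
        return -1;
    }
    int len = hex_byte(line + 1);                 /* ll: byte count */
    int ahi = hex_byte(line + 3);                 /* aaaa: load address */
    int alo = hex_byte(line + 5);
    int rt  = hex_byte(line + 7);                 /* tt: record type */
    if (len < 0 || ahi < 0 || alo < 0 || rt < 0 ||
        (size_t)len > max || strlen(line) < (size_t)(11 + 2 * len)) {
        return -1;
    }
    uint8_t sum = len + ahi + alo + rt;
    for (int i = 0; i < len; i++) {
        int b = hex_byte(line + 9 + 2 * i);
        if (b < 0) {
            return -1;
        }
        data[i] = b;
        sum += b;
    }
    /* cc: two's-complement checksum; all record bytes must sum to 0 mod 256 */
    int csum = hex_byte(line + 9 + 2 * len);
    if (csum < 0 || (uint8_t)(sum + csum) != 0) {
        return -1;
    }
    *addr = (ahi << 8) | alo;
    *type = rt;
    return len;
}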
We record the addresses of the dirty pages that are received;
this will help flush the pages that are cached for the SVM.
The trick here is that we record dirty pages by re-using the migration
dirty bitmap. In a later patch, we will start dirty logging
for the SVM, just like migration does; in this way, we can record both
the
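A rough sketch of that trick, with field and helper names that are assumptions
rather than the patch's actual code (QEMU migration/ram.c context):

/* Sketch: when a page is received into the SVM's RAM cache during a
 * checkpoint, mark it in the re-used migration dirty bitmap so that only
 * these pages have to be flushed from the cache afterwards; block->bmap as
 * the per-RAMBlock migration bitmap is an assumption. */
static void colo_record_received_page(RAMBlock *block, ram_addr_t offset)
{
    unsigned long page = offset >> TARGET_PAGE_BITS;

    set_bit(page, block->bmap);
}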
Paolo Bonzini writes:
> On 11/05/2018 11:27, Peter Maydell wrote:
>>> +uint8_t replay_get_byte(void)
>>> +{
>>> +    uint8_t byte = 0;
>>> +    if (replay_file) {
>>> +        byte = getc(replay_file);
>>> +    }
>>> +    return byte;
>>> +}
>> Coverity (CID 1390576) points out that this function
On Wed, 04/18 17:00, Vladimir Sementsov-Ogievskiy wrote:
> Hi all.
>
> We now have the following problem:
>
> If the dirty-bitmaps migration capability is enabled, the persistence flag is
> dropped for all migrated bitmaps, to prevent them from being stored to the
> storage on inactivation.
Why do we prevent sour
On 04/24/2018 02:13 PM, Wei Wang wrote:
This is the device part implementation to add a new feature,
VIRTIO_BALLOON_F_FREE_PAGE_HINT to the virtio-balloon device. The device
receives the guest free page hints from the driver and clears the
corresponding bits in the dirty bitmap, so that those fre
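A minimal sketch of that clearing step (the helper name and the block->bmap
field are assumptions; bitmap_clear() is QEMU's generic bitmap helper):

/* Sketch: translate a hinted free guest range into page numbers within the
 * RAMBlock and clear those bits in the migration dirty bitmap, so the pages
 * are not transferred. */
static void balloon_clear_free_page_hint(RAMBlock *block,
                                         ram_addr_t offset, size_t len)
{
    long start = offset >> TARGET_PAGE_BITS;
    long npages = len >> TARGET_PAGE_BITS;

    bitmap_clear(block->bmap, start, npages);
}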
Eduardo Habkost writes:
> On Thu, May 10, 2018 at 06:57:32PM +0200, Paolo Bonzini wrote:
> [...]
>> > -machine->device_memory = g_malloc(sizeof(*machine->device_memory));
>> > +machine->device_memory = g_malloc0(sizeof(*machine->device_memory));
>>
>> g_new0 since you are at it? :)
>
> N
Public bug reported:
hw/block/dataplane/virtio-blk.c commit
1010cadf62332017648abee0d7a3dc7f2eef9632
in the function notify_guest_bh, the function virtio_notify_irqfd is called
to deliver the interrupt corresponding to the vq;
however, without the dataplane, hw/block/virtio_blk_req_complete calls
Libvirt or other high-level software can use this command to query the COLO status.
You can test this command like this:
{'execute':'query-colo-status'}
Signed-off-by: Zhang Chen
---
migration/colo.c| 34 ++
qapi/migration.json | 33 +
Filters need to process checkpoint/failover or other events
passed by the COLO frame.
Signed-off-by: zhanghailiang
---
include/net/filter.h | 5 +
net/filter.c | 17 +
net/net.c| 28
3 files changed, 50 insertions(+)
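One plausible shape of the new hook, sketched with assumed names (the actual
callback name and signature may differ in the patch):

/* Assumed addition to NetFilterClass in include/net/filter.h:
 *     void (*handle_event)(NetFilterState *nf, int event, Error **errp);
 * Broadcast helper, assumed to live in net/net.c where net_clients is
 * visible: notify every filter attached to every net client. */
void colo_notify_filters_event(int event, Error **errp)
{
    NetClientState *nc;
    NetFilterState *nf;

    QTAILQ_FOREACH(nc, &net_clients, next) {
        QTAILQ_FOREACH(nf, &nc->filters, next) {
            NetFilterClass *nfc = NETFILTER_GET_CLASS(OBJECT(nf));
            if (nfc->handle_event) {
                nfc->handle_event(nf, event, errp);
                if (errp && *errp) {
                    return;
                }
            }
        }
    }
}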
On Fri, May 11, 2018 at 03:19:37PM +0200, David Hildenbrand wrote:
> The start of the address space does not have to be aligned for the
> search. Handly this case explicitly when starting the search for a new
> address.
Handly -> Handle?
>
> Signed-off-by: David Hildenbrand
> ---
> hw/mem/memo
After one round of checkpoint, the states of the PVM and the SVM
become consistent, so it is unnecessary to adjust the sequence
of net packets for old connections. Besides, when failover
happens, filter-rewriter needs to check whether it still needs to
adjust the sequence of net packets.
Signed-off-by: zhang
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 101f32c..31d9eb8 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -73,7 +73,7 @@ static void notify_guest_bh(void *opaque)
unsigned i = j + ctzl(bits);
From: zhanghailiang
There are several stages during the loadvm/savevm process. In different stages,
the incoming migration processes different types of sections.
We want to control these stages more accurately; it will benefit COLO
performance, as we don't have to save the type of QEMU_VM_SECTION_START
sections ev
> -Original Message-
> From: Eduardo Habkost [mailto:ehabk...@redhat.com]
> Sent: Friday, May 11, 2018 10:33 PM
> To: Liu, Jingqi
> Cc: pbonz...@redhat.com; r...@twiddle.net; m...@redhat.com;
> imamm...@redhat.com; marcel.apfelb...@gmail.com; qemu-
> de...@nongnu.org
> Subject: Re: [PATCH
On 13.05.2018 at 15:37, Ivan Ren wrote:
> > Doesn't this defeat the purpose of preallocation? Usually, the idea with
> > preallocation is that you don't need to update any metadata on the first
> > write, but if you set QCOW_OFLAG_ZERO, we do need a metadata update
> > again.
> >
> > So wh
For COLO FT, both the PVM and the SVM run at the same time and
only sync their state when needed.
So here, let the SVM run while not doing a checkpoint, and change
DEFAULT_MIGRATE_X_CHECKPOINT_DELAY to 200*100.
Besides, we forgot to release colo_checkpoint_sem and
colo_delay_timer; fix them here.
Signed-off-b
From: zhanghailiang
Notify all net filters about the checkpoint and failover event.
Signed-off-by: zhanghailiang
---
migration/colo.c | 12
1 file changed, 12 insertions(+)
diff --git a/migration/colo.c b/migration/colo.c
index 5e517dc..d67bdc2 100644
--- a/migration/colo.c
+++ b
On 11/5/18 5:20 pm, Zhu Yijun wrote:
> Hi all,
>
> I booted two sr-iov guests using KVM-VFIO and pinged each other with
> no load for one night. I found that most of the latencies were less than 0.1ms,
> but several icmp_seq were greater than 10ms, even up to 1000ms;
>
[...]
>
>VF used by these two
While doing a checkpoint, we need to flush all the unhandled packets.
By using the filter notifier mechanism, we can easily notify
every compare object to do this process, which runs inside
the compare threads as a coroutine.
Signed-off-by: zhanghailiang
Signed-off-by: Zhang Chen
---
include/migra
> -Original Message-
> From: Eduardo Habkost [mailto:ehabk...@redhat.com]
> Sent: Friday, May 11, 2018 10:41 PM
> To: Liu, Jingqi
> Cc: pbonz...@redhat.com; r...@twiddle.net; m...@redhat.com;
> imamm...@redhat.com; marcel.apfelb...@gmail.com; qemu-
> de...@nongnu.org
> Subject: Re: [PATCH
We should not load the PVM's state directly into the SVM, because errors may
happen while the SVM is receiving data, which would break the SVM.
We need to ensure all data is received before loading the state into the SVM. We
use extra memory to cache this data (the PVM's RAM). The RAM cache on the secondary
side is i
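A sketch of the cache setup under those assumptions (the colo_cache field name
is an assumption; error unwinding is omitted; migration/ram.c context):

/* Sketch: give each RAMBlock a shadow buffer that incoming checkpoint data is
 * written into, initialised from the current SVM memory; only after a full
 * checkpoint has been received is the cache flushed into the SVM's RAM. */
static int colo_init_ram_cache(void)
{
    RAMBlock *block;

    RAMBLOCK_FOREACH(block) {
        block->colo_cache = qemu_anon_ram_alloc(block->used_length,
                                                NULL, false);
        if (!block->colo_cache) {
            return -ENOMEM;
        }
        memcpy(block->colo_cache, block->host, block->used_length);
    }
    return 0;
}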
It's a good idea to use a notifier to notify the COLO frame of
inconsistent packet comparisons.
Signed-off-by: Zhang Chen
Signed-off-by: zhanghailiang
---
net/colo-compare.c | 32 +---
net/colo-compare.h | 2 ++
2 files changed, 27 insertions(+), 7 deletions(-)
diff --git
Oh right, another note: this only manifests when using KVM.
https://bugs.launchpad.net/bugs/1771042
Title:
dataplane interrupt handler doesn't support msi
Status in QEMU:
New
Bug
Hi all,
The COLO frame, block replication and the COLO proxy (colo-compare, filter-mirror,
filter-redirector, filter-rewriter) have existed in QEMU
for a long time; it's time to integrate these three parts to make COLO really
work.
In this series, we have some optimizations for the COLO frame, including sep
While the VM is running, the PVM may dirty some pages; we will transfer
the PVM's dirty pages to the SVM and store them into the SVM's RAM cache at the next
checkpoint time. So the content of the SVM's RAM cache will always be the same as
the PVM's memory after a checkpoint.
Instead of flushing all content of the PVM's RAM
After a net connection is closed, we didn't clear its related resources
in connection_track_table, which leads to a memory leak.
Let's track the state of each net connection; if it is closed, its related
resources will be cleared up.
Signed-off-by: zhanghailiang
Signed-off-by: Zhang Chen
---
net
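A sketch of that cleanup; the tcp_state field and the TCPS_CLOSED value are
assumptions, while Connection, ConnectionKey and connection_track_table are
the existing COLO proxy structures:

/* Sketch: once a tracked connection has been closed (FIN handshake completed
 * in both directions, or a RST seen), drop its entry; the value-destroy
 * function registered with the hash table frees the Connection and any
 * packets still queued on it. */
static void connection_maybe_expire(GHashTable *connection_track_table,
                                    ConnectionKey *key, Connection *conn)
{
    if (conn->tcp_state == TCPS_CLOSED) {
        g_hash_table_remove(connection_track_table, key);
    }
}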
From: zhanghailiang
There is no need to flush all of the VM's RAM from the cache; only
flush the pages dirtied since the last checkpoint.
Signed-off-by: Li Zhijian
Signed-off-by: Zhang Chen
Signed-off-by: zhanghailiang
---
migration/ram.c | 12
1 file changed, 12 insertions(+)
diff --git a/migration/r
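A sketch of the selective flush, using the same assumed colo_cache field and
per-RAMBlock bitmap as in the RAM-cache sketch earlier in this series:

/* Sketch: walk the dirty bitmap filled while receiving the checkpoint and
 * copy only those pages from the RAM cache into the SVM's memory. */
static void colo_flush_ram_cache(RAMBlock *block)
{
    unsigned long npages = block->used_length >> TARGET_PAGE_BITS;
    unsigned long page = find_next_bit(block->bmap, npages, 0);

    while (page < npages) {
        memcpy(block->host + (page << TARGET_PAGE_BITS),
               block->colo_cache + (page << TARGET_PAGE_BITS),
               TARGET_PAGE_SIZE);
        clear_bit(page, block->bmap);
        page = find_next_bit(block->bmap, npages, page + 1);
    }
}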
Make sure the master starts block replication after the slave's block
replication has started.
Besides, we need to activate the VM's block devices before going into
the COLO state.
Signed-off-by: zhanghailiang
Signed-off-by: Li Zhijian
Signed-off-by: Zhang Chen
---
migration/colo.c | 43 +
On Fri, 05/11 08:59, Eric Blake wrote:
> On 05/11/2018 07:08 AM, Fam Zheng wrote:
> > We don't verify the request range against s->size in the I/O callbacks
> > except for raw_co_pwritev. This is wrong (especially for
> > raw_co_pwrite_zeroes and raw_co_pdiscard), so fix them.
>
> Did you bother i
The incoming side needs to know whether migration is going into the COLO state
before normal migration starts.
Instead of using a VMStateDescription to send colo_state
from the source side to the destination side, we use MIG_CMD_ENABLE_COLO
to indicate whether COLO is enabled or not.
Signed-off-by: zhanghailia
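A sketch of both sides of that signalling; qemu_savevm_command_send() is the
existing command helper in migration/savevm.c, while the wrapper and the
destination-side helper names are assumptions:

/* Source side: emit the command once, before entering COLO. */
void qemu_savevm_send_colo_enable(QEMUFile *f)
{
    qemu_savevm_command_send(f, MIG_CMD_ENABLE_COLO, 0, NULL);
}

/* Destination side: called from the loadvm command dispatcher when
 * MIG_CMD_ENABLE_COLO is seen. */
static int loadvm_handle_enable_colo(MigrationIncomingState *mis)
{
    /* assumed helper that flags COLO mode for the incoming path */
    migration_incoming_enable_colo();
    return 0;
}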
On 05/13/2018 05:50 AM, Stefan Weil wrote:
> This fixes cross builds for the (rare) case where cross binutils
> but no native binutils are installed.
>
> Signed-off-by: Stefan Weil
Reviewed-by: Philippe Mathieu-Daudé
> ---
> configure | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(
> This patch adds Intel Hexadecimal Object File format support to
> the loader. The file format specification is available here:
> http://www.piclist.com/techref/fileext/hex/intel.htm
>
> The file format is mainly intended for embedded systems
> and microcontrollers, such as Micro:bit Arduino, AR
On Sat, May 12, 2018 at 10:57 AM, Richard Henderson
wrote:
> Signed-off-by: Richard Henderson
> ---
> target/xtensa/translate.c | 229 --
> 1 file changed, 122 insertions(+), 107 deletions(-)
[...]
> -} while (dc->base.is_jmp == DISAS_NEXT &&
> -
On Sat, May 12, 2018 at 10:57 AM, Richard Henderson
wrote:
> Signed-off-by: Richard Henderson
> ---
> target/xtensa/translate.c | 229 --
> 1 file changed, 122 insertions(+), 107 deletions(-)
This patch breaks tests/tcg/xtensa/test_mmu.S cross_page_tb test.
L
On 13 May 2018 at 10:57, Stefan Weil wrote:
> On 13.05.2018 at 11:06, Stefan Weil wrote:
>> It now prevents compiler warnings (enabled with -Wimplicit-fallthrough=
>> or -Wextra) as intended.
>>
>> Signed-off-by: Stefan Weil
>> ---
>>
>> I suggest to add and use a similar macro QEMU_FALLTHROUGH(
With a VGICv3 KVM device, if the number of vcpus exceeds the
capacity of the legacy redistributor region (123 redistributors),
we now attempt to register the second redistributor region. This
extends the number of vcpus to 512 assuming the host kernel supports:
- up to 512 vcpus
- VGICv3 KVM device
This patch allows the creation of a GICv3 node with 1 or 2
redistributor regions depending on the number of smp_cpus.
The second redistributor region is located just after the
existing RAM region, at 256GB, and contains up to (512 - 123) vcpus.
Please refer to the kernel documentation for further node
Let's check whether KVM_VGIC_V3_ADDR_TYPE_REDIST_REGION is supported.
If not, we check that the number of redistributor regions is equal to 1 and use
the legacy KVM_VGIC_V3_ADDR_TYPE_REDIST attribute. Otherwise we use
the new attribute and allow multiple regions to be registered with the
KVM device.
Signed-off-by: Eric Au
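A simplified sketch of that decision; kvm_device_check_attr() and the two
attribute constants exist, while the helper names and the nb_redist_regions
field are assumptions:

static int vgic_v3_setup_redist_regions(GICv3State *s, int dev_fd)
{
    bool multi = kvm_device_check_attr(dev_fd, KVM_DEV_ARM_VGIC_GRP_ADDR,
                                       KVM_VGIC_V3_ADDR_TYPE_REDIST_REGION);

    if (!multi) {
        /* Older kernel: only the single legacy region can be registered. */
        if (s->nb_redist_regions > 1) {
            return -EINVAL;
        }
        return vgic_v3_set_legacy_redist(s, dev_fd);      /* assumed helper */
    }
    /* Newer kernel: register every configured region via the new attribute. */
    return vgic_v3_register_redist_regions(s, dev_fd);    /* assumed helper */
}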
Depending on the number of smp_cpus we now register one or two
GICR structures.
Signed-off-by: Eric Auger
---
hw/arm/virt-acpi-build.c | 9 +
1 file changed, 9 insertions(+)
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index 92ceee9..6a4340a 100644
--- a/hw/arm/virt-
To prepare for multiple redistributor regions, we introduce
an array of uint32_t properties that stores the redistributor
count of each redistributor region.
The non-accelerated VGICv3 only supports a single redistributor region.
The capacity of all redist regions is checked against the number of
vcpu
This updates KVM/ARM headers against
https://github.com/eauger/linux/tree/v4.17-rc2-rdist-regions-v6
Signed-off-by: Eric Auger
---
linux-headers/asm-arm/kvm.h | 1 +
linux-headers/asm-arm64/kvm.h | 1 +
2 files changed, 2 insertions(+)
diff --git a/linux-headers/asm-arm/kvm.h b/linux-headers/
For the KVM_VGIC_V3_ADDR_TYPE_REDIST_REGION attribute, the attribute
data pointed to by kvm_device_attr.addr is an OR of the
redistributor region address and other fields, such as the index
of the redistributor region and the number of redistributors the
region can contain.
The existing machine init don
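An illustrative packing helper; the bit positions below are my reading of the
v4.17-era kernel documentation and should be treated as an assumption, with
the kernel UAPI being the authoritative definition:

/* Hypothetical field layout, for illustration only:
 *   bits [63:52] count, bits [51:16] base (64KB aligned), bits [11:0] index */
#define RDIST_REGION_COUNT_SHIFT  52
#define RDIST_REGION_BASE_MASK    0x000fffffffff0000ULL
#define RDIST_REGION_INDEX_MASK   0xfffULL

static uint64_t redist_region_attr(uint64_t base, uint64_t count,
                                   uint64_t index)
{
    return (count << RDIST_REGION_COUNT_SHIFT) |
           (base & RDIST_REGION_BASE_MASK) |
           (index & RDIST_REGION_INDEX_MASK);
}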
Currently the max number of VCPUs usable along with the KVM GICv3
device is limited to 123. The rationale is that a single redistributor
region was supported, and the latter was set to [0x80A, 0x900]
within the guest physical address space, surrounded by the DIST and UART
MMIO regions.
[1] now al
ping for review
On Sat, May 5, 2018 at 3:50 PM Ivan Ren wrote:
> qemu-img info with a block device which has a qcow2 format always
> returns 0 for disk size, and this cannot reflect the qcow2 size
> and the used space of the block device. This patch returns the
> allocated size of the qcow2 as the di
> Doesn't this defeat the purpose of preallocation? Usually, the idea with
> preallocation is that you don't need to update any metadata on the first
> write, but if you set QCOW_OFLAG_ZERO, we do need a metadata update
> again.
>
> So what's the advantage compared to not preallocating at all?
Yes
On 12/05/2018 at 07:02, Richard Henderson wrote:
> [ Ho, hum. I didn't clear out my scratch directory before sending v1.0. ]
>
> FYI, I've only tested this with linux-user-test-0.3 and
> our qemu coldfire testing kernel.
I've tested m68k-softmmu with Q800 emulation and started an LXC
container
On 13.05.2018 at 11:06, Stefan Weil wrote:
> It now prevents compiler warnings (enabled with -Wimplicit-fallthrough=
> or -Wextra) as intended.
>
> Signed-off-by: Stefan Weil
> ---
>
> I suggest to add and use a similar macro QEMU_FALLTHROUGH()
> for the rest of the code and can provide a patch
It now prevents compiler warnings (enabled with -Wimplicit-fallthrough=
or -Wextra) as intended.
Signed-off-by: Stefan Weil
---
I suggest adding and using a similar macro, QEMU_FALLTHROUGH(),
for the rest of the code, and I can provide a patch if that's
fine for everyone.
Regards
Stefan
disas/libvix
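A minimal sketch of what such a macro could look like, assuming GCC 7+ (older
compilers get a no-op); this is only an illustration of the suggestion, not
the macro touched by this patch:

/* Hypothetical QEMU_FALLTHROUGH, for illustration only. */
#if defined(__GNUC__) && __GNUC__ >= 7
#define QEMU_FALLTHROUGH __attribute__((fallthrough))
#else
#define QEMU_FALLTHROUGH do { } while (0)  /* fall through */
#endif

static int classify(int n)
{
    switch (n) {
    case 0:
        n += 1;
        QEMU_FALLTHROUGH;       /* silences -Wimplicit-fallthrough */
    case 1:
        return n + 1;
    default:
        return n;
    }
}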
This fixes cross builds for the (rare) case where cross binutils
but no native binutils are installed.
Signed-off-by: Stefan Weil
---
configure | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/configure b/configure
index 24c411e346..4f6ace1ed4 100755
--- a/configure
+++ b
Richard Henderson writes:
> Changes since v3:
> * Fixup rebase vs target-arm.next. One of the middle
> patches had conflicts resolved incorrectly, so the
> patch set was non-bisectable.
I've tested with the new RISU set:
http://people.linaro.org/~alex.bennee/testcases/arm64.risu/t
Richard Henderson writes:
> Cc: qemu-sta...@nongnu.org
> Reviewed-by: Alex Bennée
> Signed-off-by: Richard Henderson
Hmm oddly this fails to apply:
Applying: target/arm: Implement FCVT (scalar,integer) for fp16
Using index info to reconstruct a base tree...
M target/arm/helper.c
M