If a PCI device is not reset by the VM (by a write into its config space)
before the VM unplugs it, then when the VM reboots, QEMU may assert:
pcibus_reset: Assertion `bus->irq_count[i] == 0' failed
Signed-off-by: herongguang
---
hw/pci/pci.c | 1 +
1 file changed, 1 insertion(+)
diff --git
On 2017/4/25 7:45, Michael S. Tsirkin wrote:
On Mon, Apr 24, 2017 at 09:12:29PM +0800, Herongguang (Stephen) wrote:
If a PCI device is not reset by the VM (by a write into its config space)
before the VM unplugs it, then when the VM reboots, QEMU may assert:
pcibus_reset: Assertion `bus->irq_count[i] == 0' failed
If a PCI device is not reset by the VM (by a write into its config space)
before the VM unplugs it, then when the VM reboots, QEMU may assert:
pcibus_reset: Assertion `bus->irq_count[i] == 0' failed
Signed-off-by: herongguang
---
Is there a need to call pci_do_device_reset()?
---
hw/pci/pci.c
On 2017/4/13 7:51, Stefano Stabellini wrote:
On Wed, 12 Apr 2017, Herongguang (Stephen) wrote:
On 2017/4/12 6:32, Stefano Stabellini wrote:
On Tue, 11 Apr 2017, hrg wrote:
On Tue, Apr 11, 2017 at 3:50 AM, Stefano Stabellini
wrote:
On Mon, 10 Apr 2017, Stefano Stabellini wrote:
On Mon, 10
On 2017/4/12 14:17, Alexey G wrote:
On Tue, 11 Apr 2017 15:32:09 -0700 (PDT)
Stefano Stabellini wrote:
On 2017/4/6 0:16, Paolo Bonzini wrote:
On 20/03/2017 15:21, Herongguang (Stephen) wrote:
We encountered a problem where, when a domain starts, SeaBIOS fails to
online a vCPU.
After investigation, we found that the cause is in kvm-kmod: the
KVM_APIC_INIT bit in
vcpu->arch.apic->pending_
From 8f5b9d2c2944ea7cd8149e9d3b4088f487217d20 Mon Sep 17 00:00:00 2001
From: herongguang
Date: Mon, 27 Mar 2017 15:08:59 +0800
Subject: [PATCH] KVM: pci-assign: do not map smm memory slot pages in vt-d
page table
or VM memory pages are not put and thus are leaked in kvm_iommu_unmap_memslots() on
destroy
From f6f0ee6831488bef7af841cb86f3d85a04848fe5 Mon Sep 17 00:00:00 2001
From: herongguang
Date: Mon, 27 Mar 2017 15:08:59 +0800
Subject: [PATCH] KVM: pci-assign: do not map smm memory slot pages
in vt-d page table
or VM memory pages are not put and thus are leaked in kvm_iommu_unmap_memslots() on
destroy,
or pages are not unmapped and freed
Signed-off-by: herongguang
---
arch/x86/kvm/iommu.c | 6 ++--
1 file changed, 4 insertions(+), 2 deletions(-)
Well, should we change pci-assign to not map SMM slots instead, like VFIO does?
diff --git a/arch/x86/kvm/iommu.c b/arch/x86/kvm/iommu.c
index
_VALID_NMI_PENDING | KVM_VCPUEVENT_VALID_SIPI_VECTOR;
    }
+    if (CPU(cpu)->cpu_index == 1) {
+        fprintf(stderr, "vcpu 1 sleep\n");
+        sleep(10);
+    }
+
    return kvm_vcpu_ioctl(CPU(cpu), KVM_SET_VCPU_EVENTS, &events);
}
On 2017/3/20 22:21, Herongguang (Stephen) wrote:
Hi,
W
Hi,
We encountered a problem where, when a domain starts, SeaBIOS fails to online a
vCPU.
After investigation, we found that the cause is in kvm-kmod: the KVM_APIC_INIT bit
in
vcpu->arch.apic->pending_events was overwritten by QEMU, and thus an INIT IPI
sent
to the AP was lost. QEMU does this since li
On 2017/2/24 23:14, Paolo Bonzini wrote:
On 24/02/2017 16:10, Chris Friesen wrote:
On 02/23/2017 08:23 PM, Herongguang (Stephen) wrote:
On 2017/2/22 22:43, Paolo Bonzini wrote:
Hopefully Gaohuai and Rongguang can help with this too.
Paolo
Yes, we are looking into and testing this
On 2017/2/24 10:23, Herongguang (Stephen) wrote:
On 2017/2/22 22:43, Paolo Bonzini wrote:
On 22/02/2017 14:31, Chris Friesen wrote:
Can you reproduce it with kernel 4.8+? I'm suspecting commit
4e59516a12a6 ("kvm: vmx: ensure VMCS is current while enabling PML",
2016-0
On 2017/2/22 22:43, Paolo Bonzini wrote:
On 22/02/2017 14:31, Chris Friesen wrote:
Can you reproduce it with kernel 4.8+? I'm suspecting commit
4e59516a12a6 ("kvm: vmx: ensure VMCS is current while enabling PML",
2016-07-14) to be the fix.
I can't easily try with a newer kernel, the s
Hi, Chris Friesen, did you solve the problem?
On 2017/2/9 22:37, Herongguang (Stephen) wrote:
Hi.
I had a problem when repeatedly live-migrating a VM between two compute
nodes.
The phenomenon was that the KVM module crashed and the host then rebooted.
However, I cannot reliably trigger
Hi.
I had a problem when repeatedly live-migrating a VM between two compute
nodes.
The phenomenon was that the KVM module crashed and the host then rebooted.
However, I cannot reliably trigger this BUG.
The backtrace is the same as http://www.spinics.net/lists/kvm/msg138475.html.
The cr
On 2016/9/23 12:59, herongguang wrote:
From: He Rongguang
handle KVM_VCPUEVENT_VALID_SMM properly, or kvm-kmod/the kernel will crash
on the migration destination in gfn_to_rmap(), since kvm_memslots_for_spte_role
is false whilst (vcpu->arch.hflags & HF_SMM_MASK) is true
Signed-off-by: hero
On 2016/9/23 15:17, Paolo Bonzini wrote:
On 22/09/2016 15:16, Herongguang (Stephen) wrote:
I have some concerns:
1. For example, vhost does not know about as_id; I wonder whether guests in
SMM can operate the disk or Ethernet card, as in
that case vhost would not log dirty pages correctly, without
On 2016/9/23 16:59, Paolo Bonzini wrote:
On 23/09/2016 10:51, Herongguang (Stephen) wrote:
On 2016/9/23 15:17, Paolo Bonzini wrote:
On 22/09/2016 15:16, Herongguang (Stephen) wrote:
I have some concerns:
1. For example, vhost does not know about as_id; I wonder whether guests in
SMM can
From: He Rongguang
handle KVM_VCPUEVENT_VALID_SMM properly, or kvm-kmod/the kernel will crash
on the migration destination in gfn_to_rmap(), since kvm_memslots_for_spte_role
is false whilst (vcpu->arch.hflags & HF_SMM_MASK) is true
Signed-off-by: herongguang
---
arch/x86/kvm/x86.c | 1 +
On 2016/9/22 21:16, Herongguang (Stephen) wrote:
On 2016/9/14 17:05, Paolo Bonzini wrote:
On 14/09/2016 09:55, Herongguang (Stephen) wrote:
Hi,
We found a problem: when a Red Hat 6 VM reboots (in the grub countdown
UI), migrating this VM results in a difference in the VM's memory between
On 2016/9/22 17:29, Paolo Bonzini wrote:
On 22/09/2016 09:51, Herongguang (Stephen) wrote:
After making memory consistent between source and destination
(https://lists.gnu.org/archive/html/qemu-devel/2016-09/msg03069.html),
we can
still reproduce an instruction emulation failure in
On 2016/9/14 17:05, Paolo Bonzini wrote:
On 14/09/2016 09:55, Herongguang (Stephen) wrote:
Hi,
We found a problem: when a Red Hat 6 VM reboots (in the grub countdown
UI), migrating this VM results in a difference in the VM's memory between
the source and destination sides. The difference always
Fix events.flags (KVM_VCPUEVENT_VALID_SMM) being overwritten with 0.
Signed-off-by: He Rongguang
---
Note: without patch 2, this would result in a kvm-kmod crash, as described in
patch 2
---
target-i386/kvm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/target-i386/kvm.c b/target-i
After making memory consistent between source and destination
(https://lists.gnu.org/archive/html/qemu-devel/2016-09/msg03069.html), we can
still reproduce an instruction emulation failure on the destination side if the
migration happens while the VM is in the grub stage:
[2016-09-15 06:29:24] monitor_qapi_event_emit:47
Hi,
We found a problem: when a Red Hat 6 VM reboots (in the grub countdown UI),
migrating this VM results in a difference in the VM's memory between the source
and destination sides. The difference always resides in GPA 0xA~0xC, i.e. the
SMRAM area.
Occasionally this results in a VM instruction emulatio
In tight_encode_indexed_rect32, buf's (or src's) size is count. In the for loop,
the intended logic is that i is an index into src, so i should be
incremented whenever src is incremented.
This is broken where src is incremented before the while loop but i is not,
resulting in an off-by-one bug in the while loop.
Sign