Hi,
Does anyone know this issue?
Thanks,
Paul
On Sat, Aug 13, 2011 at 8:10 AM, 编码人 wrote:
>
> Hi,
>
> My KVM guest OS is Windows 7. If I change the clock on the KVM host
> (Red Hat Enterprise Linux 6), Windows may hang -- I can't move the
> mouse in the VNC desktop. (If I only set clock to be several min
On 08/15/2011 11:15 PM, Paolo Bonzini wrote:
On 08/15/2011 01:27 PM, Umesh Deshpande wrote:
Yes, the mru list patch would obviate the need to hold the ram_list
mutex in qemu_get_ram_ptr.
Feel free to take it and complete it with locking then!
Also, I was planning to protect the whole migr
Hi there,
I have an issue where launching more than one guest concurrently causes
them all to hang within 1 to 60 minutes, even with no activity.
I mean that:
- each guest takes 100% CPU
- guests do not respond to ssh, ACPI shutdown/restart etc.
- libvirt daemon does not respond
This only hap
2011/8/16 编码人 :
> Hi,
>
> Does anyone know this issue?
>
> Thanks,
> Paul
>
> On Sat, Aug 13, 2011 at 8:10 AM, 编码人 wrote:
>>
>> Hi,
>>
>> My KVM guest OS is Windows 7. If I change the clock on the KVM host
>> (Red Hat Enterprise Linux 6), Windows may hang -- I can't move the
>> mouse in the VNC desktop. (If
Each time I build qemu-kvm, my build script complains
that it is needlessly linked against libglib-2.0 without
using any symbols from that library. So is glib really
needed for qemu-kvm? How is it different from qemu-0.15?
Thanks,
/mjt
On 08/15/2011 06:42 PM, Isaku Yamahata wrote:
On Mon, Aug 15, 2011 at 12:29:37PM -0700, Avi Kivity wrote:
> On 08/12/2011 04:07 AM, Isaku Yamahata wrote:
>> This is a character device to hook page access.
>> The page fault in the area is reported to another user process by
>> this chardriver.
Discussion was titled: Fix refcounting in hugetlbfs quota handling
This patch fixes a race between the umount of a hugetlbfs filesystem and
quota updates in that filesystem, which can result in the filesystem
quota record being updated after the record structure has been freed.
Rather than an
This is not a KVM-specific problem. We saw the race while doing large async
RDMA in our network driver, but I can imagine it happening with a slow NFS
server, or other DMA that could complete after umount.
What I need, in order to push this upstream, is:
1. For you to light a fire under my feet to get
Linus,
Please pull from
ssh://master.kernel.org/pub/scm/virt/kvm/kvm.git kvm-updates/3.1
to receive fixes for Kconfig problems introduced by the KVM steal time
implementation.
Randy Dunlap (2):
KVM: fix TASK_DELAY_ACCT kconfig warning
KVM: uses TASKSTATS, depends on NET
arch/
This patch adds support for an optional stats vq that works similarly to the
stats vq provided by virtio-balloon.
The purpose of this change is to allow collection of statistics about working
virtio-blk devices to easily analyze performance without having to tap into
the guest.
Cc: Rusty Russell
On 08/16/2011 03:57 AM, Michael Tokarev wrote:
Each time I build qemu-kvm, my build script complains
that it is needlessly linked against libglib-2.0 without
using any symbols from that library. So is glib really
needed for qemu-kvm? How is it different from qemu-0.15?
glib is only needed for
Hi, Lidong,
Yes, running an NTP client in the guest OS can synchronize the time. But if
I change the clock of the host, the guest OS may immediately have
problems, and the clock may get no chance to synchronize.
Thanks,
Paul
2011/8/16 lidong chen :
> 2011/8/16 编码人 :
>> Hi,
>>
>> Does anyone know t
On 08/16/2011 03:50 PM, 编码人 wrote:
> Hi,
>
> Does anyone know this issue?
>
Could you describe your environment in more detail, please?
- Your host information (32/64, cpuinfo)
- Your guest information (32/64)
And please enable the kvm trace events to see what happened when
the guest was hangi
The following patch series deals with VCPU and iothread starvation during the
migration of a guest. Currently the iothread is responsible for performing the
guest migration. It holds qemu_mutex during the migration and doesn't allow the
VCPU threads to enter qemu mode, delaying their return to the guest. The gu
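The starvation the series describes can be pictured with a toy per-chunk locking sketch (plain pthreads, hypothetical names, not the actual QEMU code): the migration worker re-takes the big lock for each chunk instead of holding it across the whole transfer, so other threads (the "VCPUs") get a chance to acquire it in between.

```c
/* Toy sketch only -- all names are hypothetical, not QEMU's API. */
#include <pthread.h>

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;
static int pages_sent;

/* Send one chunk under the lock, then drop it to let other threads run. */
static void migrate_chunk(int pages)
{
    pthread_mutex_lock(&big_lock);
    pages_sent += pages;          /* stand-in for copying dirty pages */
    pthread_mutex_unlock(&big_lock);
}

/* Migrate 'total' pages, holding the lock only per chunk. */
int migrate_all(int total, int chunk)
{
    for (int sent = 0; sent < total; sent += chunk) {
        migrate_chunk(chunk);
    }
    return pages_sent;
}
```

Holding the lock for the whole loop would reproduce the starvation described above; releasing it per chunk is the behavior the series moves toward.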
A ramlist mutex is introduced to protect RAMBlock list traversal in the
migration thread against block addition/removal by the iothread.
Signed-off-by: Umesh Deshpande
---
 cpu-all.h     |  2 ++
 exec.c        | 19 +++
 qemu-common.h |  2 ++
 3 files changed, 23 insertions
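The rule such a patch enforces can be sketched as follows (a minimal illustration with hypothetical names, not QEMU's RAMBlock API): every traversal, insertion, and removal of the shared block list goes through the same mutex, so a walker in one thread never sees a half-unlinked node from another.

```c
/* Hypothetical sketch of mutex-guarded list access, not QEMU code. */
#include <pthread.h>
#include <stddef.h>

struct ram_block {
    const char *name;
    struct ram_block *next;
};

static struct ram_block *blocks;
static pthread_mutex_t ramlist_lock = PTHREAD_MUTEX_INITIALIZER;

/* Addition (what the iothread would do) takes the lock... */
void block_add(struct ram_block *b)
{
    pthread_mutex_lock(&ramlist_lock);
    b->next = blocks;
    blocks = b;
    pthread_mutex_unlock(&ramlist_lock);
}

/* ...and so does traversal (what the migration thread would do). */
int block_count(void)
{
    int n = 0;
    pthread_mutex_lock(&ramlist_lock);
    for (struct ram_block *b = blocks; b; b = b->next) {
        n++;
    }
    pthread_mutex_unlock(&ramlist_lock);
    return n;
}
```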
This patch creates a new list of RAM blocks in MRU order. So that separate
locking rules can be applied to the regular RAM block list and the MRU list.
Signed-off-by: Paolo Bonzini
---
 cpu-all.h |  2 ++
 exec.c    | 17 ++++++++++++-----
 2 files changed, 14 insertions(+), 5 deletions(-)
dif
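The MRU idea itself is simple enough to sketch (hypothetical names, not the actual patch): blocks carry a second link used only for the MRU chain, and a lookup unlinks the hit and relinks it at the front, so the hot block is found first next time.

```c
/* Minimal MRU-chain sketch; names are illustrative, not QEMU's. */
#include <stddef.h>
#include <string.h>

struct block {
    const char *name;
    struct block *mru_next;   /* link for the MRU chain only */
};

static struct block *mru_head;

void mru_insert(struct block *b)
{
    b->mru_next = mru_head;
    mru_head = b;
}

/* Find a block by name and move it to the front of the MRU chain. */
struct block *mru_lookup(const char *name)
{
    for (struct block **pp = &mru_head; *pp; pp = &(*pp)->mru_next) {
        if (strcmp((*pp)->name, name) == 0) {
            struct block *b = *pp;
            *pp = b->mru_next;        /* unlink from current position */
            b->mru_next = mru_head;   /* relink at the front */
            mru_head = b;
            return b;
        }
    }
    return NULL;
}
```

Because reordering touches only the MRU links, the primary block list can keep its own, separate locking rule, which is the point of the patch.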
The following patch makes the iothread wait until the migration thread responds
to the migrate_cancel request and terminates its execution.
Signed-off-by: Umesh Deshpande
---
 buffered_file.c | 13 -
 buffered_file.h |  3 +++
 hw/hw.h         |  5 -
 migration.c     |
This patch creates a separate thread for the guest migration on the source side.
The migrate_cancel request from the iothread is handled asynchronously. That is,
the iothread submits migrate_cancel to the migration thread and returns, while
the migration thread attends to this request at its next iteration to
This patch creates a migration bitmap, which is periodically kept in sync with
the qemu bitmap. A separate copy of the dirty bitmap for migration avoids
concurrent access to the qemu bitmap from the iothread and the migration thread.
Signed-off-by: Umesh Deshpande
---
arch_init.c | 26
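The two-bitmap scheme can be sketched like this (a toy model with hypothetical names, not the patch itself): the iothread marks pages dirty in its own bitmap, and a sync step ORs those bits into the migration thread's private copy and clears them, so each side otherwise touches only its own bitmap.

```c
/* Toy sketch of the two-bitmap dirty-tracking scheme; not QEMU code. */
#include <stdint.h>
#include <stddef.h>

#define BITMAP_WORDS 4

static uint64_t qemu_bitmap[BITMAP_WORDS];      /* written by "iothread" */
static uint64_t migration_bitmap[BITMAP_WORDS]; /* read by "migration thread" */

void mark_dirty(size_t page)
{
    qemu_bitmap[page / 64] |= 1ULL << (page % 64);
}

/* Pull newly dirtied pages into the migration bitmap; in real code this
 * step would run under the appropriate lock. */
void sync_migration_bitmap(void)
{
    for (size_t i = 0; i < BITMAP_WORDS; i++) {
        migration_bitmap[i] |= qemu_bitmap[i];
        qemu_bitmap[i] = 0;
    }
}

int migration_page_dirty(size_t page)
{
    return (migration_bitmap[page / 64] >> (page % 64)) & 1;
}
```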
From a9670ddff84080c56183e2d678189e100f891174 Mon Sep 17 00:00:00 2001
From: Liu, Jinsong
Date: Wed, 17 Aug 2011 11:36:28 +0800
Subject: [PATCH] KVM: emulate lapic tsc deadline timer for hvm
This patch emulates the lapic tsc deadline timer for hvm:
Enumerate tsc deadline timer capability by CPUID;
From: root
This patchset adds a cgroup client test module,
plus support libraries for doing cgroup testing:
cgroup.py:
* Structure for different cgroup subtests
* Contains basic "cgroup-memory" test
cgroup_common.py:
* Library for cgroup handling (intended to be
used from kvm test in the futu
On 08/16/2011 08:56 PM, Umesh Deshpande wrote:
@@ -3001,8 +3016,10 @@ void qemu_ram_free_from_ptr(ram_addr_t addr)
     QLIST_FOREACH(block, &ram_list.blocks, next) {
         if (addr == block->offset) {
+            qemu_mutex_lock_ramlist();
             QLIST_REMOVE(block, next);