>>> > I agree it's either COW breaking or (similarly) locking pages that
>>> > the guest hasn't touched yet.
>>> >
>>> > You can use prealloc or "-rt mlock=on" to avoid this problem.
>>> >
>>> > Paolo
>> Or the new shared flag - IIRC shared VMAs don't do COW either.
>
>Only if the problem isn't locking.
>> What if you detach and re-attach?
>> Is it fast then?
>> If so, this means the issue is the COW breaking that occurs with
>> get_user_pages, not the translation as such.
>> Try hugepages with prealloc - does that help?
>
>I agree it's either COW breaking or (similarly) locking pages that the guest
>hasn't touched yet.
Hi, all
The VM will get stuck for a while (about 6 s for a VM with 20 GB of memory)
when attaching a pass-through PCI card to a previously non-pass-through VM for
the first time.
The reason is that the host has to build the whole VT-d GPA->HPA DMAR
page table, which takes a lot of time, and during this time the VM cannot run.
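As the replies above suggest, preallocating or pinning guest memory shifts this cost to VM startup. Illustrative flags (assuming a QEMU new enough to have them; not taken from the original thread):

    qemu-system-x86_64 -m 20480 -mem-path /dev/hugepages -mem-prealloc ...
    qemu-system-x86_64 -m 20480 -realtime mlock=on ...

With all guest pages populated and pinned up front, building the DMAR table no longer has to fault in and pin 20 GB at attach time.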
>> > > > On 26/11/2013 13:40, Zhanghaoyu (A) wrote:
>> > > > > When the guest sets an irq's smp_affinity, a VMEXIT occurs, the
>> > > > > vcpu thread's ioctl then returns from the hypervisor to QEMU, and
>> > > > > the vcpu thread asks the
> No, this would be exactly the same code that is running now:
>
> mutex_lock(&kvm->irq_lock);
> old = kvm->irq_routing;
> kvm_irq_routing_update(kvm, new);
> mutex_unlock(&kvm->irq_lock);
>
> synchronize_rcu();
> > >>I understood the proposal was also to eliminate the
> > >>synchronize_rcu(), so while new interrupts would see the new
> > >>routing table, interrupts already in flight could pick up the old one.
> >Isn't that always the case with RCU? (See my answer above: "the
> >v
>> > I don't think a workqueue is even needed. You just need to use
>> > call_rcu to free "old" after releasing kvm->irq_lock.
>> >
>> > What do you think?
>>
>> It should be rate-limited somehow: since it is guest-triggerable, a guest
>> may cause the host to allocate a lot of memory this way.
>
Why do
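A minimal sketch of the call_rcu() approach discussed above (struct and callback names are illustrative, not the actual KVM code):

    struct irq_routing_table {
        struct rcu_head rcu;
        /* ... routing entries ... */
    };

    static void free_routing_table_rcu(struct rcu_head *head)
    {
        struct irq_routing_table *old =
            container_of(head, struct irq_routing_table, rcu);
        kfree(old);
    }

    /* Update path: publish the new table under the lock, then defer freeing
     * the old one to after a grace period; no vcpu thread blocks here. */
    mutex_lock(&kvm->irq_lock);
    old = kvm->irq_routing;
    rcu_assign_pointer(kvm->irq_routing, new);
    mutex_unlock(&kvm->irq_lock);
    call_rcu(&old->rcu, free_routing_table_rcu);

This is also why the rate-limiting concern above matters: each update queues another old table that stays allocated until its grace period elapses.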
Hi all,
When the guest sets an irq's smp_affinity, a VMEXIT occurs, the vcpu thread's
ioctl returns from the hypervisor to QEMU, and the vcpu thread then asks the
hypervisor to update the irq routing table.
In kvm_set_irq_routing, synchronize_rcu is called, and the current vcpu thread
is blocked for a long time waiting
Hi, all
What's the difference between the Linux guest kernel parameter values for
idle= (e.g. idle=poll, idle=halt, idle=mwait), especially in performance?
Taking performance into account, which one is best?
In my opinion, if the number of all VMs' vcpus is far more than the number of
pcpus, e.g. in a SPECvirt test, idle=halt is better for the server's total
performance.
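For reference, this is set on the guest kernel command line, e.g. (illustrative):

    kernel /boot/vmlinuz-3.0.58 root=/dev/vda1 ... idle=halt

idle=poll keeps an idle vcpu spinning, burning pcpu time that other vcpus could use, while idle=halt executes HLT and traps to the hypervisor so the pcpu can be rescheduled, which matches the intuition above for overcommitted hosts.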
>> I read the words below in the report <... forecast (May 29, 2013)>:
>> We were going to remove the old block-migration code
>> Then people fixed it
>> Good: it works now
>> Bad: we have to maintain both
>> It uses the same port as migration
>> You need to migrate all or none of the block devices
>>
>> The
Introduce the MIG_STATE_CANCELLING state to avoid starting a new migration task
while the previous one still exists.
Signed-off-by: Zeng Junliang
Signed-off-by: Zhang Haoyu
---
 migration.c | 26 ++++++++++++++++----------
 1 files changed, 16 insertions(+), 10 deletions(-)
diff --git a/migration.c b/migration.c
Avoid a bogus COMPLETED->CANCELLED transition.
There is a window between setting the COMPLETED state and the migration thread
actually exiting; issuing a cancel during this window yields the problematic
COMPLETED->CANCELLED transition.
Signed-off-by: Zeng Junliang
Signed-off-by: Zhang Haoyu
---
 migration.c |
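Both patches amount to making migration state transitions explicit and atomic, so that a race like COMPLETED->CANCELLED cannot slip in. A minimal sketch of the idea (modeled on the compare-and-swap style QEMU later adopted; names are illustrative, this is not the patch itself):

    typedef struct MigrationState { int state; } MigrationState;  /* illustrative */

    /* Transition only if the state is still what the caller last saw, so a
     * cancel racing with completion simply loses the race. */
    static bool migrate_set_state(MigrationState *s, int old_state, int new_state)
    {
        return __sync_bool_compare_and_swap(&s->state, old_state, new_state);
    }

    /* cancel path: only an ACTIVE migration may enter CANCELLING, e.g.
     * migrate_set_state(s, MIG_STATE_ACTIVE, MIG_STATE_CANCELLING); */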
>Hi all,
>
>Does QEMU have a storage-migration tool like the io-mirroring inside VMware?
>io-mirroring means that all the IOs are sent to both the source and the
>destination at the same time.
drive_mirror may be your choice.
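For example, via the QMP command of the same family (a hedged sketch; the device name and target path here are made up):

    { "execute": "drive-mirror",
      "arguments": { "device": "drive-virtio-disk0",
                     "target": "/mnt/dest/disk0.qcow2",
                     "format": "qcow2",
                     "sync": "full" } }

Once the mirror job is in sync, every guest write is applied to both the source and the target image, which is essentially the io-mirroring behaviour asked about.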
Thanks,
Zhang Haoyu
>
>Thanks!
>>> Avoid starting a new migration task while the previous one still
>>> exists.
>>>
>>> Can you explain how to reproduce the problem?
>>>
>> When a network disconnection between source and destination happened,
>> the migration thread got stuck at the stack below.
>> Then I cancelled the migration task; the m
> > Avoid starting a new migration task while the previous one still
> > exists.
>
> Can you explain how to reproduce the problem?
>
When a network disconnection between source and destination happened, the
migration thread got stuck at the stack below:
#0 0x7f07e96c8288 in writev () from /lib64/libc.so
Avoid starting a new migration task while the previous one still exists.
Signed-off-by: Zeng Junliang
---
 migration.c | 34 ++++++++++++++++++++++++------------
 1 files changed, 22 insertions(+), 12 deletions(-)
diff --git a/migration.c b/migration.c
index 2b1ab20..ab4c439 100644
--- a/migration.c
Hi, Juan
I read the words below in the report <... forecast (May 29, 2013)>:
We were going to remove the old block-migration code
Then people fixed it
Good: it works now
Bad: we have to maintain both
It uses the same port as migration
You need to migrate all or none of the block devices
The old block-migration code mentioned above is t
The comment of ram_handle_compressed needs to be changed accordingly, to
something like: "Do not memset pages to zero if they already read as zero, to
avoid allocating zero pages and consuming memory unnecessarily."
Thanks,
Zhang Haoyu
> The madvise for zeroed out pages was introduced when every transferred
> zer
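For context, the function under discussion looks roughly like this (paraphrased from QEMU's migration code of that era; a sketch, not the exact source):

    /* A "compressed" page is a page filled with the single byte value 'ch'.
     * Skip the memset when the target byte is zero and the page already
     * reads as zero, so the host never allocates a backing page for it. */
    void ram_handle_compressed(void *host, uint8_t ch, uint64_t size)
    {
        if (ch != 0 || !buffer_is_zero(host, size)) {
            memset(host, ch, size);
        }
    }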
Hi, all
Could someone explain in detail the buggy implementation of traditional live
migration with storage, i.e. migrating the storage iteratively?
Thanks,
Zhang Haoyu
hi Michal,
I used libvirt-1.0.3 and ran the command below to perform live migration, but
no pro
When several VMs migrate with RDMA at the same time, the increased pressure
causes probabilistic packet loss and makes source and destination wait for each
other, so some of the VMs may be blocked during the migration.
Fix the bug by using two completion queues, for sending and receiving
respectively.
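A minimal sketch of the separation with libibverbs (the surrounding QEMU RDMA context is omitted and the variable names are illustrative):

    /* One CQ for send completions and one for receive completions, so a
     * burst of incoming work can no longer starve the send-side polling. */
    struct ibv_cq *send_cq = ibv_create_cq(verbs, cq_depth, NULL, comp_ch, 0);
    struct ibv_cq *recv_cq = ibv_create_cq(verbs, cq_depth, NULL, comp_ch, 0);

    struct ibv_qp_init_attr attr = {
        .send_cq = send_cq,
        .recv_cq = recv_cq,
        .qp_type = IBV_QPT_RC,
        .cap = { .max_send_wr = cq_depth, .max_recv_wr = cq_depth,
                 .max_send_sge = 1, .max_recv_sge = 1 },
    };
    struct ibv_qp *qp = ibv_create_qp(pd, &attr);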
>> Hi, all
>>
>> If we do a live migration after the emulated NIC's MAC has been changed, a
>> RARP with the wrong MAC address will be broadcast via qemu_announce_self on
>> the destination, so a long network disconnection will probably happen.
>
>Good catch.
>
>> I want to do the work below to resolve this problem: 1. change
Hi, all
If we do a live migration after the emulated NIC's MAC has been changed, a RARP
with the wrong MAC address will be broadcast via qemu_announce_self on the
destination, so a long network disconnection will probably happen.
I want to do the work below to resolve this problem:
1. change NICConf's MAC as soon as the emulated N
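A sketch of item 1 as I read it (hypothetical helper; the real change would live in each NIC model's MAC-register write path):

    /* When the guest rewrites the NIC's MAC registers, propagate the new
     * address into NICConf, so that qemu_announce_self() on the destination
     * builds its RARP from the current MAC rather than the stale one. */
    static void nic_update_conf_mac(NICState *nic, const uint8_t mac[6])
    {
        memcpy(nic->conf->macaddr.a, mac, 6);
    }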
Hi, all
A segmentation fault happens when rebooting the VM after hot-unplugging a
virtio NIC; it can be reproduced 100% of the time.
See the similar bug report at https://bugzilla.redhat.com/show_bug.cgi?id=988256
test environment:
host: SLES11SP2 (kernel version: 3.0.58)
qemu: 1.5.1, upstream-qemu (commit 545825d4cda03ea
I tested the combinations of kernel and QEMU below:
+------------------------+------------+-----------+
| kernel                 | QEMU       | migration |
+------------------------+------------+-----------+
| SLES11SP2+kvm-kmod-3.6 | qemu-1.6.0 | GOOD      |
+------------------------+------------+-----------+
Description of problem:
When the guest does a reboot or reset after hot-unplugging a virtio NIC, a
segmentation fault occurs. It can be reproduced 100% of the time.
Similar to https://bugzilla.redhat.com/show_bug.cgi?id=988256
Version-Release number of selected component (if applicable):
Host OS: sles11sp2, kernel version: 3.0.58
qe
>> The QEMU command line (/var/log/libvirt/qemu/[domain name].log),
>> LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none
>> /usr/local/bin/qemu-system-x86_64 -name ATS1 -S -M pc-0.12 -cpu qemu32
>> -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1 -uuid
>> 0505ec91-38
>Hi,
>
>On 05.08.2013 11:09, Zhanghaoyu (A) wrote:
>> When I build upstream QEMU, I encounter a problem: I compile and
>> install the upstream (commit e769ece3b129698d2b09811a6f6d304e4eaa8c29)
>> on an sles11sp2 environment via the command below: cp
>> /boot/config-
hi all,
I met a similar problem to these while performing live-migration or save-restore
tests on the KVM platform (qemu:1.4.0, host:suse11sp2, guest:suse11sp2), running
a tele-communication software suite in the guest:
https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
http://comments.
When a network disconnection occurs during live migration, the migration thread
will be stuck in sendmsg(), because the migration socket is currently in
blocking (~O_NONBLOCK) mode.
Signed-off-by: Zeng Junliang
---
 include/migration/migration.h |  4
 migration-tcp.c               | 23 ++
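The direction of the patch, in miniature (plain POSIX fcntl(2), needing <fcntl.h>; QEMU has its own wrappers for this):

    /* Switch the migration socket to non-blocking mode so a dead peer makes
     * sendmsg() fail with EAGAIN instead of hanging the migration thread. */
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags != -1) {
        fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }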
I searched for "vector_irqfd" globally and found no place that sets or changes
an irqfd's MSI message; only an irqfd's virq or users members may be changed, in
kvm_virtio_pci_vq_vector_use, kvm_virtio_pci_vq_vector_release, etc.
So I think it is meaningless to do the check below in
virtio_pci_vq_vector_unmask:
if (irqfd-
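The truncated check is presumably the MSI-message comparison from virtio-pci.c of that era, roughly of this shape (reconstructed for illustration, not quoted from the mail):

    if (irqfd->msg.data != msg.data || irqfd->msg.address != msg.address) {
        ret = kvm_irqchip_update_msi_route(kvm_state, irqfd->virq, msg);
        ...
    }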
With regard to old-version Linux guests (e.g., RHEL 5.5), ISR processing masks
and unmasks the MSI-X vector every time, which results in a VMEXIT; QEMU then
invokes kvm_irqchip_update_msi_route() to ask the KVM hypervisor to update the
VM's irq routing table. In the KVM hypervisor, synchronizing RCU needed a
I am running a VM (RHEL 5.5) on a KVM hypervisor (linux-3.8 + QEMU 1.4.1), with
an Intel 82576 VF directly assigned to the VM. When TX/RX-ing packets from the
VM to another host via the iperf tool, top on the VM shows that %si is too
high, approximately 95%~100%, but from the view of the host, the VM's total CPU
I started 10 VMs (Windows XP), then ran the geekbench tool on them; after about
2 days, one of them was reset.
I found the reset operation is done by:
int kvm_cpu_exec(CPUArchState *env)
{
    ...
    switch (run->exit_reason) {
    ...
    case KVM_EXIT_SHUTDOWN:
        DPRINTF("shutdown\n");
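(In QEMU's kvm-all.c this case goes on to call qemu_system_reset_request(); on x86 a guest triple fault surfaces as KVM_EXIT_SHUTDOWN, which is why the guest is reset rather than powered off.)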
> > The log of "make V=1" is identical to that of "make", shown below:
> >
> > hw/virtio/dataplane/vring.c: In function 'vring_enable_notification':
> > hw/virtio/dataplane/vring.c:72: warning: implicit declaration of function
> > 'vring_avail_event'
> > hw/virtio/dataplane/vring.c:72: warni
I compiled the QEMU source downloaded from qemu.git
(http://git.qemu.org/git/qemu.git) on 4-9-2013; errors were reported as below:
hw/virtio/dataplane/vring.c: In function 'vring_enable_notification':
hw/virtio/dataplane/vring.c:72: warning: implicit declaration of function
'vring_avail_event'
hw/virtio
>> I start a kvm VM with a vnc (using the zrle protocol) connection; sometimes
>> the qemu program crashes during the startup period with signal SIGABRT.
>> Trying about 20 times, this crash may be reproduced.
>> I guess the cause is memory corruption or a double free.
>
> Which version of QEMU are you running?
I start a kvm VM with a vnc (using the zrle protocol) connection; sometimes the
qemu program crashes during the startup period with signal SIGABRT.
Trying about 20 times, this crash may be reproduced.
I guess the cause is memory corruption or a double free.
The backtrace is shown below:
0x7f32eda3dd95 in