QEMU 1.7 was released, Quantal has about ten days of support left, and
Raring is EOL.
** Changed in: qemu
Status: Fix Committed => Fix Released
** Changed in: qemu-kvm (Ubuntu Quantal)
Status: Triaged => Invalid
** Changed in: qemu-kvm (Ubuntu Raring)
Status: Triaged => Invalid
The fix will be part of QEMU 1.7.0 (commit fc1c4a5, "migration: drop
MADVISE_DONT_NEED for incoming zero pages", 2013-10-24).
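For readers not following the qemu-devel thread: the idea behind that
commit, reconstructed here from its subject line only and not copied
from the patch, is that the incoming side stops discarding zero pages
with madvise(MADV_DONTNEED), because the discard splits the transparent
huge pages backing guest RAM and leaves the restored or migrated VM
running on 4 KiB pages. A minimal, hypothetical C sketch of the
before/after behaviour:

/* Sketch only -- not the actual QEMU patch. */
#define _DEFAULT_SOURCE
#include <string.h>
#include <sys/mman.h>

#define MIG_PAGE_SIZE 4096

void receive_zero_page(void *host_page)
{
#if 0
    /* Pre-1.7 behaviour (simplified): hand the page back to the kernel.
     * This splits any transparent huge page backing host_page. */
    madvise(host_page, MIG_PAGE_SIZE, MADV_DONTNEED);
#else
    /* 1.7 behaviour (simplified): leave the mapping alone and only make
     * sure the page contents read as zero. */
    memset(host_page, 0, MIG_PAGE_SIZE);
#endif
}

Whether the real patch also skips the memset when the page is already
zero is worth checking against the 1.7 tree; the sketch is only about
the dropped madvise call.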
** Changed in: qemu
Status: New => Fix Committed
** Changed in: qemu-kvm (Ubuntu Quantal)
Assignee: Chris J Arges (arges) => (unassigned)
** Changed in: qemu-kvm (Ubuntu Raring)
Assignee: Chris J Arges (arges) => (unassigned)
This bug was fixed in the package qemu-kvm - 1.0+noroms-0ubuntu14.12
---
qemu-kvm (1.0+noroms-0ubuntu14.12) precise-proposed; urgency=low
* migration-do-not-overwrite-zero-pages.patch,
call-madv-hugepage-for-guest-ram-allocations.patch:
Fix performance degradation after migration.
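The second patch name (call-madv-hugepage-for-guest-ram-allocations.patch)
points at the other half of the fix: hinting the kernel that guest RAM
should be backed by transparent huge pages, so that khugepaged can
rebuild huge pages on the destination. A hedged sketch of that idea, not
the Ubuntu patch itself (alloc_guest_ram is an invented name):

/* Illustration only, assuming a Linux host with THP support. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stddef.h>
#include <sys/mman.h>

void *alloc_guest_ram(size_t size)
{
    void *ram = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ram == MAP_FAILED) {
        perror("mmap");
        return NULL;
    }
#ifdef MADV_HUGEPAGE
    /* Ask for transparent huge pages; failure only loses a hint. */
    if (madvise(ram, size, MADV_HUGEPAGE) != 0) {
        perror("madvise(MADV_HUGEPAGE)");
    }
#endif
    return ram;
}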
I have verified this on my local machine using virt-manager's save
memory function, savevm/loadvm via the qemu monitor, and migrate via the
qemu monitor.
** Tags removed: verification-needed
** Tags added: verification-done
Hello Mark, or anyone else affected,
Accepted qemu-kvm into precise-proposed. The package will build now and
be available at
http://launchpad.net/ubuntu/+source/qemu-kvm/1.0+noroms-0ubuntu14.12
in a few hours, and then in the -proposed repository.
Please help us by testing this new package. See
On 07.10.2013 11:55, Paolo Bonzini wrote:
> On 07/10/2013 11:49, Peter Lieven wrote:
>>> It's in general not easy to do this if you take non-x86 targets into
>>> account.
>> What about the dirty way to zero out all non zero pages at the beginning of
>> ram_load?
> I'm not sure I follow?
Something like this for
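(Lieven's snippet is cut off in the archive. Purely as an illustration
of the "dirty way" described above, and not his actual code, a one-off
zeroing pass over a RAM block before the incoming stream is applied
might look roughly like this; the function names and the flat page walk
are assumptions.)

/* Hypothetical sketch: zero every non-zero page up front. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MIG_PAGE_SIZE 4096

static bool page_is_zero(const uint8_t *page)
{
    for (size_t i = 0; i < MIG_PAGE_SIZE; i++) {
        if (page[i] != 0) {
            return false;
        }
    }
    return true;
}

/* Walk a RAM block once and clear every page that is not already zero,
 * so the load path never has to discard zero pages later on. */
void zero_ram_block(uint8_t *host, size_t length)
{
    for (size_t off = 0; off + MIG_PAGE_SIZE <= length; off += MIG_PAGE_SIZE) {
        if (!page_is_zero(host + off)) {
            memset(host + off, 0, MIG_PAGE_SIZE);
        }
    }
}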
** Description changed:
SRU Justification
[Impact]
* Users of QEMU that save their memory states using savevm/loadvm or migrate
experience worse performance after the migration/loadvm. To work around this,
VMs must be completely rebooted. Ideally we should be able to restore a VM
without this performance penalty.
I found that two patches need to be backported to solve this issue:
ad0b5321f1f797274603ebbe20108b0750baee94
211ea74022f51164a7729030b28eec90b6c99a08
I've added the necessary bits into precise and tried a few tests:
1) Measure performance before and after savevm/loadvm.
2) Measure performance bef
On 07/10/2013 11:49, Peter Lieven wrote:
>> It's in general not easy to do this if you take non-x86 targets into
>> account.
> What about the dirty way to zero out all non zero pages at the beginning of
> ram_load?
I'm not sure I follow?
Paolo
On 07.10.2013 11:37, Paolo Bonzini wrote:
On 07/10/2013 08:38, Peter Lieven wrote:
On 06.10.2013 15:57, Zhang Haoyu wrote:
From my testing this has been fixed in the saucy version (1.5.0) of
qemu. It is fixed by this patch:
f1c72795af573b24a7da5eb52375c9aba8a37972
However later in the
On 07/10/2013 08:38, Peter Lieven wrote:
> On 06.10.2013 15:57, Zhang Haoyu wrote:
>>> From my testing this has been fixed in the saucy version (1.5.0) of
>>> qemu. It is fixed by this patch:
>>> f1c72795af573b24a7da5eb52375c9aba8a37972
>>>
>>> However later in the history this commit was reverted, and again broke this.
On 06.10.2013 15:57, Zhang Haoyu wrote:
From my testing this has been fixed in the saucy version (1.5.0) of
qemu. It is fixed by this patch:
f1c72795af573b24a7da5eb52375c9aba8a37972
However later in the history this commit was reverted, and again broke
this. The other commit that fixes this is:
211ea74022f51164a7729030b28eec90b6c99a08
> From my testing this has been fixed in the saucy version (1.5.0) of
> qemu. It is fixed by this patch:
> f1c72795af573b24a7da5eb52375c9aba8a37972
>
> However later in the history this commit was reverted, and again broke
> this. The other commit that fixes this is:
> 211ea74022f51164a7729030b28eec90b6c99a08
From my testing this has been fixed in the saucy version (1.5.0) of qemu. It
is fixed by this patch:
f1c72795af573b24a7da5eb52375c9aba8a37972
However later in the history this commit was reverted, and again broke this.
The other commit that fixes this is:
211ea74022f51164a7729030b28eec90b6c99a08
** Changed in: qemu-kvm (Ubuntu)
Status: Triaged => In Progress
--
You received this bug notification because you are a member of
qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843
Title:
  Live Migration Causes Performance Issues
Status in QEMU:
  New
** Changed in: qemu-kvm (Ubuntu)
Assignee: (unassigned) => Chris J Arges (arges)
This is being looked at in an upstream thread at
http://lists.gnu.org/archive/html/qemu-devel/2013-07/msg01850.html
Cheers,
We are reliably seeing this post live-migration on an openstack
platform.
Setup:
hypervisor: Ubuntu 12.04.3 LTS
libvirt: 1.0.2-0ubuntu11.13.04.2~cloud0
qemu-kvm: 1.0+noroms-0ubuntu14.10
storage: NFS exports
Guest VM OS: Ubuntu 12.04.1 LTS and CentOS 6.4
We have EPT enabled.
Sample ins
My HyperDex cluster nodes' performance dropped significantly after migrating
them (virsh migrate --live ...). They are hosted on precise KVM (12.04.2 Precise
Pangolin). The first Google search result landed me on this page; it seems I'm
not the only one encountering this problem. I hope this
@Paolo yes, when I was doing that testing I was able to consistently
reproduce those results in #23, but it was a red herring; as of now I
cannot reproduce the results in #23 consistently (I suspect it may have
had something to do with the order I was executing tests, but I didn't
chase it any further).
Oops, I missed Chris's comment #28. Thanks.
From comment #23, the 1.4 machine type seems to be "fast", while 1.3 is
slow. This doesn't make much sense, given the differences between the
two machine types:
enable_compat_apic_id_mode();
.driver = "usb-tablet",\
.prop
Can you please check if you have EPT enabled? This could be
https://bugzilla.kernel.org/show_bug.cgi?id=58771
** Bug watch added: Linux Kernel Bug Tracker #58771
http://bugzilla.kernel.org/show_bug.cgi?id=58771
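For anyone else wanting to check: on an Intel host the kvm_intel module
exposes this as a parameter, so reading
/sys/module/kvm_intel/parameters/ept (it prints Y or N, assuming the
module is loaded) answers the question. A throwaway C snippet to the
same effect:

#include <stdio.h>

int main(void)
{
    /* kvm_intel exposes its "ept" parameter via sysfs. */
    FILE *f = fopen("/sys/module/kvm_intel/parameters/ept", "r");
    char flag = '?';

    if (f == NULL) {
        perror("kvm_intel parameters not available");
        return 1;
    }
    if (fscanf(f, " %c", &flag) != 1) {
        flag = '?';
    }
    fclose(f);
    printf("EPT enabled: %c\n", flag); /* Y or N */
    return 0;
}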
** Changed in: linux (Ubuntu)
Status: Incomplete => Confirmed
** Also affects: linux (Ubuntu)
Importance: Undecided
Status: New
Update:
From our testing this bug affects KVM hypervisors on Intel processors
that have the EPT feature enabled, with kernels 3.0 and greater. A list
of Intel EPT-supported CPUs is here
(http://ark.intel.com/Products/VirtualizationTechnology).
When using a KVM hypervisor host with Linux kernel 3.0 o
I used this handy tool to run preliminary system-call benchmarks:
http://code.google.com/p/byte-unixbench/
In a nutshell, what I found confirms that live migration does indeed
degrade performance on precise KVM.
I hope the results below help narrow down this critical problem to event
I have a few VMs (precise) that process high-volume transaction jobs
each night. After I did a live-migrate operation to replace a faulty
power supply on a bare-metal server, we encountered sluggish performance
on the migrated VMs; in particular, significantly higher CPU usage was
recorded, where the same
Can you clarify what's not 100% reproducible? The only time that it is
not reproducible on my system is between different qemu machine types, as
I listed. If tests are performed on the same machine type, they are
reproducible 100% of the time on the same host and guest VM, as shown in
comment #23.
I hav
The results of comment 23 suggest that the issue is not 100%
reproducible. Can you please run the benchmark 3-4 times
(pre-save/post-restore) and show all 4 results? One benchmark only, e.g.
"simple read", will do.
Also please try putting a big file on disk (something like "dd
if=/dev/zero of=bigfile
** Also affects: qemu
Importance: Undecided
Status: New