On 3/11/2014 1:48 PM, Juan Quintela wrote:
From: ChenLiang
It is inaccurate and complex to use the transfer speed of the
migration thread to determine whether the migration is converging.
The dirty page may be compressed by XBZRLE or ZERO_PAGE. The counter
of dirty bitmap updates will be
changed, 260 insertions(+), 181 deletions(-)
create mode 100644 include/exec/memory-physical.h
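The approach being described, judging convergence from dirty-bitmap update counts rather than from the apparent transfer speed, might look roughly like this minimal sketch; the names (bitmap_sync_count, the page-count arguments) are assumptions, not taken from the patch:

static uint64_t bitmap_sync_count;

/* Called once per dirty-bitmap sync. Page counts are unaffected by
 * XBZRLE/zero-page compression, unlike the measured transfer speed. */
static bool migration_is_converging(uint64_t dirty_pages_period,
                                    uint64_t transferred_pages_period)
{
    bitmap_sync_count++;
    if (bitmap_sync_count < 4) {
        return true;    /* let the first few iterations pass */
    }
    return transferred_pages_period > dirty_pages_period;
}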
Tested-by: Chegu Vinod
---
Hi Juan,
Here are some results from migrating a couple of *big fat* guests using TCP
migration and RDMA migration; the last one was with a workload. As one would
On 6/24/2013 8:59 AM, Paolo Bonzini wrote:
On 24/06/2013 11:47, Chegu Vinod wrote:
If a user chooses to turn on the auto-converge migration capability,
these changes detect the lack of convergence and throttle down the
guest, i.e. force the VCPUs out of the guest for some duration
and let
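The throttling itself, forcing each VCPU out of guest mode for a short while, can be sketched as below, assuming the async_run_on_cpu() primitive introduced by this series; the 30 ms sleep and the CPU_FOREACH iteration are illustrative rather than quoted from the patch:

/* Briefly park a vCPU outside guest mode so it dirties memory more slowly. */
static void mig_sleep_cpu(void *data)
{
    qemu_mutex_unlock_iothread();
    g_usleep(30 * 1000);            /* ~30 ms off-cpu per period */
    qemu_mutex_lock_iothread();
}

static void mig_throttle_guest(void)
{
    CPUState *cpu;

    CPU_FOREACH(cpu) {
        /* Queue the sleep on every vCPU without waiting for completion. */
        async_run_on_cpu(cpu, mig_sleep_cpu, NULL);
    }
}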
Introduce an asynchronous version of run_on_cpu(), i.e. the caller
doesn't have to block until the callback routine finishes execution
on the target vcpu.
Signed-off-by: Chegu Vinod
Reviewed-by: Paolo Bonzini
---
cpus.c | 29 +
include/qemu-com
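Going by that description, the asynchronous variant allocates a work item, appends it to the target vCPU's queue, kicks the vCPU, and returns immediately; a minimal sketch, with queue field names following QEMU conventions of the time and the big lock assumed to protect the queue:

void async_run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
{
    struct qemu_work_item *wi;

    wi = g_malloc0(sizeof(struct qemu_work_item));
    wi->func = func;
    wi->data = data;
    /* The caller never waits, so the target vCPU must free the item
     * after running the callback. */
    wi->free = true;

    if (cpu->queued_work_first == NULL) {
        cpu->queued_work_first = wi;
    } else {
        cpu->queued_work_last->next = wi;
    }
    cpu->queued_work_last = wi;
    wi->next = NULL;
    wi->done = false;

    qemu_cpu_kick(cpu);     /* wake the vCPU so it drains its work queue */
}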
The auto-converge migration capability allows the user to specify whether
the live migration sequence should automatically detect and force convergence.
Signed-off-by: Chegu Vinod
Reviewed-by: Paolo Bonzini
Reviewed-by: Eric Blake
---
include/migration/migration.h | 2 ++
migration.c
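Consumers of the capability then just test a flag on the current migration state; a minimal sketch (accessor and enum spellings assumed, not copied from the patch):

bool migrate_auto_converge(void)
{
    MigrationState *s = migrate_get_current();

    return s->enabled_capabilities[MIGRATION_CAPABILITY_AUTO_CONVERGE];
}

At the monitor the user would enable it with "migrate_set_capability auto-converge on" before issuing the migrate command.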
- stop the throttling thread at the start of stage 3
- rebased to latest qemu.git
Changes from v1:
- rebased to latest qemu.git
- added auto-converge capability (default off) - suggested by Anthony Liguori &
Eric Blake.
Signed-off-by: Chegu Vinod
---
Chegu Vinod (3):
Introduce async_run_on_cpu()
total ram: 268444224 kbytes
duplicate: 64946416 pages
skipped: 64903523 pages
normal: 7044971 pages
normal bytes: 28179884 kbytes
Signed-off-by: Chegu Vinod
---
arch_init.c | 79 +++
1 files changed, 79 insertions(+), 0 deletions(-)
diff -
On 6/24/2013 6:01 AM, Paolo Bonzini wrote:
One nit and one question:
On 23/06/2013 22:11, Chegu Vinod wrote:
@@ -404,6 +413,23 @@ static void migration_bitmap_sync(void)
/* more than 1 second = 1000 milliseconds */
if (end_time > start_time + 1000) {
+
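For context, the check added to that once-per-second branch amounts to comparing what the guest dirtied against what the migration managed to send over the same period, and throttling once the ratio stays bad for a few consecutive periods; a rough sketch with illustrative thresholds (variable names mirror the surrounding code but are not quoted from it):

if (migrate_auto_converge()) {
    /* Dirtying at >= ~half of the transfer rate, period after period,
     * means the precopy pass is not converging. */
    if (num_dirty_pages_period * TARGET_PAGE_SIZE * 2
            > bytes_xfer_now - bytes_xfer_prev) {
        dirty_rate_high_cnt++;
    } else {
        dirty_rate_high_cnt = 0;
    }
    if (dirty_rate_high_cnt >= 4) {
        dirty_rate_high_cnt = 0;
        mig_throttle_guest();   /* force the VCPUs to sleep briefly */
    }
    bytes_xfer_prev = bytes_xfer_now;
}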
Oops! A minor glitch on my side (pl. ignore the subject line of
this...as this is actually patch 3/3 and not patch 2/3). I just resent
this as patch 3/3 with the correct subject line.
Thx
Vinod
On 6/23/2013 1:05 PM, Chegu Vinod wrote:
If a user chooses to turn on the auto-converge
total ram: 268444224 kbytes
duplicate: 64946416 pages
skipped: 64903523 pages
normal: 7044971 pages
normal bytes: 28179884 kbytes
Signed-off-by: Chegu Vinod
---
arch_init.c | 79 +++
1 files changed, 79 insertions(+), 0 deletions(-)
diff -
The auto-converge migration capability allows the user to specify whether
the live migration sequence should automatically detect and force convergence.
Signed-off-by: Chegu Vinod
Reviewed-by: Paolo Bonzini
Reviewed-by: Eric Blake
---
include/migration/migration.h | 2 ++
migration.c
Changes from v1:
- rebased to latest qemu.git
- added auto-converge capability (default off) - suggested by Anthony Liguori &
Eric Blake.
Signed-off-by: Chegu Vinod
---
Chegu Vinod (3):
Introduce async_run_on_cpu()
Add 'auto-converge' migration capability
Introduce an asynchronous version of run_on_cpu(), i.e. the caller
doesn't have to block until the callback routine finishes execution
on the target vcpu.
Signed-off-by: Chegu Vinod
Reviewed-by: Paolo Bonzini
---
cpus.c | 29 +
include/qemu-com
On 6/20/2013 5:54 AM, Paolo Bonzini wrote:
On 14/06/2013 15:58, Chegu Vinod wrote:
If a user chooses to turn on the auto-converge migration capability,
these changes detect the lack of convergence and throttle down the
guest, i.e. force the VCPUs out of the guest for some duration
and let
On 6/14/2013 1:35 PM, mrhi...@linux.vnet.ibm.com wrote:
From: "Michael R. Hines"
For very large virtual machines, pinning can take a long time.
While this does not affect the migration's *actual* time itself,
it is still important for the user to know what's going on and
to know what component
Reviewed-by: Paolo Bonzini
Reviewed-by: Chegu Vinod
Tested-by: Chegu Vinod
Thx
Vinod
Wiki: http://wiki.qemu.org/Features/RDMALiveMigration
Github: g...@github.com:hinesmr/qemu.git
Here is a brief summary of total migration time and downtime using RDMA:
Using a 40gbps infiniband link per
On 6/14/2013 1:38 PM, Michael R. Hines wrote:
Chegu,
I sent a V9 to the mailing list:
This version goes even further, by explicitly timing the pinning
latency and
pushing the value out to QMP so the user clearly knows which component
of total migration time is consumed by pinning.
If you're s
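Timing that component is just a pair of clock reads around the pinning step; a minimal sketch using the clock API of that era (the pinning call and the stats field are placeholders, not the actual RDMA code):

int64_t t0 = qemu_get_clock_ms(rt_clock);

rdma_pin_all_ram_blocks(rdma);      /* placeholder for the pin-all step */

/* Exported via QMP/'info migrate' so the user can see how much of the
 * total migration time was spent pinning. */
s->total_time_pinning = qemu_get_clock_ms(rt_clock) - t0;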
The auto-converge migration capability allows the user to specify whether
the live migration sequence should automatically detect and force convergence.
Signed-off-by: Chegu Vinod
Reviewed-by: Paolo Bonzini
Reviewed-by: Eric Blake
---
include/migration/migration.h | 2 ++
migration.c
- added auto-converge capability (default off) - suggested by Anthony Liguori &
Eric Blake.
Signed-off-by: Chegu Vinod
---
Chegu Vinod (3):
Introduce async_run_on_cpu()
Add 'auto-converge' migration capability
Force auto-convergence of live migration
total ram: 268444224 kbytes
duplicate: 64946416 pages
skipped: 64903523 pages
normal: 7044971 pages
normal bytes: 28179884 kbytes
Signed-off-by: Chegu Vinod
---
arch_init.c | 85 +++
1 files changed, 85 insertions(+), 0 deletions(-)
diff -
Introduce an asynchronous version of run_on_cpu(), i.e. the caller
doesn't have to block until the callback routine finishes execution
on the target vcpu.
Signed-off-by: Chegu Vinod
Reviewed-by: Paolo Bonzini
---
cpus.c | 29 +
include/qemu-com
On 6/1/2013 9:09 PM, Michael R. Hines wrote:
All,
I have successfully performed over 1000+ back-to-back RDMA migrations
automatically looped *in a row* using a heavy-weight memory-stress
benchmark here at IBM.
Migration success is verified by capturing the actual serial console
output of the virt
Hello,
For guest sizes >= 1TB RAM the guest OS is unable to boot up (please
see attached GIF file for the Oops message). Wonder if this is a
bug/regression in qemu/seabios or does one have to enable/disable
something else in the qemu command line (pl. see below) ?
Thanks
Vinod
Host and G
}
+    if (unlikely(disable_hugepages)) {
+        vfio_lock_acct(1);
+        return 1;
+    }
+
/* Lock all the consecutive pages from pfn_base */
for (i = 1, vaddr += PAGE_SIZE; i < npage; i++, vaddr += PAGE_SIZE) {
unsigned long pfn = 0;
On 5/10/2013 6:07 AM, Anthony Liguori wrote:
Chegu Vinod writes:
If a user chooses to turn on the auto-converge migration capability,
these changes detect the lack of convergence and throttle down the
guest, i.e. force the VCPUs out of the guest for some duration
and let the migration
On 5/9/2013 1:24 PM, Igor Mammedov wrote:
On Thu, 9 May 2013 12:43:20 -0700
Chegu Vinod wrote:
If a user chooses to turn on the auto-converge migration capability,
these changes detect the lack of convergence and throttle down the
guest, i.e. force the VCPUs out of the guest for some
On 5/9/2013 1:05 PM, Igor Mammedov wrote:
On Thu, 9 May 2013 12:43:20 -0700
Chegu Vinod wrote:
If a user chooses to turn on the auto-converge migration capability,
these changes detect the lack of convergence and throttle down the
guest, i.e. force the VCPUs out of the guest for some
07:28 PM, Chegu Vinod wrote:
Hi Michael,
I picked up the qemu bits from your github branch and gave it a
try. (BTW the setup I was given temporary access to has a pair of
MLX's IB QDR cards connected back to back via QSFP cables)
Observed a couple of things and wanted to share..pe
total ram: 268444224 kbytes
duplicate: 64946416 pages
skipped: 64903523 pages
normal: 7044971 pages
normal bytes: 28179884 kbytes
Signed-off-by: Chegu Vinod
---
arch_init.c | 68 +
include/migration/migration.h |4 ++
migra
Introduce an asynchronous version of run_on_cpu(), i.e. the caller
doesn't have to block until the callback routine finishes execution
on the target vcpu.
Signed-off-by: Chegu Vinod
---
cpus.c | 29 +
include/qemu-common.h |1 +
includ
rit, Juan and Eric
- stop the throttling thread at the start of stage 3
- rebased to latest qemu.git
Changes from v1:
- rebased to latest qemu.git
- added auto-converge capability (default off) - suggested by Anthony Liguori &
Eric Blake.
Signed-off
The auto-converge migration capability allows the user to specify whether
the live migration sequence should automatically detect and force convergence.
Signed-off-by: Chegu Vinod
---
include/migration/migration.h | 2 ++
migration.c | 9 +
qapi-schema.json
- added auto-converge capability (default off) - suggested by Anthony Liguori &
Eric Blake.
Signed-off-by: Chegu Vinod
---
arch_init.c | 61 -
cpus.c | 41 +++
include/migration/migration.h | 7 +
include/qemu-common.h
Hi Michael,
I picked up the qemu bits from your github branch and gave it a try.
(BTW the setup I was given temporary access to has a pair of MLX's IB
QDR cards connected back to back via QSFP cables)
Observed a couple of things and wanted to share..perhaps you may be
aware of them alrea
On 5/1/2013 8:40 AM, Paolo Bonzini wrote:
I shall make the suggested changes.
Appreciate your review feedback on this part of the change.
Hi Paolo,
Thanks for taking a look (BTW, I accidentally left out the "RFC" in the
patch subject line...my bad!).
Hi Vinod,
I think unfortunately it is n
On 5/1/2013 5:38 AM, Eric Blake wrote:
On 05/01/2013 06:22 AM, Chegu Vinod wrote:
Busy enterprise workloads hosted on large-sized VMs tend to dirty
memory faster than the transfer rate achieved via live guest migration.
Despite some good recent improvements (& using dedicated 1
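To put rough, purely illustrative numbers on that (not taken from this thread's measurements): a dedicated 10Gig NIC tops out around 1.25 GB/s of RAM payload, so a guest dirtying, say, 2 GB/s re-dirties memory faster than it can be copied out; the remaining dirty set then grows by about 0.75 GB every second, and the precopy iterations can never finish unless the guest is slowed down.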
- stop the throttling thread at the start of stage 3
- rebased to latest qemu.git
Changes from v1:
- rebased to latest qemu.git
- added auto-converge capability (default off) - suggested by Anthony Liguori &
Eric Blake.
Signed-off-by: Chegu Vinod
--
On 4/30/2013 8:04 AM, Orit Wasserman wrote:
On 04/27/2013 11:50 PM, Chegu Vinod wrote:
Busy enterprise workloads hosted on large-sized VMs tend to dirty
memory faster than the transfer rate achieved via live guest migration.
Despite some good recent improvements (& using dedicated 1
On 4/30/2013 9:01 AM, Juan Quintela wrote:
Chegu Vinod wrote:
On 4/30/2013 8:20 AM, Juan Quintela wrote:
(qemu) info migrate
capabilities: xbzrle: off auto-converge: off <
Migration status: active
total time: 1487503 milliseconds
1487 seconds and still the Migration is
On 4/30/2013 8:20 AM, Juan Quintela wrote:
Chegu Vinod wrote:
Busy enterprise workloads hosted on large-sized VMs tend to dirty
memory faster than the transfer rate achieved via live guest migration.
Despite some good recent improvements (& using dedicated 10Gig NICs
between hosts)
On 4/29/2013 7:53 AM, Eric Blake wrote:
On 04/27/2013 02:50 PM, Chegu Vinod wrote:
Busy enterprise workloads hosted on large-sized VMs tend to dirty
memory faster than the transfer rate achieved via live guest migration.
Despite some good recent improvements (& using dedicated 1
Eric Blake.
Signed-off-by: Chegu Vinod
---
arch_init.c | 44 +++
cpus.c| 12 +
include/migration/migration.h | 12 +
include/qemu/main-loop.h
On 4/24/2013 6:59 PM, Anthony Liguori wrote:
On Wed, Apr 24, 2013 at 6:42 PM, Chegu Vinod <chegu_vi...@hp.com> wrote:
Busy enterprise workloads hosted on large-sized VMs tend to dirty
memory faster than the transfer rate achieved via live guest
migration.
ning on a 80VCPU/512G guest (~80% busy)
Thanks to Juan and Paolo for some useful suggestions. More
refinement is needed (e.g. a smarter way to detect non-convergence,
variable throttling based on need, etc.). For now I was hoping to get
some feedback or hear about other, more refined ideas.
Signed-o
Hi Satoru,
FYI... I had tried to use this change earlier and it did show some
improvements in perf. (due to reduced exits).
But as expected mlockall() on large-sized guests adds a considerable
delay in boot time. E.g. on an 8-socket Westmere box, a 256G guest
took an additional ~2+
Hello,
I have been noticing host hangs when trying to boot large guests
(>=40Vcpus) with the current upstream qemu.
Host is running 3.8.2 kernel.
qemu is the latest one from qemu.git.
Example qemu command line listed below... this used to work with a
slightly older qemu (about 1.5 weeks ago
On 2/15/2013 9:46 AM, Paolo Bonzini wrote:
This series does many of the improvements that the migration thread
promised. It removes buffering, lets a large amount of code run outside
the big QEMU lock, and removes some duplication between incoming and
outgoing migration.
Patches 1 to 7 are simp
migration: calculate expected_downtime
arch_init.c | 1 +
include/migration/migration.h | 1 +
migration.c | 15 +--
3 files changed, 15 insertions(+), 2 deletions(-)
Reviewed-by: Chegu Vinod
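The expected_downtime calculation named in that diffstat reduces to dividing the rate at which the guest keeps re-dirtying memory by the observed transfer bandwidth; a minimal sketch with approximate field names:

/* bandwidth: bytes sent per millisecond over the last sample period. */
double bandwidth = transferred_bytes / (double)sample_time_ms;

if (s->dirty_bytes_rate && bandwidth > 0) {
    /* Milliseconds needed to flush what the guest re-dirties per second,
     * i.e. the downtime to expect if the guest were stopped now. */
    s->expected_downtime = s->dirty_bytes_rate / bandwidth;
}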
On 1/9/2013 8:35 PM, Jason Wang wrote:
On 01/10/2013 04:25 AM, Chegu Vinod wrote:
Hello,
I am running into an issue with the latest bits. [ Pl. see below. The
vhost thread seems to be getting
stuck while trying to memcopy... perhaps a bad address? ] Wondering
if this is a known issue or
Hello,
I am running into an issue with the latest bits. [ Pl. see below. The
vhost thread seems to be getting
stuck while trying to memcopy... perhaps a bad address? ] Wondering if
this is a known issue or
some recent regression ?
I am using the latest qemu (from qemu.git) and the latest kvm.
On 11/13/2012 8:18 AM, Juan Quintela wrote:
Hi
If you have anything else to put, please add.
Migration Thread
* Plan is integrate it as one of first thing in December (me)
* Remove copies with buffered file (me)
Bitmap Optimization
* Finish moving to individual bitmaps for migration/vga/code
*
On 10/29/2012 9:21 AM, Vinod, Chegu wrote:
Date: Mon, 29 Oct 2012 15:11:25 +0100
From: Juan Quintela
To: qemu-devel@nongnu.org
Cc: owass...@redhat.com, mtosa...@redhat.com, a...@redhat.com,
pbonz...@redhat.com
Subject: [Qemu-devel] [PATCH 00/18] Migration thread lite (20121029)
Hi
Afte
On 10/13/2012 12:32 AM, Gleb Natapov wrote:
On Fri, Oct 12, 2012 at 07:38:42PM -0700, Chegu Vinod wrote:
Hello,
I am using a very recent upstream version of qemu.git along with
kvm.git kernels (in the host and guest).
[Guest kernel had been compiled with CONFIG_X86_X2APIC and
Forwarding to the alias.
Thanks,
Vinod
Original Message
Subject:Re: [RFC 0/7] Migration stats
Date: Mon, 13 Aug 2012 15:20:10 +0200
From: Juan Quintela
Reply-To:
To: Chegu Vinod
CC:
[ snip ]
>> - Prints the real downtime that we ha
On 7/27/2012 7:11 AM, Vinod, Chegu wrote:
-Original Message-
From: Juan Quintela [mailto:quint...@redhat.com]
Sent: Friday, July 27, 2012 4:06 AM
To: Vinod, Chegu
Cc: qemu-devel@nongnu.org; Orit Wasserman
Subject: Re: Fwd: [RFC 00/27] Migration thread (WIP)
Chegu Vinod wrote:
On 7/26
On 7/26/2012 11:41 AM, Chegu Vinod wrote:
Original Message
Subject:[Qemu-devel] [RFC 00/27] Migration thread (WIP)
Date: Tue, 24 Jul 2012 20:36:25 +0200
From: Juan Quintela
To: qemu-devel@nongnu.org
Hi
This series is on top of the migration-next-v5
Original Message
Subject:[Qemu-devel] [RFC 00/27] Migration thread (WIP)
Date: Tue, 24 Jul 2012 20:36:25 +0200
From: Juan Quintela
To: qemu-devel@nongnu.org
Hi
This series is on top of the migration-next-v5 series just posted.
First of all, this is an RF
node 3 size: 65536 MB
node 4 cpus: 40 41 42 43 44 45 46 47 48 49
node 4 size: 65536 MB
node 5 cpus: 50 51 52 53 54 55 56 57 58 59
node 5 size: 65536 MB
node 6 cpus: 60 61 62 63 64 65 66 67 68 69
node 6 size: 65536 MB
node 7 cpus: 70 71 72 73 74 75 76 77 78 79
Signed-off-by: Chegu
node 3 cpus: 30 31 32 33 34 35 36 37 38 39
node 3 size: 65536 MB
node 4 cpus: 40 41 42 43 44 45 46 47 48 49
node 4 size: 65536 MB
node 5 cpus: 50 51 52 53 54 55 56 57 58 59
node 5 size: 65536 MB
node 6 cpus: 60 61 62 63 64 65 66 67 68 69
node 6 size: 65536 MB
node 7 cpus: 70 71 72 73 74 75 76
node 5 cpus: 50 51 52 53 54 55 56 57 58 59
node 5 size: 65536 MB
node 6 cpus: 60 61 62 63 64 65 66 67 68 69
node 6 size: 65536 MB
node 7 cpus: 70 71 72 73 74 75 76 77 78 79
Signed-off-by: Chegu Vinod, Jim Hull, Craig Hada
---
cpus.c | 3 ++-
hw/pc.c | 4 +++-
sysemu.h | 3 ++-
vl.
On 6/18/2012 3:11 PM, Eric Blake wrote:
On 06/18/2012 04:05 PM, Andreas Färber wrote:
On 17.06.2012 22:12, Chegu Vinod wrote:
diff --git a/vl.c b/vl.c
index 204d85b..1906412 100644
--- a/vl.c
+++ b/vl.c
@@ -28,6 +28,7 @@
#include
#include
#include
+#include
Did you check whether this
On 6/18/2012 1:29 PM, Eduardo Habkost wrote:
On Sun, Jun 17, 2012 at 01:12:31PM -0700, Chegu Vinod wrote:
The -numa option to qemu is used to create [fake] numa nodes
and expose them to the guest OS instance.
There are a couple of issues with the -numa option:
a) Max VCPU's that c
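For reference, the option is given once per node, along the lines of the purely illustrative invocation below; the per-node VCPU ceiling discussed here stemmed from the node-to-CPU mapping being kept in a 64-bit mask, so CPU indices above 63 could not be assigned:

qemu-system-x86_64 -smp 80 -m 512G \
    -numa node,nodeid=0,cpus=0-39,mem=256G \
    -numa node,nodeid=1,cpus=40-79,mem=256G \
    ...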
node 3 size: 65536 MB
node 4 cpus: 40 41 42 43 44 45 46 47 48 49
node 4 size: 65536 MB
node 5 cpus: 50 51 52 53 54 55 56 57 58 59
node 5 size: 65536 MB
node 6 cpus: 60 61 62 63 64 65 66 67 68 69
node 6 size: 65536 MB
node 7 cpus: 70 71 72 73 74 75 76 77 78 79
node 7 size: 65536 MB
Signed-off-by: Chegu Vin
On 6/12/2012 8:39 AM, Gleb Natapov wrote:
On Tue, Jun 12, 2012 at 08:33:59AM -0700, Chegu Vinod wrote:
I rebuilt the 3.4.1 kernel in the guest from scratch and retried my
experiments and measured
the boot times...
a) Host : RHEL6.3 RC1 + qemu-kvm (that came with it) & Guest :
RHEL6.3
On 6/8/2012 11:37 AM, Jan Kiszka wrote:
On 2012-06-08 20:20, Chegu Vinod wrote:
On 6/8/2012 11:08 AM, Jan Kiszka wrote:
[CC'ing qemu as this discusses its code base]
On 2012-06-08 19:57, Chegu Vinod wrote:
On 6/8/2012 10:42 AM, Alex Williamson wrote:
On Fri, 2012-06-08 at 10:10 -0700,
Hello,
'am having some issues trying to live migrate a large guest and would
like to get some pointers
on how to go about about debugging this. Here is some info. on the
configuration
_Hardware :_
Two DL980's each with 80 Westmere cores + 1 TB of RAM. Using a 10G NIC
private link
(back to
Hello,
I did pick up these patches a while back and did run some migration tests while
running simple workloads in the guest. Below are some results.
FYI...
Vinod
Config Details:
Guest: 10 vcpus, 60GB (running on a host that has 6 cores (12 threads) and 64GB).
The hosts are identical X86_64 Bla
On 6/10/2012 2:30 AM, Gleb Natapov wrote:
On Fri, Jun 08, 2012 at 11:20:53AM -0700, Chegu Vinod wrote:
On 6/8/2012 11:08 AM, Jan Kiszka wrote:
BTW, another data point ...if I try to boot a the RHEL6.3 kernel in
the guest (with the latest qemu.git and the 3.4.1 on the host) it
boots just fine
On 6/8/2012 11:08 AM, Jan Kiszka wrote:
[CC'ing qemu as this discusses its code base]
On 2012-06-08 19:57, Chegu Vinod wrote:
On 6/8/2012 10:42 AM, Alex Williamson wrote:
On Fri, 2012-06-08 at 10:10 -0700, Chegu Vinod wrote:
On 6/8/2012 9:46 AM, Alex Williamson wrote:
On Fri, 2012-06-
On 6/4/2012 6:13 AM, Isaku Yamahata wrote:
On Mon, Jun 04, 2012 at 05:01:30AM -0700, Chegu Vinod wrote:
Hello Isaku Yamahata,
Hi.
I just saw your patches..Would it be possible to email me a tar bundle of these
patches (makes it easier to apply the patches to a copy of the upstream
qemu.git