On 3/18/2017 7:28 PM, Peter Maydell wrote:
> On 14 March 2017 at 16:18, Paolo Bonzini wrote:
>> From: Jitendra Kolhe
>>
>> Using "-mem-prealloc" option for a large guest leads to higher guest
>> start-up and migration time. This is because with "-mem-p
sysconf() failure gracefully. In case
sysconf() fails, we fall back to single-threaded preallocation.
(Spotted by Coverity, CID 1372465.)
Signed-off-by: Jitendra Kolhe
---
util/oslib-posix.c | 16 ++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/util/oslib-posix.c b/util/oslib
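As a side note for readers skimming the archive, the behaviour under discussion
(spawn up to min(sysconf(_SC_NPROCESSORS_ONLN), 16, smp_cpus) memset threads,
and fall back to a single thread if sysconf() fails) can be pictured with a
minimal standalone sketch like the one below. It is only an illustration, not
the actual util/oslib-posix.c code; the helper name and signature are
assumptions.

#include <unistd.h>

#define MAX_MEM_PREALLOC_THREADS 16
#define MIN(a, b) ((a) < (b) ? (a) : (b))

static int get_memset_num_threads(int smp_cpus)
{
    long host_procs = sysconf(_SC_NPROCESSORS_ONLN);
    int ret = 1;   /* sysconf() failed or reported nothing: stay single-threaded */

    if (host_procs > 0) {
        /* cap at min(online host CPUs, 16, guest vCPUs) */
        ret = MIN(MIN((int)host_procs, MAX_MEM_PREALLOC_THREADS), smp_cpus);
    }
    return ret;
}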
no longer touches any pages.
- simplify code by returning memset_thread_failed status from
touch_all_pages.
Signed-off-by: Jitendra Kolhe
---
backends/hostmem.c | 4 +-
exec.c | 2 +-
include/qemu/osdep.h | 3 +-
util/oslib-posix.c | 108 +++
On 2/23/2017 3:31 PM, Paolo Bonzini wrote:
>
>
> On 23/02/2017 10:56, Jitendra Kolhe wrote:
>>     if (sigsetjmp(sigjump, 1)) {
>> -        error_setg(errp, "os_mem_prealloc: Insufficient free host memory "
>> -                   "
memset threads.
Changed in v3:
- limit the number of threads spawned based on
min(sysconf(_SC_NPROCESSORS_ONLN), 16, smp_cpus)
- implement memset-thread-specific siglongjmp in the SIGBUS signal handler.
Signed-off-by: Jitendra Kolhe
---
backends/hostmem.c | 4 +--
exec.c | 2 +-
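The "memset thread specific siglongjmp in SIGBUS signal handler" item can be
pictured roughly as follows: each worker keeps its own sigjmp_buf, so a SIGBUS
raised while touching its slice of pages unwinds only that worker and records
the failure for the caller. This is a hedged standalone sketch, not the patch
itself; do_touch_pages(), MemsetArgs and the thread-local jump buffer are
illustrative names.

#include <setjmp.h>
#include <signal.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static __thread sigjmp_buf thread_sigjump;   /* one jump buffer per thread */
static bool memset_thread_failed;            /* reported back to the caller */

static void sigbus_handler(int signum)
{
    (void)signum;
    /* SIGBUS is delivered to the faulting thread, so this jumps back into
     * the worker that actually hit the error. */
    siglongjmp(thread_sigjump, 1);
}

typedef struct {
    char *addr;          /* start of this worker's slice of guest memory */
    size_t numpages;     /* number of pages in the slice                 */
    size_t hpagesize;    /* size of the pages being touched              */
} MemsetArgs;

static void *do_touch_pages(void *arg)
{
    MemsetArgs *a = arg;

    if (sigsetjmp(thread_sigjump, 1)) {
        memset_thread_failed = true;  /* record failure; other threads go on */
        return NULL;
    }
    for (size_t i = 0; i < a->numpages; i++) {
        volatile char *p = a->addr + i * a->hpagesize;
        *p = *p;                      /* read/write one byte to fault it in */
    }
    return NULL;
}

static void install_sigbus_handler(void)
{
    struct sigaction act;

    memset(&act, 0, sizeof(act));
    act.sa_handler = sigbus_handler;
    sigemptyset(&act.sa_mask);
    sigaction(SIGBUS, &act, NULL);    /* installed once, before the workers */
}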
On 2/13/2017 9:22 PM, Jitendra Kolhe wrote:
> On 2/13/2017 5:34 PM, Igor Mammedov wrote:
>> On Mon, 13 Feb 2017 11:23:17 +
>> "Daniel P. Berrange" wrote:
>>
>>> On Mon, Feb 13, 2017 at 11:45:46AM +0100, Igor Mammedov wrote:
>>>> On Mon, 1
On 2/13/2017 5:34 PM, Igor Mammedov wrote:
> On Mon, 13 Feb 2017 11:23:17 +
> "Daniel P. Berrange" wrote:
>
>> On Mon, Feb 13, 2017 at 11:45:46AM +0100, Igor Mammedov wrote:
>>> On Mon, 13 Feb 2017 14:30:56 +0530
>>> Jitendra Kolhe wrote:
>>>
Guest configuration | Start-up time | Migration time
64 Core - 4TB       | 1m58.970s     | 31m43.400s
64 Core - 1TB       | 0m39.885s     | 7m55.289s
64 Core - 256GB     | 0m11.960s     | 2m0.135s
---
Changed in v2:
- modify the number of memset threads spawned to min(smp_cpus, 16).
- remove the 64GB memory restriction for spawning memset threads.
On 1/30/2017 2:02 PM, Jitendra Kolhe wrote:
> On 1/27/2017 6:33 PM, Dr. David Alan Gilbert wrote:
>> * Jitendra Kolhe (jitendra.ko...@hpe.com) wrote:
>>> Using "-mem-prealloc" option for a very large guest leads to huge guest
>>> start-up and migration time
On 1/27/2017 6:56 PM, Daniel P. Berrange wrote:
> On Thu, Jan 05, 2017 at 12:54:02PM +0530, Jitendra Kolhe wrote:
>> Using "-mem-prealloc" option for a very large guest leads to huge guest
>> start-up and migration time. This is because with "-mem-prealloc" opti
On 1/27/2017 6:33 PM, Dr. David Alan Gilbert wrote:
> * Jitendra Kolhe (jitendra.ko...@hpe.com) wrote:
>> Using "-mem-prealloc" option for a very large guest leads to huge guest
>> start-up and migration time. This is because with "-mem-prealloc" option
>
On 1/27/2017 6:23 PM, Juan Quintela wrote:
> Jitendra Kolhe wrote:
>> Using "-mem-prealloc" option for a very large guest leads to huge guest
>> start-up and migration time. This is because with "-mem-prealloc" option
>> qemu tries to map every gues
usage - map guest pages using 16 threads
---
Guest configuration | Start-up time | Migration time
64 Core - 4TB       | 1m58.970s     | 31m43.400s
64 Core - 1TB       | 0m39.885s     | 7m55.289s
64 Core - 256GB     | 0m11.960s     | 2m0.135s
---
On 1/5/2017 7:03 AM, Li, Liang Z wrote:
>> Am 23.12.2016 um 03:50 schrieb Li, Liang Z:
While measuring live migration performance for qemu/kvm guest, it was
observed that the qemu doesn’t maintain any intelligence for the
guest ram pages released by the guest balloon driver and treats such
pages as any other normal guest ram pages.
On 12/23/2016 8:20 AM, Li, Liang Z wrote:
>> While measuring live migration performance for qemu/kvm guest, it was
>> observed that the qemu doesn’t maintain any intelligence for the guest ram
>> pages released by the guest balloon driver and treats such pages as any other
>> normal guest ram pages.
ping ...
I had also received some bounce-backs from a few individual email ids,
so please consider this one a resend.
Thanks,
- Jitendra
On 5/30/2016 4:19 PM, Jitendra Kolhe wrote:
> ping...
> for entire v3 version of the patchset.
> http://patchwork.ozlabs.org/project/qemu-devel/list/?submit
ping...
for entire v3 version of the patchset.
http://patchwork.ozlabs.org/project/qemu-devel/list/?submitter=68462
- Jitendra
On Wed, May 18, 2016 at 4:50 PM, Jitendra Kolhe wrote:
> While measuring live migration performance for qemu/kvm guest, it was observed
> that the qemu doesn’t ma
o know, I wasn't aware of them yet, so that will be a chance
> for a really proper final solution, I hope.
>
>> How about we just skip madvise if host page size is > balloon
>> page size, for 2.6?
>
> That would mean a regression compared to what we have today.
> VIRTIO_BALLOON_PFN_SHIFT, the bitmap test function will
return true if all sub-pages of size (1UL << VIRTIO_BALLOON_PFN_SHIFT)
within a dirty page are ballooned out.
The test against the bitmap is disabled if the balloon bitmap status is
set to disabled during migration setup.
Signed-off-by:
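Roughly, such a test could look like the sketch below: walk every
balloon-sized sub-page of the dirty page and check its bit. The bitmap layout
and helper names are assumptions for illustration, not the interface this
series actually adds.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define VIRTIO_BALLOON_PFN_SHIFT 12          /* 4K balloon page granularity */
#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* one bit per balloon-sized page, indexed by balloon pfn */
static bool test_bit(uint64_t nr, const unsigned long *map)
{
    return (map[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG)) & 1;
}

/* Return true only if every (1UL << VIRTIO_BALLOON_PFN_SHIFT)-sized
 * sub-page of the dirty page at [addr, addr + len) is ballooned out. */
static bool balloon_bitmap_test(const unsigned long *bitmap,
                                uint64_t addr, uint64_t len)
{
    uint64_t subpage = 1ULL << VIRTIO_BALLOON_PFN_SHIFT;
    uint64_t base_pfn = addr >> VIRTIO_BALLOON_PFN_SHIFT;
    uint64_t nr_subpages = (len + subpage - 1) / subpage;

    for (uint64_t i = 0; i < nr_subpages; i++) {
        if (!test_bit(base_pfn + i, bitmap)) {
            return false;   /* at least one sub-page is still in use */
        }
    }
    return true;
}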
.
Signed-off-by: Jitendra Kolhe
---
balloon.c | 15 +++
hw/virtio/virtio-balloon.c | 15 +++
include/hw/virtio/virtio-balloon.h | 1 +
include/sysemu/balloon.h | 1 +
4 files changed, 32 insertions(+)
diff --git a/balloon.c b
is disabled, migration setup will resize the balloon bitmap ramblock
to zero to avoid the overhead of bitmap migration.
Signed-off-by: Jitendra Kolhe
---
balloon.c | 58 +--
hw/virtio/virtio-balloon.c| 10
include/migration
virtio-balloon
driver will be represented by 1 in the bitmap. The bitmap is also resized
in case more RAM is hotplugged.
Signed-off-by: Jitendra Kolhe
---
balloon.c | 91 +-
exec.c | 6 +++
hw/virtio/virtio-balloon.
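The bookkeeping described above (one bit per page released by the
virtio-balloon driver, with the bitmap grown on RAM hotplug) might look
roughly like this; the structure and helper names are illustrative
assumptions, not the patch's actual code.

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

typedef struct {
    unsigned long *map;    /* one bit per balloon-sized guest page */
    uint64_t nr_bits;
} BalloonBitmap;

/* Set the page's bit on inflate (page released to the host), clear it on
 * deflate (page handed back to the guest). */
static void balloon_bitmap_update(BalloonBitmap *b, uint64_t pfn, bool deflate)
{
    if (pfn >= b->nr_bits) {
        return;                                   /* outside tracked range */
    }
    if (deflate) {
        b->map[pfn / BITS_PER_LONG] &= ~(1UL << (pfn % BITS_PER_LONG));
    } else {
        b->map[pfn / BITS_PER_LONG] |= 1UL << (pfn % BITS_PER_LONG);
    }
}

/* Grow the bitmap when more RAM is hotplugged; new pages start as zero
 * bits, i.e. "not ballooned out".  (Allocation failure handling omitted.) */
static void balloon_bitmap_extend(BalloonBitmap *b, uint64_t new_nr_bits)
{
    size_t old_longs = (b->nr_bits + BITS_PER_LONG - 1) / BITS_PER_LONG;
    size_t new_longs = (new_nr_bits + BITS_PER_LONG - 1) / BITS_PER_LONG;

    if (new_longs > old_longs) {
        b->map = realloc(b->map, new_longs * sizeof(unsigned long));
        memset(b->map + old_longs, 0,
               (new_longs - old_longs) * sizeof(unsigned long));
    }
    b->nr_bits = new_nr_bits;
}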
ramblock size is set to zero if the optimization is disabled,
to avoid the overhead of migrating the bitmap. If the bitmap is not migrated
to the target, the destination starts with a fresh bitmap and tracks
ballooning operations thereafter.
Jitendra Kolhe (4):
balloon: maintain bitmap for pages re
On 4/13/2016 5:06 PM, Michael S. Tsirkin wrote:
> On Wed, Apr 13, 2016 at 12:15:38PM +0100, Dr. David Alan Gilbert wrote:
>> * Michael S. Tsirkin (m...@redhat.com) wrote:
>>> On Wed, Apr 13, 2016 at 04:24:55PM +0530, Jitendra Kolhe wrote:
>>>> Can we extend suppor
On 4/10/2016 10:29 PM, Michael S. Tsirkin wrote:
> On Fri, Apr 01, 2016 at 04:38:28PM +0530, Jitendra Kolhe wrote:
>> On 3/29/2016 5:58 PM, Michael S. Tsirkin wrote:
>>> On Mon, Mar 28, 2016 at 09:46:05AM +0530, Jitendra Kolhe wrote:
>>>> While measuring live migr
On 3/31/2016 10:09 PM, Dr. David Alan Gilbert wrote:
> * Jitendra Kolhe (jitendra.ko...@hpe.com) wrote:
>> While measuring live migration performance for qemu/kvm guest, it
>> was observed that the qemu doesn’t maintain any intelligence for the
>> guest ram pages which are
On 3/29/2016 4:18 PM, Paolo Bonzini wrote:
>
>
> On 29/03/2016 12:47, Jitendra Kolhe wrote:
>>> Indeed. It is correct for the main system RAM, but hot-plugged RAM
>>> would also have a zero-based section.offset_within_region. You need to
>>> add memory_r
On 3/29/2016 5:58 PM, Michael S. Tsirkin wrote:
> On Mon, Mar 28, 2016 at 09:46:05AM +0530, Jitendra Kolhe wrote:
>> While measuring live migration performance for qemu/kvm guest, it
>> was observed that the qemu doesn’t maintain any intelligence for the
>> guest ram pages
On 3/28/2016 7:41 PM, Eric Blake wrote:
> On 03/27/2016 10:16 PM, Jitendra Kolhe wrote:
>> While measuring live migration performance for qemu/kvm guest, it
>> was observed that the qemu doesn’t maintain any intelligence for the
>> guest ram pages which are released by the gue
On 3/28/2016 4:06 PM, Denis V. Lunev wrote:
> On 03/28/2016 07:16 AM, Jitendra Kolhe wrote:
>> While measuring live migration performance for qemu/kvm guest, it
>> was observed that the qemu doesn’t maintain any intelligence for the
>> guest ram pages which are released by the
On 3/29/2016 3:35 PM, Paolo Bonzini wrote:
>
>
> On 28/03/2016 08:59, Michael S. Tsirkin wrote:
+    qemu_mutex_lock_balloon_bitmap();
     for (;;) {
         size_t offset = 0;
         uint32_t pfn;
         elem = virtqueue_pop(vq, sizeof(VirtQueueElement));
fter vm_stop,
which has a significant impact on the downtime. Moreover, the applications
in the guest space won't actually be faulting on the ram pages which are
already ballooned out, so the proposed optimization will not show any
improvement in migration time during postcopy.
Signed-off-by: Jitendra Kolhe
On 3/18/2016 4:57 PM, Roman Kagan wrote:
> [ Sorry I've lost this thread with email setup changes on my side;
> catching up ]
>
> On Tue, Mar 15, 2016 at 06:50:45PM +0530, Jitendra Kolhe wrote:
>> On 3/11/2016 8:09 PM, Jitendra Kolhe wrote:
>>> Here is what
On 3/11/2016 8:09 PM, Jitendra Kolhe wrote:
You mean the total live migration time for the unmodified qemu and the
'you modified for test' qemu
are almost the same?
Not sure I understand the question, but if 'you modified for test' means
the modifications below to save_zero_pa
On 3/11/2016 4:24 PM, Li, Liang Z wrote:
I wonder if it is the scanning for zeros or sending the whiteout
which affects the total migration time more. If it is the former (as I
would expect) then a rather local change to is_zero_range() to make use of
the mapping information before scanning woul
On 3/11/2016 12:55 PM, Li, Liang Z wrote:
On 3/10/2016 3:19 PM, Roman Kagan wrote:
On Fri, Mar 04, 2016 at 02:32:47PM +0530, Jitendra Kolhe wrote:
Even though the pages which are returned to the host by
virtio-balloon driver are zero pages, the migration algorithm will
still end up scanning
On 3/10/2016 3:19 PM, Roman Kagan wrote:
On Fri, Mar 04, 2016 at 02:32:47PM +0530, Jitendra Kolhe wrote:
Even though the pages which are returned to the host by virtio-balloon
driver are zero pages, the migration algorithm will still end up
scanning the entire page ram_find_and_save_block
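The optimization being discussed in these messages, consulting the balloon
mapping before the zero scan on the save path, can be sketched as below. The
function names are hypothetical, buffer_is_zero() here is a naive stand-in for
QEMU's real helper, and the balloon lookup is assumed to come from a bitmap
like the one maintained by this series.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static bool buffer_is_zero(const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (buf[i]) {
            return false;
        }
    }
    return true;
}

/* Returns 1 when the page can be sent as a compact "zero page" record,
 * 0 when the caller must transfer the full contents.  page_ballooned_out
 * would come from a balloon-bitmap lookup like the one sketched earlier. */
static int save_page_zero_check(bool page_ballooned_out,
                                const uint8_t *host_page, size_t page_size)
{
    /* Fast path: a page the guest has ballooned out is known to read as
     * zero, so the byte-by-byte scan can be skipped entirely. */
    if (page_ballooned_out || buffer_is_zero(host_page, page_size)) {
        return 1;
    }
    return 0;
}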
On 3/10/2016 10:57 PM, Eric Blake wrote:
On 03/10/2016 01:57 AM, Jitendra Kolhe wrote:
+++ b/qapi-schema.json
@@ -544,11 +544,14 @@
# been migrated, pulling the remaining pages along as needed. NOTE:
If
# the migration fails during postcopy the VM will fail. (since 2.5
On 3/7/2016 10:35 PM, Eric Blake wrote:
> On 03/04/2016 02:02 AM, Jitendra Kolhe wrote:
>> While measuring live migration performance for qemu/kvm guest, it
>> was observed that the qemu doesn’t maintain any intelligence for the
>> guest ram pages which are released by the gue
On 3/8/2016 4:44 PM, Amit Shah wrote:
> On (Fri) 04 Mar 2016 [15:02:47], Jitendra Kolhe wrote:
>>>>
>>>> * Liang Li (liang.z...@intel.com) wrote:
>>>>> The current QEMU live migration implementation marks all the
>>>>> guest'
> >
> > * Liang Li (liang.z...@intel.com) wrote:
> > > The current QEMU live migration implementation marks all the
> > > guest's RAM pages as dirtied in the ram bulk stage; all these pages
> > > will be processed and that takes quite a lot of CPU cycles.
> > >
> > > From guest's point of view, i
impact on the downtime. Moreover, the applications
in the guest space won't actually be faulting on the ram pages which are
already ballooned out, so the proposed optimization will not show any
improvement in migration time during postcopy.
Signed-off-by: Jitendra Kolhe
---
balloon.c