On Mon, Apr 18, 2016 at 11:08:31AM +0000, Li, Liang Z wrote:
Hi Dave,

I am now working on how to benefit post-copy by skipping the free pages,
and I remember you have said we should let the destination know the info
of free pages so as to avoid requesting the free pages from the source.

We have two solutions:

a. send the migration dirty page bitmap to dest
On (Tue) 22 Mar 2016 [19:05:31], Dr. David Alan Gilbert wrote:
> * Liang Li (liang.z...@intel.com) wrote:
> > b. Implement a new virtio device
> > Implementing a brand new virtio device to exchange information
> > between host and guest is another choice. It requires modifying the
> > virtio spec
> > > > > > > > > The order I'm trying to understand is something like:
> > > > > > > > >
> > > > > > > > > a) Send the get_free_page_bitmap request
> > > > > > > > > b) Start sending pages
> > > > > > > > > c) Reach the end of memory
> > > > > > > > > [ is_ready is false - guest
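The a)/b)/c) ordering above can be modeled in a few lines. This is a toy sketch with invented names (ToySource, pump), not QEMU code: the source starts streaming with every page assumed dirty, and only prunes its send set once the guest flips is_ready.

```python
# Toy model of the request ordering discussed above: the source sends the
# free-page-bitmap request first, streams pages meanwhile, and only
# consumes the bitmap once the guest marks it ready.
class ToySource:
    def __init__(self, n_pages):
        self.to_send = set(range(n_pages))  # assume every page is dirty
        self.sent = []
        self.is_ready = False               # guest hasn't answered yet
        self.free_bitmap = None

    def guest_reports_free(self, free_pages):
        self.free_bitmap = set(free_pages)
        self.is_ready = True

    def pump(self):
        # on each pass, first apply the bitmap if it became ready
        if self.is_ready and self.free_bitmap is not None:
            self.to_send -= self.free_bitmap
            self.free_bitmap = None
        if self.to_send:
            self.sent.append(self.to_send.pop())

src = ToySource(8)
src.pump(); src.pump()   # b) pages flow before the guest answers
# guest replies late, reporting everything not yet sent as free
src.guest_reports_free({p for p in range(8) if p not in src.sent})
while src.to_send:
    src.pump()
assert len(src.sent) == 2   # everything else was skipped as free
```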
On %D, %SN wrote:
%Q
%C
Liang
> -Original Message-
> From: Michael S. Tsirkin [mailto:m...@redhat.com]
> Sent: Thursday, March 24, 2016 11:57 PM
> To: Li, Liang Z
> Cc: Dr. David Alan Gilbert; Wei Yang; qemu-devel@nongnu.org;
> k...@vger.kernel.org; linux-ker...@vger.kenel.org; pbonz.
On Thu, Mar 24, 2016 at 02:33:15PM +0000, Li, Liang Z wrote:
> > > > > > > Agree. The current balloon just sends 256 PFNs at a time; that's
> > > > > > > too few and leads to too many rounds of virtio transmission, and
> > > > > > > that's the main reason for the bad performance.
> > > > > > > Changing VIRTIO_BALLOON_ARRAY_PFNS_MAX to a larger value can
> > > > > > > improve it.
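To put numbers on the 256-PFN complaint above: assuming 4KB pages, an 8GB guest has 2M PFNs, so the batch size directly sets the number of virtio round trips. A quick back-of-the-envelope check:

```python
# With 4KB pages, an 8GB guest has 2M PFNs; a 256-PFN array means
# thousands of virtio transfers, while a larger batch collapses that.
PAGE_SIZE = 4096
guest_bytes = 8 * 1024**3
total_pfns = guest_bytes // PAGE_SIZE        # 2,097,152 pages

def transfers(batch):
    # ceil-divide the PFNs into batches of the given size
    return -(-total_pfns // batch)

assert total_pfns == 2 * 1024 * 1024
assert transfers(256) == 8192       # current VIRTIO_BALLOON_ARRAY_PFNS_MAX
assert transfers(64 * 1024) == 32   # a hypothetical larger batch
```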
On Thu, Mar 24, 2016 at 03:53:25PM +0000, Li, Liang Z wrote:
> > > > > Not very complex, we can implement it like this:
> > > > >
> > > > > 1. Set all the bits in the migration_bitmap_rcu->bmap to 1
> > > > > 2. Clear all the bits in ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]
> > > > > 3. Send the get_free_page_bitmap request
> > > > > 4. Start to send pages to
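The four steps above can be sketched as follows. This is a toy model with invented names (start_bulk_stage, request_free_bitmap), not the actual QEMU interfaces:

```python
# Minimal model of the four-step sequence quoted above: start from an
# all-ones migration bitmap, clear the dirty-tracking bitmap, ask the
# guest for its free-page bitmap, then begin the bulk send.
N_PAGES = 16

def start_bulk_stage(request_free_bitmap):
    bmap = [1] * N_PAGES           # 1. migration_bitmap_rcu->bmap := all 1s
    dirty_memory = [0] * N_PAGES   # 2. ram_list.dirty_memory[...] := all 0s
    free = request_free_bitmap()   # 3. ask the guest which pages are free
    for pfn in free:               #    and skip those in the bulk stage
        bmap[pfn] = 0
    return bmap, dirty_memory      # 4. send the pages still set in bmap

bmap, dirty = start_bulk_stage(lambda: [0, 1, 7])
assert bmap == [0, 0, 1, 1, 1, 1, 1, 0] + [1] * 8
assert sum(dirty) == 0
```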
On 24/03/2016 16:39, Li, Liang Z wrote:
> > Only if you write the arch specific thing for all arches.
>
> I plan to keep a function stub for each arch to implement. And I
> have done that for X86.
Again: the ram_addr_t matching is internal to QEMU and can vary from
release to release. Do not d
> > > Sorry, why do I think what? That ram_addr_t is not guaranteed to
> > > equal GPA of the block?
> > >
> >
> > I mean, why do you think that can't be guaranteed to work?
> > Yes, ram_addr_t is not guaranteed to equal the GPA of the block. But I
> > didn't use them as GPAs. The code in filter_out_guest_free_pages() in
> > my patch just follows the style of the latest change of
> > > > >> Given the typical speed of networks; it wouldn't do too much
> > > > >> harm to start sending assuming all pages are dirty and then
> > > > >> when the guest finally gets around to finishing the bitmap then
> > > > >> update, so it's asynchronous - and then if the guest never
> > > > >>
On 24/03/2016 16:16, Li, Liang Z wrote:
> > There's no guarantee that there's a single 'hole'
> > even on the PC, and we want balloon to be portable.
>
> As long as we know how many holes there are and where the holes are.
The mapping between ram_addr_t and GPA is completely internal to QEMU.
Passing it
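As a toy illustration of why ram_addr_t offsets can't simply be reused as GPAs: even a single-hole PC-like layout (assumed here for simplicity; real layouts can have several holes, and the mapping can change between QEMU releases) makes the two address spaces diverge.

```python
# Toy single-hole layout: RAM below a ~3GB PCI hole maps 1:1, RAM above
# it is remapped past 4GB, so offset != GPA there. Numbers are
# illustrative only.
GB = 1024**3
LOW_RAM = 3 * GB          # toy: RAM below the hole maps identity
HIGH_GPA_BASE = 4 * GB    # toy: RAM past the hole resumes at 4GB GPA

def ram_addr_to_gpa(ram_addr):
    if ram_addr < LOW_RAM:
        return ram_addr                          # identity below the hole
    return HIGH_GPA_BASE + (ram_addr - LOW_RAM)  # shifted above it

assert ram_addr_to_gpa(1 * GB) == 1 * GB             # equal below the hole...
assert ram_addr_to_gpa(3 * GB + 42) == 4 * GB + 42   # ...but not above it
```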
On Thu, Mar 24, 2016 at 01:32:25AM +0000, Li, Liang Z wrote:
>> >> >> > 6. Handling page cache in the guest
>> >> >> > The memory used for page cache in the guest will change depending
>> >> >> > on the workload; if the guest runs some block-IO-intensive
>> >> >> > workload, there will
>> >> >>
>> >> >> Would this improvement benefit a lot when guest only has little free
On Wed, Mar 23, 2016 at 06:48:22AM +0000, Li, Liang Z wrote:
[...]
>> > 8. Pseudo code
>> > Dirty page logging should be enabled before getting the free page
>> > information from the guest; this is important because during the
>> > process of getting free pages, some free pages may be used and written
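The ordering requirement above (enable dirty logging before querying the guest) can be demonstrated with a toy model (bulk_pages is an invented name): a page that was free at query time but is written immediately afterwards survives only if the dirty log sees the write.

```python
# Why dirty logging must precede the free-page query: a page reported
# free can be written right after the report, and only the dirty log
# rescues it from being skipped.
def bulk_pages(all_pages, free_at_query, dirtied_after_query, logging_on):
    skip = set(free_at_query)
    dirty_log = set(dirtied_after_query) if logging_on else set()
    # pages sent in the bulk stage, plus pages resent from the dirty log
    return (set(all_pages) - skip) | dirty_log

pages = range(8)
# page 5 was free when queried but got written immediately afterwards
safe = bulk_pages(pages, {4, 5}, {5}, logging_on=True)
lost = bulk_pages(pages, {4, 5}, {5}, logging_on=False)
assert 5 in safe and 5 not in lost   # without logging, the write is lost
```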
> > > > 2. Why not use virtio-balloon
> > > > Actually, the virtio-balloon can do a similar thing by inflating
> > > > the balloon before live migration, but its performance is not good:
> > > > for an 8GB idle guest that has just booted, it takes about 5.7 sec
> > > > to inflate the balloon to 7GB, but it
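The 5.7 sec / 7GB figure quoted above works out to roughly 1.2 GB/s of inflation, i.e. around 320k 4KB pages per second, which is why inflating before every migration is considered too slow:

```python
# Back-of-the-envelope on the quoted measurement: 7GB inflated in 5.7s.
inflated_bytes = 7 * 1024**3
seconds = 5.7

rate_gb_s = inflated_bytes / seconds / 1024**3   # ~1.23 GB/s
pfns_per_s = inflated_bytes / 4096 / seconds     # ~322k 4KB pages/s

assert 1.2 < rate_gb_s < 1.3
assert 300_000 < pfns_per_s < 330_000
```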
On Wed, Mar 23, 2016 at 02:35:42PM +, Li, Liang Z wrote:
>> >No special purpose. Maybe it's caused by the email client. I didn't
>> >find the character in the original doc.
>> >
>>
>> https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00715.html
>>
>> You could take a look at this link,
On 03/23/2016 01:18 AM, Li, Liang Z wrote:
>>>
>>> >From guest's point of view, there are some pages currently not used by
>>
>> I see in your original RFC patch and your RFC doc, this line starts with a
>> character '>'. Not sure this one has a special purpose?
>>
>
> No special purpose. Maybe i
> >No special purpose. Maybe it's caused by the email client. I didn't
> >find the character in the original doc.
> >
>
> https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00715.html
>
> You could take a look at this link, there is a '>' before From.
Yes, there is.
> >> >
> >> >6. Handl
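For what it's worth, the stray '>' before "From" puzzled over above is consistent with classic mbox "From-munging": mbox archives delimit messages with lines beginning "From ", so body lines starting with "From" get a '>' prepended when the mail is stored. A simplified sketch (mbox_escape_body is an invented name; real mbox variants differ in exactly which lines they escape):

```python
# Simplified mbox From-munging: escape body lines that start with "From"
# so they can't be mistaken for a message delimiter.
def mbox_escape_body(body):
    return "\n".join(
        (">" + line) if line.startswith("From") else line
        for line in body.split("\n")
    )

body = "From guest's point of view, there are some pages currently not used"
assert mbox_escape_body(body).startswith(">From guest's")
```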
> > Obviously, the virtio-balloon mechanism has a bigger performance
> > impact to the guest than the way we are trying to implement.
>
> Yeh, we should separately try and fix that; if it's that slow then people
> will be
> annoyed about it when they're just using it for balloon.
>
> > 3. Virtio
> > To make things easier, I wrote this doc about the possible designs and
> > my choices. Comments are welcome!
>
> Thanks for putting this together, and especially for taking the trouble to
> benchmark existing code paths!
>
> I think these numbers do show that there are gains to be had from me
Hi, Liang
This is a very clear documentation of your work; I appreciated it a lot. Below
are some of my personal opinions and questions.
On Tue, Mar 22, 2016 at 03:43:49PM +0800, Liang Li wrote:
>I have sent the RFC version patch set for live migration optimization
>by skipping processing the free
I have sent the RFC version patch set for live migration optimization
by skipping processing the free pages in the ram bulk stage and
received a lot of comments. The related threads can be found at:
https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00715.html
https://lists.gnu.org/archive/h