On 12/02/2010 03:31 AM, Takuya Yoshikawa wrote:
Thanks for the answers Avi, Juan,
Some FYI, (not about the bottleneck)
On Wed, 01 Dec 2010 14:35:57 +0200
Avi Kivity wrote:
> > > - how many dirty pages do we have to care about?
> >
> > default values and assuming 1Gigabit ethernet for ourselves ~9.5MB of
> > dirty pages to have only 30ms of downtime
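The budget quoted above follows from a simple relation: the final stop-and-copy phase must push every remaining dirty page through the channel, so the allowed downtime bounds the dirty working set. A minimal sketch of that relation (the function name and the raw 1 Gbit/s link rate are assumptions for illustration; the ~9.5MB figure above evidently folds in different effective-bandwidth assumptions):

```python
# Sketch: downtime ~= dirty_bytes / bandwidth, so
# dirty_bytes <= bandwidth * downtime_budget.

def max_dirty_bytes(bandwidth_bytes_per_s: float, max_downtime_s: float) -> float:
    """Largest dirty working set that still meets the downtime target."""
    return bandwidth_bytes_per_s * max_downtime_s

GBIT_PER_S = 1e9 / 8  # raw 1 Gigabit link, in bytes/s, ignoring protocol overhead
print(max_dirty_bytes(GBIT_PER_S, 0.030))  # roughly 3.75 MB for a 30ms budget
```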
On 11/30/2010 03:15 AM, Anthony Liguori wrote:
Sounds like the sort of thing you'd only see if you created a large
guest that was mostly unallocated and then tried to migrate.
Or if you migrate a guest that was booted using the memory= option.
Paolo
Avi Kivity wrote:
> On 12/01/2010 03:52 AM, Juan Quintela wrote:
>> > - 512GB guest is really the target?
>>
>> no, problems exist with smaller amounts of RAM. With a 16GB guest it is
>> trivial to get 1s stalls; with a 64GB guest, 3-4s; with more memory,
>> migration is flaky to say the least.
On 11/30/2010 04:50 PM, Anthony Liguori wrote:
That's what the patch set I was alluding to did. Or maybe I imagined
the whole thing.
No, it just split the main bitmap into three bitmaps. I'm suggesting
that we have the dirty interface have two implementations, one that
refers to the 8-bit
On 12/01/2010 03:52 AM, Juan Quintela wrote:
> - 512GB guest is really the target?
no, problems exist with smaller amounts of RAM. With a 16GB guest it is
trivial to get 1s stalls; with a 64GB guest, 3-4s; with more memory,
migration is flaky to say the least.
> - how much cpu time can we use for th
Anthony Liguori wrote:
> On 11/24/2010 09:16 AM, Paolo Bonzini wrote:
>> On 11/24/2010 12:14 PM, Michael S. Tsirkin wrote:
> buffered_file timer runs every 100ms. And we "try" to measure channel
> bandwidth from there. If we are not able to run the timer, all the
> calculations are wrong, and then stalls happen.
On 11/30/2010 03:47 PM, Anthony Liguori wrote:
On 11/30/2010 01:15 AM, Paolo Bonzini wrote:
On 11/30/2010 03:11 AM, Anthony Liguori wrote:
BufferedFile should hit the qemu_file_rate_limit check when the socket
buffer gets filled up.
The problem is that the file rate limit is not hit because
On 11/30/2010 02:47 PM, Anthony Liguori wrote:
On 11/30/2010 01:15 AM, Paolo Bonzini wrote:
On 11/30/2010 03:11 AM, Anthony Liguori wrote:
BufferedFile should hit the qemu_file_rate_limit check when the socket
buffer gets filled up.
The problem is that the file rate limit is not hit because
On 11/30/2010 04:17 PM, Anthony Liguori wrote:
What's the problem with burning that cpu? per guest page,
compressing takes less than sending. Is it just an issue of qemu
mutex hold time?
If you have a 512GB guest, then you have a 16MB dirty bitmap which
ends up being a 128MB dirty bitmap
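Anthony's numbers check out for 4KiB x86 pages: a 512GB guest has 128Mi pages, one bit per page gives a 16MiB bitmap, and the byte-per-page ("8-bit") representation mentioned upthread inflates that eightfold to 128MiB. A quick verification:

```python
# Check of the bitmap sizes quoted above, assuming 4KiB pages.
GiB = 1 << 30
MiB = 1 << 20
PAGE_SIZE = 4096

pages = 512 * GiB // PAGE_SIZE   # 134,217,728 pages in a 512GB guest
bitmap_1bit = pages // 8         # one bit per page
bitmap_1byte = pages             # one byte per page (the "8-bit" variant)

print(bitmap_1bit // MiB, bitmap_1byte // MiB)  # 16 128
```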
On 11/30/2010 08:27 AM, Avi Kivity wrote:
On 11/30/2010 04:17 PM, Anthony Liguori wrote:
What's the problem with burning that cpu? per guest page,
compressing takes less than sending. Is it just an issue of qemu
mutex hold time?
If you have a 512GB guest, then you have a 16MB dirty bitmap
On 11/30/2010 08:12 AM, Paolo Bonzini wrote:
On 11/30/2010 02:47 PM, Anthony Liguori wrote:
On 11/30/2010 01:15 AM, Paolo Bonzini wrote:
On 11/30/2010 03:11 AM, Anthony Liguori wrote:
BufferedFile should hit the qemu_file_rate_limit check when the socket
buffer gets filled up.
The problem is that the file rate limit is not hit because work is done elsewhere.
On 11/30/2010 03:11 AM, Anthony Liguori wrote:
BufferedFile should hit the qemu_file_rate_limit check when the socket
buffer gets filled up.
The problem is that the file rate limit is not hit because work is done
elsewhere. The rate can limit the bandwidth used and makes QEMU aware
that soc
On 11/30/2010 01:15 AM, Paolo Bonzini wrote:
On 11/30/2010 03:11 AM, Anthony Liguori wrote:
BufferedFile should hit the qemu_file_rate_limit check when the socket
buffer gets filled up.
The problem is that the file rate limit is not hit because work is
done elsewhere. The rate can limit the
On 11/30/2010 07:58 AM, Avi Kivity wrote:
On 11/30/2010 03:47 PM, Anthony Liguori wrote:
On 11/30/2010 01:15 AM, Paolo Bonzini wrote:
On 11/30/2010 03:11 AM, Anthony Liguori wrote:
BufferedFile should hit the qemu_file_rate_limit check when the socket
buffer gets filled up.
The problem is that the file rate limit is not hit because work is done elsewhere.
Avi Kivity wrote:
> On 11/30/2010 04:17 PM, Anthony Liguori wrote:
>>> What's the problem with burning that cpu? per guest page,
>>> compressing takes less than sending. Is it just an issue of qemu
>>> mutex hold time?
>>
>>
>> If you have a 512GB guest, then you have a 16MB dirty bitmap which
>
Takuya Yoshikawa wrote:
> On Tue, 30 Nov 2010 16:27:13 +0200
> Avi Kivity wrote:
> Is anyone profiling these dirty bitmap things?
I am.
> - 512GB guest is really the target?
no, problems exist with smaller amounts of RAM. With a 16GB guest it is
trivial to get 1s stalls; with a 64GB guest, 3-4s
On Wed, 01 Dec 2010 02:52:08 +0100
Juan Quintela wrote:
> > Since we are planning to do some profiling for these, taking into account
> > Kemari, can you please share this information?
>
> If you see the 0/10 email with this setup, you can see how much time we
> are spending on stuff. Just now
Anthony Liguori wrote:
> On 11/30/2010 08:12 AM, Paolo Bonzini wrote:
>> On 11/30/2010 02:47 PM, Anthony Liguori wrote:
>>> On 11/30/2010 01:15 AM, Paolo Bonzini wrote:
> Juan's patch, IIUC, does the following: If you've been iterating in a
> tight loop, return to the main loop for *one* iteration
On Tue, 30 Nov 2010 16:27:13 +0200
Avi Kivity wrote:
> On 11/30/2010 04:17 PM, Anthony Liguori wrote:
> >> What's the problem with burning that cpu? per guest page,
> >> compressing takes less than sending. Is it just an issue of qemu
> >> mutex hold time?
> >
> >
> > If you have a 512GB guest
Anthony Liguori wrote:
> On 11/23/2010 05:03 PM, Juan Quintela wrote:
>> From: Juan Quintela
>>
>> checking every 64 pages is a random magic number as good as any other.
>> We don't want to test too many times, but on the other hand,
>> qemu_get_clock_ns() is not so expensive either.
>>
>> Signed-off-by: Juan Quintela
On 11/24/2010 09:16 AM, Paolo Bonzini wrote:
On 11/24/2010 12:14 PM, Michael S. Tsirkin wrote:
> buffered_file timer runs every 100ms. And we "try" to measure channel
> bandwidth from there. If we are not able to run the timer, all the
> calculations are wrong, and then stalls happen.
So the problem is the timer in the buffered file
On 11/29/2010 08:23 PM, Juan Quintela wrote:
Anthony Liguori wrote:
On 11/23/2010 05:03 PM, Juan Quintela wrote:
From: Juan Quintela
checking every 64 pages is a random magic number as good as any other.
We don't want to test too many times, but on the other hand,
qemu_get_clock_ns()
On Wed, Nov 24, 2010 at 04:16:04PM +0100, Paolo Bonzini wrote:
> On 11/24/2010 12:14 PM, Michael S. Tsirkin wrote:
> >>> buffered_file timer runs every 100ms. And we "try" to measure channel
> >>> bandwidth from there. If we are not able to run the timer, all the
> >>> calculations are wrong, and then stalls happen.
On 11/24/2010 12:14 PM, Michael S. Tsirkin wrote:
> buffered_file timer runs every 100ms. And we "try" to measure channel
> bandwidth from there. If we are not able to run the timer, all the
> calculations are wrong, and then stalls happen.
So the problem is the timer in the buffered file
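The failure mode being described can be sketched as follows: the bandwidth estimate divides bytes written by the *nominal* tick period, so when the main loop starves the 100ms timer the estimate, and hence the rate limiting built on it, goes wrong. Class and method names here are hypothetical, not QEMU's actual buffered_file code:

```python
class BufferedChannelModel:
    """Toy model of a bandwidth estimate driven by a periodic timer."""
    TICK_S = 0.100  # nominal 100ms timer period

    def __init__(self):
        self.bytes_since_tick = 0
        self.bandwidth = 0.0  # bytes/s estimate

    def send(self, nbytes):
        self.bytes_since_tick += nbytes

    def on_timer(self):
        # Assumes exactly TICK_S elapsed since the last tick. If the main
        # loop delayed the timer, this overestimates bandwidth, the rate
        # limiter admits too much data, and stalls follow.
        self.bandwidth = self.bytes_since_tick / self.TICK_S
        self.bytes_since_tick = 0

ch = BufferedChannelModel()
ch.send(1_000_000)   # 1MB written during the tick
ch.on_timer()        # timer ran on schedule
print(ch.bandwidth)  # 1e7 bytes/s
```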
On Wed, Nov 24, 2010 at 12:01:51PM +0100, Juan Quintela wrote:
> "Michael S. Tsirkin" wrote:
> > On Wed, Nov 24, 2010 at 12:03:06AM +0100, Juan Quintela wrote:
> >> From: Juan Quintela
> >>
> >> checking every 64 pages is a random magic number as good as any other.
> >> We don't want to test too m
"Michael S. Tsirkin" wrote:
> On Wed, Nov 24, 2010 at 12:03:06AM +0100, Juan Quintela wrote:
>> From: Juan Quintela
>>
>> checking every 64 pages is a random magic number as good as any other.
>> We don't want to test too many times, but on the other hand,
>> qemu_get_clock_ns() is not so expensive either.
On Wed, Nov 24, 2010 at 12:03:06AM +0100, Juan Quintela wrote:
> From: Juan Quintela
>
> checking every 64 pages is a random magic number as good as any other.
> We don't want to test too many times, but on the other hand,
> qemu_get_clock_ns() is not so expensive either.
>
Could you please explain
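The patch under review amortizes the cost of qemu_get_clock_ns() by consulting the clock only once per 64 pages instead of per page. A sketch of that pattern (function and parameter names are invented for illustration; this is not the actual QEMU code):

```python
CHECK_EVERY = 64  # the "magic number" from the patch

def copy_pages(pages, deadline_ns, now_ns):
    """Send pages until the deadline, polling the clock only every
    CHECK_EVERY pages to amortize its cost."""
    sent = 0
    for i, _page in enumerate(pages):
        # ... transfer the page over the migration channel here ...
        sent += 1
        if (i + 1) % CHECK_EVERY == 0 and now_ns() >= deadline_ns:
            break
    return sent

# With a clock already past the deadline, a full batch of 64 pages is
# still sent before the loop notices:
print(copy_pages(range(1000), deadline_ns=0, now_ns=lambda: 1))  # 64
```

The trade-off the commit message describes is visible in the condition: a larger CHECK_EVERY means fewer clock reads but a coarser deadline check, so the loop can overshoot its time budget by up to one batch.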
28 matches