(2011/12/26 8:35), Paul Mackerras wrote:
On Fri, Dec 23, 2011 at 02:23:30PM +0100, Alexander Graf wrote:
So if I read things correctly, this is the only case you're setting
pages as dirty. What if you have the following:
guest adds HTAB entry x
guest writes to page mapped by x
guest r
Hi, sorry for sending from my personal account.
The following series is all from me:
From: Takuya Yoshikawa
This is the 3rd version of "moving dirty bitmaps to user space".
From this version, we add the x86, ppc, and asm-generic people to the CC list.
[To KVM people]
Sorry for being
As we can easily expect, the time needed to
allocate a bitmap is completely eliminated. Furthermore, we avoid the
TLB flush triggered by vmalloc(), which brings additional gains. In my test,
the improved ioctl was about 4 to 10 times faster than the original one
for clean slots.
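The gain is structural: the bitmap no longer has to be vmalloc()ed on every
GET_DIRTY_LOG call. A minimal sketch of the double-buffering idea, with
hypothetical names (an illustration, not the patch's actual code):

/*
 * Keep two pre-allocated bitmaps per slot and flip between them,
 * instead of vmalloc()ing a fresh one (and paying a TLB flush)
 * on every GET_DIRTY_LOG call.
 */
struct dirty_log_slot {
	unsigned long *bitmaps[2];	/* both allocated once, up front */
	int active;			/* index the guest currently dirties */
};

static unsigned long *switch_dirty_bitmap(struct dirty_log_slot *s)
{
	unsigned long *filled = s->bitmaps[s->active];

	s->active ^= 1;		/* new dirty bits go to the other buffer */
	return filled;		/* hand the filled bitmap to the caller */
}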
Signed-off-by: Takuya Yoshikawa
right before the get_dirty_log(). So we use that point to update is_dirty.
Signed-off-by: Takuya Yoshikawa
Signed-off-by: Fernando Luis Vazquez Cao
CC: Avi Kivity
CC: Alexander Graf
---
 arch/ia64/kvm/kvm-ia64.c  |   11 +++
 arch/powerpc/kvm/book3s.c |    9 -
 arch/x86/kvm/
We will change the vmalloc() and vfree() to do_mmap() and do_munmap() later.
This patch makes that change easier and cleans up the code.
Signed-off-by: Takuya Yoshikawa
Signed-off-by: Fernando Luis Vazquez Cao
---
 virt/kvm/kvm_main.c |   27 ++++++++++++++++++++-------
 1 files changed, 20 insertions(+), 7 deletions(-)
During the work on KVM's dirty page logging optimization, we encountered
the need for copy_in_user() for 32-bit x86 and ppc: these will be used for
manipulating dirty bitmaps in user space.
So we implement copy_in_user() for 32-bit using the existing generic copy-user
helpers.
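One shape such an implementation could take (a sketch under assumptions: the
bounce-buffer approach and the name copy_in_user_sketch are illustrative, not
the patch itself):

#include <linux/kernel.h>	/* min_t() */
#include <linux/uaccess.h>	/* copy_from_user(), copy_to_user() */

/*
 * Build a user-to-user copy out of the two one-directional helpers
 * by bouncing through a small kernel buffer.  Returns the number of
 * bytes NOT copied, following the copy_*_user() convention.
 */
static unsigned long copy_in_user_sketch(void __user *to,
					 const void __user *from,
					 unsigned long n)
{
	char buf[64];

	while (n) {
		unsigned long chunk = min_t(unsigned long, n, sizeof(buf));

		if (copy_from_user(buf, from, chunk))
			break;
		if (copy_to_user(to, buf, chunk))
			break;
		from += chunk;
		to += chunk;
		n -= chunk;
	}
	return n;
}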
Signed-off-by: Takuya Yoshikawa
functions.
Note: there is one restriction with this macro: bitmaps must be 64-bit
aligned (see the comment in this patch).
Signed-off-by: Takuya Yoshikawa
Signed-off-by: Fernando Luis Vazquez Cao
CC: Avi Kivity
Cc: Thomas Gleixner
CC: Ingo Molnar
Cc: "H. Peter Anvin"
---
arch/
During the work on KVM's dirty page logging optimization, we encountered
the need for copy_in_user() for 32-bit ppc and x86: these will be used for
manipulating dirty bitmaps in user space.
So we implement copy_in_user() for 32-bit with __copy_tofrom_user().
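Since __copy_tofrom_user() already copies between two user buffers on powerpc,
the wrapper can be thin. A minimal sketch, assuming the access_ok() calling
convention of that era (not necessarily the exact patch code):

#include <asm/uaccess.h>

static inline unsigned long copy_in_user(void __user *to,
					 const void __user *from,
					 unsigned long n)
{
	if (likely(access_ok(VERIFY_WRITE, to, n) &&
		   access_ok(VERIFY_READ, from, n)))
		return __copy_tofrom_user(to, from, n);
	return n;	/* report everything as not copied */
}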
Signed-off-by: Takuya Yoshikawa
vhost.c, in which the author
implemented set_bit_to_user() locally using inefficient functions: see the TODO
at the top of that file.
This kind of need is probably common in the virtualization area.
So we introduce a function set_bit_user_non_atomic().
Signed-off-by: Takuya Yoshikawa
Signed-off-by: Fernando Luis Vazquez Cao
user space, we want to update the bitmaps in user space directly.
To achieve this, little-endian (le) bit offsets used with the *_user()
functions help us a lot.
So let us reuse the le bit offset calculation by defining it as a new
macro: generic_le_bit_offset().
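A sketch of the idea behind the macro (the exact definition in the patch may
differ; the big-endian case mirrors the kernel's usual little-endian bitop
swizzle):

/*
 * On little endian the le bit offset is just the bit number; on big
 * endian the byte order within each word must be swizzled.
 */
#if defined(__LITTLE_ENDIAN)
#define generic_le_bit_offset(nr)	(nr)
#else
#define generic_le_bit_offset(nr)	((nr) ^ ((BITS_PER_LONG - 1) & ~0x7))
#endif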
Signed-off-by: Takuya Yoshikawa
CC: Arnd Bergmann
This is so as not to break the build for architectures other than x86 and ppc.
Signed-off-by: Takuya Yoshikawa
Signed-off-by: Fernando Luis Vazquez Cao
---
 arch/ia64/include/asm/kvm_host.h    |    5 +
 arch/powerpc/include/asm/kvm_host.h |    6 ++
 arch/s390/include/asm/kvm_host.h    |    6
much because it's using a different place to store dirty logs
rather than the dirty bitmaps of memory slots: all we have to change
are the sync and get of the dirty log, so we don't need set_bit_user()-like
functions for ia64.
Signed-off-by: Takuya Yoshikawa
Signed-off-by: Fernando Luis V
the documentation in this patch for precise explanations.
About the performance improvement: the most important feature of the switch
API is its lightness. In our test, this showed up as improved responsiveness
for GUI operations.
Signed-off-by: Takuya Yoshikawa
Signed-off-by: Fernando Luis
We use the new API for lightweight dirty log access if KVM supports it.
This conflicts with Marcelo's patches, so please take this as a sample patch.
Signed-off-by: Takuya Yoshikawa
---
 kvm/include/linux/kvm.h |   11 ++
 qemu-kvm.c              |
On Tue, 04 May 2010 19:08:23 +0300
Avi Kivity wrote:
> On 05/04/2010 06:03 PM, Arnd Bergmann wrote:
> > On Tuesday 04 May 2010, Takuya Yoshikawa wrote:
...
> >> So let us use the le bit offset calculation part by defining it as a new
> >> macro: generic_le_bit_offset()
Yes, I'm just using it in kernel space: qemu has its own endian-related helpers.
So if you allow us to place this macro in asm-generic/bitops/*, it will help us.
No problem at all then. Thanks for the explanation.
Acked-by: Arnd Bergmann
Thank you both. I will add your Acked-by from now on!
(2010/05/06 22:38), Arnd Bergmann wrote:
On Wednesday 05 May 2010, Takuya Yoshikawa wrote:
That's why the bitmaps are defined as little-endian and u64-aligned, even on
big-endian 32-bit systems. Little-endian bitmaps are word-size agnostic,
and u64 alignment ensures w
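The word-size agnosticism is easy to see: in a little-endian bitmap, bit nr
always lives at byte nr / 8, bit nr % 8, whatever the access width. A tiny
illustration (hypothetical helper, not from the patch):

/* Byte-wise test of a bit in a little-endian bitmap. */
static inline int test_le_bit_bytewise(int nr, const u8 *bitmap)
{
	return (bitmap[nr / 8] >> (nr % 8)) & 1;
}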
                     get.org   get.opt   switch.opt
slots[7].len=32768    278379     66398        64024
slots[8].len=32768    181246       270          160
slots[7].len=32768    263961     64673        64494
slots[8].len=32768    181655       265          160
slots[7].len=32768    263736     64701        64610
slots[8].len=32768    182785       267          160
slots[7].len=32768    260925     65360        65042
slots[8].len=
(2010/05/11 12:43), Marcelo Tosatti wrote:
On Tue, May 04, 2010 at 10:08:21PM +0900, Takuya Yoshikawa wrote:
+How to Get
+
+Before calling this, you have to set the slot member of kvm_user_dirty_log
+to indicate the target memory slot.
+
+struct kvm_user_dirty_log {
+ __u32 slot
In a usual workload, the number of dirty pages varies a lot from iteration
to iteration, and we should gain a lot in relatively clean cases.
Can you post such a test, for an idle large guest?
OK, I'll do that!
Results of the "low workload test" (running top during migration) first,
4GB guest
picked u
One alternative would be:
KVM_SWITCH_DIRTY_LOG passing the address of a bitmap. If the active
bitmap was clean, it returns 0, no switch performed. If the active
bitmap was dirty, the kernel switches to the new bitmap and returns 1.
And the responsibility for cleaning the new bitmap could also b
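From userspace, the alternative could be driven roughly as below. This is a
sketch under assumptions: only the slot member of kvm_user_dirty_log is
confirmed by the thread, so the dirty_bitmap field, the padding, and the
return convention are guesses, not a final ABI:

#include <linux/types.h>
#include <sys/ioctl.h>

/* Layout assumed for illustration; only 'slot' appears in the thread. */
struct kvm_user_dirty_log {
	__u32 slot;
	__u32 pad;
	__u64 dirty_bitmap;	/* userspace pointer to a fresh, clean bitmap */
};

/* Returns 0 (active bitmap clean, no switch) or 1 (switched to the new one). */
static int switch_dirty_log(int vm_fd, __u32 slot, void *fresh_bitmap)
{
	struct kvm_user_dirty_log log = {
		.slot = slot,
		.dirty_bitmap = (unsigned long)fresh_bitmap,
	};

	return ioctl(vm_fd, KVM_SWITCH_DIRTY_LOG, &log);
}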
 	r = 0;
@@ -1195,11 +1232,16 @@ void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
 	gfn = unalias_gfn(kvm, gfn);
 	memslot = gfn_to_memslot_unaliased(kvm, gfn);
 	if (memslot && memslot->dirty_bitmap) {
-		unsigned long rel_gfn = gfn - memslot->base_gfn;
+
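The '+' side of the hunk is cut off above. Judging from the rest of the
series, the replacement plausibly sets the bit in the user-space bitmap via
the new helper; this is a reconstruction, not the actual patch text:

+		unsigned long rel_gfn = gfn - memslot->base_gfn;
+
+		/* the bitmap now lives in user space */
+		set_bit_user_non_atomic(generic_le_bit_offset(rel_gfn),
+					memslot->dirty_bitmap);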
[To ppc people]
Hi, Benjamin, Paul, Alex,
Please see patches 6 and 7 of 12. First, I am sorry that I have not tested
these yet; in that sense, they may not be of sufficient quality for detailed
review. But I would be happy to get any comments from you.
Alex, could you help me? Though I have a pl
+static inline int set_bit_user_non_atomic(int nr, void __user *addr)
+{
+ u8 __user *p;
+ u8 val;
+
+ p = (u8 __user *)((unsigned long)addr + nr / BITS_PER_BYTE);
Does C do the + or the / first? Either way, I'd like to see brackets here :)
OK, I'll change it like that! I li
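For reference, a completed sketch of the helper with the explicit parentheses
requested above; the get_user()/put_user() body is extrapolated from the
visible prologue and is an assumption, not the final patch:

#include <linux/bitops.h>	/* BITS_PER_BYTE */
#include <linux/uaccess.h>	/* get_user(), put_user() */

static inline int set_bit_user_non_atomic(int nr, void __user *addr)
{
	u8 __user *p;
	u8 val;

	p = (u8 __user *)((unsigned long)addr + (nr / BITS_PER_BYTE));
	if (get_user(val, p))
		return -EFAULT;

	val |= 1U << (nr % BITS_PER_BYTE);
	if (put_user(val, p))
		return -EFAULT;

	return 0;
}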
User-allocated bitmaps have the advantage of reducing pinned memory.
However, we have plenty more pinned memory allocated in memory slots, so
by themselves, user-allocated bitmaps don't justify this change.
In that sense, what do you think about the question I sent last week?
=== REPOST 1 ===