RFC: move the dirty page tracking to use dirty bit
Well, I was bored this morning and have had this idea for a while; I didn't test it
too much. First I want to hear what people think?
Thanks.
Izik Eidus (2):
kvm: fix dirty bit tracking for slots with large pages
kvm: change the dirty page tracking
When a slot is already allocated and is asked to be tracked, we need to break its
large pages.
This code flushes the mmu when someone asks a slot to start dirty bit tracking.
Signed-off-by: Izik Eidus
---
virt/kvm/kvm_main.c |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a
Right now the dirty page tracking works with the help of page faults: when we
want to track a page for being dirty, we write-protect it and mark it dirty
when we get a write page fault. This code moves to looking at the dirty bit
of the spte.
Signed-off-by: Izik Eidus
---
arch/ia64/kvm/kvm
Few quick thoughts:
+void kvm_arch_get_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
+{
+}
+
long kvm_arch_vm_ioctl(struct file *filp,
unsigned int ioctl, unsigned long arg)
{
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
Avi Kivity wrote:
Izik Eidus wrote:
When a slot is already allocated and is asked to be tracked we need
to break the
large pages.
This code flushes the mmu when someone asks a slot to start dirty bit
tracking.
Signed-off-by: Izik Eidus
---
virt/kvm/kvm_main.c |2 ++
1 files changed, 2
RFC: move to dirty bit tracking using the page table dirty bit (v2)
(BTW, it seems like the vnc code in mainline has some bugs; I wasted 2
hours debugging a rendering bug that I thought was related to this series, but
it turned out not to be related.)
Thanks.
Izik Eidus (2):
kvm: fix dirty
When a slot is already allocated and is asked to be tracked, we need to break its
large pages.
This code flushes the mmu when someone asks a slot to start dirty bit tracking.
Signed-off-by: Izik Eidus
---
virt/kvm/kvm_main.c |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a
the dirty bit
of the spte.
Signed-off-by: Izik Eidus
---
arch/ia64/kvm/kvm-ia64.c|4 +++
arch/powerpc/kvm/powerpc.c |4 +++
arch/s390/kvm/kvm-s390.c|4 +++
arch/x86/include/asm/kvm_host.h |3 ++
arch/x86/kvm/mmu.c | 42
Izik Eidus wrote:
+static int vmx_dirty_bit_support(void)
+{
+ return false;
+}
+
Again, idiotic bug: this should be:
return tdp_enable == false;
...
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kerne
Marcelo Tosatti wrote:
On Wed, Jun 10, 2009 at 07:23:25PM +0300, Izik Eidus wrote:
Change the dirty page tracking to work with the dirty bit instead of page faults.
Right now the dirty page tracking works with the help of page faults: when we
want to track a page for being dirty, we write protect
Izik Eidus wrote:
Marcelo Tosatti wrote:
/* Free page dirty bitmap if unneeded */
-if (!(new.flags & KVM_MEM_LOG_DIRTY_PAGES))
+if (!(new.flags & KVM_MEM_LOG_DIRTY_PAGES)) {
new.dirty_bitmap = NULL;
+if (old.flags & KVM_MEM_LOG
Ulrich Drepper wrote:
Izik Eidus wrote:
+ if (!kvm_x86_ops->dirty_bit_support()) {
+ spin_lock(&kvm->mmu_lock);
+ /* remove_write_access() flu
Avi Kivity wrote:
Izik Eidus wrote:
change the dirty page tracking to work with the dirty bit instead of
page faults.
Right now the dirty page tracking works with the help of page faults:
when we
want to track a page for being dirty, we write-protect it and mark
it dirty
when we get a write page
Marcelo Tosatti wrote:
What I'm saying is with shadow and NPT (I believe) you can mark a spte
writable but not dirty, which gives you the ability to know whether
certain pages have been dirtied.
Isn't this what this patch is doing?
Should be OK to do that with shadow, as long as the gpte i
Marcelo Tosatti wrote:
On Thu, Jun 11, 2009 at 02:27:46PM +0300, Izik Eidus wrote:
Marcelo Tosatti wrote:
What i'm saying is with shadow and NPT (i believe) you can mark a spte
writable but not dirty, which gives you the ability to know whether
certain pages have been di
scanning in KSM.
Currently, kvm_mmu_notifier_dirty_update() returns 0 if and only if Intel EPT is
enabled, to indicate that the dirty bits of the underlying sptes are not updated by
hardware.
Did you test with each of EPT, NPT and shadow?
Signed-off-by: Nai Xia
Acked-by: Izik Eidus
---
arch/x86
On 6/22/2011 1:43 PM, Avi Kivity wrote:
On 06/21/2011 04:32 PM, Nai Xia wrote:
Introduced kvm_mmu_notifier_test_and_clear_dirty(),
kvm_mmu_notifier_dirty_update()
and their mmu_notifier interfaces to support KSM dirty bit tracking,
which brings
significant performance gain in volatile pages sc
On 6/22/2011 2:10 PM, Avi Kivity wrote:
On 06/22/2011 02:05 PM, Izik Eidus wrote:
+	spte = rmap_next(kvm, rmapp, NULL);
+	while (spte) {
+		int _dirty;
+		u64 _spte = *spte;
+		BUG_ON(!(_spte & PT_PRESENT_MASK));
+		_dirty = _spte & PT_DIRTY_MASK;
+
On 6/22/2011 2:33 PM, Nai Xia wrote:
On Wednesday 22 June 2011 19:28:08 Avi Kivity wrote:
On 06/22/2011 02:24 PM, Avi Kivity wrote:
On 06/22/2011 02:19 PM, Izik Eidus wrote:
On 6/22/2011 2:10 PM, Avi Kivity wrote:
On 06/22/2011 02:05 PM, Izik Eidus wrote:
+spte = rmap_next(kvm, rmapp
If we don't flush the smp tlb don't we risk that we'll insert pages in
the unstable tree that are volatile just because the dirty bit didn't
get set again on the spte?
Yes, this is the trade-off we take; the unstable tree will be flushed
anyway -
so this is nothing that won't be recovered ve
Hi,
It looks like commit 6bdb913f0a70a4dfb7f066fb15e2d6f960701d00 breaks the
semantics of set_pte_at_notify.
The change of calling first mmu_notifier_invalidate_range_start, then
set_pte_at_notify, and then mmu_notifier_invalidate_range_end
not only increases the amount of locks kvm have t
This patch is not for inclusion, just an RFC.
Thanks.
>From 1297b86aa257100b3d819df9f9f0932bf4f7f49d Mon Sep 17 00:00:00 2001
From: Izik Eidus
Date: Tue, 28 Jul 2009 19:14:26 +0300
Subject: [PATCH] kvm userspace: ksm support
RFC for ksm support in kvm userspace.
thanks
Signed-off-by: Izik Ei
Anthony Liguori wrote:
Izik Eidus wrote:
This patch is not for inclusion, just an RFC.
The madvise() interface looks really nice :-)
Thanks.
From 1297b86aa257100b3d819df9f9f0932bf4f7f49d Mon Sep 17 00:00:00 2001
From: Izik Eidus
Date: Tue, 28 Jul 2009 19:14:26 +0300
Subject: [PATCH] kvm
Marcelo Tosatti wrote:
From: Izik Eidus
First check if the list is empty before attempting to look at list
entries.
Signed-off-by: Izik Eidus
Signed-off-by: Marcelo Tosatti
Index: kvm/arch/x86/kvm/mmu.c
===
--- kvm.orig/arch
Marcelo Tosatti wrote:
Remove the bogus n_free_mmu_pages assignment from alloc_mmu_pages.
It breaks accounting of mmu pages, since n_free_mmu_pages is modified
but the real number of pages remains the same.
Signed-off-by: Marcelo Tosatti
Index: kvm/arch/x86/kvm/mmu.c
=
that qemu used to have something called phys_ram_base;
in that case it would be just making madvise on phys_ram_base with the
size of phys_ram_size)
(and adding the necessary kernel changes of course)?
On Tuesday 28 July 2009 11:39:59 am Izik Eidus wrote:
This patch is not for
Brian Jackson wrote:
On Monday 03 August 2009 01:09:38 pm Izik Eidus wrote:
Brian Jackson wrote:
If someone wanted to play around with ksm in qemu-kvm-0.x.x would it be
as simple as adding the below additions to kvm_setup_guest_memory in
kvm-all.c
qemu-kvm-0.x.x doesn't tell me
Hi,
The following series adds ksm support to the kvm mmu.
Thanks.
increase the
mapcount when mapping page into shadow page table entry,
so when comparing pagecount against mapcount, you have no
reliable result.)
Signed-off-by: Izik Eidus
---
arch/x86/kvm/mmu.c |7 ++-
1 files changed, 2 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch
This flag notes that the host physical page we are pointing to from
the spte is write-protected, and therefore we can't change its access
to writable unless we run get_user_pages(write = 1).
(this is needed for change_pte support in kvm)
Signed-off-by: Izik Eidus
---
arch/x86/kvm/mmu.c
This is needed for kvm if it wants ksm to directly map pages into its
shadow page tables.
Signed-off-by: Izik Eidus
---
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kvm/mmu.c | 70 ++
virt/kvm/kvm_main.c | 14
3
Marcelo Tosatti wrote:
On Thu, Sep 10, 2009 at 07:38:57PM +0300, Izik Eidus wrote:
This flag notes that the host physical page we are pointing to from
the spte is write-protected, and therefore we can't change its access
to writable unless we run get_user_pages(write = 1).
(this is needed
Marcelo Tosatti wrote:
On Thu, Sep 10, 2009 at 07:38:58PM +0300, Izik Eidus wrote:
This is needed for kvm if it wants ksm to directly map pages into its
shadow page tables.
Signed-off-by: Izik Eidus
---
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kvm/mmu.c | 70
Marcelo Tosatti wrote:
On Sat, Sep 12, 2009 at 09:41:10AM +0300, Izik Eidus wrote:
Marcelo Tosatti wrote:
On Thu, Sep 10, 2009 at 07:38:58PM +0300, Izik Eidus wrote:
This is needed for kvm if it wants ksm to directly map pages into its
shadow page tables.
Signed-off-by: Izik
Marcelo Tosatti wrote:
Why can't you use the writable bit in the spte? So that you can only
sync a writeable spte if it was writeable before, in sync_page?
I could, but then we would add overhead for read-only gptes that become
writable in the guest...
If you prefer to fault on the syncing
Hope I fixed everything I was asked to...
please tell me if I forgot anything.
Izik Eidus (3):
kvm: dont hold pagecount reference for mapped sptes pages
add SPTE_HOST_WRITEABLE flag to the shadow ptes
add support for change_pte mmu notifiers
arch/x86/include/asm/kvm_host.h |1 +
arch/x86
increase the
mapcount when mapping page into shadow page table entry,
so when comparing pagecount against mapcount, you have no
reliable result.)
Signed-off-by: Izik Eidus
---
arch/x86/kvm/mmu.c |7 ++-
1 files changed, 2 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch
This flag notes that the host physical page we are pointing to from
the spte is write-protected, and therefore we can't change its access
to writable unless we run get_user_pages(write = 1).
(this is needed for change_pte support in kvm)
Signed-off-by: Izik Eidus
---
arch/x86/kvm/mmu.c
This is needed for kvm if it wants ksm to directly map pages into its
shadow page tables.
Signed-off-by: Izik Eidus
---
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kvm/mmu.c | 64 +-
virt/kvm/kvm_main.c | 14
3
Izik Eidus wrote:
This is needed for kvm if it wants ksm to directly map pages into its
shadow page tables.
Signed-off-by: Izik Eidus
---
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kvm/mmu.c | 64 +-
virt/kvm/kvm_main.c
Change from v2: remove an unused if.
Thanks.
Izik Eidus (3):
kvm: dont hold pagecount reference for mapped sptes pages
add SPTE_HOST_WRITEABLE flag to the shadow ptes
add support for change_pte mmu notifiers
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kvm/mmu.c | 84
increase the
mapcount when mapping page into shadow page table entry,
so when comparing pagecount against mapcount, you have no
reliable result.)
Signed-off-by: Izik Eidus
---
arch/x86/kvm/mmu.c |7 ++-
1 files changed, 2 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch
This flag notes that the host physical page we are pointing to from
the spte is write-protected, and therefore we can't change its access
to writable unless we run get_user_pages(write = 1).
(this is needed for change_pte support in kvm)
Signed-off-by: Izik Eidus
---
arch/x86/kvm/mmu.c
This is needed for kvm if it wants ksm to directly map pages into its
shadow page tables.
Signed-off-by: Izik Eidus
---
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kvm/mmu.c | 62 +-
virt/kvm/kvm_main.c | 14 +
3
>From a8ca226de8efb4f0447e4ef87bf034cf18996745 Mon Sep 17 00:00:00 2001
From: Izik Eidus
Date: Sun, 4 Oct 2009 14:01:31 +0200
Subject: [PATCH] kvm-userspace: add ksm support
Call madvise(MADV_MERGEABLE) on the memory allocations.
Signed-off-by: Izik Eidus
---
exec.c |3 +++
1 fi
code that is treated as not aliased.
Signed-off-by: Izik Eidus <[EMAIL PROTECTED]>
---
arch/ia64/include/asm/kvm_host.h |1 +
arch/ia64/kvm/kvm-ia64.c |5 --
arch/s390/include/asm/kvm_host.h |1 +
arch/s390/kvm/kvm-s390.c |5 --
arch/x86/kvm/mmu.c |
Marcelo Tosatti wrote:
Hi Izik,
On Thu, Sep 04, 2008 at 05:13:20PM +0300, izik eidus wrote:
+ struct kvm_memory_slot *alias_slot = &kvm->memslots[i];
+
+ if (alias_slot->base_gfn == slot->base_gfn)
+ return 1;
+ }
+
Marcelo Tosatti wrote:
Hi Izik,
On Thu, Sep 04, 2008 at 05:13:20PM +0300, izik eidus wrote:
+ struct kvm_memory_slot *alias_slot = &kvm->memslots[i];
+
+ if (alias_slot->base_gfn == slot->base_gfn)
+ return 1;
+ }
+
Marcelo Tosatti wrote:
Hi Izik,
On Thu, Sep 04, 2008 at 05:13:20PM +0300, izik eidus wrote:
+struct kvm_memory_slot *alias_slot = &kvm->memslots[i];
+
+if (alias_slot->base_gfn == slot->base_gfn)
+return 1;
+}
+return 0;
+}
+
I have sent a patch that removes the aliasing and I will resend it again,
but until then this patch should be applied as it fixes a kernel panic.
>From 61a13744e2367572f3e27ab5c0cce6e080e94d67 Mon Sep 17 00:00:00 2001
From: Izik Eidus <[EMAIL PROTECTED]>
Date: Fri, 3 Oct 2008 17:40:32 +030
Avi Kivity wrote:
LRU typically makes fairly bad decisions since it throws most of the
information it has away. I recommend looking up LRU-K and similar
algorithms, just to get a feel for this; it is basically the simplest
possible algorithm short of random selection.
Note that Linux doesn't e
Andrew Morton wrote:
On Tue, 11 Nov 2008 15:21:37 +0200 Izik Eidus <[EMAIL PROTECTED]> wrote:
KSM is a linux driver that allows dynamically sharing identical memory pages
between one or more processes.
Unlike traditional page sharing that is done at the allocation of the
memory, ksm
Avi Kivity wrote:
Andrew Morton wrote:
The whole approach seems wrong to me. The kernel lost track of these
pages and then we run around post-facto trying to fix that up again.
Please explain (for the changelog) why the kernel cannot get this right
via the usual sharing, refcounting and COWin
From: Izik Eidus <[EMAIL PROTECTED]>
This function is useful for cases where you want to compare a page and know
that its value won't change while you compare it.
It works by walking over the whole rmap of a page
and marking every pte related to the page as write-protected.
The odirec
Andrew Morton wrote:
On Tue, 11 Nov 2008 20:48:16 +0200
Avi Kivity <[EMAIL PROTECTED]> wrote:
Andrew Morton wrote:
The whole approach seems wrong to me. The kernel lost track of these
pages and then we run around post-facto trying to fix that up again.
Please explain (for the changel
From: Izik Eidus <[EMAIL PROTECTED]>
ksm is a driver that allows merging identical pages between one or more
applications in a way invisible to the applications that use it.
Pages that are merged are marked as readonly and are COWed when any application
tries to change them.
ksm works by w
From: Izik Eidus <[EMAIL PROTECTED]>
This function is needed in cases where you want to change the userspace
virtual mapping to a different physical page;
KSM needs this for merging identical pages.
It works by removing the old page from the rmap and
calling put_page on it,
Andrew Morton wrote:
On Tue, 11 Nov 2008 21:18:23 +0200
Izik Eidus <[EMAIL PROTECTED]> wrote:
hm.
There has been the occasional discussion about identifying all-zeroes
pages and scavenging them, repointing them at the zero page. Could
this infrastructure be used for that? (And ho
From: Izik Eidus <[EMAIL PROTECTED]>
This function is an optimization for kvm/users of mmu_notifiers for COW
pages. It is useful for kvm when ksm is used because it allows kvm
not to have to receive a VMEXIT and only then map the shared page into
the mmu shadow pages, but instead map it directly
Izik Eidus wrote:
Andrew Morton wrote:
On Tue, 11 Nov 2008 21:18:23 +0200
Izik Eidus <[EMAIL PROTECTED]> wrote:
hm.
There has been the occasional discussion about identifying all-zeroes
pages and scavenging them, repointing them at the zero page. Could
this infrastructure be used fo
Andrew Morton wrote:
On Tue, 11 Nov 2008 15:21:39 +0200
Izik Eidus <[EMAIL PROTECTED]> wrote:
From: Izik Eidus <[EMAIL PROTECTED]>
This function is needed in cases where you want to change the userspace
virtual mapping to a different physical page,
Not sure that I und
KSM is a linux driver that allows dynamically sharing identical memory pages
between one or more processes.
Unlike traditional page sharing that is done at the allocation of the
memory, ksm does it dynamically after the memory was created.
Memory is periodically scanned; identical pages are identified an
Christoph Lameter wrote:
Page migration, as far as I saw, can't migrate an anonymous page into a kernel page.
If you want we can change page migration to do that, but I thought you would
rather have the ksm changes separate.
What do you mean by kernel page? The kernel can allocate a page and then
point
Christoph Lameter wrote:
Currently page migration assumes that the page will continue to be part
of the existing file or anon vma.
Exactly, and ksm really needs it to get out of the existing anon vma!
What you want sounds like assigning a swap pte to an anonymous page? That
way an anon pag
Christoph Lameter wrote:
On Tue, 11 Nov 2008, Avi Kivity wrote:
Christoph Lameter wrote:
page migration requires the page to be on the LRU. That could be changed
if you have a different means of isolating a page from its page tables.
Isn't rmap the means of isolating a page fr
Jonathan Corbet wrote:
I don't claim to begin to really understand the deep VM side of this
patch, but I can certainly pick nits as I work through it...sorry for
the lack of anything more substantive.
+static struct list_head slots;
Some of these file-static variable names seem a litt
Jonathan Corbet wrote:
On Wed, 12 Nov 2008 00:17:39 +0200
Izik Eidus <[EMAIL PROTECTED]> wrote:
+static int ksm_dev_open(struct inode *inode, struct file *filp)
+{
+ try_module_get(THIS_MODULE);
+ return 0;
+}
+
+static int ksm_dev_release(struct inode *inode, struct file
Jonathan Corbet wrote:
[Let's see if I can get through the rest without premature sends...]
On Wed, 12 Nov 2008 00:17:39 +0200
Izik Eidus <[EMAIL PROTECTED]> wrote:
Actually, it occurs to me that there are no sanity checks on any of
the values passed in by ioctl(). What hap
Jonathan Corbet wrote:
What about things like cache effects from scanning all those pages? My
guess is that, if you're trying to run dozens of Windows guests, cache
usage is not at the top of your list of concerns, but I could be
wrong. Usually am...
Ok, ksm does make the cache of the cpu
Avi Kivity wrote:
KAMEZAWA Hiroyuki wrote:
Can I make a question ? (I'm working for memory cgroup.)
Now, we do charge to an anonymous page when
- charge(+1) when it's mapped first (mapcount 0->1)
- uncharge(-1) when it's fully unmapped (mapcount 1->0) via
page_remove_rmap().
My question is
- P
Quoting KAMEZAWA Hiroyuki:
Thank you for answers.
On Wed, 12 Nov 2008 13:11:12 +0200
Izik Eidus <[EMAIL PROTECTED]> wrote:
Avi Kivity wrote:
KAMEZAWA Hiroyuki wrote:
Can I make a question ? (I'm working for memory cgroup.)
Now, we do charge to anonymous page when
(From v1 to v2 the main change is much more documentation)
KSM is a linux driver that allows dynamically sharing identical memory
pages between one or more processes.
Unlike traditional page sharing that is done at the allocation of the
memory, ksm does it dynamically after the memory was created.
Memor
while another thread read
and writes to/from the first 512bytes of the page. We can lose
O_DIRECT reads, the very moment we mark any pte wrprotected..."
Signed-off-by: Izik Eidus <[EMAIL PROTECTED]>
---
include/linux/rmap.h | 11
mm/rmap.c
of this issue is that newpage cannot be anything but a
kernel-allocated page that is not swappable.
Signed-off-by: Izik Eidus <[EMAIL PROTECTED]>
---
include/linux/mm.h |5 +++
mm/memory.c| 80
2 files changed, 85 inserti
scanning
(so even if there are still more pages to scan,
we stop this iteration)
__u32 flags:
flags to control ksm scanning (right now just
ksm_control_flags_run
available)
Signed-off-by: Izik Eidus
.
(Users of mmu_notifiers that didn't implement the set_pte_at_notify()
callback will just receive the mmu_notifier_invalidate_page callback.)
Signed-off-by: Izik Eidus <[EMAIL PROTECTED]>
---
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kvm/mmu.c
Quoting Ryota OZAKI:
Hi Izik,
I've tried your patch set, but ksm doesn't work on my machine.
I compiled a linux kernel patched with the four patches and configured with KSM
and KVM enabled. After booting it, I ran two VMs running linux
using QEMU with the patch from your mail and started the KSM scanner
Quoting Izik Eidus:
Quoting Ryota OZAKI:
Hi Izik,
I've tried your patch set, but ksm doesn't work on my machine.
I compiled a linux kernel patched with the four patches and configured with KSM
and KVM enabled. After booting it, I ran two VMs running linux
using QEMU with a patch in you
Tomasz Chmielewski wrote:
Evert wrote:
Hi all,
According to the Wikipedia (
http://en.wikipedia.org/wiki/Comparison_of_platform_virtual_machines
) both VirtualBox & VMware server support something called 'Live
memory allocation'.
Does KVM support this as well?
What does this term mean e
active when Andrea presented it."
I am sending another series of patches for the kvm kernel and kvm-userspace
that would allow users of kvm to test ksm with it.
The kvm patches apply to Avi's git tree.
Izik Eidus (4):
MMU_NOTIFIERS: add set_pte_at_notify()
add page_wrprotect(): write p
.
(Users of mmu_notifiers that didn't implement the set_pte_at_notify()
callback will just receive the mmu_notifier_invalidate_page callback.)
Signed-off-by: Izik Eidus
---
include/linux/mmu_notifier.h | 34 ++
mm/memory.c | 10 --
mm
while another thread read
and writes to/from the first 512bytes of the page. We can lose
O_DIRECT reads, the very moment we mark any pte wrprotected..."
Signed-off-by: Izik Eidus
---
include/linux/rmap.h | 11
mm/rmap.c| 139 +
of this issue is that newpage cannot be anything but a
kernel-allocated page that is not swappable.
Signed-off-by: Izik Eidus
---
include/linux/mm.h |5 +++
mm/memory.c| 80
2 files changed, 85 insertions(+), 0 deletions(-)
diff --
Apply it against Avi's git tree.
Izik Eidus (3):
kvm: dont hold pagecount reference for mapped sptes pages.
kvm: add SPTE_HOST_WRITEABLE flag to the shadow ptes.
kvm: add support for change_pte mmu notifiers
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kvm/mmu.c | 89
increase the
mapcount when mapping page into shadow page table entry,
so when comparing pagecount against mapcount, you have no
reliable result.)
Signed-off-by: Izik Eidus
---
arch/x86/kvm/mmu.c |7 ++-
1 files changed, 2 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch
This flag notes that the host physical page we are pointing to from
the spte is write-protected, and therefore we can't change its access
to writable unless we run get_user_pages(write = 1).
(this is needed for change_pte support in kvm)
Signed-off-by: Izik Eidus
---
arch/x86/kvm/mmu.c
This is needed for kvm if it wants ksm to directly map pages into its
shadow page tables.
Signed-off-by: Izik Eidus
---
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kvm/mmu.c | 68 +++
virt/kvm/kvm_main.c | 14
3
Apply it against Avi's kvm-userspace git tree.
Izik Eidus (2):
qemu: add ksm support
qemu: add ksmctl.
qemu/ksm.h | 70
qemu/vl.c | 34 +
user/Makefile |6 +++-
user/config
Signed-off-by: Izik Eidus
---
qemu/ksm.h | 70
qemu/vl.c | 34 +
2 files changed, 104 insertions(+), 0 deletions(-)
create mode 100644 qemu/ksm.h
diff --git a/qemu/ksm.h b/qemu/ksm.h
new file mode
userspace tool to control the ksm kernel thread
Signed-off-by: Izik Eidus
---
user/Makefile |6 +++-
user/config-x86-common.mak |2 +-
user/ksmctl.c | 69
3 files changed, 75 insertions(+), 2 deletions(-)
create
:
__u32 npages;
number of pages to share inside this memory region.
__u32 pad;
__u64 addr:
the beginning of the virtual address of this region.
KSM_REMOVE_MEMORY_REGION:
Remove memory region from ksm.
Signed-off-by: Izik Eidus
---
include/linux/ksm.h| 69 +++
include/linux
KAMEZAWA Hiroyuki wrote:
On Tue, 31 Mar 2009 02:59:20 +0300
Izik Eidus wrote:
Ksm is a driver that allows merging identical pages between one or more
applications in a way invisible to the applications that use it.
Pages that are merged are marked as readonly and are COWed when any
application
Anthony Liguori wrote:
Izik Eidus wrote:
Ksm is a driver that allows merging identical pages between one or more
applications in a way invisible to the applications that use it.
Pages that are merged are marked as readonly and are COWed when any
application tries to change them.
Ksm is used for cases
Anthony Liguori wrote:
Izik Eidus wrote:
I am sending another series of patches for the kvm kernel and kvm-userspace
that would allow users of kvm to test ksm with it.
The kvm patches apply to Avi's git tree.
Any reason to not take these through upstream QEMU instead of
kvm-userspace? In
KAMEZAWA Hiroyuki wrote:
On Tue, 31 Mar 2009 15:21:53 +0300
Izik Eidus wrote:
kpage is actually what is going to be the KsmPage -> the shared page...
Right now these pages are not swappable...; after ksm is merged we
will make these pages swappable as well...
sure.
Anthony Liguori wrote:
Andrea Arcangeli wrote:
On Tue, Mar 31, 2009 at 10:54:57AM -0500, Anthony Liguori wrote:
You can still disable ksm and simply return ENOSYS for the MADV_
flag. You
Anthony, the biggest problem with madvise() is that it is a real system
call api; i wouldn't wan
Anthony Liguori wrote:
Chris Wright wrote:
* Anthony Liguori (anth...@codemonkey.ws) wrote:
The ioctl() interface is quite bad for what you're doing. You're
telling the kernel extra information about a VA range in
userspace. That's what madvise is for. You're tweaking simple
read/write
Chris Wright wrote:
* Anthony Liguori (anth...@codemonkey.ws) wrote:
Using an interface like madvise() would force the issue to be dealt with
properly from the start :-)
Yeah, I'm not at all opposed to it.
This updates to madvise for register and sysfs for control.
madvise issues:
-
Chris Wright wrote:
* Izik Eidus (iei...@redhat.com) wrote:
Is this what we want?
How about baby steps...
admit that ioctl to control plane is better done via sysfs?
Yes
Jesper Juhl wrote:
Hi,
On Tue, 31 Mar 2009, Izik Eidus wrote:
KSM is a linux driver that allows dynamically sharing identical memory
pages between one or more processes.
Unlike traditional page sharing that is done at the allocation of the
memory, ksm does it dynamically after the memory was