On 2016/03/08 17:30, Paolo Bonzini wrote:
> On 08/03/2016 09:00, Takuya Yoshikawa wrote:
>>> KVM: MMU: introduce kvm_mmu_flush_or_zap
>>> KVM: MMU: move TLB flush out of __kvm_sync_page
>>> KVM: MMU: use kvm_sync_page in kvm_sync_pages
>>> KV
On 2016/03/07 23:15, Paolo Bonzini wrote:
> Having committed the ubsan fixes, these are the cleanups that are left.
>
> Compared to v1, I have fixed the patch to coalesce page zapping after
> mmu_sync_children (as requested by Takuya and Guangrong), and I have
> rewritten is_last_gpte again in an e
On 2016/02/24 22:17, Paolo Bonzini wrote:
> Move the call to kvm_mmu_flush_or_zap outside the loop.
>
> Signed-off-by: Paolo Bonzini
> ---
> arch/x86/kvm/mmu.c | 9 ++---
> 1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 725
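For illustration, the resulting kvm_sync_pages() would look roughly like this -- a sketch based only on the names in this thread (kvm_sync_page(), kvm_mmu_flush_or_zap()); the exact final code may differ:

/* Sketch only: accumulate zapped pages while syncing and do a single
 * flush-or-zap after the loop, instead of flushing once per page. */
static void kvm_sync_pages(struct kvm_vcpu *vcpu, gfn_t gfn)
{
	struct kvm_mmu_page *s;
	LIST_HEAD(invalid_list);
	bool flush = false;

	for_each_gfn_indirect_valid_sp(vcpu->kvm, s, gfn) {
		if (!s->unsync)
			continue;
		flush |= kvm_sync_page(vcpu, s, &invalid_list);
	}

	kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
}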
The end result is very similar to handle_ept_misconfig()'s corresponding code.
It may also be possible to change handle_ept_misconfig() not to call
handle_mmio_page_fault() separately from kvm_mmu_page_fault():
the only difference seems to be whether it checks for PFERR_RSVD_MASK.
T
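If that unification were done, a minimal hypothetical sketch (not a patch in this thread) could look like:

/* Hypothetical: let the generic fault path handle the MMIO case by
 * passing PFERR_RSVD_MASK, instead of calling handle_mmio_page_fault()
 * directly from handle_ept_misconfig(). */
static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
{
	gpa_t gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);

	return kvm_mmu_page_fault(vcpu, gpa, PFERR_RSVD_MASK, NULL, 0);
}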
extra error_code check
- avoids returning both RET_MMIO_PF_* values and raw integer values
from vcpu->arch.mmu.page_fault()
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 39 ---
arch/x86/kvm/paging_tmpl.h | 19 ++-
he() to
make it clear what it actually checks for.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 15 ---
1 file changed, 4 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 95a955d..a28b734 100644
--- a/arch/x86/kvm/mmu.c
+++ b/ar
Not just in order to clean up the code, but to make it faster by using
enhanced instructions: the initialization became 20-30% faster on our
testing machine.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 10 +-
1 file changed, 1 insertion(+), 9 deletions(-)
diff --git a/arch
As kvm_mmu_get_page() was changed so that a parent pointer is never added
to the sp->parent_ptes chain before the entry it points to is properly
set, we can use the for_each_rmap_spte macro instead of
pte_list_walk().
Signed-off-by: Takuya Yoshikawa
Cc: Xiao Guangrong
---
arch/
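For illustration, the resulting walker would be roughly as follows (a sketch using the names from this series; the exact type of the rmap head depends on the other patches):

/* Sketch: walk the parent_ptes chain with the generic rmap iterator
 * instead of the dedicated pte_list_walk() callback interface. */
static void kvm_mmu_mark_parents_unsync(struct kvm_mmu_page *sp)
{
	u64 *sptep;
	struct rmap_iterator iter;

	for_each_rmap_spte(&sp->parent_ptes, &iter, sptep)
		mark_unsync(sptep);
}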
-by: Takuya Yoshikawa
Cc: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 23 +--
arch/x86/kvm/paging_tmpl.h | 6 ++
2 files changed, 11 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7f46e3e..ec61b22 100644
--- a/arch/x86/kvm/mm
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 20 +++-
arch/x86/kvm/paging_tmpl.h | 4 ++--
2 files changed, 9 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 204c7d4..a1a3d19 100644
--- a/arch/x86/kvm/mmu.c
+++ b
Guests worked normally in shadow paging mode (ept=0) on my test machine.
Please check if the first two patches reflect what you meant correctly.
Takuya Yoshikawa (3):
[1] KVM: x86: MMU: Move parent_pte handling from kvm_mmu_get_page() to
link_shadow_page()
[2] KVM: x86: MMU: Use
On 2015/11/26 1:32, Paolo Bonzini wrote:
On 20/11/2015 09:57, Xiao Guangrong wrote:
You can move this patch to the front of
[PATCH 08/10] KVM: x86: MMU: Use for_each_rmap_spte macro instead of
pte_list_walk()
By moving kvm_mmu_mark_parents_unsync() to after mmu_spte_set()
(then the pa
On 2015/11/20 17:46, Xiao Guangrong wrote:
You just ignored my comment on the previous version...
I'm sorry but please read the explanation in patch 00.
I've read your comments and I'm not ignoring you.
Since this patch set has become larger than expected, I'm sending
this version so that patch
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 20 +++-
arch/x86/kvm/paging_tmpl.h | 4 ++--
2 files changed, 9 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b020323..9baf884 100644
--- a/arch/x86/kvm/mmu.c
+++ b
kvm_mmu_get_page() just for mark_unsync() and
mmu_page_add_parent_pte().
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 22 --
arch/x86/kvm/paging_tmpl.h | 6 ++
2 files changed, 10 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm
set yet.
By calling mark_unsync() separately for the parent and adding the parent
pointer to the parent_ptes chain later in kvm_mmu_get_page(), the macro
works with no problem.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 36 +---
1 file changed, 13
Make kvm_mmu_alloc_page() do just what its name suggests, and remove
the extra allocation error check and zero-initialization of parent_ptes:
shadow page headers allocated by kmem_cache_zalloc() are always in the
per-VCPU pools.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 14
: Takuya Yoshikawa
---
Documentation/virtual/kvm/mmu.txt | 4 ++--
arch/x86/kvm/mmu.c| 26 +-
2 files changed, 19 insertions(+), 11 deletions(-)
diff --git a/Documentation/virtual/kvm/mmu.txt
b/Documentation/virtual/kvm/mmu.txt
index 3a4d681..daf9c0f 100644
changed their roles somewhat, and is_rmap_spte()
just calls is_shadow_present_pte() now.
Since using both of them without clear distinction just makes the code
confusing, remove is_rmap_spte().
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 13 -
arch/x86/kvm/mmu_audi
ulate
value instead to clean up this complex interface. Prefetch functions
can just throw away the return value.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 27 ++-
arch/x86/kvm/paging_tmpl.h | 10 +-
2 files changed, 19 insertions(+), 18 dele
: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 36 ++--
1 file changed, 18 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 8a1593f..9832bc9 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1809,6 +1809,13 @@ static
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 12
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d9a6801..8a1593f 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2708,9 +2708,8 @@ static void
New struct kvm_rmap_head makes the code type-safe to some extent.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/include/asm/kvm_host.h | 8 +-
arch/x86/kvm/mmu.c | 196
arch/x86/kvm/mmu_audit.c| 13 +--
3 files changed, 113
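The wrapper itself is tiny; roughly (a sketch, field name assumed):

/* Sketch of the idea: give rmap-chain heads their own type so they
 * cannot be silently mixed up with plain pointers or spte values. */
struct kvm_rmap_head {
	unsigned long val;
};

Accessors and iterators then take a struct kvm_rmap_head * instead of a bare unsigned long *, which is where the type safety comes from.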
For these three, I'm not sure what we should do now, still RFC?
We can also consider other approaches, e.g. moving link_shadow_page() into
kvm_mmu_get_page() as Paolo suggested before.
Takuya
Takuya Yoshikawa (10):
[01] KVM: x86: MMU: Encapsulate the type of rmap-chain head in a new s
On 2015/11/19 11:46, Xiao Guangrong wrote:
Actually, some people prefer to put braces when one of the
if/else-if/else cases has multiple lines. You can see
some examples in kernel/sched/core.c: see hrtick_start(),
sched_fork(), free_sched_domain().
In our case, I thought putting braces would a
On 2015/11/18 18:09, Paolo Bonzini wrote:
On 18/11/2015 04:21, Xiao Guangrong wrote:
On 11/12/2015 07:55 PM, Takuya Yoshikawa wrote:
@@ -1720,7 +1724,7 @@ static struct kvm_mmu_page
*kvm_mmu_alloc_page(struct kvm_vcpu *vcpu,
* this feature. See the comments in kvm_zap_obsolete_pages
On 2015/11/18 11:44, Xiao Guangrong wrote:
On 11/12/2015 07:50 PM, Takuya Yoshikawa wrote:
+		if (!ret) {
+			clear_unsync_child_bit(sp, i);
+			continue;
+		} else if (ret > 0) {
 			nr_unsync_leaf += ret;
Just a single line h
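For reference, the helper under discussion is small; a sketch of what it factors out of __mmu_unsync_walk() (names taken from the thread):

/* Sketch: clear one bit in sp->unsync_child_bitmap while keeping the
 * unsync_children counter consistent. */
static void clear_unsync_child_bit(struct kvm_mmu_page *sp, int idx)
{
	--sp->unsync_children;
	WARN_ON((int)sp->unsync_children < 0);
	__clear_bit(idx, sp->unsync_child_bitmap);
}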
On 2015/11/14 7:08, Marcelo Tosatti wrote:
On Thu, Nov 12, 2015 at 08:53:43PM +0900, Takuya Yoshikawa wrote:
At some call sites of rmap_get_first() and rmap_get_next(), BUG_ON is
placed right after the call to detect unrelated sptes which must not be
found in the reverse-mapping list.
Move
On 2015/11/14 18:20, Marcelo Tosatti wrote:
The actual issue is this: a higher level page that had, under its children,
no out of sync pages, now, due to your addition, has a child that is unsync:
initial state:
level1
final state:
level1 -x-> level2 -x-> level3
Where -x-> are th
On 2015/11/12 23:27, Paolo Bonzini wrote:
On 12/11/2015 12:56, Takuya Yoshikawa wrote:
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 9d21b44..f414ca6 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -598,7 +598,7 @@ static int FNAME(fetch
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 20 +++-
arch/x86/kvm/paging_tmpl.h | 4 ++--
2 files changed, 9 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 33fe720..101e77d 100644
--- a/arch/x86/kvm/mmu.c
+++ b
kvm_mmu_get_page() just for mark_unsync() and
mmu_page_add_parent_pte().
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 21 -
arch/x86/kvm/paging_tmpl.h | 6 ++
2 files changed, 10 insertions(+), 17 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm
Make kvm_mmu_alloc_page() do just what its name suggests, and remove
the extra error check at its call site since the allocation cannot fail.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 15 ---
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/arch/x86
New struct kvm_rmap_head makes the code type-safe to some extent.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/include/asm/kvm_host.h | 8 +-
arch/x86/kvm/mmu.c | 169 +---
arch/x86/kvm/mmu_audit.c| 13 ++--
3 files changed, 100
sptes are present, at least until drop_parent_pte()
actually unlinks them, and not mmio-sptes.
Signed-off-by: Takuya Yoshikawa
---
Documentation/virtual/kvm/mmu.txt | 4 ++--
arch/x86/kvm/mmu.c| 26 +-
2 files changed, 19 insertions(+), 11 deletions(-)
diff
set yet.
By calling mark_unsync() separately for the parent and adding the parent
pointer to the parent_ptes chain later in kvm_mmu_get_page(), the macro
works with no problem.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 36 +---
1 file changed, 13
changed their roles somewhat, and is_rmap_spte()
just calls is_shadow_present_pte() now.
Since using both of them with no clear distinction just makes the code
confusing, remove is_rmap_spte().
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 13 -
arch/x86/kvm/mmu_audi
: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 36 ++--
1 file changed, 18 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c3bbc82..f3120aa 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1806,6 +1806,13 @@ static
ulate
value instead to clean up this complex interface. Prefetch functions
can just throw away the return value.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 27 ++-
arch/x86/kvm/paging_tmpl.h | 10 +-
2 files changed, 19 insertions(+), 18 dele
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 12
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index e7c2c14..c3bbc82 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2708,9 +2708,8 @@ static void
try to alleviate the sadness.
Takuya
Takuya Yoshikawa (10):
01: KVM: x86: MMU: Remove unused parameter of __direct_map()
02: KVM: x86: MMU: Add helper function to clear a bit in unsync child bitmap
03: KVM: x86: MMU: Make mmu_set_spte() return emulate value
04: KVM: x86: MMU: R
On 2015/11/09 19:14, Paolo Bonzini wrote:
Can you also change kvm_mmu_mark_parents_unsync to use
for_each_rmap_spte instead of pte_list_walk? It is the last use of
pte_list_walk, and it's nice if we have two uses of for_each_rmap_spte
with parent_ptes as the argument.
No problem, I will do.
S
sptes are present, at least until drop_parent_pte()
actually unlinks them, and not mmio-sptes.
Signed-off-by: Takuya Yoshikawa
---
Documentation/virtual/kvm/mmu.txt | 4 ++--
arch/x86/kvm/mmu.c| 31 ++-
2 files changed, 24 insertions(+), 11 deletions
changed their roles somewhat, and is_rmap_spte()
just calls is_shadow_present_pte() now.
Since using both of them with no clear distinction just makes the
code confusing, remove is_rmap_spte().
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 13 -
arch/x86/kvm/mmu_audi
ulate
value instead to clean up this complex interface. Prefetch functions
can just throw away the return value.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 27 ++-
arch/x86/kvm/paging_tmpl.h | 10 +-
2 files changed, 19 insertions(+), 18 dele
: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 36 ++--
1 file changed, 18 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a76bc04..a9622a2 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1806,6 +1806,13 @@ static
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 11 ---
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7d85bca..a76bc04 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2708,9 +2708,8 @@ static void
Patch 1/2/3 are easy ones.
Following two, patch 4/5, may not be ideal solutions, but at least
explain, or try to explain, the problems.
Takuya Yoshikawa (5):
KVM: x86: MMU: Remove unused parameter of __direct_map()
KVM: x86: MMU: Add helper function to clear a bit in unsync child bitmap
lot() when
!check_hugepage_cache_consistency() check in tdp_page_fault() forces
page table level mapping.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 7 ---
arch/x86/kvm/paging_tmpl.h | 2 +-
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/m
Calling kvm_vcpu_gfn_to_memslot() twice in mapping_level() should be
avoided since getting a slot by binary search may not be negligible,
especially for virtual machines with many memory slots.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 17 +++--
1 file changed, 11
Now that it has only one caller, and its name is not so helpful for
readers, remove it. Instead, the new memslot_valid_for_gpte() function
makes it possible to share the common code.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 24
1 file changed, 16
This is necessary to eliminate an extra memory slot search later.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 29 ++---
arch/x86/kvm/paging_tmpl.h | 6 +++---
2 files changed, 17 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch
As a bonus, an extra memory slot search can be eliminated when
is_self_change_mapping is true.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/paging_tmpl.h | 15 +++
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm
This will be passed to a function later.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 8
arch/x86/kvm/paging_tmpl.h | 4 ++--
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b8482c0..2262728 100644
--- a
it of cleanup effort, the patch set reduces this overhead.
Takuya
Takuya Yoshikawa (5):
KVM: x86: MMU: Make force_pt_level bool
KVM: x86: MMU: Simplify force_pt_level calculation code in FNAME(page_fault)()
KVM: x86: MMU: Merge mapping_level_dirty_bitmap() into mapping_level()
KVM: x86
Calling kvm_vcpu_gfn_to_memslot() twice in mapping_level() should be
avoided since getting a slot by binary search may not be negligible,
especially for virtual machines with many memory slots.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 17 +++--
1 file changed, 11
Now that it has only one caller, and its name is not so helpful for
readers, just remove it.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 21 +
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 890cd69
This is necessary to eliminate an extra memory slot search later.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 29 ++---
arch/x86/kvm/paging_tmpl.h |6 +++---
2 files changed, 17 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b
This will be passed to a function later.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c |8
arch/x86/kvm/paging_tmpl.h |4 ++--
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b8482c0..2262728 100644
As a bonus, an extra memory slot search can be eliminated when
is_self_change_mapping is true.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/paging_tmpl.h | 15 +++
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm
In page fault handlers, both mapping_level_dirty_bitmap() and mapping_level()
do a memory slot search, binary search, through kvm_vcpu_gfn_to_memslot(), which
may not be negligible especially for virtual machines with many memory slots.
With a bit of cleanup effort, the patch set reduces this over
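The general shape of the fix is to do the binary search once in mapping_level() and reuse the slot; an illustrative sketch (simplified, not the exact patch):

/* Illustrative sketch only: one kvm_vcpu_gfn_to_memslot() call feeds
 * both the "may we use a large gpte here?" check and the level choice. */
static int mapping_level(struct kvm_vcpu *vcpu, gfn_t large_gfn,
			 bool *force_pt_level)
{
	struct kvm_memory_slot *slot;

	slot = kvm_vcpu_gfn_to_memslot(vcpu, large_gfn);	/* single search */
	*force_pt_level = !memslot_valid_for_gpte(slot, true);
	if (*force_pt_level)
		return PT_PAGE_TABLE_LEVEL;

	/* ... pick min(host mapping level, largest level the slot allows) ... */
	return host_mapping_level(vcpu->kvm, large_gfn);
}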
On 2015/05/20 2:25, Paolo Bonzini wrote:
> Prepare for multiple address spaces this way, since a VCPU is not available
> where unaccount_shadowed is called. We will get to the right kvm_memslots
> 1truct through the role field in struct kvm_mmu_page.
typo: s/1truct/struct/
Reviewed-b
erstand lines is really
nice.
>
> Signed-off-by: Paolo Bonzini
Reviewed-by: Takuya Yoshikawa
Takuya
ewly added code
+ kvm_for_each_memslot(memslot, slots)
+ kvm_free_memslot(kvm, memslot, NULL);
does nothing in effect, but it is better placed here since it
corresponds to the kvm_alloc_memslots() part and may be safer for
future changes.
Other changes look like trivial transi
. framebuffers can
stay calm for a long time, it is worth eliminating this overhead.
Signed-off-by: Takuya Yoshikawa
---
virt/kvm/kvm_main.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a109370..420d8cf 100644
--- a/virt/kvm
On 2014/11/17 18:23, Paolo Bonzini wrote:
>
>
> On 17/11/2014 02:56, Takuya Yoshikawa wrote:
>>>> here are a few small patches that simplify __kvm_set_memory_region
>>>> and associated code. Can you please review them?
>> Ah, already queued. Sorry for bei
On 2014/11/14 20:11, Paolo Bonzini wrote:
> Hi Igor and Takuya,
>
> here are a few small patches that simplify __kvm_set_memory_region
> and associated code. Can you please review them?
Ah, already queued. Sorry for being late to respond.
Takuya
>
> Thanks,
>
> Paolo
>
> Paolo Bonz
On 2014/11/14 20:12, Paolo Bonzini wrote:
> The two kmemdup invocations can be unified. I find that the new
> placement of the comment makes it easier to see what happens.
A lot easier to follow the logic.
Reviewed-by: Takuya Yoshikawa
>
> Signed-off-by: Paolo Bonzini
> -
On Tue, 30 Jul 2013 21:02:08 +0800
Xiao Guangrong wrote:
> @@ -2342,6 +2358,13 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
>*/
> kvm_flush_remote_tlbs(kvm);
>
> + if (kvm->arch.rcu_free_shadow_page) {
> + sp = list_first_entry(invalid_list, struct kvm_m
> KVM: MMU: flush tlb if the spte can be locklessly modified
> KVM: MMU: redesign the algorithm of pte_list
> KVM: MMU: introduce nulls desc
> KVM: MMU: introduce pte-list lockless walker
> KVM: MMU: allow locklessly access shadow page table out of vcpu thread
> KVM: MMU: locklessl
On Thu, 11 Jul 2013 10:41:53 +0300
Gleb Natapov wrote:
> On Wed, Jul 10, 2013 at 10:49:56PM +0900, Takuya Yoshikawa wrote:
> > On Wed, 10 Jul 2013 11:24:39 +0300
> > "Michael S. Tsirkin" wrote:
> >
> > > On x86, kvm_arch_create_memslot assumes that rmap/
;arch.lpage_info, 0, sizeof slot->arch.lpage_info);
> +
> for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
> unsigned long ugfn;
> int lpages;
> --
> MST
On Thu, 13 Jun 2013 21:08:21 -0300
Marcelo Tosatti wrote:
> On Fri, Jun 07, 2013 at 04:51:22PM +0800, Xiao Guangrong wrote:
> - Where is the generation number increased?
Looks like it is increased when a new slot is installed in update_memslots(),
because it's based on slots->generation. This is not restricted
_invalidate_mmio_sptes(struct kvm *kvm)
>* when mark memslot invalid.
>*/
> if (unlikely(kvm_current_mmio_generation(kvm) >= (MMIO_MAX_GEN - 1)))
> - kvm_mmu_zap_mmio_sptes(kvm);
> + kvm_mmu_invalidate_zap_all_pages(kvm);
> }
>
On Mon, 10 Jun 2013 10:57:50 +0300
Gleb Natapov wrote:
> On Fri, Jun 07, 2013 at 04:51:25PM +0800, Xiao Guangrong wrote:
> > +
> > +/*
> > + * Return values of handle_mmio_page_fault_common:
> > + * RET_MMIO_PF_EMULATE: it is a real mmio page fault, emulate the
> > instruction
> > + *
On Fri, 31 May 2013 01:24:43 +0900
Takuya Yoshikawa wrote:
> On Thu, 30 May 2013 03:53:38 +0300
> Gleb Natapov wrote:
>
> > On Wed, May 29, 2013 at 09:19:41PM +0800, Xiao Guangrong wrote:
> > > On 05/29/2013 08:39 PM, Marcelo Tosatti wrote:
> > > > On W
On Thu, 30 May 2013 03:53:38 +0300
Gleb Natapov wrote:
> On Wed, May 29, 2013 at 09:19:41PM +0800, Xiao Guangrong wrote:
> > On 05/29/2013 08:39 PM, Marcelo Tosatti wrote:
> > > On Wed, May 29, 2013 at 11:03:19AM +0800, Xiao Guangrong wrote:
> > > the pages since other vcpus may be doing lock
| 124
> ++-
> arch/x86/kvm/mmu.h |2 +
> arch/x86/kvm/mmutrace.h | 45 +++---
> arch/x86/kvm/x86.c |9 +--
> 5 files changed, 163 insertions(+), 19 deletions(-)
>
> --
> 1.7.
On Sat, 27 Apr 2013 11:13:20 +0800
Xiao Guangrong wrote:
> +/*
> + * Fast invalid all shadow pages belong to @slot.
> + *
> + * @slot != NULL means the invalidation is caused the memslot specified
> + * by @slot is being deleted, in this case, we should ensure that rmap
> + * and lpage-info of th
On Sat, 27 Apr 2013 11:13:19 +0800
Xiao Guangrong wrote:
> This function is used to reset the large page info of all guest pages
> which will be used in later patch
>
> Signed-off-by: Xiao Guangrong
> ---
> arch/x86/kvm/x86.c | 25 +
> arch/x86/kvm/x86.h |2 ++
>
On Sat, 27 Apr 2013 11:13:18 +0800
Xiao Guangrong wrote:
> It is used to set disallowed large page on the specified level, can be
> used in later patch
>
> Signed-off-by: Xiao Guangrong
> ---
> arch/x86/kvm/x86.c | 53 ++-
> 1 files changed, 35
On Mon, 22 Apr 2013 15:39:38 +0300
Gleb Natapov wrote:
> > > Do not want kvm_set_memory (cases: DELETE/MOVE/CREATES) to be
> > > suspectible to:
> > >
> > > vcpu 1| kvm_set_memory
> > > create shadow page
> > > nuke shadow page
On Fri, 15 Mar 2013 23:29:53 +0800
Xiao Guangrong wrote:
> +/*
> + * The caller should protect concurrent access on
> + * kvm->arch.mmio_invalid_gen. Currently, it is used by
> + * kvm_arch_commit_memory_region and protected by kvm->slots_lock.
> + */
> +void kvm_mmu_invalid_mmio_spte(struct kvm
On Fri, 15 Mar 2013 23:26:59 +0800
Xiao Guangrong wrote:
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index d3c4787..61a5bb6 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -6991,7 +6991,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
>* mmio sptes.
; not a problem any more. The scalability is the same as zap mmio shadow page
>
>
On Wed, 30 Jan 2013 12:06:32 +0800
Xiao Guangrong wrote:
> So, i guess we can do the simple fix first.
>
> >>> By simple fix you mean calling kvm_arch_flush_shadow_all() on READONLY
> >>> flag change?
> >>
> >> Simply disallow READONLY flag changing.
> > Ok, can somebody craft a patch?
On Mon, 28 Jan 2013 08:36:56 -0700
Alex Williamson wrote:
> On Mon, 2013-01-28 at 21:25 +0900, Takuya Yoshikawa wrote:
> > On Mon, 28 Jan 2013 12:59:03 +0200
> > Gleb Natapov wrote:
> >
> > > > It sets spte based on the old value that means the readonly flag ch
On Mon, 28 Jan 2013 12:59:03 +0200
Gleb Natapov wrote:
> > It sets spte based on the old value that means the readonly flag check
> > is missed. We need to call kvm_arch_flush_shadow_all under this case.
> Why not just disallow changing memory region KVM_MEM_READONLY flag
> without deleting the r
On Fri, 25 Jan 2013 12:59:12 +0900
Takuya Yoshikawa wrote:
> > The commit c972f3b1 changed the write-protect behaviour - it does
> > wirte-protection only when dirty flag is set.
> > [ I did not see this commit when we discussed the problem before. ]
>
> I'll look a
On Fri, 25 Jan 2013 11:28:40 +0800
Xiao Guangrong wrote:
> > I think I can naturally update my patch after this gets merged.
> >
>
> Please wait.
The patch I mentioned above won't change anything. Just cleans up
set_memory_region(). The only possible change which we discussed
before was whet
On Thu, 24 Jan 2013 15:03:57 -0700
Alex Williamson wrote:
> A couple patches to make KVM IOMMU support honor read-only mappings.
> This causes an un-map, re-map when the read-only flag changes and
> makes use of it when setting IOMMU attributes. Thanks,
Looks good to me.
I think I can naturall
of memory before being rescheduled: on my test environment,
cond_resched_lock() was called only once for protecting 12GB of memory
even without THP. We can also revisit Avi's "unlocked TLB flush" work
later for completely suppressing extra TLB flushes if needed.
Signed-off-by: T
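The pattern relied on here is the usual cond_resched_lock() one; a minimal sketch of the loop shape (illustrative, not the exact patch):

/* Illustrative: while write-protecting a large range under mmu_lock,
 * briefly drop and retake the lock when someone is waiting, so the
 * whole slot is not processed in one long critical section. */
spin_lock(&kvm->mmu_lock);
for (gfn = start; gfn < end; gfn++) {
	/* ... write-protect the sptes that map gfn ... */

	if (need_resched() || spin_needbreak(&kvm->mmu_lock))
		cond_resched_lock(&kvm->mmu_lock);
}
spin_unlock(&kvm->mmu_lock);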
Better to place mmu_lock handling and TLB flushing code together since
this is a self-contained function.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c |3 +++
arch/x86/kvm/x86.c |5 +
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch
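In other words, the function ends up with the usual self-contained shape (a sketch; the ordering of the flush relative to the unlock is illustrative):

/* Sketch of the structure being argued for: lock, write-protect, flush
 * and unlock all inside the one function, rather than leaving lock and
 * TLB-flush responsibilities to the caller in x86.c. */
void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
{
	spin_lock(&kvm->mmu_lock);
	/* ... write-protect all sptes belonging to the slot ... */
	kvm_flush_remote_tlbs(kvm);
	spin_unlock(&kvm->mmu_lock);
}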
kvm->arch.n_requested_mmu_pages by
mmu_lock as can be seen from the fact that it is read locklessly.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c |4
arch/x86/kvm/x86.c |9 -
2 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/a
Not needed any more.
Signed-off-by: Takuya Yoshikawa
---
Documentation/virtual/kvm/mmu.txt |7 ---
arch/x86/include/asm/kvm_host.h |5 -
arch/x86/kvm/mmu.c| 10 --
3 files changed, 0 insertions(+), 22 deletions(-)
diff --git a/Documentation/virtual
as tens of milliseconds: actually there is no limit since it
is roughly proportional to the number of guest pages.
Another point to note is that this patch removes the only user of
slot_bitmap which will cause some problems when we increase the number
of slots further.
Signed-off-by: Takuya
No longer need to care about the mapping level in this function.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c |6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 01d7c2a..bee3509 100644
--- a/arch/x86/kvm/mmu.c
called for a deleted slot, we make
the caller check whether the slot is non-zero and being dirty logged.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/x86.c |8 +++-
virt/kvm/kvm_main.c |1 -
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm
educe the mmu_lock hold
time when we start dirty logging for a large memory slot. You may not
see the problem if you just give 8GB or less of the memory to the guest
with THP enabled on the host -- this is for the worst case.
Takuya Yoshikawa (7):
KVM: Write protect the updated slot only whe
On Mon, 7 Jan 2013 18:36:42 -0200
Marcelo Tosatti wrote:
> Looks good, except patch 1 -
>
> a) don't understand why it is necessary and
What's really necessary is to make sure that we don't call the function
for a deleted slot. My explanation was wrong.
> b) not confident its safe - isnt cl