Now that we are running out of the bits in vcpu->requests, using one
of them just to call kvm_make_all_cpus_request() with a valid request
number should be avoided.
This patch achieves this by making kvm_make_all_cpus_request() handle
an empty request.
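A minimal sketch of the idea, assuming a simplified VCPU structure (the names
below are illustrative, not the kernel's): treat a request number of 0 as
"kick only", so callers that merely want to interrupt every VCPU do not have
to burn one of the remaining request bits.

#define NR_VCPUS 4

struct vcpu {
        unsigned long requests;         /* bitmask of pending requests */
};

static struct vcpu vcpus[NR_VCPUS];

static void kick_vcpu(struct vcpu *v)
{
        (void)v;                        /* an IPI would be sent here in real code */
}

static void make_all_cpus_request(unsigned int req)
{
        int i;

        for (i = 0; i < NR_VCPUS; i++) {
                if (req)                /* only consume a bit for real requests */
                        vcpus[i].requests |= 1UL << req;
                kick_vcpu(&vcpus[i]);   /* the kick itself needs no request number */
        }
}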
Signed-off-by: Takuya Yoshikawa
---
Not just in order to clean up the code, but to make it faster by using
enhanced instructions: the initialization became 20-30% faster on our
testing machine.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 10 +-
1 file changed, 1 insertion(+), 9 deletions(-)
diff --git a/arch
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 20 +++-
arch/x86/kvm/paging_tmpl.h | 4 ++--
2 files changed, 9 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 204c7d4..a1a3d19 100644
--- a/arch/x86/kvm/mmu.c
+++ b
As kvm_mmu_get_page() was changed so that every parent pointer would not
get into the sp->parent_ptes chain before the entry pointed to by it was
set properly, we can use the for_each_rmap_spte macro instead of
pte_list_walk().
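For readers unfamiliar with the two walking styles, here is a simplified
sketch (types and names are illustrative only, not the kernel's):
pte_list_walk() took a callback, while a for_each-style macro lets the loop
body live at the call site.

struct spte_link {
        unsigned long long *sptep;      /* pointer to a shadow pte */
        struct spte_link *next;
};

/* callback style, roughly how pte_list_walk() was used */
static void list_walk(struct spte_link *head, void (*fn)(unsigned long long *sptep))
{
        struct spte_link *p;

        for (p = head; p; p = p->next)
                fn(p->sptep);
}

/* iterator style, in the spirit of for_each_rmap_spte */
#define for_each_spte(pos, head) \
        for ((pos) = (head); (pos); (pos) = (pos)->next)

static void mark_all(struct spte_link *head)
{
        struct spte_link *p;

        for_each_spte(p, head)
                *p->sptep |= 1;         /* loop body written inline at the call site */
}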
Signed-off-by: Takuya Yoshikawa
Cc: Xiao Guangrong
---
arch/
-by: Takuya Yoshikawa
Cc: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 23 +--
arch/x86/kvm/paging_tmpl.h | 6 ++
2 files changed, 11 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7f46e3e..ec61b22 100644
--- a/arch/x86/kvm/mm
Guests worked normally in shadow paging mode (ept=0) on my test machine.
Please check if the first two patches reflect what you meant correctly.
Takuya Yoshikawa (3):
[1] KVM: x86: MMU: Move parent_pte handling from kvm_mmu_get_page() to
link_shadow_page()
[2] KVM: x86: MMU: Use
On 2015/11/26 1:32, Paolo Bonzini wrote:
On 20/11/2015 09:57, Xiao Guangrong wrote:
You can move this patch to the front of
[PATCH 08/10] KVM: x86: MMU: Use for_each_rmap_spte macro instead of
pte_list_walk()
By moving kvm_mmu_mark_parents_unsync() to after mmu_spte_set()
(then the pa
On 2015/11/20 17:46, Xiao Guangrong wrote:
You just ignored my comment on the previous version...
I'm sorry but please read the explanation in patch 00.
I've read your comments and I'm not ignoring you.
Since this patch set has become larger than expected, I'm sending
this version so that patch
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 20 +++-
arch/x86/kvm/paging_tmpl.h | 4 ++--
2 files changed, 9 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b020323..9baf884 100644
--- a/arch/x86/kvm/mmu.c
+++ b
set yet.
By calling mark_unsync() separately for the parent and adding the parent
pointer to the parent_ptes chain later in kvm_mmu_get_page(), the macro
works with no problem.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 36 +---
1 file changed, 13
kvm_mmu_get_page() just for mark_unsync() and
mmu_page_add_parent_pte().
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 22 --
arch/x86/kvm/paging_tmpl.h | 6 ++
2 files changed, 10 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm
Make kvm_mmu_alloc_page() do just what its name suggests, and remove
the extra allocation error check and zero-initialization of parent_ptes:
shadow page headers allocated by kmem_cache_zalloc() are always in the
per-VCPU pools.
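To illustrate why the zero-initialization could go, here is a sketch with
calloc() standing in for kmem_cache_zalloc(); the structure below is a
simplified stand-in for the real shadow page header, not the kernel's type.

#include <stdlib.h>

struct shadow_page_header {
        void *parent_ptes;              /* chain head; must start out empty */
        unsigned int unsync_children;   /* must start out zero */
};

static struct shadow_page_header *alloc_header(void)
{
        /* the zeroing allocator already clears every field, so the caller
         * does not need to set parent_ptes or unsync_children again */
        return calloc(1, sizeof(struct shadow_page_header));
}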
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 14
: Takuya Yoshikawa
---
Documentation/virtual/kvm/mmu.txt | 4 ++--
arch/x86/kvm/mmu.c| 26 +-
2 files changed, 19 insertions(+), 11 deletions(-)
diff --git a/Documentation/virtual/kvm/mmu.txt
b/Documentation/virtual/kvm/mmu.txt
index 3a4d681..daf9c0f 100644
changed their roles somewhat, and is_rmap_spte()
just calls is_shadow_present_pte() now.
Since using both of them without clear distinction just makes the code
confusing, remove is_rmap_spte().
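In other words (sketch only, with a dummy predicate standing in for the real
present check), the removed helper had become a pure pass-through:

static int is_shadow_present_pte(unsigned long long spte)
{
        return spte != 0;               /* stand-in for the real present check */
}

/* is_rmap_spte() had been reduced to this trivial wrapper, so callers now
 * use is_shadow_present_pte() directly */
static int is_rmap_spte(unsigned long long spte)
{
        return is_shadow_present_pte(spte);
}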
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 13 -
arch/x86/kvm/mmu_audi
ulate
value instead to clean up this complex interface. Prefetch functions
can just throw away the return value.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 27 ++-
arch/x86/kvm/paging_tmpl.h | 10 +-
2 files changed, 19 insertions(+), 18 dele
: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 36 ++--
1 file changed, 18 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 8a1593f..9832bc9 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1809,6 +1809,13 @@ static
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 12
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d9a6801..8a1593f 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2708,9 +2708,8 @@ static void
New struct kvm_rmap_head makes the code type-safe to some extent.
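A short sketch of the benefit (the accessor below is illustrative, not taken
from the patch): once the chain head is its own struct rather than a bare
unsigned long, the compiler rejects code that confuses head values with other
unsigned long data such as gfns or raw spte values.

struct kvm_rmap_head {
        unsigned long val;              /* encoded single spte or pointer to a desc */
};

static void rmap_head_clear(struct kvm_rmap_head *head)
{
        head->val = 0;
}

/* passing a gfn or a raw spte value here is now a compile-time error,
 * whereas with plain unsigned long heads it would silently "work" */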
Signed-off-by: Takuya Yoshikawa
---
arch/x86/include/asm/kvm_host.h | 8 +-
arch/x86/kvm/mmu.c | 196
arch/x86/kvm/mmu_audit.c| 13 +--
3 files changed, 113
For these three, I'm not sure what we should do now, still RFC?
We can also consider other approaches, e.g. moving link_shadow_page() into
kvm_mmu_get_page() as Paolo suggested before.
Takuya
Takuya Yoshikawa (10):
[01] KVM: x86: MMU: Encapsulate the type of rmap-chain head in a new s
On 2015/11/19 11:46, Xiao Guangrong wrote:
Actually, some people prefer to put braces when one of the
if/else-if/else cases has multiple lines. You can see
some examples in kernel/sched/core.c: see hrtick_start(),
sched_fork(), free_sched_domain().
In our case, I thought putting braces would a
On 2015/11/18 18:09, Paolo Bonzini wrote:
On 18/11/2015 04:21, Xiao Guangrong wrote:
On 11/12/2015 07:55 PM, Takuya Yoshikawa wrote:
@@ -1720,7 +1724,7 @@ static struct kvm_mmu_page
*kvm_mmu_alloc_page(struct kvm_vcpu *vcpu,
* this feature. See the comments in kvm_zap_obsolete_pages
On 2015/11/18 11:44, Xiao Guangrong wrote:
On 11/12/2015 07:50 PM, Takuya Yoshikawa wrote:
+if (!ret) {
+clear_unsync_child_bit(sp, i);
+continue;
+} else if (ret > 0) {
nr_unsync_leaf += ret;
Just a single line h
On 2015/11/14 7:08, Marcelo Tosatti wrote:
On Thu, Nov 12, 2015 at 08:53:43PM +0900, Takuya Yoshikawa wrote:
At some call sites of rmap_get_first() and rmap_get_next(), BUG_ON is
placed right after the call to detect unrelated sptes which must not be
found in the reverse-mapping list.
Move
On 2015/11/14 18:20, Marcelo Tosatti wrote:
The actual issue is this: a higher level page that had, under its children,
no out-of-sync pages now has, due to your addition, a child that is unsync:
initial state:
level1
final state:
level1 -x-> level2 -x-> level3
Where -x-> are th
On 2015/11/12 23:27, Paolo Bonzini wrote:
On 12/11/2015 12:56, Takuya Yoshikawa wrote:
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 9d21b44..f414ca6 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -598,7 +598,7 @@ static int FNAME(fetch
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 20 +++-
arch/x86/kvm/paging_tmpl.h | 4 ++--
2 files changed, 9 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 33fe720..101e77d 100644
--- a/arch/x86/kvm/mmu.c
+++ b
kvm_mmu_get_page() just for mark_unsync() and
mmu_page_add_parent_pte().
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 21 -
arch/x86/kvm/paging_tmpl.h | 6 ++
2 files changed, 10 insertions(+), 17 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm
Make kvm_mmu_alloc_page() do just what its name suggests, and remove
the extra error check at its call site since the allocation cannot fail.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 15 ---
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/arch/x86
New struct kvm_rmap_head makes the code type-safe to some extent.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/include/asm/kvm_host.h | 8 +-
arch/x86/kvm/mmu.c | 169 +---
arch/x86/kvm/mmu_audit.c| 13 ++--
3 files changed, 100
sptes are present, at least until drop_parent_pte()
actually unlinks them, and not mmio-sptes.
Signed-off-by: Takuya Yoshikawa
---
Documentation/virtual/kvm/mmu.txt | 4 ++--
arch/x86/kvm/mmu.c| 26 +-
2 files changed, 19 insertions(+), 11 deletions(-)
diff
set yet.
By calling mark_unsync() separately for the parent and adding the parent
pointer to the parent_ptes chain later in kvm_mmu_get_page(), the macro
works with no problem.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 36 +---
1 file changed, 13
changed their roles somewhat, and is_rmap_spte()
just calls is_shadow_present_pte() now.
Since using both of them with no clear distinction just makes the code
confusing, remove is_rmap_spte().
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 13 -
arch/x86/kvm/mmu_audi
ulate
value instead to clean up this complex interface. Prefetch functions
can just throw away the return value.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 27 ++-
arch/x86/kvm/paging_tmpl.h | 10 +-
2 files changed, 19 insertions(+), 18 dele
: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 36 ++--
1 file changed, 18 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c3bbc82..f3120aa 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1806,6 +1806,13 @@ static
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 12
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index e7c2c14..c3bbc82 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2708,9 +2708,8 @@ static void
try to alleviate the sadness.
Takuya
Takuya Yoshikawa (10):
01: KVM: x86: MMU: Remove unused parameter of __direct_map()
02: KVM: x86: MMU: Add helper function to clear a bit in unsync child bitmap
03: KVM: x86: MMU: Make mmu_set_spte() return emulate value
04: KVM: x86: MMU: R
On 2015/11/09 19:14, Paolo Bonzini wrote:
Can you also change kvm_mmu_mark_parents_unsync to use
for_each_rmap_spte instead of pte_list_walk? It is the last use of
pte_list_walk, and it's nice if we have two uses of for_each_rmap_spte
with parent_ptes as the argument.
No problem, I will do.
S
sptes are present, at least until drop_parent_pte()
actually unlinks them, and not mmio-sptes.
Signed-off-by: Takuya Yoshikawa
---
Documentation/virtual/kvm/mmu.txt | 4 ++--
arch/x86/kvm/mmu.c| 31 ++-
2 files changed, 24 insertions(+), 11 deletions
changed their roles somewhat, and is_rmap_spte()
just calls is_shadow_present_pte() now.
Since using both of them without clear distinction just makes the
code confusing, remove is_rmap_spte().
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 13 -
arch/x86/kvm/mmu_audi
ulate
value instead to clean up this complex interface. Prefetch functions
can just throw away the return value.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 27 ++-
arch/x86/kvm/paging_tmpl.h | 10 +-
2 files changed, 19 insertions(+), 18 dele
: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 36 ++--
1 file changed, 18 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a76bc04..a9622a2 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1806,6 +1806,13 @@ static
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 11 ---
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7d85bca..a76bc04 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2708,9 +2708,8 @@ static void
Patches 1/2/3 are easy ones.
The following two, patches 4 and 5, may not be ideal solutions, but at least
explain, or try to explain, the problems.
Takuya Yoshikawa (5):
KVM: x86: MMU: Remove unused parameter of __direct_map()
KVM: x86: MMU: Add helper function to clear a bit in unsync child bitmap
lot() when
!check_hugepage_cache_consistency() check in tdp_page_fault() forces
page table level mapping.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 7 ---
arch/x86/kvm/paging_tmpl.h | 2 +-
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/m
Now that it has only one caller, and its name is not so helpful for
readers, remove it. Instead, the new memslot_valid_for_gpte() function
makes it possible to share the common code.
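A sketch of what a shared validity check along these lines could look like;
the flag name and the exact conditions are assumptions made for illustration,
not necessarily those of the patch.

struct memslot {
        unsigned long flags;
        unsigned long *dirty_bitmap;
};

#define SLOT_INVALID (1UL << 0)         /* illustrative flag */

static int memslot_valid_for_gpte(struct memslot *slot, int no_dirty_log)
{
        if (!slot || (slot->flags & SLOT_INVALID))
                return 0;
        if (no_dirty_log && slot->dirty_bitmap)
                return 0;               /* dirty logging active: not usable here */
        return 1;
}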
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 24
1 file changed, 16
Calling kvm_vcpu_gfn_to_memslot() twice in mapping_level() should be
avoided since getting a slot by binary search may not be negligible,
especially for virtual machines with many memory slots.
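The refactoring pattern, sketched with simplified types (the lookup below
stands in for kvm_vcpu_gfn_to_memslot()): do the search once in the caller
and pass the slot down, instead of letting each helper repeat it.

struct memslot {
        unsigned long base_gfn, npages;
};

static struct memslot slots[2] = { { 0, 256 }, { 1024, 256 } };

/* stand-in for the binary search over the memslot array */
static struct memslot *gfn_to_slot(unsigned long gfn)
{
        int i;

        for (i = 0; i < 2; i++)
                if (gfn >= slots[i].base_gfn &&
                    gfn < slots[i].base_gfn + slots[i].npages)
                        return &slots[i];
        return 0;
}

/* takes the already-found slot rather than searching again */
static int mapping_level(struct memslot *slot)
{
        return slot ? 2 : 1;            /* placeholder for the real level computation */
}

static void page_fault(unsigned long gfn)
{
        struct memslot *slot = gfn_to_slot(gfn);   /* single lookup */
        int level = mapping_level(slot);

        (void)level;
}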
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 17 +++--
1 file changed, 11
As a bonus, an extra memory slot search can be eliminated when
is_self_change_mapping is true.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/paging_tmpl.h | 15 +++
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm
This is necessary to eliminate an extra memory slot search later.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 29 ++---
arch/x86/kvm/paging_tmpl.h | 6 +++---
2 files changed, 17 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch
This will be passed to a function later.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 8
arch/x86/kvm/paging_tmpl.h | 4 ++--
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b8482c0..2262728 100644
--- a
it of cleanup effort, the patch set reduces this overhead.
Takuya
Takuya Yoshikawa (5):
KVM: x86: MMU: Make force_pt_level bool
KVM: x86: MMU: Simplify force_pt_level calculation code in FNAME(page_fault)()
KVM: x86: MMU: Merge mapping_level_dirty_bitmap() into mapping_level()
KVM: x86
Calling kvm_vcpu_gfn_to_memslot() twice in mapping_level() should be
avoided since getting a slot by binary search may not be negligible,
especially for virtual machines with many memory slots.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 17 +++--
1 file changed, 11
Now that it has only one caller, and its name is not so helpful for
readers, just remove it.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 21 +
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 890cd69
This is necessary to eliminate an extra memory slot search later.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 29 ++---
arch/x86/kvm/paging_tmpl.h |6 +++---
2 files changed, 17 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b
This will be passed to a function later.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c |8
arch/x86/kvm/paging_tmpl.h |4 ++--
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b8482c0..2262728 100644
As a bonus, an extra memory slot search can be eliminated when
is_self_change_mapping is true.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/paging_tmpl.h | 15 +++
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm
In page fault handlers, both mapping_level_dirty_bitmap() and mapping_level()
do a memory slot search, binary search, through kvm_vcpu_gfn_to_memslot(), which
may not be negligible especially for virtual machines with many memory slots.
With a bit of cleanup effort, the patch set reduces this over
erstand lines is really
nice.
>
> Signed-off-by: Paolo Bonzini
Reviewed-by: Takuya Yoshikawa
Takuya
. framebuffers can
stay calm for a long time, it is worth eliminating this overhead.
Signed-off-by: Takuya Yoshikawa
---
virt/kvm/kvm_main.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a109370..420d8cf 100644
--- a/virt/kvm
On 2014/11/17 18:23, Paolo Bonzini wrote:
>
>
> On 17/11/2014 02:56, Takuya Yoshikawa wrote:
>>>> here are a few small patches that simplify __kvm_set_memory_region
>>>> and associated code. Can you please review them?
>> Ah, already queued. Sorry for bei
On 2014/11/14 20:11, Paolo Bonzini wrote:
> Hi Igor and Takuya,
>
> here are a few small patches that simplify __kvm_set_memory_region
> and associated code. Can you please review them?
Ah, already queued. Sorry for being late to respond.
Takuya
>
> Thanks,
>
> Paolo
>
> Paolo Bonz
On 2014/11/14 20:12, Paolo Bonzini wrote:
> The two kmemdup invocations can be unified. I find that the new
> placement of the comment makes it easier to see what happens.
A lot easier to follow the logic.
Reviewed-by: Takuya Yoshikawa
>
> Signed-off-by: Paolo Bonzini
> -
logging support, used by architectures that share
> >> + * comman dirty page logging implementation.
> >
> > s/comman/common/
> >
> > The approach looks sane to me, especially as it does not change other
> > architectures needlessly.
> >
>
--
Takuya Yoshikawa
No need to scan the entire VCPU array.
Signed-off-by: Takuya Yoshikawa
---
BTW, this looks like hyperv support forces us to stick to the current
implementation which stores VCPUs in an array, or at least something
we can index into; not a good thing.
arch/x86/kvm/x86.c |7 +--
1
(2014/02/18 18:07), Paolo Bonzini wrote:
Il 18/02/2014 09:22, Takuya Yoshikawa ha scritto:
When this was introduced, kvm_flush_remote_tlbs() could be called
without holding mmu_lock. It is now acknowledged that the function
must be called before releasing mmu_lock, and all callers have already
(2014/02/18 18:43), Xiao Guangrong wrote:
On 02/18/2014 04:22 PM, Takuya Yoshikawa wrote:
When this was introduced, kvm_flush_remote_tlbs() could be called
without holding mmu_lock. It is now acknowledged that the function
must be called before releasing mmu_lock, and all callers have already
When this was introduced, kvm_flush_remote_tlbs() could be called
without holding mmu_lock. It is now acknowledged that the function
must be called before releasing mmu_lock, and all callers have already
been changed to do so.
This patch adds a comment explaining this.
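The ordering being documented, as a sketch (all four helpers below are
stand-ins for the real locking, spte update, and flush code):

static void lock_mmu(void)          { /* spin_lock(&kvm->mmu_lock) */ }
static void unlock_mmu(void)        { /* spin_unlock(&kvm->mmu_lock) */ }
static void modify_sptes(void)      { /* zap or write-protect sptes */ }
static void flush_remote_tlbs(void) { /* kvm_flush_remote_tlbs(kvm) */ }

static void zap_and_flush(void)
{
        lock_mmu();
        modify_sptes();
        flush_remote_tlbs();            /* must happen before the unlock below */
        unlock_mmu();
}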
Signed-off-by: Takuya
-by: Takuya Yoshikawa
---
arch/x86/kvm/paging_tmpl.h |7 ---
include/linux/kvm_host.h |2 +-
virt/kvm/kvm_main.c| 11 +++
3 files changed, 12 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index cba218a..b1e6c1b
Please take patch A or B.
Takuya
Giving proper names to the 0 and 1 was once suggested. But since 0 is
returned to the userspace, giving it another name can introduce extra
confusion. This patch just explains the meanings instead.
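Roughly the kind of convention such a comment spells out (paraphrased; the
wording and the example function are illustrative, not taken from the patch):

/*
 * Return value convention:
 *   1 - the request was handled in the kernel; resume guest execution
 *   0 - exit to userspace; this value is what userspace will observe
 */
static int handle_exit(int handled_in_kernel)
{
        if (handled_in_kernel)
                return 1;
        return 0;
}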
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/x86.c |5 +
1 file changed, 5
ned-off-by: Takuya Yoshikawa
---
arch/ia64/kvm/kvm-ia64.c |2 +-
arch/powerpc/kvm/book3s_hv.c |2 +-
arch/x86/kvm/x86.c |2 +-
include/linux/kvm_host.h |1 -
virt/kvm/kvm_main.c |8
5 files changed, 3 insertions(+), 12 deletions(-)
diff --
Xiao's "KVM: MMU: flush tlb if the spte can be locklessly modified"
allows us to release mmu_lock before flushing TLBs.
Signed-off-by: Takuya Yoshikawa
Cc: Xiao Guangrong
---
Xiao can change the remaining mmu_lock to RCU's read-side lock:
The grace period will be reason
: Takuya Yoshikawa
Cc: Xiao Guangrong
---
arch/x86/kvm/x86.c | 18 --
virt/kvm/kvm_main.c |6 +-
2 files changed, 9 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1d1f6df..79e8ad0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm
I think this patch set answers Gleb's comment.
Takuya
On Tue, 30 Jul 2013 21:02:08 +0800
Xiao Guangrong wrote:
> @@ -2342,6 +2358,13 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
>*/
> kvm_flush_remote_tlbs(kvm);
>
> + if (kvm->arch.rcu_free_shadow_page) {
> + sp = list_first_entry(invalid_list, struct kvm_m
; KVM: MMU: flush tlb if the spte can be locklessly modified
> KVM: MMU: redesign the algorithm of pte_list
> KVM: MMU: introduce nulls desc
> KVM: MMU: introduce pte-list lockless walker
> KVM: MMU: allow locklessly access shadow page table out of vcpu thread
> KVM: MMU: lockl
On Thu, 11 Jul 2013 10:41:53 +0300
Gleb Natapov wrote:
> On Wed, Jul 10, 2013 at 10:49:56PM +0900, Takuya Yoshikawa wrote:
> > On Wed, 10 Jul 2013 11:24:39 +0300
> > "Michael S. Tsirkin" wrote:
> >
> > > On x86, kvm_arch_create_memslot assumes that rmap/
;arch.lpage_info, 0, sizeof slot->arch.lpage_info);
> +
> for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
> unsigned long ugfn;
> int lpages;
> --
> MST
Now that kvm_arch_memslots_updated() catches every increment of the
memslots->generation, checking if the mmio generation has reached its
maximum value is enough.
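A simplified sketch of such a check (the mask width and names are
illustrative): once it is guaranteed that no generation increment can be
missed, a single test against the maximum value suffices.

#define MMIO_GEN_MASK 0x7ffffULL        /* illustrative width */

static void zap_all_mmio_sptes(void)
{
        /* placeholder for zapping stale mmio sptes */
}

static void memslots_updated(unsigned long long generation)
{
        unsigned long long mmio_gen = generation & MMIO_GEN_MASK;

        if (mmio_gen == MMIO_GEN_MASK)  /* about to wrap: stale mmio sptes possible */
                zap_all_mmio_sptes();
}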
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c |5 +
arch/x86/kvm/x86.c | 10 +-
2 files changed
following patch, x86 will use this new API to check if the mmio
generation has reached its maximum value, in which case mmio sptes need
to be flushed out.
Signed-off-by: Takuya Yoshikawa
---
Removed the trailing space after "return old_memslots;" while at it.
arch/arm/kvm/arm.c
Patch 1: KVM-arch maintainers, please review this one.
{x86, power, s390, arm}-kvm maintainers CCed.
Could not find mips-kvm maintainer in MAINTAINERS.
Patch 2: I did not move the body of kvm_mmu_invalidate_mmio_sptes() into
x86.c because it looked like mmu details.
Takuya Yoshikawa (2
On Wed, 3 Jul 2013 12:10:57 +0300
Gleb Natapov wrote:
> > Yes, makes sense. However, this patch is still an improvement because
> > the current code is too easily mistaken for an off-by-one bug.
> >
> > Any improvements to the API can go on top.
> >
> If Takuya will send the proper fix shortly
On Wed, 03 Jul 2013 10:53:51 +0200
Paolo Bonzini wrote:
> Il 03/07/2013 10:50, Xiao Guangrong ha scritto:
> >> > Please wait a while. I can not understand it very clearly.
> >> >
> >> > This conditional check will cause caching an overflow value into an mmio
> >> > spte.
> >> > The simple case is t
On Wed, 03 Jul 2013 16:39:25 +0800
Xiao Guangrong wrote:
> Please wait a while. I can not understand it very clearly.
>
> This conditional check will cause caching an overflow value into an mmio spte.
> The simple case is that kvm adds new slots for many times, the mmio-gen is
> easily
> more than
Since kvm_arch_prepare_memory_region() is called right after installing
the slot marked invalid, wraparound checking should be there to avoid
zapping mmio sptes when mmio generation is still MMIO_MAX_GEN - 1.
Signed-off-by: Takuya Yoshikawa
---
This seems to be the simplest solution for fixing
On Thu, 20 Jun 2013 23:29:22 +0200
Paolo Bonzini wrote:
> > @@ -4385,8 +4385,10 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm)
> > * The max value is MMIO_MAX_GEN - 1 since it is not called
> > * when mark memslot invalid.
> > */
> > - if (unlikely(kvm_current_mmio_genera
From: Takuya Yoshikawa
Without this information, users will just see unexpected performance
problems and there is little chance we will get good reports from them:
note that mmio generation is increased even when we just start, or stop,
dirty logging for some memory slot, in which case users
On Thu, 20 Jun 2013 15:14:42 +0200
Paolo Bonzini wrote:
> Il 20/06/2013 14:54, Gleb Natapov ha scritto:
> >> If they see mysterious peformance problems induced by this wraparound, the
> >> only
> >> way to know the cause later is by this kind of information in the syslog.
> >> So even the first
On Thu, 20 Jun 2013 15:54:38 +0300
Gleb Natapov wrote:
> On Thu, Jun 20, 2013 at 09:28:37PM +0900, Takuya Yoshikawa wrote:
> > On Thu, 20 Jun 2013 14:45:04 +0300
> > Gleb Natapov wrote:
> >
> > > On Thu, Jun 20, 2013 at 12:59:54PM +0200, Paolo Bonzini wrote:
>
On Thu, 20 Jun 2013 14:45:04 +0300
Gleb Natapov wrote:
> On Thu, Jun 20, 2013 at 12:59:54PM +0200, Paolo Bonzini wrote:
> > Il 20/06/2013 10:59, Takuya Yoshikawa ha scritto:
> > > Without this information, users will just see unexpected performance
> > > problems an
shadow pages to be zapped.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c |4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c60c5da..bc8302f 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4385,8 +4385,10
On Thu, 13 Jun 2013 21:08:21 -0300
Marcelo Tosatti wrote:
> On Fri, Jun 07, 2013 at 04:51:22PM +0800, Xiao Guangrong wrote:
> - Where is the generation number increased?
Looks like when a new slot is installed in update_memslots() because
it's based on slots->generation. This is not restricted
_invalidate_mmio_sptes(struct kvm *kvm)
>* when mark memslot invalid.
>*/
> if (unlikely(kvm_current_mmio_generation(kvm) >= (MMIO_MAX_GEN - 1)))
> - kvm_mmu_zap_mmio_sptes(kvm);
> + kvm_mmu_invalidate_zap_all_pages(kvm);
> }
>
On Mon, 10 Jun 2013 10:57:50 +0300
Gleb Natapov wrote:
> On Fri, Jun 07, 2013 at 04:51:25PM +0800, Xiao Guangrong wrote:
> > +
> > +/*
> > + * Return values of handle_mmio_page_fault_common:
> > + * RET_MMIO_PF_EMULATE: it is a real mmio page fault, emulate the
> > instruction
> > + *
On Fri, 31 May 2013 01:24:43 +0900
Takuya Yoshikawa wrote:
> On Thu, 30 May 2013 03:53:38 +0300
> Gleb Natapov wrote:
>
> > On Wed, May 29, 2013 at 09:19:41PM +0800, Xiao Guangrong wrote:
> > > On 05/29/2013 08:39 PM, Marcelo Tosatti wrote:
> > > > On W
On Thu, 30 May 2013 03:53:38 +0300
Gleb Natapov wrote:
> On Wed, May 29, 2013 at 09:19:41PM +0800, Xiao Guangrong wrote:
> > On 05/29/2013 08:39 PM, Marcelo Tosatti wrote:
> > > On Wed, May 29, 2013 at 11:03:19AM +0800, Xiao Guangrong wrote:
> > > the pages since other vcpus may be doing lock
| 124
> ++-
> arch/x86/kvm/mmu.h |2 +
> arch/x86/kvm/mmutrace.h | 45 +++---
> arch/x86/kvm/x86.c |9 +--
> 5 files changed, 163 insertions(+), 19 deletions(-)
>
> --
> 1.
On Mon, 13 May 2013 21:02:10 +0800
Xiao Guangrong wrote:
> On 05/13/2013 07:24 PM, Gleb Natapov wrote:
> > I agree that this is mostly a code style issue and with Takuya's patch the
> > indentation is deeper. Also the structure of mmu_free_roots() resembles
> > mmu_alloc_shadow_roots() currently. Th
On Thu, 09 May 2013 20:16:18 +0800
Xiao Guangrong wrote:
> >> That function is really magic, and this change does not really help it. I had
> >> several
> >> patches posted some months ago to make this kind of code easier to
> >> understand, but
> >> i am too tired to update them.
Thank you for
On Thu, 09 May 2013 18:11:31 +0800
Xiao Guangrong wrote:
> On 05/09/2013 02:46 PM, Takuya Yoshikawa wrote:
> > By making the last three statements common to both if/else cases, the
> > symmetry between the locking and unlocking becomes clearer. One note
> > here is that VCP
By making the last three statements common to both if/else cases, the
symmetry between the locking and unlocking becomes clearer. One note
here is that VCPU's root_hpa does not need to be protected by mmu_lock.
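The shape of the restructuring, as a sketch (helper names are stand-ins): the
common tail is written once after the if/else, so the single lock/unlock pair
is easy to see, and clearing root_hpa sits outside the locked region.

static void lock_mmu(void)             { /* spin_lock(&vcpu->kvm->mmu_lock) */ }
static void unlock_mmu(void)           { /* spin_unlock(&vcpu->kvm->mmu_lock) */ }
static void free_one_root(void)        { }
static void free_four_pae_roots(void)  { }
static void commit_zapped_pages(void)  { }
static void clear_root_hpa(void)       { /* root_hpa needs no mmu_lock protection */ }

static void free_roots(int single_root)
{
        lock_mmu();
        if (single_root)
                free_one_root();
        else
                free_four_pae_roots();
        commit_zapped_pages();          /* common tail, now written only once */
        unlock_mmu();
        clear_root_hpa();
}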
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c |