Return false in kvm_cpuid() when it fails to find the cpuid
entry. Also, this routine (and its caller) is optimized with
a new argument, check_limit, so that the check_cpuid_limit()
fallback can be avoided.
Signed-off-by: Yu Zhang
---
arch/x86/include/asm/kvm_emulate.h | 4 ++--
arch/x86/kvm
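The reworked lookup can be illustrated with a minimal user-space sketch. This is an assumption-laden model, not KVM's real kvm_cpuid() (which operates on a struct kvm_vcpu and writes all four output registers); entry 0's eax stands in for the maximum basic leaf, and the check_limit flag decides whether the out-of-range fallback runs at all:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for struct kvm_cpuid_entry2 (illustration only). */
struct cpuid_entry {
	uint32_t function;
	uint32_t eax, ebx, ecx, edx;
};

static const struct cpuid_entry *find_entry(const struct cpuid_entry *e,
					    size_t n, uint32_t function)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (e[i].function == function)
			return &e[i];
	return NULL;
}

/*
 * Returns false when no entry is found. With check_limit == false the
 * check_cpuid_limit()-style fallback (retry with the maximum basic
 * leaf) is skipped entirely, which is the optimization described above.
 */
static bool do_cpuid(const struct cpuid_entry *table, size_t n,
		     uint32_t function, bool check_limit, uint32_t *eax)
{
	const struct cpuid_entry *best = find_entry(table, n, function);

	if (!best && check_limit) {
		const struct cpuid_entry *max = find_entry(table, n, 0);

		/* Out-of-range leaf: fall back to the maximum basic leaf. */
		if (max && function > max->eax)
			best = find_entry(table, n, max->eax);
	}
	if (!best)
		return false;	/* entry not found: report failure */
	*eax = best->eax;
	return true;
}
```

Callers that know their leaf is always present (or must not see the fallback) pass check_limit == false and simply test the boolean result.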
we can just
redefine it to 5 whenever a replacement is needed for 5 level
paging.
Signed-off-by: Yu Zhang
---
arch/x86/include/asm/kvm_host.h | 4 +++-
arch/x86/kvm/mmu.c | 36 ++--
arch/x86/kvm/mmu.h | 2 +-
arch/x86/kvm/mmu_audit.c
table for a VM
whose physical address width is less than 48 bits, even when
the VM is running in 5 level paging mode.
Signed-off-by: Yu Zhang
---
arch/x86/include/asm/kvm_host.h | 10 +-
arch/x86/include/asm/vmx.h | 1 +
arch/x86/kvm/cpuid.c | 5 +
arch/x86/kvm/mmu.c
This patch exposes the 5-level page table feature to the VM.
At the same time, the canonical virtual address checking is
extended to support both 48-bit and 57-bit address widths.
Signed-off-by: Yu Zhang
---
arch/x86/include/asm/kvm_host.h | 18 ++
arch/x86/kvm/cpuid.c
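The extended canonical check described above can be sketched as follows. This is a hedged illustration of the assumed semantics, not the exact KVM helper: an address is canonical iff sign-extending it from bit (vaddr_bits - 1) reproduces it, with vaddr_bits = 48 for 4-level paging and 57 when CR4.LA57 enables 5-level paging:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * A linear address is canonical iff bits 63 down to (vaddr_bits - 1)
 * are all equal, i.e. sign extension from bit (vaddr_bits - 1) is a
 * no-op. vaddr_bits is 48 for 4-level paging, 57 for 5-level (LA57).
 */
static bool is_canonical(uint64_t la, int vaddr_bits)
{
	int shift = 64 - vaddr_bits;

	/* Shift out the upper bits, then arithmetic-shift back. */
	return (uint64_t)((int64_t)(la << shift) >> shift) == la;
}
```

An address such as 0x0000800000000000 (bit 47 set, upper bits clear) is non-canonical under 48-bit checking but becomes canonical once 57-bit checking is in effect.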
Currently, KVM uses CR3_L_MODE_RESERVED_BITS to check the
reserved bits in CR3. Yet the length of reserved bits in
guest CR3 should be based on the physical address width
exposed to the VM. This patch changes CR3 check logic to
calculate the reserved bits at runtime.
Signed-off-by: Yu Zhang
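The runtime calculation can be sketched like this. It is a simplified model under stated assumptions, not the patch itself: in long mode, CR3 bits maxphyaddr..62 are reserved, while bit 63 (the PCID no-flush hint on MOV-to-CR3) is handled separately in the real code:

```c
#include <stdint.h>

/* Mask with bits s..e (inclusive) set, mirroring KVM's rsvd_bits(). */
static uint64_t rsvd_bits(int s, int e)
{
	return ((1ULL << (e - s + 1)) - 1) << s;
}

/*
 * Sketch of the runtime check replacing the fixed
 * CR3_L_MODE_RESERVED_BITS constant: the reserved mask is derived
 * from the guest's MAXPHYADDR instead of a compile-time constant.
 */
static int cr3_has_rsvd_bits(uint64_t cr3, int maxphyaddr)
{
	return (cr3 & rsvd_bits(maxphyaddr, 62)) != 0;
}
```

With a fixed constant, a guest configured with a small physical address width could load a CR3 above its MAXPHYADDR without faulting; deriving the mask at runtime closes that gap.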
- Address comments from Paolo Bonzini: move definition of PT64_ROOT_MAX_LEVEL
into kvm_host.h;
- Address comments from Paolo Bonzini: add checking for shadow_root_level in
mmu_free_roots();
- Address comments from Paolo Bonzini: set root_level & shadow_root_level both
to PT64_ROOT_4LEVEL for shadow ept situation.
Yu Zhang (5):
On 8/17/2017 7:57 PM, Paolo Bonzini wrote:
On 12/08/2017 15:35, Yu Zhang wrote:
index a98b88a..50107ae 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -694,7 +694,7 @@ static __always_inline int __linearize(struct x86_emulate_ctxt *ctxt,
switch (mode
On 8/17/2017 8:29 PM, Paolo Bonzini wrote:
On 17/08/2017 21:52, Yu Zhang wrote:
diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index ac15193..3e759cf 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -21,7 +21,14 @@ int kvm_vcpu_ioctl_set_cpuid2(struct kvm_vcpu *vcpu
On 8/17/2017 8:31 PM, Paolo Bonzini wrote:
On 17/08/2017 21:52, Yu Zhang wrote:
+ if (efer & EFER_LMA) {
+ u64 maxphyaddr;
+ u32 eax = 0x80000008;
+
+ if (ctxt->ops->get_cpuid(ctxt, &eax, NU
On 8/17/2017 8:23 PM, Yu Zhang wrote:
On 8/17/2017 8:29 PM, Paolo Bonzini wrote:
On 17/08/2017 21:52, Yu Zhang wrote:
diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index ac15193..3e759cf 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -21,7 +21,14 @@ int
On 8/17/2017 9:17 PM, Paolo Bonzini wrote:
On 17/08/2017 14:23, Yu Zhang wrote:
On 8/17/2017 8:29 PM, Paolo Bonzini wrote:
On 17/08/2017 21:52, Yu Zhang wrote:
diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index ac15193..3e759cf 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86
On 8/17/2017 10:29 PM, Paolo Bonzini wrote:
On 17/08/2017 13:53, Yu Zhang wrote:
On 8/17/2017 7:57 PM, Paolo Bonzini wrote:
On 12/08/2017 15:35, Yu Zhang wrote:
index a98b88a..50107ae 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -694,7 +694,7 @@ static
logic, optimizes COW
> scenarios by refreshing leaf SPTEs when they are written, as opposed to
> zapping the SPTE, restarting the guest, and installing the new SPTE on
> the subsequent fault. Since KVM no longer write-protects leaf page
> tables, update_pte() is unreachable and can be dropped.
On Mon, Feb 08, 2021 at 05:47:22PM +0100, Paolo Bonzini wrote:
> On 08/02/21 14:49, Yu Zhang wrote:
> > On Mon, Feb 08, 2021 at 12:36:57PM +0100, Paolo Bonzini wrote:
> > > On 07/02/21 13:22, Yu Zhang wrote:
> > > > In shadow page table, only leaf SPs may be marked
On Tue, Feb 09, 2021 at 08:46:42AM +0100, Paolo Bonzini wrote:
> On 09/02/21 04:33, Yu Zhang wrote:
> > On Mon, Feb 08, 2021 at 05:47:22PM +0100, Paolo Bonzini wrote:
> > > On 08/02/21 14:49, Yu Zhang wrote:
> > > > On Mon, Feb 08, 2021 at 12:36:57PM +0100, Paolo Bon
Add a warning
inside mmu_sync_children() to assert that the flags are used
properly.
While at it, move the warning from mmu_need_write_protect()
to kvm_unsync_page().
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
Signed-off-by: Paolo Bonzini
Signed-off-by: Yu Zhang
---
a
01:01:11AM +0800, Yu Zhang wrote:
> In shadow page table, only leaf SPs may be marked as unsync;
> instead, for non-leaf SPs, we store the number of unsynced
> children in unsync_children. Therefore, in kvm_mmu_sync_root(),
> sp->unsync shall always be zero for the root SP and there
A warning in mmu_sync_children() is added, in
case someone incorrectly uses it.
Also, clarify the mmu_need_write_protect(), by moving the warning
into kvm_unsync_page().
Signed-off-by: Yu Zhang
Signed-off-by: Sean Christopherson
---
Changes in V2:
- warnings added based on Sean's suggestion.
arch/x86/kvm/mmu/mm
nlocks in kvmgt.
Reported-by: Stephen Rothwell
Signed-off-by: Yu Zhang
---
drivers/gpu/drm/i915/gvt/kvmgt.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index 60f1a386dd06..b4348256ae95 100644
--
Thanks a lot for reporting this, Stephen. Just sent out a patch
to fix it in kvmgt.
B.R.
Yu
On Mon, Feb 08, 2021 at 04:33:08PM +1100, Stephen Rothwell wrote:
> Hi all,
>
> After merging the kvm tree, today's linux-next build (x86_64 allmodconfig)
> failed like this:
>
> drivers/gpu/drm/i915/gvt
On Mon, Feb 08, 2021 at 12:36:57PM +0100, Paolo Bonzini wrote:
> On 07/02/21 13:22, Yu Zhang wrote:
> > In shadow page table, only leaf SPs may be marked as unsync.
> > And for non-leaf SPs, we use unsync_children to keep the number
> > of the unsynced children. In kvm_mmu_sy
Hi Paolo,
Any comments? Thanks!
B.R.
Yu
On Sat, Jan 16, 2021 at 08:21:00AM +0800, Yu Zhang wrote:
> In shadow page table, only leaf SPs may be marked as unsync.
> And for non-leaf SPs, we use unsync_children to keep the number
> of the unsynced children. In kvm_mmu_sync_root(), s
In shadow page table, only leaf SPs may be marked as unsync.
And for non-leaf SPs, we use unsync_children to keep the number
of the unsynced children. In kvm_mmu_sync_root(), sp->unsync
shall always be zero for the root SP, hence no need to check it.
Signed-off-by: Yu Zhang
---
arch/x86/kvm/
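The invariant can be modeled with a toy sketch. These are not KVM's real data structures, only an illustration of the bookkeeping described above: leaf shadow pages set ->unsync, non-leaf pages count unsynced children in ->unsync_children, and a root SP is never a leaf, so the sync path can rely on root->unsync being zero:

```c
#include <stdbool.h>

/* Toy shadow-page model (illustration only, not KVM's struct kvm_mmu_page). */
struct sp {
	bool leaf;
	bool unsync;			/* meaningful for leaf SPs only     */
	unsigned int unsync_children;	/* meaningful for non-leaf SPs only */
};

/* Marking a leaf unsync bumps the parent's counter; the parent itself
 * never sets ->unsync. */
static void mark_leaf_unsync(struct sp *leaf, struct sp *parent)
{
	leaf->unsync = true;
	parent->unsync_children++;
}

/* The root's own ->unsync is always false, so only the child counter
 * needs to be consulted, as the patch argues for kvm_mmu_sync_root(). */
static bool root_needs_sync(const struct sp *root)
{
	return root->unsync_children != 0;
}
```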
Nested VMX was enabled by default in commit 1e58e5e59148 ("KVM:
VMX: enable nested virtualization by default"), which was merged
in Linux 4.20. This patch fixes the documentation accordingly.
Signed-off-by: Yu Zhang
---
Documentation/virt/kvm/nested-vmx.rst
On Thu, Oct 03, 2019 at 02:23:48PM -0700, Rick Edgecombe wrote:
> Mask gfn by maxphyaddr in kvm_mtrr_get_guest_memory_type so that the
> guests view of gfn is used when high bits of the physical memory are
> used as extra permissions bits. This supports the KVM XO feature.
>
> TODO: Since MTRR is
Thanks for the notification, Stephen.
@Paolo, should I resubmit the patch to correct?
On Sat, Feb 16, 2019 at 06:34:33PM +1100, Stephen Rothwell wrote:
> Hi all,
>
> In commit
>
> aa8359972cfc ("KVM: x86/mmu: Differentiate between nr zapped and list
> unstable")
>
> Fixes tag
>
> Fixes: 5
Hi Paolo, any comments on this patch? And the other one (kvm: x86: Return
LA57 feature based on hardware capability)? :-)
On Fri, Feb 01, 2019 at 12:09:23AM +0800, Yu Zhang wrote:
> Previously, commit 7dcd57552008 ("x86/kvm/mmu: check if tdp/shadow
> MMU reconfiguration is needed")
On Wed, Feb 20, 2019 at 03:06:10PM +0100, Vitaly Kuznetsov wrote:
> Yu Zhang writes:
>
> > Previously, commit 7dcd57552008 ("x86/kvm/mmu: check if tdp/shadow
> > MMU reconfiguration is needed") offered some optimization to avoid
> > the unnecessary reconfiguration
Any news?
Thanks for your review, Paolo.
This is Yu Zhang from Intel. I'll pick up this 5-level EPT feature, and
will try to address your comments next. :-)
Now I am learning Liang's code and trying to bring a VM up with Kirill's
native 5-level paging code integrated.
Yu
Paolo
On 3/1/2017 5:04 PM, Yu Zhang wrote:
On 12/13/2016 7:03 PM, Paolo Bonzini wrote:
On 13/12/2016 05:03, Li, Liang Z wrote:
Hi Paolo,
We intended to enable UMIP for KVM and found you had already worked
on it.
Do you have any plan for the following patch set? Is there
anything else
On 3/10/2017 4:36 PM, Paolo Bonzini wrote:
On 10/03/2017 09:02, Yu Zhang wrote:
Besides, are these all the tests for the UMIP unit test? I.e., do we need to
construct a scenario in the test to trigger a VM exit and let the hypervisor
inject a #GP fault? I did not see this scenario in this patch. Or
On 3/10/2017 5:31 PM, Yu Zhang wrote:
On 3/10/2017 4:36 PM, Paolo Bonzini wrote:
On 10/03/2017 09:02, Yu Zhang wrote:
Besides, are these all the tests for the UMIP unit test? I.e., do we need to
construct a scenario in the test to trigger a VM exit and let the hypervisor
inject a #GP fault? I did
On 12/13/2016 7:03 PM, Paolo Bonzini wrote:
On 13/12/2016 05:03, Li, Liang Z wrote:
Hi Paolo,
We intended to enable UMIP for KVM and found you had already worked on it.
Do you have any plan for the following patch set? Is there anything else you
expect us to help with?
Yes, I plan to resend
On Wed, Oct 14, 2020 at 11:26:42AM -0700, Ben Gardon wrote:
> The TDP iterator implements a pre-order traversal of a TDP paging
> structure. This iterator will be used in future patches to create
> an efficient implementation of the KVM MMU for the TDP case.
>
> Tested by running kvm-unit-tests an
On Wed, Oct 14, 2020 at 11:26:47AM -0700, Ben Gardon wrote:
> Add functions to zap SPTEs to the TDP MMU. These are needed to tear down
> TDP MMU roots properly and implement other MMU functions which require
> tearing down mappings. Future patches will add functions to populate the
> page tables, b
On Wed, Oct 14, 2020 at 11:26:44AM -0700, Ben Gardon wrote:
> The TDP MMU must be able to allocate paging structure root pages and track
> the usage of those pages. Implement a similar, but separate system for root
> page allocation to that of the x86 shadow paging implementation. When
> future pat
On Wed, Oct 21, 2020 at 07:20:15PM +0200, Paolo Bonzini wrote:
> On 21/10/20 17:02, Yu Zhang wrote:
> >> void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root)
> >> {
> >> + gfn_t max_gfn = 1ULL << (boot
On Wed, Oct 21, 2020 at 08:00:47PM +0200, Paolo Bonzini wrote:
> On 21/10/20 19:24, Yu Zhang wrote:
> > On Wed, Oct 21, 2020 at 07:20:15PM +0200, Paolo Bonzini wrote:
> >> On 21/10/20 17:02, Yu Zhang wrote:
> >>>> void kvm_tdp_mmu_free_root(struct kv
On Wed, Oct 21, 2020 at 11:08:52AM -0700, Ben Gardon wrote:
> On Wed, Oct 21, 2020 at 7:59 AM Yu Zhang wrote:
> >
> > On Wed, Oct 14, 2020 at 11:26:42AM -0700, Ben Gardon wrote:
> > > The TDP iterator implements a pre-order traversal of a TDP paging
> > > structu
, which will cause
VM entry failure later.
Fixes: f99e3daf94ff ("KVM: x86: Add Intel PT virtualization work mode")
Signed-off-by: Yu Zhang
---
Cc: Paolo Bonzini
Cc: "Radim Krčmář"
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Pete
ed to reset the reserved bits. Also, the TDP may need to
reset its shadow_root_level when this value is changed.
To fix this, a new field, maxphyaddr, is introduced in the extended
role structure to keep track of the configured guest physical address
width.
Signed-off-by: Yu Zhang
---
Cc: Paolo Bonz
ng vcpus, and Qemu
will not be able to detect this feature and create VMs with LA57 feature.
As discussed earlier, VMs can still benefit from extended linear address
width, e.g. to enhance features like ASLR. So we would like to fix this
by returning the true hardware capability when Qemu queries.
S
we can just
redefine it to 5 whenever a replacement is needed for 5 level
paging.
Signed-off-by: Yu Zhang
---
arch/x86/include/asm/kvm_host.h | 4 +++-
arch/x86/kvm/mmu.c | 36 ++--
arch/x86/kvm/mmu.h | 2 +-
arch/x86/kvm/mmu_audit.c
- Address comments from Paolo Bonzini: move definition of PT64_ROOT_MAX_LEVEL
into kvm_host.h;
- Address comments from Paolo Bonzini: add checking for shadow_root_level in
mmu_free_roots();
- Address comments from Paolo Bonzini: set root_level & shadow_root_level both
to PT64_ROOT_4LEVEL for shadow ept situation
This patch exposes the 5-level page table feature to the VM.
At the same time, the canonical virtual address checking is
extended to support both 48-bit and 57-bit address widths.
Signed-off-by: Yu Zhang
---
arch/x86/include/asm/kvm_host.h | 18 ++
arch/x86/kvm/cpuid.c
table for a VM
whose physical address width is less than 48 bits, even when
the VM is running in 5 level paging mode.
Signed-off-by: Yu Zhang
---
arch/x86/include/asm/kvm_host.h | 10 +-
arch/x86/include/asm/vmx.h | 2 ++
arch/x86/kvm/cpuid.c | 5 +
arch/x86/kvm/mmu.c
Return false in kvm_cpuid() when it fails to find the cpuid
entry. Also, this routine (and its caller) is optimized with
a new argument, check_limit, so that the check_cpuid_limit()
fallback can be avoided.
Signed-off-by: Yu Zhang
---
arch/x86/include/asm/kvm_emulate.h | 4 ++--
arch/x86/kvm
Currently, KVM uses CR3_L_MODE_RESERVED_BITS to check the
reserved bits in CR3. Yet the length of reserved bits in
guest CR3 should be based on the physical address width
exposed to the VM. This patch changes CR3 check logic to
calculate the reserved bits at runtime.
Signed-off-by: Yu Zhang
On 8/24/2017 9:40 PM, Paolo Bonzini wrote:
On 24/08/2017 14:27, Yu Zhang wrote:
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 3ed6192..67e7ec2 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -48,6 +48,9 @@
static inline u64 rsvd_bits(int s, int e)
{
+ if
On 8/24/2017 11:50 PM, Paolo Bonzini wrote:
On 24/08/2017 17:23, Yu Zhang wrote:
static inline u64 rsvd_bits(int s, int e)
{
+	if (e < s)
+		return 0;
+
 	return ((1ULL << (e - s + 1)) - 1) << s;
 }
e = s - 1 is already supported; why do you need e <
On 8/25/2017 12:27 AM, Paolo Bonzini wrote:
On 24/08/2017 17:38, Yu Zhang wrote:
In practice, MAXPHYADDR will never be even 59, because the PKRU bits are
at bits 59..62.
Thanks, Paolo.
I see. I had made an assumption that MAXPHYADDR shall not exceed the
physical one, which I believe is 52.
On 9/16/2017 7:19 AM, Jim Mattson wrote:
On Thu, Aug 24, 2017 at 5:27 AM, Yu Zhang wrote:
Currently, KVM uses CR3_L_MODE_RESERVED_BITS to check the
reserved bits in CR3. Yet the length of reserved bits in
guest CR3 should be based on the physical address width
exposed to the VM. This patch
On 9/18/2017 4:41 PM, Paolo Bonzini wrote:
On 18/09/2017 10:15, Yu Zhang wrote:
static bool emulator_get_cpuid(struct x86_emulate_ctxt *ctxt,
			       u32 *eax, u32 *ebx, u32 *ecx, u32 *edx,
			       bool check_limit)
{
	return kvm_cpuid(emul_to_vcpu(ctxt), eax, ebx, ecx, edx
physical address width.")
Reported-by: Jim Mattson
Signed-off-by: Yu Zhang
---
arch/x86/kvm/emulate.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 16bf665..15f527b 100644
--- a/arch/x86/kvm/emulate.c
+++ b
On 8/18/2017 8:50 PM, Paolo Bonzini wrote:
On 18/08/2017 10:28, Yu Zhang wrote:
On 8/17/2017 10:29 PM, Paolo Bonzini wrote:
On 17/08/2017 13:53, Yu Zhang wrote:
On 8/17/2017 7:57 PM, Paolo Bonzini wrote:
On 12/08/2017 15:35, Yu Zhang wrote:
index a98b88a..50107ae 100644
--- a/arch/x86
On 8/21/2017 6:12 PM, Paolo Bonzini wrote:
On 21/08/2017 09:27, Yu Zhang wrote:
On 8/18/2017 8:50 PM, Paolo Bonzini wrote:
On 18/08/2017 10:28, Yu Zhang wrote:
On 8/17/2017 10:29 PM, Paolo Bonzini wrote:
On 17/08/2017 13:53, Yu Zhang wrote:
On 8/17/2017 7:57 PM, Paolo Bonzini wrote:
On
ULL and
return false immediately. Then the false value would have two different
meanings: entry not found, or invalid params.
Paolo, any suggestion? :-)
Thanks
Yu
Reviewed-by: Jim Mattson
On Mon, Sep 18, 2017 at 5:19 AM, David Hildenbrand wrote:
On 18.09.2017 12:45, Yu Zhang wrote:
Routine c
On 9/20/2017 4:13 PM, Paolo Bonzini wrote:
On 20/09/2017 08:35, Yu Zhang wrote:
Two reasons I did not choose to change kvm_cpuid(): 1> as Jim
commented, kvm_cpuid() will eventually write *eax - *edx no
matter whether a cpuid entry is found or not; 2> currently, the return
value of kvm
table for a VM
whose physical address width is less than 48 bits, even when
the VM is running in 5 level paging mode.
Signed-off-by: Yu Zhang
---
arch/x86/include/asm/kvm_host.h | 10 +-
arch/x86/include/asm/vmx.h | 1 +
arch/x86/kvm/cpuid.c | 5 +
arch/x86/kvm/mmu.c
so that
we can just redefine it to PT64_ROOT_5LEVEL whenever a replacement
is needed for 5 level paging.
Signed-off-by: Yu Zhang
---
arch/x86/kvm/mmu.c | 36 ++--
arch/x86/kvm/mmu.h | 4 +++-
arch/x86/kvm/mmu_audit.c | 4 ++--
arch/x86/kvm/svm.c
This patch exposes the 5-level page table feature to the VM.
At the same time, the canonical virtual address checking is
extended to support both 48-bit and 57-bit address widths.
Signed-off-by: Yu Zhang
---
arch/x86/include/asm/kvm_host.h | 18 ++
arch/x86/kvm/cpuid.c
address width and physical address width to the VM;
2> extends shadow logic to construct 5 level shadow page for VMs running
in LA57 mode;
3> extends ept logic to construct 5 level ept table for VMs whose maximum
physical width exceeds 48 bits.
Yu Zhang (4):
KVM: MMU: check guest CR3 reserved
Currently, KVM uses CR3_L_MODE_RESERVED_BITS to check the
reserved bits in CR3. Yet the length of reserved bits in
guest CR3 should be based on the physical address width
exposed to the VM. This patch changes CR3 check logic to
calculate the reserved bits at runtime.
Signed-off-by: Yu Zhang
Thanks a lot for your comments, Paolo. :-)
On 8/14/2017 3:31 PM, Paolo Bonzini wrote:
On 12/08/2017 15:35, Yu Zhang wrote:
struct rsvd_bits_validate {
- u64 rsvd_bits_mask[2][4];
+ u64 rsvd_bits_mask[2][5];
u64 bad_mt_xwr;
};
Can you change this 4 to
On 8/14/2017 3:36 PM, Paolo Bonzini wrote:
On 12/08/2017 15:35, Yu Zhang wrote:
+ ctxt->ops->get_cpuid(ctxt, &eax, NULL, NULL, NULL);
+ maxphyaddr = eax * 0xff;
This is "&", not "*". You can also use rsvd_bit
On 8/14/2017 10:13 PM, Paolo Bonzini wrote:
On 14/08/2017 13:37, Yu Zhang wrote:
Thanks a lot for your comments, Paolo. :-)
On 8/14/2017 3:31 PM, Paolo Bonzini wrote:
On 12/08/2017 15:35, Yu Zhang wrote:
struct rsvd_bits_validate {
-u64 rsvd_bits_mask[2][4];
+u64 rsvd_bits_mask
On 8/14/2017 11:02 PM, Paolo Bonzini wrote:
On 14/08/2017 16:32, Yu Zhang wrote:
On 8/14/2017 10:13 PM, Paolo Bonzini wrote:
On 14/08/2017 13:37, Yu Zhang wrote:
Thanks a lot for your comments, Paolo. :-)
On 8/14/2017 3:31 PM, Paolo Bonzini wrote:
On 12/08/2017 15:35, Yu Zhang wrote
On 8/15/2017 12:40 AM, Paolo Bonzini wrote:
On 14/08/2017 18:13, Jim Mattson wrote:
ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
- if (efer & EFER_LMA)
- rsvd = CR3_L_MODE_RESERVED_BITS & ~CR3_PCID_INVD;
+ if (efer & EFER_LMA) {