Hi there,
I run
$ perl scripts/checkpatch.pl -f arch/x86/kvm/*
$ perl scripts/checkpatch.pl -f virt/kvm/*.c
$ perl scripts/checkpatch.pl -f virt/kvm/*.h
and see a lot of WARNINGs and ERRORs.
Can I fix these issues?
--
Eugene
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Signed-off-by: Eugene Korenevsky
---
Notes:
This patch adds checks on guest RIP specified in the Intel Software Developer's
Manual.
The following checks are performed on processors that support the Intel 64
architecture:
- Bits 63:32 must be 0 if the "IA-32e mode guest" VM-ent
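The rule quoted above can be sketched as a small predicate. This is only an illustration of the SDM check, not the patch's actual code; the name guest_rip_valid is hypothetical:

```c
#include <stdbool.h>
#include <stdint.h>

/* Per the SDM, bits 63:32 of the guest RIP must be zero unless the
 * "IA-32e mode guest" VM-entry control is 1 and the guest's CS.L is 1. */
static bool guest_rip_valid(uint64_t rip, bool ia32e_mode_guest, bool cs_l)
{
	if (ia32e_mode_guest && cs_l)
		return true;		/* 64-bit guest: full 64-bit RIP allowed */
	return (rip >> 32) == 0;	/* otherwise bits 63:32 must be zero */
}
```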
BitVisor hypervisor fails to start a nested guest due to lack of MSR
load/store support in KVM.
This patch fixes this problem according to the Intel SDM.
Signed-off-by: Eugene Korenevsky
---
arch/x86/include/asm/vmx.h| 6 ++
arch/x86/include/uapi/asm/msr-index.h | 3 +
arch/x86
> Thus it would be good to
> have kvm-unit-tests for all what is checked here.
I'll try to implement unit tests a bit later. The fixed patch, based on the
comments and critique from this thread, follows.
> Indeed. Better have function than accepts the field index and that has
> some translation tab
read using MSR index from MSR entry
- MSR value is written to MSR entry
The code performs checks required by the Intel Software Developer's Manual.
This patch is partially based on Wincy Van's work.
Signed-off-by: Eugene Korenevsky
---
arch/x86/include/asm/vmx.h| 6 +
arch/x86/in
> I have added Jan and Wincy to the CC list since they reviewed your earlier
> proposal.
> I think it would be better to split this up as I mentioned earlier, however,
> if the other reviewers and the maintainer don't have objections, I am ok :)
OK, the final patch follows.
--
Eugene
safely read using MSR index from MSR entry
- MSR value is written to MSR entry
The code performs checks required by the Intel Software Developer's Manual.
Signed-off-by: Eugene Korenevsky
---
arch/x86/include/asm/vmx.h| 6 +
arch/x86/include/uapi/asm/msr-index.h | 3 +
arch/x86
A trivial code cleanup. This `if` is redundant.
Signed-off-by: Eugene Korenevsky
---
arch/x86/kvm/emulate.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 106c015..3a494f3 100644
--- a/arch/x86/kvm/emulate.c
+++ b
f the first instruction
of that task. The DR6.BT bit should be set to indicate this condition.
Signed-off-by: Eugene Korenevsky
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/emulate.c | 13 +
arch/x86/kvm/vmx.c | 5 -
arch/x86/kvm/x86.c
On each VM-entry, the CPU should check the following VMCS fields for zero bits
beyond physical address width:
- APIC-access address
- virtual-APIC address
- posted-interrupt descriptor address
This patch adds these checks, as required by the Intel SDM.
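The shape of such a check can be sketched as follows. This is a hedged illustration only (the helper name vmx_addr_valid is made up, and the page-alignment requirement is assumed from the SDM's description of these fields):

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch: a VMCS physical address such as the APIC-access address must be
 * 4 KiB-aligned and must set no bits beyond the CPU's physical-address
 * width (maxphyaddr). */
static bool vmx_addr_valid(uint64_t addr, int maxphyaddr)
{
	if (addr & 0xfff)			/* must be page-aligned */
		return false;
	return (addr >> maxphyaddr) == 0;	/* no bits beyond phys width */
}
```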
Signed-off-by: Eugene Korenevsky
---
arch/x86/kvm
in kvm_arch_vcpu_init() and reloaded every time CPUID is updated by
userspace. These reloads occur infrequently.
Signed-off-by: Eugene Korenevsky
---
arch/x86/include/asm/kvm_host.h | 4 +++-
arch/x86/kvm/cpuid.c| 33 ++---
arch/x86/kvm
After the speed-up of cpuid_maxphyaddr(), it can be called cheaply: instead of
a heavy enumeration of CPUID entries, it returns a cached pre-computed value,
and it is now inlined. So caching its result elsewhere became unnecessary and
can be removed.
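The caching scheme described above can be sketched like this. All names here are illustrative stand-ins, not KVM's actual structures, and the CPUID walk is mocked out:

```c
#include <stdint.h>

/* Hypothetical sketch: compute the physical-address width once when CPUID
 * is updated, then return the cached value from an inline accessor instead
 * of re-walking the CPUID entries on every call. */
struct vcpu_cache {
	int maxphyaddr;		/* cached result, refreshed on CPUID update */
};

static int enumerate_maxphyaddr(void)
{
	/* stands in for the expensive CPUID-entry walk (leaf 0x80000008) */
	return 36;
}

static void update_cpuid(struct vcpu_cache *c)
{
	c->maxphyaddr = enumerate_maxphyaddr();	/* done rarely */
}

static inline int cpuid_maxphyaddr(const struct vcpu_cache *c)
{
	return c->maxphyaddr;			/* cheap, inlined */
}
```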
Signed-off-by: Eugene Korenevsky
---
arch/x86/kvm/vmx.c | 14
is outside the segment
limit.
Signed-off-by: Eugene Korenevsky
---
arch/x86/kvm/vmx.c | 77 ++
1 file changed, 61 insertions(+), 16 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 8c14d6a..08a721e 100644
--- a
Prepare for subsequent changes. Extract the segment-checking calls for protected
and 64-bit mode. This avoids bloating the get_vmx_mem_address() function,
even though kvm_queue_exception_e() is called twice.
Signed-off-by: Eugene Korenevsky
---
arch/x86/kvm/vmx.c | 106
is one, the limit is not exceeded (limit + 1 - 1 == limit), but if the operand
size is two, the limit is exceeded (limit + 2 - 1 > limit).
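The arithmetic above reduces to one comparison. A minimal sketch (the helper name is hypothetical, and overflow of off + size is ignored for illustration):

```c
#include <stdbool.h>
#include <stdint.h>

/* The last byte touched by a `size`-byte access at offset `off` is
 * off + size - 1; for a normal (expand-up) segment it must not pass
 * the effective limit. */
static bool exceeds_limit(uint32_t off, uint32_t size, uint32_t limit)
{
	return off + size - 1 > limit;
}
```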
Signed-off-by: Eugene Korenevsky
---
arch/x86/kvm/vmx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm
Add limit checking for expand-down data segments. For such segments, the
effective limit specifies the last address that is not allowed to be accessed
within the segment; i.e., offset <= limit means the limit is exceeded.
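In code, the expand-down case inverts the usual comparison. A sketch with a hypothetical helper name:

```c
#include <stdbool.h>
#include <stdint.h>

/* For an expand-down segment the valid offsets lie ABOVE the limit
 * (limit + 1 up to the segment's upper bound), so an offset at or
 * below the limit is out of range. */
static bool expand_down_exceeds(uint32_t off, uint32_t limit)
{
	return off <= limit;
}
```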
Signed-off-by: Eugene Korenevsky
---
arch/x86/kvm/vmx.c | 5 -
1 f
The VMWRITE instruction is not valid in compatibility mode. This is
checked by the nested_vmx_check_permission() function, which raises #UD if
CS.L=0. The additional check for CS.L in is_64_bit_mode() is therefore useless.
We should check only EFER.LMA=1, which is done by is_long_mode().
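The distinction can be sketched as follows. This is an illustrative model, not KVM's actual implementation; the struct and field names are made up:

```c
#include <stdbool.h>

/* Long mode is EFER.LMA=1; full 64-bit mode additionally requires CS.L=1.
 * Once an earlier permission check already guarantees #UD when CS.L=0,
 * re-testing CS.L is redundant and EFER.LMA alone suffices. */
struct vcpu_state {
	bool efer_lma;	/* EFER.LMA */
	bool cs_l;	/* CS.L */
};

static bool is_long_mode(const struct vcpu_state *v)
{
	return v->efer_lma;
}

static bool is_64_bit_mode(const struct vcpu_state *v)
{
	return v->efer_lma && v->cs_l;	/* the CS.L test is the redundant part */
}
```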
Signed-off-by: Eugene
. For the INVEPT instruction, the memory operand size
is 128 bits.
Signed-off-by: Eugene Korenevsky
---
arch/x86/kvm/vmx.c | 21 +
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 4a4d677..f39e24f 100644
--- a/arch/x86/kvm
> GCC doesn't warn that "((u32)e->index >> 24) == 0x800" is always false?
> I think SDM says '(e->index >> 8) == 0x8'.
Missed that. Thank you.
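The corrected test from the quoted review can be sketched as a predicate. The x2APIC MSR range is 0x800..0x8ff, so the SDM's comparison looks at everything above the low byte; shifting a u32 right by 24, as in the buggy line quoted above, can never produce 0x800:

```c
#include <stdbool.h>
#include <stdint.h>

/* An MSR index falls in the x2APIC range 0x800..0x8ff exactly when its
 * bits above the low byte equal 0x8, i.e. (index >> 8) == 0x8. */
static bool is_x2apic_msr(uint32_t index)
{
	return (index >> 8) == 0x8;
}
```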
--
Eugene
> Hi, Eugene, is it okay to split my part up?
I think the patch is atomic. I see no way this patch could be split
without breaking its integrity.
You are a co-author of the patch, since your ideas make up a significant part
of it.
--
Eugene
Will send the fixed patch this evening.
--
Eugene
should not set any bits
beyond the processor's physical-address width.
It also adds warning messages for failures during the MSR switch. These
messages are useful for people debugging their VMMs under nVMX.
Signed-off-by: Eugene Korenevsky
---
arch/x86/include/uapi/asm/msr-index.h | 3 +
arch/x8
On failure, do a nested VMX abort.
Signed-off-by: Wincy Van
Signed-off-by: Eugene Korenevsky
---
arch/x86/kvm/vmx.c | 30 +++---
1 file changed, 23 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 9061d93..0d4efaa 100644
--- a/arch/x86
Remove an unused variable to get rid of a compiler warning,
and remove commented-out code (it can always be restored
from the git log).
Signed-off-by: Eugene Korenevsky
---
x86/vmx_tests.c | 10 +++---
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
Several hypervisors need the MSR auto load/restore feature.
We read MSRs from the VM-entry MSR-load area specified by L1
and load them via kvm_set_msr on nested entry.
When a nested exit occurs, we get MSRs via kvm_get_msr, writing
them to L1's MSR-store area. After this, we read MSRs from the VM-exit
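The VM-entry half of the scheme described above amounts to a loop over L1's MSR-load area. A hedged sketch (the struct layout follows the VMCS MSR-entry format, but set_msr() here merely stands in for kvm_set_msr and all names are illustrative):

```c
#include <stdint.h>

/* One entry of a VM-entry/VM-exit MSR area: index, padding, 64-bit value. */
struct msr_entry {
	uint32_t index;
	uint32_t reserved;
	uint64_t value;
};

/* Stand-in for kvm_set_msr(): load one MSR into the vCPU; 0 on success. */
static int set_msr(uint32_t index, uint64_t value)
{
	(void)index;
	(void)value;
	return 0;
}

/* Walk the guest-supplied MSR-load area and load each entry; on failure,
 * return the 1-based index of the failing entry (so the caller can abort
 * the nested entry), or 0 on success. */
static unsigned load_msr_area(const struct msr_entry *area, unsigned count)
{
	for (unsigned i = 0; i < count; i++)
		if (set_msr(area[i].index, area[i].value) != 0)
			return i + 1;
	return 0;
}
```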
On failure, do a nested VMX abort.
Signed-off-by: Wincy Van
Signed-off-by: Eugene Korenevsky
---
arch/x86/kvm/vmx.c | 30 +++---
1 file changed, 23 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index ac1fa1c2..ddb28e2 100644
--- a/arch
)
Signed-off-by: Eugene Korenevsky
---
x86/vmx_tests.c | 76 +
1 file changed, 76 insertions(+)
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 184fafc..913904a 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -1525,6 +1525,80 @@ static
Hi there,
Please DO NOT take the v3 version of the patchset into account. It contains
a bug (a missing check for the MSR load/store area size in
`nested_vmx_check_msr_switch`). This bug has been fixed in the v4 version
of the patchset.
Now the MSR load/store feature is partially covered with tests (see the patch
to kvm-unit-tests
> The diff is just
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index d6fe958a0403..09ccf6c09435 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -8305,6 +8305,8 @@ static int nested_vmx_check_msr_switch(struct kvm_vcpu
> *vcpu,
> WARN_ON(1);
>
When generating a #PF VM-exit, check the equality:
(PFEC & PFEC_MASK) == PFEC_MATCH
If they are equal, bit 14 of the exception bitmap is used to decide whether to
generate the #PF VM-exit; if they are not equal, the inverted bit 14 is used.
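The rule above can be written as one small predicate. This is a sketch of the SDM logic only, with hypothetical parameter names:

```c
#include <stdbool.h>
#include <stdint.h>

/* Decide whether a #PF causes a VM-exit: compare the masked page-fault
 * error code against PFEC_MATCH; on equality use bit 14 of the exception
 * bitmap, on inequality use its inverse. */
static bool pf_causes_vmexit(uint32_t pfec, uint32_t pfec_mask,
			     uint32_t pfec_match, uint32_t exception_bitmap)
{
	bool bit14 = (exception_bitmap >> 14) & 1;
	bool match = (pfec & pfec_mask) == pfec_match;

	return match ? bit14 : !bit14;
}
```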
Signed-off-by: Eugene Korenevsky
---
arch/x86/kvm/vmx.c