On Fri, May 02, 2014 at 10:35:09AM +0200, Alexander Graf wrote:
> On 05/01/2014 12:12 AM, Paul Mackerras wrote:
> >On Tue, Apr 29, 2014 at 06:17:37PM +0200, Alexander Graf wrote:
> >>When we expose a POWER8 CPU into the guest, it will start accessing PMU SPRs
> >>that we don't emulate. Just ignore
Paul Mackerras writes:
> On Sun, May 04, 2014 at 10:56:08PM +0530, Aneesh Kumar K.V wrote:
>> With debug option "sleep inside atomic section checking" enabled we get
>> the below WARN_ON during a PR KVM boot. This is because upstream now
>> has PREEMPT_COUNT enabled even if we have preempt disab
Il 07/05/2014 04:30, Bandan Das ha scritto:
> + if (unlikely(ctxt->_eip == fc->end)) {
Is this really going to be unlikely?
Yes, it happens at most once per instruction and only for instructions
that cross pages.
Paolo
Il 07/05/2014 06:21, Bandan Das ha scritto:
> + if (rc != X86EMUL_CONTINUE)
> + goto done;
> + }
> +
>while (size--) {
> - if (unlikely(ctxt->_eip == fc->end)) {
> - rc = do_insn_fetch_bytes(ctxt);
> - if (rc != X86EMUL_CO
Il 07/05/2014 06:36, Bandan Das ha scritto:
> + _x = *(_type __aligned(1) *) &_fc->data[ctxt->_eip - _fc->start]; \
For my own understanding, how does the __aligned help here ?
Except for 16-byte SSE accesses, x86 doesn't distinguish aligned and
unaligned accesses. You can read 4 bytes at 0
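For illustration, a minimal user-space sketch of the same point (the
typedef name is invented; GCC permits aligned(1) on a typedef to lower
the alignment the compiler assumes):

#include <stdint.h>
#include <stdio.h>

/* aligned(1) on the typedef tells the compiler not to assume the
 * natural 4-byte alignment of uint32_t for accesses through it. */
typedef uint32_t u32_noalign __attribute__((aligned(1)));

int main(void)
{
    uint8_t buf[8] = { 0x00, 0x12, 0x34, 0x56, 0x78, 0, 0, 0 };

    /* 4-byte read at offset 1: unaligned, yet a plain mov on x86 */
    uint32_t v = *(const u32_noalign *)&buf[1];

    printf("0x%08x\n", v);  /* 0x78563412 on little-endian */
    return 0;
}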
Il 04/05/2014 18:33, Hu Yaohui ha scritto:
I experienced a similar problem caused by bugs in the nested code
related to apicv and other new vmx features.
For example, the code enabled posted interrupts to run L2 even when the
feature was not exposed to L1 and L1 didn't use it.
Tr
Kim, Christoffer,
On Tue, May 06 2014 at 7:04:48 pm BST, Christoffer Dall
wrote:
> On Tue, Mar 25, 2014 at 05:08:14PM -0500, Kim Phillips wrote:
>> Use the correct memory type for device MMIO mappings: PAGE_S2_DEVICE.
>>
>> Signed-off-by: Kim Phillips
>> ---
>> arch/arm/kvm/mmu.c | 11 ++
On 6 May 2014 19:38, Peter Maydell wrote:
> On 6 May 2014 18:25, Marc Zyngier wrote:
>> On Tue, May 06 2014 at 3:28:07 pm BST, Will Deacon
>> wrote:
>>> On Thu, Apr 24, 2014 at 07:17:23PM +0100, Marc Zyngier wrote:
+reg.addr = (u64)&data;
+if (ioctl(vcpu->vcpu_fd, KVM_GET_ONE
> Am 07.05.2014 um 11:34 schrieb Peter Maydell :
>
>> On 6 May 2014 19:38, Peter Maydell wrote:
>>> On 6 May 2014 18:25, Marc Zyngier wrote:
On Tue, May 06 2014 at 3:28:07 pm BST, Will Deacon
wrote:
> On Thu, Apr 24, 2014 at 07:17:23PM +0100, Marc Zyngier wrote:
> +reg
On Wed, May 07 2014 at 10:34:30 am BST, Peter Maydell
wrote:
> On 6 May 2014 19:38, Peter Maydell wrote:
>> On 6 May 2014 18:25, Marc Zyngier wrote:
>>> On Tue, May 06 2014 at 3:28:07 pm BST, Will Deacon
>>> wrote:
On Thu, Apr 24, 2014 at 07:17:23PM +0100, Marc Zyngier wrote:
> +
> Am 07.05.2014 um 11:52 schrieb Marc Zyngier :
>
>> On Wed, May 07 2014 at 10:34:30 am BST, Peter Maydell
>> wrote:
>>> On 6 May 2014 19:38, Peter Maydell wrote:
On 6 May 2014 18:25, Marc Zyngier wrote:
> On Tue, May 06 2014 at 3:28:07 pm BST, Will Deacon
> wrote:
>> On
On Wed, May 07 2014 at 10:42:54 am BST, Alexander Graf wrote:
>> Am 07.05.2014 um 11:34 schrieb Peter Maydell :
>>
>>> On 6 May 2014 19:38, Peter Maydell wrote:
On 6 May 2014 18:25, Marc Zyngier wrote:
> On Tue, May 06 2014 at 3:28:07 pm BST, Will Deacon
> wrote:
>> On Thu,
On 05/07/2014 11:57 AM, Marc Zyngier wrote:
On Wed, May 07 2014 at 10:42:54 am BST, Alexander Graf wrote:
Am 07.05.2014 um 11:34 schrieb Peter Maydell :
On 6 May 2014 19:38, Peter Maydell wrote:
On 6 May 2014 18:25, Marc Zyngier wrote:
On Tue, May 06 2014 at 3:28:07 pm BST, Will Deacon
On 7 May 2014 10:52, Marc Zyngier wrote:
> On Wed, May 07 2014 at 10:34:30 am BST, Peter Maydell
> wrote:
>> Current opinion on the qemu-devel thread seems to be that we
>> should just define that the endianness of the virtio device is
>> the endianness of the guest kernel at the point where the
On Wed, May 07 2014 at 10:55:45 am BST, Alexander Graf wrote:
>> Am 07.05.2014 um 11:52 schrieb Marc Zyngier :
>>
>>> On Wed, May 07 2014 at 10:34:30 am BST, Peter Maydell
>>> wrote:
On 6 May 2014 19:38, Peter Maydell wrote:
> On 6 May 2014 18:25, Marc Zyngier wrote:
>> On Tue, Ma
On Wed, May 07, 2014 at 12:11:13PM +0200, Alexander Graf wrote:
> On 05/07/2014 11:57 AM, Marc Zyngier wrote:
> >On Wed, May 07 2014 at 10:42:54 am BST, Alexander Graf wrote:
> >>>Am 07.05.2014 um 11:34 schrieb Peter Maydell :
> >>>
> On 6 May 2014 19:38, Peter Maydell wrote:
> >On 6 May
Hi all,
On 06/05/14 08:16, Alexander Graf wrote:
>
> On 06.05.14 01:23, Marcelo Tosatti wrote:
>
>> 1) By what algorithm you retrieve
>> and compare time in kvmclock guest structure and KVM_GET_CLOCK.
>> What are the results of the comparison.
>> And whether any backwards time was visible in the
On Wed, May 07 2014 at 11:11:13 am BST, Alexander Graf wrote:
> On 05/07/2014 11:57 AM, Marc Zyngier wrote:
>> Huh? What if my guest has userspace using an idmap, with Stage-1 MMU for
>> isolation only (much like an MPU)? R-class guests anyone?
>>
>> Agreed, this is not the general use case, but t
On Wed, 07 May 2014 10:52:01 +0100
Marc Zyngier wrote:
> On Wed, May 07 2014 at 10:34:30 am BST, Peter Maydell
> wrote:
> > On 6 May 2014 19:38, Peter Maydell wrote:
> >> On 6 May 2014 18:25, Marc Zyngier wrote:
> >>> On Tue, May 06 2014 at 3:28:07 pm BST, Will Deacon
> >>> wrote:
> O
On Wed, May 07 2014 at 11:10:56 am BST, Peter Maydell
wrote:
> On 7 May 2014 10:52, Marc Zyngier wrote:
>> On Wed, May 07 2014 at 10:34:30 am BST, Peter Maydell
>> wrote:
>>> Current opinion on the qemu-devel thread seems to be that we
>>> should just define that the endianness of the virtio de
On Wed, May 07 2014 at 11:40:54 am BST, Greg Kurz
wrote:
> On Wed, 07 May 2014 10:52:01 +0100
> Marc Zyngier wrote:
>
>> On Wed, May 07 2014 at 10:34:30 am BST, Peter Maydell
>> wrote:
>> > On 6 May 2014 19:38, Peter Maydell wrote:
>> >> On 6 May 2014 18:25, Marc Zyngier wrote:
>> >>> On Tue,
On Wed, May 7, 2014 at 11:58 AM, Paolo Bonzini wrote:
> Il 04/05/2014 18:33, Hu Yaohui ha scritto:
>
>>> I experienced a similar problem caused by bugs in the nested code
>>> related to apicv and other new vmx features.
>>>
>>> For example, the code enabled posted interrupts to run
On 05/07/2014 07:56 AM, Paul Mackerras wrote:
On Sun, May 04, 2014 at 10:56:08PM +0530, Aneesh Kumar K.V wrote:
With debug option "sleep inside atomic section checking" enabled we get
the below WARN_ON during a PR KVM boot. This is because upstream now
has PREEMPT_COUNT enabled even if we have
Il 07/05/2014 13:16, Abel Gordon ha scritto:
> PLE should be left enabled, I think.
Well... the PLE settings L0 uses to run L1 (vmcs01) may be different
than the PLE settings L1 configured to run L2 (vmcs12).
For example, L0 can use a ple_gap to run L1 that is bigger than the
ple_gap L1 confi
Il 07/05/2014 13:37, Paolo Bonzini ha scritto:
Il 07/05/2014 13:16, Abel Gordon ha scritto:
> PLE should be left enabled, I think.
Well... the PLE settings L0 uses to run L1 (vmcs01) may be different
than the PLE settings L1 configured to run L2 (vmcs12).
For example, L0 can use a ple_gap to
On 05/07/2014 12:46 PM, Marc Zyngier wrote:
On Wed, May 07 2014 at 11:10:56 am BST, Peter Maydell
wrote:
On 7 May 2014 10:52, Marc Zyngier wrote:
On Wed, May 07 2014 at 10:34:30 am BST, Peter Maydell
wrote:
Current opinion on the qemu-devel thread seems to be that we
should just define tha
On 07/05/14 12:49, Alexander Graf wrote:
> On 05/07/2014 12:46 PM, Marc Zyngier wrote:
>> On Wed, May 07 2014 at 11:10:56 am BST, Peter Maydell
>> wrote:
>>> On 7 May 2014 10:52, Marc Zyngier wrote:
On Wed, May 07 2014 at 10:34:30 am BST, Peter Maydell
wrote:
> Current opinion on
On 7 May 2014 12:04, Marc Zyngier wrote:
> On Wed, May 07 2014 at 11:40:54 am BST, Greg Kurz
> wrote:
>> All the fuss is not really about enforcing kernel access... PPC also
>> has a current endianness selector (MSR_LE) but it only makes sense
>> if you are in the cpu context. Initial versions o
On 7 May 2014 13:16, Marc Zyngier wrote:
> That being said, I'm going to stop replying to this thread, and instead
> go back writing code, posting it, and getting on with my life in
> virtio-legacy land.
Some of us are trying to have a conversation in this thread
about virtio-legacy behaviour :-)
On 07/05/14 13:17, Peter Maydell wrote:
> On 7 May 2014 12:04, Marc Zyngier wrote:
>> On Wed, May 07 2014 at 11:40:54 am BST, Greg Kurz
>> wrote:
>>> All the fuss is not really about enforcing kernel access... PPC also
>>> has a current endianness selector (MSR_LE) but it only makes sense
>>> if
On Wed, 7 May 2014 13:17:51 +0100
Peter Maydell wrote:
> On 7 May 2014 12:04, Marc Zyngier wrote:
> > On Wed, May 07 2014 at 11:40:54 am BST, Greg Kurz
> > wrote:
> >> All the fuss is not really about enforcing kernel access... PPC also
> >> has a current endianness selector (MSR_LE) but it on
Relative jumps and calls do the masking according to the operand size, and not
according to the address size as the KVM emulator does today. In 64-bit mode,
the resulting RIP is always 64-bit. Otherwise it is masked according to the
instruction operand-size. Note that when 16-bit address size is u
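As a rough sketch of the rule being described (helper name invented,
not the actual emulator code), the branch target is masked by the
operand size, and a 64-bit RIP is left untouched:

#include <stdint.h>

/* op_bytes is the instruction's operand size (2, 4 or 8), not the
 * address size; in 64-bit mode the resulting RIP stays 64-bit. */
static uint64_t masked_rip(uint64_t dst, int op_bytes)
{
    switch (op_bytes) {
    case 2:
        return (uint16_t)dst;   /* 16-bit operand size */
    case 4:
        return (uint32_t)dst;   /* 32-bit operand size */
    default:
        return dst;             /* 64-bit mode: no truncation */
    }
}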
In long-mode, when the address size is 4 bytes, the linear address is not
truncated as the emulator mistakenly does. Instead, the offset within the
segment (the ea field) should be truncated according to the address size.
As Intel SDM says: "In 64-bit mode, the effective address components are ad
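A small sketch of the intended behaviour (helper invented for
illustration): the truncation applies to the segment offset, not to
the resulting linear address:

#include <stdint.h>

/* With a 4-byte address size in long mode, truncate the effective
 * address (ea) to 32 bits; the linear address itself stays 64-bit. */
static uint64_t linear_addr(uint64_t seg_base, uint64_t ea, int ad_bytes)
{
    if (ad_bytes == 4)
        ea = (uint32_t)ea;   /* truncate the offset, not the sum */
    return seg_base + ea;
}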
This series of patches fixes various scenarios in which KVM does not follow x86
specifications. Patches #4 and #5 are related; they reflect a new revision of
the previously submitted patch that dealt with the wrong masking of registers
in long-mode. Patch #3 is a follow-up to the previously submit
32-bit operations are zero extended in 64-bit mode. Currently, the code does
not handle them correctly and keeps the high bits. In 16-bit mode, the high
32-bits are kept intact.
In addition, although it is not well-documented, when address override prefix
is used with REP-string instruction, RCX h
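A compact sketch of the write-back rules being described (helper
invented for illustration):

#include <stdint.h>

/* 32-bit writes zero-extend into the full 64-bit register;
 * 16-bit writes leave the upper 48 bits of the register intact. */
static void write_gpr(uint64_t *reg, uint64_t val, int bytes)
{
    switch (bytes) {
    case 2:
        *reg = (*reg & ~0xffffULL) | (uint16_t)val;
        break;
    case 4:
        *reg = (uint32_t)val;   /* clobbers the high 32 bits with zeros */
        break;
    case 8:
        *reg = val;
        break;
    }
}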
The RSP register is not automatically cached, causing the mov DR instruction
with RSP to fail. Instead, the regular register-accessing interface should be used.
Signed-off-by: Nadav Amit
---
arch/x86/kvm/vmx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/vmx.c b/a
In long-mode, bit 7 in the PDPTE is reserved unless 1GB pages are
supported by the CPU. Currently KVM considers the bit as always
reserved.
Signed-off-by: Nadav Amit
---
arch/x86/kvm/cpuid.h | 7 +++
arch/x86/kvm/mmu.c | 8 ++--
2 files changed, 13 insertions(+), 2 deletion
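For illustration, a sketch of the check implied above (names invented;
a guest's 1GB-page support is reported in CPUID leaf 0x80000001, EDX
bit 26):

#include <stdint.h>
#include <stdbool.h>

#define PDPTE_PS_BIT (1ULL << 7)

/* Bit 7 (PS) of a long-mode PDPTE is reserved only when the guest
 * CPU does not support 1GB pages. */
static bool pdpte_rsvd_bits_set(uint64_t pdpte, bool guest_has_gbpages)
{
    uint64_t rsvd = 0;

    if (!guest_has_gbpages)
        rsvd |= PDPTE_PS_BIT;
    return pdpte & rsvd;
}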
On Wed, May 07, 2014 at 08:29:19AM +0200, Jan Kiszka wrote:
> On 2014-05-06 20:35, gso...@gmail.com wrote:
> > Signed-off-by: Gabriel Somlo
> > ---
> >
> > Jan,
> >
> > After today's pull from kvm, I also need this to build against my
> > Fedora 20 kernel (3.13.10-200.fc20.x86_64).
>
> Which ve
Il 07/05/2014 14:32, Nadav Amit ha scritto:
In long-mode, when the address size is 4 bytes, the linear address is not
truncated as the emulator mistakenly does. Instead, the offset within the
segment (the ea field) should be truncated according to the address size.
As Intel SDM says: "In 64-bit
It seems that it's easy to implement the EOI assist
on top of the PV EOI feature: simply convert the
page address to the format expected by PV EOI.
Notes:
-"No EOI required" is set only if interrupt injected
is edge triggered; this is true because level interrupts are going
through IOAPIC which
Nadav Amit writes:
> Relative jumps and calls do the masking according to the operand size, and not
> according to the address size as the KVM emulator does today. In 64-bit mode,
> the resulting RIP is always 64-bit. Otherwise it is masked according to the
> instruction operand-size. Note that
Nadav Amit writes:
> 32-bit operations are zero extended in 64-bit mode. Currently, the code does
> not handle them correctly and keeps the high bits. In 16-bit mode, the high
> 32-bits are kept intact.
>
> In addition, although it is not well-documented, when address override prefix
It would be
On Wed, May 07, 2014 at 10:00:21AM +0100, Marc Zyngier wrote:
> Kim, Christoffer,
>
> On Tue, May 06 2014 at 7:04:48 pm BST, Christoffer Dall
> wrote:
> > On Tue, Mar 25, 2014 at 05:08:14PM -0500, Kim Phillips wrote:
> >> Use the correct memory type for device MMIO mappings: PAGE_S2_DEVICE.
> >
On 04/27/2014 02:09 PM, Raghavendra K T wrote:
For kvm part feel free to add:
Tested-by: Raghavendra K T
V9 testing has shown no hangs.
I was able to do some performance testing. Here are the results:
Overall we are seeing good improvement for the pv-unfair version.
System: 32 cpu sandybridge w
This patch makes the necessary changes at the x86 architecture
specific layer to enable the use of queue spinlock for x86-64. As
x86-32 machines are typically not multi-socket, the benefit of queue
spinlock may not be apparent, so it is not enabled there.
Currently, there is some incompatibi
This patch adds the necessary XEN specific code to allow XEN to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
Signed-off-by: Waiman Long
---
arch/x86/xen/spinlock.c | 147 +--
kernel/Kconfig.locks | 2 +-
2
This patch adds the necessary KVM specific code to allow KVM to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
Two KVM guests of 20 CPU cores (2 nodes) were created for performance
testing in one of the following three configurations:
1) Only 1 VM is active
This patch renames the paravirt_ticketlocks_enabled static key to a
more generic paravirt_spinlocks_enabled name.
Signed-off-by: Waiman Long
---
arch/x86/include/asm/spinlock.h      | 4 ++--
arch/x86/kernel/kvm.c                | 2 +-
arch/x86/kernel/paravirt-spinlocks.c | 4 ++--
arc
This patch modifies the para-virtualization (PV) infrastructure code
of the x86-64 architecture to support the PV queue spinlock. Three
new virtual methods are added to support PV qspinlock:
1) kick_cpu - schedule in a virtual CPU
2) halt_cpu - schedule out a virtual CPU
3) lockstat - update st
This patch enables the coexistence of both the PV qspinlock and
unfair lock. When both are enabled, however, only the lock fastpath
will perform lock stealing whereas the slowpath will have that disabled
to get the best of both features.
We also need to transition a CPU spinning too long in the p
Locking is always an issue in a virtualized environment because of 2
different types of problems:
1) Lock holder preemption
2) Lock waiter preemption
One solution to the lock waiter preemption problem is to allow unfair
lock in a virtualized environment. In this case, a new lock acquirer
can com
In order to fully resolve the lock waiter preemption problem in virtual
guests, it is necessary to enable lock stealing in the lock waiters.
A simple test-and-set lock, however, has 2 main problems:
1) The constant spinning on the lock word puts a lot of cacheline
contention traffic on the aff
The simple unfair queue lock cannot completely solve the lock waiter
preemption problem as a preempted CPU at the front of the queue will
block forward progress in all the other CPUs behind it in the queue.
To allow those CPUs to move forward, it is necessary to enable lock
stealing for those lock
This patch adds base para-virtualization support to the queue
spinlock in the same way as was done in the PV ticket lock code. In
essence, the lock waiters will spin for a specified number of times
(QSPIN_THRESHOLD = 2^14) and then halt themselves. The queue head waiter,
unlike the other waiters, will
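A rough sketch of the spin-then-halt idea (not the patch itself;
halt_this_cpu stands in for the PV halt hook):

#define QSPIN_THRESHOLD (1 << 14)
#define cpu_relax() __asm__ __volatile__("pause")

struct qnode {
    volatile int locked;             /* set by our predecessor on handoff */
};

extern void halt_this_cpu(struct qnode *node);  /* hypothetical PV hook */

static void pv_wait_node(struct qnode *node)
{
    for (;;) {
        int loop;

        for (loop = 0; loop < QSPIN_THRESHOLD; loop++) {
            if (node->locked)
                return;
            cpu_relax();             /* spin QSPIN_THRESHOLD times... */
        }
        halt_this_cpu(node);         /* ...then let the hypervisor halt us */
    }
}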
With the pending addition of more code to support unfair lock and
PV spinlock, the complexity of the slowpath function increases to
the point that the number of scratch-pad registers in the x86-64
architecture is not enough and so those additional non-scratch-pad
registers will need to be used. Th
From: Peter Zijlstra
When we allow for a max NR_CPUS < 2^14 we can optimize the pending
wait-acquire and the xchg_tail() operations.
By growing the pending bit to a byte, we reduce the tail to 16bit.
This means we can use xchg16 for the tail part and do away with all
the repeated cmpxchg() oper
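A sketch of the resulting word layout and 16-bit exchange
(little-endian layout assumed; the builtin compiles to xchgw on x86):

#include <stdint.h>

union qspinlock_word {
    uint32_t val;
    struct {
        uint8_t  locked;   /* locked byte */
        uint8_t  pending;  /* pending bit grown to a byte */
        uint16_t tail;     /* CPU index + queue-node index */
    };
};

static uint16_t xchg_tail16(union qspinlock_word *lock, uint16_t new_tail)
{
    return __atomic_exchange_n(&lock->tail, new_tail, __ATOMIC_ACQ_REL);
}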
If unfair lock is supported, the lock acquisition loop at the end of
the queue_spin_lock_slowpath() function may need to detect the fact
the lock can be stolen. Code is added for the stolen-lock detection.
A new qhead macro is also defined as a shorthand for mcs.locked.
Signed-off-by: Waiman Lon
This patch extracts the logic for the exchange of new and previous tail
code words into a new xchg_tail() function which can be optimized in a
later patch.
Signed-off-by: Waiman Long
---
include/asm-generic/qspinlock_types.h |2 +
kernel/locking/qspinlock.c | 61
This patch introduces a new generic queue spinlock implementation that
can serve as an alternative to the default ticket spinlock. Compared
with the ticket spinlock, this queue spinlock should be almost as fair
as the ticket spinlock. It has about the same speed in single-thread
and it can be much
From: Peter Zijlstra
Because the qspinlock needs to touch a second cacheline, add a pending
bit and allow a single in-word spinner before we punt to the second
cacheline.
Signed-off-by: Peter Zijlstra
Signed-off-by: Waiman Long
---
include/asm-generic/qspinlock_types.h | 12 +++-
kernel/loc
In order to support additional virtualization features like unfair lock
and para-virtualized spinlock, it is necessary to store additional
CPU specific data into the queue node structure. As a result, a new
qnode structure is created and the mcs_spinlock structure is now part
of the new structure.
There is a problem in the current trylock_pending() function. When the
lock is free, but the pending bit holder hasn't grabbed the lock &
cleared the pending bit yet, the trylock_pending() function will fail.
As a result, the regular queuing code path will be used most of
the time even when there
Currently, atomic_cmpxchg() is used to get the lock. However, this is
not really necessary if there is more than one task in the queue and
the queue head doesn't need to reset the queue code word. For that case,
a simple write to set the lock bit is enough as the queue head will
be the only one eligi
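In other words (sketch only, with an invented layout; memory ordering
is handled elsewhere in the real slowpath):

#include <stdint.h>

union qsl {
    uint32_t val;
    uint8_t  locked;   /* low byte is the lock byte (little-endian) */
};

/* Queue head with waiters behind it: the tail word need not be
 * reset, so a plain byte store takes the lock. */
static void set_locked(union qsl *lock)
{
    *(volatile uint8_t *)&lock->locked = 1;
}

/* Last waiter in the queue: must take the lock *and* clear the tail
 * back to 0 atomically, so cmpxchg is still required. */
static int try_clear_tail(union qsl *lock, uint32_t my_tail_val)
{
    uint32_t old = my_tail_val;
    return __atomic_compare_exchange_n(&lock->val, &old, 1 /* locked */,
                                       0, __ATOMIC_ACQUIRE,
                                       __ATOMIC_RELAXED);
}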
v9->v10:
- Make some minor changes to qspinlock.c to accommodate review feedback.
- Change author to PeterZ for 2 of the patches.
- Include Raghavendra KT's test results in patch 18.
v8->v9:
- Integrate PeterZ's version of the queue spinlock patch with some
modification:
http://lkm
In order to be able to use the DBG_MDSCR_* macros from the KVM code,
move the relevant definitions to the obvious include file.
Also move the debug_el enum to a portion of the file that is guarded
by #ifndef __ASSEMBLY__ in order to use that file from assembly code.
Signed-off-by: Marc Zyngier
-
pm_fake doesn't quite describe what the handler does (ignoring writes
and returning 0 for reads).
As we're about to use it (a lot) in a different context, rename it
with an (admittedly cryptic) name that makes sense for all users.
Signed-off-by: Marc Zyngier
---
arch/arm64/kvm/sys_regs.c | 83
Enable trapping of the debug registers, preventing the guests from
messing with the host state (and allowing guests to use the debug
infrastructure as well).
Signed-off-by: Marc Zyngier
---
arch/arm64/kvm/hyp.S | 8
1 file changed, 8 insertions(+)
diff --git a/arch/arm64/kvm/hyp.S b/arch/ar
As we're about to trap a bunch of CP14 registers, let's rework
the CP15 handling so it can be generalized and work with multiple
tables.
Signed-off-by: Marc Zyngier
---
arch/arm64/include/asm/kvm_asm.h| 2 +-
arch/arm64/include/asm/kvm_coproc.h | 3 +-
arch/arm64/include/asm/kvm_host.h
Implement switching of the debug registers. While the number
of registers is massive, CPUs usually don't implement them all
(A57 has 6 breakpoints and 4 watchpoints, which gives us a total
of 22 registers "only").
Also, we only save/restore them when MDSCR_EL1 has debug enabled,
or when we've flag
We now have multiple tables for the various system registers
we trap. Make sure we check the order of all of them, as it is
critical that we get the order right (been there, done that...).
Signed-off-by: Marc Zyngier
---
arch/arm64/kvm/sys_regs.c | 22 --
1 file changed, 20 i
An interesting "feature" of the CP14 encoding is that there is
an overlap between 32 and 64bit registers, meaning they cannot
live in the same table as we did for CP15.
Create separate tables for 64bit CP14 and CP15 registers, and
let the top level handler use the right one.
Signed-off-by: Marc Z
This patch series adds debug support, a key feature missing from the
KVM/arm64 port.
The main idea is to keep track of whether the debug registers are
"dirty" (changed by the guest) or not. In this case, perform the usual
save/restore dance, for one run only. It means we only have a penalty
if a g
On 5/7/14, 4:57 PM, Paolo Bonzini wrote:
Il 07/05/2014 14:32, Nadav Amit ha scritto:
In long-mode, when the address size is 4 bytes, the linear address is not
truncated as the emulator mistakenly does. Instead, the offset within
the
segment (the ea field) should be truncated according to the ad
Add handlers for all the AArch64 debug registers that are accessible
from EL0 or EL1. The trapping code keeps track of the state of the
debug registers, allowing for the switch code to implement a lazy
switching strategy.
Signed-off-by: Marc Zyngier
---
arch/arm64/include/asm/kvm_asm.h | 28 ++
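A minimal sketch of the lazy strategy (structure invented from the
description above, not the actual patch):

#include <stdbool.h>

#define DEBUG_DIRTY (1UL << 0)

struct vcpu_debug_state {
    unsigned long flags;
    /* shadow copies of DBG{BVR,BCR,WVR,WCR}n_EL1, MDSCR_EL1, ... */
};

/* Trap handler path: a guest write to any debug register marks the
 * state dirty. */
static void trap_debug_write(struct vcpu_debug_state *d)
{
    d->flags |= DEBUG_DIRTY;
}

/* World-switch path: only pay for the full save/restore while the
 * dirty flag is set. */
static bool debug_needs_switch(const struct vcpu_debug_state *d)
{
    return d->flags & DEBUG_DIRTY;
}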
Il 07/05/2014 16:50, Bandan Das ha scritto:
> +static void assign_masked(ulong *dest, ulong src, int bytes)
> {
> - *dest = (*dest & ~mask) | (src & mask);
> + switch (bytes) {
> + case 2:
> + *dest = (u16)src | (*dest & ~0xfffful);
> + break;
> + case 4:
> + *dest
Add handlers for all the AArch32 debug registers that are accessible
from EL0 or EL1. The code follows the same strategy as the AArch64
counterpart with regards to tracking the dirty state of the debug
registers.
Signed-off-by: Marc Zyngier
---
arch/arm64/include/asm/kvm_asm.h | 9 +++
arch/arm
On Wed, May 7, 2014 at 2:40 PM, Paolo Bonzini wrote:
> Il 07/05/2014 13:37, Paolo Bonzini ha scritto:
>
>> Il 07/05/2014 13:16, Abel Gordon ha scritto:
> PLE should be left enabled, I think.
>>>
>>> Well... the PLE settings L0 uses to run L1 (vmcs01) may be different
>>> than the PLE se
On 7 May 2014 16:20, Marc Zyngier wrote:
> pm_fake doesn't quite describe what the handler does (ignoring writes
> and returning 0 for reads).
>
> As we're about to use it (a lot) in a different context, rename it
> with an (admittedly cryptic) name that makes sense for all users.
> -/*
> - * We cou
Il 07/05/2014 17:30, Abel Gordon ha scritto:
> > ... which we already do. The only secondary execution controls we allow are
> > APIC page, unrestricted guest, WBINVD exits, and of course EPT.
>
> But we don't verify if L1 tries to enable the feature for L1 (even if
> it's not exposed)... Or do
On 7 May 2014 16:20, Marc Zyngier wrote:
> This patch series adds debug support, a key feature missing from the
> KVM/arm64 port.
>
> The main idea is to keep track of whether the debug registers are
> "dirty" (changed by the guest) or not. In this case, perform the usual
> save/restore dance, for
On 07/05/14 16:34, Peter Maydell wrote:
> On 7 May 2014 16:20, Marc Zyngier wrote:
>> pm_fake doesn't quite describe what the handler does (ignoring writes
>> and returning 0 for reads).
>>
>> As we're about to use it (a lot) in a different context, rename it
>> with an (admittedly cryptic) name tha
Il 07/05/2014 14:32, Nadav Amit ha scritto:
32-bit operations are zero extended in 64-bit mode. Currently, the code does
not handle them correctly and keeps the high bits. In 16-bit mode, the high
32-bits are kept intact.
In addition, although it is not well-documented, when address override pre
On 07/05/14 16:42, Peter Maydell wrote:
> On 7 May 2014 16:20, Marc Zyngier wrote:
>> This patch series adds debug support, a key feature missing from the
>> KVM/arm64 port.
>>
>> The main idea is to keep track of whether the debug registers are
>> "dirty" (changed by the guest) or not. In this ca
Il 07/05/2014 14:32, Nadav Amit ha scritto:
Relative jumps and calls do the masking according to the operand size, and not
according to the address size as the KVM emulator does today. In 64-bit mode,
the resulting RIP is always 64-bit. Otherwise it is masked according to the
instruction operand
On 5/7/14, 5:43 PM, Bandan Das wrote:
Nadav Amit writes:
Relative jumps and calls do the masking according to the operand size, and not
according to the address size as the KVM emulator does today. In 64-bit mode,
the resulting RIP is always 64-bit. Otherwise it is masked according to the
ins
Hello.
On 06-05-2014 19:51, Andreas Herrmann wrote:
From: David Daney
It is a performance enhancement. When running in a simulator, each
system call to write a character takes a lot of time. Batching them
up decreases the overhead (in the root kernel) of each virtio console
write.
Signe
Il 07/05/2014 14:32, Nadav Amit ha scritto:
This series of patches fixes various scenarios in which KVM does not follow x86
specifications. Patches #4 and #5 are related; they reflect a new revision of
the previously submitted patch that dealt with the wrong masking of registers
in long-mode. Pa
Il 07/05/2014 15:29, Michael S. Tsirkin ha scritto:
It seems that it's easy to implement the EOI assist
on top of the PV EOI feature: simply convert the
page address to the format expected by PV EOI.
Notes:
-"No EOI required" is set only if interrupt injected
is edge triggered; this is true bec
On 5/7/14, 5:50 PM, Bandan Das wrote:
Nadav Amit writes:
32-bit operations are zero extended in 64-bit mode. Currently, the code does
not handle them correctly and keeps the high bits. In 16-bit mode, the high
32-bits are kept intact.
In addition, although it is not well-documented, when addr
https://bugzilla.kernel.org/show_bug.cgi?id=73721
Paolo Bonzini changed:
What   | Removed | Added
Status | NEW     | RESOLVED
CC     |
Il 07/05/2014 15:19, Gabriel L. Somlo ha scritto:
> On Wed, May 07, 2014 at 08:29:19AM +0200, Jan Kiszka wrote:
>> On 2014-05-06 20:35, gso...@gmail.com wrote:
>>> Signed-off-by: Gabriel Somlo
>>> ---
>>>
>>> Jan,
>>>
>>> After today's pull from kvm, I also need this to build against my
>>> Fedora
On Wed, May 07, 2014 at 04:20:47PM +0100, Marc Zyngier wrote:
> In order to be able to use the DBG_MDSCR_* macros from the KVM code,
> move the relevant definitions to the obvious include file.
>
> Also move the debug_el enum to a portion of the file that is guarded
> by #ifndef __ASSEMBLY__ in or
Treat monitor and mwait instructions as nop, which is architecturally
correct (but inefficient) behavior. We do this to prevent misbehaving
guests (e.g. OS X <= 10.7) from receiving invalid opcode faults after
failing to check for monitor/mwait availability via cpuid.
Since mwait-based idle loops
On Wed, May 07, 2014 at 02:10:59PM -0400, Gabriel L. Somlo wrote:
> Treat monitor and mwait instructions as nop, which is architecturally
> correct (but inefficient) behavior. We do this to prevent misbehaving
> guests (e.g. OS X <= 10.7) from receiving invalid opcode faults after
> failing to chec
On 05/07/2014 08:15 PM, Michael S. Tsirkin wrote:
On Wed, May 07, 2014 at 02:10:59PM -0400, Gabriel L. Somlo wrote:
Treat monitor and mwait instructions as nop, which is architecturally
correct (but inefficient) behavior. We do this to prevent misbehaving
guests (e.g. OS X <= 10.7) from receivin
On Wed, May 07, 2014 at 11:01:28AM -0400, Waiman Long wrote:
> v9->v10:
> - Make some minor changes to qspinlock.c to accommodate review feedback.
> - Change author to PeterZ for 2 of the patches.
> - Include Raghavendra KT's test results in patch 18.
Any chance you can post these on a git t
> Raghavendra KT had done some performance testing on this patch with
> the following results:
>
> Overall we are seeing good improvement for the pv-unfair version.
>
> System: 32 cpu sandybridge with HT on (4 node with 32 GB each)
> Guest : 8GB with 16 vcpu/VM.
> Average was taken over 8-10 data poi
Il 07/05/2014 20:10, Gabriel L. Somlo ha scritto:
1. I can't test svm.c (on AMD). As such, I'm not sure the
skip_emulated_instruction() call in my own version of nop_interception()
is necessary. If not, I could probably just call the already existing
nop_on_interception() (line 1
Treat monitor and mwait instructions as nop, which is architecturally
correct (but inefficient) behavior. We do this to prevent misbehaving
guests (e.g. OS X <= 10.7) from crashing after they fail to check for
monitor/mwait availability via cpuid.
Since mwait-based idle loops relying on these nop-
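A minimal sketch of such a handler (structure approximated from the
description; skip_emulated_instruction is declared here as an opaque
extern):

struct vcpu;                                  /* opaque for this sketch */
extern void skip_emulated_instruction(struct vcpu *v);

/* Treat a MONITOR/MWAIT exit like a NOP: advance the guest RIP past
 * the instruction and keep running the guest (return 1). */
static int nop_on_exit(struct vcpu *v)
{
    skip_emulated_instruction(v);
    return 1;
}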
On Tue, May 06, 2014 at 09:18:27AM +0200, Alexander Graf wrote:
>
> On 06.05.14 01:31, Marcelo Tosatti wrote:
> >On Mon, May 05, 2014 at 08:23:43PM -0300, Marcelo Tosatti wrote:
> >>Hi Alexander,
> >>
> >>On Mon, May 05, 2014 at 03:51:22PM +0200, Alexander Graf wrote:
> >>>When we migrate we ask t