+--
1 file changed, 2 insertions(+), 2 deletions(-)
Reviewed-by: Alexandre Chartre
And it is probably worth adding a 'Fixes:' tag:
Fixes: 334872a09198 ("x86/traps: Attempt to fixup exceptions in vDSO before signaling")
alex.
addresses.
Signed-off-by: Alexandre Chartre
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: H. Peter Anvin
Cc: Josh Poimboeuf
Cc: x...@kernel.org
---
This is a potential issue and I don't know if it can be triggered with the
current kernel: is there any code mapped to m
instrumentation markers are only active when CONFIG_DEBUG_ENTRY is
enabled as the end marker emits a NOP to prevent the compiler from merging
the annotation points. This means the objtool verification requires a
kernel compiled with this option.
Signed-off-by: Thomas Gleixner
Reviewed-by: Alexandre Chartre
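The CONFIG_DEBUG_ENTRY dependency above can be illustrated with a simplified
sketch. This is illustrative only, not the actual instrumentation_begin()/
instrumentation_end() macros (which live in include/linux/instrumentation.h
and use discard sections); the point is just that the markers compile to
nothing without CONFIG_DEBUG_ENTRY and that the end marker emits a NOP:

#ifdef CONFIG_DEBUG_ENTRY
/* Annotation points are real code: the end marker emits a NOP so the
 * compiler cannot merge two adjacent annotation points into one. */
# define instr_begin()  ({ asm volatile(""); })
# define instr_end()    ({ asm volatile("nop"); })
#else
/* Without CONFIG_DEBUG_ENTRY the markers vanish, so objtool has nothing
 * to check -- hence the requirement mentioned above. */
# define instr_begin()  do { } while (0)
# define instr_end()    do { } while (0)
#endif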
++--
4 files changed, 5 insertions(+), 5 deletions(-)
Reviewed-by: Alexandre Chartre
alex.
On 11/16/20 7:34 PM, Andy Lutomirski wrote:
On Mon, Nov 16, 2020 at 10:10 AM Alexandre Chartre
wrote:
On 11/16/20 5:57 PM, Andy Lutomirski wrote:
On Mon, Nov 16, 2020 at 6:47 AM Alexandre Chartre
wrote:
When entering the kernel from userland, use the per-task PTI stack
instead of the
On 11/17/20 4:52 PM, Andy Lutomirski wrote:
On Tue, Nov 17, 2020 at 7:07 AM Alexandre Chartre
wrote:
On 11/16/20 7:34 PM, Andy Lutomirski wrote:
On Mon, Nov 16, 2020 at 10:10 AM Alexandre Chartre
wrote:
On 11/16/20 5:57 PM, Andy Lutomirski wrote:
On Mon, Nov 16, 2020 at 6:47 AM
On 11/17/20 5:55 PM, Borislav Petkov wrote:
On Tue, Nov 17, 2020 at 08:56:23AM +0100, Alexandre Chartre wrote:
The main goal of ASI is to provide KVM address space isolation to
mitigate guest-to-host speculative attacks like L1TF or MDS.
Because the current L1TF and MDS mitigations are
On 11/17/20 6:07 PM, Borislav Petkov wrote:
On Tue, Nov 17, 2020 at 09:19:01AM +0100, Alexandre Chartre wrote:
We are not reversing PTI, we are extending it.
You're reversing it in the sense that you're mapping more kernel memory
into the user page table than what is mapped
On 11/17/20 7:28 PM, Borislav Petkov wrote:
On Tue, Nov 17, 2020 at 07:12:07PM +0100, Alexandre Chartre wrote:
Yes. L1TF/MDS allow some inter cpu-thread attacks which are not mitigated at
the moment. In particular, this allows a guest VM to attack another guest VM
or the host kernel running
On 11/17/20 10:23 PM, Borislav Petkov wrote:
On Tue, Nov 17, 2020 at 08:02:51PM +0100, Alexandre Chartre wrote:
No. This prevents the guest VM from gathering data from the host
kernel on the same cpu-thread. But there's no mitigation for a guest
VM running on a cpu-thread attacking an
On 11/17/20 10:26 PM, Borislav Petkov wrote:
On Tue, Nov 17, 2020 at 07:12:07PM +0100, Alexandre Chartre wrote:
Some benchmarks are available, in particular from phoronix:
What I was expecting was benchmarks *you* have run which show that
perf penalty, not something one can find quickly on
On 11/18/20 10:30 AM, David Laight wrote:
From: Alexandre Chartre
Sent: 18 November 2020 07:42
On 11/17/20 10:26 PM, Borislav Petkov wrote:
On Tue, Nov 17, 2020 at 07:12:07PM +0100, Alexandre Chartre wrote:
Some benchmarks are available, in particular from phoronix:
What I was expecting
On 11/18/20 2:22 PM, David Laight wrote:
From: Alexandre Chartre
Sent: 18 November 2020 10:30
...
Correct, this RFC is not changing the overhead. However, it is a step forward
for being able to execute some selected syscalls or interrupt handlers without
switching to the kernel page-table
On 11/18/20 12:29 PM, Borislav Petkov wrote:
On Wed, Nov 18, 2020 at 08:41:42AM +0100, Alexandre Chartre wrote:
Well, it looks like I wrongly assumed that KPTI was a well-known performance
overhead since it was introduced (because it adds extra page-table switches),
but you are right I
On 11/19/20 2:49 AM, Andy Lutomirski wrote:
On Tue, Nov 17, 2020 at 8:59 AM Alexandre Chartre
wrote:
On 11/17/20 4:52 PM, Andy Lutomirski wrote:
On Tue, Nov 17, 2020 at 7:07 AM Alexandre Chartre
wrote:
On 11/16/20 7:34 PM, Andy Lutomirski wrote:
On Mon, Nov 16, 2020 at 10:10 AM
On 11/19/20 9:05 AM, Alexandre Chartre wrote:
When entering the kernel from userland, use the per-task PTI stack
instead of the per-cpu trampoline stack. Like the trampoline stack,
the PTI stack is mapped both in the kernel and in the user page-table.
Using a per-task stack which is mapped
On 11/19/20 5:06 PM, Andy Lutomirski wrote:
On Thu, Nov 19, 2020 at 4:06 AM Alexandre Chartre
wrote:
On 11/19/20 9:05 AM, Alexandre Chartre wrote:
When entering the kernel from userland, use the per-task PTI stack
instead of the per-cpu trampoline stack. Like the trampoline stack,
the PTI
On 11/19/20 8:10 PM, Thomas Gleixner wrote:
On Mon, Nov 16 2020 at 19:10, Alexandre Chartre wrote:
On 11/16/20 5:57 PM, Andy Lutomirski wrote:
On Mon, Nov 16, 2020 at 6:47 AM Alexandre Chartre
wrote:
When executing more code in the kernel, we are likely to reach a point
where we need to
On 02/10/2019 10:23 PM, Thomas Gleixner wrote:
On Fri, 25 Jan 2019, Alexandre Chartre wrote:
Note that this issue has been observed and reproduced with a custom kernel
with some code mapped to different virtual addresses and using jump labels.
As jump labels use text_poke_bp(), crashes were
On 02/11/2019 10:15 AM, Thomas Gleixner wrote:
On Mon, 11 Feb 2019, Alexandre Chartre wrote:
On 02/10/2019 10:23 PM, Thomas Gleixner wrote:
On Fri, 25 Jan 2019, Alexandre Chartre wrote:
Note that this issue has been observed and reproduced with a custom kernel
with some code mapped to
, 2019 at 02:17:20PM +0200, Alexandre Chartre
wrote:
On 7/12/19 1:44 PM, Peter Zijlstra wrote:
AFAIK this wants/needs to be combined with core-scheduling to be
useful, but not a single mention of that is anywhere.
No. This is actually an alternative to core-scheduling. Eventually, ASI
will kick
ASI session
structure (struct asi_session).
Signed-off-by: Alexandre Chartre
---
arch/x86/include/asm/asi.h | 4 ++
arch/x86/include/asm/asi_session.h | 17 ++
arch/x86/include/asm/mmu_context.h | 19 ++-
arch/x86/include/asm/tlbflush.h| 12
arch/x86/mm/asi.c
.char...@oracle.com
[4] Core Scheduling - https://lwn.net/Articles/803652
[5] Page Table Isolation (PTI) -
https://www.kernel.org/doc/html/latest/x86/pti.html
Thanks,
alex.
-----
Alexandre Chartre (7):
mm/x86: Introduce kernel Address Space Isolation (ASI)
mm/asi: ASI entry/exit interface
mm/asi: Im
Exit ASI as soon as a task is entering the scheduler (__schedule()),
otherwise ASI will likely fault quickly, for example when accessing
run queues. The task will return to ASI when it is scheduled again.
Signed-off-by: Alexandre Chartre
---
arch/x86/include/asm/asi.h | 3 ++
arch/x86/mm/asi.c
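As a rough sketch of where this hooks in (the per-task 'asi' pointer and the
asi_exit() helper below follow the series' naming but are assumptions, not
the actual diff):

#include <linux/sched.h>

struct asi;
extern void asi_exit(struct asi *asi);   /* from the ASI entry/exit patch */

/*
 * Sketch: called at the very top of __schedule().  'tsk->asi' is a
 * hypothetical per-task pointer recording the ASI the task was running
 * with; asi_exit() switches back to the full kernel page-table.  The
 * task re-enters its ASI when it is scheduled again.
 */
static inline void asi_schedule_out(struct task_struct *tsk)
{
        if (tsk->asi)                   /* hypothetical field */
                asi_exit(tsk->asi);
}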
ave a new generation of the same ASI pagetable, then the TLB
needs to be flushed. This behavior is similar to the context tracking
done when switching mm.
Signed-off-by: Alexandre Chartre
---
arch/x86/include/asm/asi.h | 23 +++
arch/x86/mm/asi.c
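The generation check could look roughly like this sketch, mirroring the mm
tlb_gen logic; the asi->pgtable_gen field and the per-CPU variable name are
assumptions:

#include <linux/atomic.h>
#include <linux/percpu.h>
#include <asm/tlbflush.h>

struct asi {
        atomic64_t pgtable_gen;         /* assumed: bumped on every map/unmap */
        /* ... */
};

static DEFINE_PER_CPU(u64, asi_last_pgtable_gen);

/*
 * Sketch: on ASI entry, flush the local TLB if this CPU last ran this
 * ASI with an older generation of its page-table.
 */
static void asi_maybe_flush_tlb(struct asi *asi)
{
        u64 gen = atomic64_read(&asi->pgtable_gen);

        if (this_cpu_read(asi_last_pgtable_gen) != gen) {
                flush_tlb_local();
                this_cpu_write(asi_last_pgtable_gen, gen);
        }
}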
: Alexandre Chartre
---
arch/x86/entry/calling.h | 26 +-
arch/x86/entry/entry_64.S | 22 ++
arch/x86/include/asm/asi.h | 122 +
arch/x86/include/asm/asi_session.h | 7 ++
arch/x86/include/asm/mmu_context.h | 3 +-
arch/x86/kernel/asm
aborted then the location and
address of the fault can be logged and optionally include a stack
trace.
Signed-off-by: Alexandre Chartre
---
arch/x86/include/asm/asi.h | 42 -
arch/x86/mm/asi.c | 95 ++
arch/x86/mm/fault.c| 20
type and a pagetable.
Signed-off-by: Alexandre Chartre
---
arch/x86/include/asm/asi.h | 88 ++
arch/x86/mm/Makefile | 1 +
arch/x86/mm/asi.c | 60 ++
security/Kconfig | 10 +
4 files changed, 159 insertions
re returning to userland.
Signed-off-by: Alexandre Chartre
---
arch/x86/entry/calling.h| 13 -
arch/x86/entry/common.c | 29 -
arch/x86/entry/entry_64.S | 6 ++
arch/x86/include/asm/asi.h | 9 +
arch/x86/include/asm/
references to another page-table.
Signed-off-by: Alexandre Chartre
---
arch/x86/include/asm/dpt.h | 23 +
arch/x86/mm/Makefile | 2 +-
arch/x86/mm/dpt.c | 67 ++
3 files changed, 91 insertions(+), 1 deletion(-)
create mode 100644 arch
Add wrappers around the page table entry (pgd/p4d/pud/pmd) set
functions which check that an existing entry is not being
overwritten.
Signed-off-by: Alexandre Chartre
---
arch/x86/mm/dpt.c | 126 ++
1 file changed, 126 insertions(+)
diff --git a/arch
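One such wrapper could look roughly like this (the dpt_* naming and the exact
error policy are assumptions, not the actual patch):

#include <linux/bug.h>
#include <linux/errno.h>
#include <linux/pgtable.h>

/*
 * Sketch: set a PMD entry in a decorated page-table, refusing to
 * silently overwrite an existing, different entry.
 */
static int dpt_set_pmd_safe(pmd_t *pmd, pmd_t value)
{
        if (!pmd_none(*pmd) && pmd_val(*pmd) != pmd_val(value)) {
                WARN(1, "dpt: attempt to overwrite existing pmd entry\n");
                return -EBUSY;
        }
        set_pmd(pmd, value);
        return 0;
}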
Add functions to allocate p4d/pud/pmd/pte pages for a decorated
page-table and keep track of them.
Signed-off-by: Alexandre Chartre
---
arch/x86/mm/dpt.c | 110 ++
1 file changed, 110 insertions(+)
diff --git a/arch/x86/mm/dpt.c b/arch/x86/mm/dpt.c
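Allocating a page-table page for the dpt and remembering it for later
teardown could look like this sketch (struct dpt, its pgt_pages list and the
helper name are assumptions):

#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/slab.h>

struct dpt {
        struct list_head pgt_pages;     /* assumed list of tracked pages */
        /* ... */
};

struct dpt_pgt_page {
        struct list_head list;
        void *ptr;
};

/*
 * Sketch: allocate a zeroed page to back a lower-level page-table of
 * the decorated page-table and record it so it can be freed when the
 * dpt is destroyed.
 */
static void *dpt_alloc_pgt_page(struct dpt *dpt)
{
        struct dpt_pgt_page *page;
        void *ptr;

        ptr = (void *)get_zeroed_page(GFP_KERNEL);
        if (!ptr)
                return NULL;

        page = kzalloc(sizeof(*page), GFP_KERNEL);
        if (!page) {
                free_page((unsigned long)ptr);
                return NULL;
        }

        page->ptr = ptr;
        list_add(&page->list, &dpt->pgt_pages);
        return ptr;
}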
Add wrappers around the p4d/pud/pmd/pte offset kernel functions which
ensure that page-table pointers are in the specified decorated page-table.
Signed-off-by: Alexandre Chartre
---
arch/x86/mm/dpt.c | 66 +++
1 file changed, 66 insertions(+)
diff
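In other words, the wrapper only descends into a next-level table if that
table page belongs to the dpt; a sketch (dpt_pgt_page_owned() stands in for
the tracking check described above and is an assumption):

#include <linux/pgtable.h>

struct dpt;
extern bool dpt_pgt_page_owned(struct dpt *dpt, void *page);   /* assumed */

/*
 * Sketch: like pmd_offset(), but only return a pointer if the PMD page
 * referenced by *pud was allocated for this decorated page-table.
 * Otherwise the caller must first install a dpt-owned PMD page.
 */
static pmd_t *dpt_pmd_offset(struct dpt *dpt, pud_t *pud, unsigned long addr)
{
        pmd_t *pmd = pmd_offset(pud, addr);

        if (!dpt_pgt_page_owned(dpt, (void *)((unsigned long)pmd & PAGE_MASK)))
                return NULL;

        return pmd;
}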
level (PGD, P4D, PUD, PMD, PTE) at which the copy should be
done. Also, the functions don't rely on mm or vma, and they don't alter the
source page-table even if an entry is bad. Finally, the VA range start
and size don't need to be page-aligned.
Signed-off-by: Alexandre Chartre
---
arch/
Add functions to keep track of VA ranges mapped in a decorated page-table.
This will be used when unmapping to ensure the same range is unmapped,
at the same page-table level. This will also be used to handle mapping
and unmapping of overlapping VA ranges.
Signed-off-by: Alexandre Chartre
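The bookkeeping this implies could be as simple as the following sketch
(structure and field names are assumptions, chosen to match the description
above):

#include <linux/list.h>

struct dpt {
        struct list_head mapping_list;  /* assumed list of mapped ranges */
        /* ... */
};

/*
 * Sketch: one record per VA range mapped into the decorated page-table,
 * remembering the page-table level it was copied at so unmapping can
 * undo exactly what mapping did, plus a refcount to cope with
 * overlapping map/unmap requests.
 */
struct dpt_range_mapping {
        struct list_head list;
        void *ptr;              /* start of the mapped VA range */
        size_t size;
        int level;              /* PGD/P4D/PUD/PMD/PTE */
        int refcnt;
};

static struct dpt_range_mapping *dpt_get_range_mapping(struct dpt *dpt,
                                                       void *ptr)
{
        struct dpt_range_mapping *range;

        list_for_each_entry(range, &dpt->mapping_list, list) {
                if (range->ptr == ptr)
                        return range;
        }
        return NULL;
}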
Add helper functions to easily map a module into a decorated page-table.
Signed-off-by: Alexandre Chartre
---
arch/x86/include/asm/dpt.h | 21 +
1 file changed, 21 insertions(+)
diff --git a/arch/x86/include/asm/dpt.h b/arch/x86/include/asm/dpt.h
index 85d2c5051acb
part III) and later by KVM ASI.
A decorated page-table is independent of ASI, and can potentially be used
anywhere a page-table is needed.
Thanks,
alex.
-
Alexandre Chartre (13):
mm/x86: Introduce decorated page-table (dpt)
mm/dpt: Track buffers allocated for a decorated page-table
mm/dpt
Provide functions to copy page-table entries from the kernel page-table
to a decorated page-table for a percpu buffer. A percpu buffer has a
different VA range for each CPU, and all of them have to be copied.
Signed-off-by: Alexandre Chartre
---
arch/x86/include/asm/dpt.h | 6 ++
arch/x86/mm
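Since each CPU has its own copy of a percpu buffer at a different VA, the
copy has to iterate over all possible CPUs; a sketch (dpt_map_range() is an
assumed helper wrapping the copy functions from the earlier patches):

#include <linux/cpumask.h>
#include <linux/percpu.h>

struct dpt;
extern int dpt_map_range(struct dpt *dpt, void *ptr, size_t size); /* assumed */

/*
 * Sketch: map a percpu buffer into a decorated page-table by mapping
 * the per-CPU instance of the buffer for every possible CPU.
 */
static int dpt_map_percpu(struct dpt *dpt, void *percpu_ptr, size_t size)
{
        int cpu, err;

        for_each_possible_cpu(cpu) {
                err = dpt_map_range(dpt, per_cpu_ptr(percpu_ptr, cpu), size);
                if (err)
                        return err;
        }
        return 0;
}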
another page table is not modified by mistake.
As information (address, size, page-table level) about VA ranges mapped
to the decorated page-table is tracked, clearing is done by just
specifying the start address of the range.
Signed-off-by: Alexandre Chartre
---
arch/x86/include/asm/dpt.h
-by: Alexandre Chartre
---
arch/x86/include/asm/asi.h | 2 ++
arch/x86/mm/asi.c | 57 ++
2 files changed, 59 insertions(+)
diff --git a/arch/x86/include/asm/asi.h b/arch/x86/include/asm/asi.h
index ac0594d4f549..eafed750e07f 100644
--- a/arch/x86
: Alexandre Chartre
---
arch/x86/include/asm/dpt.h | 1 +
arch/x86/mm/dpt.c | 25 +
2 files changed, 26 insertions(+)
diff --git a/arch/x86/include/asm/dpt.h b/arch/x86/include/asm/dpt.h
index fd8c1b84ffe2..3234ba968d80 100644
--- a/arch/x86/include/asm/dpt.h
+++ b/arch
not parts of the kernel page table referenced from the
page-table. To do so, we will keep track of buffers when building the
page-table.
Signed-off-by: Alexandre Chartre
---
arch/x86/include/asm/dpt.h | 21 ++
arch/x86/mm/dpt.c | 82 ++
2 files
Add a sequence to test if an ASI remains active after it is
scheduled out and then scheduled back in.
Signed-off-by: Alexandre Chartre
---
drivers/staging/asi/asidrv.c | 98
drivers/staging/asi/asidrv.h | 2 +
2 files changed, 100 insertions(+)
diff --git
Add options to the asicmd CLI to list, add and clear ASI mapped
VA ranges.
Signed-off-by: Alexandre Chartre
---
drivers/staging/asi/asicmd.c | 243 +++
1 file changed, 243 insertions(+)
diff --git a/drivers/staging/asi/asicmd.c b/drivers/staging/asi/asicmd.c
Introduce the infrastructure for the ASI driver. This driver is meant
for testing ASI. It creates a test ASI and allows running test
sequences on this ASI.
Signed-off-by: Alexandre Chartre
---
drivers/staging/asi/Makefile | 7 ++
drivers/staging/asi/asidrv.c | 129
.
Also, the data effectively mapped can overlap with an already mapped buffer.
This is not an issue when mapping data but, when unmapping, make sure
data from another buffer doesn't get unmapped as a side effect.
Signed-off-by: Alexandre Chartre
---
arch/x86/include/asm/dpt.h | 1 +
arch/x86/mm/
Add a sequence to test if an ASI remains active after receiving
an interrupt which is itself interrupted by an NMI.
Signed-off-by: Alexandre Chartre
---
drivers/staging/asi/asidrv.c | 62 +++-
drivers/staging/asi/asidrv.h | 1 +
2 files changed, 62 insertions
ASI;
- ASIDRV_SEQ_PRINTK calls printk while running with ASI.
Signed-off-by: Alexandre Chartre
---
drivers/staging/asi/asidrv.c | 248 +++
drivers/staging/asi/asidrv.h | 29
2 files changed, 277 insertions(+)
create mode 100644 drivers/staging/asi/asidrv.h
The asicmd command is a userland CLI to interact with the ASI driver
(asidrv), in particular it provides an interface for running ASI
test sequences.
Signed-off-by: Alexandre Chartre
---
drivers/staging/asi/Makefile | 6 ++
drivers/staging/asi/asicmd.c | 120
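A userland sketch of how such a CLI drives the driver (the device node name,
ioctl number and argument below are purely hypothetical illustrations; the
real interface is whatever asidrv.h in this series defines):

/* Hypothetical userland sketch: ask asidrv to run one test sequence. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#include "asidrv.h"     /* assumed to provide ASIDRV_IOCTL_RUN_SEQUENCE */

static int asicmd_run_sequence(unsigned int seq)
{
        int fd, err;

        fd = open("/dev/asi", O_RDONLY);        /* device name assumed */
        if (fd < 0) {
                perror("open /dev/asi");
                return -1;
        }

        err = ioctl(fd, ASIDRV_IOCTL_RUN_SEQUENCE, &seq);
        if (err < 0)
                perror("ASIDRV_IOCTL_RUN_SEQUENCE");

        close(fd);
        return err;
}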
Add a sequence to test if an ASI remains active after receiving
an NMI.
Signed-off-by: Alexandre Chartre
---
drivers/staging/asi/asidrv.c | 116 ++-
drivers/staging/asi/asidrv.h | 4 ++
2 files changed, 118 insertions(+), 2 deletions(-)
diff --git a/drivers
Add more options to the asicmd command to test access to mapped
or unmapped memory buffers, interrupts, NMIs, and scheduling while
using ASI.
Signed-off-by: Alexandre Chartre
---
drivers/staging/asi/asicmd.c | 14 ++
1 file changed, 14 insertions(+)
diff --git a/drivers/staging/asi/asicmd.c
Define the test ASI type which can be used for testing or experimenting
with ASI.
Signed-off-by: Alexandre Chartre
---
arch/x86/include/asm/asi.h | 2 ++
arch/x86/mm/asi.c | 1 +
drivers/staging/Makefile | 1 +
3 files changed, 4 insertions(+)
diff --git a/arch/x86/include/asm/asi.h b
AILED - unexpected ASI state
# ./asicmd fault
ADDRESS COUNT SYMBOL
0x811081f3 1 log_store.constprop.27+0x1f3/0x280
We still see a new fault but at a different address (this time because
cpu_number is not mapped).
-
Alexandre Chartre (14):
mm/asi: Define the test ASI type
Add a sequence to test if an ASI remains active after receiving
an interrupt.
Signed-off-by: Alexandre Chartre
---
drivers/staging/asi/asidrv.c | 144 +--
drivers/staging/asi/asidrv.h | 5 ++
2 files changed, 144 insertions(+), 5 deletions(-)
diff --git a
Add a sequence to test whether ASI exits or not when accessing a mapped
or unmapped memory buffer.
Signed-off-by: Alexandre Chartre
---
drivers/staging/asi/asidrv.c | 70
drivers/staging/asi/asidrv.h | 3 ++
2 files changed, 73 insertions(+)
diff --git a/drivers
Add options to the asicmd CLI to list and clear ASI page faults. Also
add an option to enable/disable displaying stack trace on ASI page
fault.
Signed-off-by: Alexandre Chartre
---
drivers/staging/asi/asicmd.c | 68 ++--
1 file changed, 65 insertions(+), 3
Add ioctls to list and clear ASI page faults. Also add an ioctl to enable
or disable displaying a stack trace on ASI page fault.
Signed-off-by: Alexandre Chartre
---
drivers/staging/asi/asidrv.c | 88
drivers/staging/asi/asidrv.h | 32 +
2 files changed, 120
Add ioctls to list, add, and clear ASI mapped VA ranges.
Signed-off-by: Alexandre Chartre
---
drivers/staging/asi/asidrv.c | 138 +++
drivers/staging/asi/asidrv.h | 26 +++
2 files changed, 164 insertions(+)
diff --git a/drivers/staging/asi/asidrv.c b/drivers
On 02/11/2019 10:57 AM, Alexandre Chartre wrote:
On 02/11/2019 10:15 AM, Thomas Gleixner wrote:
On Mon, 11 Feb 2019, Alexandre Chartre wrote:
On 02/10/2019 10:23 PM, Thomas Gleixner wrote:
On Fri, 25 Jan 2019, Alexandre Chartre wrote:
Note that this issue has been observed and reproduced
On 01/11/2019 05:57 PM, Josh Poimboeuf wrote:
On Fri, Jan 11, 2019 at 05:46:36PM +0100, Alexandre Chartre wrote:
On 01/11/2019 04:28 PM, Josh Poimboeuf wrote:
On Fri, Jan 11, 2019 at 01:10:52PM +0100, Alexandre Chartre wrote:
To avoid any issue with live patching the call instruction
On 01/15/2019 05:19 PM, Steven Rostedt wrote:
On Tue, 15 Jan 2019 12:10:19 +0100
Alexandre Chartre wrote:
Thinking more about it (and I've probably missed something or I am just being
totally stupid because this seems way too simple), can't we just replace the
"call"
e handled using the full kernel address space.
Thanks,
alex.
---
Alexandre Chartre (18):
kvm/isolation: function to track buffers allocated for the KVM page
table
kvm/isolation: add KVM page table entry free functions
kvm/isolation: add KVM page table entry offset functions
kvm/isolation:
performance hit
which some users will not want to take for security gain.
Signed-off-by: Liran Alon
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/Makefile|2 +-
arch/x86/kvm/isolation.c | 26 ++
2 files changed, 27 insertions(+), 1 deletions(-)
create mode
-off-by: Liran Alon
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/isolation.c | 95 ++
arch/x86/kvm/isolation.h |8
arch/x86/kvm/x86.c | 10 -
3 files changed, 112 insertions(+), 1 deletions(-)
create mode 100644 arch/x86/kvm
(kvm_write_guest_virt_system() can pull in tons of pages)
4) On return to userspace (e.g. QEMU)
5) On prologue of IRQ handlers
Signed-off-by: Liran Alon
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/isolation.c |7 ++-
arch/x86/kvm/isolation.h |3 +++
arch/x86/kvm/mmu.c |3
From: Liran Alon
As every entry to the guest checks whether it should switch from host_mm to kvm_mm,
these branches are on a very hot path. Optimize them by using
static_branch.
Signed-off-by: Liran Alon
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/isolation.c | 11 ---
arch/x86/kvm
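A sketch of that optimization (the key name and call site are assumptions;
static_branch_unlikely() compiles to a patched jump, so the disabled case
costs only a NOP on the hot path):

#include <linux/jump_label.h>

/* Assumed key name; enabled once when the isolation module parameter is
 * set, e.g. static_branch_enable(&kvm_isolation_enabled). */
DEFINE_STATIC_KEY_FALSE(kvm_isolation_enabled);

extern void kvm_isolation_enter(void);  /* switches to the KVM address space */

/*
 * Sketch: guard the host_mm -> kvm_mm switch on every guest entry with
 * the static key so the branch is patched out when isolation is off.
 */
static inline void kvm_maybe_enter_isolation(void)
{
        if (static_branch_unlikely(&kvm_isolation_enabled))
                kvm_isolation_enter();
}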
These functions are wrappers around the p4d/pud/pmd/pte offset functions
which ensure that page table pointers are in the KVM page table.
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/isolation.c | 61 ++
1 files changed, 61 insertions(+), 0
These functions are wrappers around the p4d/pud/pmd/pte free functions,
which can be used with any pointer in the directory.
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/isolation.c | 26 ++
1 files changed, 26 insertions(+), 0 deletions(-)
diff --git a/arch/x86
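"Any pointer in the directory" boils down to rounding the pointer to its page
boundary before freeing; a sketch (name and signature assumed):

#include <linux/gfp.h>
#include <linux/pgtable.h>

/*
 * Sketch: free a page-table page given any pointer into it, by rounding
 * the pointer down to the start of the page first.
 */
static void kvm_pte_free_page(pte_t *pte)
{
        free_page((unsigned long)pte & PAGE_MASK);
}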
#VMExit handlers still run
with full host address space.
However, this introduces the entry points and places for switching.
Next commits will change the switch to happen only when necessary.
Signed-off-by: Liran Alon
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/isolation.c | 20
This will be used when we have to clear mappings to ensure the same
range is cleared at the same page table level at which it was copied.
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/isolation.c | 86 -
1 files changed, 84 insertions(+), 2 deletions
These functions allocate p4d/pud/pmd/pte pages and ensure that
pages are in the KVM page table.
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/isolation.c | 94 ++
1 files changed, 94 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm
Add wrappers around the page table entry (pgd/p4d/pud/pmd) set function
to check that an existing entry is not being overwritten.
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/isolation.c | 107 ++
1 files changed, 107 insertions(+), 0 deletions
the kernel page table isn't mistakenly modified.
Information (address, size, page table level) about address ranges
mapped to the KVM page table is tracked, so clearing a mapping is done
by just specifying the start address of the range.
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/isolat
The KVM page table is initialized with adding core memory mappings:
the kernel text, the per-cpu memory, the kvm module, the cpu_entry_area,
%esp fixup stacks, IRQ stacks.
Signed-off-by: Alexandre Chartre
---
arch/x86/kernel/cpu/common.c |2 +
arch/x86/kvm/isolation.c | 131
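Judging from the kvm_copy_mapping() call quoted later in the thread, the
initialization presumably chains a few such copies; a sketch with an assumed
signature and level constants, showing only two of the listed ranges:

#include <asm/cpu_entry_area.h>
#include <asm/page_64_types.h>

/* Assumed: copy kernel page-table entries for [ptr, ptr + size) at level. */
enum pgt_level { PGT_LEVEL_PTE, PGT_LEVEL_PMD, PGT_LEVEL_PUD, PGT_LEVEL_P4D };
extern int kvm_copy_mapping(void *ptr, size_t size, enum pgt_level level);

/*
 * Sketch: populate the KVM page table with the core kernel mappings.
 */
static int kvm_isolation_init_mappings(void)
{
        int rv;

        /* Kernel text: copied at PMD level since the PUD is shared with
         * the module mapping space (see the comment quoted later). */
        rv = kvm_copy_mapping((void *)__START_KERNEL_map, KERNEL_IMAGE_SIZE,
                              PGT_LEVEL_PMD);
        if (rv)
                return rv;

        /* cpu_entry_area; the other ranges (per-cpu memory, the kvm
         * module, IRQ stacks, ...) follow the same pattern. */
        return kvm_copy_mapping((void *)CPU_ENTRY_AREA_BASE,
                                CPU_ENTRY_AREA_MAP_SIZE, PGT_LEVEL_P4D);
}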
In addition to core memory mappings, the KVM page table has to be
initialized with vmx specific data.
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/vmx/vmx.c | 19 +++
1 files changed, 19 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx
Map vmx cpu to the KVM address space when a vmx cpu is created, and
unmap when it is freed.
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/vmx/vmx.c | 65
1 files changed, 65 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/vmx/vmx.c
From: Liran Alon
KVM isolation enter/exit is done by switching between the KVM address
space and the kernel address space.
Signed-off-by: Liran Alon
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/isolation.c | 30 --
arch/x86/mm/tlb.c|1 +
include
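At its core the switch is a CR3 write between the KVM mm and the task's
regular mm; a deliberately naive sketch (it ignores PCID, preemption and
interrupt safety, and kvm_mm is the separate mm created earlier in the
series):

#include <linux/mm_types.h>
#include <asm/special_insns.h>
#include <asm/tlbflush.h>

extern struct mm_struct kvm_mm;         /* the isolated KVM mm (assumed) */

/* Sketch: load the KVM page table on isolation entry ... */
static void kvm_isolation_enter_sketch(void)
{
        write_cr3(__pa(kvm_mm.pgd));
}

/* ... and restore the currently loaded mm's page table on exit. */
static void kvm_isolation_exit_sketch(void)
{
        struct mm_struct *mm = this_cpu_read(cpu_tlbstate.loaded_mm);

        write_cr3(__pa(mm->pgd));
}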
Map VM data, in particular the kvm structure data.
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/isolation.c | 17 +
arch/x86/kvm/isolation.h |2 ++
arch/x86/kvm/vmx/vmx.c | 31 ++-
arch/x86/kvm/x86.c | 12
include
instead of KVM isolated address space.
Signed-off-by: Liran Alon
Signed-off-by: Alexandre Chartre
---
arch/x86/include/asm/apic.h|4 ++--
arch/x86/include/asm/hardirq.h | 10 ++
arch/x86/kernel/smp.c |2 +-
arch/x86/platform/uv/tlb_uv.c |2 +-
4 files changed
table. To do so, we will keep track of buffers when building
the KVM page table.
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/isolation.c | 119 ++
1 files changed, 119 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/isolation.c b/arch
percpu buffer.
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/isolation.c | 34 ++
arch/x86/kvm/isolation.h |2 ++
2 files changed, 36 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/isolation.c b/arch/x86/kvm/isolation.c
index 539e287..2052abf
mapped
range, then remap the entire larger map.
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/isolation.c | 67 +++---
1 files changed, 63 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/isolation.c b/arch/x86/kvm/isolation.c
index e494a15
page table.
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/isolation.c | 229 ++
arch/x86/kvm/isolation.h |1 +
2 files changed, 230 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/isolation.c b/arch/x86/kvm/isolation.c
index b681e4
. The fault will still be reported but without the
stack trace.
Signed-off-by: Alexandre Chartre
---
arch/x86/kernel/dumpstack.c |1 +
arch/x86/kvm/isolation.c| 202 +++
arch/x86/mm/fault.c | 12 +++
3 files changed, 215 insertions(+), 0 dele
task and vcpu. This should eventually
be improved to be independent of any task/vcpu mapping.
Also check that the task effectively entering the KVM address space
is mapped.
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/isolation.c | 182 ++
arch/x8
KVM memslots can change after they have been created so new memslots
have to be mapped when they are created.
TODO: we currently don't unmap old memslots; they should be unmapped
when they are freed.
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/isolation.c |
From: Liran Alon
Interrupt handlers will need this handler to switch from
the KVM address space back to the kernel address space
on their prologue.
Signed-off-by: Liran Alon
Signed-off-by: Alexandre Chartre
---
arch/x86/include/asm/irq.h |1 +
arch/x86/kernel/irq.c | 11
KVM buses can change after they have been created so new buses
have to be mapped when they are created.
Signed-off-by: Alexandre Chartre
---
arch/x86/kvm/isolation.c | 37 +
arch/x86/kvm/isolation.h |1 +
arch/x86/kvm/x86.c | 13
From: Liran Alon
Export symbols needed to create, manage, populate and switch
a mm from a kernel module (kvm in this case).
This is a hacky way for now to start.
This should be changed to some suitable memory-management API.
Signed-off-by: Liran Alon
Signed-off-by: Alexandre Chartre
On 5/13/19 5:46 PM, Andy Lutomirski wrote:
On Mon, May 13, 2019 at 7:39 AM Alexandre Chartre
wrote:
From: Liran Alon
Add the address_space_isolation parameter to the kvm module.
When set to true, KVM #VMExit handlers run in isolated address space
which maps only KVM required code and
On 5/13/19 5:45 PM, Andy Lutomirski wrote:
On Mon, May 13, 2019 at 7:39 AM Alexandre Chartre
wrote:
From: Liran Alon
Create a separate mm for KVM that will be active when KVM #VMExit
handlers run. Up until the point at which we architecturally need to
access host (or other VM) sensitive data
On 5/13/19 5:49 PM, Andy Lutomirski wrote:
On Mon, May 13, 2019 at 7:39 AM Alexandre Chartre
wrote:
From: Liran Alon
Interrupt handlers will need this handler to switch from
the KVM address space back to the kernel address space
on their prologue.
This patch doesn't appear to do any
On 5/13/19 6:02 PM, Andy Lutomirski wrote:
On Mon, May 13, 2019 at 7:39 AM Alexandre Chartre
wrote:
The KVM page fault handler handles page faults occurring while using
the KVM address space by switching to the kernel address space and
retrying the access (except if the fault occurs while
On 5/13/19 5:51 PM, Andy Lutomirski wrote:
On Mon, May 13, 2019 at 7:39 AM Alexandre Chartre
wrote:
From: Liran Alon
Next commits will change most of KVM #VMExit handlers to run
in KVM isolated address space. Any interrupt handler raised
during execution in KVM address space needs to
On 5/13/19 5:50 PM, Dave Hansen wrote:
+ /*
+ * Copy the mapping for all the kernel text. We copy at the PMD
+ * level since the PUD is shared with the module mapping space.
+ */
+ rv = kvm_copy_mapping((void *)__START_KERNEL_map, KERNEL_IMAGE_SIZE,
+
On 5/13/19 6:00 PM, Andy Lutomirski wrote:
On Mon, May 13, 2019 at 8:50 AM Dave Hansen wrote:
+ /*
+ * Copy the mapping for all the kernel text. We copy at the PMD
+ * level since the PUD is shared with the module mapping space.
+ */
+ rv = kvm_copy_mapping((void *)_
On 5/14/19 9:07 AM, Peter Zijlstra wrote:
On Mon, May 13, 2019 at 11:13:34AM -0700, Andy Lutomirski wrote:
On Mon, May 13, 2019 at 9:28 AM Alexandre Chartre
wrote:
Actually, I am not sure this is effectively useful because the IRQ
handler is probably faulting before it tries to exit
On 5/14/19 9:09 AM, Peter Zijlstra wrote:
On Mon, May 13, 2019 at 11:18:41AM -0700, Andy Lutomirski wrote:
On Mon, May 13, 2019 at 7:39 AM Alexandre Chartre
wrote:
pcpu_base_addr is already mapped to the KVM address space, but this
represents the first percpu chunk. To access a per-cpu
On 5/13/19 11:08 PM, Liran Alon wrote:
On 13 May 2019, at 21:17, Andy Lutomirski wrote:
I expect that the KVM address space can eventually be expanded to include
the ioctl syscall entries. By doing so, and also adding the KVM page table
to the process userland page table (which should be
On 5/14/19 10:34 AM, Andy Lutomirski wrote:
On May 14, 2019, at 1:25 AM, Alexandre Chartre
wrote:
On 5/14/19 9:09 AM, Peter Zijlstra wrote:
On Mon, May 13, 2019 at 11:18:41AM -0700, Andy Lutomirski wrote:
On Mon, May 13, 2019 at 7:39 AM Alexandre Chartre
wrote:
pcpu_base_addr is