On Fri, 16 Jun 2017, Tom Lendacky wrote:
> diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
> index a105796..988b336 100644
> --- a/arch/x86/include/asm/mem_encrypt.h
> +++ b/arch/x86/include/asm/mem_encrypt.h
> @@ -15,16 +15,24 @@
>
> #ifndef __ASSEMBLY__
>
> +#ifndef pgprot_encrypted
> +#define pgprot_encrypted(prot) (prot)
> +#endif
> +
> +#ifndef pgprot_decrypted
> +#define pgprot_decrypted(prot)	(prot)
> +#endif
That looks wrong. It's not decrypted it's rather unencrypted, right?
Thanks,
tglx
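For reference, the arch-specific overrides these fallbacks pair with might look as follows; a minimal sketch, assuming a global sme_me_mask that carries the encryption bit (per this patchset):

/* Sketch: x86 overrides of the generic no-op fallbacks. */
extern unsigned long sme_me_mask;

#define pgprot_encrypted(prot)	__pgprot(pgprot_val(prot) | sme_me_mask)
#define pgprot_decrypted(prot)	__pgprot(pgprot_val(prot) & ~sme_me_mask)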
On Fri, 16 Jun 2017, Tom Lendacky wrote:
> Currently there is a check if the address being mapped is in the ISA
> range (is_ISA_range()), and if it is then phys_to_virt() is used to
> perform the mapping. When SME is active, however, this will result
> in the mapping having the encryption bit set
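A condensed sketch of the problem being described (not the actual __ioremap_caller() code): the ISA short-cut hands back a direct-map alias, and the direct map is encrypted once SME is active.

/* Sketch: why the is_ISA_range() short-cut breaks under SME. */
void __iomem *sketch_ioremap(resource_size_t phys_addr, unsigned long size)
{
	if (is_ISA_range(phys_addr, phys_addr + size - 1))
		/* direct-map alias: mapped with the encryption bit set */
		return (void __iomem *)phys_to_virt(phys_addr);

	/* the regular path can apply pgprot_decrypted() instead */
	return NULL;
}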
On Wed, Jun 21, 2017 at 09:18:59AM +0200, Thomas Gleixner wrote:
> That looks wrong. It's not decrypted it's rather unencrypted, right?
Yeah, in previous versions of the patchset, "decrypted" and
"unencrypted" were both present, so we settled on "decrypted" for the
nomenclature.
--
Regards/Gruss,
Boris.
On Fri, Jun 16, 2017 at 01:53:38PM -0500, Tom Lendacky wrote:
> The SMP MP-table is built by UEFI and placed in memory in a decrypted
> state. These tables are accessed using a mix of early_memremap(),
> early_memunmap(), phys_to_virt() and virt_to_phys(). Change all accesses
> to use early_memremap()/early_memunmap().
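The converted access pattern looks roughly like the sketch below; mpf_base is a stand-in for wherever the MP floating pointer physical address comes from.

/* Sketch: map/use/unmap, so the fixmap attributes rather than the
 * direct map decide whether the access is decrypted. */
struct mpf_intel *mpf = early_memremap(mpf_base, sizeof(*mpf));

if (mpf) {
	/* ... parse the MP floating pointer structure ... */
	early_memunmap(mpf, sizeof(*mpf));
}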
On Fri, Jun 16, 2017 at 01:54:12PM -0500, Tom Lendacky wrote:
> When Secure Memory Encryption is enabled, the trampoline area must not
> be encrypted. A CPU running in real mode will not be able to decrypt
> memory that has been encrypted because it will not be able to use addresses
> with the memory encryption mask.
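The implied fix is a sketch like this, assuming the set_memory_decrypted() helper introduced earlier in the series (the real call site is the real-mode setup code):

/* Sketch: clear the encryption bit on the trampoline pages so a
 * CPU still in real mode can fetch from them. */
static void sketch_decrypt_trampoline(void *base, unsigned long size)
{
	set_memory_decrypted((unsigned long)base, size >> PAGE_SHIFT);
}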
On Fri, Jun 16, 2017 at 01:54:24PM -0500, Tom Lendacky wrote:
> Since DMA addresses will effectively look like 48-bit addresses when the
> memory encryption mask is set, SWIOTLB is needed if the DMA mask of the
> device performing the DMA does not support 48-bits. SWIOTLB will be
> initialized to create decrypted bounce buffers for use by these devices.
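The decision being described reduces to roughly this sketch, assuming the series' sme_active() helper:

/* Sketch: devices with a DMA mask below 48 bits must bounce. */
static bool sketch_device_needs_bounce(struct device *dev)
{
	return sme_active() && dev->dma_mask &&
	       *dev->dma_mask < DMA_BIT_MASK(48);
}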
2017-06-20 16:14 GMT+02:00 Thomas Gleixner :
> On Tue, 20 Jun 2017, Bartosz Golaszewski wrote:
>> 2017-06-20 12:41 GMT+02:00 Marc Zyngier :
>> > There was a kbuild report from June 1st with worrying warnings on x86_64
>> > (though I couldn't see how that was related to these patches). What's
>> > t
On 31/05/17 17:06, Bartosz Golaszewski wrote:
> This series is a follow-up to [1].
>
> Some users of irq_alloc_generic_chip() are modules which can be
> removed (e.g. gpio-ml-ioh) but have no means of freeing the allocated
> generic chip.
>
> Last time it was suggested to provide irq_destroy_generic_chip().
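The pairing being asked for looks roughly like this sketch (the signature follows the discussion; the merged form may differ):

/* Sketch: a removable module tearing down its generic chip on
 * exit, undoing irq_alloc_generic_chip()/irq_setup_generic_chip(). */
static void sketch_module_teardown(struct irq_chip_generic *gc)
{
	irq_destroy_generic_chip(gc, IRQ_MSK(8), 0, 0);
}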
On Fri, Jun 16, 2017 at 01:54:36PM -0500, Tom Lendacky wrote:
> Add warnings to let the user know when bounce buffers are being used for
> DMA when SME is active. Since the bounce buffers are not in encrypted
> memory, these notifications are to allow the user to determine some
> appropriate action.
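The notification amounts to something like the following sketch; message text and placement are placeholders.

/* Sketch: warn once per device when SME forces bouncing. */
static void sketch_note_sme_bounce(struct device *dev)
{
	if (sme_active())
		dev_warn_once(dev, "SME active: DMA goes through unencrypted SWIOTLB bounce buffers\n");
}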
On 6/20/2017 3:49 PM, Thomas Gleixner wrote:
On Fri, 16 Jun 2017, Tom Lendacky wrote:
+config ARCH_HAS_MEM_ENCRYPT
+ def_bool y
+ depends on X86
That one is silly. The config switch is in the x86 KConfig file, so X86 is
on. If you intended to move this to some generic place outs
Hi Dmitry,
Ping.
How do you want to proceed with that?
Regards,
Mauro
Forwarded message:
Date: Sat, 15 Apr 2017 19:50:45 -0300
From: Mauro Carvalho Chehab
To: Dmitry Torokhov
Cc: linux-in...@vger.kernel.org, Benjamin Tissoires
, Jiri Kosina , Jonathan Corbet
, Roderick Colenbrander ,
Stua
On 6/20/2017 3:55 PM, Thomas Gleixner wrote:
On Fri, 16 Jun 2017, Tom Lendacky wrote:
Currently there is a check if the address being mapped is in the ISA
range (is_ISA_range()), and if it is then phys_to_virt() is used to
perform the mapping. When SME is active, however, this will result
in the mapping having the encryption bit set
On 6/21/2017 2:37 AM, Thomas Gleixner wrote:
On Fri, 16 Jun 2017, Tom Lendacky wrote:
Currently there is a check if the address being mapped is in the ISA
range (is_ISA_range()), and if it is then phys_to_virt() is used to
perform the mapping. When SME is active, however, this will result
in the mapping having the encryption bit set
On 6/21/2017 2:16 AM, Thomas Gleixner wrote:
On Fri, 16 Jun 2017, Tom Lendacky wrote:
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index a105796..988b336 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -15,16 +15,24 @@
On 6/21/2017 5:50 AM, Borislav Petkov wrote:
On Fri, Jun 16, 2017 at 01:54:36PM -0500, Tom Lendacky wrote:
Add warnings to let the user know when bounce buffers are being used for
DMA when SME is active. Since the bounce buffers are not in encrypted
memory, these notifications are to allow the user to determine some
appropriate action.
On Thu, Jun 15, 2017 at 11:41:12AM +0200, Borislav Petkov wrote:
> On Wed, Jun 14, 2017 at 03:40:28PM -0500, Tom Lendacky wrote:
> > > WARNING: Use of volatile is usually wrong: see
> > > Documentation/process/volatile-considered-harmful.rst
> > > #134: FILE: drivers/iommu/amd_iommu.c:866:
> > > +
On Wed, 21 Jun 2017, Tom Lendacky wrote:
> On 6/21/2017 2:16 AM, Thomas Gleixner wrote:
> > Why is this an unconditional function? Isn't the mask simply 0 when the MEM
> > ENCRYPT support is disabled?
>
> I made it unconditional because of the call from head_64.S. I can't make
> use of the C level
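The constraint is that head_64.S does a plain call, so the accessor has to exist as a real symbol rather than a config-dependent static inline; a sketch of that shape:

/* Sketch: an unconditional symbol callable from early assembly. */
unsigned long sme_me_mask;

unsigned long sme_get_me_mask(void)
{
	return sme_me_mask;	/* stays 0 when SME is disabled or inactive */
}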
On Wed, Jun 21, 2017 at 05:37:22PM +0200, Joerg Roedel wrote:
> > Do you mean this is like the last exception case in that document above:
> >
> > "
> > - Pointers to data structures in coherent memory which might be modified
> > by I/O devices can, sometimes, legitimately be volatile. A ri
On 6/21/2017 10:38 AM, Thomas Gleixner wrote:
On Wed, 21 Jun 2017, Tom Lendacky wrote:
On 6/21/2017 2:16 AM, Thomas Gleixner wrote:
Why is this an unconditional function? Isn't the mask simply 0 when the MEM
ENCRYPT support is disabled?
I made it unconditional because of the call from head_64.S.
On 6/21/2017 11:59 AM, Borislav Petkov wrote:
On Wed, Jun 21, 2017 at 05:37:22PM +0200, Joerg Roedel wrote:
Do you mean this is like the last exception case in that document above:
"
- Pointers to data structures in coherent memory which might be modified
by I/O devices can, sometimes, legitimately be volatile.
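The exception under discussion looks roughly like this sketch: a device-updated completion word in dma_alloc_coherent() memory, polled via READ_ONCE() rather than declared volatile.

/* Sketch: poll device-written coherent memory without 'volatile'. */
struct cmd_sem {
	u64 status;		/* written by the device via DMA */
};

static void sketch_wait_for_device(struct cmd_sem *sem)
{
	while (!READ_ONCE(sem->status))
		cpu_relax();
}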
On Wed, 21 Jun 2017, Tom Lendacky wrote:
> On 6/21/2017 10:38 AM, Thomas Gleixner wrote:
> > /*
> > * Sanitize CPU configuration and retrieve the modifier
> > * for the initial pgdir entry which will be programmed
> > * into CR3. Depends on enabled SME encryption, normally 0.
> >
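In C terms the comment describes something like the sketch below; the real code is assembly in head_64.S, and initial_pgdir is a stand-in name.

/* Sketch: fold the SME mask into the initial CR3 value. */
static void sketch_load_initial_cr3(pgd_t *initial_pgdir)
{
	unsigned long cr3 = __pa(initial_pgdir);

	cr3 |= sme_get_me_mask();	/* 0 unless SME is enabled */
	write_cr3(cr3);
}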
Traditionally, the OOM killer operates at the process level:
under OOM conditions, it finds the process with the highest OOM score
and kills it.
This behavior doesn't suit systems with many running
containers well. There are two main issues:
1) There is no fairness between containers. A small
Introduce a per-memory-cgroup oom_score_adj setting:
a read-write single value file which exists on non-root
cgroups. The default is "0".
It has a meaning similar to the per-process value,
available via /proc/<pid>/oom_score_adj.
It should be in the range [-1000, 1000].
Signed-off-by: Roman Gushchin
Cc
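Usage would presumably look like the sketch below; the file name memory.oom_score_adj and the cgroup path are assumptions based on the description.

/* Sketch: biasing a cgroup's OOM score from userspace. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/fs/cgroup/workload/memory.oom_score_adj", "w");

	if (!f)
		return 1;
	fprintf(f, "500\n");	/* range [-1000, 1000], default 0 */
	return fclose(f) ? 1 : 0;
}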
Update cgroups v2 docs.
Signed-off-by: Roman Gushchin
Cc: Michal Hocko
Cc: Vladimir Davydov
Cc: Johannes Weiner
Cc: Tetsuo Handa
Cc: David Rientjes
Cc: Tejun Heo
Cc: kernel-t...@fb.com
Cc: cgro...@vger.kernel.org
Cc: linux-doc@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
Cc: linux...@kv
The OOM killer should avoid unnecessary kills. To prevent them, during
the task list traversal we check for tasks which were previously
selected as OOM victims. If there is such a task, a new victim
is not selected.
This approach is sub-optimal (we're doing a costly iteration over the task
list every time)
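Inside the task-scan callback the check amounts to roughly this sketch (the tsk_is_oom_victim() helper name is an assumption; the oom_control fields follow the existing oom_evaluate_task() convention):

/* Sketch: abort selection while a previous victim is still dying. */
static int sketch_evaluate_task(struct task_struct *task,
				struct oom_control *oc)
{
	if (tsk_is_oom_victim(task)) {
		oc->chosen = (void *)-1UL;	/* no new victim this round */
		return 1;			/* stop iterating */
	}
	return 0;
}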
We want to limit the number of tasks which have access
to the memory reserves. To ensure progress, it's enough
to have one such process at a time.
If we need to kill the whole cgroup, let's give access to the
memory reserves only to the first process in the list, which is
(usually
Dump the cgroup oom badness score, as well as the name
of chosen victim cgroup.
Here is how it looks in dmesg:
[ 18.824495] Choosing a victim memcg because of the system-wide OOM
[ 18.826911] Cgroup /A1: 200805
[ 18.827996] Cgroup /A2: 273072
[ 18.828937] Cgroup /A2/B3: 51
[ 18.829795]
2017-06-09 17:29 GMT+09:00 Masahiro Yamada :
> Prior to commit fcc8487d477a ("uapi: export all headers under uapi
> directories"), genhdr-y was meant to specify generated UAPI headers.
>
> - generated-y: generated headers (other than asm-generic wrappers)
> - header-y : headers to be exported
> -
Memory protection keys enable applications to protect their
address space from inadvertent access or corruption by
themselves.
The overall idea:
A process allocates a key and associates it with
an address range within its address space.
The process can then dynamically set read/write
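Concretely, that flow is the pkey_alloc()/pkey_mprotect()/pkey_free() sequence; a userspace sketch (wrapper names as in later glibc, or via syscall() where the wrappers are missing):

/* Sketch: allocate a key, attach it to a range, hand it back. */
#define _GNU_SOURCE
#include <sys/mman.h>

int sketch_deny_writes(void *addr, size_t len)
{
	int pkey = pkey_alloc(0, PKEY_DISABLE_WRITE);

	if (pkey < 0)
		return -1;
	if (pkey_mprotect(addr, len, PROT_READ | PROT_WRITE, pkey) < 0) {
		pkey_free(pkey);
		return -1;
	}
	return pkey;	/* caller pkey_free()s it when done */
}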
Display the pkey number associated with the vma in the smaps output of a task.
The key will be shown as below:
VmFlags: rd wr mr mw me dw ac key=0
Signed-off-by: Ram Pai
---
Documentation/filesystems/proc.txt | 3 ++-
fs/proc/task_mmu.c | 22 +++---
2 files changed, 13 inse
Handle Data and Instruction exceptions caused by memory
protection keys.
Signed-off-by: Ram Pai
---
arch/powerpc/include/asm/mmu_context.h | 12 +
arch/powerpc/include/asm/pkeys.h | 9
arch/powerpc/include/asm/reg.h | 2 +-
arch/powerpc/mm/fault.c| 20
Since PowerPC and Intel both support memory protection keys, move
the documentation to an arch-neutral directory.
Signed-off-by: Ram Pai
---
Documentation/vm/protection-keys.txt | 85 +++
Documentation/x86/protection-keys.txt | 85 --
Add documentation updates that capture PowerPC specific changes.
Signed-off-by: Ram Pai
---
Documentation/vm/protection-keys.txt | 65 +---
1 file changed, 45 insertions(+), 20 deletions(-)
diff --git a/Documentation/vm/protection-keys.txt
b/Documentation/vm/pro
Signed-off-by: Ram Pai
---
tools/testing/selftests/vm/Makefile |1 +
tools/testing/selftests/vm/pkey-helpers.h | 219
tools/testing/selftests/vm/protection_keys.c | 1395 +
tools/testing/selftests/x86/Makefile |2 +-
tools/testing/self
Abstract out the arch-specific code into the header file, and
add powerpc-specific changes.
a) added 4k-backed hpte, memory allocator, powerpc specific.
b) added three test case where the key is associated after the page is
accessed/allocated/mapped.
c) cleaned up the code to make chec
Map the PTE protection key bits to the HPTE key protection bits,
while creating HPTE entries.
Signed-off-by: Ram Pai
---
arch/powerpc/include/asm/book3s/64/mmu-hash.h | 5 +
arch/powerpc/include/asm/pkeys.h | 7 +++
arch/powerpc/mm/hash_utils_64.c | 5 +
3
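The translation is a straight bit-for-bit copy; a sketch using the bit names this series touches (the helper name is an assumption):

/* Sketch: copy the five PTE pkey bits into the HPTE key field. */
static inline u64 sketch_pte_to_hpte_pkey_bits(u64 pteflags)
{
	return ((pteflags & H_PTE_PKEY_BIT0) ? HPTE_R_KEY_BIT0 : 0UL) |
	       ((pteflags & H_PTE_PKEY_BIT1) ? HPTE_R_KEY_BIT1 : 0UL) |
	       ((pteflags & H_PTE_PKEY_BIT2) ? HPTE_R_KEY_BIT2 : 0UL) |
	       ((pteflags & H_PTE_PKEY_BIT3) ? HPTE_R_KEY_BIT3 : 0UL) |
	       ((pteflags & H_PTE_PKEY_BIT4) ? HPTE_R_KEY_BIT4 : 0UL);
}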
The value of the AMR register at the time of the exception
is made available in gp_regs[PT_AMR] of the siginfo.
This field can be used to reprogram the permission bits of
any valid pkey.
Similarly the value of the pkey, whose protection got violated,
is made available at si_pkey field of the siginfo
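From userspace, consuming these fields looks roughly like the sketch below; SEGV_PKUERR and si_pkey follow the kernel siginfo ABI (libc exposure may lag), and PT_AMR is the powerpc gp_regs index this patch adds.

/* Sketch: inspect the violated pkey in a SIGSEGV handler. */
#include <signal.h>

static void sketch_segv_handler(int sig, siginfo_t *info, void *uctx)
{
	if (info->si_code == SEGV_PKUERR) {
		int pkey = info->si_pkey;	/* key whose protection fired */

		/* ((ucontext_t *)uctx)->uc_mcontext.gp_regs[PT_AMR]
		 * holds the AMR value at the time of the exception. */
		(void)pkey;
	}
}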
Replace the magic number used to check for DSI exception
with a meaningful value.
Signed-off-by: Ram Pai
---
arch/powerpc/include/asm/reg.h | 7 ++-
arch/powerpc/kernel/exceptions-64s.S | 2 +-
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/reg.
This system call associates the pkey with the PTEs of all
pages covering the given address range.
Signed-off-by: Ram Pai
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 22 ++-
arch/powerpc/include/asm/mman.h | 14 -
arch/powerpc/include/asm/pkeys.h | 21 ++-
Store and restore the AMR, IAMR and UAMOR register state of the task
before scheduling out and after scheduling in, respectively.
Signed-off-by: Ram Pai
---
arch/powerpc/include/asm/processor.h | 5 +
arch/powerpc/kernel/process.c| 18 ++
2 files changed, 23 insertion
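The save half is a sketch like this (thread_struct field names assumed from the series); the restore path mirrors it with mtspr().

/* Sketch: snapshot the pkey registers when scheduling out. */
static inline void sketch_save_pkey_regs(struct thread_struct *t)
{
	t->amr   = mfspr(SPRN_AMR);
	t->iamr  = mfspr(SPRN_IAMR);
	t->uamor = mfspr(SPRN_UAMOR);
}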
sys_pkey_alloc() allocates and returns an available pkey.
sys_pkey_free() frees up the pkey.
A total of 32 keys are supported on powerpc. However, pkeys 0, 1 and 31
are reserved, so effectively we have 29 pkeys.
Each key can be initialized to disable read, write and execute
permissions. On powerpc a key
x86 does not support disabling execute permissions on a pkey.
Signed-off-by: Ram Pai
---
arch/x86/kernel/fpu/xstate.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index c24ac1e..d582631 100644
--- a/arch/x86/kernel/fpu/xstate.
Currently there are only 4 bits in the vma flags to support 16 keys
on x86. powerpc supports 32 keys, which needs 5 bits. This patch
introduces an additional bit in the vma flags.
Signed-off-by: Ram Pai
---
fs/proc/task_mmu.c | 6 +-
include/linux/mm.h | 18 +-
2 files changed,
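The extension follows the existing x86 pattern of aliasing the high arch bits; a sketch, with VM_PKEY_BIT4 and its VM_HIGH_ARCH_4 alias being the proposed additions:

/* Sketch: a fifth vma-flag bit so 32 keys (5 bits) fit. */
#define VM_PKEY_BIT0	VM_HIGH_ARCH_0	/* existing: 16 keys need 4 bits */
#define VM_PKEY_BIT1	VM_HIGH_ARCH_1
#define VM_PKEY_BIT2	VM_HIGH_ARCH_2
#define VM_PKEY_BIT3	VM_HIGH_ARCH_3
#define VM_PKEY_BIT4	VM_HIGH_ARCH_4	/* new: fifth bit for 32 keys */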
Currently sys_pkey_alloc() provides the ability to disable read
and write permissions on the key at creation. powerpc has the
hardware support to disable execute on a pkey as well. This patch
enhances the interface to let execute be disabled at key creation
time. x86 does not allow this. Henc
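At the API level this is one more access-rights flag at allocation time; a sketch assuming the PKEY_DISABLE_EXECUTE name (honored on powerpc, rejected on x86):

/* Sketch: request an execute-denying key. */
#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdio.h>

int sketch_exec_deny_key(void)
{
	int pkey = pkey_alloc(0, PKEY_DISABLE_EXECUTE);

	if (pkey < 0)		/* expected on x86 */
		perror("pkey_alloc");
	return pkey;
}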
Replace redundant code in flush_hash_page() with the helper functions
get_hidx_gslot() and set_hidx_slot().
Signed-off-by: Ram Pai
---
arch/powerpc/mm/hash_utils_64.c | 13 -
1 file changed, 4 insertions(+), 9 deletions(-)
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/ha
The H_PAGE_F_SECOND and H_PAGE_F_GIX bits are not in the 64K main PTE.
Capture these changes in the PTE dump report.
Signed-off-by: Ram Pai
---
arch/powerpc/mm/dump_linuxpagetables.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/dump_linuxpagetables.c
b/arch/power
Rearrange 64K PTE bits to free up bits 3, 4, 5 and 6
in the 64K backed HPTE pages. This along with the earlier
patch will entirely free up the four bits from 64K PTE.
The bit numbers are big-endian, as defined in ISA 3.0.
This patch does the following change to 64K PTE that is
backed by 64K
Introduce get_hidx_gslot() which returns the slot number of the HPTE
in the global hash table.
This function will come in handy as we work towards re-arranging the
PTE bits in the later patches.
Signed-off-by: Ram Pai
---
arch/powerpc/include/asm/book3s/64/hash.h | 3 +++
arch/powerpc/mm/hash_
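The computation such a helper wraps is the long-standing hash/slot dance; a simplified sketch (the real signature derives the hash from vpn/shift/ssize and takes the per-subpage hidx):

/* Sketch: global slot from the hash value and the per-PTE hidx. */
static unsigned long sketch_get_hidx_gslot(unsigned long hash,
					   unsigned long hidx)
{
	unsigned long gslot;

	if (hidx & _PTEIDX_SECONDARY)
		hash = ~hash;	/* entry lives in the secondary group */
	gslot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
	gslot += hidx & _PTEIDX_GROUP_IX;
	return gslot;
}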
Introduce set_hidx_slot() which sets the (H_PAGE_F_SECOND|H_PAGE_F_GIX)
bits at the appropriate location in the 4K PTE. In the
case of a 64K PTE, it sets the bits in the second part of the PTE. Though
the implementation for the former just needs the slot parameter, it does
take some
Replace redundant code with the helper functions
get_hidx_gslot() and set_hidx_slot().
Signed-off-by: Ram Pai
---
arch/powerpc/mm/hash64_4k.c | 14 ++
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/mm/hash64_4k.c b/arch/powerpc/mm/hash64_4k.c
index 6fa450c..c67
Rearrange 64K PTE bits to free up bits 3, 4, 5 and 6,
in the 4K backed HPTE pages. These bits continue to be used
for 64K backed HPTE pages in this patch, but will be freed
up in the next patch. The bit numbers are big-endian, as
defined in ISA 3.0.
The patch does the following change t
Replace redundant code in __hash_page_4K() with the helper
functions get_hidx_gslot() and set_hidx_slot().
Signed-off-by: Ram Pai
---
arch/powerpc/mm/hash64_64k.c | 24 ++--
1 file changed, 6 insertions(+), 18 deletions(-)
diff --git a/arch/powerpc/mm/hash64_64k.c b/arch/powerpc/