Re: [RFC PATCH] powerpc/powernv: report error messages from opal
Oliver O'Halloran writes: > Recent versions of skiboot will raise an OPAL event (read: interrupt) > when firmware writes an error message to its internal console. In > conjunction they provide an OPAL call that the kernel can use to extract > these messages from the OPAL log to allow them to be written into the > kernel's log buffer where someone will (hopefully) look at them. > > For the companion skiboot patches see: > > https://lists.ozlabs.org/pipermail/skiboot/2016-December/005861.html > > Signed-off-by: Oliver O'Halloran > --- > arch/powerpc/include/asm/opal-api.h| 5 +++- > arch/powerpc/include/asm/opal.h| 1 + > arch/powerpc/platforms/powernv/opal-msglog.c | 41 > ++ > arch/powerpc/platforms/powernv/opal-wrappers.S | 1 + > 4 files changed, 47 insertions(+), 1 deletion(-) > > diff --git a/arch/powerpc/include/asm/opal-api.h > b/arch/powerpc/include/asm/opal-api.h > index 0e2e57bcab50..cb9c0e6afb33 100644 > --- a/arch/powerpc/include/asm/opal-api.h > +++ b/arch/powerpc/include/asm/opal-api.h > @@ -167,7 +167,8 @@ > #define OPAL_INT_EOI 124 > #define OPAL_INT_SET_MFRR125 > #define OPAL_PCI_TCE_KILL126 > -#define OPAL_LAST126 > +#define OPAL_SCRAPE_LOG 128 (another thought, along with the skiboot thoughts), I don't like the SCRAPE_LOG name so much, as it's more of a "hey linux, here's some log messages from firmware, possibly before you were involved"... OPAL_FETCH_LOG ? -- Stewart Smith OPAL Architect, IBM.
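For orientation, the flow under discussion is: skiboot raises an OPAL event when firmware logs an error message, and the kernel then drains the new messages into its own log buffer. A minimal sketch of such a consumer is below; the opal_fetch_log() wrapper name and its signature are assumptions for illustration only, not the interface the patch actually defines:

#include <linux/interrupt.h>
#include <linux/printk.h>

/* Illustrative sketch only: opal_fetch_log() and its signature are
 * assumed, not taken from the patch above. */
static irqreturn_t opal_msglog_event(int irq, void *data)
{
	char buf[320];
	int64_t rc;

	/* drain pending firmware error messages into the kernel log */
	while ((rc = opal_fetch_log(buf, sizeof(buf))) > 0)
		pr_err("OPAL: %.*s\n", (int)rc, buf);

	return IRQ_HANDLED;
}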
[PATCH 0/3] Have CONFIG_STRICT_KERNEL_RWX work with CONFIG_RELOCATABLE
These patches make CONFIG_STRICT_KERNEL_RWX work with CONFIG_RELOCATABLE. The first patch splits up the radix linear mapping on relocation to support granular read-only and execute bits. The second patch warns (for hash) if relocation has actually been done (PHYSICAL_START > MEMORY_START); in that case we provide only best-effort support for the expected permissions. We could do a more granular linear mapping, but we decided to leave that as a TODO (to check for performance/MPSS/etc). The last patch changes the config so that the CONFIG_STRICT_KERNEL_RWX feature no longer depends on !RELOCATABLE. Balbir Singh (3): powerpc/mm/radix: Fix relocatable radix mappings for STRICT_RWX powerpc/mm/hash: WARN if relocation is enabled and CONFIG_STRICT_KERNEL_RWX powerpc/strict_kernel_rwx: Don't depend on !RELOCATABLE arch/powerpc/Kconfig | 2 +- arch/powerpc/mm/pgtable-hash64.c | 7 +- arch/powerpc/mm/pgtable-radix.c | 225 +++ 3 files changed, 186 insertions(+), 48 deletions(-) -- 2.9.4
[PATCH 1/3] powerpc/mm/radix: Fix relocatable radix mappings for STRICT_RWX
The mappings now do perfect kernel pte mappings even when the kernel is relocated. This patch refactors create_physical_mapping() and mark_rodata_ro(). create_physical_mapping() is now largely done with a helper called __create_physical_mapping(), which is defined differently for when CONFIG_STRICT_KERNEL_RWX is enabled and when its not. The goal of the patchset is to provide minimal changes when the CONFIG_STRICT_KERNEL_RWX is disabled, when enabled however, we do split the linear mapping so that permissions are strictly adherent to expectations from the user. Signed-off-by: Balbir Singh --- arch/powerpc/mm/pgtable-radix.c | 225 1 file changed, 179 insertions(+), 46 deletions(-) diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c index d2fd34a..5aaf886 100644 --- a/arch/powerpc/mm/pgtable-radix.c +++ b/arch/powerpc/mm/pgtable-radix.c @@ -112,26 +112,16 @@ int radix__map_kernel_page(unsigned long ea, unsigned long pa, } #ifdef CONFIG_STRICT_KERNEL_RWX -void radix__mark_rodata_ro(void) +static void remove_page_permission_range(unsigned long start, +unsigned long end, +unsigned long clr) { - unsigned long start = (unsigned long)_stext; - unsigned long end = (unsigned long)__init_begin; unsigned long idx; pgd_t *pgdp; pud_t *pudp; pmd_t *pmdp; pte_t *ptep; - if (!mmu_has_feature(MMU_FTR_KERNEL_RO)) { - pr_info("R/O rodata not supported\n"); - return; - } - - start = ALIGN_DOWN(start, PAGE_SIZE); - end = PAGE_ALIGN(end); // aligns up - - pr_devel("marking ro start %lx, end %lx\n", start, end); - for (idx = start; idx < end; idx += PAGE_SIZE) { pgdp = pgd_offset_k(idx); pudp = pud_alloc(&init_mm, pgdp, idx); @@ -152,10 +142,41 @@ void radix__mark_rodata_ro(void) if (!ptep) continue; update_the_pte: - radix__pte_update(&init_mm, idx, ptep, _PAGE_WRITE, 0, 0); + radix__pte_update(&init_mm, idx, ptep, clr, 0, 0); } radix__flush_tlb_kernel_range(start, end); +} + +void radix__mark_rodata_ro(void) +{ + unsigned long start = (unsigned long)_stext; + unsigned long end = (unsigned long)__init_begin; + if (!mmu_has_feature(MMU_FTR_KERNEL_RO)) { + pr_info("R/O rodata not supported\n"); + return; + } + + start = ALIGN_DOWN(start, PAGE_SIZE); + end = PAGE_ALIGN(end); // aligns up + + pr_devel("marking ro start %lx, end %lx\n", start, end); + remove_page_permission_range(start, end, _PAGE_WRITE); + + start = (unsigned long)__init_begin; + end = (unsigned long)__init_end; + start = ALIGN_DOWN(start, PAGE_SIZE); + end = PAGE_ALIGN(end); + + pr_devel("marking no exec start %lx, end %lx\n", start, end); + remove_page_permission_range(start, end, _PAGE_EXEC); + + start = (unsigned long)__start_interrupts - PHYSICAL_START; + end = (unsigned long)__end_interrupts - PHYSICAL_START; + start = ALIGN_DOWN(start, PAGE_SIZE); + end = PAGE_ALIGN(end); + pr_devel("marking ro start %lx, end %lx\n", start, end); + remove_page_permission_range(start, end, _PAGE_WRITE); } #endif @@ -169,31 +190,36 @@ static inline void __meminit print_mapping(unsigned long start, pr_info("Mapped range 0x%lx - 0x%lx with 0x%lx\n", start, end, size); } -static int __meminit create_physical_mapping(unsigned long start, -unsigned long end) +/* + * Create physical mapping and return the last mapping size + * If the call is successful, end_of_mapping will return the + * last address mapped via this call, if not, it will leave + * the value untouched. 
+ */ +static int __meminit __create_physical_mapping(unsigned long vstart, + unsigned long vend, pgprot_t prot, + unsigned long *end_of_mapping) { - unsigned long vaddr, addr, mapping_size = 0; - pgprot_t prot; - unsigned long max_mapping_size; -#ifdef CONFIG_STRICT_KERNEL_RWX - int split_text_mapping = 1; -#else - int split_text_mapping = 0; -#endif + unsigned long mapping_size = 0; + static unsigned long previous_size; + unsigned long addr, start, end; + start = __pa(vstart); + end = __pa(vend); start = _ALIGN_UP(start, PAGE_SIZE); + + pr_devel("physical_mapping start %lx->%lx, prot %lx\n", +vstart, vend, pgprot_val(prot)); + for (addr = start; addr < end; addr += mapping_size) { - unsigned long gap, previous_size; + unsigned long gap; int rc; gap = end - addr; previous_size = mapping_si
[PATCH 2/3] powerpc/mm/hash: WARN if relocation is enabled and CONFIG_STRICT_KERNEL_RWX
For radix we split the mapping into smaller page sizes (at the cost of additional TLB overhead), but for hash its best to print a warning. In the case of hash and no-relocation, the kernel should be well aligned to provide the least overhead with the current linear mapping size (16M) Signed-off-by: Balbir Singh --- arch/powerpc/mm/pgtable-hash64.c | 7 ++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/mm/pgtable-hash64.c b/arch/powerpc/mm/pgtable-hash64.c index 0809102b..7c2479d 100644 --- a/arch/powerpc/mm/pgtable-hash64.c +++ b/arch/powerpc/mm/pgtable-hash64.c @@ -438,6 +438,11 @@ void hash__mark_rodata_ro(void) return; } + if (PHYSICAL_START > MEMORY_START) + pr_warn("Detected relocation and CONFIG_STRICT_KERNEL_RWX " + "permissions are best effort, some non-text area " + "might still be left as executable"); + shift = mmu_psize_defs[mmu_linear_psize].shift; step = 1 << shift; @@ -448,7 +453,7 @@ void hash__mark_rodata_ro(void) start, end, step); if (start == end) { - pr_warn("could not set rodata ro, relocate the start" + pr_warn("Could not set rodata ro, relocate the start" " of the kernel to a 0x%x boundary\n", step); return; } -- 2.9.4
[PATCH 3/3] powerpc/strict_kernel_rwx: Don't depend on !RELOCATABLE
The concerns with extra permissions and overlap have been addressed; remove the dependency on !RELOCATABLE. Signed-off-by: Balbir Singh --- arch/powerpc/Kconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index 36f858c..3963e24 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -165,7 +165,7 @@ config PPC select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT select HAVE_ARCH_SECCOMP_FILTER select HAVE_ARCH_TRACEHOOK - select ARCH_HAS_STRICT_KERNEL_RWX if (PPC_BOOK3S_64 && !RELOCATABLE && !HIBERNATION) + select ARCH_HAS_STRICT_KERNEL_RWX if (PPC_BOOK3S_64 && !HIBERNATION) select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX select HAVE_CBPF_JITif !PPC64 select HAVE_CONTEXT_TRACKINGif PPC64 -- 2.9.4
Re: [RFC PATCH 1/2] powerpc/xive: guest exploitation of the XIVE interrupt controller
On Mon, Jul 03, 2017 at 09:11:18AM +0200, Cédric Le Goater wrote: > On 07/03/2017 06:19 AM, Benjamin Herrenschmidt wrote: > > On Mon, 2017-07-03 at 13:55 +1000, David Gibson wrote: > >>> Calls that still need to be addressed : > >>> > >>> H_INT_SET_OS_REPORTING_LINE > >>> H_INT_GET_OS_REPORTING_LINE > >>> H_INT_ESB > >>> H_INT_SYNC > >> > >> So, does this mean there's a PAPR update with the XIVE virtualization > >> stuff? Or at least an ACR? Can we have that available please... > > > > There is, I will try to get it published. > > Until then, the QEMU support will have some documentation on the > hcalls and on the device tree. > > I am still struggling with CAS on QEMU. POWER9 supports both the > legacy XICS model and the newer one, XIVE, and we can switch from > one another depending on the guest kernel. This is a serious > headache for the model as the ICS/ICP objects are chosen after > the guest has booted. Ah. I don't know if it helps, but we do have the ability to trigger a full system reset from CAS, so possibly we can do the XICS/XIVE instantiation in the reset path. I don't think we use that CAS reset ability yet - we just adjust the device tree and continue the boot. But it's there if we need it. Worst comes to worst, we might have to instantiate both XICS and XIVE objects, with some flags in each indicating which is active. -- David Gibson| I'll have my music baroque, and my code david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_ | _way_ _around_! http://www.ozlabs.org/~dgibson signature.asc Description: PGP signature
Re: [PATCH v2 6/6] ima: Support module-style appended signatures for appraisal
On Tue, 2017-07-04 at 23:22 -0300, Thiago Jung Bauermann wrote: > Mimi Zohar writes: > > > On Wed, 2017-06-21 at 14:45 -0300, Thiago Jung Bauermann wrote: > >> Mimi Zohar writes: > >> > On Wed, 2017-06-07 at 22:49 -0300, Thiago Jung Bauermann wrote: > >> >> @@ -267,11 +276,18 @@ int ima_appraise_measurement(enum ima_hooks func, > >> >> status = INTEGRITY_PASS; > >> >> break; > >> >> case EVM_IMA_XATTR_DIGSIG: > >> >> + case IMA_MODSIG: > >> >> iint->flags |= IMA_DIGSIG; > >> >> - rc = integrity_digsig_verify(INTEGRITY_KEYRING_IMA, > >> >> -(const char *)xattr_value, > >> >> rc, > >> >> -iint->ima_hash->digest, > >> >> -iint->ima_hash->length); > >> >> + > >> >> + if (xattr_value->type == EVM_IMA_XATTR_DIGSIG) > >> >> + rc = > >> >> integrity_digsig_verify(INTEGRITY_KEYRING_IMA, > >> >> +(const char > >> >> *)xattr_value, > >> >> +rc, > >> >> iint->ima_hash->digest, > >> >> + > >> >> iint->ima_hash->length); > >> >> + else > >> >> + rc = ima_modsig_verify(INTEGRITY_KEYRING_IMA, > >> >> + xattr_value); > >> >> + > >> > > >> > Perhaps allowing IMA_MODSIG to flow into EVM_IMA_XATTR_DIGSIG on > >> > failure, would help restore process_measurements() to the way it was. > >> > Further explanation below. > >> > >> It's not possible to simply flow into EVM_IMA_XATTR_DIGSIG on failure > >> because after calling ima_read_xattr we need to run again all the logic > >> before the switch statement. Instead, I'm currently testing the > >> following change for v3, what do you think? > > > > I don't think we can assume that the same algorithm will be used for > > signing the kernel image. Different entities would be signing the > > kernel image with different requirements. > > > > Suppose for example a stock distro image comes signed using one > > algorithm (appended signature), but the same kernel image is locally > > signed using a different algorithm (xattr). Signature verification is > > dependent on either the distro or local public key being loaded onto > > the IMA keyring. > > This example is good, but it raises one question: should the xattr > signature cover the entire contents of the stock distro image (i.e., > also cover the appended signature), or should it ignore the appended > signature and thus cover the same contents that the appended signature > covers? > > If the former, then we can't reuse the iint->ima_hash that was collected > when trying to verify the appended signature because it doesn't cover > the appended signature itself and won't match the hash expected by the > xattr signature. > > If the latter, then evmctl ima_sign needs to be modified to check > whether there's an appended signature in the given file and ignore it > when calculating the xattr signature. > > Which is better? 
I realize that having the same file hash for both the appended signature and extended attribute would make things a lot easier, but security.ima is a signature of the file as written to disk, meaning it would include any appended signature > > >> >> @@ -226,30 +282,23 @@ static int process_measurement(struct file *file, > >> >> char *buf, loff_t size, > >> >> goto out_digsig; > >> >> } > >> >> > >> >> - template_desc = ima_template_desc_current(); > >> >> - if ((action & IMA_APPRAISE_SUBMASK) || > >> >> - strcmp(template_desc->name, IMA_TEMPLATE_IMA_NAME) > >> >> != 0) > >> >> - /* read 'security.ima' */ > >> >> - xattr_len = ima_read_xattr(file_dentry(file), > >> >> &xattr_value); > >> >> - > >> >> - hash_algo = ima_get_hash_algo(xattr_value, xattr_len); > >> >> - > >> >> - rc = ima_collect_measurement(iint, file, buf, size, hash_algo); > >> >> - if (rc != 0) { > >> >> - if (file->f_flags & O_DIRECT) > >> >> - rc = (iint->flags & IMA_PERMIT_DIRECTIO) ? 0 : > >> >> -EACCES; > >> >> - goto out_digsig; > >> >> - } > >> >> - > >> > > >> > There are four stages: collect measurement, store measurement, > >> > appraise measurement and audit measurement. "Collect" needs to be > >> > done if any one of the other stages is needed. > >> > > >> >> if (!pathbuf) /* ima_rdwr_violation possibly pre-fetched */ > >> >> pathname = ima_d_path(&file->f_path, &pathbuf, > >> >> filename); > >> >> > >> >> + if (iint->flags & IMA_MODSIG_ALLOWED) > >> >> + rc = measure_and_appraise(file, buf, size, func, > >> >> opened, action, > >> >> +
Re: [RFC PATCH 1/2] powerpc/xive: guest exploitation of the XIVE interrupt controller
On Wed, 2017-07-05 at 21:07 +1000, David Gibson wrote: > I don't know if it helps, but we do have the ability to trigger a full > system reset from CAS, so possibly we can do the XICS/XIVE > instantiation in the reset path. > > I don't think we use that CAS reset ability yet - we just adjust the > device tree and continue the boot. But it's there if we need it. > > Worst comes to worst, we might have to instantiate both XICS and XIVE > objects, with some flags in each indicating which is active. That could be a problem with the kernel interrupt controller. We can't really instantiate both there I think... well, actually ... maybe we could, though it's a bit messy... Cheers, Ben.
Re: [RFC PATCH 1/2] powerpc/xive: guest exploitation of the XIVE interrupt controller
On 07/05/2017 04:38 PM, Benjamin Herrenschmidt wrote: > On Wed, 2017-07-05 at 21:07 +1000, David Gibson wrote: >> I don't know if it helps, but we do have the ability to trigger a full >> system reset from CAS, so possibly we can do the XICS/XIVE >> instantiation in the reset path. >> >> I don't think we use that CAS reset ability yet - we just adjust the >> device tree and continue the boot. But it's there if we need it. >> >> Worst comes to worst, we might have to instantiate both XICS and XIVE >> objects, with some flags in each indicating which is active. we have the CAS option for that. Well, that is what I have started using in the QEMU prototype for the sPAPR XIVE support. > That could be a problem with the kernel interrupt controller. We can't > really instantiate both there I think... well, actually ... maybe we > could, though it's a bit messy... Well, it would be much cleaner to completely reset the guest, no ICP and no ICS objects, until we reach the end of the CAS negotiation, and then instantiate what we need. The only issue I have spotted for the moment is that the device tree is populated with IRQ numbers allocated from an interrupt source, which is way before CAS has even started. So, to work around that, we could imagine using a bitmap to allocate these IRQ numbers and then instantiate the interrupt source object of the correct type with this bitmap as a constructor parameter. Just an idea. The interrupt presenter objects could be allocated later in the boot process, I think. Maybe on demand, when a CPU is first notified? I haven't looked at the gory details: migration, hotplug. But we are starting the discussion on the wrong mailing list! Let me complete the QEMU patchset. I am currently splitting it into little chunks for a first RFC. The last one is a hideous hack to activate XIVE in the guest. Cheers, C.
[PATCH 07/14] spufs: Implement show_options
Implement the show_options superblock op for spufs as part of a bid to get rid of s_options and generic_show_options() to make it easier to implement a context-based mount where the mount options can be passed individually over a file descriptor. Signed-off-by: David Howells cc: Jeremy Kerr cc: linuxppc-dev@lists.ozlabs.org --- arch/powerpc/platforms/cell/spufs/inode.c | 21 ++--- 1 file changed, 18 insertions(+), 3 deletions(-) diff --git a/arch/powerpc/platforms/cell/spufs/inode.c b/arch/powerpc/platforms/cell/spufs/inode.c index d8af9bc0489f..27a51a60bc33 100644 --- a/arch/powerpc/platforms/cell/spufs/inode.c +++ b/arch/powerpc/platforms/cell/spufs/inode.c @@ -605,6 +605,23 @@ static const match_table_t spufs_tokens = { { Opt_err,NULL }, }; +static int spufs_show_options(struct seq_file *m, struct dentry *root) +{ + struct spufs_sb_info *sbi = spufs_get_sb_info(root->d_sb); + + if (!uid_eq(root->d_inode->i_uid, GLOBAL_ROOT_UID)) + seq_printf(m, ",uid=%u", + from_kuid_munged(&init_user_ns, root->d_inode->i_uid)); + if (!gid_eq(root->d_inode->i_gid, GLOBAL_ROOT_GID)) + seq_printf(m, ",gid=%u", + from_kgid_munged(&init_user_ns, root->d_inode->i_gid)); + if ((root->d_inode->i_mode & S_IALLUGO) != 0775) + seq_printf(m, ",mode=%o", root->d_inode->i_mode & S_IALLUGO); + if (sbi->debug) + seq_puts(m, ",debug"); + return 0; +} + static int spufs_parse_options(struct super_block *sb, char *options, struct inode *root) { @@ -724,11 +741,9 @@ spufs_fill_super(struct super_block *sb, void *data, int silent) .destroy_inode = spufs_destroy_inode, .statfs = simple_statfs, .evict_inode = spufs_evict_inode, - .show_options = generic_show_options, + .show_options = spufs_show_options, }; - save_mount_options(sb, data); - info = kzalloc(sizeof(*info), GFP_KERNEL); if (!info) return -ENOMEM;
Re: [next-20170609] Oops while running CPU off-on (cpuset.c/cpuset_can_attach)
Hello, Abdul. Thanks for the debug info. Can you please see whether the following patch fixes the issue? If the problem is too difficult to reproduce to confirm the fix by seeing whether it no longer triggers, please let me know. We can instead apply a patch which triggers WARN on the failing condition to confirm the diagnosis. Thanks. diff --git a/kernel/cgroup/cgroup-internal.h b/kernel/cgroup/cgroup-internal.h index 793565c05742..8b4c3c2f2509 100644 --- a/kernel/cgroup/cgroup-internal.h +++ b/kernel/cgroup/cgroup-internal.h @@ -33,6 +33,9 @@ struct cgroup_taskset { struct list_headsrc_csets; struct list_headdst_csets; + /* the number of tasks in the set */ + int nr_tasks; + /* the subsys currently being processed */ int ssid; diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c index dbfd7028b1c6..e3c4152741a3 100644 --- a/kernel/cgroup/cgroup.c +++ b/kernel/cgroup/cgroup.c @@ -1954,6 +1954,8 @@ static void cgroup_migrate_add_task(struct task_struct *task, if (!cset->mg_src_cgrp) return; + mgctx->tset.nr_tasks++; + list_move_tail(&task->cg_list, &cset->mg_tasks); if (list_empty(&cset->mg_node)) list_add_tail(&cset->mg_node, @@ -2047,16 +2049,18 @@ static int cgroup_migrate_execute(struct cgroup_mgctx *mgctx) return 0; /* check that we can legitimately attach to the cgroup */ - do_each_subsys_mask(ss, ssid, mgctx->ss_mask) { - if (ss->can_attach) { - tset->ssid = ssid; - ret = ss->can_attach(tset); - if (ret) { - failed_ssid = ssid; - goto out_cancel_attach; + if (tset->nr_tasks) { + do_each_subsys_mask(ss, ssid, mgctx->ss_mask) { + if (ss->can_attach) { + tset->ssid = ssid; + ret = ss->can_attach(tset); + if (ret) { + failed_ssid = ssid; + goto out_cancel_attach; + } } - } - } while_each_subsys_mask(); + } while_each_subsys_mask(); + } /* * Now that we're guaranteed success, proceed to move all tasks to @@ -2085,25 +2089,29 @@ static int cgroup_migrate_execute(struct cgroup_mgctx *mgctx) */ tset->csets = &tset->dst_csets; - do_each_subsys_mask(ss, ssid, mgctx->ss_mask) { - if (ss->attach) { - tset->ssid = ssid; - ss->attach(tset); - } - } while_each_subsys_mask(); + if (tset->nr_tasks) { + do_each_subsys_mask(ss, ssid, mgctx->ss_mask) { + if (ss->attach) { + tset->ssid = ssid; + ss->attach(tset); + } + } while_each_subsys_mask(); + } ret = 0; goto out_release_tset; out_cancel_attach: - do_each_subsys_mask(ss, ssid, mgctx->ss_mask) { - if (ssid == failed_ssid) - break; - if (ss->cancel_attach) { - tset->ssid = ssid; - ss->cancel_attach(tset); - } - } while_each_subsys_mask(); + if (tset->nr_tasks) { + do_each_subsys_mask(ss, ssid, mgctx->ss_mask) { + if (ssid == failed_ssid) + break; + if (ss->cancel_attach) { + tset->ssid = ssid; + ss->cancel_attach(tset); + } + } while_each_subsys_mask(); + } out_release_tset: spin_lock_irq(&css_set_lock); list_splice_init(&tset->dst_csets, &tset->src_csets);
Re: [PATCH] powerpc/mm: Implemented default_hugepagesz verification for powerpc
On 2017-07-05 01:26, Aneesh Kumar K.V wrote: On Tuesday 04 July 2017 01:35 AM, Victor Aoqui wrote: Implemented default hugepage size verification (default_hugepagesz=) in order to allow allocation of defined number of pages (hugepages=) only for supported hugepage sizes. Signed-off-by: Victor Aoqui --- arch/powerpc/mm/hugetlbpage.c | 15 +++ 1 file changed, 15 insertions(+) diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c index a4f33de..464e72e 100644 --- a/arch/powerpc/mm/hugetlbpage.c +++ b/arch/powerpc/mm/hugetlbpage.c @@ -797,6 +797,21 @@ static int __init hugepage_setup_sz(char *str) } __setup("hugepagesz=", hugepage_setup_sz); +static int __init default_hugepage_setup_sz(char *str) +{ +unsigned long long size; + +size = memparse(str, &str); + +if (add_huge_page_size(size) != 0) { +hugetlb_bad_size(); +pr_err("Invalid default huge page size specified(%llu)\n", size); +} + +return 1; +} +__setup("default_hugepagesz=", default_hugepage_setup_sz); isn't that a behavior change from what we have now? Right now if the size specified is not supported, we fall back to HPAGE_SIZE. Yes, it is. However, is this the correct behavior? If we specify an unsupported value, for example default_hugepagesz=1M and hugepages=1000, 1M will be ignored and 1000 pages of 16M (arch default) will be allocated. This could lead to unexpected out-of-memory or performance issues. mm/hugetlb.c if (!size_to_hstate(default_hstate_size)) { default_hstate_size = HPAGE_SIZE; if (!size_to_hstate(default_hstate_size)) hugetlb_add_hstate(HUGETLB_PAGE_ORDER); } + struct kmem_cache *hugepte_cache; static int __init hugetlbpage_init(void) { Even if we want to do this, this should be done in generic code and should not be powerpc specific. The verification of supported powerpc hugepage size (hugepagesz=) is being performed on add_huge_page_size(), which is currently defined in arch/powerpc/mm/hugetlbpage.c. I think it makes more sense to implement default_hugepagesz= verification on arch/powerpc, don't you think? -aneesh
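To put numbers on that example (illustrative arithmetic only): a user passing default_hugepagesz=1M hugepages=1000 intends to reserve roughly 1000 x 1M = ~1GB, but with the current fallback the kernel silently reserves 1000 x 16M = ~15.6GB of 16M pages instead, which is exactly the unexpected memory/performance impact described above.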
Re: [PATCH] powerpc/mm: Implemented default_hugepagesz verification for powerpc
On 2017-07-05 01:31, Anshuman Khandual wrote: On 07/04/2017 01:35 AM, Victor Aoqui wrote: Implemented default hugepage size verification (default_hugepagesz=) in order to allow allocation of defined number of pages (hugepages=) only for supported hugepage sizes. Signed-off-by: Victor Aoqui --- arch/powerpc/mm/hugetlbpage.c | 15 +++ 1 file changed, 15 insertions(+) diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c index a4f33de..464e72e 100644 --- a/arch/powerpc/mm/hugetlbpage.c +++ b/arch/powerpc/mm/hugetlbpage.c @@ -797,6 +797,21 @@ static int __init hugepage_setup_sz(char *str) } __setup("hugepagesz=", hugepage_setup_sz); +static int __init default_hugepage_setup_sz(char *str) The function name should be hugetlb_default_size_setup in sync with the generic function hugetlb_default_setup for the same parameter default_hugepagesz. Yes, makes sense to me. +{ +unsigned long long size; + +size = memparse(str, &str); + +if (add_huge_page_size(size) != 0) { I am a little bit confused here. Do we always follow another 'hugepages=' element after 'default_hugepagesz' ? If not, then we don't have to do 'add_huge_page_size'. But then that function checks for valid huge page sizes and skips adding the hstate if it's already added. So I guess it's okay. 'default_hugepagesz=' is not always followed by 'hugepages=', but if we specify 'hugepages=' along with 'default_hugepagesz=' it will try to allocate the hugepage size specified. If the size is not supported by hardware, it will try to allocate the number of pages specified with the default hugepage size of the arch, which is not the desired behavior. So calling add_huge_page_size would verify if the hugepage size is supported and in case it's not, hugepages will not be allocated. +hugetlb_bad_size(); +pr_err("Invalid default huge page size specified(%llu)\n", size); The error message should have 'ppc' somewhere to indicate that the arch rejected the size, not core MM.
[PATCH 4/5] powernv:idle: Move initialization of sibling pacas to pnv_alloc_idle_core_states
From: "Gautham R. Shenoy" On POWER9 DD1, in order to get around a hardware issue, we store in every CPU thread's paca the paca pointers of all its siblings. Move this code into pnv_alloc_idle_core_states() soon after the space for saving the sibling pacas is allocated. Signed-off-by: Gautham R. Shenoy --- arch/powerpc/platforms/powernv/idle.c | 45 +-- 1 file changed, 22 insertions(+), 23 deletions(-) diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c index c400ff9..254a0db8 100644 --- a/arch/powerpc/platforms/powernv/idle.c +++ b/arch/powerpc/platforms/powernv/idle.c @@ -194,6 +194,28 @@ static void pnv_alloc_idle_core_states(void) } } + /* +* For each CPU, record its PACA address in each of it's +* sibling thread's PACA at the slot corresponding to this +* CPU's index in the core. +*/ + if (cpu_has_feature(CPU_FTR_POWER9_DD1)) { + int cpu; + + pr_info("powernv: idle: Saving PACA pointers of all CPUs in their thread sibling PACA\n"); + for_each_possible_cpu(cpu) { + int base_cpu = cpu_first_thread_sibling(cpu); + int idx = cpu_thread_in_core(cpu); + int i; + + for (i = 0; i < threads_per_core; i++) { + int j = base_cpu + i; + + paca[j].thread_sibling_pacas[idx] = &paca[cpu]; + } + } + } + update_subcore_sibling_mask(); if (supported_cpuidle_states & OPAL_PM_LOSE_FULL_CONTEXT) @@ -898,31 +920,8 @@ static int __init pnv_init_idle_states(void) if (pnv_probe_idle_states()) goto out; - pnv_alloc_idle_core_states(); - /* -* For each CPU, record its PACA address in each of it's -* sibling thread's PACA at the slot corresponding to this -* CPU's index in the core. -*/ - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) { - int cpu; - - pr_info("powernv: idle: Saving PACA pointers of all CPUs in their thread sibling PACA\n"); - for_each_possible_cpu(cpu) { - int base_cpu = cpu_first_thread_sibling(cpu); - int idx = cpu_thread_in_core(cpu); - int i; - - for (i = 0; i < threads_per_core; i++) { - int j = base_cpu + i; - - paca[j].thread_sibling_pacas[idx] = &paca[cpu]; - } - } - } - out: return 0; } -- 1.9.4
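As a worked example (illustrative, not part of the patch): with threads_per_core = 4, CPU 10 has base_cpu = 8 and idx = 2, so the loop stores &paca[10] into paca[8..11].thread_sibling_pacas[2]. Once every CPU has run through the loop, each thread's paca holds the paca pointers of all four of its siblings, which is the state the POWER9 DD1 workaround relies on.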
[PATCH 1/5] powernv:idle: Move device-tree parsing to one place.
From: "Gautham R. Shenoy" The details of the platform idle state are exposed by the firmware to the kernel via device tree. In the current code, we parse the device tree twice : 1) During the boot up in arch/powerpc/platforms/powernv/idle.c Here, the device tree is parsed to obtain the details of the supported_cpuidle_states which is used to determine the default idle state (which would be used when cpuidle is absent) and the deepest idle state (which would be used for cpu-hotplug). 2) During the powernv cpuidle driver initializion (drivers/cpuidle/cpuidle-powernv.c). Here we parse the device tree to populate the cpuidle driver's states. This patch moves all the device tree parsing to the platform idle code. It defines data-structures for recording the details of the parsed idle states. Any other kernel subsystem that is interested in the idle states (eg: cpuidle-powernv driver) can just use the in-kernel data structure instead of parsing the device tree all over again. Further, this helps to check the validity of states in one place and in case of invalid states (eg : stop states whose psscr values are errorenous) flag them as invalid, so that the other subsystems can be prevented from using those. Signed-off-by: Gautham R. Shenoy --- arch/powerpc/include/asm/cpuidle.h| 32 +-- arch/powerpc/platforms/powernv/idle.c | 390 ++ drivers/cpuidle/cpuidle-powernv.c | 233 +--- 3 files changed, 378 insertions(+), 277 deletions(-) diff --git a/arch/powerpc/include/asm/cpuidle.h b/arch/powerpc/include/asm/cpuidle.h index 52586f9..88ff2a1 100644 --- a/arch/powerpc/include/asm/cpuidle.h +++ b/arch/powerpc/include/asm/cpuidle.h @@ -73,19 +73,25 @@ extern u64 pnv_first_deep_stop_state; unsigned long pnv_cpu_offline(unsigned int cpu); -int validate_psscr_val_mask(u64 *psscr_val, u64 *psscr_mask, u32 flags); -static inline void report_invalid_psscr_val(u64 psscr_val, int err) -{ - switch (err) { - case ERR_EC_ESL_MISMATCH: - pr_warn("Invalid psscr 0x%016llx : ESL,EC bits unequal", - psscr_val); - break; - case ERR_DEEP_STATE_ESL_MISMATCH: - pr_warn("Invalid psscr 0x%016llx : ESL cleared for deep stop-state", - psscr_val); - } -} + +#define PNV_IDLE_NAME_LEN 16 +struct pnv_idle_state { + char name[PNV_IDLE_NAME_LEN]; + u32 flags; + u32 latency_ns; + u32 residency_ns; + u64 ctrl_reg_val; /* The ctrl_reg on POWER8 would be pmicr. */ + u64 ctrl_reg_mask; /* On POWER9 it is psscr */ + bool valid; +}; + +struct pnv_idle_states { + unsigned int nr_states; + struct pnv_idle_state *states; +}; + +struct pnv_idle_states *get_pnv_idle_states(void); + #endif #endif diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c index 2abee07..b747bb5 100644 --- a/arch/powerpc/platforms/powernv/idle.c +++ b/arch/powerpc/platforms/powernv/idle.c @@ -58,6 +58,17 @@ static u64 pnv_deepest_stop_psscr_mask; static bool deepest_stop_found; +/* + * Data structure that stores details of + * all the platform idle states. 
+ */ +struct pnv_idle_states pnv_idle; + +struct pnv_idle_states *get_pnv_idle_states(void) +{ + return &pnv_idle; +} + static int pnv_save_sprs_for_deep_states(void) { int cpu; @@ -435,9 +446,11 @@ unsigned long pnv_cpu_offline(unsigned int cpu) * stop instruction */ -int validate_psscr_val_mask(u64 *psscr_val, u64 *psscr_mask, u32 flags) +void validate_psscr_val_mask(int i) { - int err = 0; + u64 *psscr_val = &pnv_idle.states[i].ctrl_reg_val; + u64 *psscr_mask = &pnv_idle.states[i].ctrl_reg_mask; + u32 flags = pnv_idle.states[i].flags; /* * psscr_mask == 0xf indicates an older firmware. @@ -447,7 +460,8 @@ int validate_psscr_val_mask(u64 *psscr_val, u64 *psscr_mask, u32 flags) if (*psscr_mask == 0xf) { *psscr_val = *psscr_val | PSSCR_HV_DEFAULT_VAL; *psscr_mask = PSSCR_HV_DEFAULT_MASK; - return err; + pnv_idle.states[i].valid = true; + return; } /* @@ -458,13 +472,17 @@ int validate_psscr_val_mask(u64 *psscr_val, u64 *psscr_mask, u32 flags) * - ESL bit is set for all the deep stop states. */ if (GET_PSSCR_ESL(*psscr_val) != GET_PSSCR_EC(*psscr_val)) { - err = ERR_EC_ESL_MISMATCH; + pnv_idle.states[i].valid = false; + pr_warn("Invalid state:%s:psscr 0x%016llx: ESL,EC bits unequal\n", + pnv_idle.states[i].name, *psscr_val); } else if ((flags & OPAL_PM_LOSE_FULL_CONTEXT) && GET_PSSCR_ESL(*psscr_val) == 0) { - err = ERR_DEEP_STATE_ESL_MISMATCH; + pnv_idle.states[i].valid = false; + pr_warn("Invalid state:%s:psscr 0x%016llx:ESL cl
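To illustrate how a consumer such as the cpuidle-powernv driver is expected to use the new in-kernel table (a rough sketch only, not the actual conversion done in this patch; only names defined in the patch above are used):

/* Sketch of a consumer of the in-kernel idle-state table defined above.
 * The surrounding cpuidle plumbing is elided; the point is the iteration
 * over the table returned by get_pnv_idle_states(). */
static int __init example_fill_cpuidle_states(void)
{
	struct pnv_idle_states *pnv_states = get_pnv_idle_states();
	unsigned int i;
	int nr = 0;

	for (i = 0; i < pnv_states->nr_states; i++) {
		struct pnv_idle_state *s = &pnv_states->states[i];

		if (!s->valid)
			continue;	/* skip states flagged bad during parsing */

		pr_info("idle state %s: latency %u ns, residency %u ns\n",
			s->name, s->latency_ns, s->residency_ns);
		nr++;
	}
	return nr;
}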
[PATCH 3/5] powernv:idle: Define idle init function for power8
From: "Gautham R. Shenoy" In this patch we define a new function named pnv_power8_idle_init(). We move the following code from pnv_init_idle_states() into this newly defined function. a) That patches out pnv_fastsleep_workaround_at_entry/exit when no states with OPAL_PM_SLEEP_ENABLED_ER1 are present. b) Creating a sysfs control to choose how the workaround has to be applied when a OPAL_PM_SLEEP_ENABLED_ER1 state is present. c) Set ppc_md.power_save to power7_idle when OPAL_PM_NAP_ENABLED is present. With this, all the power8 specific initializations are in one place. Signed-off-by: Gautham R. Shenoy --- arch/powerpc/platforms/powernv/idle.c | 59 --- 1 file changed, 40 insertions(+), 19 deletions(-) diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c index a5990d9..c400ff9 100644 --- a/arch/powerpc/platforms/powernv/idle.c +++ b/arch/powerpc/platforms/powernv/idle.c @@ -564,6 +564,44 @@ static void __init pnv_power9_idle_init(void) pnv_first_deep_stop_state); } + +static void __init pnv_power8_idle_init(void) +{ + int i; + bool has_nap = false; + bool has_sleep_er1 = false; + int dt_idle_states = pnv_idle.nr_states; + + for (i = 0; i < dt_idle_states; i++) { + struct pnv_idle_state *state = &pnv_idle.states[i]; + + if (state->flags & OPAL_PM_NAP_ENABLED) + has_nap = true; + if (state->flags & OPAL_PM_SLEEP_ENABLED_ER1) + has_sleep_er1 = true; + } + + if (!has_sleep_er1) { + patch_instruction( + (unsigned int *)pnv_fastsleep_workaround_at_entry, + PPC_INST_NOP); + patch_instruction( + (unsigned int *)pnv_fastsleep_workaround_at_exit, + PPC_INST_NOP); + } else { + /* +* OPAL_PM_SLEEP_ENABLED_ER1 is set. It indicates that +* workaround is needed to use fastsleep. Provide sysfs +* control to choose how this workaround has to be applied. +*/ + device_create_file(cpu_subsys.dev_root, + &dev_attr_fastsleep_workaround_applyonce); + } + + if (has_nap) + ppc_md.power_save = power7_idle; +} + /* * Returns 0 if prop1_len == prop2_len. Else returns -1 */ @@ -837,6 +875,8 @@ static int __init pnv_probe_idle_states(void) if (cpu_has_feature(CPU_FTR_ARCH_300)) pnv_power9_idle_init(); + else + pnv_power8_idle_init(); for (i = 0; i < dt_idle_states; i++) { if (!pnv_idle.states[i].valid) @@ -858,22 +898,6 @@ static int __init pnv_init_idle_states(void) if (pnv_probe_idle_states()) goto out; - if (!(supported_cpuidle_states & OPAL_PM_SLEEP_ENABLED_ER1)) { - patch_instruction( - (unsigned int *)pnv_fastsleep_workaround_at_entry, - PPC_INST_NOP); - patch_instruction( - (unsigned int *)pnv_fastsleep_workaround_at_exit, - PPC_INST_NOP); - } else { - /* -* OPAL_PM_SLEEP_ENABLED_ER1 is set. It indicates that -* workaround is needed to use fastsleep. Provide sysfs -* control to choose how this workaround has to be applied. -*/ - device_create_file(cpu_subsys.dev_root, - &dev_attr_fastsleep_workaround_applyonce); - } pnv_alloc_idle_core_states(); @@ -899,9 +923,6 @@ static int __init pnv_init_idle_states(void) } } - if (supported_cpuidle_states & OPAL_PM_NAP_ENABLED) - ppc_md.power_save = power7_idle; - out: return 0; } -- 1.9.4
[PATCH 2/5] powernv:idle: Change return type of pnv_probe_idle_states to int
From: "Gautham R. Shenoy" In the current idle initialization code, if there are failures in pnv_probe_idle_states, then no platform idle state is enabled. However, since the error is not propagated to the top-level function pnv_init_idle_states, we continue initialization in this top-level function even though this will never be used. Hence change the the return type of pnv_probe_idle_states from void to int and in case of failures, bail out early on in pnv_init_idle_states. Signed-off-by: Gautham R. Shenoy --- arch/powerpc/platforms/powernv/idle.c | 18 +++--- 1 file changed, 11 insertions(+), 7 deletions(-) diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c index b747bb5..a5990d9 100644 --- a/arch/powerpc/platforms/powernv/idle.c +++ b/arch/powerpc/platforms/powernv/idle.c @@ -813,26 +813,27 @@ static int __init pnv_idle_parse(struct device_node *np, int dt_idle_states) /* * Probe device tree for supported idle states */ -static void __init pnv_probe_idle_states(void) +static int __init pnv_probe_idle_states(void) { struct device_node *np; int dt_idle_states; - int i; + int i, rc; np = of_find_node_by_path("/ibm,opal/power-mgt"); if (!np) { pr_warn("opal: PowerMgmt Node not found\n"); - return; + return -ENODEV; } dt_idle_states = of_property_count_u32_elems(np, "ibm,cpu-idle-state-flags"); if (dt_idle_states < 0) { pr_warn("cpuidle-powernv: no idle states found in the DT\n"); - return; + return -ENOENT; } - if (pnv_idle_parse(np, dt_idle_states)) - return; + rc = pnv_idle_parse(np, dt_idle_states); + if (rc) + return rc; if (cpu_has_feature(CPU_FTR_ARCH_300)) pnv_power9_idle_init(); @@ -842,6 +843,8 @@ static void __init pnv_probe_idle_states(void) continue; supported_cpuidle_states |= pnv_idle.states[i].flags; } + + return 0; } static int __init pnv_init_idle_states(void) @@ -852,7 +855,8 @@ static int __init pnv_init_idle_states(void) if (cpuidle_disable != IDLE_NO_OVERRIDE) goto out; - pnv_probe_idle_states(); + if (pnv_probe_idle_states()) + goto out; if (!(supported_cpuidle_states & OPAL_PM_SLEEP_ENABLED_ER1)) { patch_instruction( -- 1.9.4
[PATCH 0/5] powernv:idle: Cleanup idle states initialization
From: "Gautham R. Shenoy" Hi, This patch set aims at cleaning up the powernv idle initialization code, mainly covering the following: a) Currently there is redundant code for parsing the device-tree for idle states. We do it in two places, once during platform idle initialization, and once more when the cpuidle driver initializes. In this patchset the device-tree is parsed only once and we maintain an in-kernel data structure with the details of each platform idle state. The cpuidle initialization code looks at this data structure for initializing cpuidle states. This makes the cpuidle driver initialization more streamlined. b) Currently the idle initialization code for power8 and power9 is mixed up. In this patchset we segregate them into their respective functions for improved readability. c) The current code has a bug when the Sleep-Winkle-Engine is unable to restore the hypervisor states for the deep idle states that lose full hypervisor context, since in such cases we don't disable such deep states. Thus, the CPUs that enter such deep states don't wake up correctly. Patch 1 in the series addresses a). Patches 2, 3 and 4 address b). Patch 5 fixes bug c). These patches are applied on top of the next branch of the powerpc-linux git tree. The patches have been tested on POWER8 and POWER9. Gautham R. Shenoy (5): powernv:idle: Move device-tree parsing to one place. powernv:idle: Change return type of pnv_probe_idle_states to int powernv:idle: Define idle init function for power8 powernv:idle: Move initialization of sibling pacas to pnv_alloc_idle_core_states powernv:idle: Disable LOSE_FULL_CONTEXT states when stop-api fails. arch/powerpc/include/asm/cpuidle.h| 32 +- arch/powerpc/platforms/powernv/idle.c | 576 ++ drivers/cpuidle/cpuidle-powernv.c | 233 -- 3 files changed, 521 insertions(+), 320 deletions(-) -- 1.9.4
[PATCH 5/5] powernv:idle: Disable LOSE_FULL_CONTEXT states when stop-api fails.
From: "Gautham R. Shenoy" Currently, we use the opal call opal_slw_set_reg() to inform the that the Sleep-Winkle Engine (SLW) to restore the contents of some of the Hypervisor state on wakeup from deep idle states that lose full hypervisor context (characterized by the flag OPAL_PM_LOSE_FULL_CONTEXT). However, the current code has a bug in that if opal_slw_set_reg() fails, we don't disable the use of these deep states (winkle on POWER8, stop4 onwards on POWER9). This patch fixes this bug by ensuring that if the the sleep winkle engine is unable to restore the hypervisor states in pnv_save_sprs_for_deep_states(), then we mark as invalid the states which lose full context. As a side-effect, since supported_cpuidle_states in pnv_probe_idle_states() consists of flags of only the valid states, this patch will ensure that no other subsystem in the kernel can use the states which lose full context on stop-api failures. Signed-off-by: Gautham R. Shenoy --- arch/powerpc/platforms/powernv/idle.c | 98 +++ 1 file changed, 87 insertions(+), 11 deletions(-) diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c index 254a0db8..8d07ce6 100644 --- a/arch/powerpc/platforms/powernv/idle.c +++ b/arch/powerpc/platforms/powernv/idle.c @@ -217,9 +217,6 @@ static void pnv_alloc_idle_core_states(void) } update_subcore_sibling_mask(); - - if (supported_cpuidle_states & OPAL_PM_LOSE_FULL_CONTEXT) - pnv_save_sprs_for_deep_states(); } u32 pnv_get_supported_cpuidle_states(void) @@ -518,6 +515,57 @@ static void __init pnv_power9_idle_init(void) u64 max_residency_ns = 0; int i; int dt_idle_states = pnv_idle.nr_states; + bool save_sprs_for_deep_stop = false; + bool disable_lose_full_context = false; + u64 psscr_rl, residency_ns, psscr_val, psscr_mask; + u32 flags; + + /* +* pnv_deepest_stop_{val,mask} should be set to values +* corresponding to the deepest stop state. +*/ + for (i = 0; i < dt_idle_states; i++) { + psscr_val = pnv_idle.states[i].ctrl_reg_val; + psscr_mask = pnv_idle.states[i].ctrl_reg_mask; + psscr_rl = psscr_val & PSSCR_RL_MASK; + flags = pnv_idle.states[i].flags; + residency_ns = pnv_idle.states[i].residency_ns; + + if (flags & OPAL_PM_LOSE_FULL_CONTEXT) + save_sprs_for_deep_stop = true; + + if (max_residency_ns < residency_ns) { + max_residency_ns = residency_ns; + pnv_deepest_stop_psscr_val = psscr_val; + pnv_deepest_stop_psscr_mask = psscr_mask; + deepest_stop_found = true; + } + } + + /* +* pnv_save_sprs_for_deep_states() expects +* pnv_deepest_stop_psscr_val to be initialized. +*/ + if (save_sprs_for_deep_stop) { + int rc; + + rc = pnv_save_sprs_for_deep_states(); + + /* +* If the Sleep-Winkle Engine is unable to restore the +* critical SPRs on wakeup from some of the deep stop +* states that lose full context, then we mark such +* deep states as invalid and recompute the +* pnv_deepest_stop_psscr_val/mask from among the +* valid states. +*/ + if (unlikely(rc)) { + pr_warn("cpuidle-powernv:Disabling full-context loss states.SLW unable to restore SPRs\n"); + disable_lose_full_context = true; + max_residency_ns = 0; + deepest_stop_found = false; + } + } /* * Set pnv_first_deep_stop_state, pnv_deepest_stop_psscr_{val,mask}, @@ -526,16 +574,20 @@ static void __init pnv_power9_idle_init(void) * pnv_first_deep_stop_state should be set to the first stop * level to cause hypervisor state loss. * -* pnv_deepest_stop_{val,mask} should be set to values corresponding to -* the deepest stop state. 
* * pnv_default_stop_{val,mask} should be set to values corresponding to * the shallowest (OPAL_PM_STOP_INST_FAST) loss-less stop state. */ pnv_first_deep_stop_state = MAX_STOP_STATE; for (i = 0; i < dt_idle_states; i++) { - u64 psscr_rl, residency_ns, psscr_val, psscr_mask; - u32 flags; + flags = pnv_idle.states[i].flags; + + if ((flags & OPAL_PM_LOSE_FULL_CONTEXT) && + disable_lose_full_context) { + pnv_idle.states[i].valid = false; + pr_warn("cpuidle-powernv: Disabling full-context loss state :%s\n", + pnv_idle.states[i].name); +
Re: 85xx: Enable gpio power/reset driver
Hello Scott On 23/10/2016, Andy Fleming wrote: > These config changes build: > drivers/power/reset/gpio-poweroff.c > drivers/power/reset/gpio-restart.c > > Signed-off-by: Andy Fleming > --- > arch/powerpc/configs/fsl-emb-nonhw.config | 6 ++ > 1 file changed, 6 insertions(+) Is there any news on when these patches will be merged? We have end-users with Cyrus boards now, and would like to provide them with a mainline kernel that can power-off/reset the board. Thanks Darren
Re: [RFC PATCH 1/2] powerpc/xive: guest exploitation of the XIVE interrupt controller
On 07/03/2017 05:55 AM, David Gibson wrote: > On Thu, Jun 22, 2017 at 11:29:16AM +0200, Cédric Le Goater wrote: >> This is the framework for using XIVE in a PowerVM guest. The support >> is very similar to the native one in a much simpler form. >> >> Instead of OPAL calls, a set of Hypervisors call are used to configure >> the interrupt sources and the event/notification queues of the guest: >> >>H_INT_GET_SOURCE_INFO >>H_INT_SET_SOURCE_CONFIG >>H_INT_GET_SOURCE_CONFIG >>H_INT_GET_QUEUE_INFO >>H_INT_SET_QUEUE_CONFIG >>H_INT_GET_QUEUE_CONFIG >>H_INT_RESET >> >> Calls that still need to be addressed : >> >>H_INT_SET_OS_REPORTING_LINE >>H_INT_GET_OS_REPORTING_LINE >>H_INT_ESB >>H_INT_SYNC > > So, does this mean there's a PAPR update with the XIVE virtualization > stuff? Or at least an ACR? Can we have that available please... The QEMU patchset has some initial info on the hcalls : http://patchwork.ozlabs.org/patch/784785/ C.
Re: [PATCH] powerpc/tm: fix live state of vs0/32 in tm_reclaim
Hi Michael, On Wed, Jul 05, 2017 at 11:02:41AM +1000, Michael Neuling wrote: > On Tue, 2017-07-04 at 16:45 -0400, Gustavo Romero wrote: > > Currently tm_reclaim() can return with a corrupted vs0 (fp0) or vs32 (v0) > > due to the fact vs0 is used to save FPSCR and vs32 is used to save VSCR. > > tm_reclaim() should have no state live in the registers once it returns. It > should all be saved in the thread struct. The above is not an issue in my > book. Right, but we will always recheckpoint from the live anyway, so, if we do not force the MSR_VEC and/or MSR_FP in tm_recheckpoint(), then we will inevitably put the live registers into the checkpoint area. It might not be a problem for VEC/FP if they are disabled, since a later VEC/FP touch will raise a fp/vec_unavailable() exception which will fill out the registers properly, replacing the old state brought from the checkpoint area. > When we recheckpoint inside an fp unavail, we need to recheckpoint vec if it > was > enabled. Currently we only ever recheckpoint the FP which seems like a bug. > Visa versa for the other way around. This seems to be another problem that also exists in the code, but it is essentially different from the one in this thread, which happens on the VSX unavailable exception path. Although essentially different, the solution might be similar. So, a fix that would resolve all the issues reported here would sound like. What do you think? --- diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c index d4e545d..76a35ab 100644 --- a/arch/powerpc/kernel/traps.c +++ b/arch/powerpc/kernel/traps.c @@ -1589,7 +1589,7 @@ void fp_unavailable_tm(struct pt_regs *regs) * If VMX is in use, the VRs now hold checkpointed values, * so we don't want to load the VRs from the thread_struct. */ - tm_recheckpoint(¤t->thread, MSR_FP); + tm_recheckpoint(¤t->thread, regs->msr); /* If VMX is in use, get the transactional values back */ if (regs->msr & MSR_VEC) { @@ -1611,7 +1611,7 @@ void altivec_unavailable_tm(struct pt_regs *regs) regs->nip, regs->msr); tm_reclaim_current(TM_CAUSE_FAC_UNAV); regs->msr |= MSR_VEC; - tm_recheckpoint(¤t->thread, MSR_VEC); + tm_recheckpoint(¤t->thread, regs->msr ); current->thread.used_vr = 1; if (regs->msr & MSR_FP) { @@ -1653,7 +1653,7 @@ void vsx_unavailable_tm(struct pt_regs *regs) /* This loads & recheckpoints FP and VRs; but we have * to be sure not to overwrite previously-valid state. */ - tm_recheckpoint(¤t->thread, regs->msr & ~orig_msr); + tm_recheckpoint(¤t->thread, regs->msr); msr_check_and_set(orig_msr & (MSR_FP | MSR_VEC)); diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c index 9f3e2c9..c6abad1 100644 --- a/arch/powerpc/kernel/process.c +++ b/arch/powerpc/kernel/process.c @@ -880,10 +880,10 @@ static void tm_reclaim_thread(struct thread_struct *thr, * not. So either this will write the checkpointed registers, * or reclaim will. Similarly for VMX. */ - if ((thr->ckpt_regs.msr & MSR_FP) == 0) + if ((thr->regs->msr & MSR_FP) == 0) memcpy(&thr->ckfp_state, &thr->fp_state, sizeof(struct thread_fp_state)); - if ((thr->ckpt_regs.msr & MSR_VEC) == 0) + if ((thr->regs->msr & MSR_VEC) == 0) memcpy(&thr->ckvr_state, &thr->vr_state, sizeof(struct thread_vr_state));
[RFC v5 00/38] powerpc: Memory Protection Keys
Memory protection keys enable an application to protect its address space from inadvertent access or corruption by itself. The overall idea: A process allocates a key and associates it with an address range within its address space. The process then can dynamically set read/write permissions on the key without involving the kernel. Any code that violates the permissions of the address space, as defined by its associated key, will receive a segmentation fault. This patch series enables the feature on the PPC64 HPTE platform. ISA3.0 section 5.7.13 describes the detailed specifications. Testing: This patch series has passed all the protection key tests available in the selftests directory. The tests are updated to work on both x86 and powerpc. version v5: (1) reverted back to the old design -- store the key in the pte, instead of bypassing it. The v4 design slowed down the hash page path. (2) detects key violation when the kernel is told to access user pages. (3) further refined the patches into smaller consumable units (4) page fault handlers capture the faulting key from the pte instead of the vma. This closes a race between a key update in the vma and a key fault caused by the key programmed in the pte. (5) a key created with access-denied should also set it up to deny write. Fixed it. (6) protection-key number is displayed in smaps the x86 way. version v4: (1) patches no longer depend on the pte bits to program the hpte -- comment by Balbir (2) documentation updates (3) fixed a bug in the selftest. (4) unlike x86, powerpc lets the signal handler change key permission bits; the change will persist across signal handler boundaries. Earlier we allowed the signal handler to modify a field in the siginfo structure which would then be used by the kernel to program the key protection register (AMR) -- resolves an issue raised by Ben. "Calls to sys_swapcontext with a made-up context will end up with a crap AMR if done by code who didn't know about that register". (5) these changes enable protection keys on the 4k-page kernel as well. version v3: (1) split the patches into smaller consumable patches. (2) added the ability to disable execute permission on a key at creation. (3) rename calc_pte_to_hpte_pkey_bits() to pte_to_hpte_pkey_bits() -- suggested by Anshuman (4) some code optimization and clarity in do_page_fault() (5) A bug fix while invalidating a hpte slot in __hash_page_4K() -- noticed by Aneesh version v2: (1) documentation and selftest added (2) fixed a bug in 4k hpte backed 64k pte where page invalidation was not done correctly, and initialization of second-part-of-the-pte was not done correctly if the pte was not yet hashed with a hpte. Reported by Aneesh. (3) Fixed ABI breakage caused in siginfo structure. Reported by Anshuman.
version v1: Initial version Ram Pai (38): powerpc: Free up four 64K PTE bits in 4K backed HPTE pages powerpc: Free up four 64K PTE bits in 64K backed HPTE pages powerpc: introduce pte_set_hash_slot() helper powerpc: introduce pte_get_hash_gslot() helper powerpc: capture the PTE format changes in the dump pte report powerpc: use helper functions in __hash_page_64K() for 64K PTE powerpc: use helper functions in __hash_page_huge() for 64K PTE powerpc: use helper functions in __hash_page_4K() for 64K PTE powerpc: use helper functions in __hash_page_4K() for 4K PTE powerpc: use helper functions in flush_hash_page() mm: introduce an additional vma bit for powerpc pkey mm: ability to disable execute permission on a key at creation x86: disallow pkey creation with PKEY_DISABLE_EXECUTE powerpc: initial plumbing for key management powerpc: helper function to read,write AMR,IAMR,UAMOR registers powerpc: implementation for arch_set_user_pkey_access() powerpc: sys_pkey_alloc() and sys_pkey_free() system calls powerpc: store and restore the pkey state across context switches powerpc: introduce execute-only pkey powerpc: ability to associate pkey to a vma powerpc: implementation for arch_override_mprotect_pkey() powerpc: map vma key-protection bits to pte key bits. powerpc: sys_pkey_mprotect() system call powerpc: Program HPTE key protection bits powerpc: helper to validate key-access permissions of a pte powerpc: check key protection for user page access powerpc: Macro th
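To make the usage model described in this cover letter concrete, here is a small userspace sketch (illustrative only; it assumes the pkey_alloc()/pkey_mprotect() wrappers, or equivalent raw syscalls, and the PKEY_DISABLE_WRITE access right are available):

#define _GNU_SOURCE
#include <sys/mman.h>

/* Illustrative sketch: allocate a key that denies writes, attach it to a
 * mapping, and observe that writes through that mapping fault until the
 * application relaxes the key's permissions from userspace. */
int main(void)
{
	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	int pkey;

	if (p == MAP_FAILED)
		return 1;

	pkey = pkey_alloc(0, PKEY_DISABLE_WRITE);	/* assumed wrapper */
	if (pkey < 0)
		return 1;

	/* associate the key with the mapping; reads still work, ... */
	if (pkey_mprotect(p, 4096, PROT_READ | PROT_WRITE, pkey))
		return 1;

	p[0] = 1;	/* ... but this write now raises SIGSEGV */
	return 0;
}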
[RFC v5 01/38] powerpc: Free up four 64K PTE bits in 4K backed HPTE pages
Rearrange 64K PTE bits to free up bits 3, 4, 5 and 6, in the 4K backed HPTE pages. These bits continue to be used for 64K backed HPTE pages in this patch, but will be freed up in the next patch. The bit numbers are big-endian as defined in the ISA3.0. The patch does the following change to the 4k hpte backed 64K PTE's format. H_PAGE_BUSY moves from bit 3 to bit 9 (B bit in the figure below). V0 which occupied bit 4 is not used anymore. V1 which occupied bit 5 is not used anymore. V2 which occupied bit 6 is not used anymore. V3 which occupied bit 7 is not used anymore. Before the patch, the 4k backed 64k PTE format was as follows

 0 1 2 3 4 5 6 7 8 9 10...63
 : : : : : : : : : : ::
 v v v v v v v v v v vv
,-,-,-,-,--,--,--,--,-,-,-,-,-,--,-,-,-,
|x|x|x|B|V0|V1|V2|V3|x|x|x|x|x||.|.|.|.| <- primary pte
'_'_'_'_'__'__'__'__'_'_'_'_'_''_'_'_'_'
|S|G|I|X|S |G |I |X |S|G|I|X|..|S|G|I|X| <- secondary pte
'_'_'_'_'__'__'__'__'_'_'_'_'__'_'_'_'_'

After the patch, the 4k backed 64k PTE format is as follows

 0 1 2 3 4 5 6 7 8 9 10...63
 : : : : : : : : : : ::
 v v v v v v v v v v vv
,-,-,-,-,--,--,--,--,-,-,-,-,-,--,-,-,-,
|x|x|x| | | | | |x|B|x|x|x||.|.|.|.| <- primary pte
'_'_'_'_'__'__'__'__'_'_'_'_'_''_'_'_'_'
|S|G|I|X|S |G |I |X |S|G|I|X|..|S|G|I|X| <- secondary pte
'_'_'_'_'__'__'__'__'_'_'_'_'__'_'_'_'_'

the four bits S,G,I,X (one quadruplet per 4k HPTE) that cache the hash-bucket slot value, are initialized to 1,1,1,1, indicating an invalid slot. If a HPTE gets cached in a slot (i.e. the 7th slot of the secondary hash bucket), it is released immediately. In other words, even though it is a valid slot value in the hash bucket, we consider it invalid and release the slot and the HPTE. This gives us the opportunity to determine the validity of the S,G,I,X bits based on its contents and not on any of the bits V0,V1,V2 or V3 in the primary PTE. When we release a HPTE cached in the slot, we also release a legitimate slot in the primary hash bucket and unmap its corresponding HPTE. This is to ensure that we do get a HPTE cached in a slot of the primary hash bucket the next time we retry. Though treating the slot as invalid reduces the number of available slots in the hash bucket and may have an effect on performance, the probability of hitting that slot is extremely low. Compared to the current scheme, the above described scheme reduces the number of false hash table updates significantly and has the added advantage of releasing four valuable PTE bits for other purposes. NOTE: even though bits 3, 4, 5, 6, 7 are not used when the 64K PTE is backed by 4k HPTE, they continue to be used if the PTE gets backed by 64k HPTE. The next patch will decouple that as well, and truly release the bits. This idea was jointly developed by Paul Mackerras, Aneesh, Michael Ellerman and myself. 4K PTE format remains unchanged currently. The patch does the following code changes: a) PTE flags are split between 64k and 4k header files. b) __hash_page_4K() is reimplemented to reflect the above logic.
Signed-off-by: Ram Pai --- arch/powerpc/include/asm/book3s/64/hash-4k.h |2 + arch/powerpc/include/asm/book3s/64/hash-64k.h |8 +-- arch/powerpc/include/asm/book3s/64/hash.h |1 - arch/powerpc/mm/hash64_64k.c | 78 - arch/powerpc/mm/hash_utils_64.c |4 +- 5 files changed, 57 insertions(+), 36 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h index b4b5e6b..a306c0a 100644 --- a/arch/powerpc/include/asm/book3s/64/hash-4k.h +++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h @@ -16,6 +16,8 @@ #define H_PUD_TABLE_SIZE (sizeof(pud_t) << H_PUD_INDEX_SIZE) #define H_PGD_TABLE_SIZE (sizeof(pgd_t) << H_PGD_INDEX_SIZE) +#define H_PAGE_BUSY_RPAGE_RSV1 /* software: PTE & hash are busy */ + /* PTE flags to conserve for HPTE identification */ #define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_HASHPTE | \ H_PAGE_F_SECOND | H_PAGE_F_GIX) diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h index 9732837..62e580c 100644 --- a/arch/powerpc/include/asm/book3s/64/hash-64k.h +++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h @@ -12,18 +12,14 @@ */ #define H_PAGE_COMBO _RPAGE_RPN0 /* this is a combo 4k page */ #define H_PAGE_4K_PFN _RPAGE_RPN1 /* PFN is for a single 4k page */ +#define H_PAGE_BUSY_RPAGE_RPN42
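As an aside on the hidx scheme described in the commit message above, a minimal sketch of how the per-subpage 4-bit value decomposes (the helper names here are made up for illustration; only the _PTEIDX_* masks are existing kernel definitions):

/* Illustrative only -- not part of the series. The existing
 * _PTEIDX_SECONDARY (0x8) and _PTEIDX_GROUP_IX (0x7) masks show how a
 * 4-bit "hidx" value packs the secondary-bucket bit and the 3-bit slot
 * index within a hash group; S,G,I,X = 1,1,1,1 (0xF) is the value the
 * patch treats as the invalid marker. */
#define EXAMPLE_HIDX_INVALID	0xF

static inline unsigned long example_hidx_pack(int secondary, unsigned long gix)
{
	return (secondary ? _PTEIDX_SECONDARY : 0) | (gix & _PTEIDX_GROUP_IX);
}

static inline int example_hidx_is_secondary(unsigned long hidx)
{
	return (hidx & _PTEIDX_SECONDARY) != 0;
}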
[RFC v5 02/38] powerpc: Free up four 64K PTE bits in 64K backed HPTE pages
Rearrange 64K PTE bits to free up bits 3, 4, 5 and 6 in the 64K backed HPTE pages. This along with the earlier patch will entirely free up the four bits from 64K PTE. The bit numbers are big-endian as defined in the ISA3.0 This patch does the following change to 64K PTE backed by 64K HPTE. H_PAGE_F_SECOND (S) which occupied bit 4 moves to the second part of the pte to bit 60. H_PAGE_F_GIX (G,I,X) which occupied bit 5, 6 and 7 also moves to the second part of the pte to bit 61, 62, 63, 64 respectively since bit 7 is now freed up, we move H_PAGE_BUSY (B) from bit 9 to bit 7. The second part of the PTE will hold (H_PAGE_F_SECOND|H_PAGE_F_GIX) at bit 60,61,62,63. Before the patch, the 64K HPTE backed 64k PTE format was as follows 0 1 2 3 4 5 6 7 8 9 10...63 : : : : : : : : : : :: v v v v v v v v v v vv ,-,-,-,-,--,--,--,--,-,-,-,-,-,--,-,-,-, |x|x|x| |S |G |I |X |x|B|x|x|x||.|.|.|.| <- primary pte '_'_'_'_'__'__'__'__'_'_'_'_'_''_'_'_'_' | | | | | | | | | | | | |..| | | | | <- secondary pte '_'_'_'_'__'__'__'__'_'_'_'_'__'_'_'_'_' After the patch, the 64k HPTE backed 64k PTE format is as follows 0 1 2 3 4 5 6 7 8 9 10...63 : : : : : : : : : : :: v v v v v v v v v v vv ,-,-,-,-,--,--,--,--,-,-,-,-,-,--,-,-,-, |x|x|x| | | | |B |x|x|x|x|x||.|.|.|.| <- primary pte '_'_'_'_'__'__'__'__'_'_'_'_'_''_'_'_'_' | | | | | | | | | | | | |..|S|G|I|X| <- secondary pte '_'_'_'_'__'__'__'__'_'_'_'_'__'_'_'_'_' The above PTE changes is applicable to hugetlbpages aswell. The patch does the following code changes: a) moves the H_PAGE_F_SECOND and H_PAGE_F_GIX to 4k PTE header since it is no more needed b the 64k PTEs. b) abstracts out __real_pte() and __rpte_to_hidx() so the caller need not know the bit location of the slot. c) moves the slot bits the secondary pte. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/book3s/64/hash-4k.h |3 ++ arch/powerpc/include/asm/book3s/64/hash-64k.h | 29 ++- arch/powerpc/include/asm/book3s/64/hash.h |3 -- arch/powerpc/mm/hash64_64k.c | 30 ++-- arch/powerpc/mm/hugetlbpage-hash64.c | 22 ++ 5 files changed, 55 insertions(+), 32 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h index a306c0a..1e60099 100644 --- a/arch/powerpc/include/asm/book3s/64/hash-4k.h +++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h @@ -16,6 +16,9 @@ #define H_PUD_TABLE_SIZE (sizeof(pud_t) << H_PUD_INDEX_SIZE) #define H_PGD_TABLE_SIZE (sizeof(pgd_t) << H_PGD_INDEX_SIZE) +#define H_PAGE_F_GIX_SHIFT 56 +#define H_PAGE_F_SECOND_RPAGE_RSV2 /* HPTE is in 2ndary HPTEG */ +#define H_PAGE_F_GIX (_RPAGE_RSV3 | _RPAGE_RSV4 | _RPAGE_RPN44) #define H_PAGE_BUSY_RPAGE_RSV1 /* software: PTE & hash are busy */ /* PTE flags to conserve for HPTE identification */ diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h index 62e580c..c281f18 100644 --- a/arch/powerpc/include/asm/book3s/64/hash-64k.h +++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h @@ -12,7 +12,7 @@ */ #define H_PAGE_COMBO _RPAGE_RPN0 /* this is a combo 4k page */ #define H_PAGE_4K_PFN _RPAGE_RPN1 /* PFN is for a single 4k page */ -#define H_PAGE_BUSY_RPAGE_RPN42 /* software: PTE & hash are busy */ +#define H_PAGE_BUSY_RPAGE_RPN44 /* software: PTE & hash are busy */ /* * We need to differentiate between explicit huge page and THP huge @@ -21,8 +21,7 @@ #define H_PAGE_THP_HUGE H_PAGE_4K_PFN /* PTE flags to conserve for HPTE identification */ -#define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_F_SECOND | \ -H_PAGE_F_GIX | H_PAGE_HASHPTE | 
H_PAGE_COMBO) +#define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_HASHPTE | H_PAGE_COMBO) /* * we support 16 fragments per PTE page of 64K size. */ @@ -50,24 +49,22 @@ static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep) unsigned long *hidxp; rpte.pte = pte; - rpte.hidx = 0; - if (pte_val(pte) & H_PAGE_COMBO) { - /* -* Make sure we order the hidx load against the H_PAGE_COMBO -* check. The store side ordering is done in __hash_page_4K -*/ - smp_rmb(); - hidxp = (unsigned long *)(ptep + PTRS_PER_PTE); - rpte.hidx = *hidxp; - } + /* +* Ensure that we do not read the hidx before we read +* the pte. Because the writ
[RFC v5 03/38] powerpc: introduce pte_set_hash_slot() helper
Introduce pte_set_hash_slot().It sets the (H_PAGE_F_SECOND|H_PAGE_F_GIX) bits at the appropriate location in the PTE of 4K PTE. For 64K PTE, it sets the bits in the second part of the PTE. Though the implementation for the former just needs the slot parameter, it does take some additional parameters to keep the prototype consistent. This function will be handy as we work towards re-arranging the bits in the later patches. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/book3s/64/hash-4k.h | 15 +++ arch/powerpc/include/asm/book3s/64/hash-64k.h | 25 + 2 files changed, 40 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h index 1e60099..d17ed52 100644 --- a/arch/powerpc/include/asm/book3s/64/hash-4k.h +++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h @@ -53,6 +53,21 @@ static inline int hash__hugepd_ok(hugepd_t hpd) } #endif +/* + * 4k pte format is different from 64k pte format. Saving the + * hash_slot is just a matter of returning the pte bits that need to + * be modified. On 64k pte, things are a little more involved and + * hence needs many more parameters to accomplish the same. + * However we want to abstract this out from the caller by keeping + * the prototype consistent across the two formats. + */ +static inline unsigned long pte_set_hash_slot(pte_t *ptep, real_pte_t rpte, + unsigned int subpg_index, unsigned long slot) +{ + return (slot << H_PAGE_F_GIX_SHIFT) & + (H_PAGE_F_SECOND | H_PAGE_F_GIX); +} + #ifdef CONFIG_TRANSPARENT_HUGEPAGE static inline char *get_hpte_slot_array(pmd_t *pmdp) diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h index c281f18..89ef5a9 100644 --- a/arch/powerpc/include/asm/book3s/64/hash-64k.h +++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h @@ -67,6 +67,31 @@ static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index) return ((rpte.hidx >> (index<<2)) & 0xfUL); } +/* + * Commit the hash slot and return pte bits that needs to be modified. + * The caller is expected to modify the pte bits accordingly and + * commit the pte to memory. + */ +static inline unsigned long pte_set_hash_slot(pte_t *ptep, real_pte_t rpte, + unsigned int subpg_index, unsigned long slot) +{ + unsigned long *hidxp = (unsigned long *)(ptep + PTRS_PER_PTE); + + rpte.hidx &= ~(0xfUL << (subpg_index << 2)); + *hidxp = rpte.hidx | (slot << (subpg_index << 2)); + /* +* Commit the hidx bits to memory before returning. +* Anyone reading pte must ensure hidx bits are +* read only after reading the pte by using the +* read-side barrier smp_rmb(). __real_pte() can +* help ensure that. +*/ + smp_wmb(); + + /* no pte bits to be modified, return 0x0UL */ + return 0x0UL; +} + #define __rpte_to_pte(r) ((r).pte) extern bool __rpte_sub_valid(real_pte_t rpte, unsigned long index); /* -- 1.7.1
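As a sketch of the intended calling convention (not from this patch, but essentially what the later conversion patches in this series do): once the HPTE insertion path has produced a slot, the caller folds the helper's return value into the PTE; for the 4K format that return value carries the slot bits, for the 64K format the helper writes the second half of the PTE and returns 0.

	/* sketch only: 'slot' is the value returned by the HPTE insertion path */
	new_pte = (new_pte & ~_PAGE_HPTEFLAGS) | H_PAGE_HASHPTE;
	new_pte |= pte_set_hash_slot(ptep, rpte, subpg_index, slot);
	*ptep = __pte(new_pte & ~H_PAGE_BUSY);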
[RFC v5 04/38] powerpc: introduce pte_get_hash_gslot() helper
Introduce pte_get_hash_gslot()() which returns the slot number of the HPTE in the global hash table. This function will come in handy as we work towards re-arranging the PTE bits in the later patches. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/book3s/64/hash.h |3 +++ arch/powerpc/mm/hash_utils_64.c | 18 ++ 2 files changed, 21 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h index d27f885..277158c 100644 --- a/arch/powerpc/include/asm/book3s/64/hash.h +++ b/arch/powerpc/include/asm/book3s/64/hash.h @@ -156,6 +156,9 @@ static inline int hash__pte_none(pte_t pte) return (pte_val(pte) & ~H_PTE_NONE_MASK) == 0; } +unsigned long pte_get_hash_gslot(unsigned long vpn, unsigned long shift, + int ssize, real_pte_t rpte, unsigned int subpg_index); + /* This low level function performs the actual PTE insertion * Setting the PTE depends on the MMU type and other factors. It's * an horrible mess that I'm not going to try to clean up now but diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c index 1b494d0..d3604da 100644 --- a/arch/powerpc/mm/hash_utils_64.c +++ b/arch/powerpc/mm/hash_utils_64.c @@ -1591,6 +1591,24 @@ static inline void tm_flush_hash_page(int local) } #endif +/* + * return the global hash slot, corresponding to the given + * pte, which contains the hpte. + */ +unsigned long pte_get_hash_gslot(unsigned long vpn, unsigned long shift, + int ssize, real_pte_t rpte, unsigned int subpg_index) +{ + unsigned long hash, slot, hidx; + + hash = hpt_hash(vpn, shift, ssize); + hidx = __rpte_to_hidx(rpte, subpg_index); + if (hidx & _PTEIDX_SECONDARY) + hash = ~hash; + slot = (hash & htab_hash_mask) * HPTES_PER_GROUP; + slot += hidx & _PTEIDX_GROUP_IX; + return slot; +} + /* WARNING: This is called from hash_low_64.S, if you change this prototype, * do not forget to update the assembly call site ! */ -- 1.7.1
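For reference, the slot arithmetic used by pte_get_hash_gslot() can be exercised outside the kernel. The sketch below is illustrative only: the mask, group size and hidx constants are stand-ins, and hpt_hash() is replaced by a literal hash value.

#include <stdio.h>

#define HPTES_PER_GROUP   8
#define PTEIDX_SECONDARY  0x8UL
#define PTEIDX_GROUP_IX   0x7UL

/* model of pte_get_hash_gslot(): 'hash' is what hpt_hash() would return */
static unsigned long gslot(unsigned long hash, unsigned long htab_hash_mask,
			   unsigned long hidx)
{
	if (hidx & PTEIDX_SECONDARY)
		hash = ~hash;                      /* secondary bucket */
	return (hash & htab_hash_mask) * HPTES_PER_GROUP +
	       (hidx & PTEIDX_GROUP_IX);           /* slot within the group */
}

int main(void)
{
	unsigned long htab_hash_mask = 0xffffUL;   /* example value only */

	/* primary bucket, third slot in the group */
	printf("%lu\n", gslot(0x1234UL, htab_hash_mask, 0x2));
	/* secondary bucket (hidx bit 3 set), same group index */
	printf("%lu\n", gslot(0x1234UL, htab_hash_mask, PTEIDX_SECONDARY | 0x2));
	return 0;
}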
[RFC v5 05/38] powerpc: capture the PTE format changes in the dump pte report
The H_PAGE_F_SECOND and H_PAGE_F_GIX bits are no longer in the 64K main PTE. Capture these changes in the pte dump report.

Signed-off-by: Ram Pai
---
 arch/powerpc/mm/dump_linuxpagetables.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/mm/dump_linuxpagetables.c b/arch/powerpc/mm/dump_linuxpagetables.c
index 44fe483..5627edd 100644
--- a/arch/powerpc/mm/dump_linuxpagetables.c
+++ b/arch/powerpc/mm/dump_linuxpagetables.c
@@ -213,7 +213,7 @@ struct flag_info {
 		.val	= H_PAGE_4K_PFN,
 		.set	= "4K_pfn",
 	}, {
-#endif
+#else /* CONFIG_PPC_64K_PAGES */
 		.mask	= H_PAGE_F_GIX,
 		.val	= H_PAGE_F_GIX,
 		.set	= "f_gix",
@@ -224,6 +224,7 @@ struct flag_info {
 		.val	= H_PAGE_F_SECOND,
 		.set	= "f_second",
 	}, {
+#endif /* CONFIG_PPC_64K_PAGES */
 #endif
 		.mask	= _PAGE_SPECIAL,
 		.val	= _PAGE_SPECIAL,
-- 
1.7.1
[RFC v5 06/38] powerpc: use helper functions in __hash_page_64K() for 64K PTE
replace redundant code in __hash_page_64K() with helper functions pte_get_hash_gslot() and pte_set_hash_slot() Signed-off-by: Ram Pai --- arch/powerpc/mm/hash64_64k.c | 24 1 files changed, 4 insertions(+), 20 deletions(-) diff --git a/arch/powerpc/mm/hash64_64k.c b/arch/powerpc/mm/hash64_64k.c index 0012618..645f621 100644 --- a/arch/powerpc/mm/hash64_64k.c +++ b/arch/powerpc/mm/hash64_64k.c @@ -244,7 +244,6 @@ int __hash_page_64K(unsigned long ea, unsigned long access, unsigned long flags, int ssize) { real_pte_t rpte; - unsigned long *hidxp; unsigned long hpte_group; unsigned long rflags, pa; unsigned long old_pte, new_pte; @@ -289,18 +288,12 @@ int __hash_page_64K(unsigned long ea, unsigned long access, vpn = hpt_vpn(ea, vsid, ssize); if (unlikely(old_pte & H_PAGE_HASHPTE)) { - unsigned long hash, slot, hidx; - - hash = hpt_hash(vpn, shift, ssize); - hidx = __rpte_to_hidx(rpte, 0); - if (hidx & _PTEIDX_SECONDARY) - hash = ~hash; - slot = (hash & htab_hash_mask) * HPTES_PER_GROUP; - slot += hidx & _PTEIDX_GROUP_IX; + unsigned long gslot; /* * There MIGHT be an HPTE for this pte */ - if (mmu_hash_ops.hpte_updatepp(slot, rflags, vpn, MMU_PAGE_64K, + gslot = pte_get_hash_gslot(vpn, shift, ssize, rpte, 0); + if (mmu_hash_ops.hpte_updatepp(gslot, rflags, vpn, MMU_PAGE_64K, MMU_PAGE_64K, ssize, flags) == -1) old_pte &= ~_PAGE_HPTEFLAGS; @@ -350,17 +343,8 @@ int __hash_page_64K(unsigned long ea, unsigned long access, return -1; } - /* -* Insert slot number & secondary bit in PTE second half. -*/ - hidxp = (unsigned long *)(ptep + PTRS_PER_PTE); - rpte.hidx &= ~(0xfUL); - *hidxp = rpte.hidx | (slot & 0xfUL); - /* -* check __real_pte for details on matching smp_rmb() -*/ - smp_wmb(); new_pte = (new_pte & ~_PAGE_HPTEFLAGS) | H_PAGE_HASHPTE; + new_pte |= pte_set_hash_slot(ptep, rpte, 0, slot); } *ptep = __pte(new_pte & ~H_PAGE_BUSY); return 0; -- 1.7.1
[RFC v5 07/38] powerpc: use helper functions in __hash_page_huge() for 64K PTE
replace redundant code in __hash_page_huge() with helper functions pte_get_hash_gslot() and pte_set_hash_slot() Signed-off-by: Ram Pai --- arch/powerpc/mm/hugetlbpage-hash64.c | 24 1 files changed, 4 insertions(+), 20 deletions(-) diff --git a/arch/powerpc/mm/hugetlbpage-hash64.c b/arch/powerpc/mm/hugetlbpage-hash64.c index 6f7aee3..e6dcd50 100644 --- a/arch/powerpc/mm/hugetlbpage-hash64.c +++ b/arch/powerpc/mm/hugetlbpage-hash64.c @@ -23,7 +23,6 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid, int ssize, unsigned int shift, unsigned int mmu_psize) { real_pte_t rpte; - unsigned long *hidxp; unsigned long vpn; unsigned long old_pte, new_pte; unsigned long rflags, pa, sz; @@ -74,16 +73,10 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid, /* Check if pte already has an hpte (case 2) */ if (unlikely(old_pte & H_PAGE_HASHPTE)) { /* There MIGHT be an HPTE for this pte */ - unsigned long hash, slot, hidx; + unsigned long gslot; - hash = hpt_hash(vpn, shift, ssize); - hidx = __rpte_to_hidx(rpte, 0); - if (hidx & _PTEIDX_SECONDARY) - hash = ~hash; - slot = (hash & htab_hash_mask) * HPTES_PER_GROUP; - slot += hidx & _PTEIDX_GROUP_IX; - - if (mmu_hash_ops.hpte_updatepp(slot, rflags, vpn, mmu_psize, + gslot = pte_get_hash_gslot(vpn, shift, ssize, rpte, 0); + if (mmu_hash_ops.hpte_updatepp(gslot, rflags, vpn, mmu_psize, mmu_psize, ssize, flags) == -1) old_pte &= ~_PAGE_HPTEFLAGS; } @@ -110,16 +103,7 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid, return -1; } - /* -* Insert slot number & secondary bit in PTE second half. -*/ - hidxp = (unsigned long *)(ptep + PTRS_PER_PTE); - rpte.hidx &= ~(0xfUL); - *hidxp = rpte.hidx | (slot & 0xfUL); - /* -* check __real_pte for details on matching smp_rmb() -*/ - smp_wmb(); + new_pte |= pte_set_hash_slot(ptep, rpte, 0, slot); } /* -- 1.7.1
[RFC v5 08/38] powerpc: use helper functions in __hash_page_4K() for 64K PTE
replace redundant code in __hash_page_4K() with helper functions pte_get_hash_gslot() and pte_set_hash_slot() Signed-off-by: Ram Pai --- arch/powerpc/mm/hash64_64k.c | 34 +- 1 files changed, 9 insertions(+), 25 deletions(-) diff --git a/arch/powerpc/mm/hash64_64k.c b/arch/powerpc/mm/hash64_64k.c index 645f621..c658cb5 100644 --- a/arch/powerpc/mm/hash64_64k.c +++ b/arch/powerpc/mm/hash64_64k.c @@ -39,9 +39,8 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid, { real_pte_t rpte; unsigned long hpte_group; - unsigned long *hidxp; unsigned int subpg_index; - unsigned long rflags, pa, hidx; + unsigned long rflags, pa; unsigned long old_pte, new_pte, subpg_pte; unsigned long vpn, hash, slot, gslot; unsigned long shift = mmu_psize_defs[MMU_PAGE_4K].shift; @@ -114,18 +113,13 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid, if (__rpte_sub_valid(rpte, subpg_index)) { int ret; - hash = hpt_hash(vpn, shift, ssize); - hidx = __rpte_to_hidx(rpte, subpg_index); - if (hidx & _PTEIDX_SECONDARY) - hash = ~hash; - slot = (hash & htab_hash_mask) * HPTES_PER_GROUP; - slot += hidx & _PTEIDX_GROUP_IX; - - ret = mmu_hash_ops.hpte_updatepp(slot, rflags, vpn, + gslot = pte_get_hash_gslot(vpn, shift, ssize, rpte, + subpg_index); + ret = mmu_hash_ops.hpte_updatepp(gslot, rflags, vpn, MMU_PAGE_4K, MMU_PAGE_4K, ssize, flags); /* -*if we failed because typically the HPTE wasn't really here +* if we failed because typically the HPTE wasn't really here * we try an insertion. */ if (ret == -1) @@ -221,20 +215,10 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid, MMU_PAGE_4K, MMU_PAGE_4K, old_pte); return -1; } - /* -* Insert slot number & secondary bit in PTE second half, -* clear H_PAGE_BUSY and set appropriate HPTE slot bit -* Since we have H_PAGE_BUSY set on ptep, we can be sure -* nobody is undating hidx. -*/ - hidxp = (unsigned long *)(ptep + PTRS_PER_PTE); - rpte.hidx &= ~(0xfUL << (subpg_index << 2)); - *hidxp = rpte.hidx | (slot << (subpg_index << 2)); - /* -* check __real_pte for details on matching smp_rmb() -*/ - smp_wmb(); - new_pte |= H_PAGE_HASHPTE; + + new_pte |= pte_set_hash_slot(ptep, rpte, subpg_index, slot); + new_pte |= H_PAGE_HASHPTE; + *ptep = __pte(new_pte & ~H_PAGE_BUSY); return 0; } -- 1.7.1
[RFC v5 09/38] powerpc: use helper functions in __hash_page_4K() for 4K PTE
replace redundant code with helper functions pte_get_hash_gslot() and pte_set_hash_slot() Signed-off-by: Ram Pai --- arch/powerpc/mm/hash64_4k.c | 14 ++ 1 files changed, 6 insertions(+), 8 deletions(-) diff --git a/arch/powerpc/mm/hash64_4k.c b/arch/powerpc/mm/hash64_4k.c index 6fa450c..a1eebc1 100644 --- a/arch/powerpc/mm/hash64_4k.c +++ b/arch/powerpc/mm/hash64_4k.c @@ -20,6 +20,7 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid, pte_t *ptep, unsigned long trap, unsigned long flags, int ssize, int subpg_prot) { + real_pte_t rpte; unsigned long hpte_group; unsigned long rflags, pa; unsigned long old_pte, new_pte; @@ -54,6 +55,7 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid, * need to add in 0x1 if it's a read-only user page */ rflags = htab_convert_pte_flags(new_pte); + rpte = __real_pte(__pte(old_pte), ptep); if (cpu_has_feature(CPU_FTR_NOEXECUTE) && !cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) @@ -64,13 +66,10 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid, /* * There MIGHT be an HPTE for this pte */ - hash = hpt_hash(vpn, shift, ssize); - if (old_pte & H_PAGE_F_SECOND) - hash = ~hash; - slot = (hash & htab_hash_mask) * HPTES_PER_GROUP; - slot += (old_pte & H_PAGE_F_GIX) >> H_PAGE_F_GIX_SHIFT; + unsigned long gslot = pte_get_hash_gslot(vpn, shift, + ssize, rpte, 0); - if (mmu_hash_ops.hpte_updatepp(slot, rflags, vpn, MMU_PAGE_4K, + if (mmu_hash_ops.hpte_updatepp(gslot, rflags, vpn, MMU_PAGE_4K, MMU_PAGE_4K, ssize, flags) == -1) old_pte &= ~_PAGE_HPTEFLAGS; } @@ -118,8 +117,7 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid, return -1; } new_pte = (new_pte & ~_PAGE_HPTEFLAGS) | H_PAGE_HASHPTE; - new_pte |= (slot << H_PAGE_F_GIX_SHIFT) & - (H_PAGE_F_SECOND | H_PAGE_F_GIX); + new_pte |= pte_set_hash_slot(ptep, rpte, 0, slot); } *ptep = __pte(new_pte & ~H_PAGE_BUSY); return 0; -- 1.7.1
[RFC v5 10/38] powerpc: use helper functions in flush_hash_page()
replace redundant code in flush_hash_page() with helper function pte_get_hash_gslot(). Signed-off-by: Ram Pai --- arch/powerpc/mm/hash_utils_64.c | 13 - 1 files changed, 4 insertions(+), 9 deletions(-) diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c index d3604da..d863696 100644 --- a/arch/powerpc/mm/hash_utils_64.c +++ b/arch/powerpc/mm/hash_utils_64.c @@ -1615,23 +1615,18 @@ unsigned long pte_get_hash_gslot(unsigned long vpn, unsigned long shift, void flush_hash_page(unsigned long vpn, real_pte_t pte, int psize, int ssize, unsigned long flags) { - unsigned long hash, index, shift, hidx, slot; + unsigned long index, shift, gslot; int local = flags & HPTE_LOCAL_UPDATE; DBG_LOW("flush_hash_page(vpn=%016lx)\n", vpn); pte_iterate_hashed_subpages(pte, psize, vpn, index, shift) { - hash = hpt_hash(vpn, shift, ssize); - hidx = __rpte_to_hidx(pte, index); - if (hidx & _PTEIDX_SECONDARY) - hash = ~hash; - slot = (hash & htab_hash_mask) * HPTES_PER_GROUP; - slot += hidx & _PTEIDX_GROUP_IX; - DBG_LOW(" sub %ld: hash=%lx, hidx=%lx\n", index, slot, hidx); + gslot = pte_get_hash_gslot(vpn, shift, ssize, pte, index); + DBG_LOW(" sub %ld: gslot=%lx\n", index, gslot); /* * We use same base page size and actual psize, because we don't * use these functions for hugepage */ - mmu_hash_ops.hpte_invalidate(slot, vpn, psize, psize, + mmu_hash_ops.hpte_invalidate(gslot, vpn, psize, psize, ssize, local); } pte_iterate_hashed_end(); -- 1.7.1
[RFC v5 12/38] mm: ability to disable execute permission on a key at creation
Currently sys_pkey_alloc() provides the ability to disable read and write permission on the key at creation time. powerpc has the hardware support to disable execute on a pkey as well. This patch enhances the interface to let execute be disabled at key creation time. x86 does not allow this; hence the next patch will add the ability in x86 to return an error if PKEY_DISABLE_EXECUTE is specified.

Signed-off-by: Ram Pai
---
 include/uapi/asm-generic/mman-common.h |    4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h
index 8c27db0..bf4fa07 100644
--- a/include/uapi/asm-generic/mman-common.h
+++ b/include/uapi/asm-generic/mman-common.h
@@ -74,7 +74,9 @@
 #define PKEY_DISABLE_ACCESS	0x1
 #define PKEY_DISABLE_WRITE	0x2
+#define PKEY_DISABLE_EXECUTE	0x4
 #define PKEY_ACCESS_MASK	(PKEY_DISABLE_ACCESS |\
-				 PKEY_DISABLE_WRITE)
+				 PKEY_DISABLE_WRITE |\
+				 PKEY_DISABLE_EXECUTE)
 #endif /* __ASM_GENERIC_MMAN_COMMON_H */
-- 
1.7.1
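For illustration only (not part of the patch), a caller could then request keys like the following once the syscall plumbing lands later in this series; pkey_alloc() stands in for a libc wrapper or a raw syscall() invocation:

	/* hypothetical usage, assuming the syscalls added by patch 17/38 */
	int data_key  = pkey_alloc(0, PKEY_DISABLE_WRITE);                        /* read-only data */
	int guard_key = pkey_alloc(0, PKEY_DISABLE_ACCESS | PKEY_DISABLE_EXECUTE); /* no access at all */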
[RFC v5 11/38] mm: introduce an additional vma bit for powerpc pkey
Currently there are only 4bits in the vma flags to support 16 keys on x86. powerpc supports 32 keys, which needs 5bits. This patch introduces an addition bit in the vma flags. Signed-off-by: Ram Pai --- fs/proc/task_mmu.c |6 +- include/linux/mm.h | 18 +- 2 files changed, 18 insertions(+), 6 deletions(-) diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index f0c8b33..2ddc298 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -666,12 +666,16 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma) [ilog2(VM_MERGEABLE)] = "mg", [ilog2(VM_UFFD_MISSING)]= "um", [ilog2(VM_UFFD_WP)] = "uw", -#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS +#ifdef CONFIG_ARCH_HAS_PKEYS /* These come out via ProtectionKey: */ [ilog2(VM_PKEY_BIT0)] = "", [ilog2(VM_PKEY_BIT1)] = "", [ilog2(VM_PKEY_BIT2)] = "", [ilog2(VM_PKEY_BIT3)] = "", +#endif /* CONFIG_ARCH_HAS_PKEYS */ +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + /* Additional bit in ProtectionKey: */ + [ilog2(VM_PKEY_BIT4)] = "", #endif }; size_t i; diff --git a/include/linux/mm.h b/include/linux/mm.h index 7cb17c6..3d35bcc 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -208,21 +208,29 @@ extern int overcommit_kbytes_handler(struct ctl_table *, int, void __user *, #define VM_HIGH_ARCH_BIT_1 33 /* bit only usable on 64-bit architectures */ #define VM_HIGH_ARCH_BIT_2 34 /* bit only usable on 64-bit architectures */ #define VM_HIGH_ARCH_BIT_3 35 /* bit only usable on 64-bit architectures */ +#define VM_HIGH_ARCH_BIT_4 36 /* bit only usable on 64-bit arch */ #define VM_HIGH_ARCH_0 BIT(VM_HIGH_ARCH_BIT_0) #define VM_HIGH_ARCH_1 BIT(VM_HIGH_ARCH_BIT_1) #define VM_HIGH_ARCH_2 BIT(VM_HIGH_ARCH_BIT_2) #define VM_HIGH_ARCH_3 BIT(VM_HIGH_ARCH_BIT_3) +#define VM_HIGH_ARCH_4 BIT(VM_HIGH_ARCH_BIT_4) #endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */ -#if defined(CONFIG_X86) -# define VM_PATVM_ARCH_1 /* PAT reserves whole VMA at once (x86) */ -#if defined (CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) +#ifdef CONFIG_ARCH_HAS_PKEYS # define VM_PKEY_SHIFT VM_HIGH_ARCH_BIT_0 -# define VM_PKEY_BIT0 VM_HIGH_ARCH_0 /* A protection key is a 4-bit value */ +# define VM_PKEY_BIT0 VM_HIGH_ARCH_0 # define VM_PKEY_BIT1 VM_HIGH_ARCH_1 # define VM_PKEY_BIT2 VM_HIGH_ARCH_2 # define VM_PKEY_BIT3 VM_HIGH_ARCH_3 -#endif +#endif /* CONFIG_ARCH_HAS_PKEYS */ + +#if defined(CONFIG_PPC64_MEMORY_PROTECTION_KEYS) +# define VM_PKEY_BIT4 VM_HIGH_ARCH_4 /* additional key bit used on ppc64 */ +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + + +#if defined(CONFIG_X86) +# define VM_PATVM_ARCH_1 /* PAT reserves whole VMA at once (x86) */ #elif defined(CONFIG_PPC) # define VM_SAOVM_ARCH_1 /* Strong Access Ordering (powerpc) */ #elif defined(CONFIG_PARISC) -- 1.7.1
[RFC v5 13/38] x86: disallow pkey creation with PKEY_DISABLE_EXECUTE
x86 does not support disabling execute permissions on a pkey. Signed-off-by: Ram Pai --- arch/x86/kernel/fpu/xstate.c |3 +++ 1 files changed, 3 insertions(+), 0 deletions(-) diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c index c24ac1e..d582631 100644 --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -900,6 +900,9 @@ int arch_set_user_pkey_access(struct task_struct *tsk, int pkey, if (!boot_cpu_has(X86_FEATURE_OSPKE)) return -EINVAL; + if (init_val & PKEY_DISABLE_EXECUTE) + return -EINVAL; + /* Set the bits we need in PKRU: */ if (init_val & PKEY_DISABLE_ACCESS) new_pkru_bits |= PKRU_AD_BIT; -- 1.7.1
[RFC v5 14/38] powerpc: initial plumbing for key management
Initial plumbing to manage all the keys supported by the hardware. Total 32 keys are supported on powerpc. However pkey 0,1 and 31 are reserved. So effectively we have 29 pkeys. This patch keeps track of reserved keys, allocated keys and keys that are currently free. Also it adds skeletal functions and macros, that the architecture-independent code expects to be available. Signed-off-by: Ram Pai --- arch/powerpc/Kconfig | 16 + arch/powerpc/include/asm/book3s/64/mmu.h |9 +++ arch/powerpc/include/asm/pkeys.h | 106 ++ arch/powerpc/mm/mmu_context_book3s64.c |5 ++ 4 files changed, 136 insertions(+), 0 deletions(-) create mode 100644 arch/powerpc/include/asm/pkeys.h diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index f7c8f99..a2480b6 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -871,6 +871,22 @@ config SECCOMP If unsure, say Y. Only embedded should say N here. +config PPC64_MEMORY_PROTECTION_KEYS + prompt "PowerPC Memory Protection Keys" + def_bool y + # Note: only available in 64-bit mode + depends on PPC64 && PPC_64K_PAGES + select ARCH_USES_HIGH_VMA_FLAGS + select ARCH_HAS_PKEYS + ---help--- + Memory Protection Keys provides a mechanism for enforcing + page-based protections, but without requiring modification of the + page tables when an application changes protection domains. + + For details, see Documentation/powerpc/protection-keys.txt + + If unsure, say y. + endmenu config ISA_DMA_API diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h index 77529a3..104ad72 100644 --- a/arch/powerpc/include/asm/book3s/64/mmu.h +++ b/arch/powerpc/include/asm/book3s/64/mmu.h @@ -108,6 +108,15 @@ struct patb_entry { #ifdef CONFIG_SPAPR_TCE_IOMMU struct list_head iommu_group_mem_list; #endif + +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + /* +* Each bit represents one protection key. +* bit set -> key allocated +* bit unset -> key available for allocation +*/ + u32 pkey_allocation_map; +#endif } mm_context_t; /* diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h new file mode 100644 index 000..9345767 --- /dev/null +++ b/arch/powerpc/include/asm/pkeys.h @@ -0,0 +1,106 @@ +#ifndef _ASM_PPC64_PKEYS_H +#define _ASM_PPC64_PKEYS_H + +#define arch_max_pkey() 32 +#define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \ + VM_PKEY_BIT3 | VM_PKEY_BIT4) +/* + * Bits are in BE format. + * NOTE: key 31, 1, 0 are not used. + * key 0 is used by default. It give read/write/execute permission. + * key 31 is reserved by the hypervisor. + * key 1 is recommended to be not used. + * PowerISA(3.0) page 1015, programming note. + */ +#define PKEY_INITIAL_ALLOCAION 0xc001 + +#define pkeybit_mask(pkey) (0x1 << (arch_max_pkey() - pkey - 1)) + +#define mm_pkey_allocation_map(mm) (mm->context.pkey_allocation_map) + +#define mm_set_pkey_allocated(mm, pkey) { \ + mm_pkey_allocation_map(mm) |= pkeybit_mask(pkey); \ +} + +#define mm_set_pkey_free(mm, pkey) { \ + mm_pkey_allocation_map(mm) &= ~pkeybit_mask(pkey); \ +} + +#define mm_set_pkey_is_allocated(mm, pkey) \ + (mm_pkey_allocation_map(mm) & pkeybit_mask(pkey)) + +#define mm_set_pkey_is_reserved(mm, pkey) (PKEY_INITIAL_ALLOCAION & \ + pkeybit_mask(pkey)) + +static inline bool mm_pkey_is_allocated(struct mm_struct *mm, int pkey) +{ + /* a reserved key is never considered as 'explicitly allocated' */ + return (!mm_set_pkey_is_reserved(mm, pkey) && + mm_set_pkey_is_allocated(mm, pkey)); +} + +/* + * Returns a positive, 5-bit key on success, or -1 on failure. 
+ */ +static inline int mm_pkey_alloc(struct mm_struct *mm) +{ + /* +* Note: this is the one and only place we make sure +* that the pkey is valid as far as the hardware is +* concerned. The rest of the kernel trusts that +* only good, valid pkeys come out of here. +*/ + u32 all_pkeys_mask = (u32)(~(0x0)); + int ret; + + /* +* Are we out of pkeys? We must handle this specially +* because ffz() behavior is undefined if there are no +* zeros. +*/ + if (mm_pkey_allocation_map(mm) == all_pkeys_mask) + return -1; + + ret = arch_max_pkey() - + ffz((u32)mm_pkey_allocation_map(mm)) + - 1; + mm_set_pkey_allocated(mm, ret); + return ret; +} + +static inline int mm_pkey_free(struct mm_struct *mm, int pkey) +{ + if (!mm_pkey_is_allocated(mm, pkey)) + return -EINVAL; + + mm_set_pkey_free(mm, pkey); + + return 0; +} + +/* + * Try to dedicate one of the protection key
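The allocation-map convention above (big-endian bit numbering, keys 0, 1 and 31 pre-reserved) can be checked with a small userspace model; this is a sketch only, with ffz() modelled as ffs(~x) - 1 and the constant names copied from the patch for readability:

#include <stdio.h>
#include <strings.h>   /* ffs() */

#define MAX_PKEY            32
/* keys 0, 1 and 31 reserved, as in PKEY_INITIAL_ALLOCAION above */
#define INITIAL_ALLOCATION  0xc0000001u

static unsigned int pkeybit_mask(int pkey)
{
	return 1u << (MAX_PKEY - pkey - 1);
}

/* model of mm_pkey_alloc(): find the first zero bit in the map and
 * convert it back to a key number */
static int pkey_alloc_model(unsigned int *map)
{
	int ffz_bit, pkey;

	if (*map == 0xffffffffu)
		return -1;                  /* out of keys */
	ffz_bit = ffs(~*map) - 1;
	pkey = MAX_PKEY - ffz_bit - 1;
	*map |= pkeybit_mask(pkey);
	return pkey;
}

int main(void)
{
	unsigned int map = INITIAL_ALLOCATION;

	/* with keys 0, 1 and 31 reserved, this hands out 30, 29, 28, ... */
	for (int i = 0; i < 3; i++)
		printf("allocated pkey %d\n", pkey_alloc_model(&map));
	return 0;
}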
[RFC v5 15/38] powerpc: helper function to read, write AMR, IAMR, UAMOR registers
Implements helper functions to read and write the key related registers; AMR, IAMR, UAMOR. AMR register tracks the read,write permission of a key IAMR register tracks the execute permission of a key UAMOR register enables and disables a key Signed-off-by: Ram Pai --- arch/powerpc/include/asm/book3s/64/pgtable.h | 60 ++ 1 files changed, 60 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h index 85bc987..435d6a7 100644 --- a/arch/powerpc/include/asm/book3s/64/pgtable.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h @@ -428,6 +428,66 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm, pte_update(mm, addr, ptep, 0, _PAGE_PRIVILEGED, 1); } +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + +#include +static inline u64 read_amr(void) +{ + return mfspr(SPRN_AMR); +} +static inline void write_amr(u64 value) +{ + mtspr(SPRN_AMR, value); +} +static inline u64 read_iamr(void) +{ + return mfspr(SPRN_IAMR); +} +static inline void write_iamr(u64 value) +{ + mtspr(SPRN_IAMR, value); +} +static inline u64 read_uamor(void) +{ + return mfspr(SPRN_UAMOR); +} +static inline void write_uamor(u64 value) +{ + mtspr(SPRN_UAMOR, value); +} + +#else /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + +static inline u64 read_amr(void) +{ + WARN(1, "%s called with MEMORY PROTECTION KEYS disabled\n", __func__); + return -1; +} +static inline void write_amr(u64 value) +{ + WARN(1, "%s called with MEMORY PROTECTION KEYS disabled\n", __func__); +} +static inline u64 read_uamor(void) +{ + WARN(1, "%s called with MEMORY PROTECTION KEYS disabled\n", __func__); + return -1; +} +static inline void write_uamor(u64 value) +{ + WARN(1, "%s called with MEMORY PROTECTION KEYS disabled\n", __func__); +} +static inline u64 read_iamr(void) +{ + WARN(1, "%s called with MEMORY PROTECTION KEYS disabled\n", __func__); + return -1; +} +static inline void write_iamr(u64 value) +{ + WARN(1, "%s called with MEMORY PROTECTION KEYS disabled\n", __func__); +} + +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + #define __HAVE_ARCH_PTEP_GET_AND_CLEAR static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep) -- 1.7.1
[RFC v5 16/38] powerpc: implementation for arch_set_user_pkey_access()
This patch provides the detailed implementation for a user to allocate a key and enable it in the hardware. It provides the plumbing, but it cannot be used yet till the system call is implemented. The next patch will do so. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/pkeys.h |8 - arch/powerpc/mm/Makefile |1 + arch/powerpc/mm/pkeys.c | 66 ++ 3 files changed, 74 insertions(+), 1 deletions(-) create mode 100644 arch/powerpc/mm/pkeys.c diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h index 9345767..1495342 100644 --- a/arch/powerpc/include/asm/pkeys.h +++ b/arch/powerpc/include/asm/pkeys.h @@ -2,6 +2,10 @@ #define _ASM_PPC64_PKEYS_H #define arch_max_pkey() 32 +#define AMR_AD_BIT 0x1UL +#define AMR_WD_BIT 0x2UL +#define IAMR_EX_BIT 0x1UL +#define AMR_BITS_PER_PKEY 2 #define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \ VM_PKEY_BIT3 | VM_PKEY_BIT4) /* @@ -93,10 +97,12 @@ static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma, return 0; } +extern int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey, + unsigned long init_val); static inline int arch_set_user_pkey_access(struct task_struct *tsk, int pkey, unsigned long init_val) { - return 0; + return __arch_set_user_pkey_access(tsk, pkey, init_val); } static inline void pkey_mm_init(struct mm_struct *mm) diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile index 7414034..8cc2ff1 100644 --- a/arch/powerpc/mm/Makefile +++ b/arch/powerpc/mm/Makefile @@ -45,3 +45,4 @@ obj-$(CONFIG_PPC_COPRO_BASE) += copro_fault.o obj-$(CONFIG_SPAPR_TCE_IOMMU) += mmu_context_iommu.o obj-$(CONFIG_PPC_PTDUMP) += dump_linuxpagetables.o obj-$(CONFIG_PPC_HTDUMP) += dump_hashpagetable.o +obj-$(CONFIG_PPC64_MEMORY_PROTECTION_KEYS) += pkeys.o diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c new file mode 100644 index 000..d3ba167 --- /dev/null +++ b/arch/powerpc/mm/pkeys.c @@ -0,0 +1,66 @@ +/* + * PowerPC Memory Protection Keys management + * Copyright (c) 2015, Intel Corporation. + * Copyright (c) 2017, IBM Corporation. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + */ +#include /* PKEY_* */ +#include + +/* + * set the access right in AMR IAMR and UAMOR register + * for @pkey to that specified in @init_val. + */ +int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey, + unsigned long init_val) +{ + u64 old_amr, old_uamor, old_iamr; + int pkey_shift = (arch_max_pkey()-pkey-1) * AMR_BITS_PER_PKEY; + u64 new_amr_bits = 0x0ul; + u64 new_iamr_bits = 0x0ul; + u64 new_uamor_bits = 0x3ul; + + /* Set the bits we need in AMR: */ + if (init_val & PKEY_DISABLE_ACCESS) + new_amr_bits |= AMR_AD_BIT | AMR_WD_BIT; + if (init_val & PKEY_DISABLE_WRITE) + new_amr_bits |= AMR_WD_BIT; + + /* +* By default execute is disabled. +* To enable execute, PKEY_ENABLE_EXECUTE +* needs to be specified. 
+*/ + if ((init_val & PKEY_DISABLE_EXECUTE)) + new_iamr_bits |= IAMR_EX_BIT; + + /* Shift the bits in to the correct place in AMR for pkey: */ + new_amr_bits<<= pkey_shift; + new_iamr_bits <<= pkey_shift; + new_uamor_bits <<= pkey_shift; + + /* Get old AMR and mask off any old bits in place: */ + old_amr = read_amr(); + old_amr &= ~((u64)(AMR_AD_BIT|AMR_WD_BIT) << pkey_shift); + + old_iamr = read_iamr(); + old_iamr &= ~(0x3ul << pkey_shift); + + old_uamor = read_uamor(); + old_uamor &= ~(0x3ul << pkey_shift); + + /* Write old part along with new part: */ + write_amr(old_amr | new_amr_bits); + write_iamr(old_iamr | new_iamr_bits); + write_uamor(old_uamor | new_uamor_bits); + + return 0; +} -- 1.7.1
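A quick standalone check of the shift arithmetic used above (illustrative only): each key owns a 2-bit field in AMR, and the pkey_shift expression places key k at bits 2k and 2k+1 counting from the most significant bit, which matches the big-endian numbering of the ISA.

#include <stdio.h>

#define ARCH_MAX_PKEY      32
#define AMR_BITS_PER_PKEY  2
#define AMR_AD_BIT         0x1ULL
#define AMR_WD_BIT         0x2ULL

int main(void)
{
	int pkey = 2;   /* example key */
	int pkey_shift = (ARCH_MAX_PKEY - pkey - 1) * AMR_BITS_PER_PKEY;   /* = 58 */

	/* deny write only: the WD bit of this key's 2-bit AMR field */
	unsigned long long amr_bits = AMR_WD_BIT << pkey_shift;

	/* for pkey 2 this is 0x0800000000000000, i.e. ISA bits 4-5 from the MSB */
	printf("pkey %d -> shift %d, AMR mask 0x%016llx\n", pkey, pkey_shift, amr_bits);
	return 0;
}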
[RFC v5 17/38] powerpc: sys_pkey_alloc() and sys_pkey_free() system calls
Finally this patch provides the ability for a process to allocate and free a protection key. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/systbl.h |2 ++ arch/powerpc/include/asm/unistd.h |4 +--- arch/powerpc/include/uapi/asm/unistd.h |2 ++ 3 files changed, 5 insertions(+), 3 deletions(-) diff --git a/arch/powerpc/include/asm/systbl.h b/arch/powerpc/include/asm/systbl.h index 1c94708..22dd776 100644 --- a/arch/powerpc/include/asm/systbl.h +++ b/arch/powerpc/include/asm/systbl.h @@ -388,3 +388,5 @@ COMPAT_SYS_SPU(pwritev2) SYSCALL(kexec_file_load) SYSCALL(statx) +SYSCALL(pkey_alloc) +SYSCALL(pkey_free) diff --git a/arch/powerpc/include/asm/unistd.h b/arch/powerpc/include/asm/unistd.h index 9ba11db..e0273bc 100644 --- a/arch/powerpc/include/asm/unistd.h +++ b/arch/powerpc/include/asm/unistd.h @@ -12,13 +12,11 @@ #include -#define NR_syscalls384 +#define NR_syscalls386 #define __NR__exit __NR_exit #define __IGNORE_pkey_mprotect -#define __IGNORE_pkey_alloc -#define __IGNORE_pkey_free #ifndef __ASSEMBLY__ diff --git a/arch/powerpc/include/uapi/asm/unistd.h b/arch/powerpc/include/uapi/asm/unistd.h index b85f142..7993a07 100644 --- a/arch/powerpc/include/uapi/asm/unistd.h +++ b/arch/powerpc/include/uapi/asm/unistd.h @@ -394,5 +394,7 @@ #define __NR_pwritev2 381 #define __NR_kexec_file_load 382 #define __NR_statx 383 +#define __NR_pkey_alloc384 +#define __NR_pkey_free 385 #endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */ -- 1.7.1
[RFC v5 18/38] powerpc: store and restore the pkey state across context switches
Store and restore the AMR, IAMR and UMOR register state of the task before scheduling out and after scheduling in, respectively. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/processor.h |5 + arch/powerpc/kernel/process.c| 18 ++ 2 files changed, 23 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h index a2123f2..1f714df 100644 --- a/arch/powerpc/include/asm/processor.h +++ b/arch/powerpc/include/asm/processor.h @@ -310,6 +310,11 @@ struct thread_struct { struct thread_vr_state ckvr_state; /* Checkpointed VR state */ unsigned long ckvrsave; /* Checkpointed VRSAVE */ #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */ +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + unsigned long amr; + unsigned long iamr; + unsigned long uamor; +#endif #ifdef CONFIG_KVM_BOOK3S_32_HANDLER void* kvm_shadow_vcpu; /* KVM internal data */ #endif /* CONFIG_KVM_BOOK3S_32_HANDLER */ diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c index baae104..37d001a 100644 --- a/arch/powerpc/kernel/process.c +++ b/arch/powerpc/kernel/process.c @@ -1096,6 +1096,11 @@ static inline void save_sprs(struct thread_struct *t) t->tar = mfspr(SPRN_TAR); } #endif +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + t->amr = mfspr(SPRN_AMR); + t->iamr = mfspr(SPRN_IAMR); + t->uamor = mfspr(SPRN_UAMOR); +#endif } static inline void restore_sprs(struct thread_struct *old_thread, @@ -1131,6 +1136,14 @@ static inline void restore_sprs(struct thread_struct *old_thread, mtspr(SPRN_TAR, new_thread->tar); } #endif +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + if (old_thread->amr != new_thread->amr) + mtspr(SPRN_AMR, new_thread->amr); + if (old_thread->iamr != new_thread->iamr) + mtspr(SPRN_IAMR, new_thread->iamr); + if (old_thread->uamor != new_thread->uamor) + mtspr(SPRN_UAMOR, new_thread->uamor); +#endif } struct task_struct *__switch_to(struct task_struct *prev, @@ -1686,6 +1699,11 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp) current->thread.tm_texasr = 0; current->thread.tm_tfiar = 0; #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */ +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + current->thread.amr = 0x0ul; + current->thread.iamr = 0x0ul; + current->thread.uamor = 0x0ul; +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ } EXPORT_SYMBOL(start_thread); -- 1.7.1
[RFC v5 19/38] powerpc: introduce execute-only pkey
This patch provides the implementation of execute-only pkey. The architecture-independent expects the ability to create and manage a special key which has execute-only permission. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/book3s/64/mmu.h |1 + arch/powerpc/include/asm/pkeys.h |6 +++- arch/powerpc/mm/pkeys.c | 59 ++ 3 files changed, 65 insertions(+), 1 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h index 104ad72..0c0a2a8 100644 --- a/arch/powerpc/include/asm/book3s/64/mmu.h +++ b/arch/powerpc/include/asm/book3s/64/mmu.h @@ -116,6 +116,7 @@ struct patb_entry { * bit unset -> key available for allocation */ u32 pkey_allocation_map; + s16 execute_only_pkey; /* key holding execute-only protection */ #endif } mm_context_t; diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h index 1495342..4b01c37 100644 --- a/arch/powerpc/include/asm/pkeys.h +++ b/arch/powerpc/include/asm/pkeys.h @@ -86,11 +86,13 @@ static inline int mm_pkey_free(struct mm_struct *mm, int pkey) * Try to dedicate one of the protection keys to be used as an * execute-only protection key. */ +extern int __execute_only_pkey(struct mm_struct *mm); static inline int execute_only_pkey(struct mm_struct *mm) { - return 0; + return __execute_only_pkey(mm); } + static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma, int prot, int pkey) { @@ -108,5 +110,7 @@ static inline int arch_set_user_pkey_access(struct task_struct *tsk, int pkey, static inline void pkey_mm_init(struct mm_struct *mm) { mm_pkey_allocation_map(mm) = PKEY_INITIAL_ALLOCAION; + /* -1 means unallocated or invalid */ + mm->context.execute_only_pkey = -1; } #endif /*_ASM_PPC64_PKEYS_H */ diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c index d3ba167..6c90317 100644 --- a/arch/powerpc/mm/pkeys.c +++ b/arch/powerpc/mm/pkeys.c @@ -64,3 +64,62 @@ int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey, return 0; } + +#define pkeyshift(pkey) ((arch_max_pkey()-pkey-1) * AMR_BITS_PER_PKEY) + +static inline bool pkey_allows_readwrite(int pkey) +{ + int pkey_shift = pkeyshift(pkey); + + if (!(read_uamor() & (0x3UL << pkey_shift))) + return true; + + return !(read_amr() & ((AMR_AD_BIT|AMR_WD_BIT) << pkey_shift)); +} + +int __execute_only_pkey(struct mm_struct *mm) +{ + bool need_to_set_mm_pkey = false; + int execute_only_pkey = mm->context.execute_only_pkey; + int ret; + + /* Do we need to assign a pkey for mm's execute-only maps? */ + if (execute_only_pkey == -1) { + /* Go allocate one to use, which might fail */ + execute_only_pkey = mm_pkey_alloc(mm); + if (execute_only_pkey < 0) + return -1; + need_to_set_mm_pkey = true; + } + + /* +* We do not want to go through the relatively costly +* dance to set AMR if we do not need to. Check it +* first and assume that if the execute-only pkey is +* readwrite-disabled than we do not have to set it +* ourselves. +*/ + if (!need_to_set_mm_pkey && + !pkey_allows_readwrite(execute_only_pkey)) + return execute_only_pkey; + + /* +* Set up AMR so that it denies access for everything +* other than execution. +*/ + ret = __arch_set_user_pkey_access(current, execute_only_pkey, + (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE)); + /* +* If the AMR-set operation failed somehow, just return +* 0 and effectively disable execute-only support. 
+*/ + if (ret) { + mm_set_pkey_free(mm, execute_only_pkey); + return -1; + } + + /* We got one, store it and use it from here on out */ + if (need_to_set_mm_pkey) + mm->context.execute_only_pkey = execute_only_pkey; + return execute_only_pkey; +} -- 1.7.1
[RFC v5 20/38] powerpc: ability to associate pkey to a vma
arch-independent code expects the arch to map a pkey into the vma's protection bit setting. The patch provides that ability. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/mman.h |8 +++- arch/powerpc/include/asm/pkeys.h | 14 -- 2 files changed, 19 insertions(+), 3 deletions(-) diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h index 30922f6..067eec2 100644 --- a/arch/powerpc/include/asm/mman.h +++ b/arch/powerpc/include/asm/mman.h @@ -13,6 +13,7 @@ #include #include +#include #include /* @@ -22,7 +23,12 @@ static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot, unsigned long pkey) { - return (prot & PROT_SAO) ? VM_SAO : 0; +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + return (((prot & PROT_SAO) ? VM_SAO : 0) | + pkey_to_vmflag_bits(pkey)); +#else + return ((prot & PROT_SAO) ? VM_SAO : 0); +#endif } #define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey) diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h index 4b01c37..f148e84 100644 --- a/arch/powerpc/include/asm/pkeys.h +++ b/arch/powerpc/include/asm/pkeys.h @@ -1,13 +1,23 @@ #ifndef _ASM_PPC64_PKEYS_H #define _ASM_PPC64_PKEYS_H +#define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \ + VM_PKEY_BIT3 | VM_PKEY_BIT4) + +static inline u64 pkey_to_vmflag_bits(u16 pkey) +{ + return (((pkey & 0x1UL) ? VM_PKEY_BIT0 : 0x0UL) | + ((pkey & 0x2UL) ? VM_PKEY_BIT1 : 0x0UL) | + ((pkey & 0x4UL) ? VM_PKEY_BIT2 : 0x0UL) | + ((pkey & 0x8UL) ? VM_PKEY_BIT3 : 0x0UL) | + ((pkey & 0x10UL) ? VM_PKEY_BIT4 : 0x0UL)); +} + #define arch_max_pkey() 32 #define AMR_AD_BIT 0x1UL #define AMR_WD_BIT 0x2UL #define IAMR_EX_BIT 0x1UL #define AMR_BITS_PER_PKEY 2 -#define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \ - VM_PKEY_BIT3 | VM_PKEY_BIT4) /* * Bits are in BE format. * NOTE: key 31, 1, 0 are not used. -- 1.7.1
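As a sanity check on the encoding (sketch only, not part of the patch): key number 5, binary 00101, sets VM_PKEY_BIT0 and VM_PKEY_BIT2; decoding is just a masked shift back down, which is what the vma_pkey() helper added in a later patch does.

	/* sketch: encode and decode must round-trip for any key 0..31 */
	u64 flags = pkey_to_vmflag_bits(5);                            /* VM_PKEY_BIT0 | VM_PKEY_BIT2 */
	int pkey  = (flags & ARCH_VM_PKEY_FLAGS) >> VM_PKEY_SHIFT;     /* == 5 */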
[RFC v5 21/38] powerpc: implementation for arch_override_mprotect_pkey()
arch independent code calls arch_override_mprotect_pkey() to return a pkey that best matches the requested protection. This patch provides the implementation. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/pkeys.h | 10 ++- arch/powerpc/mm/pkeys.c | 47 ++ 2 files changed, 55 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h index f148e84..20846c2 100644 --- a/arch/powerpc/include/asm/pkeys.h +++ b/arch/powerpc/include/asm/pkeys.h @@ -13,6 +13,11 @@ static inline u64 pkey_to_vmflag_bits(u16 pkey) ((pkey & 0x10UL) ? VM_PKEY_BIT4 : 0x0UL)); } +static inline int vma_pkey(struct vm_area_struct *vma) +{ + return (vma->vm_flags & ARCH_VM_PKEY_FLAGS) >> VM_PKEY_SHIFT; +} + #define arch_max_pkey() 32 #define AMR_AD_BIT 0x1UL #define AMR_WD_BIT 0x2UL @@ -102,11 +107,12 @@ static inline int execute_only_pkey(struct mm_struct *mm) return __execute_only_pkey(mm); } - +extern int __arch_override_mprotect_pkey(struct vm_area_struct *vma, + int prot, int pkey); static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma, int prot, int pkey) { - return 0; + return __arch_override_mprotect_pkey(vma, prot, pkey); } extern int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey, diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c index 6c90317..c60a045 100644 --- a/arch/powerpc/mm/pkeys.c +++ b/arch/powerpc/mm/pkeys.c @@ -123,3 +123,50 @@ int __execute_only_pkey(struct mm_struct *mm) mm->context.execute_only_pkey = execute_only_pkey; return execute_only_pkey; } + +static inline bool vma_is_pkey_exec_only(struct vm_area_struct *vma) +{ + /* Do this check first since the vm_flags should be hot */ + if ((vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC)) != VM_EXEC) + return false; + + return (vma_pkey(vma) == vma->vm_mm->context.execute_only_pkey); +} + +/* + * This should only be called for *plain* mprotect calls. + */ +int __arch_override_mprotect_pkey(struct vm_area_struct *vma, int prot, + int pkey) +{ + /* +* Is this an mprotect_pkey() call? If so, never +* override the value that came from the user. +*/ + if (pkey != -1) + return pkey; + + /* +* If the currently associated pkey is execute-only, +* but the requested protection requires read or write, +* move it back to the default pkey. +*/ + if (vma_is_pkey_exec_only(vma) && + (prot & (PROT_READ|PROT_WRITE))) + return 0; + + /* +* the requested protection is execute-only. Hence +* lets use a execute-only pkey. +*/ + if (prot == PROT_EXEC) { + pkey = execute_only_pkey(vma->vm_mm); + if (pkey > 0) + return pkey; + } + + /* +* nothing to override. +*/ + return vma_pkey(vma); +} -- 1.7.1
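The practical effect, seen from userspace, is that a plain mprotect() to PROT_EXEC can transparently switch a region to the execute-only key. A hedged illustration (error handling omitted; whether the execute-only key is actually used depends on the checks in the patch above):

#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 4096;

	/* build some content in a normal read-write mapping */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	memset(p, 0, len);

	/* plain mprotect(), no pkey syscalls involved: with this series the
	 * kernel may back the mapping with the execute-only pkey, so that
	 * instruction fetch is allowed while loads and stores fault */
	mprotect(p, len, PROT_EXEC);

	munmap(p, len);
	return 0;
}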
[RFC v5 22/38] powerpc: map vma key-protection bits to pte key bits.
map the pkey bits in the pte from the key protection bits of the vma. The pte bits used for pkey are 3,4,5,6 and 57. The first four bits are the same four bits that were freed up initially in this patch series. remember? :-) Without those four bits this patch would'nt be possible. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/book3s/64/pgtable.h | 20 +++- arch/powerpc/include/asm/mman.h |8 arch/powerpc/include/asm/pkeys.h |9 + 3 files changed, 36 insertions(+), 1 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h index 435d6a7..d9c87c4 100644 --- a/arch/powerpc/include/asm/book3s/64/pgtable.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h @@ -37,6 +37,7 @@ #define _RPAGE_RSV20x0800UL #define _RPAGE_RSV30x0400UL #define _RPAGE_RSV40x0200UL +#define _RPAGE_RSV50x00040UL #define _PAGE_PTE 0x4000UL/* distinguishes PTEs from pointers */ #define _PAGE_PRESENT 0x8000UL/* pte contains a translation */ @@ -56,6 +57,20 @@ /* Max physical address bit as per radix table */ #define _RPAGE_PA_MAX 57 +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS +#define H_PAGE_PKEY_BIT0 _RPAGE_RSV1 +#define H_PAGE_PKEY_BIT1 _RPAGE_RSV2 +#define H_PAGE_PKEY_BIT2 _RPAGE_RSV3 +#define H_PAGE_PKEY_BIT3 _RPAGE_RSV4 +#define H_PAGE_PKEY_BIT4 _RPAGE_RSV5 +#else /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ +#define H_PAGE_PKEY_BIT0 0 +#define H_PAGE_PKEY_BIT1 0 +#define H_PAGE_PKEY_BIT2 0 +#define H_PAGE_PKEY_BIT3 0 +#define H_PAGE_PKEY_BIT4 0 +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + /* * Max physical address bit we will use for now. * @@ -116,13 +131,16 @@ #define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \ _PAGE_ACCESSED | _PAGE_SPECIAL | _PAGE_PTE | \ _PAGE_SOFT_DIRTY) + +#define H_PAGE_PKEY (H_PAGE_PKEY_BIT0 | H_PAGE_PKEY_BIT1 | H_PAGE_PKEY_BIT2 | \ + H_PAGE_PKEY_BIT3 | H_PAGE_PKEY_BIT4) /* * Mask of bits returned by pte_pgprot() */ #define PAGE_PROT_BITS (_PAGE_SAO | _PAGE_NON_IDEMPOTENT | _PAGE_TOLERANT | \ H_PAGE_4K_PFN | _PAGE_PRIVILEGED | _PAGE_ACCESSED | \ _PAGE_READ | _PAGE_WRITE | _PAGE_DIRTY | _PAGE_EXEC | \ -_PAGE_SOFT_DIRTY) +_PAGE_SOFT_DIRTY | H_PAGE_PKEY) /* * We define 2 sets of base prot bits, one for basic pages (ie, * cacheable kernel and user pages) and one for non cacheable diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h index 067eec2..3f7220f 100644 --- a/arch/powerpc/include/asm/mman.h +++ b/arch/powerpc/include/asm/mman.h @@ -32,12 +32,20 @@ static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot, } #define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey) + static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags) { +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + return (vm_flags & VM_SAO) ? + __pgprot(_PAGE_SAO | vmflag_to_page_pkey_bits(vm_flags)) : + __pgprot(0 | vmflag_to_page_pkey_bits(vm_flags)); +#else return (vm_flags & VM_SAO) ? __pgprot(_PAGE_SAO) : __pgprot(0); +#endif } #define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags) + static inline bool arch_validate_prot(unsigned long prot) { if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_SAO)) diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h index 20846c2..c681de9 100644 --- a/arch/powerpc/include/asm/pkeys.h +++ b/arch/powerpc/include/asm/pkeys.h @@ -13,6 +13,15 @@ static inline u64 pkey_to_vmflag_bits(u16 pkey) ((pkey & 0x10UL) ? 
VM_PKEY_BIT4 : 0x0UL)); } +static inline u64 vmflag_to_page_pkey_bits(u64 vm_flags) +{ + return (((vm_flags & VM_PKEY_BIT0) ? H_PAGE_PKEY_BIT4 : 0x0UL) | + ((vm_flags & VM_PKEY_BIT1) ? H_PAGE_PKEY_BIT3 : 0x0UL) | + ((vm_flags & VM_PKEY_BIT2) ? H_PAGE_PKEY_BIT2 : 0x0UL) | + ((vm_flags & VM_PKEY_BIT3) ? H_PAGE_PKEY_BIT1 : 0x0UL) | + ((vm_flags & VM_PKEY_BIT4) ? H_PAGE_PKEY_BIT0 : 0x0UL)); +} + static inline int vma_pkey(struct vm_area_struct *vma) { return (vma->vm_flags & ARCH_VM_PKEY_FLAGS) >> VM_PKEY_SHIFT; -- 1.7.1
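Note the deliberate reversal in vmflag_to_page_pkey_bits() above; a short worked example (illustrative, using the bit assignments from this patch):

	/* worked example: pkey 0b10001 == 17
	 *   VM_PKEY_BIT0 -> H_PAGE_PKEY_BIT4 -> _RPAGE_RSV5 (PTE bit 57)
	 *   VM_PKEY_BIT4 -> H_PAGE_PKEY_BIT0 -> _RPAGE_RSV1 (PTE bit 3)
	 * The low-order key bit lands in the high-numbered PTE bit and vice
	 * versa, because the H_PAGE_PKEY_BITx names follow the big-endian
	 * bit numbering of the ISA while the VM_PKEY_BITx flags encode the
	 * key value least-significant-bit first. */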
[RFC v5 23/38] powerpc: sys_pkey_mprotect() system call
Patch provides the ability for a process to associate a pkey with a address range. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/systbl.h |1 + arch/powerpc/include/asm/unistd.h |4 +--- arch/powerpc/include/uapi/asm/unistd.h |1 + 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/powerpc/include/asm/systbl.h b/arch/powerpc/include/asm/systbl.h index 22dd776..b33b551 100644 --- a/arch/powerpc/include/asm/systbl.h +++ b/arch/powerpc/include/asm/systbl.h @@ -390,3 +390,4 @@ SYSCALL(statx) SYSCALL(pkey_alloc) SYSCALL(pkey_free) +SYSCALL(pkey_mprotect) diff --git a/arch/powerpc/include/asm/unistd.h b/arch/powerpc/include/asm/unistd.h index e0273bc..daf1ba9 100644 --- a/arch/powerpc/include/asm/unistd.h +++ b/arch/powerpc/include/asm/unistd.h @@ -12,12 +12,10 @@ #include -#define NR_syscalls386 +#define NR_syscalls387 #define __NR__exit __NR_exit -#define __IGNORE_pkey_mprotect - #ifndef __ASSEMBLY__ #include diff --git a/arch/powerpc/include/uapi/asm/unistd.h b/arch/powerpc/include/uapi/asm/unistd.h index 7993a07..71ae45e 100644 --- a/arch/powerpc/include/uapi/asm/unistd.h +++ b/arch/powerpc/include/uapi/asm/unistd.h @@ -396,5 +396,6 @@ #define __NR_statx 383 #define __NR_pkey_alloc384 #define __NR_pkey_free 385 +#define __NR_pkey_mprotect 386 #endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */ -- 1.7.1
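With pkey_alloc(), pkey_mprotect() and pkey_free() all wired up, a complete round trip from userspace looks roughly like the following. This is a sketch, not part of the series: the syscall numbers are the powerpc ones defined by these patches, a libc would normally provide wrappers, and error handling is mostly omitted.

#include <stdio.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#define __NR_pkey_alloc     384
#define __NR_pkey_free      385
#define __NR_pkey_mprotect  386

#define PKEY_DISABLE_WRITE  0x2

int main(void)
{
	size_t len = 4096;
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	long pkey = syscall(__NR_pkey_alloc, 0, PKEY_DISABLE_WRITE);
	if (pkey < 0) {
		perror("pkey_alloc");
		return 1;
	}

	/* tag the mapping with the key; reads still work, writes now depend
	 * on the AMR state installed for this key at allocation time */
	syscall(__NR_pkey_mprotect, p, len, PROT_READ | PROT_WRITE, pkey);

	printf("first byte: %d\n", p[0]);   /* read is fine */
	/* p[0] = 1; */                     /* a store here would fault */

	syscall(__NR_pkey_free, pkey);
	munmap(p, len);
	return 0;
}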
[RFC v5 24/38] powerpc: Program HPTE key protection bits
Map the PTE protection key bits to the HPTE key protection bits, while creating HPTE entries. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/book3s/64/mmu-hash.h |5 + arch/powerpc/include/asm/pkeys.h |9 + arch/powerpc/mm/hash_utils_64.c |5 + 3 files changed, 19 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h index 6981a52..f7a6ed3 100644 --- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h +++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h @@ -90,6 +90,8 @@ #define HPTE_R_PP0 ASM_CONST(0x8000) #define HPTE_R_TS ASM_CONST(0x4000) #define HPTE_R_KEY_HI ASM_CONST(0x3000) +#define HPTE_R_KEY_BIT0ASM_CONST(0x2000) +#define HPTE_R_KEY_BIT1ASM_CONST(0x1000) #define HPTE_R_RPN_SHIFT 12 #define HPTE_R_RPN ASM_CONST(0x0000) #define HPTE_R_RPN_3_0 ASM_CONST(0x01fff000) @@ -104,6 +106,9 @@ #define HPTE_R_C ASM_CONST(0x0080) #define HPTE_R_R ASM_CONST(0x0100) #define HPTE_R_KEY_LO ASM_CONST(0x0e00) +#define HPTE_R_KEY_BIT2ASM_CONST(0x0800) +#define HPTE_R_KEY_BIT3ASM_CONST(0x0400) +#define HPTE_R_KEY_BIT4ASM_CONST(0x0200) #define HPTE_V_1TB_SEG ASM_CONST(0x4000) #define HPTE_V_VRMA_MASK ASM_CONST(0x4001ff00) diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h index c681de9..6477b87 100644 --- a/arch/powerpc/include/asm/pkeys.h +++ b/arch/powerpc/include/asm/pkeys.h @@ -22,6 +22,15 @@ static inline u64 vmflag_to_page_pkey_bits(u64 vm_flags) ((vm_flags & VM_PKEY_BIT4) ? H_PAGE_PKEY_BIT0 : 0x0UL)); } +static inline u64 pte_to_hpte_pkey_bits(u64 pteflags) +{ + return (((pteflags & H_PAGE_PKEY_BIT0) ? HPTE_R_KEY_BIT0 : 0x0UL) | + ((pteflags & H_PAGE_PKEY_BIT1) ? HPTE_R_KEY_BIT1 : 0x0UL) | + ((pteflags & H_PAGE_PKEY_BIT2) ? HPTE_R_KEY_BIT2 : 0x0UL) | + ((pteflags & H_PAGE_PKEY_BIT3) ? HPTE_R_KEY_BIT3 : 0x0UL) | + ((pteflags & H_PAGE_PKEY_BIT4) ? HPTE_R_KEY_BIT4 : 0x0UL)); +} + static inline int vma_pkey(struct vm_area_struct *vma) { return (vma->vm_flags & ARCH_VM_PKEY_FLAGS) >> VM_PKEY_SHIFT; diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c index d863696..1e74529 100644 --- a/arch/powerpc/mm/hash_utils_64.c +++ b/arch/powerpc/mm/hash_utils_64.c @@ -35,6 +35,7 @@ #include #include #include +#include #include #include @@ -230,6 +231,10 @@ unsigned long htab_convert_pte_flags(unsigned long pteflags) */ rflags |= HPTE_R_M; +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + rflags |= pte_to_hpte_pkey_bits(pteflags); +#endif + return rflags; } -- 1.7.1
[RFC v5 25/38] powerpc: helper to validate key-access permissions of a pte
helper function that checks if the read/write/execute is allowed on the pte. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/book3s/64/pgtable.h |2 + arch/powerpc/include/asm/pkeys.h |9 +++ arch/powerpc/mm/pkeys.c | 31 ++ 3 files changed, 42 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h index d9c87c4..aad205c 100644 --- a/arch/powerpc/include/asm/book3s/64/pgtable.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h @@ -474,6 +474,8 @@ static inline void write_uamor(u64 value) mtspr(SPRN_UAMOR, value); } +extern bool arch_pte_access_permitted(u64 pte, bool write, bool execute); + #else /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ static inline u64 read_amr(void) diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h index 6477b87..01f2bfc 100644 --- a/arch/powerpc/include/asm/pkeys.h +++ b/arch/powerpc/include/asm/pkeys.h @@ -31,6 +31,15 @@ static inline u64 pte_to_hpte_pkey_bits(u64 pteflags) ((pteflags & H_PAGE_PKEY_BIT4) ? HPTE_R_KEY_BIT4 : 0x0UL)); } +static inline u16 pte_to_pkey_bits(u64 pteflags) +{ + return (((pteflags & H_PAGE_PKEY_BIT0) ? 0x10 : 0x0UL) | + ((pteflags & H_PAGE_PKEY_BIT1) ? 0x8 : 0x0UL) | + ((pteflags & H_PAGE_PKEY_BIT2) ? 0x4 : 0x0UL) | + ((pteflags & H_PAGE_PKEY_BIT3) ? 0x2 : 0x0UL) | + ((pteflags & H_PAGE_PKEY_BIT4) ? 0x1 : 0x0UL)); +} + static inline int vma_pkey(struct vm_area_struct *vma) { return (vma->vm_flags & ARCH_VM_PKEY_FLAGS) >> VM_PKEY_SHIFT; diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c index c60a045..044a17d 100644 --- a/arch/powerpc/mm/pkeys.c +++ b/arch/powerpc/mm/pkeys.c @@ -170,3 +170,34 @@ int __arch_override_mprotect_pkey(struct vm_area_struct *vma, int prot, */ return vma_pkey(vma); } + +static bool pkey_access_permitted(int pkey, bool write, bool execute) +{ + int pkey_shift; + u64 amr; + + if (!pkey) + return true; + + pkey_shift = pkeyshift(pkey); + if (!(read_uamor() & (0x3UL << pkey_shift))) + return true; + + if (execute && !(read_iamr() & (IAMR_EX_BIT << pkey_shift))) + return true; + + if (!write) { + amr = read_amr(); + if (!(amr & (AMR_AD_BIT << pkey_shift))) + return true; + } + + amr = read_amr(); /* delay reading amr uptil absolutely needed */ + return (write && !(amr & (AMR_WD_BIT << pkey_shift))); +} + +bool arch_pte_access_permitted(u64 pte, bool write, bool execute) +{ + return pkey_access_permitted(pte_to_pkey_bits(pte), + write, execute); +} -- 1.7.1
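To make the register semantics above concrete, here is a small userspace model (not kernel code) of the read/write decision as the patch implements it: a key is only enforced if its UAMOR field is non-zero, AD blocks reads and WD blocks writes; the execute/IAMR path is left out of this sketch.

#include <stdbool.h>
#include <stdio.h>

#define AMR_AD_BIT 0x1ULL
#define AMR_WD_BIT 0x2ULL

static int pkeyshift(int pkey) { return (32 - pkey - 1) * 2; }

/* model of pkey_access_permitted() for the read/write case */
static bool access_permitted(unsigned long long amr, unsigned long long uamor,
			     int pkey, bool write)
{
	int shift = pkeyshift(pkey);

	if (!pkey)
		return true;                              /* key 0 is never restricted */
	if (!((uamor >> shift) & 0x3))
		return true;                              /* key not under user control */
	if (!write)
		return !((amr >> shift) & AMR_AD_BIT);    /* AD set -> reads blocked */
	return !((amr >> shift) & AMR_WD_BIT);            /* WD set -> writes blocked */
}

int main(void)
{
	int pkey = 2;
	unsigned long long uamor = 0x3ULL << pkeyshift(pkey);
	unsigned long long amr   = AMR_WD_BIT << pkeyshift(pkey);   /* write-deny only */

	printf("read  allowed: %d\n", access_permitted(amr, uamor, pkey, false));
	printf("write allowed: %d\n", access_permitted(amr, uamor, pkey, true));
	return 0;
}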
[RFC v5 26/38] powerpc: check key protection for user page access
Make sure that the kernel does not access user pages without checking their key-protection. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/book3s/64/pgtable.h | 14 ++ 1 files changed, 14 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h index aad205c..d590f30 100644 --- a/arch/powerpc/include/asm/book3s/64/pgtable.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h @@ -476,6 +476,20 @@ static inline void write_uamor(u64 value) extern bool arch_pte_access_permitted(u64 pte, bool write, bool execute); +#define pte_access_permitted(pte, write) \ + (pte_present(pte) && \ +((!(write) || pte_write(pte)) && \ + arch_pte_access_permitted(pte_val(pte), !!write, 0))) + +/* + * We store key in pmd for huge tlb pages. So need + * to check for key protection. + */ +#define pmd_access_permitted(pmd, write) \ + (pmd_present(pmd) && \ +((!(write) || pmd_write(pmd)) && \ + arch_pte_access_permitted(pmd_val(pmd), !!write, 0))) + #else /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ static inline u64 read_amr(void) -- 1.7.1
[RFC v5 27/38] powerpc: Macro the mask used for checking DSI exception
Replace the magic number used to check for DSI exception with a meaningful value. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/reg.h |7 ++- arch/powerpc/kernel/exceptions-64s.S |2 +- 2 files changed, 7 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h index 7e50e47..ba110dd 100644 --- a/arch/powerpc/include/asm/reg.h +++ b/arch/powerpc/include/asm/reg.h @@ -272,16 +272,21 @@ #define SPRN_DAR 0x013 /* Data Address Register */ #define SPRN_DBCR 0x136 /* e300 Data Breakpoint Control Reg */ #define SPRN_DSISR 0x012 /* Data Storage Interrupt Status Register */ +#define DSISR_BIT32 0x8000 /* not defined */ #define DSISR_NOHPTE 0x4000 /* no translation found */ +#define DSISR_PAGEATTR_CONFLT0x2000 /* page attribute conflict */ +#define DSISR_BIT35 0x1000 /* not defined */ #define DSISR_PROTFAULT 0x0800 /* protection fault */ #define DSISR_BADACCESS 0x0400 /* bad access to CI or G */ #define DSISR_ISSTORE0x0200 /* access was a store */ #define DSISR_DABRMATCH 0x0040 /* hit data breakpoint */ -#define DSISR_NOSEGMENT 0x0020 /* SLB miss */ #define DSISR_KEYFAULT 0x0020 /* Key fault */ +#define DSISR_BIT43 0x0010 /* not defined */ #define DSISR_UNSUPP_MMU 0x0008 /* Unsupported MMU config */ #define DSISR_SET_RC 0x0004 /* Failed setting of R/C bits */ #define DSISR_PGDIRFAULT 0x0002 /* Fault on page directory */ +#define DSISR_PAGE_FAULT_MASK (DSISR_BIT32 | DSISR_PAGEATTR_CONFLT | \ + DSISR_BADACCESS | DSISR_BIT43) #define SPRN_TBRL 0x10C /* Time Base Read Lower Register (user, R/O) */ #define SPRN_TBRU 0x10D /* Time Base Read Upper Register (user, R/O) */ #define SPRN_CIR 0x11B /* Chip Information Register (hyper, R/0) */ diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index ae418b8..3fd0528 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -1411,7 +1411,7 @@ USE_TEXT_SECTION() .balign IFETCH_ALIGN_BYTES do_hash_page: #ifdef CONFIG_PPC_STD_MMU_64 - andis. r0,r4,0xa410/* weird error? */ + andis. r0,r4,DSISR_PAGE_FAULT_MASK@h bne-handle_page_fault /* if not, try to insert a HPTE */ andis. r0,r4,DSISR_DABRMATCH@h bne-handle_dabr_fault -- 1.7.1
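A quick sanity check that the named mask is the old magic number, using the assumed full-width values of the DSISR bits (the diff abbreviates them): `andis.` compares against the high halfword, i.e. DSISR_PAGE_FAULT_MASK@h, and that comes out to exactly 0xa410.

    #include <assert.h>
    #include <stdio.h>

    /* Full-width values assumed for the kernel constants in the hunk above. */
    #define DSISR_BIT32             0x80000000u
    #define DSISR_PAGEATTR_CONFLT   0x20000000u
    #define DSISR_BADACCESS         0x04000000u
    #define DSISR_BIT43             0x00100000u

    #define DSISR_PAGE_FAULT_MASK   (DSISR_BIT32 | DSISR_PAGEATTR_CONFLT | \
                                     DSISR_BADACCESS | DSISR_BIT43)

    int main(void)
    {
            /* andis. uses the upper 16 bits, i.e. MASK@h */
            assert((DSISR_PAGE_FAULT_MASK >> 16) == 0xa410);
            printf("DSISR_PAGE_FAULT_MASK = %#x (@h = %#x)\n",
                   DSISR_PAGE_FAULT_MASK, DSISR_PAGE_FAULT_MASK >> 16);
            return 0;
    }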
[RFC v5 28/38] powerpc: implementation for arch_vma_access_permitted()
This patch provides the implementation for arch_vma_access_permitted(). Returns true if the requested access is allowed by pkey associated with the vma. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/mmu_context.h |5 arch/powerpc/mm/pkeys.c| 40 2 files changed, 45 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h index da7e943..bf69ff9 100644 --- a/arch/powerpc/include/asm/mmu_context.h +++ b/arch/powerpc/include/asm/mmu_context.h @@ -175,11 +175,16 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm, { } +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS +bool arch_vma_access_permitted(struct vm_area_struct *vma, + bool write, bool execute, bool foreign); +#else /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma, bool write, bool execute, bool foreign) { /* by default, allow everything */ return true; } +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ #endif /* __KERNEL__ */ #endif /* __ASM_POWERPC_MMU_CONTEXT_H */ diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c index 044a17d..f89a048 100644 --- a/arch/powerpc/mm/pkeys.c +++ b/arch/powerpc/mm/pkeys.c @@ -201,3 +201,43 @@ bool arch_pte_access_permitted(u64 pte, bool write, bool execute) return pkey_access_permitted(pte_to_pkey_bits(pte), write, execute); } + +/* + * We only want to enforce protection keys on the current process + * because we effectively have no access to AMR/IAMR for other + * processes or any way to tell *which * AMR/IAMR in a threaded + * process we could use. + * + * So do not enforce things if the VMA is not from the current + * mm, or if we are in a kernel thread. + */ +static inline bool vma_is_foreign(struct vm_area_struct *vma) +{ + if (!current->mm) + return true; + /* +* if the VMA is from another process, then AMR/IAMR has no +* relevance and should not be enforced. +*/ + if (current->mm != vma->vm_mm) + return true; + + return false; +} + +bool arch_vma_access_permitted(struct vm_area_struct *vma, + bool write, bool execute, bool foreign) +{ + int pkey; + + /* allow access if the VMA is not one from this process */ + if (foreign || vma_is_foreign(vma)) + return true; + + pkey = vma_pkey(vma); + + if (!pkey) + return true; + + return pkey_access_permitted(pkey, write, execute); +} -- 1.7.1
[RFC v5 29/38] powerpc: Handle exceptions caused by pkey violation
Handle Data and Instruction exceptions caused by memory protection-key. The CPU will detect the key fault if the HPTE is already programmed with the key. However if the HPTE is not hashed, a key fault will not be detected by the hardware. The software will detect pkey violation in such a case. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/reg.h |2 +- arch/powerpc/mm/fault.c| 21 + 2 files changed, 22 insertions(+), 1 deletions(-) diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h index ba110dd..6e2a860 100644 --- a/arch/powerpc/include/asm/reg.h +++ b/arch/powerpc/include/asm/reg.h @@ -286,7 +286,7 @@ #define DSISR_SET_RC 0x0004 /* Failed setting of R/C bits */ #define DSISR_PGDIRFAULT 0x0002 /* Fault on page directory */ #define DSISR_PAGE_FAULT_MASK (DSISR_BIT32 | DSISR_PAGEATTR_CONFLT | \ - DSISR_BADACCESS | DSISR_BIT43) + DSISR_BADACCESS | DSISR_KEYFAULT | DSISR_BIT43) #define SPRN_TBRL 0x10C /* Time Base Read Lower Register (user, R/O) */ #define SPRN_TBRU 0x10D /* Time Base Read Upper Register (user, R/O) */ #define SPRN_CIR 0x11B /* Chip Information Register (hyper, R/0) */ diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c index 3a7d580..ea74fe2 100644 --- a/arch/powerpc/mm/fault.c +++ b/arch/powerpc/mm/fault.c @@ -261,6 +261,13 @@ int do_page_fault(struct pt_regs *regs, unsigned long address, } #endif +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + if (error_code & DSISR_KEYFAULT) { + code = SEGV_PKUERR; + goto bad_area_nosemaphore; + } +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + /* We restore the interrupt state now */ if (!arch_irq_disabled_regs(regs)) local_irq_enable(); @@ -441,6 +448,20 @@ int do_page_fault(struct pt_regs *regs, unsigned long address, WARN_ON_ONCE(error_code & DSISR_PROTFAULT); #endif /* CONFIG_PPC_STD_MMU */ +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE, + is_exec, 0)) { + code = SEGV_PKUERR; + goto bad_area; + } +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + + + /* handle_mm_fault() needs to know if its a instruction access +* fault. +*/ + if (is_exec) + flags |= FAULT_FLAG_INSTRUCTION; /* * If for any reason at all we couldn't handle the fault, * make sure we exit gracefully rather than endlessly redo -- 1.7.1
[RFC v5 30/38] powerpc: capture AMR register content on pkey violation
capture AMR register contents, and save it in paca whenever a pkey violation is detected. This value will be needed to deliver pkey-violation signal to the task. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/paca.h |3 +++ arch/powerpc/kernel/asm-offsets.c |5 + arch/powerpc/mm/fault.c |2 ++ 3 files changed, 10 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h index 1c09f8f..c8bd1fc 100644 --- a/arch/powerpc/include/asm/paca.h +++ b/arch/powerpc/include/asm/paca.h @@ -92,6 +92,9 @@ struct paca_struct { struct dtl_entry *dispatch_log_end; #endif /* CONFIG_PPC_STD_MMU_64 */ u64 dscr_default; /* per-CPU default DSCR */ +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + u64 paca_amr; /* value of amr at exception */ +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ #ifdef CONFIG_PPC_STD_MMU_64 /* diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c index 709e234..17f5d8a 100644 --- a/arch/powerpc/kernel/asm-offsets.c +++ b/arch/powerpc/kernel/asm-offsets.c @@ -241,6 +241,11 @@ int main(void) OFFSET(PACAHWCPUID, paca_struct, hw_cpu_id); OFFSET(PACAKEXECSTATE, paca_struct, kexec_state); OFFSET(PACA_DSCR_DEFAULT, paca_struct, dscr_default); + +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + OFFSET(PACA_AMR, paca_struct, paca_amr); +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + OFFSET(ACCOUNT_STARTTIME, paca_struct, accounting.starttime); OFFSET(ACCOUNT_STARTTIME_USER, paca_struct, accounting.starttime_user); OFFSET(ACCOUNT_USER_TIME, paca_struct, accounting.utime); diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c index ea74fe2..a6710f5 100644 --- a/arch/powerpc/mm/fault.c +++ b/arch/powerpc/mm/fault.c @@ -264,6 +264,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address, #ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS if (error_code & DSISR_KEYFAULT) { code = SEGV_PKUERR; + get_paca()->paca_amr = read_amr(); goto bad_area_nosemaphore; } #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ @@ -451,6 +452,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address, #ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE, is_exec, 0)) { + get_paca()->paca_amr = read_amr(); code = SEGV_PKUERR; goto bad_area; } -- 1.7.1
[RFC v5 31/38] powerpc: introduce get_pte_pkey() helper
get_pte_pkey() helper returns the pkey associated with a address corresponding to a given mm_struct. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/book3s/64/mmu-hash.h |5 arch/powerpc/mm/hash_utils_64.c | 28 + 2 files changed, 33 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h index f7a6ed3..369f9ff 100644 --- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h +++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h @@ -450,6 +450,11 @@ extern int hash_page(unsigned long ea, unsigned long access, unsigned long trap, int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid, pte_t *ptep, unsigned long trap, unsigned long flags, int ssize, unsigned int shift, unsigned int mmu_psize); + +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS +u16 get_pte_pkey(struct mm_struct *mm, unsigned long address); +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + #ifdef CONFIG_TRANSPARENT_HUGEPAGE extern int __hash_page_thp(unsigned long ea, unsigned long access, unsigned long vsid, pmd_t *pmdp, unsigned long trap, diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c index 1e74529..591990c 100644 --- a/arch/powerpc/mm/hash_utils_64.c +++ b/arch/powerpc/mm/hash_utils_64.c @@ -1573,6 +1573,34 @@ void hash_preload(struct mm_struct *mm, unsigned long ea, local_irq_restore(flags); } +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS +/* + * return the protection key associated with the given address + * and the mm_struct. + */ +u16 get_pte_pkey(struct mm_struct *mm, unsigned long address) +{ + pte_t *ptep; + u16 pkey = 0; + unsigned long flags; + + if (REGION_ID(address) == VMALLOC_REGION_ID) + mm = &init_mm; + + if (!mm || !mm->pgd) + return 0; + + local_irq_save(flags); + ptep = find_linux_pte_or_hugepte(mm->pgd, address, + NULL, NULL); + if (ptep) + pkey = pte_to_pkey_bits(pte_val(READ_ONCE(*ptep))); + local_irq_restore(flags); + + return pkey; +} +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + #ifdef CONFIG_PPC_TRANSACTIONAL_MEM static inline void tm_flush_hash_page(int local) { -- 1.7.1
[RFC v5 32/38] powerpc: capture the violated protection key on fault
Capture the protection key that got violated in paca. This value will be used by used to inform the signal handler. Signed-off-by: Ram Pai --- arch/powerpc/include/asm/paca.h |1 + arch/powerpc/kernel/asm-offsets.c |1 + arch/powerpc/mm/fault.c |3 +++ 3 files changed, 5 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h index c8bd1fc..0c06188 100644 --- a/arch/powerpc/include/asm/paca.h +++ b/arch/powerpc/include/asm/paca.h @@ -94,6 +94,7 @@ struct paca_struct { u64 dscr_default; /* per-CPU default DSCR */ #ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS u64 paca_amr; /* value of amr at exception */ + u16 paca_pkey; /* exception causing pkey */ #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ #ifdef CONFIG_PPC_STD_MMU_64 diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c index 17f5d8a..7dff862 100644 --- a/arch/powerpc/kernel/asm-offsets.c +++ b/arch/powerpc/kernel/asm-offsets.c @@ -244,6 +244,7 @@ int main(void) #ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS OFFSET(PACA_AMR, paca_struct, paca_amr); + OFFSET(PACA_PKEY, paca_struct, paca_pkey); #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ OFFSET(ACCOUNT_STARTTIME, paca_struct, accounting.starttime); diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c index a6710f5..c8674a7 100644 --- a/arch/powerpc/mm/fault.c +++ b/arch/powerpc/mm/fault.c @@ -265,6 +265,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address, if (error_code & DSISR_KEYFAULT) { code = SEGV_PKUERR; get_paca()->paca_amr = read_amr(); + get_paca()->paca_pkey = get_pte_pkey(current->mm, address); goto bad_area_nosemaphore; } #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ @@ -290,6 +291,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address, perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); + /* * We want to do this outside mmap_sem, because reading code around nip * can result in fault, which will cause a deadlock when called with @@ -453,6 +455,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address, if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE, is_exec, 0)) { get_paca()->paca_amr = read_amr(); + get_paca()->paca_pkey = vma_pkey(vma); code = SEGV_PKUERR; goto bad_area; } -- 1.7.1
[RFC v5 33/38] powerpc: Deliver SEGV signal on pkey violation
The value of the AMR register at the time of exception is made available in gp_regs[PT_AMR] of the siginfo. The value of the pkey, whose protection got violated, is made available in si_pkey field of the siginfo structure. Signed-off-by: Ram Pai --- arch/powerpc/include/uapi/asm/ptrace.h |3 ++- arch/powerpc/kernel/signal_32.c|5 + arch/powerpc/kernel/signal_64.c|4 arch/powerpc/kernel/traps.c| 14 ++ 4 files changed, 25 insertions(+), 1 deletions(-) diff --git a/arch/powerpc/include/uapi/asm/ptrace.h b/arch/powerpc/include/uapi/asm/ptrace.h index 8036b38..7ec2428 100644 --- a/arch/powerpc/include/uapi/asm/ptrace.h +++ b/arch/powerpc/include/uapi/asm/ptrace.h @@ -108,8 +108,9 @@ struct pt_regs { #define PT_DAR 41 #define PT_DSISR 42 #define PT_RESULT 43 -#define PT_DSCR 44 #define PT_REGS_COUNT 44 +#define PT_DSCR 44 +#define PT_AMR 45 #define PT_FPR048 /* each FP reg occupies 2 slots in this space */ diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c index 97bb138..9c4a7f3 100644 --- a/arch/powerpc/kernel/signal_32.c +++ b/arch/powerpc/kernel/signal_32.c @@ -500,6 +500,11 @@ static int save_user_regs(struct pt_regs *regs, struct mcontext __user *frame, (unsigned long) &frame->tramp[2]); } +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + if (__put_user(get_paca()->paca_amr, &frame->mc_gregs[PT_AMR])) + return 1; +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + return 0; } diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c index c83c115..86a4262 100644 --- a/arch/powerpc/kernel/signal_64.c +++ b/arch/powerpc/kernel/signal_64.c @@ -174,6 +174,10 @@ static long setup_sigcontext(struct sigcontext __user *sc, if (set != NULL) err |= __put_user(set->sig[0], &sc->oldmask); +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + err |= __put_user(get_paca()->paca_amr, &sc->gp_regs[PT_AMR]); +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + return err; } diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c index d4e545d..cc0a8c4 100644 --- a/arch/powerpc/kernel/traps.c +++ b/arch/powerpc/kernel/traps.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include #include @@ -247,6 +248,14 @@ void user_single_step_siginfo(struct task_struct *tsk, info->si_addr = (void __user *)regs->nip; } +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS +static void fill_sig_info_pkey(int si_code, siginfo_t *info, unsigned long addr) +{ + WARN_ON(si_code != SEGV_PKUERR); + info->si_pkey = get_paca()->paca_pkey; +} +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + void _exception(int signr, struct pt_regs *regs, int code, unsigned long addr) { siginfo_t info; @@ -274,6 +283,11 @@ void _exception(int signr, struct pt_regs *regs, int code, unsigned long addr) info.si_signo = signr; info.si_code = code; info.si_addr = (void __user *) addr; + +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS + fill_sig_info_pkey(code, &info, addr); +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ + force_sig_info(signr, &info, current); } -- 1.7.1
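To make the new reporting concrete, a user-space handler might consume both values roughly as below. This is a sketch, not code from the series: it assumes a libc whose siginfo_t exposes si_pkey and whose ppc64 ucontext exposes gp_regs, and PT_AMR is the new uapi index added by this patch; error handling is omitted.

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdio.h>
    #include <ucontext.h>

    #define PT_AMR 45       /* from the uapi/asm/ptrace.h change above */

    static void segv_handler(int sig, siginfo_t *info, void *uc)
    {
            ucontext_t *ctx = uc;

            if (info->si_code == SEGV_PKUERR)
                    fprintf(stderr, "pkey %d denied access, AMR=0x%llx\n",
                            info->si_pkey,
                            (unsigned long long)ctx->uc_mcontext.gp_regs[PT_AMR]);
    }

    int main(void)
    {
            struct sigaction sa = { .sa_sigaction = segv_handler,
                                    .sa_flags = SA_SIGINFO };

            sigaction(SIGSEGV, &sa, NULL);
            /* ... touch pkey-protected memory here ... */
            return 0;
    }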
[RFC v5 34/38] procfs: display the protection-key number associated with a vma
Display the pkey number associated with the vma in smaps of a task. The key will be seen as below: ProtectionKey: 0 Signed-off-by: Ram Pai --- arch/powerpc/kernel/setup_64.c |8 1 files changed, 8 insertions(+), 0 deletions(-) diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c index f35ff9d..ebc82b3 100644 --- a/arch/powerpc/kernel/setup_64.c +++ b/arch/powerpc/kernel/setup_64.c @@ -37,6 +37,7 @@ #include #include #include +#include #include #include @@ -745,3 +746,10 @@ static int __init disable_hardlockup_detector(void) } early_initcall(disable_hardlockup_detector); #endif + +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS +void arch_show_smap(struct seq_file *m, struct vm_area_struct *vma) +{ + seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma)); +} +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */ -- 1.7.1
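Once the smaps hook is wired up as the changelog describes, the key appears one line per mapping; a throwaway reader for eyeballing it (a hypothetical helper, nothing from the patch):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char line[256];
            FILE *f = fopen("/proc/self/smaps", "r");

            if (!f)
                    return 1;
            while (fgets(line, sizeof(line), f))
                    if (!strncmp(line, "ProtectionKey:", 14))
                            fputs(line, stdout);
            fclose(f);
            return 0;
    }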
[RFC v5 35/38] selftest: Move protection key selftest to arch neutral directory

Signed-off-by: Ram Pai --- tools/testing/selftests/vm/Makefile |1 + tools/testing/selftests/vm/pkey-helpers.h | 219 tools/testing/selftests/vm/protection_keys.c | 1395 + tools/testing/selftests/x86/Makefile |2 +- tools/testing/selftests/x86/pkey-helpers.h| 219 tools/testing/selftests/x86/protection_keys.c | 1395 - 6 files changed, 1616 insertions(+), 1615 deletions(-) create mode 100644 tools/testing/selftests/vm/pkey-helpers.h create mode 100644 tools/testing/selftests/vm/protection_keys.c delete mode 100644 tools/testing/selftests/x86/pkey-helpers.h delete mode 100644 tools/testing/selftests/x86/protection_keys.c diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile index cbb29e4..1d32f78 100644 --- a/tools/testing/selftests/vm/Makefile +++ b/tools/testing/selftests/vm/Makefile @@ -17,6 +17,7 @@ TEST_GEN_FILES += transhuge-stress TEST_GEN_FILES += userfaultfd TEST_GEN_FILES += mlock-random-test TEST_GEN_FILES += virtual_address_range +TEST_GEN_FILES += protection_keys TEST_PROGS := run_vmtests diff --git a/tools/testing/selftests/vm/pkey-helpers.h b/tools/testing/selftests/vm/pkey-helpers.h new file mode 100644 index 000..b202939 --- /dev/null +++ b/tools/testing/selftests/vm/pkey-helpers.h @@ -0,0 +1,219 @@ +#ifndef _PKEYS_HELPER_H +#define _PKEYS_HELPER_H +#define _GNU_SOURCE +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define NR_PKEYS 16 +#define PKRU_BITS_PER_PKEY 2 + +#ifndef DEBUG_LEVEL +#define DEBUG_LEVEL 0 +#endif +#define DPRINT_IN_SIGNAL_BUF_SIZE 4096 +extern int dprint_in_signal; +extern char dprint_in_signal_buffer[DPRINT_IN_SIGNAL_BUF_SIZE]; +static inline void sigsafe_printf(const char *format, ...) +{ + va_list ap; + + va_start(ap, format); + if (!dprint_in_signal) { + vprintf(format, ap); + } else { + int len = vsnprintf(dprint_in_signal_buffer, + DPRINT_IN_SIGNAL_BUF_SIZE, + format, ap); + /* +* len is amount that would have been printed, +* but actual write is truncated at BUF_SIZE. +*/ + if (len > DPRINT_IN_SIGNAL_BUF_SIZE) + len = DPRINT_IN_SIGNAL_BUF_SIZE; + write(1, dprint_in_signal_buffer, len); + } + va_end(ap); +} +#define dprintf_level(level, args...) do { \ + if (level <= DEBUG_LEVEL) \ + sigsafe_printf(args); \ + fflush(NULL); \ +} while (0) +#define dprintf0(args...) dprintf_level(0, args) +#define dprintf1(args...) dprintf_level(1, args) +#define dprintf2(args...) dprintf_level(2, args) +#define dprintf3(args...) dprintf_level(3, args) +#define dprintf4(args...) 
dprintf_level(4, args) + +extern unsigned int shadow_pkru; +static inline unsigned int __rdpkru(void) +{ + unsigned int eax, edx; + unsigned int ecx = 0; + unsigned int pkru; + + asm volatile(".byte 0x0f,0x01,0xee\n\t" +: "=a" (eax), "=d" (edx) +: "c" (ecx)); + pkru = eax; + return pkru; +} + +static inline unsigned int _rdpkru(int line) +{ + unsigned int pkru = __rdpkru(); + + dprintf4("rdpkru(line=%d) pkru: %x shadow: %x\n", + line, pkru, shadow_pkru); + assert(pkru == shadow_pkru); + + return pkru; +} + +#define rdpkru() _rdpkru(__LINE__) + +static inline void __wrpkru(unsigned int pkru) +{ + unsigned int eax = pkru; + unsigned int ecx = 0; + unsigned int edx = 0; + + dprintf4("%s() changing %08x to %08x\n", __func__, __rdpkru(), pkru); + asm volatile(".byte 0x0f,0x01,0xef\n\t" +: : "a" (eax), "c" (ecx), "d" (edx)); + assert(pkru == __rdpkru()); +} + +static inline void wrpkru(unsigned int pkru) +{ + dprintf4("%s() changing %08x to %08x\n", __func__, __rdpkru(), pkru); + /* will do the shadow check for us: */ + rdpkru(); + __wrpkru(pkru); + shadow_pkru = pkru; + dprintf4("%s(%08x) pkru: %08x\n", __func__, pkru, __rdpkru()); +} + +/* + * These are technically racy. since something could + * change PKRU between the read and the write. + */ +static inline void __pkey_access_allow(int pkey, int do_allow) +{ + unsigned int pkru = rdpkru(); + int bit = pkey * 2; + + if (do_allow) + pkru &= (1
[RFC v5 36/38] selftest: PowerPC specific test updates to memory protection keys
Abstracted out the arch specific code into the header file, and added powerpc specific changes. a) added 4k-backed hpte, memory allocator, powerpc specific. b) added three test case where the key is associated after the page is accessed/allocated/mapped. c) cleaned up the code to make checkpatch.pl happy Signed-off-by: Ram Pai --- tools/testing/selftests/vm/pkey-helpers.h| 230 +-- tools/testing/selftests/vm/protection_keys.c | 567 +++--- 2 files changed, 518 insertions(+), 279 deletions(-) diff --git a/tools/testing/selftests/vm/pkey-helpers.h b/tools/testing/selftests/vm/pkey-helpers.h index b202939..69bfa89 100644 --- a/tools/testing/selftests/vm/pkey-helpers.h +++ b/tools/testing/selftests/vm/pkey-helpers.h @@ -12,13 +12,72 @@ #include #include -#define NR_PKEYS 16 -#define PKRU_BITS_PER_PKEY 2 +/* Define some kernel-like types */ +#define u8 uint8_t +#define u16 uint16_t +#define u32 uint32_t +#define u64 uint64_t + +#ifdef __i386__ /* arch */ + +#define SYS_mprotect_key 380 +#define SYS_pkey_alloc 381 +#define SYS_pkey_free 382 +#define REG_IP_IDX REG_EIP +#define si_pkey_offset 0x14 + +#define NR_PKEYS 16 +#define NR_RESERVED_PKEYS 1 +#define PKRU_BITS_PER_PKEY 2 +#define PKEY_DISABLE_ACCESS0x1 +#define PKEY_DISABLE_WRITE 0x2 +#define HPAGE_SIZE (1UL<<21) + +#define INIT_PRKU 0x0UL + +#elif __powerpc64__ /* arch */ + +#define SYS_mprotect_key 386 +#define SYS_pkey_alloc 384 +#define SYS_pkey_free 385 +#define si_pkey_offset 0x20 +#define REG_IP_IDX PT_NIP +#define REG_TRAPNO PT_TRAP +#define REG_AMR45 +#define gregs gp_regs +#define fpregs fp_regs + +#define NR_PKEYS 32 +#define NR_RESERVED_PKEYS 3 +#define PKRU_BITS_PER_PKEY 2 +#define PKEY_DISABLE_ACCESS0x3 /* disable read and write */ +#define PKEY_DISABLE_WRITE 0x2 +#define HPAGE_SIZE (1UL<<24) + +#define INIT_PRKU 0x3UL +#else /* arch */ + + NOT SUPPORTED + +#endif /* arch */ + #ifndef DEBUG_LEVEL #define DEBUG_LEVEL 0 #endif #define DPRINT_IN_SIGNAL_BUF_SIZE 4096 + + +static inline u32 pkey_to_shift(int pkey) +{ +#ifdef __i386__ /* arch */ + return pkey * PKRU_BITS_PER_PKEY; +#elif __powerpc64__ /* arch */ + return (NR_PKEYS - pkey - 1) * PKRU_BITS_PER_PKEY; +#endif /* arch */ +} + + extern int dprint_in_signal; extern char dprint_in_signal_buffer[DPRINT_IN_SIGNAL_BUF_SIZE]; static inline void sigsafe_printf(const char *format, ...) @@ -53,53 +112,76 @@ static inline void sigsafe_printf(const char *format, ...) #define dprintf3(args...) dprintf_level(3, args) #define dprintf4(args...) 
dprintf_level(4, args) -extern unsigned int shadow_pkru; -static inline unsigned int __rdpkru(void) +extern u64 shadow_pkey_reg; + +static inline u64 __rdpkey_reg(void) { +#ifdef __i386__ /* arch */ unsigned int eax, edx; unsigned int ecx = 0; - unsigned int pkru; + unsigned int pkey_reg; asm volatile(".byte 0x0f,0x01,0xee\n\t" : "=a" (eax), "=d" (edx) : "c" (ecx)); - pkru = eax; - return pkru; +#elif __powerpc64__ /* arch */ + u64 eax; + u64 pkey_reg; + + asm volatile("mfspr %0, 0xd" : "=r" ((u64)(eax))); +#endif /* arch */ + pkey_reg = (u64)eax; + return pkey_reg; } -static inline unsigned int _rdpkru(int line) +static inline u64 _rdpkey_reg(int line) { - unsigned int pkru = __rdpkru(); + u64 pkey_reg = __rdpkey_reg(); - dprintf4("rdpkru(line=%d) pkru: %x shadow: %x\n", - line, pkru, shadow_pkru); - assert(pkru == shadow_pkru); + dprintf4("rdpkey_reg(line=%d) pkey_reg: %lx shadow: %lx\n", + line, pkey_reg, shadow_pkey_reg); + assert(pkey_reg == shadow_pkey_reg); - return pkru; + return pkey_reg; } -#define rdpkru() _rdpkru(__LINE__) +#define rdpkey_reg() _rdpkey_reg(__LINE__) -static inline void __wrpkru(unsigned int pkru) +static inline void __wrpkey_reg(u64 pkey_reg) { - unsigned int eax = pkru; +#ifdef __i386__ /* arch */ + unsigned int eax = pkey_reg; unsigned int ecx = 0; unsigned int edx = 0; - dprintf4("%s() changing %08x to %08x\n", __func__, __rdpkru(), pkru); + dprintf4("%s() changing %lx to %lx\n", +__func__, __rdpkey_reg(), pkey_reg); asm volatile(".byte 0x0f,0x01,0xef\n\t" : : "a" (eax), "c" (ecx), "d" (edx)); - assert(pkru == __rdpkru()); + dprintf4("%s() PKRUP after changing %lx to %lx\n", + __func__, __rdpkey_reg(), pkey_reg); +#else /* arch */ + u64 eax = pkey_reg; + + dprintf4("%s() changing %llx to %llx\n", +__func__, __rdpkey_reg(), pkey_reg); + asm volatile("mtspr 0xd, %0" : : "r" ((unsigned long)(eax)) : "memory"); + dprintf4("%s() PKRUP after chang
[RFC v5 37/38] Documentation: Move protection key documentation to arch neutral directory
Since PowerPC and Intel both support memory protection keys, moving the documenation to arch-neutral directory. Signed-off-by: Ram Pai --- Documentation/vm/protection-keys.txt | 85 + Documentation/x86/protection-keys.txt | 85 - 2 files changed, 85 insertions(+), 85 deletions(-) create mode 100644 Documentation/vm/protection-keys.txt delete mode 100644 Documentation/x86/protection-keys.txt diff --git a/Documentation/vm/protection-keys.txt b/Documentation/vm/protection-keys.txt new file mode 100644 index 000..b643045 --- /dev/null +++ b/Documentation/vm/protection-keys.txt @@ -0,0 +1,85 @@ +Memory Protection Keys for Userspace (PKU aka PKEYs) is a CPU feature +which will be found on future Intel CPUs. + +Memory Protection Keys provides a mechanism for enforcing page-based +protections, but without requiring modification of the page tables +when an application changes protection domains. It works by +dedicating 4 previously ignored bits in each page table entry to a +"protection key", giving 16 possible keys. + +There is also a new user-accessible register (PKRU) with two separate +bits (Access Disable and Write Disable) for each key. Being a CPU +register, PKRU is inherently thread-local, potentially giving each +thread a different set of protections from every other thread. + +There are two new instructions (RDPKRU/WRPKRU) for reading and writing +to the new register. The feature is only available in 64-bit mode, +even though there is theoretically space in the PAE PTEs. These +permissions are enforced on data access only and have no effect on +instruction fetches. + +=== Syscalls === + +There are 3 system calls which directly interact with pkeys: + + int pkey_alloc(unsigned long flags, unsigned long init_access_rights) + int pkey_free(int pkey); + int pkey_mprotect(unsigned long start, size_t len, + unsigned long prot, int pkey); + +Before a pkey can be used, it must first be allocated with +pkey_alloc(). An application calls the WRPKRU instruction +directly in order to change access permissions to memory covered +with a key. In this example WRPKRU is wrapped by a C function +called pkey_set(). + + int real_prot = PROT_READ|PROT_WRITE; + pkey = pkey_alloc(0, PKEY_DENY_WRITE); + ptr = mmap(NULL, PAGE_SIZE, PROT_NONE, MAP_ANONYMOUS|MAP_PRIVATE, -1, 0); + ret = pkey_mprotect(ptr, PAGE_SIZE, real_prot, pkey); + ... application runs here + +Now, if the application needs to update the data at 'ptr', it can +gain access, do the update, then remove its write access: + + pkey_set(pkey, 0); // clear PKEY_DENY_WRITE + *ptr = foo; // assign something + pkey_set(pkey, PKEY_DENY_WRITE); // set PKEY_DENY_WRITE again + +Now when it frees the memory, it will also free the pkey since it +is no longer in use: + + munmap(ptr, PAGE_SIZE); + pkey_free(pkey); + +(Note: pkey_set() is a wrapper for the RDPKRU and WRPKRU instructions. + An example implementation can be found in + tools/testing/selftests/x86/protection_keys.c) + +=== Behavior === + +The kernel attempts to make protection keys consistent with the +behavior of a plain mprotect(). 
For instance if you do this: + + mprotect(ptr, size, PROT_NONE); + something(ptr); + +you can expect the same effects with protection keys when doing this: + + pkey = pkey_alloc(0, PKEY_DISABLE_WRITE | PKEY_DISABLE_READ); + pkey_mprotect(ptr, size, PROT_READ|PROT_WRITE, pkey); + something(ptr); + +That should be true whether something() is a direct access to 'ptr' +like: + + *ptr = foo; + +or when the kernel does the access on the application's behalf like +with a read(): + + read(fd, ptr, 1); + +The kernel will send a SIGSEGV in both cases, but si_code will be set +to SEGV_PKERR when violating protection keys versus SEGV_ACCERR when +the plain mprotect() permissions are violated. diff --git a/Documentation/x86/protection-keys.txt b/Documentation/x86/protection-keys.txt deleted file mode 100644 index b643045..000 --- a/Documentation/x86/protection-keys.txt +++ /dev/null @@ -1,85 +0,0 @@ -Memory Protection Keys for Userspace (PKU aka PKEYs) is a CPU feature -which will be found on future Intel CPUs. - -Memory Protection Keys provides a mechanism for enforcing page-based -protections, but without requiring modification of the page tables -when an application changes protection domains. It works by -dedicating 4 previously ignored bits in each page table entry to a -"protection key", giving 16 possible keys. - -There is also a new user-accessible register (PKRU) with two separate -bits (Access Disable and Write Disable) for each key. Being a CPU -register, PKRU is inherently thread-local, potentially giving each -thread a different set of protections from every oth
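Pulling the document's fragments together, a complete (if simplified) x86 example might look like the following. It is a sketch under assumptions: SYS_pkey_* are taken from <sys/syscall.h> (a libc new enough to define them is assumed), the RDPKRU/WRPKRU byte sequences are the ones used by the selftest, and error handling is omitted.

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define PKEY_DISABLE_WRITE 0x2

    static unsigned int rdpkru(void)
    {
            unsigned int eax, edx;

            /* RDPKRU: ECX must be 0; result in EAX */
            asm volatile(".byte 0x0f,0x01,0xee" : "=a" (eax), "=d" (edx) : "c" (0));
            return eax;
    }

    static void wrpkru(unsigned int pkru)
    {
            /* WRPKRU: EAX = new PKRU, ECX = EDX = 0 */
            asm volatile(".byte 0x0f,0x01,0xef" : : "a" (pkru), "c" (0), "d" (0));
    }

    /* each key owns 2 bits in PKRU: access-disable and write-disable */
    static void pkey_set(int pkey, unsigned int restrictions)
    {
            unsigned int pkru = rdpkru();

            pkru &= ~(0x3u << (2 * pkey));
            pkru |= restrictions << (2 * pkey);
            wrpkru(pkru);
    }

    int main(void)
    {
            long page = sysconf(_SC_PAGESIZE);
            int pkey = syscall(SYS_pkey_alloc, 0, PKEY_DISABLE_WRITE);
            char *ptr = mmap(NULL, page, PROT_NONE,
                             MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

            syscall(SYS_pkey_mprotect, ptr, page, PROT_READ | PROT_WRITE, pkey);

            pkey_set(pkey, 0);                      /* open the write window */
            *ptr = 1;
            pkey_set(pkey, PKEY_DISABLE_WRITE);     /* close it again        */

            munmap(ptr, page);
            syscall(SYS_pkey_free, pkey);
            return 0;
    }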
[RFC v5 38/38] Documentation: PowerPC specific updates to memory protection keys
Add documentation updates that capture PowerPC specific changes. Signed-off-by: Ram Pai --- Documentation/vm/protection-keys.txt | 85 ++ 1 files changed, 65 insertions(+), 20 deletions(-) diff --git a/Documentation/vm/protection-keys.txt b/Documentation/vm/protection-keys.txt index b643045..d50b6ab 100644 --- a/Documentation/vm/protection-keys.txt +++ b/Documentation/vm/protection-keys.txt @@ -1,21 +1,46 @@ -Memory Protection Keys for Userspace (PKU aka PKEYs) is a CPU feature -which will be found on future Intel CPUs. +Memory Protection Keys for Userspace (PKU aka PKEYs) is a CPU feature found in +new generation of intel CPUs and on PowerPC 7 and higher CPUs. Memory Protection Keys provides a mechanism for enforcing page-based -protections, but without requiring modification of the page tables -when an application changes protection domains. It works by -dedicating 4 previously ignored bits in each page table entry to a -"protection key", giving 16 possible keys. - -There is also a new user-accessible register (PKRU) with two separate -bits (Access Disable and Write Disable) for each key. Being a CPU -register, PKRU is inherently thread-local, potentially giving each -thread a different set of protections from every other thread. - -There are two new instructions (RDPKRU/WRPKRU) for reading and writing -to the new register. The feature is only available in 64-bit mode, -even though there is theoretically space in the PAE PTEs. These -permissions are enforced on data access only and have no effect on +protections, but without requiring modification of the page tables when an +application changes protection domains. + + +On Intel: + + It works by dedicating 4 previously ignored bits in each page table + entry to a "protection key", giving 16 possible keys. + + There is also a new user-accessible register (PKRU) with two separate + bits (Access Disable and Write Disable) for each key. Being a CPU + register, PKRU is inherently thread-local, potentially giving each + thread a different set of protections from every other thread. + + There are two new instructions (RDPKRU/WRPKRU) for reading and writing + to the new register. The feature is only available in 64-bit mode, + even though there is theoretically space in the PAE PTEs. These + permissions are enforced on data access only and have no effect on + instruction fetches. + + +On PowerPC: + + It works by dedicating 5 page table entry bits to a "protection key", + giving 32 possible keys. + + There is a user-accessible register (AMR) with two separate bits; + Access Disable and Write Disable, for each key. Being a CPU + register, AMR is inherently thread-local, potentially giving each + thread a different set of protections from every other thread. NOTE: + Disabling read permission does not disable write and vice-versa. + + The feature is available on 64-bit HPTE mode only. + 'mtspr 0xd, mem' reads the AMR register + 'mfspr mem, 0xd' writes into the AMR register. + + + +Permissions are enforced on data access only and have no effect on instruction fetches. === Syscalls === @@ -28,9 +53,9 @@ There are 3 system calls which directly interact with pkeys: unsigned long prot, int pkey); Before a pkey can be used, it must first be allocated with -pkey_alloc(). An application calls the WRPKRU instruction +pkey_alloc(). An application calls the WRPKRU/AMR instruction directly in order to change access permissions to memory covered -with a key. In this example WRPKRU is wrapped by a C function +with a key. 
In this example WRPKRU/AMR is wrapped by a C function called pkey_set(). int real_prot = PROT_READ|PROT_WRITE; @@ -52,11 +77,11 @@ is no longer in use: munmap(ptr, PAGE_SIZE); pkey_free(pkey); -(Note: pkey_set() is a wrapper for the RDPKRU and WRPKRU instructions. +(Note: pkey_set() is a wrapper for the RDPKRU,WRPKRU or AMR instructions. An example implementation can be found in tools/testing/selftests/x86/protection_keys.c) -=== Behavior === +=== Behavior = The kernel attempts to make protection keys consistent with the behavior of a plain mprotect(). For instance if you do this: @@ -83,3 +108,23 @@ with a read(): The kernel will send a SIGSEGV in both cases, but si_code will be set to SEGV_PKERR when violating protection keys versus SEGV_ACCERR when the plain mprotect() permissions are violated. + + + + Semantic differences + +The following semantic differences exist between x86 and power. + +a) powerpc allows creation of a key with execute-disabled. The followi
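The powerpc analogue of the pkey_set() wrapper pokes the AMR SPR instead of PKRU. A sketch only, with the SPR number and bit layout taken from the selftest hunks earlier in this series (SPR 0xd, two bits per key, key 0 in the most-significant field); note that mfspr reads the register and mtspr writes it:

    #include <stdint.h>

    #define NR_PKEYS                32
    #define PKEY_BITS_PER_KEY       2
    #define PKEY_DISABLE_ACCESS     0x3     /* read and write, per the selftest */
    #define PKEY_DISABLE_WRITE      0x2

    static inline uint64_t read_amr(void)
    {
            uint64_t amr;

            /* mfspr moves *from* the SPR: read AMR (SPR 0xd) */
            asm volatile("mfspr %0, 0xd" : "=r" (amr));
            return amr;
    }

    static inline void write_amr(uint64_t amr)
    {
            /* mtspr moves *to* the SPR: write AMR (SPR 0xd) */
            asm volatile("mtspr 0xd, %0" : : "r" (amr) : "memory");
    }

    /* key 0 occupies the most-significant 2-bit field */
    static inline int pkey_shift(int pkey)
    {
            return (NR_PKEYS - pkey - 1) * PKEY_BITS_PER_KEY;
    }

    static void pkey_set(int pkey, uint64_t restrictions)
    {
            uint64_t amr = read_amr();

            amr &= ~(3ull << pkey_shift(pkey));
            amr |= restrictions << pkey_shift(pkey);
            write_amr(amr);
    }

Usage mirrors the x86 example: pkey_set(pkey, 0) opens the window, pkey_set(pkey, PKEY_DISABLE_WRITE) closes it again.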
Re: [PATCH] ptrace: Add compat PTRACE_{G,S}ETSIGMASK handlers
On Thu, Jun 29, 2017 at 05:26:37PM +0100, James Morse wrote: > compat_ptrace_request() lacks handlers for PTRACE_{G,S}ETSIGMASK, > instead using those in ptrace_request(). The compat variant should > read a compat_sigset_t from userspace instead of ptrace_request()s > sigset_t. > > While compat_sigset_t is the same size as sigset_t, it is defined as > 2xu32, instead of a single u64. On a big-endian CPU this means that > compat_sigset_t is passed to user-space using middle-endianness, > where the least-significant u32 is written most significant byte > first. > > If ptrace_request()s code is used userspace will read the most > significant u32 where it expected the least significant. > > Instead of duplicating ptrace_request()s code as a special case in > the arch code, handle it here. > Acked-by: Andrei Vagin > CC: Yury Norov > CC: Andrey Vagin > Reported-by: Zhou Chengming > Signed-off-by: James Morse > Fixes: 29000caecbe87 ("ptrace: add ability to get/set signal-blocked mask") > --- > LTP test case here: > https://lists.linux.it/pipermail/ltp/2017-June/004932.html > > kernel/ptrace.c | 52 > 1 file changed, 40 insertions(+), 12 deletions(-) > > diff --git a/kernel/ptrace.c b/kernel/ptrace.c > index 8d2c10714530..a5bebb6713e8 100644 > --- a/kernel/ptrace.c > +++ b/kernel/ptrace.c > @@ -843,6 +843,22 @@ static int ptrace_regset(struct task_struct *task, int > req, unsigned int type, > EXPORT_SYMBOL_GPL(task_user_regset_view); > #endif > > +static int ptrace_setsigmask(struct task_struct *child, sigset_t *new_set) > +{ > + sigdelsetmask(new_set, sigmask(SIGKILL)|sigmask(SIGSTOP)); > + > + /* > + * Every thread does recalc_sigpending() after resume, so > + * retarget_shared_pending() and recalc_sigpending() are not > + * called here. > + */ > + spin_lock_irq(&child->sighand->siglock); > + child->blocked = *new_set; > + spin_unlock_irq(&child->sighand->siglock); > + > + return 0; > +} > + > int ptrace_request(struct task_struct *child, long request, > unsigned long addr, unsigned long data) > { > @@ -914,18 +930,7 @@ int ptrace_request(struct task_struct *child, long > request, > break; > } > > - sigdelsetmask(&new_set, sigmask(SIGKILL)|sigmask(SIGSTOP)); > - > - /* > - * Every thread does recalc_sigpending() after resume, so > - * retarget_shared_pending() and recalc_sigpending() are not > - * called here. 
> - */ > - spin_lock_irq(&child->sighand->siglock); > - child->blocked = new_set; > - spin_unlock_irq(&child->sighand->siglock); > - > - ret = 0; > + ret = ptrace_setsigmask(child, &new_set); > break; > } > > @@ -1149,7 +1154,9 @@ int compat_ptrace_request(struct task_struct *child, > compat_long_t request, > compat_ulong_t addr, compat_ulong_t data) > { > compat_ulong_t __user *datap = compat_ptr(data); > + compat_sigset_t set32; > compat_ulong_t word; > + sigset_t new_set; > siginfo_t siginfo; > int ret; > > @@ -1189,6 +1196,27 @@ int compat_ptrace_request(struct task_struct *child, > compat_long_t request, > else > ret = ptrace_setsiginfo(child, &siginfo); > break; > + case PTRACE_GETSIGMASK: > + if (addr != sizeof(compat_sigset_t)) > + return -EINVAL; > + > + sigset_to_compat(&set32, &child->blocked); > + > + if (copy_to_user(datap, &set32, sizeof(set32))) > + return -EFAULT; > + > + ret = 0; > + break; > + case PTRACE_SETSIGMASK: > + if (addr != sizeof(compat_sigset_t)) > + return -EINVAL; > + > + if (copy_from_user(&set32, datap, sizeof(compat_sigset_t))) > + return -EFAULT; > + > + sigset_from_compat(&new_set, &set32); > + ret = ptrace_setsigmask(child, &new_set); > + break; > #ifdef CONFIG_HAVE_ARCH_TRACEHOOK > case PTRACE_GETREGSET: > case PTRACE_SETREGSET: > -- > 2.11.0 >
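The layout problem the changelog describes is easy to demonstrate on the side: on a big-endian machine the same 64-bit mask serialises differently through sigset_t and through compat_sigset_t, because the compat form stores the least-significant 32-bit word first. A small host-side illustration with hand-rolled stand-ins for the kernel types (on little-endian the two layouts happen to coincide, which is why only big-endian was affected):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef uint64_t ksigset_t;                            /* sigset_t: one u64   */
    typedef struct { uint32_t sig[2]; } compat_sigset_t;   /* compat: two u32s    */

    int main(void)
    {
            ksigset_t native = 0x0000000100000002ull;      /* some blocked signals */
            compat_sigset_t compat = {
                    .sig = { (uint32_t)native, (uint32_t)(native >> 32) },
            };
            unsigned char nbuf[8], cbuf[8];

            memcpy(nbuf, &native, 8);
            memcpy(cbuf, &compat, 8);

            /* Big-endian: nbuf = 00 00 00 01 00 00 00 02
             *             cbuf = 00 00 00 02 00 00 00 01  -- words swapped */
            printf("layouts %s\n", memcmp(nbuf, cbuf, 8) ? "differ" : "match");
            return 0;
    }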
Re: [PATCH RFC 21/26] powerpc: Remove spin_unlock_wait() arch-specific definitions
On Sun, Jul 02, 2017 at 11:58:07AM +0800, Boqun Feng wrote: > On Thu, Jun 29, 2017 at 05:01:29PM -0700, Paul E. McKenney wrote: > > There is no agreed-upon definition of spin_unlock_wait()'s semantics, > > and it appears that all callers could do just as well with a lock/unlock > > pair. This commit therefore removes the underlying arch-specific > > arch_spin_unlock_wait(). > > > > Signed-off-by: Paul E. McKenney > > Cc: Benjamin Herrenschmidt > > Cc: Paul Mackerras > > Cc: Michael Ellerman > > Cc: > > Cc: Will Deacon > > Cc: Peter Zijlstra > > Cc: Alan Stern > > Cc: Andrea Parri > > Cc: Linus Torvalds > > Acked-by: Boqun Feng And finally applied in preparation for v2 of the patch series. Thank you!!! Thanx, Paul > Regards, > Boqun > > > --- > > arch/powerpc/include/asm/spinlock.h | 33 - > > 1 file changed, 33 deletions(-) > > > > diff --git a/arch/powerpc/include/asm/spinlock.h > > b/arch/powerpc/include/asm/spinlock.h > > index 8c1b913de6d7..d256e448ea49 100644 > > --- a/arch/powerpc/include/asm/spinlock.h > > +++ b/arch/powerpc/include/asm/spinlock.h > > @@ -170,39 +170,6 @@ static inline void arch_spin_unlock(arch_spinlock_t > > *lock) > > lock->slock = 0; > > } > > > > -static inline void arch_spin_unlock_wait(arch_spinlock_t *lock) > > -{ > > - arch_spinlock_t lock_val; > > - > > - smp_mb(); > > - > > - /* > > -* Atomically load and store back the lock value (unchanged). This > > -* ensures that our observation of the lock value is ordered with > > -* respect to other lock operations. > > -*/ > > - __asm__ __volatile__( > > -"1:" PPC_LWARX(%0, 0, %2, 0) "\n" > > -" stwcx. %0, 0, %2\n" > > -" bne- 1b\n" > > - : "=&r" (lock_val), "+m" (*lock) > > - : "r" (lock) > > - : "cr0", "xer"); > > - > > - if (arch_spin_value_unlocked(lock_val)) > > - goto out; > > - > > - while (lock->slock) { > > - HMT_low(); > > - if (SHARED_PROCESSOR) > > - __spin_yield(lock); > > - } > > - HMT_medium(); > > - > > -out: > > - smp_mb(); > > -} > > - > > /* > > * Read-write spinlocks, allowing multiple readers > > * but only one writer. > > -- > > 2.5.2 > >
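For anyone converting a remaining caller, the replacement the changelog alludes to is mechanical; a caller-side sketch only, not code from this patch:

    /* before: wait until the lock is (momentarily) free */
    spin_unlock_wait(&lock);

    /* after: acquire and immediately release, which both waits and
     * provides the acquire/release ordering callers actually relied on */
    spin_lock(&lock);
    spin_unlock(&lock);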
Re: [PATCH v6 0/7] perf report: Show branch type
Hi Arnaldo, Could this series be merged? It's more than 2 months since the last time Jiri Olsa gave the ack. Thanks Jin Yao On 6/26/2017 2:24 PM, Jin, Yao wrote: Hi maintainers, Is this patch series OK or anything I should update? Thanks Jin Yao On 6/2/2017 4:02 PM, Jin, Yao wrote: Hi maintainers, Is this patch series (v6) OK for merging? Thanks Jin Yao On 4/20/2017 5:36 PM, Jiri Olsa wrote: On Thu, Apr 20, 2017 at 08:07:48PM +0800, Jin Yao wrote: v6: Update according to the review comments from Jiri Olsa . Major modifications are: 1. Move that multiline conditional code inside {} brackets. 2. Move branch_type_stat_display() from builtin-report.c to branch.c. Move branch_type_str() from callchain.c to branch.c. 3. Keep the original branch info display order, that is: predicted, abort, cycles, iterations for the tools part Acked-by: Jiri Olsa thanks, jirka