On 11/8/24 00:53, Yury Khrustalev wrote:
> This patch adds the PKEY_UNRESTRICTED macro, defined as 0x0.
Thanks for doing this and the follow-on selftests mods!
Acked-by: Dave Hansen
On 11/8/24 00:53, Yury Khrustalev wrote:
> Replace literal 0 with macro PKEY_UNRESTRICTED where pkey_*() functions
> are used in mm selftests for memory protection keys.
Acked-by: Dave Hansen
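The kind of substitution the selftest cleanup makes can be sketched as a hunk like this (the file path and helper name are illustrative, not taken from the actual patch; since PKEY_UNRESTRICTED is 0x0, behavior is unchanged):

```diff
--- a/tools/testing/selftests/mm/pkey_example.c
+++ b/tools/testing/selftests/mm/pkey_example.c
@@
-	pkey = sys_pkey_alloc(0, 0);
+	pkey = sys_pkey_alloc(0, PKEY_UNRESTRICTED);
```

The macro documents intent at the call site without altering the syscall's arguments.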
On 9/11/24 08:01, Kevin Brodsky wrote:
> On 22/08/2024 17:10, Joey Gouly wrote:
>> @@ -371,6 +382,9 @@ int copy_thread(struct task_struct *p, const struct
>> kernel_clone_args *args)
>> if (system_supports_tpidr2())
>> p->thread.tpidr2_el0 = read_sysreg_s(SYS_TPID
On 8/29/24 01:42, Lorenzo Stoakes wrote:
>> These applications work on x86 because x86 does an implicit 47-bit
>> restriction of mmap() addresses that contain a hint address that is less
>> than 48 bits.
> You mean x86 _has_ to limit to physically available bits in a canonical
> format 🙂 this will no
On 8/28/24 13:15, Charlie Jenkins wrote:
> A way to restrict mmap() to return LAM compliant addresses in an entire
> address space also doesn't have to be mutually exclusive with this flag.
> This flag allows for the greatest degree of control from applications.
> I don't believe there is additiona
On 8/27/24 22:49, Charlie Jenkins wrote:
> Some applications rely on placing data in the free bits of addresses
> allocated by mmap. Various architectures (eg. x86, arm64, powerpc) restrict the
> address returned by mmap to be less than the maximum address space,
> unless the hint address is greater than
implementations).
>
> Furthermore, the powerpc implementation is also no longer needed as per
> [1] and [2]. So the arch_unmap() function can be completely removed.
Thanks for doing this cleanup, Liam!
Acked-by: Dave Hansen
On 6/21/24 08:45, Peter Xu wrote:
> On Fri, Jun 21, 2024 at 07:51:26AM -0700, Dave Hansen wrote:
...
>> But, still, what if you take a Dirty=1,Write=1 pud and pud_modify() it
>> to make it Dirty=1,Write=0? What prevents that from being
>> misinterpreted by the hardware as be
On 6/21/24 07:25, Peter Xu wrote:
> These new helpers will be needed for pud entry updates soon. Namely:
>
> - pudp_invalidate()
> - pud_modify()
I think it's also definitely worth noting where you got this code from.
Presumably you copied, pasted and modified the PMD code. That's fine,
but it
lvement in
THP, but PUDs got missed. This patch also realigns pmd_leaf() and
pud_leaf() behavior, which is important.
Acked-by: Dave Hansen
need to rethink this if we get another
architecture or two, but this seems manageable for now.
Acked-by: Dave Hansen
On 5/3/24 06:01, Joey Gouly wrote:
> The new config option specifies how many bits are in each PKEY.
Acked-by: Dave Hansen
On 3/29/24 00:18, Samuel Holland wrote:
> The include guard should match the filename, or it will conflict with
> the newly-added asm/fpu.h.
Acked-by: Dave Hansen
On 3/29/24 00:18, Samuel Holland wrote:
> +#
> +# CFLAGS for compiling floating point code inside the kernel.
> +#
> +CC_FLAGS_FPU := -msse -msse2
> +ifdef CONFIG_CC_IS_GCC
> +# Stack alignment mismatch, proceed with caution.
> +# GCC < 7.1 cannot compile code using `double` and
> -mpreferred-stac
On 3/14/24 09:45, John Baldwin wrote:
> On 3/14/24 8:37 AM, Dave Hansen wrote:
>> On 3/14/24 04:23, Vignesh Balasubramanian wrote:
>>> Add a new .note section containing type, size, offset and flags of
>>> every xfeature that is present.
>>
>> Mechanically, I
On 3/14/24 09:29, Borislav Petkov wrote:
>
>> That argument breaks down a bit on the flags though:
>>
>> xc.xfeat_flags = xstate_flags[i];
>>
>> Because it comes _directly_ from CPUID with zero filtering:
>>
>> cpuid_count(XSTATE_CPUID, i, &eax, &ebx, &ecx, &edx);
>> ...
>> xst
On 3/14/24 09:08, Borislav Petkov wrote:
> On Thu, Mar 14, 2024 at 08:37:09AM -0700, Dave Hansen wrote:
>> This is pretty close to just a raw dump of the XSAVE CPUID leaves.
>> Rather than come up with an XSAVE-specific ABI that depends on CPUID
>> *ANYWAY* (because it dumps
On 3/14/24 04:23, Vignesh Balasubramanian wrote:
> Add a new .note section containing type, size, offset and flags of
> every xfeature that is present.
Mechanically, I'd much rather have all of that info in the cover letter
in the actual changelog instead.
I'd also love to see a practical example
maybe in
the cover letter, but _somewhere_.
That said, feel free to add this to the two x86 patches:
Acked-by: Dave Hansen # for x86
On 6/26/23 07:36, ypode...@redhat.com wrote:
> On Thu, 2023-06-22 at 06:37 -0700, Dave Hansen wrote:
>> On 6/22/23 06:14, ypode...@redhat.com wrote:
>>> I will send a new version with the local variable as you suggested
>>> soon.
>>> As for the config name, w
On 6/22/23 06:14, ypode...@redhat.com wrote:
> I will send a new version with the local variable as you suggested
> soon.
> As for the config name, what about CONFIG_ARCH_HAS_MM_CPUMASK?
The confusing part about that name is that mm_cpumask() and
mm->cpu_bitmap[] are defined unconditionally. So,
On 6/20/23 07:46, Yair Podemsky wrote:
> -void tlb_remove_table_sync_one(void)
> +#ifdef CONFIG_ARCH_HAS_CPUMASK_BITS
> +#define REMOVE_TABLE_IPI_MASK mm_cpumask(mm)
> +#else
> +#define REMOVE_TABLE_IPI_MASK cpu_online_mask
> +#endif /* CONFIG_ARCH_HAS_CPUMASK_BITS */
> +
> +void tlb_remove_table_s
include/asm/pgtable.h | 3 ---
> arch/s390/mm/pageattr.c | 1 +
> arch/x86/include/asm/pgtable.h | 1 +
> arch/x86/include/asm/pgtable_types.h | 3 ---
Looks sane. Thanks Arnd!
Acked-by: Dave Hansen # for arch/x86
On 4/11/23 04:35, Mark Rutland wrote:
> I agree it'd be nice to have performance figures, but I think those would only
> need to demonstrate a lack of a regression rather than a performance
> improvement, and I think it's fairly clear from eyeballing the generated
> instructions that a regression i
On 4/5/23 07:17, Uros Bizjak wrote:
> Add generic and target specific support for local{,64}_try_cmpxchg
> and wire up support for all targets that use local_t infrastructure.
I feel like I'm missing some context.
What are the actual end user visible effects of this series? Is there a
measurable
On 3/15/23 16:20, Ira Weiny wrote:
> Commit 21b56c847753 ("iov_iter: get rid of separate bvec and xarray
> callbacks") removed the calls to memcpy_page_flushcache().
>
> kmap_atomic() is deprecated and used in the x86 version of
> memcpy_page_flushcache().
>
> Remove the unnecessary memcpy_page_
On 6/22/22 14:56, Nayna Jain wrote:
> * Renamed PKS driver to PLPKS to avoid naming conflict as mentioned by
> Dave Hansen.
Thank you for doing this! The new naming looks much less likely to
cause confusion.
On 6/16/22 12:25, Sohil Mehta wrote:
> Should we have different return error codes when compile support is
> disabled vs when runtime support is missing?
It doesn't *really* matter. Programs have to be able to run on old
kernels which will return ENOSYS. So, _when_ new kernels return ENOSYS
or
On 3/15/22 08:53, Ira Weiny wrote:
> On Mon, Mar 14, 2022 at 04:49:12PM -0700, Dave Hansen wrote:
>> On 3/10/22 16:57, ira.we...@intel.com wrote:
>>> From: Ira Weiny
>>>
>>> The number of pkeys supported on x86 and powerpc is much smaller than a
>>
On 3/10/22 16:57, ira.we...@intel.com wrote:
> From: Ira Weiny
>
> The number of pkeys supported on x86 and powerpc is much smaller than a
> u16 value can hold. It is desirable to standardize on the type for
> pkeys. powerpc currently supports the most pkeys at 32. u8 is plenty
> large for th
On 1/21/22 16:56, Nayna Jain wrote:
> Nayna Jain (2):
> pseries: define driver for Platform Keystore
> pseries: define sysfs interface to expose PKS variables
Hi Folks,
There's another feature that we might want to consider in the naming here:
> https://lore.kernel.org/all/20220127175505.85139
On 1/17/22 6:46 PM, Nicholas Piggin wrote:
>> This all sounds very fragile to me. Every time a new architecture would
>> get added for huge vmalloc() support, the developer needs to know to go
>> find that architecture's module_alloc() and add this flag.
> This is documented in the Kconfig.
>
>
On 12/28/21 2:26 AM, Kefeng Wang wrote:
>>> There are some disadvantages about this feature[2], one of the main
>>> concerns is the possible memory fragmentation/waste in some scenarios,
>>> also archs must ensure that any arch specific vmalloc allocations that
>>> require PAGE_SIZE mappings(eg, mo
On 12/27/21 6:59 AM, Kefeng Wang wrote:
> This patch select HAVE_ARCH_HUGE_VMALLOC to let X86_64 and X86_PAE
> support huge vmalloc mappings.
In general, this seems interesting and the diff is simple. But, I don't
see _any_ x86-specific data. I think the bare minimum here would be a
few kernel c
On 12/18/21 6:31 AM, Nikita Yushchenko wrote:
>>> This allows archs to optimize it, by
>>> freeing multiple tables in a single release_pages() call. This is
>>> faster than individual put_page() calls, especially with memcg
>>> accounting enabled.
>>
>> Could we quantify "faster"? There's a non-tr
On 12/17/21 12:19 AM, Nikita Yushchenko wrote:
> When batched page table freeing via struct mmu_table_batch is used, the
> final freeing in __tlb_remove_table_free() executes a loop, calling
> arch hook __tlb_remove_table() to free each table individually.
>
> Shift that loop down to archs. This a
On 3/16/21 6:52 PM, Kefeng Wang wrote:
> mem_init_print_info() is called in mem_init() on each architecture,
> and pass NULL argument, so using void argument and move it into mm_init().
>
> Acked-by: Dave Hansen
It's not a big deal but you might want to say something like
mem_init_print_info(), so this patch will change the
location of the mem_init_print_info(), but I think it's actually for the
better, since it will be pushed later in boot. As long as the x86
pieces stay the same:
Acked-by: Dave Hansen
On 11/16/20 8:32 AM, Matthew Wilcox wrote:
>>
>> That's really the best we can do from software without digging into
>> microarchitecture-specific events.
> I mean this is perf. Digging into microarch specific events is what it
> does ;-)
Yeah, totally.
But, if we see a bunch of 4k TLB hit event
On 11/16/20 7:54 AM, Matthew Wilcox wrote:
> It gets even more complicated with CPUs with multiple levels of TLB
> which support different TLB entry sizes. My CPU reports:
>
> TLB info
> Instruction TLB: 2M/4M pages, fully associative, 8 entries
> Instruction TLB: 4K pages, 8-way associative, 6
On 10/12/20 9:19 AM, Eric Biggers wrote:
> On Sun, Oct 11, 2020 at 11:56:35PM -0700, Ira Weiny wrote:
>>> And I still don't really understand. After this patchset, there is still
>>> code
>>> nearly identical to the above (doing a temporary mapping just for a memcpy)
>>> that
>>> would still be
On 9/9/20 5:29 AM, Gerald Schaefer wrote:
> This only works well as long there are real pagetable pointers involved,
> that can also be used for iteration. For gup_fast, or any other future
> pagetable walkers using the READ_ONCE logic w/o lock, that is not true.
> There are pointers involved to lo
On 9/7/20 11:00 AM, Gerald Schaefer wrote:
> x86:
> add/remove: 0/0 grow/shrink: 2/0 up/down: 10/0 (10)
> Function                     old     new   delta
> vmemmap_populate             587     592      +5
> munlock_vma_pages_range      556     561
On 9/7/20 11:00 AM, Gerald Schaefer wrote:
> Commit 1a42010cdc26 ("s390/mm: convert to the generic get_user_pages_fast
> code") introduced a subtle but severe bug on s390 with gup_fast, due to
> dynamic page table folding.
Would it be fair to say that the "fake" page table entries s390
allocates o
On 4/30/20 8:52 AM, David Hildenbrand wrote:
>> Justifying behavior by documentation that does not consider memory
>> hotplug is bad thinking.
> Are you maybe confusing this patch series with the arm64 approach? This
> is not about ordinary hotplugged DIMMs.
>
> I'd love to get Dan's, Dave's and M
* MHP_NO_FIRMWARE_MEMMAP ensures that future
* kexec'd kernels will not treat this as RAM.
*/
Not a biggie, though.
Acked-by: Dave Hansen
On 3/26/20 2:56 PM, Mike Kravetz wrote:
> Perhaps it would be best to check hugepages_supported() when parsing
> hugetlb command line options. If not enabled, throw an error. This
> will be much easier to do after moving all command line parsing to
> arch independent code.
Yeah, that sounds sane
On 3/18/20 3:52 PM, Mike Kravetz wrote:
> Sounds good. I'll incorporate those changes into a v2, unless someone
> else with has a different opinion.
>
> BTW, this patch should not really change the way the code works today.
> It is mostly a movement of code. Unless I am missing something, the
>
Hi Mike,
The series looks like a great idea to me. One nit on the x86 bits,
though...
> diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
> index 5bfd5aef5378..51e6208fdeec 100644
> --- a/arch/x86/mm/hugetlbpage.c
> +++ b/arch/x86/mm/hugetlbpage.c
> @@ -181,16 +181,25 @@ hugetlb
On 3/17/20 2:06 PM, Borislav Petkov wrote:
> On Tue, Mar 17, 2020 at 01:35:12PM -0700, Dave Hansen wrote:
>> On 3/17/20 4:18 AM, Borislav Petkov wrote:
>>> Back then when the whole SME machinery started getting mainlined, it
>>> was agreed that for simplicity, clarity a
On 3/17/20 4:18 AM, Borislav Petkov wrote:
> Back then when the whole SME machinery started getting mainlined, it
> was agreed that for simplicity, clarity and sanity's sake, the terms
> denoting encrypted and not-encrypted memory should be "encrypted" and
> "decrypted". And the majority of the cod
On 1/29/20 10:36 PM, Sandipan Das wrote:
> v18:
> (1) Fixed issues with x86 multilib builds based on
> feedback from Dave.
> (2) Moved patch 2 to the end of the series.
These (finally) build and run successfully for me on an x86 system with
protection keys. Feel free to add
On 1/28/20 1:38 AM, Sandipan Das wrote:
> On 27/01/20 9:12 pm, Dave Hansen wrote:
>> How have you tested this patch (and the whole series for that matter)?
>>
> I replaced the second patch with this one and did a build test.
> Till v16, I had tested the whole series (build +
On 1/27/20 2:11 AM, Sandipan Das wrote:
> Hi Dave,
>
> On 23/01/20 12:15 am, Dave Hansen wrote:
>> Still doesn't build for me:
>>
> I have this patch that hopefully fixes this. My understanding was
> that the vm tests are supposed to be generic but this h
Still doesn't build for me:
> # make
> make --no-builtin-rules ARCH=x86_64 -C ../../../.. headers_install
> make[1]: Entering directory '/home/dave/linux.git'
> INSTALL ./usr/include
> make[1]: Leaving directory '/home/dave/linux.git'
> make: *** No rule to make target
> '/home/dave/linux.git/t
On 1/17/20 4:49 AM, Sandipan Das wrote:
> Memory protection keys enable an application to protect its address
> space from inadvertent access by its own code.
>
> This feature is now enabled on powerpc and has been available since
> 4.16-rc1. The patches move the selftests to arch neutral directo
On 1/10/20 9:38 AM, Aneesh Kumar K.V wrote:
>> v15:
>> (1) Rebased on top of latest master.
>> (2) Addressed review comments from Dave Hansen.
>> (3) Moved code for getting or setting pkey bits to new
>> helpers. These changes replace patch 7 o
On 12/18/19 12:59 PM, Michal Suchánek wrote:
>> I'd really just rather do %016lx *everywhere* than sprinkle the
>> PKEY_REG_FMTs around.
> Does lx work with u32 without warnings?
Either way, I'd be happy to just make the x86 one u64 to make the whole
thing look more sane,
On 12/17/19 11:51 PM, Sandipan Das wrote:
> Testing
> ---
> Verified for correctness on powerpc. Need help with x86 testing as I
> do not have access to a Skylake server. Client platforms like Coffee
> Lake do not have the required feature bits set in CPUID.
FWIW, you can get a Skylake Server
On 12/17/19 11:51 PM, Sandipan Das wrote:
> write_pkey_reg(pkey_reg);
> - dprintf4("pkey_reg now: %08x\n", read_pkey_reg());
> + dprintf4("pkey_reg now: "PKEY_REG_FMT"\n", read_pkey_reg());
> }
>
> #define ARRAY_SIZE(x) (sizeof(x) / sizeof(*(x)))
> diff --git a/tools/testing/selft
On 9/3/19 1:01 AM, Anshuman Khandual wrote:
> This adds a test module which will validate architecture page table helpers
> and accessors regarding compliance with generic MM semantics expectations.
> This will help various architectures in validating changes to the existing
> page table helpers or
On 6/9/19 9:34 PM, Anshuman Khandual wrote:
>> Do you really think this is easier to read?
>>
>> Why not just move the x86 version to include/linux/kprobes.h, and replace
>> the int with bool?
> Will just return bool directly without an additional variable here as
> suggested
> before. But for the
instead of true/false and a bool, though. It's also not a
horrible thing to add a single line comment to this sucker to say:
/* returns true if kprobes handled the fault */
In any case, and even if you don't clean any of this up:
Reviewed-by: Dave Hansen
Changes from v1:
* Fix compile errors on UML and non-x86 arches
* Clarify commit message and Fixes about the origin of the
bug and add the impact to powerpc / uml / unicore32
--
This is a bit of a mess, to put it mildly. But, it's a bug
that only seems to have shown up in 4.20 but wasn't
d through the cracks.
These look sane to me. Because it pokes around mm/page_alloc.c a bit,
and could impact other architectures, my preference would be for Andrew
to pick these up for -mm. But, I don't feel that strongly about it.
Reviewed-by: Dave Hansen
ocated gigantic pages although unrelated.
Looks good, thanks for all the changes. For everything generic in the
set, plus the x86 bits:
Acked-by: Dave Hansen
On 3/6/19 12:08 PM, Alex Ghiti wrote:
>>>
>>> +	/*
>>> +	 * Gigantic pages allocation depends on the capability for large page
>>> +	 * range allocation. If the system cannot provide alloc_contig_range,
>>> +	 * allow users to free gigantic pages.
>>> +	 */
>>> +	if (hstat
On 3/6/19 11:00 AM, Alexandre Ghiti wrote:
> +static int set_max_huge_pages(struct hstate *h, unsigned long count,
> + nodemask_t *nodes_allowed)
> {
> unsigned long min_count, ret;
>
> - if (hstate_is_gigantic(h) && !gigantic_page_supported())
> -
From: Dave Hansen
walk_system_ram_range() can return an error code either because
*it* failed, or because the 'func' that it calls returned an
error. The memory hotplug code does the following:
ret = walk_system_ram_range(..., func);
if (ret)
return ret;
> -#if (defined(CONFIG_MEMORY_ISOLATION) && defined(CONFIG_COMPACTION)) ||
> defined(CONFIG_CMA)
> +#ifdef CONFIG_CONTIG_ALLOC
> /* The below functions must be run on a range from a single zone. */
> extern int alloc_contig_range(unsigned long start, unsigned long end,
>
> -#if (defined(CONFIG_MEMORY_ISOLATION) && defined(CONFIG_COMPACTION)) ||
> defined(CONFIG_CMA)
> +#ifdef CONFIG_COMPACTION_CORE
> static __init int gigantic_pages_init(void)
> {
> /* With compaction or CMA we can allocate gigantic pages at runtime */
> diff --git a/fs/Kconfig b/fs/Kconfi
On 2/13/19 1:43 AM, Michal Hocko wrote:
>
> We have seen several bugs where zonelists have not been initialized
> properly and it is not really straightforward to track those bugs down.
> One way to help a bit at least is to dump zonelists of each node when
> they are (re)initialized.
Were you th
On 1/24/19 6:17 AM, Michal Hocko wrote:
> and nr_cpus set to 4. The underlying reason is that the device is bound
> to node 2 which doesn't have any memory and init_cpu_to_node only
> initializes memory-less nodes for possible cpus which nr_cpus restricts.
> This in turn means that proper zonelists a
On 11/27/18 3:57 AM, Florian Weimer wrote:
> I would have expected something that translates PKEY_DISABLE_WRITE |
> PKEY_DISABLE_READ into PKEY_DISABLE_ACCESS, and also accepts
> PKEY_DISABLE_ACCESS | PKEY_DISABLE_READ, for consistency with POWER.
>
> (My understanding is that PKEY_DISABLE_ACCESS
On 10/29/18 2:55 PM, Michael Sammler wrote:
>> PKRU getting reset on signals, and the requirement now that it *can't*
>> be changed if you make syscalls probably needs to get thought about very
>> carefully before we do this, though.
> I am not sure, whether I follow you. Are you saying, that PKRU
On 10/29/18 9:48 AM, Jann Horn wrote:
> On Mon, Oct 29, 2018 at 5:37 PM Dave Hansen wrote:
>> I'm not sure this is a great use for PKRU. I *think* the basic problem
>> is that you want to communicate some rights information down into a
>> filter, and you want to commun
On 10/29/18 10:02 AM, Michael Sammler wrote:
>>> Also, I'm not sure the kernel provides the PKRU guarantees you want at
>>> the moment. Our implementation *probably* works, but it's mostly by
>>> accident.
> I don't know, which guarantees about the PKRU are provided at the
> moment, but the only g
On 10/29/18 9:25 AM, Kees Cook wrote:
> On Mon, Oct 29, 2018 at 4:23 AM, Michael Sammler wrote:
>> Add the current value of an architecture specific protection keys
>> register (currently PKRU on x86) to data available for seccomp-bpf
>> programs to work on. This allows filters based on the curren
On 10/03/2018 06:52 AM, Vitaly Kuznetsov wrote:
> It is more than just memmaps (e.g. forking udev process doing memory
> onlining also needs memory) but yes, the main idea is to make the
> onlining synchronous with hotplug.
That's a good theoretical concern.
But, is it a problem we need to solve
> How should a policy in user space look like when new memory gets added
> - on s390x? Not onlining paravirtualized memory is very wrong.
Because we're going to balloon it away in a moment anyway?
We have auto-onlining. Why isn't that being used on s390?
> So the type of memory is very importa
On 10/01/2018 06:16 AM, Gautham R. Shenoy wrote:
>
> Patch 3: Creates a pair of sysfs attributes named
> /sys/devices/system/cpu/cpuN/topology/smallcore_thread_siblings
> and
> /sys/devices/system/cpu/cpuN/topology/smallcore_thread_siblings_list
> exposing the small
It's really nice if these kinds of things are broken up. First, replace
the old want_memblock parameter, then add the parameter to the
__add_page() calls.
> +/*
> + * NONE: No memory block is to be created (e.g. device memory).
> + * NORMAL: Memory block that represents normal (boot or hotp
On 09/22/2018 04:03 AM, Gautham R Shenoy wrote:
> Without this patchset, the SMT domain would be defined as the group of
> threads that share L2 cache.
Could you try to make a more clear, concise statement about the current
state of the art vs. what you want it to be? Right now, the sched
domains
On 09/20/2018 10:22 AM, Gautham R. Shenoy wrote:
> [ASCII diagram: threads 0, 2, 4, 6 of Small Core0 sharing an L1 and L2 cache]
On 07/17/2018 06:49 AM, Ram Pai wrote:
> Ensure pkey-0 is allocated on start. Ensure pkey-0 can be attached
> dynamically in various modes, without failures. Ensure pkey-0 can be
> freed and allocated.
>
> Signed-off-by: Ram Pai
> ---
> tools/testing/selftests/vm/protection_keys.c | 66
> ++
On 07/17/2018 06:49 AM, Ram Pai wrote:
> Generally the signal handler restores the state of the pkey register
> before returning. However there are times when the read/write operation
> can legitimately fail without invoking the signal handler. Eg: A
> sys_read() operation to a write-protected page s
On 07/17/2018 06:49 AM, Ram Pai wrote:
> The maximum number of keys that can be allocated has to
> take into consideration, that some keys are reserved by
> the architecture for specific purpose. Hence cannot
> be allocated.
Back to incomplete sentences, I see. :)
How about:
Some pke
On 07/17/2018 06:49 AM, Ram Pai wrote:
> -static inline int cpu_has_pku(void)
> +static inline bool is_pkey_supported(void)
> {
> - return 1;
> + /*
> + * No simple way to determine this.
> + * Lets try allocating a key and see if it succeeds.
> + */
> + int ret = sys_pk
On 07/17/2018 06:49 AM, Ram Pai wrote:
> Introduce generic abstractions and provide architecture
> specific implementation for the abstractions.
I really wanted to see these two things separated:
1. introduce abstractions
2. introduce ppc implementation
But, I guess most of it is done except for
On 07/17/2018 06:49 AM, Ram Pai wrote:
> cleanup the code to satisfy coding styles.
>
> cc: Dave Hansen
> cc: Florian Weimer
> Signed-off-by: Ram Pai
> ---
> tools/testing/selftests/vm/protection_keys.c | 64 +
> 1 files changed, 43 ins
w pkey register, which is supposed to track the bits
> accurately all throughout
This is getting dangerously close to full sentences that actually
describe the patch. You forgot a period, but much of this is a substantial
improvement over earlier parts of the series. Thanks for writing this,
seri
pages.
> get_start_key() <-- provides the first non-reserved key.
Does powerpc not start on key 0? Why do you need this?
> cc: Dave Hansen
> cc: Florian Weimer
> Signed-off-by: Ram Pai
> Signed-off-by: Thiago Jung Bauermann
> Reviewed-by: Dave Hansen
> ---
> tools/te
On 07/17/2018 06:49 AM, Ram Pai wrote:
> alloc_random_pkey() was allocating the same pkey every time.
> Not all pkeys were getting tested. Fixed it.
This fixes a real issue but also unnecessarily munges whitespace. If
you rev these again, please fix the munging. Otherwise:
Acked-by: Dave Hansen
ed, the resulting bit value will be less than the original.
>
> This hasn't been a problem so far because this code isn't currently
> used.
>
> cc: Dave Hansen
> cc: Florian Weimer
> Signed-off-by: Ram Pai
> ---
> tools/testing/selftests/vm/protectio
On 07/17/2018 06:49 AM, Ram Pai wrote:
> If the flag is 0, no bits will be set. Hence we can't expect
> the resulting bitmap to have a higher value than what it
> was earlier.
...
> --- a/tools/testing/selftests/vm/protection_keys.c
> +++ b/tools/testing/selftests/vm/protection_keys.c
> @@ -415,7 +4
On 07/17/2018 06:49 AM, Ram Pai wrote:
> - shifted_pkey_reg = (pkey_reg >> (pkey * PKEY_BITS_PER_PKEY));
> + shifted_pkey_reg = right_shift_bits(pkey, pkey_reg);
> dprintf2("%s() shifted_pkey_reg: "PKEY_REG_FMT"\n", __func__,
> shifted_pkey_reg);
> masked_p
;pk reg: %*llx\n", PKEY_FMT_LEN, pkey_reg);
But, I don't _really_ care in the end.
Acked-by: Dave Hansen
On 07/17/2018 06:49 AM, Ram Pai wrote:
> In preparation for multi-arch support, move definitions which have
> arch-specific values to x86-specific header.
Acked-by: Dave Hansen
Acked-by: Dave Hansen
Acked-by: Dave Hansen