From: Vaibhav Jain
On Power-8 the AFU attr prefault_mode tried to improve storage fault
performance by prefaulting process segments. However, Power-9 radix
mode doesn't have Storage-Segments, and prefaulting pages is too
fine-grained.
So this patch updates prefault_mode_store() to not allow any ot
On Mon, 2018-05-14 at 22:50 +1000, Michael Ellerman wrote:
> Add byte-swapping versions of __raw_writeq() and __raw_rm_writeq().
>
> This allows us to avoid sparse warnings caused by passing __be64 to
> __raw_writeq(), which takes unsigned long:
>
> arch/powerpc/platforms/powernv/pci-ioda.c:198
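A minimal sketch of what such a byte-swapping helper could look like (the
name and placement are assumptions; the quoted mail is truncated before
the actual definition):

static inline void __raw_writeq_be(unsigned long v, volatile void __iomem *addr)
{
	__raw_writeq((__force unsigned long)cpu_to_be64(v), addr);
}

The __force cast is what silences sparse: the byte-swapped __be64 value is
explicitly reinterpreted as the unsigned long that __raw_writeq() expects.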
Hi Michael,
On Fri, May 18, 2018 at 12:13:52AM +1000, Michael Ellerman wrote:
> wei.guo.si...@gmail.com writes:
> > From: Simon Guo
> >
> > This patch is based on the previous VMX patch on memcmp().
> >
> > To optimize ppc64 memcmp() with VMX instruction, we need to think about
> > the VMX penalty
Hi Jakub,
On 05/18/2018 12:21 AM, Jakub Kicinski wrote:
> On Thu, 17 May 2018 12:05:47 +0530, Sandipan Das wrote:
>> Currently, we resolve the callee's address for a JITed function
>> call by using the imm field of the call instruction as an offset
>> from __bpf_call_base. If bpf_jit_kallsyms is e
On 05/17/2018 06:08 PM, Guenter Roeck wrote:
> On 05/16/2018 11:10 PM, Shilpasri G Bhat wrote:
>>
>>
>> On 05/15/2018 08:32 PM, Guenter Roeck wrote:
>>> On Thu, Mar 22, 2018 at 04:24:32PM +0530, Shilpasri G Bhat wrote:
>>>> This patch series adds support to enable/disable OCC based
>>>> inband-se
Clear the PCR on boot to ensure we are not running in a compat mode.
We've seen this cause problems when a crash (and kdump) occurs while
running compat mode guests. The kdump kernel then runs with the PCR
set and causes problems. The symptom in the kdump kernel (also seen in
petitboot after fast-
My powerpc-linux-gnu-gcc v4.4.5 compiler can't build a 32-bit kernel
any more:
arch/powerpc/lib/sstep.c: In function 'do_popcnt':
arch/powerpc/lib/sstep.c:1068: error: integer constant is too large for 'long' type
arch/powerpc/lib/sstep.c:1069: error: integer constant is too large for 'long' typ
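The usual fix for this class of error is an explicit 64-bit suffix on the
constants, which older 32-bit compilers require; an illustrative example
(the mask shown is the classic popcount constant, assumed to match
sstep.c):

/* long is 32 bits here, so a plain constant overflows:
 * long m = 0x5555555555555555;    -- "integer constant is too large" */
unsigned long long m = 0x5555555555555555ULL;	/* accepted on 32-bit */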
On Thu, May 17, 2018, at 11:28 PM, Mathieu Desnoyers wrote:
> - On May 16, 2018, at 9:19 PM, Boqun Feng boqun.f...@gmail.com wrote:
>
> > On Wed, May 16, 2018 at 04:13:16PM -0400, Mathieu Desnoyers wrote:
> >> - On May 16, 2018, at 12:18 PM, Peter Zijlstra pet...@infradead.org
> >> wrot
On Fri, May 18, 2018 at 08:30:27AM +1000, Benjamin Herrenschmidt wrote:
> On Thu, 2018-05-17 at 14:23 -0500, Segher Boessenkool wrote:
> > On Thu, May 17, 2018 at 01:06:10PM +1000, Benjamin Herrenschmidt wrote:
> > > The current asm statement in __patch_instruction() for the cache flushes
> > > doe
This patch extends the use of a common parse function for the
ibm,drc-info property, which can be modified by a callback function,
to the hotplug device processing. Candidate code is replaced by
a call to the parser, including a pointer to a local context-specific
function and local data.
In additi
This patch applies a common parse function for the ibm,drc-info
property, which can be modified by a callback function, to the
hot-add CPU code. Candidate code is replaced by a call to the
parser, including a pointer to a local context-specific function
and local data.
In addition, a bug in the rel
This patch provides a common parse function for the ibm,drc-info
property that can be modified by a callback function. The caller
provides a pointer to the function and a pointer to their unique
data, and the parser provides the current lmb set from the struct.
The callback function may return cod
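A sketch of the callback-driven shape this describes (struct layout and
names are illustrative, not the exact kernel API):

struct drc_ctx { u32 drc_index; u32 num_seq; };
typedef int (*drc_info_cb)(struct drc_ctx *ctx, void *data);

static int walk_drc_info(const __be32 *prop, int nr, drc_info_cb fn, void *data)
{
	struct drc_ctx ctx;
	int i, ret;

	for (i = 0; i < nr; i++) {
		/* decode one ibm,drc-info entry for the callback */
		ctx.drc_index = be32_to_cpu(prop[2 * i]);
		ctx.num_seq   = be32_to_cpu(prop[2 * i + 1]);
		ret = fn(&ctx, data);	/* caller's context-specific function */
		if (ret)
			return ret;	/* a non-zero code stops the walk */
	}
	return 0;
}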
[Replace/withdraw previous patch submission to ensure that testing
of related patches on similar hardware progresses together.]
This patch fixes a memory parsing bug when using of_prop_next_u32
calls at the start of a structure. Depending upon the value of
"cur" memory pointer argument to of_prop
This patch set corrects some errors and omissions in the previous
set of patches adding support for the "ibm,drc-info" property to
powerpc systems.
Unfortunately, some errors in the previous patch set break things
in some of the DLPAR operations. In particular when attempting to
hot-add a new CPU
On Thu, 2018-05-17 at 14:23 -0500, Segher Boessenkool wrote:
> Hi!
>
> On Thu, May 17, 2018 at 01:06:10PM +1000, Benjamin Herrenschmidt wrote:
> > The current asm statement in __patch_instruction() for the cache flushes
> > doesn't have a "volatile" statement and no memory clobber. That means
> >
powerpc migration/memory: This patch adds more recognition for changes
to the associativity of memory blocks described by the device-tree
properties and updates local and general kernel data structures to
reflect those changes. These differences may include:
* Evaluating 'ibm,dynamic-memory' prop
powerpc migration/cpu: Now apply changes to the associativity of CPUs
for the topology of LPARs in Post Migration events. Recognize more
changes to the associativity of memory blocks described by the
'cpu' properties when processing the topology of LPARs in Post Migration
events. Previous efforts
powerpc migration/drmem: Export many of the functions of DRMEM to
parse "ibm,dynamic-memory" and "ibm,dynamic-memory-v2" during
hotplug operations and for Post Migration events.
Also modify the DRMEM initialization code to allow it to:
* Be called after system initialization
* Provide a separate
The migration of LPARs across Power systems affects many attributes
including that of the associativity of memory blocks and CPUs. The
patches in this set execute when a system is coming up fresh upon a
migration target. They are intended to:
* Recognize changes to the associativity of memory an
[+cc Russell, Sam, Bryant, linuxppc-dev, Sebastian, linux-s390]
Sorry, I should have pulled in these new CC's earlier because ppc and
s390 both have PCI error handling similar to what Oza is changing
here.
The basic issue is that the new PCIe DPC (Downstream Port Containment,
see PCIe r4.0, sec 6
On Thu, 17 May 2018 12:05:47 +0530, Sandipan Das wrote:
> Currently, we resolve the callee's address for a JITed function
> call by using the imm field of the call instruction as an offset
> from __bpf_call_base. If bpf_jit_kallsyms is enabled, we further
> use this address to get the callee's kern
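The existing scheme, in miniature (illustrative; the real code also
consults kallsyms when bpf_jit_kallsyms is enabled, as the quoted text
goes on to say):

static u64 resolve_callee(const struct bpf_insn *insn)
{
	/* imm is a signed offset from the common call base */
	return (u64)__bpf_call_base + insn->imm;
}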
Hi!
On Thu, May 17, 2018 at 01:06:10PM +1000, Benjamin Herrenschmidt wrote:
> The current asm statement in __patch_instruction() for the cache flushes
> doesn't have a "volatile" statement and no memory clobber. That means
> gcc can potentially move it around (or move the store done by put_user
>
On 05/17/2018 10:19 AM, Matthew Wilcox wrote:
> On Thu, May 17, 2018 at 09:36:00AM -0700, Randy Dunlap wrote:
>>> +If the speculative page fault fails because of a concurrency is
>>
>> because a concurrency is
>
> While one can use concurrency as a nou
-language/20180517-224044
base: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-allmodconfig
compiler: powerpc64-linux-gnu-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin
On Thu, May 17, 2018 at 09:36:00AM -0700, Randy Dunlap wrote:
> > +If the speculative page fault fails because of a concurrency is
>
> because a concurrency is
While one can use concurrency as a noun, it sounds archaic to me. I'd
rather:
If
Hi,
On 05/17/2018 04:06 AM, Laurent Dufour wrote:
> This configuration variable will be used to build the code needed to
> handle speculative page fault.
>
> By default it is turned off, and activated depending on architecture
> support, ARCH_HAS_PTE_SPECIAL, SMP and MMU.
>
> The architecture su
- On May 16, 2018, at 9:19 PM, Boqun Feng boqun.f...@gmail.com wrote:
> On Wed, May 16, 2018 at 04:13:16PM -0400, Mathieu Desnoyers wrote:
>> - On May 16, 2018, at 12:18 PM, Peter Zijlstra pet...@infradead.org
>> wrote:
>>
>> > On Mon, Apr 30, 2018 at 06:44:26PM -0400, Mathieu Desnoyers
Hi,
I have reports from a user who is experiencing intermittent issues
with qemu being unable to allocate memory for the guest HPT. We see:
libvirtError: internal error: process exited while connecting to monitor:
Unexpected error in spapr_alloc_htab() at
/build/qemu-UwnbKa/qemu-2.5+dfsg/hw/ppc
On Mon, 2018-05-14 at 15:59:47 UTC, Nicholas Piggin wrote:
> Similarly to opal_event_shutdown, opal_nvram_write can be called in
> the crash path with irqs disabled. Special case the delay to avoid
> sleeping in invalid context.
>
> Cc: sta...@vger.kernel.org # v3.2
> Fixes: 3b8070335f ("powerpc/p
On Mon, 2018-05-14 at 08:27:35 UTC, Philippe Bergheaud wrote:
> Skiboot used to set the default Tunnel BAR register value when capi mode
> was enabled. This approach was ok for the cxl driver, but prevented other
> drivers from choosing different values.
>
> Skiboot versions > 5.11 will not set th
On Tue, Apr 10, 2018 at 08:34:37AM +0200, Christophe Leroy wrote:
> This reverts commit 6ad966d7303b70165228dba1ee8da1a05c10eefe.
>
> That commit was pointless, because csum_add() sums two 32 bits
> values, so the sum is 0x1fffe at the maximum.
> And then when adding upper part (1) and lower p
On Thu, May 17, 2018 at 03:27:37PM +0200, Christophe LEROY wrote:
> Le 17/05/2018 à 15:15, Segher Boessenkool a écrit :
> >>I guess we've been enabling this for all 32-bit targets for ever so it
> >>must be a reasonable option.
> >
> >On 603, load multiple (and string) are one cycle slower than doi
Le 17/05/2018 à 15:46, Michael Ellerman a écrit :
> Nicholas Piggin writes:
>> On Thu, 17 May 2018 12:04:13 +0200 (CEST)
>> Christophe Leroy wrote:
>>> commit 87a156fb18fe1 ("Align hot loops of some string functions")
>>> degraded the performance of string functions by adding useless
>>> nops
>>> A simple bench
wei.guo.si...@gmail.com writes:
> From: Simon Guo
>
> This patch is based on the previous VMX patch on memcmp().
>
> To optimize ppc64 memcmp() with VMX instruction, we need to think about
> the VMX penalty brought with: If kernel uses VMX instruction, it needs
> to save/restore current thread's V
Thiago Jung Bauermann writes:
> This test verifies that the AMR, IAMR and UAMOR are being written to a
> process' core file.
>
> Signed-off-by: Thiago Jung Bauermann
> ---
tools/testing/selftests/powerpc/ptrace/Makefile | 5 +-
> tools/testing/selftests/powerpc/ptrace/core-pkey.c | 460
Thiago Jung Bauermann writes:
> This test exercises read and write access to the AMR, IAMR and UAMOR.
>
> Signed-off-by: Thiago Jung Bauermann
> ---
> tools/testing/selftests/powerpc/include/reg.h | 1 +
tools/testing/selftests/powerpc/ptrace/Makefile | 5 +-
> tools/testing/selft
On Thu, May 17, 2018 at 12:49:58PM +0200, Christophe Leroy wrote:
> In my 8xx configuration, I get 208 calls to memcmp()
> Within those 208 calls, about half of them have constant sizes,
> 46 have a size of 8, 17 have a size of 16, only a few have a
> size over 16. Other fixed sizes are mostly 4, 6
Nicholas Piggin writes:
> On Thu, 17 May 2018 12:04:13 +0200 (CEST)
> Christophe Leroy wrote:
>
>> commit 87a156fb18fe1 ("Align hot loops of some string functions")
>> degraded the performance of string functions by adding useless
>> nops
>>
>> A simple benchmark on an 8xx calling 10x a mem
On Thu, 2018-05-17 at 15:21 +0200, Christophe LEROY wrote:
> > > +static inline int __memcmp8(const void *p, const void *q, int off)
> > > +{
> > > + s64 tmp = be64_to_cpu(*(u64*)(p + off)) - be64_to_cpu(*(u64*)(q +
> > > off));
> >
> > I always assumed 64bits unaligned access would trigger
On Thu, 2018-05-17 at 22:10 +1000, Michael Ellerman wrote:
> Christophe Leroy writes:
> > arch/powerpc/Makefile activates -mmultiple on BE PPC32 configs
> > in order to use multiple word instructions in functions entry/exit
>
> True, though that could be a lot simpler because the MULTIPLEWORD val
On Thu, 17 May 2018 12:59:29 +0200 (CEST)
Christophe Leroy wrote:
> Commit a7a9dcd882a67 ("powerpc: Avoid taking a data miss on every
> userspace instruction miss") has shown that limiting the read of
> faulting instruction to likely cases improves performance.
>
> This patch goes further into t
On Thu, May 17, 2018 at 12:49:50PM +0200, Christophe Leroy wrote:
> In preparation of optimisation patches, move PPC32 specific
> memcmp() and __clear_user() into string_32.S
> --- /dev/null
> +++ b/arch/powerpc/lib/string_32.S
> @@ -0,0 +1,74 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +/*
>
Le 17/05/2018 à 15:15, Segher Boessenkool a écrit :
> On Thu, May 17, 2018 at 10:10:21PM +1000, Michael Ellerman wrote:
>> Christophe Leroy writes:
>>> arch/powerpc/Makefile activates -mmultiple on BE PPC32 configs
>>> in order to use multiple word instructions in functions entry/exit
>> True, though that
On Thu, 17 May 2018 12:04:13 +0200 (CEST)
Christophe Leroy wrote:
> commit 87a156fb18fe1 ("Align hot loops of some string functions")
> degraded the performance of string functions by adding useless
> nops
>
> A simple benchmark on an 8xx calling 10x a memchr() that
> matches the first byte
Le 17/05/2018 à 15:03, Mathieu Malaterre a écrit :
> On Thu, May 17, 2018 at 12:49 PM, Christophe Leroy
> wrote:
>> In my 8xx configuration, I get 208 calls to memcmp()
>> Within those 208 calls, about half of them have constant sizes,
>> 46 have a size of 8, 17 have a size of 16, only a few have a
>> size o
On Thu, May 17, 2018 at 10:10:21PM +1000, Michael Ellerman wrote:
> Christophe Leroy writes:
> > arch/powerpc/Makefile activates -mmultiple on BE PPC32 configs
> > in order to use multiple word instructions in functions entry/exit
>
> True, though that could be a lot simpler because the MULTIPLEW
On Thu, May 17, 2018 at 12:49 PM, Christophe Leroy
wrote:
> In my 8xx configuration, I get 208 calls to memcmp()
> Within those 208 calls, about half of them have constant sizes,
> 46 have a size of 8, 17 have a size of 16, only a few have a
> size over 16. Other fixed sizes are mostly 4, 6 and 10
On 05/16/2018 11:10 PM, Shilpasri G Bhat wrote:
> On 05/15/2018 08:32 PM, Guenter Roeck wrote:
>> On Thu, Mar 22, 2018 at 04:24:32PM +0530, Shilpasri G Bhat wrote:
>>> This patch series adds support to enable/disable OCC based
>>> inband-sensor groups at runtime. The environmental sensor groups are
>>> manage
Christophe Leroy writes:
> arch/powerpc/Makefile activates -mmultiple on BE PPC32 configs
> in order to use multiple word instructions in functions entry/exit
True, though that could be a lot simpler because the MULTIPLEWORD value
is only used for PPC32, which is always big endian. I'll send a pa
From: Peter Zijlstra
Try a speculative fault before acquiring mmap_sem; if it returns with
VM_FAULT_RETRY, continue with the mmap_sem acquisition and do the
traditional fault.
Signed-off-by: Peter Zijlstra (Intel)
[Clearing of FAULT_FLAG_ALLOW_RETRY is now done in
handle_speculative_fault()]
[
From: Mahendran Ganesh
This patch enables the speculative page fault on the arm64
architecture.
I completed the SPF porting in 4.9. From the test results,
we can see app launching time improved by about 10% on average.
For the apps which have more than 50 threads, 15% or even more
improvement can be
This patch enables the speculative page fault on the PowerPC
architecture.
This will try a speculative page fault without holding the mmap_sem; if
it returns with VM_FAULT_RETRY, the mmap_sem is acquired and the
traditional page fault processing is done.
The speculative path is only tried for mult
When handling a speculative page fault, the vma->vm_flags and
vma->vm_page_prot fields are read after the page table lock is released,
so there is no longer any guarantee that these fields won't change behind
our back.
They will be saved in the vm_fault structure before the VMA is checked for
changes.
In t
Add support for the new speculative faults event.
Acked-by: David Rientjes
Signed-off-by: Laurent Dufour
---
tools/include/uapi/linux/perf_event.h | 1 +
tools/perf/util/evsel.c | 1 +
tools/perf/util/parse-events.c | 4
tools/perf/util/parse-events.l | 1 +
too
Add speculative_pgfault vmstat counter to count successful speculative page
fault handling.
Also fixing a minor typo in include/linux/vm_event_item.h.
Signed-off-by: Laurent Dufour
---
include/linux/vm_event_item.h | 3 +++
mm/memory.c | 3 +++
mm/vmstat.c |
Add a new software event to count successful speculative page faults.
Acked-by: David Rientjes
Signed-off-by: Laurent Dufour
---
include/uapi/linux/perf_event.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index b8e288a1f74
This patch adds a set of new trace events to collect the speculative page fault
event failures.
Signed-off-by: Laurent Dufour
---
include/trace/events/pagefault.h | 80
mm/memory.c | 57 ++--
2 files changed, 125 in
From: Peter Zijlstra
Provide infrastructure to do a speculative fault (not holding
mmap_sem).
The not holding of mmap_sem means we can race against VMA
change/removal and page-table destruction. We use the SRCU VMA freeing
to keep the VMA around. We use the VMA seqcount to detect change
(includi
This change is inspired by Peter's proposal patch [1] which was
protecting the VMA using SRCU. Unfortunately, SRCU is not scaling well in
that particular case, and it is introducing major performance degradation
due to excessive scheduling operations.
To allow access to the mm_rb tree without
When dealing with speculative page fault handler, we may race with VMA
being split or merged. In this case the vma->vm_start and vm->vm_end
fields may not match the address the page fault is occurring.
This can only happens when the VMA is split but in that case, the
anon_vma pointer of the new VM
When dealing with the speculative fault path we should use the VMA's
cached field values stored in the vm_fault structure.
Currently vm_normal_page() uses the pointer to the VMA to fetch the
vm_flags value. This patch provides a new __vm_normal_page() which
receives the vm_flags value
The speculative page fault handler, which runs without holding the
mmap_sem, calls lru_cache_add_active_or_unevictable(), but the vm_flags
value is not guaranteed to remain constant.
Introducing __lru_cache_add_active_or_unevictable() which has the vma flags
value parameter instead of the vma pointer
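The refactoring pattern used here (and by the __vm_normal_page() change
described above), sketched with assumed signatures:

/* New helper takes the flags value, not the vma, so the speculative
 * path can pass its saved snapshot of vm_flags. */
void __lru_cache_add_active_or_unevictable(struct page *page,
					   unsigned long vm_flags);

/* Existing callers keep the old interface as a thin wrapper. */
static inline void lru_cache_add_active_or_unevictable(struct page *page,
					struct vm_area_struct *vma)
{
	__lru_cache_add_active_or_unevictable(page, vma->vm_flags);
}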
migrate_misplaced_page() is only called during the page fault handling so
it's better to pass the pointer to the struct vm_fault instead of the vma.
This way, during the speculative page fault path, the saved vma->vm_flags
can be used.
Acked-by: David Rientjes
Signed-off-by: Laurent Dufour
---
The speculative page fault handler must be protected against anon_vma
changes. This is because page_add_new_anon_rmap() is called during the
speculative path.
In addition, don't try the speculative page fault if the VMA doesn't have
an anon_vma structure allocated, because its allocation should be
protec
If a thread is remapping an area while another one is faulting on the
destination area, the SPF handler may fetch the vma from the RB tree before
the pte has been moved by the other thread. This means that the moved ptes
will overwrite those created by the page fault handler, leading to leaked
pages.
The VMA sequence count has been introduced to allow fast detection of
VMA modification when running a page fault handler without holding
the mmap_sem.
This patch provides protection against the VMA modifications done in:
- madvise()
- mpol_rebind_policy()
- vma_replace_poli
From: Peter Zijlstra
Wrap the VMA modifications (vma_adjust/unmap_page_range) with sequence
counts such that we can easily test if a VMA is changed.
The calls to vm_write_begin/end() in unmap_page_range() are
used to detect when a VMA is being unmapped and thus that new page faults
should not be sat
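A sketch of the seqcount wrapping being described (the vm_sequence field
name comes from the series; treat the details as assumptions):

/* Writer side: bracket every VMA modification. */
static inline void vm_write_begin(struct vm_area_struct *vma)
{
	write_seqcount_begin(&vma->vm_sequence);
}
static inline void vm_write_end(struct vm_area_struct *vma)
{
	write_seqcount_end(&vma->vm_sequence);
}

/* Reader side: snapshot before the speculative walk, retry on change. */
static inline bool vma_changed(struct vm_area_struct *vma, unsigned int seq)
{
	return read_seqcount_retry(&vma->vm_sequence, seq);
}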
Some VMA struct fields need to be initialized once the VMA structure is
allocated.
Currently this only concerns the anon_vma_chain field, but others will be
added to support the speculative page fault.
Instead of spreading the initialization calls all over the code, let's
introduce a dedicated inli
pte_unmap_same() assumes that the page tables are still around because
the mmap_sem is held.
This is no longer the case when running a speculative page fault, and
additional checks must be made to ensure that the final page tables are
still there.
This is now done by calling pte_spinloc
When handling a page fault without holding the mmap_sem, the fetch of the
pte lock pointer and the locking will have to be done while ensuring
that the VMA is not touched behind our back.
So move the fetch and locking operations in a dedicated function.
Signed-off-by: Laurent Dufour
---
mm/memory.c |
From: Peter Zijlstra
When speculating faults (without holding mmap_sem) we need to validate
that the vma against which we loaded pages is still valid when we're
ready to install the new PTE.
Therefore, replace the pte_offset_map_lock() calls that (re)take the
PTL with pte_map_lock() which can fa
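The shape of that replacement, sketched (FAULT_FLAG_SPECULATIVE and
vma_has_changed() are the series' own names; the body is an assumption):

static bool pte_map_lock(struct vm_fault *vmf)
{
	if ((vmf->flags & FAULT_FLAG_SPECULATIVE) && vma_has_changed(vmf))
		return false;	/* VMA changed under us: caller retries
				 * the slow way, under mmap_sem */

	vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
	vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
	spin_lock(vmf->ptl);
	return true;
}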
Set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT for BOOK3S_64. This enables
the Speculative Page Fault handler.
Support is only provided for BOOK3S_64 currently because:
- it requires CONFIG_PPC_STD_MMU because of checks done in
set_access_flags_filter()
- it requires BOOK3S because we can't support book3e_hug
From: Mahendran Ganesh
Set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT for arm64. This
enables Speculative Page Fault handler.
Signed-off-by: Ganesh Mahendran
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 4759566a78cb..c932ae6
Set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT which turns on the
Speculative Page Fault handler when building for 64bit.
Cc: Thomas Gleixner
Signed-off-by: Laurent Dufour
---
arch/x86/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 47e7f582f86a..
This configuration variable will be used to build the code needed to
handle speculative page fault.
By default it is turned off, and activated depending on architecture
support, ARCH_HAS_PTE_SPECIAL, SMP and MMU.
The architecture support is needed since the speculative page fault handler
is calle
This is a port to kernel 4.17 of the work done by Peter Zijlstra to handle
page faults without holding the mm semaphore [1].
The idea is to try to handle user space page faults without holding the
mmap_sem. This should allow better concurrency for massively threaded
processes since the page fault han
Commit a7a9dcd882a67 ("powerpc: Avoid taking a data miss on every
userspace instruction miss") has shown that limiting the read of
faulting instruction to likely cases improves performance.
This patch goes further in this direction by limiting the read
of the faulting instruction to the only cas
In my 8xx configuration, I get 208 calls to memcmp().
Within those 208 calls, about half of them have constant sizes,
46 have a size of 8, 17 have a size of 16, and only a few have a
size over 16. Other fixed sizes are mostly 4, 6 and 10.
This patch inlines calls to memcmp() when size
is constant and l
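For the constant size-8 case, a correct big-endian comparison helper might
look like the sketch below. Note that returning a raw 64-bit subtraction
(as in the __memcmp8() variant quoted elsewhere in this thread) can wrap
and yield the wrong sign; an explicit comparison avoids that:

static inline int memcmp8(const void *p, const void *q)
{
	u64 a = be64_to_cpu(*(const u64 *)p);
	u64 b = be64_to_cpu(*(const u64 *)q);

	if (a == b)
		return 0;
	return a > b ? 1 : -1;	/* unsigned compare, as memcmp requires */
}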
Many calls to memcmp(), strncmp(), strncpy() and memchr()
are done with constant size.
This patch gives GCC a chance to optimise out
the NUL size verification.
This is only done when CONFIG_FORTIFY_SOURCE is not set, because
when CONFIG_FORTIFY_SOURCE is set, other inline versions of the
function
At the time being, memcmp() compares two chunks of memory
byte by byte.
This patch optimises the comparison by comparing word by word.
A small benchmark performed on an 8xx comparing two chunks
of 512 bytes performed 10 times gives:
Before: 5852274 TB ticks
After: 1488638 TB ticks
This
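A portable C model of the word-by-word idea (the actual patch is PPC32
assembly; this sketch assumes word-aligned buffers, and the final byte
loop keeps the result correct on any endianness):

#include <stddef.h>

int memcmp_wordwise(const void *p, const void *q, size_t n)
{
	const unsigned long *a = p, *b = q;
	const unsigned char *s, *t;

	/* compare one machine word at a time until a mismatch or the tail */
	while (n >= sizeof(unsigned long) && *a == *b) {
		a++;
		b++;
		n -= sizeof(unsigned long);
	}

	/* locate the differing byte (or finish the tail) byte by byte */
	s = (const unsigned char *)a;
	t = (const unsigned char *)b;
	while (n--) {
		if (*s != *t)
			return *s - *t;
		s++;
		t++;
	}
	return 0;
}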
Rewrite clear_user() on the same principle as memset(0), making use
of dcbz to clear complete cache lines.
This code is a copy/paste of memset(), with some modifications
in order to retrieve remaining number of bytes to be cleared,
as it needs to be returned in case of error.
On a MPC885, through
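The core of the dcbz trick, sketched as C with inline assembly (the real
__clear_user() is assembly and must also handle unaligned head/tail bytes
and the fault fixups needed for the error return mentioned above):

static inline void zero_cache_lines(void *p, unsigned long nlines)
{
	while (nlines--) {
		/* dcbz zeroes a whole data cache line without reading it
		 * from memory first, which is where the win comes from */
		__asm__ __volatile__("dcbz 0,%0" : : "r"(p) : "memory");
		p += L1_CACHE_BYTES;	/* 16 bytes on the 8xx */
	}
}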
In preparation of optimisation patches, move PPC32 specific
memcmp() and __clear_user() into string_32.S
Signed-off-by: Christophe Leroy
---
arch/powerpc/lib/Makefile | 5 +--
arch/powerpc/lib/string.S | 61
arch/powerpc/lib/string_32.S | 74 ++
This series intends to optimise string functions for PPC32 in the
same spirit as already done on PPC64.
The first patch moves PPC32 specific functions from string.S into
a dedicated file named string_32.S
The second patch rewrites __clear_user() by using the dcbz instruction
The third patch rewrites memc
On 05/16/2018 10:35 PM, Ram Pai wrote:
> So let me see if I understand the overall idea.
> Application can allocate new keys through a new syscall
> sys_pkey_alloc_1(flags, init_val, sig_init_val)
> 'sig_init_val' is the permission-state of the key in signal context.
I would keep the existing system
On 05/16/2018 11:07 PM, Ram Pai wrote:
> what would change the key-permission-values enforced in signal-handler
> context? Or can it never be changed, ones set through sys_pkey_alloc()?
The access rights can only be set by pkey_alloc and are unchanged after
that (so we do not have to discuss whe
commit 87a156fb18fe1 ("Align hot loops of some string functions")
degraded the performance of string functions by adding useless
nops.
A simple benchmark on an 8xx calling 10x a memchr() that
matches the first byte runs in 41668 TB ticks before this patch
and in 35986 TB ticks after this patch.
This is a note to let you know that I've just added the patch titled
futex: Remove duplicated code and fix undefined behaviour
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
This is a note to let you know that I've just added the patch titled
futex: Remove duplicated code and fix undefined behaviour
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
On Fri, May 11, 2018 at 09:04:06AM +0100, Gilad Ben-Yossef wrote:
> Due to a snafu "paes" testmgr tests were not ordered
> lexicographically, which led to boot time warnings.
> Reorder the tests as needed.
>
> Fixes: a794d8d ("crypto: ccree - enable support for hardware keys")
> Reported-by: Abdul
On Thu, May 17, 2018 at 09:19:49AM +0800, Boqun Feng wrote:
> On Wed, May 16, 2018 at 04:13:16PM -0400, Mathieu Desnoyers wrote:
> > and that x86 calls it from syscall_return_slowpath() (which AFAIU is
> > now used in the fast-path since KPTI), I wonder where we should call
>
> So we actually det
tlbies to an LPAR do not have to be serialised since POWER4/PPC970,
after which the MMU_FTR_LOCKLESS_TLBIE feature was introduced to
avoid tlbie locking.
Since commit c17b98cf6028 ("KVM: PPC: Book3S HV: Remove code for
PPC970 processors"), KVM no longer supports processors that do not
have this fe
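The pattern the patch can now remove, sketched (lock and feature names as
in the KVM HV code; the exact call site is an assumption):

static void do_tlbie(struct kvm *kvm, unsigned long rb, unsigned long rs)
{
	/* Only pre-POWER4 CPUs needed serialised tlbies, and KVM no
	 * longer supports them, so this lock can go away entirely. */
	bool lock = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);

	if (lock)
		spin_lock(&kvm->arch.tlbie_lock);
	asm volatile("tlbie %0,%1" : : "r"(rb), "r"(rs) : "memory");
	if (lock)
		spin_unlock(&kvm->arch.tlbie_lock);
}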