On Thu, Apr 05, 2018 at 07:22:26PM +0200, Peter Zijlstra wrote:
> On Thu, Apr 05, 2018 at 05:59:03PM +0100, Will Deacon wrote:
> > diff --git a/include/linux/atomic.h b/include/linux/atomic.h
> > index 8b276fd9a127..01ce3997cb42 100644
> > --- a/include/linux/atomic.h
> > +++ b/include/linux/atomic.h
On Thu, Apr 05, 2018 at 07:28:08PM +0200, Peter Zijlstra wrote:
> On Thu, Apr 05, 2018 at 05:59:07PM +0100, Will Deacon wrote:
> > @@ -340,12 +341,17 @@ void queued_spin_lock_slowpath(struct qspinlock
> > *lock, u32 val)
> > goto release;
> >
> >
On Thu, Apr 05, 2018 at 07:07:06PM +0200, Peter Zijlstra wrote:
> On Thu, Apr 05, 2018 at 05:58:59PM +0100, Will Deacon wrote:
> > The qspinlock locking slowpath utilises a "pending" bit as a simple form
> > of an embedded test-and-set lock that can avoid the overhead o
On Thu, Apr 05, 2018 at 05:16:16PM -0400, Waiman Long wrote:
> On 04/05/2018 12:58 PM, Will Deacon wrote:
> > /*
> > -* we're pending, wait for the owner to go away.
> > -*
> > -* *,1,1 -> *,1,0
> > -*
> > -* this wait loo
Hi Andrea,
On Fri, Apr 06, 2018 at 03:05:12PM +0200, Andrea Parri wrote:
> On Fri, Apr 06, 2018 at 12:34:36PM +0100, Will Deacon wrote:
> > I could say something like:
> >
> > "Pairs with dependency ordering from both xchg_tail and explicit
> >dereferences
From: Waiman Long
Currently, the qspinlock_stat code tracks only statistical counts in the
PV qspinlock code. However, it may also be useful to track the number
of locking operations done via the pending code vs. the MCS lock queue
slowpath for the non-PV case.
The qspinlock stat code is modifie
Cc: Peter Zijlstra
Cc: Ingo Molnar
Signed-off-by: Will Deacon
---
kernel/locking/qspinlock.c | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 648a16a2cd23..c781ddbe59a6 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
it when taking the lock
after reaching the head of the queue and leaving the tail entry intact
if we saw pending set, because we know that the tail is going to be
updated shortly.
Cc: Peter Zijlstra
Cc: Ingo Molnar
Signed-off-by: Will Deacon
---
ker
st of the lockword.
Cc: Peter Zijlstra
Cc: Ingo Molnar
Signed-off-by: Will Deacon
---
kernel/locking/qspinlock.c | 19 ---
1 file changed, 8 insertions(+), 11 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index a8fc402b3f3a..01b660442d87 1
:
locking/qspinlock: Add stat tracking for pending vs slowpath
Will Deacon (11):
barriers: Introduce smp_cond_load_relaxed and atomic_cond_read_relaxed
locking/qspinlock: Bound spinning on pending->locked transition in
slowpath
locking/qspinlock/x86: Increase _Q_PENDING_LOOPS upper b
Suggested-by: Waiman Long
Signed-off-by: Will Deacon
---
arch/x86/include/asm/qspinlock.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 5e16b5d40d32..2f09915f4aa4 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
s. For architectures that can
respond to changes in cacheline state in their smp_cond_load implementation,
it should be sufficient to use the default bound of 1.
Cc: Peter Zijlstra
Cc: Ingo Molnar
Suggested-by: Waiman Long
Signed-off-by: Will Deacon
---
kernel/locking/qspinlock.c | 20 +
primitives
to avoid unnecessary barrier overhead on architectures such as arm64.
Signed-off-by: Will Deacon
---
include/asm-generic/atomic-long.h | 2 ++
include/asm-generic/barrier.h | 27 +--
include/linux/atomic.h| 2 ++
3 files changed, 25 insertions(+), 6
A qspinlock can be unlocked simply by writing zero to the locked byte.
This can be implemented in the generic code, so do that and remove the
arch-specific override for x86 in the !PV case.
Cc: Peter Zijlstra
Cc: Ingo Molnar
Signed-off-by: Will Deacon
---
arch/x86/include/asm/qspinlock.h | 17
__qspinlock into struct qspinlock and kill the extra
definition.
Cc: Peter Zijlstra
Cc: Ingo Molnar
Acked-by: Boqun Feng
Signed-off-by: Will Deacon
---
arch/x86/include/asm/qspinlock.h | 2 +-
arch/x86/include/asm/qspinlock_paravirt.h | 3 +-
include/asm-generic/qspinlock_types.h | 32
with its own implementation using WFE.
On x86, this can also be cheaper than spinning on
smp_load_acquire().
Signed-off-by: Jason Low
Signed-off-by: Will Deacon
---
kernel/locking/mcs_spinlock.h | 10 ++
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/kernel/locking
Rather than dig into the counter field of the atomic_t inside the
qspinlock structure so that we can call smp_cond_load_acquire, use
atomic_cond_read_acquire instead, which operates on the atomic_t
directly.
Cc: Peter Zijlstra
Cc: Ingo Molnar
Signed-off-by: Will Deacon
---
kernel/locking
Signed-off-by: Will Deacon
---
kernel/locking/qspinlock.c | 19 +--
1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index fa5d2ab369f9..1e3ddc42135e 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
e
can replace the two RELEASE operations with a single smp_wmb() fence and
use RELAXED operations for the subsequent publishing of the node.
Cc: Peter Zijlstra
Cc: Ingo Molnar
Signed-off-by: Will Deacon
---
kernel/locking/qspinlock.c | 33 +
1 file changed, 17 inse
On Wed, Apr 11, 2018 at 03:53:16PM -0400, Waiman Long wrote:
> > diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> > index 396701e8c62d..a8fc402b3f3a 100644
> > --- a/kernel/locking/qspinlock.c
> > +++ b/kernel/locking/qspinlock.c
> > @@ -162,6 +162,17 @@ struct __qspinlock {
>
On Thu, Apr 12, 2018 at 10:16:55AM -0400, Waiman Long wrote:
> On 04/12/2018 10:06 AM, Will Deacon wrote:
> > On Wed, Apr 11, 2018 at 03:53:16PM -0400, Waiman Long wrote:
> >>> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> >>> index
ho de Melo
Cc: Ingo Molnar
Signed-off-by: Will Deacon
---
As an aside, the way we currently pass the depfile to -MD appears to be
in direct contradiction with the preprocessor documentation, although it
does work with the cc1 implementation.
tools/build/Build.include | 10 --
1 file changed
Hi Linus,
As I mentioned in the previous pull request, we had some nasty conflicts
with the KVM tree that resulted in us dropping some spectre-related work
shortly before the merge window opened. Now that the KVM tree has been
merged, we've put together an updated version of the patches based on
y
ns(+), 71 deletions(-)
> >
> > Reviewed-by: Marc Zyngier
> >
> > Will/Catalin, if you want to take it via the arm64 tree, that's fine
> > by me.
>
> Please allow me to change my mind. This is going to conflict horribly
> with the VHE rework and the
Hi Boqun,
On Sat, Apr 07, 2018 at 01:47:11PM +0800, Boqun Feng wrote:
> On Thu, Apr 05, 2018 at 05:59:07PM +0100, Will Deacon wrote:
> > @@ -340,12 +341,17 @@ void queued_spin_lock_slowpath(struct qspinlock
> > *lock, u32 val)
> > goto release;
> >
> >
On Sat, Apr 07, 2018 at 10:47:32AM +0200, Peter Zijlstra wrote:
> On Fri, Apr 06, 2018 at 02:09:53PM -0700, Paul E. McKenney wrote:
> > It would indeed be good to not be in the position of having to trade off
> > forward-progress guarantees against performance, but that does appear to
> > be where
Hi Waiman,
Thanks for taking this lot for a spin. Comments and questions below.
On Fri, Apr 06, 2018 at 04:50:19PM -0400, Waiman Long wrote:
> On 04/05/2018 12:58 PM, Will Deacon wrote:
> > The qspinlock locking slowpath utilises a "pending" bit as a simple form
> > of
On Mon, Apr 09, 2018 at 11:58:35AM +0100, Will Deacon wrote:
> On Fri, Apr 06, 2018 at 04:50:19PM -0400, Waiman Long wrote:
> > The pending bit was added to the qspinlock design to counter performance
> > degradation compared with ticket lock for workloads with light
> > spinl
On Mon, Apr 09, 2018 at 05:54:20PM +0200, Peter Zijlstra wrote:
> On Mon, Apr 09, 2018 at 03:54:09PM +0100, Will Deacon wrote:
> > @@ -289,18 +315,26 @@ void queued_spin_lock_slowpath(struct qspinlock
> > *lock, u32 val)
> > return;
> >
> >
Hi Waiman,
On Mon, Apr 09, 2018 at 02:08:52PM -0400, Waiman Long wrote:
> A locker in the pending code path is doing an infinite number of spins
> when waiting for the _Q_PENDING_VAL to _Q_LOCK_VAL transition. There
> is a concern that lock starvation can happen when concurrent lockers are
> able to ta
On Thu, Mar 08, 2018 at 08:41:59AM -0800, Doug Anderson wrote:
> Hi,
>
> On Thu, Mar 8, 2018 at 8:19 AM, Daniel Thompson
> wrote:
> > On 05/03/18 23:43, Douglas Anderson wrote:
> >>
> >> This is the equivalent of commit 001bf455d206 ("ARM: 8428/1: kgdb: Fix
> >> registers on sleeping tasks") but
On Wed, Mar 07, 2018 at 03:11:31PM +, Suzuki K Poulose wrote:
> On 26/02/18 18:05, Will Deacon wrote:
> >On Wed, Feb 07, 2018 at 02:21:05PM +, Suzuki K Poulose wrote:
> >>We treat most of the feature bits in the ID registers as STRICT,
> >>implying that all CP
Hi Dave,
On Thu, Mar 01, 2018 at 05:44:06PM +, Dave Martin wrote:
> Some architectures cannot always report accurately what kind of
> floating-point exception triggered a floating-point exception trap.
>
> This can occur with fp exceptions occurring on lanes in a vector
> instruction on arm64
e() and pmd_free_pte_page(),
> which clear a given pud/pmd entry and free up a page for the lower
> level entries.
>
> This patch implements their stub functions on x86 and arm64, which
> work as workaround.
>
> Reported-by: Lei Li
> Signed-off-by: Toshi Kani
> C
On Fri, Mar 09, 2018 at 01:44:40PM +, Mark Rutland wrote:
> On Wed, Mar 07, 2018 at 09:00:08AM -0600, Shanker Donthineni wrote:
> > static inline void __flush_icache_all(void)
> > {
> > - asm("ic ialluis");
> > - dsb(ish);
> > + /* Instruction cache invalidation is not required for I/D
On Wed, Mar 07, 2018 at 09:00:08AM -0600, Shanker Donthineni wrote:
> The DCache clean & ICache invalidation requirements for instructions
> to be data coherent are discoverable through new fields in CTR_EL0.
> The following two control bits DIC and IDC were defined for this
> purpose. No need to
Hi Shanker,
On Mon, Mar 05, 2018 at 11:06:43AM -0600, Shanker Donthineni wrote:
> The function SMCCC_ARCH_WORKAROUND_1 was introduced as part of SMC
> V1.1 Calling Convention to mitigate CVE-2017-5715. This patch uses
> the standard call SMCCC_ARCH_WORKAROUND_1 for Falkor chips instead
> of Silico
om subject
> matter experts?
The original patch looks fine to me:
Acked-by: Will Deacon
Will
Hi Peter,
On Wed, Sep 26, 2018 at 01:36:23PM +0200, Peter Zijlstra wrote:
> Here is my current stash of generic mmu_gather patches that goes on top of
> Will's
> tlb patches:
FWIW, patches 1,2,15,16,17 and 18 look fine to me, so:
Acked-by: Will Deacon
for those.
I'
Hi all,
On Fri, Sep 21, 2018 at 02:02:26PM +0200, Sebastian Andrzej Siewior wrote:
> We reproducibly observe cache line starvation on a Core2Duo E6850 (2
> cores), a i5-6400 SKL (4 cores) and on a NXP LS2044A ARM Cortex-A72 (4
> cores).
>
> Instrumentation always shows the picture:
>
> CPU0
On Wed, Sep 26, 2018 at 01:36:29PM +0200, Peter Zijlstra wrote:
> Needed for ia64 -- alternatively we drop the entire hook.
Ack for dropping the hook.
Will
' VMA.
>
> This allows architectures that have a reasonably efficient
> flush_tlb_range() to not require any additional effort.
>
> Cc: Nick Piggin
> Cc: Andrew Morton
> Cc: "Aneesh Kumar K.V"
> Cc: Will Deacon
> Signed-off-by: Peter Zijlstra (Intel)
ge tracking for classical
> ARM in __pte_free_tlb().
>
> Cc: Nick Piggin
> Cc: Andrew Morton
> Cc: "Aneesh Kumar K.V"
> Cc: Will Deacon
> Cc: Russell King
> Signed-off-by: Peter Zijlstra (Intel)
> ---
> arch/arm/include/asm/tlb.h | 255
> ++---
>vm_flags & VM_HUGETLB);
> tlb->vma_exec = !!(vma->vm_flags & VM_EXEC);
> +#endif
Alternatively, we could wrap the two assignments above in a macro like:
tlb_update_vma_flags(tlb, vma)
which could be empty if the generic tlb_flush isn't in use?
Anyway, as long as we resolve this one way or the other, you can add my Ack:
Acked-by: Will Deacon
Cheers,
Will
06PM +, Jan Glauber wrote:
> On Tue, Nov 20, 2018 at 07:03:17PM +, Will Deacon wrote:
> > On Tue, Nov 20, 2018 at 06:28:54PM +, Will Deacon wrote:
> > > On Sat, Nov 10, 2018 at 11:17:03AM +, Jan Glauber wrote:
> > > > On Fri, Nov 09, 2018 at 03:58:56PM +0
ARM platform,
> although that's even less popular.
>
> A simple workaround is to populate ARCH when it is not set and that we're
> running on an arm/arm64 system.
>
> Signed-off-by: Marc Zyngier
> ---
> scripts/decodecode | 7 +++
> 1 file changed, 7 insertions(+)
Acked-by: Will Deacon
Will
e these entries to avoid
> duplicate entries for a single capability.
>
> Cc: Will Deacon
> Cc: Andre Przywara
> Cc: Mark Rutland
> Signed-off-by: Suzuki K Poulose
> ---
> arch/arm64/kernel/cpu_errata.c | 19 +++
> 1 file changed, 7 insertions(+),
The core code already has a check for pXd_none(), so remove it from the
architecture implementation.
Cc: Chintan Pandya
Cc: Toshi Kani
Cc: Michal Hocko
Cc: Andrew Morton
Acked-by: Thomas Gleixner
Reviewed-by: Toshi Kani
Signed-off-by: Will Deacon
---
arch/x86/mm/pgtable.c | 6 --
1
The core code already has a check for pXd_none(), so remove it from the
architecture implementation.
Cc: Chintan Pandya
Cc: Toshi Kani
Cc: Thomas Gleixner
Cc: Michal Hocko
Cc: Andrew Morton
Signed-off-by: Will Deacon
---
arch/arm64/mm/mmu.c | 8 ++--
1 file changed, 2 insertions(+), 6
tting.
Cc: Chintan Pandya
Cc: Toshi Kani
Cc: Thomas Gleixner
Cc: Michal Hocko
Cc: Andrew Morton
Cc: Sean Christopherson
Signed-off-by: Will Deacon
---
lib/ioremap.c | 28
1 file changed, 12 insertions(+), 16 deletions(-)
diff --git a/lib/ioremap.c b/lib/iorem
: Toshi Kani
Signed-off-by: Will Deacon
---
arch/arm64/mm/mmu.c | 5 +
arch/x86/mm/pgtable.c | 8
include/asm-generic/pgtable.h | 5 +
lib/ioremap.c | 27 +--
4 files changed, 39 insertions(+), 6 deletions(-)
diff --git
-send-email-will.dea...@arm.com
The only change since v3 is a rebase onto v4.20-rc3, which was automatic.
I would appreciate a review of patch 4. Sean, please could you take a
quick look?
Thanks,
Will
--->8
Will Deacon (5):
ioremap: Rework pXd_free_pYd_page() API
arm64: mmu: D
pping.
Cc: Chintan Pandya
Cc: Toshi Kani
Cc: Thomas Gleixner
Cc: Michal Hocko
Cc: Andrew Morton
Suggested-by: Linus Torvalds
Reviewed-by: Toshi Kani
Signed-off-by: Will Deacon
---
lib/ioremap.c | 56 ++--
1 file changed, 42 insertions(+
On Tue, Nov 27, 2018 at 07:20:56AM +0100, Greg KH wrote:
> On Mon, Nov 26, 2018 at 08:56:50PM +, Michael Kelley wrote:
> > From: Greg KH Monday, November 26, 2018 11:57
> > AM
> >
> > > > > You created "null" hooks that do nothing, for no one in this patch
> > > > > series, why?
> > > > >
>
On Mon, Nov 26, 2018 at 11:00:10AM -0800, Sean Christopherson wrote:
> On Mon, Nov 26, 2018 at 05:07:46PM +0000, Will Deacon wrote:
> > The current ioremap() code uses a phys_addr variable at each level of
> > page table, which is confusingly offset by subtracting the base virt
Hi Masami,
On Wed, Nov 28, 2018 at 01:29:45AM +0900, Masami Hiramatsu wrote:
> Since commit 4378a7d4be30 ("arm64: implement syscall wrappers")
> introduced "__arm64_" prefix to all syscall wrapper symbols in
> sys_call_table, syscall tracer can not find corresponding
> metadata from syscall name.
he architecture dependent prepare_ftrace_return().
>
> Have arm64 use the new code, and remove the shadow stack management as well as
> having to set up the trace structure.
>
> This is needed to prepare for a fix of a design bug on how the curr_ret_stack
> is used.
>
> Cc
; This means we can generate an immediate jump address using a sequence
> of one MOVN (move wide negated) and two MOVK instructions, where the
> first one sets the lower 16 bits but also sets all top bits to 0x1.
>
> Signed-off-by: Ard Biesheuvel
> ---
Acked-by: Will Deacon
Denial,
On Mon, Nov 26, 2018 at 04:31:23PM -0800, Florian Fainelli wrote:
> breakpoint tests on the ARM 32-bit kernel are broken in several ways.
>
> The breakpoint length requested does not necessarily match whether the
> function address has the Thumb bit (bit 0) set or not, and this does
> matter to th
emove the reference to it.
Do you have a pointer to the commit that changed that behaviour? I just want
to make sure we're not missing something in our unwind_frame() code.
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: linux-arm-ker...@lists.infradead.org
> Signed-off-by: St
On Mon, Nov 26, 2018 at 03:09:56PM +, Suzuki K Poulose wrote:
> On Mon, Nov 26, 2018 at 02:06:04PM +0000, Will Deacon wrote:
> > On Mon, Nov 05, 2018 at 11:55:11AM +, Suzuki K Poulose wrote:
> > > We have two entries for ARM64_WORKAROUND_CLEAN_CACHE capability :
> >
: Will Deacon
---
arch/s390/include/asm/preempt.h | 2 ++
arch/x86/include/asm/preempt.h | 3 +++
include/linux/preempt.h | 3 ---
3 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/s390/include/asm/preempt.h b/arch/s390/include/asm/preempt.h
index 23a14d187fb1
t count is
only 32 bits wide, we can simply pack it next to the resched flag and
load the whole thing in one go, so that a dec-and-test operation doesn't
need to load twice.
Signed-off-by: Will Deacon
---
arch/arm64/include/asm/Kbuild| 1 -
arch/arm64/include/asm/pre
: d65f03c0ret
Will
--->8
Will Deacon (2):
preempt: Move PREEMPT_NEED_RESCHED definition into arch code
arm64: preempt: Provide our own implementation of asm/preempt.h
arch/arm64/include/asm/Kbuild| 1 -
arch/arm64/include/asm/preempt.h |
On Fri, Nov 16, 2018 at 02:53:07PM -0800, Florian Fainelli wrote:
> breakpoint tests on the ARM 32-bit kernel are broken in several ways.
>
> The breakpoint length requested does not necessarily match whether the
> function address has the Thumb bit (bit 0) set or not, and this does
> matter to th
On Sat, Nov 10, 2018 at 11:17:03AM +, Jan Glauber wrote:
> On Fri, Nov 09, 2018 at 03:58:56PM +0000, Will Deacon wrote:
> > On Fri, Nov 09, 2018 at 02:37:51PM +, Jan Glauber wrote:
> > > I'm seeing the following oops reproducible with upstream kernel on
On Tue, Nov 20, 2018 at 06:28:54PM +, Will Deacon wrote:
> On Sat, Nov 10, 2018 at 11:17:03AM +, Jan Glauber wrote:
> > On Fri, Nov 09, 2018 at 03:58:56PM +0000, Will Deacon wrote:
> > > On Fri, Nov 09, 2018 at 02:37:51PM +, Jan Glauber wrote:
> > > >
Hi Vivek,
On Thu, Oct 11, 2018 at 03:19:28PM +0530, Vivek Gautam wrote:
> This series enables apps-smmu, the "arm,mmu-500" instance
> on sdm845.
> Series tested on SDM845 MTP device with related smmu patch series [1],
> and necessary config change, besides one hack to keep LDO14 in LPM mode
> to b
[+Thor]
On Fri, Nov 16, 2018 at 04:54:30PM +0530, Vivek Gautam wrote:
> qcom,smmu-v2 is an arm,smmu-v2 implementation with specific
> clock and power requirements.
> On msm8996, multiple cores, viz. mdss, video, etc. use this
> smmu. On sdm845, this smmu is used with gpu.
> Add bindings for the sa
Hi Pavel,
On Tue, Nov 20, 2018 at 09:43:40AM -0500, Pavel Tatashin wrote:
> Allow printk time stamps/sched_clock() to be available from the early
> boot.
>
> Signed-off-by: Pavel Tatashin
> ---
> arch/arm64/kernel/setup.c| 25 +
> drivers/clocksource/arm_arch
On Tue, Nov 27, 2018 at 05:21:08PM -0800, Nadav Amit wrote:
> > On Nov 27, 2018, at 5:06 PM, Nadav Amit wrote:
> >
> >> On Nov 27, 2018, at 4:07 PM, Rick Edgecombe
> >> wrote:
> >>
> >> Sometimes when memory is freed via the module subsystem, an executable
> >> permissioned TLB entry can remai
Hi Masami,
On Wed, Nov 28, 2018 at 08:55:55AM +0900, Masami Hiramatsu wrote:
> On Tue, 27 Nov 2018 13:18:59 -0500
> Steven Rostedt wrote:
>
> > On Tue, 27 Nov 2018 16:58:49 +
> > Will Deacon wrote:
> >
> > > This looks fine to me, but I'm
On Wed, Nov 28, 2018 at 10:01:46AM +0100, Peter Zijlstra wrote:
> On Wed, Nov 28, 2018 at 09:56:40AM +0100, Peter Zijlstra wrote:
> > On Tue, Nov 27, 2018 at 07:45:00PM +0000, Will Deacon wrote:
> > > This pair of patches improves our preempt_enable() implementation slightly
On Wed, Nov 28, 2018 at 09:22:23AM -0500, Steven Rostedt wrote:
> On Wed, 28 Nov 2018 12:05:02 +
> Will Deacon wrote:
>
> > Ok! Then please add a comment to arch_syscall_match_sym_name() along those
> > lines, and you can add my ack:
> >
> > Acked-by: Wi
I spent some more time looking at this today...
On Fri, Nov 23, 2018 at 06:05:25PM +, Will Deacon wrote:
> Doing some more debugging, it looks like the usual failure case is where
> one CPU clears the inode field in the dentry via:
>
> devpts_pty_kill()
>
On Thu, Nov 29, 2018 at 09:03:54AM +, Julien Thierry wrote:
>
>
> On 29/11/18 04:19, Nick Desaulniers wrote:
> > Fixes the warning produced from Clang:
> > ./include/asm-generic/io.h:711:9: warning: value size does not match
> > register size specified by the constraint and modifier
> > [-Was
On Thu, Nov 29, 2018 at 12:14:32PM +, Mark Rutland wrote:
> On Wed, Nov 28, 2018 at 12:24:47PM +0100, Nicholas Mc Guire wrote:
> > diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
> > index 54ec278..f1ea00c 100644
> > --- a/drivers/perf/arm_spe_pmu.c
> > +++ b/drivers/perf/a
On Thu, Nov 29, 2018 at 09:10:39AM -0700, Nathan Chancellor wrote:
> On Thu, Nov 29, 2018 at 10:49:03AM +0000, Will Deacon wrote:
> > On Thu, Nov 29, 2018 at 09:03:54AM +, Julien Thierry wrote:
> > > On 29/11/18 04:19, Nick Desaulniers wrote:
> > > > Fixes t
On Thu, Nov 29, 2018 at 09:17:38AM -0700, Nathan Chancellor wrote:
> On Thu, Nov 29, 2018 at 04:13:37PM +0000, Will Deacon wrote:
> > On Thu, Nov 29, 2018 at 09:10:39AM -0700, Nathan Chancellor wrote:
> > > This doesn't appear to work, I get this error:
> > >
On Tue, Nov 27, 2018 at 05:55:38PM +0100, Andrey Konovalov wrote:
> Tag-based KASAN inline instrumentation mode (which embeds checks of shadow
> memory into the generated code, instead of inserting a callback) generates
> a brk instruction when a tag mismatch is detected.
>
> This commit adds a ta
On Tue, Nov 27, 2018 at 05:55:41PM +0100, Andrey Konovalov wrote:
> Now, that all the necessary infrastructure code has been introduced,
> select HAVE_ARCH_KASAN_SW_TAGS for arm64 to enable software tag-based
> KASAN mode.
>
> Signed-off-by: Andrey Konovalov
> ---
> arch/arm64/Kconfig | 1 +
> 1
On Thu, Nov 29, 2018 at 01:26:34PM -0500, Waiman Long wrote:
> On 11/29/2018 01:08 PM, Peter Zijlstra wrote:
> > Hmm, I think we're missing a barrier in wake_q_add(); when cmpxchg()
> > fails we still need an smp_mb().
> >
> > Something like so.
> >
> > diff --git a/kernel/sched/core.c b/kernel/sch
Hi Florian,
On Wed, Nov 14, 2018 at 12:21:12PM -0800, Florian Fainelli wrote:
> I have been trying to debug some perf builtin tests on ARM 32-bit and
> found that "Breakpoint overflow signal handler" and "Breakpoint overflow
> sampling" were failing, but there are a number of reasons for that and
of having an override we can leverage
> drivers/of/fdt.c populating phys_initrd_start/phys_initrd_size to
> populate those variables for us.
>
> Signed-off-by: Florian Fainelli
> ---
> arch/arm64/mm/init.c | 20
> 1 file changed, 8 insertions(+), 12 deletions(-)
On Thu, Nov 15, 2018 at 05:23:57PM +0800, Yinbo Zhu wrote:
> From: Rajesh Bhagat
>
> Add set/clear bits functions for ARM platform which are used by ehci fsl
> driver
>
> Signed-off-by: Rajesh Bhagat
> Signed-off-by: Yinbo Zhu
> ---
> arch/arm64/include/asm/io.h | 29 +++
Hi Olof,
On Fri, Nov 16, 2018 at 05:54:56PM -0800, Olof Johansson wrote:
> Makes sparse happy. Before:
>
> arch/arm64/include/asm/sysreg.h:471:42: warning: constant 0x
> is so big it is unsigned long
> arch/arm64/include/asm/sysreg.h:512:42: warning: constant 0x
: David Howells
Cc: Liam Girdwood
Cc: Chris Wilson
Cc: Michael Halcrow
Cc: Jonathan Corbet
Reported-by: Linus Torvalds
Signed-off-by: Will Deacon
---
Documentation/admin-guide/kernel-parameters.txt| 2 +-
Documentation/admin-guide/security-bugs.rst| 2 +-
Documentation/arm/Booting
Hi Anders, Steve,
On Tue, Dec 04, 2018 at 08:29:03PM +0100, Anders Roxell wrote:
> When running in qemu on an kernel built with allmodconfig and debug
> options (in particular kcov and ubsan) enabled, ftrace_replace_code
> function call take minutes. The ftrace selftest calls
> ftrace_replace_code
e these entries to avoid
> duplicate entries for a single capability. Add a new Kconfig
> entry to control the "capability" entry to make it easier
> to handle combinations of the CONFIGs.
>
> Cc: Will Deacon
> Cc: Andre Przywara
> Cc: Mark Rutland
> Signed-o
On Wed, Dec 05, 2018 at 05:14:53PM +, Suzuki K Poulose wrote:
> On 05/12/2018 15:02, Will Deacon wrote:
> >On Fri, Nov 30, 2018 at 05:18:00PM +, Suzuki K Poulose wrote:
> >>diff --git a/arch/arm64/include/asm/cputype.h
> >>b/arch/arm64/include/asm/cputype.h
> @@ -193,6 +193,7 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace
> *rec,
>
> void arch_ftrace_update_code(int command)
> {
> + command |= FTRACE_SCHEDULABLE;
> ftrace_modify_all_code(command);
> }
Bikeshed: I'd probably go for FTRACE_MAY_SLEEP, but I'm not going to die
on that hill so...
Acked-by: Will Deacon
Thanks, Steve!
Will
curr_ret_stack no longer needs to worry about checking for
> this. curr_ret_stack is still initialized to -1, when there's not a shadow
> stack allocated.
>
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: linux-arm-ker...@lists.infradead.org
> Reviewed-by: Joel Fernande
'grep' instead of 'fold' to use a dependency that is
> already used a lot in the kernel.
>
> Reported-by: Naresh Kamboju
> Suggested-by: Will Deacon
> Signed-off-by: Anders Roxell
> ---
> scripts/atomic/atomic-tbl.sh | 2 +-
> 1 file changed, 1 inserti
On Thu, Dec 06, 2018 at 10:59:14AM -0500, Steven Rostedt wrote:
> On Thu, 6 Dec 2018 13:20:07 +
> Will Deacon wrote:
>
> > On Wed, Dec 05, 2018 at 12:48:54PM -0500, Steven Rostedt wrote:
> > > From: "Steven Rostedt (VMware)"
> > >
> > > I
pping.
Cc: Chintan Pandya
Cc: Toshi Kani
Cc: Thomas Gleixner
Cc: Michal Hocko
Cc: Andrew Morton
Suggested-by: Linus Torvalds
Reviewed-by: Toshi Kani
Signed-off-by: Will Deacon
---
lib/ioremap.c | 56 ++--
1 file changed, 42 insertions(+
The core code already has a check for pXd_none(), so remove it from the
architecture implementation.
Cc: Chintan Pandya
Cc: Toshi Kani
Cc: Michal Hocko
Cc: Andrew Morton
Acked-by: Thomas Gleixner
Reviewed-by: Toshi Kani
Signed-off-by: Will Deacon
---
arch/x86/mm/pgtable.c | 6 --
1
tting.
Cc: Chintan Pandya
Cc: Toshi Kani
Cc: Thomas Gleixner
Cc: Michal Hocko
Cc: Andrew Morton
Cc: Sean Christopherson
Tested-by: Sean Christopherson
Reviewed-by: Sean Christopherson
Signed-off-by: Will Deacon
---
lib/ioremap.c | 28
1 file changed, 12 inserti
: Toshi Kani
Signed-off-by: Will Deacon
---
arch/arm64/mm/mmu.c | 5 +
arch/x86/mm/pgtable.c | 8
include/asm-generic/pgtable.h | 5 +
lib/ioremap.c | 27 +--
4 files changed, 39 insertions(+), 6 deletions(-)
diff --git
The core code already has a check for pXd_none(), so remove it from the
architecture implementation.
Cc: Chintan Pandya
Cc: Toshi Kani
Cc: Thomas Gleixner
Cc: Michal Hocko
Cc: Andrew Morton
Signed-off-by: Will Deacon
---
arch/arm64/mm/mmu.c | 8 ++--
1 file changed, 2 insertions(+), 6
--->8
Will Deacon (5):
ioremap: Rework pXd_free_pYd_page() API
arm64: mmu: Drop pXd_present() checks from pXd_free_pYd_table()
x86/pgtable: Drop pXd_none() checks from pXd_free_pYd_table()
lib/ioremap: Ensure phys_addr actually corresponds to a physical
address
lib/ioremap: Ensu