processes where scanning page tables would have been better, and pages
that are mapped in "sparse" processes where you are happy to be using
rmap, and even pages that are mapped into both types of processes at
once ? Or, can you change the lru/rmap scan so that it will efficiently
s
On Thu, Apr 08, 2021 at 08:13:43AM +0100, Matthew Wilcox wrote:
> On Thu, Apr 08, 2021 at 09:00:26AM +0200, Peter Zijlstra wrote:
> > On Wed, Apr 07, 2021 at 10:27:12PM +0100, Matthew Wilcox wrote:
> > > Doing I/O without any lock held already works; it just uses the file
> > > refcount. It would
On Wed, Apr 07, 2021 at 03:50:06AM +0100, Matthew Wilcox wrote:
> On Tue, Apr 06, 2021 at 06:44:59PM -0700, Michel Lespinasse wrote:
> > Performance tuning: as single threaded userspace does not use
> > speculative page faults, it does not require rcu safe vma freeing.
> > T
On Wed, Apr 07, 2021 at 04:40:34PM +0200, Peter Zijlstra wrote:
> On Tue, Apr 06, 2021 at 06:44:49PM -0700, Michel Lespinasse wrote:
> > In the speculative case, call the vm_ops->fault() method from within
> > an rcu read locked section, and verify the mmap sequence lock at th
On Wed, Apr 07, 2021 at 04:47:34PM +0200, Peter Zijlstra wrote:
> On Tue, Apr 06, 2021 at 06:44:34PM -0700, Michel Lespinasse wrote:
> > The counter's write side is hooked into the existing mmap locking API:
> > mmap_write_lock() increments the counter to the n
On Wed, Apr 07, 2021 at 04:35:28PM +0100, Matthew Wilcox wrote:
> On Wed, Apr 07, 2021 at 04:48:44PM +0200, Peter Zijlstra wrote:
> > On Tue, Apr 06, 2021 at 06:44:36PM -0700, Michel Lespinasse wrote:
> > > --- a/arch/x86/mm/fault.c
> > > +++ b/arch/x86/mm/fault.c
>
On Wed, Apr 07, 2021 at 01:14:53PM -0700, Michel Lespinasse wrote:
> On Wed, Apr 07, 2021 at 04:48:44PM +0200, Peter Zijlstra wrote:
> > On Tue, Apr 06, 2021 at 06:44:36PM -0700, Michel Lespinasse wrote:
> > > --- a/arch/x86/mm/fault.c
> > > +++ b/arch/x86/mm/fault
On Wed, Apr 07, 2021 at 04:48:44PM +0200, Peter Zijlstra wrote:
> On Tue, Apr 06, 2021 at 06:44:36PM -0700, Michel Lespinasse wrote:
> > --- a/arch/x86/mm/fault.c
> > +++ b/arch/x86/mm/fault.c
> > @@ -1219,6 +1219,8 @@ void do_user_addr_fault(struct pt_regs *regs,
> >
On Wed, Apr 07, 2021 at 03:35:27AM +0100, Matthew Wilcox wrote:
> On Tue, Apr 06, 2021 at 06:44:49PM -0700, Michel Lespinasse wrote:
> > In the speculative case, call the vm_ops->fault() method from within
> > an rcu read locked section, and verify the mmap sequence lock at th
Hi Bibo,
You introduced this code in commit 7df676974359f back in May.
Could you check that the change is correct ?
Thanks,
On Tue, Apr 06, 2021 at 06:44:28PM -0700, Michel Lespinasse wrote:
> update_mmu_tlb() can be used instead of update_mmu_cache() when the
> page fault handler detects that it lost the race to another page fault.
they will all be adjusted together before use, so they just need to be
consistent with each other, and using the original fault address and
pte allows us to reuse pte_map_lock() without any changes to it.
Signed-off-by: Michel Lespinasse
---
mm/filemap.c | 27 ---
1 file
when finalizing the fault.
Signed-off-by: Michel Lespinasse
---
arch/x86/mm/fault.c | 36 +++
include/linux/vm_event_item.h | 4
mm/vmstat.c | 4
3 files changed, 44 insertions(+)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm
In the speculative case, we want to avoid direct pmd checks (which
would require some extra synchronization to be safe), and rely on
pte_map_lock which will both lock the page table and verify that the
pmd has not changed from its initial value.
Signed-off-by: Michel Lespinasse
---
mm/memory.c
h any mmap writer.
This is very similar to a seqlock, but both the writer and speculative
readers are allowed to block. In the fail case, the speculative reader
does not spin on the sequence counter; instead it should fall back to
a different mechanism such as grabbing the mmap lock read side.
This prepares for speculative page faults looking up and copying vmas
under protection of an rcu read lock, instead of the usual mmap read lock.
Signed-off-by: Michel Lespinasse
---
include/linux/mm_types.h | 16 +++-
kernel/fork.c| 11 ++-
2 files changed, 21
where the entire page table walk (higher levels down to ptes)
needs special care in the speculative case.
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 98 ++---
1 file changed, 49 insertions(+), 49 deletions(-)
diff --git a/mm/memory.c b/mm
tables.
Signed-off-by: Michel Lespinasse
---
include/linux/mm.h | 4 +++
mm/memory.c| 77 --
2 files changed, 79 insertions(+), 2 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index d5988e78e6ab..dee8a4833779 100644
--- a
tests that do not have any frequent
concurrent page faults ! This is because rcu safe vma freeing prevents
recently released vmas from being immediately reused in a new thread.
Signed-off-by: Michel Lespinasse
---
kernel/fork.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff
We just need to make sure f2fs_filemap_fault() doesn't block in the
speculative case as it is called with an rcu read lock held.
Signed-off-by: Michel Lespinasse
---
fs/f2fs/file.c | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
Set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT so that the speculative fault
handling code can be compiled on this architecture.
Signed-off-by: Michel Lespinasse
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index e4e1b6550115
Performance tuning: single threaded userspace does not benefit from
speculative page faults, so we turn them off to avoid any related
(small) extra overheads.
Signed-off-by: Michel Lespinasse
---
arch/x86/mm/fault.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/arch/x86/mm/fault.c b
Change mmap_lock_is_contended to return a bool value, rather than an
int which the callers are then supposed to interpret as a bool. This
is to ensure consistency with other mmap lock API functions (such as
the trylock functions).
Signed-off-by: Michel Lespinasse
---
include/linux/mmap_lock.h
the anon case, but maybe not as clear for the file cases.
- Is the Android use case compelling enough to merge the entire patchset ?
- Can we use this as a foundation for other mmap scalability work ?
I hear several proposals involving the idea of RCU based fault handling,
and hope this propo
We just need to make sure ext4_filemap_fault() doesn't block in the
speculative case as it is called with an rcu read lock held.
Signed-off-by: Michel Lespinasse
---
fs/ext4/file.c | 1 +
fs/ext4/inode.c | 7 ++-
2 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/fs/ext4/f
Add a new CONFIG_SPECULATIVE_PAGE_FAULT_STATS config option,
and dump extra statistics about executed spf cases and abort reasons
when the option is set.
Signed-off-by: Michel Lespinasse
---
arch/x86/mm/fault.c | 19 +++---
include/linux/mmap_lock.h | 19 +-
include
Add a speculative field to the vm_operations_struct, which indicates if
the associated file type supports speculative faults.
Initially this is set for files that implement fault() with filemap_fault().
Signed-off-by: Michel Lespinasse
---
fs/btrfs/file.c| 1 +
fs/cifs/file.c | 1 +
fs
anymore, as it is now running within an rcu read lock.
Signed-off-by: Michel Lespinasse
---
fs/xfs/xfs_file.c | 3 +++
mm/memory.c | 22 --
2 files changed, 23 insertions(+), 2 deletions(-)
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index a007ca0711d9..b360
() API is kept as a wrapper around
do_handle_mm_fault() so that we do not have to immediately update
every handle_mm_fault() call site.
Signed-off-by: Michel Lespinasse
---
include/linux/mm.h | 12 +---
mm/memory.c| 10 +++---
2 files changed, 16 insertions(+), 6 deletions
trying that unimplemented case.
Signed-off-by: Michel Lespinasse
---
arch/x86/mm/fault.c | 3 ++-
include/linux/mm.h | 14 ++
mm/memory.c | 17 -
3 files changed, 28 insertions(+), 6 deletions(-)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
, and that readahead is not
necessary at this time. In all other cases, the fault is aborted to be
handled non-speculatively.
Signed-off-by: Michel Lespinasse
---
mm/filemap.c | 45 -
1 file changed, 44 insertions(+), 1 deletion(-)
diff --git a/mm
in order to satisfy pte_map_lock()'s preconditions.
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 31 ++-
1 file changed, 22 insertions(+), 9 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index eea72bd78d06..547d9d0ee962 100644
--- a/mm/memory.c
lative fault handling.
The speculative handling case also does not preallocate page tables,
as it is always called with a pre-existing page table.
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 63 +++--
1 file changed, 42 insertions(+), 21 deleti
Signed-off-by: Michel Lespinasse
---
arch/arm64/mm/fault.c | 52 +++
1 file changed, 52 insertions(+)
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index f37d4e3830b7..3757bfbb457a 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -25,6 +25,7
Define the new FAULT_FLAG_SPECULATIVE flag, which indicates when we are
attempting speculative fault handling (without holding the mmap lock).
Signed-off-by: Michel Lespinasse
---
include/linux/mm.h | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/include/linux/mm.h b
Change handle_pte_fault() to allow speculative fault execution to proceed
through do_numa_page().
do_swap_page() does not implement speculative execution yet, so it
needs to abort with VM_FAULT_RETRY in that case.
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 15 ++-
1 file
Defer freeing of vma->vm_file when freeing vmas.
This is to allow speculative page faults in the mapped file case.
Signed-off-by: Michel Lespinasse
---
fs/exec.c | 1 +
kernel/fork.c | 17 +++--
mm/mmap.c | 11 +++
mm/nommu.c| 6 ++
4 files changed,
is set (the original pte was not
pte_none), catch speculative faults and return VM_FAULT_RETRY as
those cases are not implemented yet. Also assert that do_fault()
is not reached in the speculative case.
Signed-off-by: Michel Lespinasse
---
arch/x86/mm/fault.c | 2 +-
mm/memory.c |
tical between the two cases.
This change reduces the code duplication between the two cases.
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 85 +++--
1 file changed, 37 insertions(+), 48 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
change do_numa_page() to use pte_spinlock() when locking the page table,
so that the mmap sequence counter will be validated in the speculative case.
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/memory.c b/mm
wp_pfn_shared() or wp_page_shared() (both unreachable as we only
handle anon vmas so far) or handle_userfault() (needs an explicit
abort to handle non-speculatively).
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/memory.c
ind less readable.
Signed-off-by: Michel Lespinasse
---
include/linux/mmap_lock.h | 32
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 4e27f755766b..8ff276a7560e 100644
--- a/include/l
page faulting code, and some code has to
be added there to try speculative fault handling first.
Signed-off-by: Michel Lespinasse
---
mm/Kconfig | 22 ++
1 file changed, 22 insertions(+)
diff --git a/mm/Kconfig b/mm/Kconfig
index 24c045b24b95..322bda319dea 100644
--- a/mm/Kconfig
when finally committing
the faulted page to the mm address space.
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 74 ++---
1 file changed, 42 insertions(+), 32 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index fc555fae0844..ab3160719bf3
update_mmu_tlb() can be used instead of update_mmu_cache() when the
page fault handler detects that it lost the race to another page fault.
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index
that point the page table lock serializes any further
races with concurrent mmap lock writers.
If the mmap sequence count check fails, both functions will return false
with the pte being left unmapped and unlocked.
Signed-off-by: Michel Lespinasse
---
include/linux/mm.h | 34 +
Set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT so that the speculative fault
handling code can be compiled on this architecture.
Signed-off-by: Michel Lespinasse
---
arch/x86/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 2792879d398e
Change do_anonymous_page() to handle the speculative case.
This involves aborting speculative faults if they have to allocate a new
anon_vma, and using pte_map_lock() instead of pte_offset_map_lock()
to complete the page fault.
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 17
Change do_swap_page() to allow speculative fault execution to proceed.
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 5 -
1 file changed, 5 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index ab3160719bf3..6eddd7b4e89c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3340,11
-git a/drivers/media/i2c/ov5693.c b/drivers/media/i2c/ov5693.c
> new file mode 100644
> index 000000000000..da2ca99a7ad3
> --- /dev/null
> +++ b/drivers/media/i2c/ov5693.c
> @@ -0,0 +1,1557 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (c) 2013 Intel Corporation. A
ve come to depend on it (the old "common
law feature" issue).
Just a concern I have, with 0 evidence behind it, so I hope it turns
out not to be an actual issue.
Acked-by: Michel Lespinasse
On Thu, Apr 1, 2021 at 12:51 PM Liam Howlett wrote:
>
> find_vma() will continue to search up
initialize
req.reply.tval_usec = req32.reply.tval_usec;
before calling drm_ioctl_kernel, since it's not aliased by any req.request.*
member, and drm_wait_vblank_ioctl doesn't always write to it.
--
Earthling Michel Dänzer | https://redhat.com
Libr
issue and mesa was using
SYS_kcmp to compare device node fds.
A far shorter and more portable solution is possible, so let me
prepare a Mesa patch.
Make sure to read my comments on
https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/6881 first. :)
--
Earthling Michel Dänzer
On 2021-02-08 2:34 p.m., Daniel Vetter wrote:
On Mon, Feb 8, 2021 at 12:49 PM Michel Dänzer wrote:
On 2021-02-05 9:53 p.m., Daniel Vetter wrote:
On Fri, Feb 5, 2021 at 7:37 PM Kees Cook wrote:
On Fri, Feb 05, 2021 at 04:37:52PM +0000, Chris Wilson wrote:
Userspace has discovered the
On 2021-02-08 12:49 p.m., Michel Dänzer wrote:
On 2021-02-05 9:53 p.m., Daniel Vetter wrote:
On Fri, Feb 5, 2021 at 7:37 PM Kees Cook wrote:
On Fri, Feb 05, 2021 at 04:37:52PM +0000, Chris Wilson wrote:
Userspace has discovered the functionality offered by SYS_kcmp and has
started to depend
select if CONFIG_DRM is
unfortunately needed I think.
Per above, not sure this is really true.
--
Earthling Michel Dänzer | https://redhat.com
Libre software enthusiast | Mesa and X developer
When trying to convert a CCM matrix for IPU3, extreme values for the
Color Correction Matrix.
Specify the precision to ease userspace integration.
Signed-off-by: Jean-Michel Hautbois
---
drivers/staging/media/ipu3/include/intel-ipu3.h | 14 --
1 file changed, 8 insertions(+), 6
From: Michel Dänzer
Without __GFP_NOWARN, attempts at allocating huge pages can trigger
dmesg splats like below (which are essentially noise, since TTM falls
back to normal pages if it can't get a huge one).
[ 9556.710241] clinfo: page allocation failure: order:9,
mode:0x194dc2(GFP_HIG
Hi Daniel,
Thanks for the patch !
On 30/11/2020 14:31, Daniel Scally wrote:
> On platforms where ACPI is designed for use with Windows, resources
> that are intended to be consumed by sensor devices are sometimes in
> the _CRS of a dummy INT3472 device upon which the sensor depends. This
> driver
ssure.
--
Earthling Michel Dänzer | https://redhat.com
Libre software enthusiast | Mesa and X developer
tch has effectively no overhead unless tracepoints are enabled at
> runtime. If tracepoints are enabled, there is a performance impact, but
> how much depends on exactly what e.g. the BPF program does.
>
> Signed-off-by: Axel Rasmussen
Reviewed-by: Michel Lespinasse
Looks good to me, thanks!
;mmap_lock);
mmap_lock_trace_acquire_returned(mm, true, true);
}
I think this is more straightforward, and also the
mmap_lock_trace_start_locking and similar functions don't depend on
the underlying lock implementation.
The changes to the other files look fine to me.
--
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.
his by using
> e.g. u8 (assuming sizeof(bool) is 1, and bool is unsigned; if either of
> these properties don't match, you get EINVAL [2]).
>
> Supporting "bool" explicitly makes hooking this up easier and more
> portable for userspace.
Acked-by: Michel Lespinasse
Lo
On Fri, Oct 2, 2020 at 9:33 AM Jann Horn wrote:
> On Fri, Oct 2, 2020 at 11:18 AM Michel Lespinasse wrote:
> > On Thu, Oct 1, 2020 at 6:25 PM Jann Horn wrote:
> > > Until now, the mmap lock of the nascent mm was ordered inside the mmap
> > > lock
> > > of th
On Thu, Oct 1, 2020 at 6:25 PM Jann Horn wrote:
> Until now, the mmap lock of the nascent mm was ordered inside the mmap lock
> of the old mm (in dup_mmap() and in UML's activate_mm()).
> A following patch will change the exec path to very broadly lock the
> nascent mm, but fine-grained locking sh
xceptions; I think it's both
unusual and adds complexity that is not strictly contained into the
init paths.
I don't really understand the concern with the bprm vma in
get_arg_page(); I'm not super familiar with this code but isn't it a
normal vma within the process t
ensure that they hold the mmap lock when calling into GUP (unless the mm is
> > not yet globally visible), add an assertion to make sure it stays that way
> > going forward.
Thanks for doing this, there is a lot of value in ensuring that a
function's callers follows the prerequisites.
Acked-by: Michel Lespinasse
ck before doing
> anything with `vma`, but that's because we actually don't do anything with
> it apart from the NULL check.)
>
> Signed-off-by: Jann Horn
Thanks for these cleanups :)
Acked-by: Michel Lespinasse
nly for testing, and it's only reachable by root through
> debugfs, so this doesn't really have any impact; however, if we want to add
> lockdep asserts into the GUP path, we need to have clean locking here.
>
> Signed-off-by: Jann Horn
Acked-by: Michel Lespinasse
Tha
least (if
DMA_BUF_IOCTL_IMPORT_SYNC_FILE is still controversial).
--
Earthling Michel Dänzer | https://redhat.com
Libre software enthusiast | Mesa and X developer
On 2020-09-21 4:40 p.m., Sasha Levin wrote:
From: Michel Dänzer
[ Upstream commit 2f228aab21bbc74e90e267a721215ec8be51daf7 ]
Don't check drm_crtc_state::active for this either, per its
documentation in include/drm/drm_crtc.h:
* Hence drivers must not consult @active in their va
Dear Friend,
Let me start by introducing myself, I am Mr Michel Madi Manager of
Bank Of Africa Burkina Faso.
I am writing you this letter based on the latest development at my
Department which I will like to bring to your personal edification.
(7.5 million U.S Dollars transfer claims).
This is
doesn't support
suspend-to-RAM on Apple PowerPC notebooks.
--
Earthling Michel Dänzer | https://redhat.com
Libre software enthusiast | Mesa and X developer
On Sat, Aug 22, 2020 at 9:04 AM Michel Lespinasse wrote:
> - B's implementation could, when lockdep is enabled, always release
> lock A before acquiring lock B. This is not ideal though, since this
> would hinder testing of the not-blocked code path in the acquire
> sequence.
A
On Sat, Aug 22, 2020 at 9:39 AM wrote:
> On Sat, Aug 22, 2020 at 09:04:09AM -0700, Michel Lespinasse wrote:
> > Hi,
> >
> > I am wondering about how to describe the following situation to lockdep:
> >
> > - lock A would be something that's already implement
der testing of the not-blocked code path in the acquire
sequence.
Would the lockdep maintainers have any guidance as to how to handle
this locking case ?
Thanks,
--
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.
ell, because you
didn't trim the quoted text (hint, hint).
--
Earthling Michel Dänzer | https://redhat.com
Libre software enthusiast | Mesa and X developer
On Wed, Aug 12, 2020 at 7:13 PM Chinwen Chang
wrote:
> smaps_rollup will try to grab mmap_lock and go through the whole vma
> list until it finishes the iterating. When encountering large processes,
> the mmap_lock will be held for a longer time, which may block other
> write requests like mmap an
On Wed, Aug 12, 2020 at 7:14 PM Chinwen Chang
wrote:
>
> Add new API to query if someone wants to acquire mmap_lock
> for write attempts.
>
> Using this instead of rwsem_is_contended makes it more tolerant
> of future changes to the lock type.
>
> Signed-off-by: Chinwen
On Thu, Aug 13, 2020 at 9:11 AM Chinwen Chang
wrote:
> On Thu, 2020-08-13 at 02:53 -0700, Michel Lespinasse wrote:
> > On Wed, Aug 12, 2020 at 7:14 PM Chinwen Chang
> > wrote:
> > > Recently, we have observed some janky issues caused by unpleasantly long
> > >
the
past I have tried adding rwsem_is_contended to mlock(), and later to
mm_populate() paths, and IIRC gotten pushback on it both times. I
don't feel strongly on this, but would prefer if someone else approved
the rwsem_is_contended() use case.
Couple related questions, how many VMAs are we looking
50).
>
> In this case the patch is a clear NAK since you haven't root caused the
> issue and are just working around it in a very questionable manner.
To be fair though, amdgpu & radeon are already disabling write-combining
for system memory pages in 32-bit x86 kernels for similar reasons.
--
Earthling Michel Dänzer | https://redhat.com
Libre software enthusiast | Mesa and X developer
pipermail/linux-riscv/2020-June/010335.html
>
> Fixes: 395a21ff859c(riscv: add ARCH_HAS_SET_DIRECT_MAP support)
> Signed-off-by: Atish Patra
Thanks for the fix.
Reviewed-by: Michel Lespinasse
ocking checks exposed the issue that OpenRISC was not taking
> this mmap lock when during page walks for DMA operations. This patch
> locks and unlocks the mmap lock for page walking.
>
> Fixes: 42fc541404f2 ("mmap locking API: add mmap_assert_locked() and
> mmap_assert_write_lo
On Tue, Jun 16, 2020 at 11:07 PM Stafford Horne wrote:
> On Wed, Jun 17, 2020 at 02:35:39PM +0900, Stafford Horne wrote:
> > On Tue, Jun 16, 2020 at 01:47:24PM -0700, Michel Lespinasse wrote:
> > > This makes me wonder actually - maybe there is a latent bug that got
> > &
(!rwsem_is_locked(&walk.mm->mmap_lock)) added to
walk_page_range() / walk_page_range_novma() / walk_page_vma() ...
On Tue, Jun 16, 2020 at 12:41 PM Atish Patra wrote:
>
> On Tue, Jun 16, 2020 at 12:19 PM Stafford Horne wrote:
> >
> > On Tue, Jun 16, 2020 at 03:44:49AM -0700, Michel
ffe00107b76b
> >> > [ 10.393096] status: 0120 badaddr:
> >> > cause: 0003
> >> > [ 10.397755] ---[ end trace 861659596ac28841 ]---
> >> >
nts")
> Signed-off-by: Randy Dunlap
> Cc: Mauro Carvalho Chehab
> Cc: Michel Lespinasse
> Cc: Andrew Morton
Acked-by: Michel Lespinasse
Thanks for the fixes !
--
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.
On Thu, Jun 4, 2020 at 1:16 AM youling 257 wrote:
> 2020-06-04 13:57 GMT+08:00, Michel Lespinasse :
> > However I would like more information about your report. Did you apply
> > the series yourself ? If so, what base tree did you apply it onto ? If
> > not, what tree did
? If so, what base tree did you apply it onto ? If
not, what tree did you use that already included the series ?
Thanks,
--
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.
On Thu, May 21, 2020 at 12:42 AM Vlastimil Babka wrote:
> On 5/20/20 7:29 AM, Michel Lespinasse wrote:
> > Convert comments that reference mmap_sem to reference mmap_lock instead.
> >
> > Signed-off-by: Michel Lespinasse
>
> Reviewed-by: Vlastimil Babka
>
Looks good, thanks !
On Wed, May 20, 2020 at 8:22 PM Andrew Morton wrote:
> On Tue, 19 May 2020 22:29:08 -0700 Michel Lespinasse
> wrote:
> > Convert comments that reference mmap_sem to reference mmap_lock instead.
>
> This may not be complete..
>
> From: Andrew Morton
Looks good. I'm not sure if you need a review, but just in case:
On Wed, May 20, 2020 at 8:23 PM Andrew Morton wrote:
> On Tue, 19 May 2020 22:29:01 -0700 Michel Lespinasse
> wrote:
>
> > Convert the last few remaining mmap_sem rwsem calls to use the new
> > mmap lock
On Wed, May 20, 2020 at 12:32 AM John Hubbard wrote:
> On 2020-05-19 19:39, Michel Lespinasse wrote:
> >> That gives you additional options inside internal_get_user_pages_fast(),
> >> such
> >> as, approximately:
> >>
> >> if (!(gup_flags & F
Convert comments that reference old mmap_sem APIs to reference
corresponding new mmap locking APIs instead.
Signed-off-by: Michel Lespinasse
---
Documentation/vm/hmm.rst | 6 +++---
arch/alpha/mm/fault.c | 2 +-
arch/ia64/mm/fault.c | 2 +-
arch/m68k/mm/fault.c
Add new APIs to assert that mmap_sem is held.
Using this instead of rwsem_is_locked and lockdep_assert_held[_write]
makes the assertions more tolerant of future changes to the lock type.
Signed-off-by: Michel Lespinasse
---
arch/x86/events/core.c| 2 +-
fs/userfaultfd.c | 6
This use is converted manually ahead of the next patch in the series,
as it requires including a new header which the automated conversion
would miss.
Signed-off-by: Michel Lespinasse
Reviewed-by: Daniel Jordan
Reviewed-by: Davidlohr Bueso
Reviewed-by: Laurent Dufour
Reviewed-by: Vlastimil
Rename the mmap_sem field to mmap_lock. Any new uses of this lock
should now go through the new mmap locking api. The mmap_lock is
still implemented as a rwsem, though this could change in the future.
Signed-off-by: Michel Lespinasse
Reviewed-by: Vlastimil Babka
---
arch/ia64/mm/fault.c
Define a new initializer for the mmap locking api.
Initially this just evaluates to __RWSEM_INITIALIZER as the API
is defined as wrappers around rwsem.
Signed-off-by: Michel Lespinasse
Reviewed-by: Laurent Dufour
Reviewed-by: Vlastimil Babka
---
arch/x86/kernel/tboot.c| 2 +-
drivers
least-ugly way of addressing this in the short term.
Signed-off-by: Michel Lespinasse
Reviewed-by: Daniel Jordan
Reviewed-by: Vlastimil Babka
---
include/linux/mmap_lock.h | 14 ++
kernel/bpf/stackmap.c | 17 +
2 files changed, 19 insertions(+), 12 deletions(-)