Signed-off-by: Christoph Hellwig
---
arch/m32r/include/asm/io.h | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/m32r/include/asm/io.h b/arch/m32r/include/asm/io.h
index 1b653bb16f9a..a4272d8f0d9c 100644
--- a/arch/m32r/include/asm/io.h
+++ b/arch/m32r/include/asm/io.h
@@ -191,8 +191,6 @@
Signed-off-by: Christoph Hellwig
---
arch/hexagon/include/asm/io.h | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/hexagon/include/asm/io.h b/arch/hexagon/include/asm/io.h
index 66f5e9a61efc..9e8621d94ee9 100644
--- a/arch/hexagon/include/asm/io.h
+++ b/arch/hexagon/include/asm/io.h
@@
CONFIG_ALPHA_JENSEN has failed to compile since commit 6aca0503
("alpha/dma: use common noop dma ops"), so mark it as broken.
Signed-off-by: Christoph Hellwig
---
arch/alpha/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig
index b31b974a03cb..e9
Almost every architecture supports a direct dma mapping implementation,
where no iommu is used and the device dma address is a 1:1 mapping to
the physical address or has a simple linear offset. Currently the
code for this implementation is mostly duplicated over the architectures,
and the duplicated
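In essence the duplicated pattern looks like this (a minimal sketch, assuming
the linear offset comes from the existing dev->dma_pfn_offset field; not the
verbatim code of any one architecture):

static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
{
	/* 1:1 mapping, optionally shifted by a constant linear offset */
	return (dma_addr_t)paddr -
		((dma_addr_t)dev->dma_pfn_offset << PAGE_SHIFT);
}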
Signed-off-by: Christoph Hellwig
---
arch/powerpc/include/asm/dma-mapping.h | 3 ---
1 file changed, 3 deletions(-)
diff --git a/arch/powerpc/include/asm/dma-mapping.h
b/arch/powerpc/include/asm/dma-mapping.h
index 5a6cbe11db6f..592c7f418aa0 100644
--- a/arch/powerpc/include/asm/dma-mapping.h
+
Signed-off-by: Christoph Hellwig
---
arch/m32r/include/asm/dma-mapping.h | 7 ---
1 file changed, 7 deletions(-)
diff --git a/arch/m32r/include/asm/dma-mapping.h
b/arch/m32r/include/asm/dma-mapping.h
index 336ffe60814b..8967fb659691 100644
--- a/arch/m32r/include/asm/dma-mapping.h
+++ b/arc
We always use the stub definitions, so remove the other, unused code.
Signed-off-by: Christoph Hellwig
Acked-by: Vineet Gupta
---
arch/arc/Kconfig | 3 ---
arch/arc/include/asm/dma-mapping.h | 7 ---
arch/arc/mm/dma.c | 14 +++---
3 files changed,
Signed-off-by: Christoph Hellwig
---
arch/riscv/include/asm/dma-mapping.h | 8
1 file changed, 8 deletions(-)
diff --git a/arch/riscv/include/asm/dma-mapping.h
b/arch/riscv/include/asm/dma-mapping.h
index 3eec1000196d..73849e2cc761 100644
--- a/arch/riscv/include/asm/dma-mapping.h
+++
Signed-off-by: Christoph Hellwig
---
arch/s390/include/asm/dma-mapping.h | 7 ---
1 file changed, 7 deletions(-)
diff --git a/arch/s390/include/asm/dma-mapping.h
b/arch/s390/include/asm/dma-mapping.h
index eaf490f9c5bc..2ec7240c1ada 100644
--- a/arch/s390/include/asm/dma-mapping.h
+++ b/arc
This makes sure the generic version can be used with architectures /
devices that have a DMA offset in the direct mapping.
Signed-off-by: Christoph Hellwig
Reviewed-by: Robin Murphy
---
include/linux/dma-mapping.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux
The generic version now takes dma_pfn_offset into account, so there is no
more need for an architecture override.
Signed-off-by: Christoph Hellwig
Reviewed-by: Robin Murphy
---
arch/arm64/include/asm/dma-mapping.h | 9 -
1 file changed, 9 deletions(-)
diff --git a/arch/arm64/include/as
This makes it match the generic version.
Reported-by: Vladimir Murzin
Signed-off-by: Christoph Hellwig
---
arch/mips/include/asm/dma-mapping.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/mips/include/asm/dma-mapping.h
b/arch/mips/include/asm/dma-mapping.h
index 0d9
phys_to_dma, dma_to_phys and dma_capable are helpers published by
architecture code for use of swiotlb and xen-swiotlb only. Drivers are
not supposed to use these directly, but use the DMA API instead.
Move these to a new asm/dma-direct.h helper, included by a
linux/dma-direct.h wrapper that prov
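Roughly what the moved helpers look like, based on the generic versions of
the time (a sketch, not the verbatim patch):

static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
{
	if (!dev->dma_mask)
		return false;

	/* the whole [addr, addr + size) range must fit below the mask */
	return addr + size - 1 <= *dev->dma_mask;
}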
And unlike the other helpers we don't require a as
this helper is a special case for ia64 only, and this keeps it as
simple as possible.
Signed-off-by: Christoph Hellwig
---
arch/arm/include/asm/dma-mapping.h | 2 --
arch/arm64/include/asm/dma-mapping.h | 4
arch/ia64/Kconfig
Signed-off-by: Christoph Hellwig
Acked-by: Richard Kuo
---
arch/hexagon/include/asm/dma-mapping.h | 7 ---
arch/hexagon/kernel/dma.c | 1 +
2 files changed, 1 insertion(+), 7 deletions(-)
diff --git a/arch/hexagon/include/asm/dma-mapping.h
b/arch/hexagon/include/asm/dma-mappin
We want to use the dma_direct_ namespace for a generic implementation,
so rename powerpc to the second best choice: dma_nommu_.
Signed-off-by: Christoph Hellwig
---
arch/powerpc/include/asm/dma-mapping.h | 8 ++--
arch/powerpc/kernel/dma-iommu.c | 2 +-
arch/powerpc/kernel/dma-swi
This frees the dma_direct_* namespace for a generic implementation.
Signed-off-by: Christoph Hellwig
---
arch/microblaze/include/asm/dma-mapping.h | 4 +--
arch/microblaze/kernel/dma.c | 48 +++
2 files changed, 26 insertions(+), 26 deletions(-)
diff --
Always returning 1 is the same behavior as not supplying a method at all.
Signed-off-by: Christoph Hellwig
---
arch/microblaze/kernel/dma.c | 6 --
arch/parisc/kernel/pci-dma.c | 7 ---
2 files changed, 13 deletions(-)
diff --git a/arch/microblaze/kernel/dma.c b/arch/microblaze/kernel/d
Signed-off-by: Christoph Hellwig
---
arch/microblaze/kernel/dma.c | 28
1 file changed, 28 deletions(-)
diff --git a/arch/microblaze/kernel/dma.c b/arch/microblaze/kernel/dma.c
index b45d8f8967af..c91e8cef98dd 100644
--- a/arch/microblaze/kernel/dma.c
+++ b/arch/micr
This is not needed in drivers, so move it to a private header.
Signed-off-by: Christoph Hellwig
---
arch/s390/include/asm/dma-mapping.h | 2 --
arch/s390/include/asm/pci_dma.h | 3 +++
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/s390/include/asm/dma-mapping.h
b/arch/
These days all devices should have a DMA coherent mask, and most dma_ops
implementations rely on that fact. But just to be sure, add an assert to
ring the warning bell if that is not the case.
Signed-off-by: Christoph Hellwig
Reviewed-by: Vladimir Murzin
---
include/linux/dma-mapping.h | 1 +
1
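The assert amounts to something like this in the dma_alloc_attrs() path
(sketch; the exact placement may differ in the patch):

static inline void *dma_alloc_attrs(struct device *dev, size_t size,
		dma_addr_t *dma_handle, gfp_t flag, unsigned long attrs)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	/* ring the warning bell if a device still lacks a coherent mask */
	WARN_ON_ONCE(dev && !dev->coherent_dma_mask);

	/* ... allocate through ops->alloc as before ... */
}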
Lift the code from x86 so that we behave consistently. In the future we
should probably warn if any of these is set.
Signed-off-by: Christoph Hellwig
Acked-by: Jesper Nilsson
Acked-by: Geert Uytterhoeven [m68k]
---
arch/cris/arch-v32/drivers/pci/dma.c | 3 ---
arch/h8300/kernel/dma.c
To implement the x86 forbid_dac and iommu_sac_force we want an arch hook
so that it can apply the global options across all dma_map_ops
implementations.
Signed-off-by: Christoph Hellwig
---
arch/x86/include/asm/dma-mapping.h | 3 +++
arch/x86/kernel/pci-dma.c | 19 ---
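A sketch of how such a hook can slot into dma_supported() (the hook name
follows the description above; the default is a no-op so other architectures
are unaffected):

#ifndef arch_dma_supported
#define arch_dma_supported(dev, mask)	(1)	/* default: no arch veto */
#endif

static inline int dma_supported(struct device *dev, u64 mask)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	/* the architecture may veto a mask before the ops are consulted */
	if (!arch_dma_supported(dev, mask))
		return 0;
	if (!ops->dma_supported)
		return 1;
	return ops->dma_supported(dev, mask);
}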
For architectures that just use the generic dma_noop_ops we can provide
a generic version of dma-mapping.h.
Signed-off-by: Christoph Hellwig
---
MAINTAINERS | 1 +
arch/m32r/include/asm/Kbuild | 1 +
arch/m32r/include/asm/dma-mapping.h | 17 -
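The generic header then boils down to (sketch):

/* asm-generic/dma-mapping.h, in essence */
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
{
	/* architectures opting in via their Kbuild get the noop ops */
	return &dma_noop_ops;
}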
The trivial direct mapping implementation already does a virtual to
physical translation which isn't strictly a noop, and will soon learn
to do non-direct but linear physical to dma translations through the
device offset and a few small tricks. Rename it to a better-fitting
name.
Signed-off-by: C
This means whatever linear remapping scheme the architecture
provides is used in the generic dma_direct ops.
Signed-off-by: Christoph Hellwig
Reviewed-by: Vladimir Murzin
---
lib/dma-direct.c | 18 +++---
1 file changed, 7 insertions(+), 11 deletions(-)
diff --git a/li
Roughly based on the x86 pci-nommu implementation.
Signed-off-by: Christoph Hellwig
---
lib/dma-direct.c | 31 ++-
1 file changed, 30 insertions(+), 1 deletion(-)
diff --git a/lib/dma-direct.c b/lib/dma-direct.c
index 12ea9653781b..32fd4d9e4c47 100644
--- a/lib/dma-d
Try the CMA allocator for coherent allocations if supported.
Roughly modelled after the x86 code.
Signed-off-by: Christoph Hellwig
---
lib/dma-direct.c | 24 ++--
1 file changed, 18 insertions(+), 6 deletions(-)
diff --git a/lib/dma-direct.c b/lib/dma-direct.c
index 32fd4d9
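The allocation side then tries CMA before the page allocator, roughly
(a sketch; dma_alloc_from_contiguous() is the existing CMA entry point):

	count = PAGE_ALIGN(size) >> PAGE_SHIFT;
	if (gfpflags_allow_blocking(gfp))
		page = dma_alloc_from_contiguous(dev, count, page_order, gfp);
	if (!page)
		page = alloc_pages(gfp, page_order);

and the free side hands pages back via dma_release_from_contiguous(),
falling back to __free_pages() when they did not come from CMA.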
To preserve the x86 behavior.
Signed-off-by: Christoph Hellwig
Reviewed-by: Robin Murphy
---
lib/dma-direct.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/dma-direct.c b/lib/dma-direct.c
index a9ae98be7af3..f04a424f91fa 100644
--- a/lib/dma-direct.c
+++ b/lib/dma-dire
This allows dipping into zones for lower memory if they are available.
If one of the zones is not available the corresponding GFP_* flag
will evaluate to 0 so they won't change anything. We provide an
arch tunable for those architectures that do not use GFP_DMA for
the lowest 24-bits, given that th
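The zone selection then amounts to something like this (sketch; the tunable
name is an assumption based on the description):

#ifndef ARCH_ZONE_DMA_BITS
#define ARCH_ZONE_DMA_BITS 24	/* arch tunable for the GFP_DMA zone limit */
#endif

	if (dev->coherent_dma_mask <= DMA_BIT_MASK(ARCH_ZONE_DMA_BITS))
		gfp |= GFP_DMA;
	if (dev->coherent_dma_mask <= DMA_BIT_MASK(32) && !(gfp & GFP_DMA))
		gfp |= GFP_DMA32;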
If an attempt to allocate memory succeeds but the result isn't inside
the supported DMA mask, retry the allocation with GFP_DMA set as a
last resort.
Based on the x86 code, but an off-by-one error in what is now
dma_coherent_ok has been fixed relative to it.
Signed-off-by: Christoph Hellwig
---
lib/dma
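A sketch of the check and the retry, per the description above (not the
verbatim patch):

static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
{
	/* note the "- 1": the last byte of the range must be addressable */
	return phys_to_dma(dev, phys) + size - 1 <= dev->coherent_dma_mask;
}

	/* allocation path, in essence: */
again:
	page = alloc_pages(gfp, page_order);
	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
		__free_pages(page, page_order);
		page = NULL;
		if (!(gfp & GFP_DMA)) {
			gfp |= GFP_DMA;	/* last resort */
			goto again;
		}
	}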
So that they don't need to indirect through the operation vector.
Signed-off-by: Christoph Hellwig
Reviewed-by: Vladimir Murzin
---
arch/arm/mm/dma-mapping-nommu.c | 9 +++--
include/linux/dma-direct.h | 5 +
lib/dma-direct.c | 6 +++---
3 files changed, 11 insertion
Signed-off-by: Christoph Hellwig
Reviewed-by: Robin Murphy
---
include/linux/dma-direct.h | 1 +
lib/dma-direct.c | 19 +++
2 files changed, 20 insertions(+)
diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index 4788bf0bf683..bcdb1a3e4b1f 100644
-
cris currently has an incomplete direct mapping dma_map_ops implementation
if PCI support is enabled. Replace it with the fully featured generic
dma-direct implementation.
Signed-off-by: Christoph Hellwig
Acked-by: Jesper Nilsson
---
arch/cris/Kconfig | 4 ++
arch/cris/ar
Replace the bare-bones h8300 direct dma mapping implementation with
the fully featured generic dma-direct one.
Signed-off-by: Christoph Hellwig
---
arch/h8300/Kconfig | 1 +
arch/h8300/include/asm/Kbuild | 1 +
arch/h8300/include/asm/dma-mapping.h | 12 ---
arch/h8
Christophe Lombard writes:
> The POWER9 core supports a new feature: ASB_Notify which requires the
> support of the Special Purpose Register: TIDR.
>
> The ASB_Notify command, generated by the AFU, will attempt to
> wake-up the host thread identified by the particular LPID:PID:TID.
>
> This patch
On 12-01-18, 12:43, Shilpasri G Bhat wrote:
> Some OpenPOWER boxes can have same pstate values for nominal and
> pmin pstates. In these boxes the current code will not initialize
> 'powernv_pstate_info.min' variable and result in erroneous CPU
> frequency reporting. This patch fixes this problem.
>
On 11/01/2018 at 16:01, Philippe Bergheaud wrote:
Configure the P9 XSL_DSNCTL register with PHB indications found
in the device tree, or else use legacy hard-coded values.
Signed-off-by: Philippe Bergheaud
---
Changelog:
v2: New patch. Use the new device tree property "ibm,phb-indications".
On 11/01/2018 at 16:01, Philippe Bergheaud wrote:
P9 supports PCI tunneled operations (atomics and as_notify). This
patch adds support for tunneled operations on powernv, with a new
API, to be called by device drivers:
pnv_pci_get_tunnel_ind()
Tell driver the 16-bit ASN indication used by
The CPU6 ERRATA affects only MPC860 revisions prior to C.0. Manufacturing
of those revisions was stopped in 1999-2000.
Therefore, it has been almost 20 years since this ERRATA was
fixed in the silicon.
This patch removes the workaround for that ERRATA.
Signed-off-by: Christophe Leroy
---
arch/
EXCEPTION_PROLOG_0 and EXCEPTION_EPILOG_0 were added some
time ago to regroup the two mtspr/mfspr to SCRATCH0 and
SCRATCH1 and the mfcr/mtcr, in order to ease entry and exit of
functions not using the full EXCEPTION_PROLOG.
Since then, the mfcr/mtcr has been taken out, hence just leaving
th
In TLB miss handlers, updating the perf counter is only useful
when performing a perf analysis. As it has a noticeable overhead,
let's only do it when needed.
In order to do so, the exit of the miss handlers will be patched
when starting/stopping 'perf': the first register restore
instruction of e
_PAGE_WRITETHRU is only used in:
* AMIGA_Z2RAM block driver which is never activated on PowerPC
* Video/FB driver which is for PPC_PMAC
Therefore, no need to spend time in 8xx TLB miss handlers for
handling it.
And by removing it, we free up bit 20 which then avoids having
to clear it on each TLB
commit ac29c64089b74 ("powerpc/mm: Replace _PAGE_USER with
_PAGE_PRIVILEGED") introduced _PAGE_PRIVILEGED for BOOK3S/64.
This patch generalises _PAGE_PRIVILEGED for all CPUs, allowing us
to have either _PAGE_PRIVILEGED or _PAGE_USER or both.
PPC_8xx has a _PAGE_SHARED flag which is set for and only f
Today, PAGE_NONE is defined as a page not having _PAGE_USER.
In some circumstances, when the CPU supports it, it might be
better to be able to flag a page with NO ACCESS.
In a following patch, the 8xx will switch user access being flagged
in the PMD, therefore it will not be possible anymore to us
As the Linux kernel separates KERNEL and USER address spaces, there is
no need to flag USER access at page level.
Today, the 8xx TLB handlers already handle user access in the L1 entry
through Access Protection Groups, so it is natural to move the user
access handling to PMD level once _PA
When CONFIG_SWAP is set, the TLB miss handlers also have to take
the _PAGE_ACCESSED flag into account. At the moment this is done by
anding _PAGE_ACCESSED into _PAGE_PRESENT using 3 instructions.
This patch uses APG for handling _PAGE_ACCESSED, allowing us to
just copy the _PAGE_ACCESSED bit into the APG field, he
> -----Original Message-----
> From: Linuxppc-dev [mailto:linuxppc-dev-
> bounces+madalin.bucur=nxp@lists.ozlabs.org] On Behalf Of Jamie Krueger
> Sent: Wednesday, January 10, 2018 5:57 PM
> To: linuxppc-dev@lists.ozlabs.org
> Subject: DPAA Ethernet problems with mainstream Linux kernels
>
> H
On Fri, Jan 12, 2018 at 3:11 PM, kbuild test robot
wrote:
> tree:
> https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git
> master
> head: b40fa82cd6138350f723aa47b37e3e3e80906b40
> commit: 148b974deea927f5dbb6c468af2707b488bfa2de [130/134] crypto:
> aes-generic - build
On 01/12/2018 08:22 AM, Madalin-cristian Bucur wrote:
-----Original Message-----
From: Linuxppc-dev [mailto:linuxppc-dev-
bounces+madalin.bucur=nxp@lists.ozlabs.org] On Behalf Of Jamie Krueger
Sent: Wednesday, January 10, 2018 5:57 PM
To: linuxppc-dev@lists.ozlabs.org
Subject: DPAA Ethernet p
Hi!
On Fri, Jan 12, 2018 at 03:55:47PM +0100, Arnd Bergmann wrote:
> >crypto/aes_generic.o: In function `crypto_aes_set_key':
> >>> aes_generic.c:(.text+0x4e0): undefined reference to `_restgpr_31_x'
>
> adding linuxpcc-dev to Cc, maybe someone knows a way out of this.
> It appears related to
On 01/08/2018 11:19 AM, Michael Bringmann wrote:
> Add code to parse the new property 'ibm,thread-groups" when it is
> present. The content of this property explicitly defines the number
> of threads per core as well as the PowerPC 'threads_core_mask'.
> The design provides a common device-tree fo
This is a port on kernel 4.15 of the work done by Peter Zijlstra to handle
page fault without holding the mm semaphore [1].
The idea is to try to handle user space page faults without holding the
mmap_sem. This should allow better concurrency for massively threaded
processes since the page fault han
Define CONFIG_SPF for BOOK3S_64 and SMP. This enables the Speculative Page
Fault handler.
Support is currently only provided for BOOK3S_64 because:
- it requires CONFIG_PPC_STD_MMU because of checks done in
set_access_flags_filter()
- it requires BOOK3S because we can't support book3e_hugetlb_preload()
Introduce CONFIG_SPF which turns on the Speculative Page Fault handler when
building for 64-bit with SMP.
Signed-off-by: Laurent Dufour
---
arch/x86/Kconfig | 4
1 file changed, 4 insertions(+)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index a317d5594b6a..d74353b85aaf 100644
--- a/a
From: Peter Zijlstra
One of the side effects of speculating on faults (without holding
mmap_sem) is that we can race with free_pgtables() and therefore we
cannot assume the page-tables will stick around.
Remove the reliance on the pte pointer.
Signed-off-by: Peter Zijlstra (Intel)
[Remove onl
From: Peter Zijlstra
When speculating faults (without holding mmap_sem) we need to validate
that the vma against which we loaded pages is still valid when we're
ready to install the new PTE.
Therefore, replace the pte_offset_map_lock() calls that (re)take the
PTL with pte_map_lock() which can fa
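A sketch of the failing variant (vma_has_changed() stands in for the series'
sequence-count re-check; treat the helper names as assumptions):

static bool pte_map_lock(struct vm_fault *vmf)
{
	/* the VMA may have been modified while we are not holding mmap_sem */
	if (vma_has_changed(vmf))
		return false;

	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
				       vmf->address, &vmf->ptl);
	return true;
}

Callers bail out with VM_FAULT_RETRY when it returns false, falling back to
the classic path.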
When handling a page fault without holding the mmap_sem, the fetch of the
pte lock pointer and the locking have to be done while ensuring
that the VMA is not modified behind our back.
So move the fetch and locking operations into a dedicated function.
Signed-off-by: Laurent Dufour
---
mm/memory.c |
From: Peter Zijlstra
Wrap the VMA modifications (vma_adjust/unmap_page_range) with sequence
counts such that we can easily test if a VMA has changed.
The unmap_page_range() one allows us to make assumptions about
page-tables; when we find the seqcount hasn't changed we can assume
page-tables are
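The shape of the protection, as a sketch (the vm_sequence field name follows
the series):

	/* write side, around a VMA modification */
	write_seqcount_begin(&vma->vm_sequence);
	vma_adjust(vma, start, end, pgoff, insert);
	write_seqcount_end(&vma->vm_sequence);

	/* read side, in the speculative path */
	seq = raw_read_seqcount(&vma->vm_sequence);
	/* ... walk and populate the page tables ... */
	if (read_seqcount_retry(&vma->vm_sequence, seq))
		goto out_retry;	/* the VMA changed underneath us */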
The VMA sequence count has been introduced to allow fast detection of
VMA modification when running a page fault handler without holding
the mmap_sem.
This patch provides protection against the VMA modifications done in:
- madvise()
- mpol_rebind_policy()
- vma_replace_poli
If a thread is remapping an area while another one is faulting on the
destination area, the SPF handler may fetch the vma from the RB tree before
the pte has been moved by the other thread. This means that the moved ptes
will overwrite those created by the page fault handler, leading to
leaked pages.
The speculative page fault handler must be protected against anon_vma
changes. This is because page_add_new_anon_rmap() is called during the
speculative path.
In addition, don't try a speculative page fault if the VMA doesn't have
an anon_vma structure allocated, because its allocation should be
protec
When handling a speculative page fault, the vma->vm_flags and
vma->vm_page_prot fields are read once the page table lock is released. So
there is no longer any guarantee that these fields will not change behind our back.
They will be saved in the vm_fault structure before the VMA is checked for
changes.
This
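A sketch of the caching (the vm_fault field names are assumptions for
illustration):

	/* snapshot taken while the VMA is known to be stable */
	vmf->vma_flags = READ_ONCE(vmf->vma->vm_flags);
	vmf->vma_page_prot = READ_ONCE(vmf->vma->vm_page_prot);

The fault handlers then consult vmf->vma_flags instead of dereferencing the
VMA again.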
migrate_misplaced_page() is only called during the page fault handling so
it's better to pass the pointer to the struct vm_fault instead of the vma.
This way, the saved vma->vm_flags can be used during the speculative
page fault path.
Signed-off-by: Laurent Dufour
---
include/linux/migrate.h |
The speculative page fault handler which is run without holding the
mmap_sem is calling lru_cache_add_active_or_unevictable() but the vm_flags
is not guaranteed to remain constant.
Introduce __lru_cache_add_active_or_unevictable(), which takes the vma flags
value as a parameter instead of the vma pointer
The current maybe_mkwrite() is getting passed the pointer to the vma
structure to fetch the vm_flags field.
When dealing with the speculative page fault handler, it will be better to
rely on the cached vm_flags value stored in the vm_fault structure.
This patch introduces a __maybe_mkwrite() servi
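Roughly, the split looks like this (sketch per the description):

static inline pte_t __maybe_mkwrite(pte_t pte, unsigned long vma_flags)
{
	if (likely(vma_flags & VM_WRITE))
		pte = pte_mkwrite(pte);
	return pte;
}

static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
{
	/* the existing helper becomes a wrapper */
	return __maybe_mkwrite(pte, vma->vm_flags);
}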
When dealing with the speculative fault path we should use the VMA field
values cached in the vm_fault structure.
Currently vm_normal_page() is using the pointer to the VMA to fetch the
vm_flags value. This patch provides a new __vm_normal_page() which
receives the vm_flags value
When dealing with speculative page fault handler, we may race with VMA
being split or merged. In this case the vma->vm_start and vma->vm_end
fields may not match the address at which the page fault is occurring.
This can only happen when the VMA is split, but in that case the
anon_vma pointer of the new VM
This change is inspired by Peter's proposal patch [1], which was
protecting the VMA using SRCU. Unfortunately, SRCU is not scaling well in
that particular case, and it is introducing major performance degradation
due to excessive scheduling operations.
To allow access to the mm_rb tree without
From: Peter Zijlstra
Provide infrastructure to do a speculative fault (not holding
mmap_sem).
Not holding mmap_sem means we can race against VMA
change/removal and page-table destruction. We use the SRCU VMA freeing
to keep the VMA around. We use the VMA seqcount to detect change
(includi
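A sketch of the resulting lookup (find_vma_srcu() and vma_srcu are names
assumed from the series' description):

	idx = srcu_read_lock(&vma_srcu);	/* keeps the VMA allocation alive */
	vma = find_vma_srcu(mm, address);	/* lookup without mmap_sem */
	if (!vma)
		goto out_unlock;

	seq = raw_read_seqcount(&vma->vm_sequence);
	if (seq & 1)		/* a writer is active: fall back */
		goto out_unlock;
	/* ... handle the fault, re-checking the seqcount before committing ... */
out_unlock:
	srcu_read_unlock(&vma_srcu, idx);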
There is a deadlock when a CPU is doing a speculative page fault and
another one is calling do_unmap().
The deadlock occurs because the speculative path tries to spinlock the
pte while interrupts are disabled. When the other CPU in the
unmap path has locked the pte and is then waiting for all the
This patch adds a set of new trace events to collect the speculative page fault
event failures.
Signed-off-by: Laurent Dufour
---
include/trace/events/pagefault.h | 87
mm/memory.c | 62 ++--
2 files changed, 136 in
Add a new software event to count successful speculative page faults.
Signed-off-by: Laurent Dufour
---
include/uapi/linux/perf_event.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index 769533696483..06c7fdb14f89 100644
---
Add support for the new speculative faults event.
Signed-off-by: Laurent Dufour
---
tools/include/uapi/linux/perf_event.h | 1 +
tools/perf/util/evsel.c | 1 +
tools/perf/util/parse-events.c| 4
tools/perf/util/parse-events.l| 1 +
tools/perf/util/python.c
When the speculative page fault handler is returning VM_RETRY, there is a
chance that VMA fetched without grabbing the mmap_sem can be reused by the
legacy page fault handler. By reusing it, we avoid calling find_vma()
again. To achieve that, we must ensure that the VMA structure will not be
freed
From: Peter Zijlstra
Try a speculative fault before acquiring mmap_sem, if it returns with
VM_FAULT_RETRY continue with the mmap_sem acquisition and do the
traditional fault.
Signed-off-by: Peter Zijlstra (Intel)
[Clearing of FAULT_FLAG_ALLOW_RETRY is now done in
handle_speculative_fault()]
[
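The call-site change is, in essence (a sketch of an arch fault handler):

	fault = handle_speculative_fault(mm, address, flags);
	if (fault != VM_FAULT_RETRY)
		return fault;		/* handled without taking mmap_sem */

	/* traditional slow path */
	down_read(&mm->mmap_sem);
	vma = find_vma(mm, address);
	/* ... */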
This patch enables the speculative page fault handler on the PowerPC
architecture.
It will try a speculative page fault without holding the mmap_sem;
if that returns with VM_FAULT_RETRY, the mmap_sem is acquired and the
traditional page fault processing is done.
The speculative path is only tried for mult
On Wed, Jan 10, 2018 at 10:42 PM, Nicolin Chen wrote:
>
> ==Change log==
> v2
> * Reworked the series by taking suggestions from Maciej
> + Added PATCH-01 to keep all ssi->i2s_net updated
> + Replaced bool tx with bool dir in PATCH-03 and PATCH-06
> + Moved all initial register configuratio
On Fri, Jan 12, 2018 at 06:26:02PM +0100, Laurent Dufour wrote:
> There is a deadlock when a CPU is doing a speculative page fault and
> another one is calling do_unmap().
>
> The deadlock occurred because the speculative path try to spinlock the
> pte while the interrupt are disabled. When the ot
On Fri, Jan 12, 2018 at 06:26:00PM +0100, Laurent Dufour wrote:
> -static void __vma_rb_erase(struct vm_area_struct *vma, struct rb_root *root)
> +static void __vma_rb_erase(struct vm_area_struct *vma, struct mm_struct *mm)
> {
> + struct rb_root *root = &mm->mm_rb;
> /*
>* Note
On Fri, 12 Jan 2018, Laurent Dufour wrote:
> Introduce CONFIG_SPF which turns on the Speculative Page Fault handler when
> building for 64bits with SMP.
>
> Signed-off-by: Laurent Dufour
> ---
> arch/x86/Kconfig | 4
> 1 file changed, 4 insertions(+)
>
> diff --git a/arch/x86/Kconfig b/ar
On Fri, Jan 12, 2018 at 06:26:06PM +0100, Laurent Dufour wrote:
> @@ -1354,7 +1354,10 @@ extern int handle_mm_fault(struct vm_area_struct *vma,
> unsigned long address,
> unsigned int flags);
> #ifdef CONFIG_SPF
> extern int handle_speculative_fault(struct mm_struct *mm,
> +
On 09/05/2017 at 16:16, Christophe Leroy wrote:
Commit fd893fe56a130 ("powerpc/mm: Fix missing page attributes in
page table dump") added support for the _PAGE_RO attribute.
This patch makes it simpler.
Superseded by https://patchwork.ozlabs.org/patch/859896/
Christophe
Signed-off-by: Ch
On Fri, Jan 12, 2018 at 5:39 PM, Segher Boessenkool
wrote:
>> or why the aes_generic implementation needs this on
>> powerpc when built with 'gcc -Os'. FWIW, the -Os change was needed
>> to work around a possible kernel stack overflow that can happen with
>> gcc-7.2, see https://patchwork.kernel.
On Fri, Jan 12, 2018 at 08:43:21PM +0100, Arnd Bergmann wrote:
> On Fri, Jan 12, 2018 at 5:39 PM, Segher Boessenkool
> wrote:
>
> >> or why the aes_generic implementation needs this on
> >> powerpc when built with 'gcc -Os'. FWIW, the -Os change was needed
> >> to work around a possible kernel st
On Wed, Jan 10, 2018 at 09:00:13AM +0100, Christoph Hellwig wrote:
> These days all devices should have a DMA coherent mask, and most dma_ops
> implementations rely on that fact. But just to be sure add an assert to
> ring the warning bell if that is not the case.
>
> Signed-off-by: Christoph Hel
On Wed, Jan 10, 2018 at 09:00:15AM +0100, Christoph Hellwig wrote:
> To implement the x86 forbid_dac and iommu_sac_force we want an arch hook
> so that it can apply the global options across all dma_map_ops
> implementations.
>
> Signed-off-by: Christoph Hellwig
Reviewed-by: Konrad Rzeszutek Wil
On Wed, Jan 10, 2018 at 09:09:13AM +0100, Christoph Hellwig wrote:
> We'll need that name for a generic implementation soon.
>
> Signed-off-by: Christoph Hellwig
Reviewed-by: Konrad Rzeszutek Wilk
> ---
> arch/ia64/hp/common/hwsw_iommu.c | 4 ++--
> arch/ia64/hp/common/sba_iommu.c | 6 +++---
On Wed, Jan 10, 2018 at 09:09:14AM +0100, Christoph Hellwig wrote:
> We'll need that name for a generic implementation soon.
>
Reviewed-by: Konrad Rzeszutek Wilk
> Signed-off-by: Christoph Hellwig
> ---
> arch/powerpc/include/asm/swiotlb.h | 2 +-
> arch/powerpc/kernel/dma-swiotlb.c | 4 ++--
>
On Wed, Jan 10, 2018 at 09:09:15AM +0100, Christoph Hellwig wrote:
> We'll need that name for a generic implementation soon.
>
> Signed-off-by: Christoph Hellwig
Reviewed-by: Konrad Rzeszutek Wilk
> ---
> arch/x86/kernel/pci-swiotlb.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
On Wed, Jan 10, 2018 at 09:09:16AM +0100, Christoph Hellwig wrote:
OK?
Reviewed-by: Konrad Rzeszutek Wilk
> Signed-off-by: Christoph Hellwig
> ---
> arch/powerpc/kernel/dma-swiotlb.c | 2 +-
> arch/x86/kernel/pci-swiotlb.c | 2 +-
> include/linux/swiotlb.h | 4 ++--
> lib/swiotlb
On Fri, Jan 12, 2018 at 9:41 PM, Segher Boessenkool
wrote:
> On Fri, Jan 12, 2018 at 08:43:21PM +0100, Arnd Bergmann wrote:
>> On Fri, Jan 12, 2018 at 5:39 PM, Segher Boessenkool
>> We could theoretically work around it by turning that into
>> "#if defined(CONFIG_CC_OPTIMIZE_FOR_SIZE) ||
>> defin
On Fri, Jan 12, 2018 at 10:29:01PM +0100, Arnd Bergmann wrote:
> On Fri, Jan 12, 2018 at 9:41 PM, Segher Boessenkool
> wrote:
> > On Fri, Jan 12, 2018 at 08:43:21PM +0100, Arnd Bergmann wrote:
> >> On Fri, Jan 12, 2018 at 5:39 PM, Segher Boessenkool
>
> >> We could theoretically work around it by
On Fri, Jan 12, 2018 at 10:41 PM, Segher Boessenkool
wrote:
> On Fri, Jan 12, 2018 at 10:29:01PM +0100, Arnd Bergmann wrote:
>> On Fri, Jan 12, 2018 at 9:41 PM, Segher Boessenkool
>> wrote:
>> > On Fri, Jan 12, 2018 at 08:43:21PM +0100, Arnd Bergmann wrote:
>> >> On Fri, Jan 12, 2018 at 5:39 PM,
On Fri, Jan 12, 2018 at 10:45:31PM +0100, Arnd Bergmann wrote:
> > I guess you could enable the _x routines whenever you use ubsan? Ubsan
> > will cause much bigger code growth than the handful of insns in those
> > routines?
>
> Right, that could work, too. My patch that Herbert merged intention
Nathan Fontenot writes:
> On 01/08/2018 11:19 AM, Michael Bringmann wrote:
>> Add code to parse the new property 'ibm,thread-groups" when it is
>> present. The content of this property explicitly defines the number
>> of threads per core as well as the PowerPC 'threads_core_mask'.
>> The design
On Fri, Jan 12, 2018 at 11:02:51AM -0800, Matthew Wilcox wrote:
> On Fri, Jan 12, 2018 at 06:26:06PM +0100, Laurent Dufour wrote:
> > @@ -1354,7 +1354,10 @@ extern int handle_mm_fault(struct vm_area_struct
> > *vma, unsigned long address,
> > unsigned int flags);
> > #ifdef CONFIG_SPF