lly used, while
> leaving the architecture-specific ones as the user-visible
> place for configuring it, to avoid breaking user configs.
>
> Signed-off-by: Arnd Bergmann
For arm64:
Acked-by: Catalin Marinas
> Reported-by: Linux Kernel Functional Testing
> Fixes: a0d2fcd62ac2 ("vdso/ARM: Make union vdso_data_store available for all architectures")
> Link: https://lore.kernel.org/lkml/ca+g9fytrxxm_ko9fnpz3xarxhv7ud_yqp-teupqrnrhu+_0...@mail.gmail.com/
> Signed-off-by: Arnd Bergmann
Acked-by: Catalin Marinas
On Thu, 26 Jan 2023 22:39:30 -0800, Randy Dunlap wrote:
> Correct many spelling errors in Documentation/ as reported by codespell.
>
> Maintainers of specific kernel subsystems are only Cc-ed on their
> respective patches, not the entire series. [if all goes well]
>
> These patches are based on l
On Tue, Feb 14, 2023 at 08:49:03AM +0100, Alexandre Ghiti wrote:
> From: Palmer Dabbelt
>
> As far as I can tell this is not used by userspace and thus should not
> be part of the user-visible API.
>
> Signed-off-by: Palmer Dabbelt
Acked-by: Catalin Marinas
flush_tlb_fix_spurious_fault(vma, address, ptep) do { } while (0)
For arm64:
Acked-by: Catalin Marinas
> diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
> index 2c70b4d1263d..c1f6b46ec555 100644
> --- a/arch/s390/include/asm/pgtable.h
> +++ b/arch/s3
On Thu, Mar 23, 2023 at 11:21:44AM +0200, Mike Rapoport wrote:
> From: "Mike Rapoport (IBM)"
>
> It is not a good idea to change fundamental parameters of core memory
> management. Having predefined ranges suggests that the values within
> those ranges are sensible, but one has to *really* unders
es.
>
> Signed-off-by: Mike Rapoport (IBM)
Acked-by: Catalin Marinas
On Tue, Apr 02, 2024 at 03:51:36PM +0800, Kefeng Wang wrote:
> The __do_page_fault() only checks vma->flags and calls handle_mm_fault(),
> and is only called by do_page_fault(), so let's squash it into do_page_fault()
> to clean up the code.
>
> Signed-off-by: Kefeng Wang
Reviewed-by: Catalin Marinas
TRY | VM_FAULT_COMPLETED)))
I think this makes sense. A concurrent modification of vma->vm_flags
(e.g. mprotect()) would do a vma_start_write(), so there is no need to
recheck with the mmap lock held.
Reviewed-by: Catalin Marinas
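A minimal sketch of the per-VMA-lock fault pattern under discussion, assuming the generic lock_vma_under_rcu()/vma_end_read()/handle_mm_fault() helpers; the function name and error handling below are illustrative, not the actual arm64 patch:

static vm_fault_t vma_lock_fault_sketch(struct mm_struct *mm,
					unsigned long addr,
					unsigned long vm_flags,
					unsigned int mm_flags,
					struct pt_regs *regs)
{
	struct vm_area_struct *vma;
	vm_fault_t fault;

	vma = lock_vma_under_rcu(mm, addr);
	if (!vma)
		return VM_FAULT_RETRY;	/* fall back to the mmap-lock path */

	/*
	 * vm_flags is checked once here; a racing mprotect() must call
	 * vma_start_write() under the mmap write lock before it changes
	 * vma->vm_flags, so no re-check is needed afterwards.
	 */
	if (!(vma->vm_flags & vm_flags)) {
		vma_end_read(vma);
		return VM_FAULT_SIGSEGV;
	}

	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
		vma_end_read(vma);
	return fault;
}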
yan
> Signed-off-by: Kefeng Wang
As I reviewed v1 and the changes are minimal:
Reviewed-by: Catalin Marinas
lat_sig -P 1 prot lat_sig' from lmbench testcase.
>
> Since the page fault is handled under the per-VMA lock, count it as a vma lock
> event with VMA_LOCK_SUCCESS.
>
> Reviewed-by: Suren Baghdasaryan
> Signed-off-by: Kefeng Wang
Reviewed-by: Catalin Marinas
On Tue, Sep 01, 2020 at 03:22:39AM +0900, Masahiro Yamada wrote:
> The vdso linker script is preprocessed on demand.
> Adding it to 'targets' is enough to include the .cmd file.
>
> Signed-off-by: Masahiro Yamada
For arm64:
Acked-by: Catalin Marinas
On Wed, Oct 07, 2020 at 09:39:31AM +0200, Jann Horn wrote:
> arch_validate_prot() is a hook that can validate whether a given set of
> protection flags is valid in an mprotect() operation. It is given the set
> of protection flags and the address being modified.
>
> However, the address being modi
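As a rough, hedged illustration of what an architecture override of this hook can look like (not any particular architecture's actual code), it is essentially a predicate over the PROT_* bits, with the target address available for range-specific policy:

/* Sketch only: accept the generic PROT_* bits and reject everything else. */
static inline bool arch_validate_prot(unsigned long prot, unsigned long addr)
{
	/* addr is unused here, but an arch could apply per-range rules */
	return (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM)) == 0;
}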
On Thu, Oct 08, 2020 at 09:34:26PM +1100, Michael Ellerman wrote:
> Jann Horn writes:
> > So while the mprotect() case
> > checks the flags and refuses unknown values, the mmap() code just lets
> > the architecture figure out which bits are actually valid to set (via
> > arch_calc_vm_prot_bits())
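For contrast, a sketch of the mmap()-side hook mentioned above; PROT_FOO and VM_FOO are placeholders for whatever architecture-specific bits are being translated:

/* Sketch: translate an arch-specific PROT_* bit into a VM_* flag for mmap(). */
static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
						   unsigned long pkey)
{
	return (prot & PROT_FOO) ? VM_FOO : 0;	/* hypothetical names */
}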
Hi Khalid,
On Wed, Oct 07, 2020 at 02:14:09PM -0600, Khalid Aziz wrote:
> On 10/7/20 1:39 AM, Jann Horn wrote:
> > arch_validate_prot() is a hook that can validate whether a given set of
> > protection flags is valid in an mprotect() operation. It is given the set
> > of protection flags and the a
On Mon, Oct 12, 2020 at 11:03:33AM -0600, Khalid Aziz wrote:
> On 10/10/20 5:09 AM, Catalin Marinas wrote:
> > On Wed, Oct 07, 2020 at 02:14:09PM -0600, Khalid Aziz wrote:
> >> On 10/7/20 1:39 AM, Jann Horn wrote:
> >>> arch_validate_prot() is a hook that can v
On Mon, Oct 12, 2020 at 01:14:50PM -0600, Khalid Aziz wrote:
> On 10/12/20 11:22 AM, Catalin Marinas wrote:
> > On Mon, Oct 12, 2020 at 11:03:33AM -0600, Khalid Aziz wrote:
> >> On 10/10/20 5:09 AM, Catalin Marinas wrote:
> >>> On Wed, Oct 07, 2020 at 02:14:09PM -0600
On Wed, Oct 14, 2020 at 03:21:16PM -0600, Khalid Aziz wrote:
> On 10/13/20 3:16 AM, Catalin Marinas wrote:
> > On Mon, Oct 12, 2020 at 01:14:50PM -0600, Khalid Aziz wrote:
> >> On 10/12/20 11:22 AM, Catalin Marinas wrote:
> >>> On Mon, Oct 12, 2020 at 11:03:33AM -0600
> ---
> arch/arm/include/asm/stackprotector.h | 9 +
> arch/arm64/include/asm/stackprotector.h | 9 +----
For arm64:
Acked-by: Catalin Marinas
On Thu, Nov 17, 2022 at 04:26:48PM +0800, Yicong Yang wrote:
> It is tested on 4, 8 and 128 CPU platforms and is shown to be beneficial on
> large systems but may not bring an improvement on small systems such as
> a 4 CPU platform. So make ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH depend
> on CONFIG_EXPERT for this
On Sun, Jan 08, 2023 at 06:48:41PM +0800, Barry Song wrote:
> On Fri, Jan 6, 2023 at 2:15 AM Catalin Marinas
> wrote:
> > On Thu, Nov 17, 2022 at 04:26:48PM +0800, Yicong Yang wrote:
> > > It is tested on 4, 8 and 128 CPU platforms and is shown to be beneficial on
> > >
IG_MEMTEST.
> Conversely CONFIG_MEMTEST cannot be enabled on platforms where it would
> not be tested anyway.
>
> Cc: Russell King
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Thomas Bogendoerfer
> Cc: Michael Ellerman
> Cc: Benjamin Herrenschmidt
> Cc: Paul Mackerras
RCH_ENABLE_SPLIT_PMD_PTLOCK
> mm: Drop redundant HAVE_ARCH_TRANSPARENT_HUGEPAGE
>
> arch/arc/Kconfig | 9 ++--
> arch/arm/Kconfig | 10 ++---
> arch/arm64/Kconfig | 30 ++
For arm64:
Acked-by: Catalin Marinas
Hi Anshuman,
On Mon, Feb 28, 2022 at 04:17:28PM +0530, Anshuman Khandual wrote:
> +static inline pgprot_t __vm_get_page_prot(unsigned long vm_flags)
> +{
> + switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
> + case VM_NONE:
> + return PAGE_NONE;
> + case VM
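The quoted hunk is cut short; as a hedged sketch of how such a helper typically continues (illustrative only, the VM_EXEC combinations are omitted), every remaining combination of the four flags maps onto one of the architecture's page protections:

static inline pgprot_t vm_get_page_prot_sketch(unsigned long vm_flags)
{
	switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
	case VM_NONE:
	case VM_SHARED:
		return PAGE_NONE;
	case VM_READ:
	case VM_WRITE:
	case VM_WRITE | VM_READ:
	case VM_READ | VM_SHARED:
		return PAGE_READONLY;	/* private writes go through CoW */
	case VM_WRITE | VM_SHARED:
	case VM_WRITE | VM_READ | VM_SHARED:
		return PAGE_SHARED;
	default:
		return PAGE_READONLY;	/* VM_EXEC variants not shown */
	}
}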
On Tue, Mar 15, 2022 at 03:18:34PM +0100, David Hildenbrand wrote:
> diff --git a/arch/arm64/include/asm/pgtable-prot.h
> b/arch/arm64/include/asm/pgtable-prot.h
> index b1e1b74d993c..62e0ebeed720 100644
> --- a/arch/arm64/include/asm/pgtable-prot.h
> +++ b/arch/arm64/include/asm/pgtable-prot.h
>
On Thu, Mar 17, 2022 at 11:04:18AM +0100, David Hildenbrand wrote:
> On 16.03.22 19:27, Catalin Marinas wrote:
> > On Tue, Mar 15, 2022 at 03:18:34PM +0100, David Hildenbrand wrote:
> >> @@ -909,12 +925,13 @@ static inline pmd_t pmdp_establish(struct
> &
t vm_area_struct
> *vma,
>
> /*
> * Encode and decode a swap entry:
> - * bits 0-1: present (must be zero)
> + * bits 0: present (must be zero)
> + * bits 1: remember PG_anon_exclusive
It looks fine to me.
Reviewed-by: Catalin Marinas
On Mon, Mar 21, 2022 at 05:44:05PM +0000, Will Deacon wrote:
> On Mon, Mar 21, 2022 at 04:07:48PM +0100, David Hildenbrand wrote:
> > So the example you gave cannot possibly have that bit set. From what I
> > understand, it should be fine. But I have no real preference: I can also
> > just stick to
Hi Ariel,
On Fri, Feb 18, 2022 at 09:45:51PM +0200, Ariel Marcovitch wrote:
> I was running a powerpc 32bit kernel (built using
> qemu_ppc_mpc8544ds_defconfig
> buildroot config, with enabling DEBUGFS+KMEMLEAK+HIGHMEM in the kernel
> config)
> on qemu and invoked the kmemleak scan (twice. for some
on
> Cc: linux...@kvack.org
> Cc: linux-ker...@vger.kernel.org
> Suggested-by: Christoph Hellwig
> Signed-off-by: Anshuman Khandual
Reviewed-by: Catalin Marinas
(VM_MTE & !VM_MTE_ALLOWED).
*/
if (vm_flags & VM_MTE)
prot |= PTE_ATTRINDX(MT_NORMAL_TAGGED);
return __pgprot(prot);
}
EXPORT_SYMBOL(vm_get_page_prot);
With that:
Reviewed-by: Catalin Marinas
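For context, a small usage sketch showing where the hunk above takes effect (the standard mm idiom, shown only as an illustration): the pte attributes of a new mapping are derived from its VM_* flags, so a VM_MTE vma picks up the Normal-Tagged memory type.

static void set_vma_page_prot_sketch(struct vm_area_struct *vma)
{
	/* e.g. on the mmap path: VM_* flags -> pte attributes */
	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
}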
> Cc: linux-ker...@vger.kernel.org
> Signed-off-by: Anshuman Khandual
Reviewed-by: Catalin Marinas
by: Anshuman Khandual
Reviewed-by: Catalin Marinas
On Wed, Apr 20, 2022 at 03:04:15AM +0000, Tong Tiangen wrote:
> Add copy_{to, from}_user() to machine check safe.
>
> If the copy fails due to a hardware memory error, only the relevant processes are
> affected, so killing the user process and isolating the user page with
> hardware memory errors is a more
On Wed, 20 Apr 2022 03:04:11 +0000, Tong Tiangen wrote:
> With the increase of memory capacity and density, the probability of
> memory errors increases. The increasing size and density of server RAM
> in the data center and cloud have shown increased uncorrectable memory
> errors.
>
> Currently, t
On Thu, May 05, 2022 at 02:39:43PM +0800, Tong Tiangen wrote:
> 在 2022/5/4 18:26, Catalin Marinas 写道:
> > On Wed, Apr 20, 2022 at 03:04:15AM +0000, Tong Tiangen wrote:
> > > Add copy_{to, from}_user() to machine check safe.
> > >
> > > If the copy fails due to hard
ke the relative pointers less surprising to both humans and tools by
> calculating them the normal way.
>
> Signed-off-by: Josh Poimboeuf
> ---
> arch/arm64/include/asm/asm-bug.h | 4 ++--
Acked-by: Catalin Marinas
On Wed, 20 Apr 2022 03:04:11 +0000, Tong Tiangen wrote:
> With the increase of memory capacity and density, the probability of
> memory errors increases. The increasing size and density of server RAM
> in the data center and cloud have shown increased uncorrectable memory
> errors.
>
> Currently, t
uld enable both ZONE_DMA and
> ZONE_DMA32 if EXPERT, add ARCH_HAS_ZONE_DMA_SET to make dma zone
> configurable and visible on the two architectures.
>
> Cc: Andrew Morton
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Geert Uytterhoeven
> Cc: Thomas Bogendoerfer
>
On Mon, Jun 07, 2021 at 09:54:22PM +0200, David Hildenbrand wrote:
> The parameter is unused, let's remove it.
>
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Michael Ellerman
> Cc: Benjamin Herrenschmidt
> Cc: Paul Mackerras
> Cc: Heiko Carstens
> Cc
ntry from the header and updates Documentation/vm/memory-model.rst.
>
> Cc: Jonathan Corbet
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Benjamin Herrenschmidt
> Cc: Paul Mackerras
> Cc: Michael Ellerman
> Cc: Dave Hansen
> Cc: Andy Lutomirski
> Cc: Peter Zijl
On Thu, Jul 09, 2020 at 04:37:10PM +0200, Paul Menzel wrote:
> Despite Linux 5.8-rc4 reporting memory leaks on the IBM POWER 8 S822LC, the
> file does not contain more information.
>
> > $ dmesg
> > […] > [48662.953323] perf: interrupt took too long (2570 > 2500),
> > lowering
> kernel.perf_event_
On Thu, Jul 09, 2020 at 11:08:52PM +0200, Paul Menzel wrote:
> Am 09.07.20 um 19:57 schrieb Catalin Marinas:
> > On Thu, Jul 09, 2020 at 04:37:10PM +0200, Paul Menzel wrote:
> > > Despite Linux 5.8-rc4 reporting memory leaks on the IBM POWER 8 S822LC,
> > > the
>
span itself, so the loop
> over memblock.memory regions is redundant.
>
> Replace the loop with a single call to memblock_set_node() to the entire
> memory.
>
> Signed-off-by: Mike Rapoport
Acked-by: Catalin Marinas
On Mon, Aug 10, 2020 at 11:17:30AM +0200, Gregory Herrero wrote:
> On Mon, Aug 10, 2020 at 08:48:22AM +0000, Christophe Leroy wrote:
> > Commit ea0eada45632 leads to the following build failure on powerpc:
> >
> > HOSTCC scripts/recordmcount
> > scripts/recordmcount.c: In function 'arm64_is_fak
On Mon, 10 Aug 2020 08:48:22 +0000 (UTC), Christophe Leroy wrote:
> Commit ea0eada45632 leads to the following build failure on powerpc:
>
> HOSTCC scripts/recordmcount
> scripts/recordmcount.c: In function 'arm64_is_fake_mcount':
> scripts/recordmcount.c:440: error: 'R_AARCH64_CALL26' undeclar
On Wed, Aug 26, 2020 at 12:57:48AM +1000, Nicholas Piggin wrote:
> This allows unsupported levels to be constant folded away, and so
> p4d_free_pud_page can be removed because it's no longer linked to.
>
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: linux-arm-ker...@lists
On Tue, Aug 18, 2020 at 05:08:16PM +0100, Will Deacon wrote:
> On Tue, Aug 18, 2020 at 04:07:36PM +0100, Matthew Wilcox wrote:
> > For example, arm64 seems confused in this scenario:
> >
> > void flush_dcache_page(struct page *page)
> > {
> > if (test_bit(PG_dcache_clean, &page->flags))
>
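For reference, a sketch of the lazy D-cache maintenance scheme being discussed, assuming the usual PG_dcache_clean convention (illustrative rather than the exact arm64 code of that era): flush_dcache_page() only marks the page as needing maintenance, and the real flush is deferred until the page is mapped into user space.

void flush_dcache_page_sketch(struct page *page)
{
	/* Defer the actual cache maintenance to mapping time. */
	if (test_bit(PG_dcache_clean, &page->flags))
		clear_bit(PG_dcache_clean, &page->flags);
}

static void sync_caches_at_map_time_sketch(struct page *page)
{
	/* Called when the page is mapped into user space. */
	if (test_and_set_bit(PG_dcache_clean, &page->flags))
		return;		/* already clean, nothing to do */
	/* do the actual D-cache clean / I-cache invalidation here */
}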
On Tue, Sep 14, 2021 at 02:10:29PM +0200, Ard Biesheuvel wrote:
> The CPU field will be moved back into thread_info even when
> THREAD_INFO_IN_TASK is enabled, so add it back to arm64's definition of
> struct thread_info.
>
> Signed-off-by: Ard Biesheuvel
Acked-by: Catalin Marinas
is sufficient to
> reference the CPU number of the current task.
>
> Signed-off-by: Ard Biesheuvel
Acked-by: Catalin Marinas
erted to the new power-off API.
>
> Signed-off-by: Dmitry Osipenko
Acked-by: Catalin Marinas
large pages).
>
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: linux-arm-ker...@lists.infradead.org
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: Borislav Petkov
> Cc: x...@kernel.org
> Cc: "H. Peter Anvin"
> Signed-o
On Mon, Nov 30, 2020 at 07:50:25PM +0000, ZHIZHIKIN Andrey wrote:
> From Krzysztof Kozlowski :
> > On Mon, Nov 30, 2020 at 03:21:33PM +0000, Andrey Zhizhikin wrote:
> > > Commit 7ecdea4a0226 ("backlight: generic_bl: Remove this driver as it is
> > > unused") removed geenric_bl driver from the tree,
On Fri, May 15, 2020 at 04:36:26PM +0200, Christoph Hellwig wrote:
> ARM64 needs almost no cache flushing routines of its own. Rely on
> asm-generic/cacheflush.h for the defaults.
>
> Signed-off-by: Christoph Hellwig
Acked-by: Catalin Marinas
| 4 +-
> arch/arm64/Kconfig| 1 -
For arm64:
Acked-by: Catalin Marinas
signed long min,
> unsigned long max)
> #endif
> max_zone_pfns[ZONE_NORMAL] = max;
>
> - free_area_init_nodes(max_zone_pfns);
> + free_area_init(max_zone_pfns);
> }
Acked-by: Catalin Marinas
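A sketch of the resulting idiom (illustrative; the ZONE_DMA32 capping is an assumption, not part of the quoted hunk): the architecture fills in the top PFN of each zone it uses and hands the array to the generic initialiser in a single call.

static void __init zone_sizes_init_sketch(unsigned long min_pfn,
					  unsigned long max_pfn)
{
	unsigned long max_zone_pfns[MAX_NR_ZONES] = { 0 };

#ifdef CONFIG_ZONE_DMA32
	/* hypothetical: cap the DMA32 zone at the 4GiB boundary */
	max_zone_pfns[ZONE_DMA32] = min(max_pfn, 1UL << (32 - PAGE_SHIFT));
#endif
	max_zone_pfns[ZONE_NORMAL] = max_pfn;

	free_area_init(max_zone_pfns);	/* replaces free_area_init_nodes() */
}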
tween the zones.
>
> After removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP the free_area_init() is
> available to all architectures.
>
> Using this function instead of free_area_init_node() simplifies the zone
> detection.
>
> Signed-off-by: Mike Rapoport
Acked-by: Catalin Marinas
(BTW
On Thu, May 14, 2020 at 12:22:36AM +0530, Bhupesh Sharma wrote:
> diff --git a/kernel/crash_core.c b/kernel/crash_core.c
> index 9f1557b98468..18175687133a 100644
> --- a/kernel/crash_core.c
> +++ b/kernel/crash_core.c
> @@ -413,6 +413,7 @@ static int __init crash_save_vmcoreinfo_init(void)
>
On Thu, Jun 18, 2020 at 06:45:29AM +0530, Anshuman Khandual wrote:
> There are many instances where vmemmap allocation is often switched between
> regular memory and device memory just based on whether altmap is available
> or not. vmemmap_alloc_block_buf() is used in various platforms to allocate
>
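The pattern being consolidated looks roughly like this (a sketch based on pre-consolidation kernels; treat the exact helper signatures as assumptions):

static void *vmemmap_buf_alloc_sketch(unsigned long size, int node,
				      struct vmem_altmap *altmap)
{
	/* device memory via the altmap when present, regular memory otherwise */
	if (altmap)
		return altmap_alloc_block_buf(size, altmap);
	return vmemmap_alloc_block_buf(size, node);
}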
On Thu, Jul 02, 2020 at 08:08:55PM +0800, Dave Young wrote:
> Hi Catalin,
> On 07/02/20 at 12:00pm, Catalin Marinas wrote:
> > On Thu, May 14, 2020 at 12:22:36AM +0530, Bhupesh Sharma wrote:
> > > diff --git a/kernel/crash_core.c b/kernel/crash_core.c
> > > index 9f1
On Thu, 14 May 2020 00:22:35 +0530, Bhupesh Sharma wrote:
> Apologies for the delayed update. It's been quite some time since I
> posted the last version (v5), but I have been really caught up in some
> other critical issues.
>
> Changes since v5:
>
> - v5 can be viewed here:
> h
On Sat, Jul 31, 2021 at 02:22:32PM +0900, Masahiro Yamada wrote:
> Make architectures select TRACE_IRQFLAGS_SUPPORT instead of
> having many defines.
>
> Signed-off-by: Masahiro Yamada
For arm64:
Acked-by: Catalin Marinas
arch/riscv/kernel/vdso/vgettimeofday.o is built only under that
> condition. So, you can always define CFLAGS_vgettimeofday.o
>
> Remove all the meaningless conditionals.
>
> Signed-off-by: Masahiro Yamada
Acked-by: Catalin Marinas
symbol
we export in this file.
Either way:
Acked-by: Catalin Marinas
-off-by: Christoph Hellwig
Acked-by: Catalin Marinas
On Tue, May 28, 2019 at 09:14:12PM +0200, Mathieu Malaterre wrote:
> On Tue, May 28, 2019 at 7:21 AM Michael Ellerman wrote:
> > Mathieu Malaterre writes:
> > > Is there a way to dump more context (somewhere in of tree
> > > flattening?). I cannot make sense of the following:
> >
> > Hmm. Not tha
- depends on NUMA
> -
> source "kernel/Kconfig.hz"
>
> config ARCH_SPARSEMEM_ENABLE
For arm64:
Acked-by: Catalin Marinas
On Thu, Dec 16, 2021 at 05:13:47PM +0000, Christophe Leroy wrote:
> Le 09/12/2021 à 11:02, Nicholas Piggin a écrit :
> > Excerpts from Christophe Leroy's message of December 9, 2021 3:18 am:
> >> Use the generic version of arch_hugetlb_get_unmapped_area()
> >> which is now available at all time.
>
On Fri, Dec 17, 2021 at 10:27:23AM +0000, Christophe Leroy wrote:
> Powerpc needs flags and len to make decisions in arch_get_mmap_end().
>
> So add them as parameters to arch_get_mmap_end().
>
> Signed-off-by: Christophe Leroy
> Cc: Steve Capper
> Cc: Catalin Marin
itectural code then they default
> to (TASK_SIZE) and (base) so should not introduce any behavioural
> changes to architectures that do not define them.
>
> For the time being, only ARM64 is affected by this change.
>
> Signed-off-by: Christophe Leroy
> Cc: Steve Capper
> Cc: Wil
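For reference, the generic defaults implied by the text above would look roughly like this (a sketch; the parameter names are assumptions based on the commit description):

#ifndef arch_get_mmap_end
#define arch_get_mmap_end(addr, len, flags)	(TASK_SIZE)
#endif

#ifndef arch_get_mmap_base
#define arch_get_mmap_base(addr, base)		(base)
#endif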
available
> platforms definitions. This makes huge pte creation much cleaner and easier
> to follow.
>
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Michael Ellerman
> Cc: Paul Mackerras
> Cc: "David S. Miller"
> Cc: Mike Kravetz
> Cc: Andrew Morton
available
> platforms definitions. This makes huge pte creation much cleaner and easier
> to follow.
>
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Michael Ellerman
> Cc: Paul Mackerras
> Cc: "David S. Miller"
> Cc: Mike Kravetz
> Cc: Andrew Morton
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
> AND
> + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
> THIS
> + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
Pretty verbose header, I guess GPLv2 would do but IANAL.
Anyway:
Reviewed-by: Catalin Marinas
On Mon, Sep 18, 2017 at 04:39:37PM -0400, Roy Pledge wrote:
> Use the shared-memory-pool mechanism for free buffer proxy record
> area allocation.
>
> Signed-off-by: Roy Pledge
Reviewed-by: Catalin Marinas
On Mon, Sep 18, 2017 at 04:39:38PM -0400, Roy Pledge wrote:
> Use the shared-memory-pool mechanism for frame queue descriptor and
> packed frame descriptor record area allocations.
>
> Signed-off-by: Roy Pledge
Reviewed-by: Catalin Marinas
igned-off-by: Roy Pledge
Reviewed-by: Catalin Marinas
On Mon, Sep 18, 2017 at 04:39:41PM -0400, Roy Pledge wrote:
> From: Claudiu Manoil
>
> Not relevant and arch dependent. Overkill for PPC.
>
> Signed-off-by: Claudiu Manoil
> Signed-off-by: Roy Pledge
Reviewed-by: Catalin Marinas
On Mon, Sep 18, 2017 at 04:39:42PM -0400, Roy Pledge wrote:
> From: Valentin Rothberg
>
> The Kconfig symbol for 32bit ARM is 'ARM', not 'ARM32'.
>
> Signed-off-by: Valentin Rothberg
> Signed-off-by: Claudiu Manoil
> Signed-off-by: Roy Pledge
Reviewed-by: Catalin Marinas
M. This also fixes the code so sparse checking is clean.
>
> Signed-off-by: Roy Pledge
Reviewed-by: Catalin Marinas
On Mon, Sep 18, 2017 at 04:39:35PM -0400, Roy Pledge wrote:
> Madalin Bucur (4):
> soc/fsl/qbman: Drop set/clear_bits usage
> soc/fsl/qbman: add QMAN_REV32
> soc/fsl/qbman: different register offsets on ARM
> soc/fsl/qbman: Enable FSL_LAYERSCAPE config on ARM
>
> Roy Pledge (5):
> soc/fs
On Tue, Jan 02, 2018 at 08:05:44PM +0000, Ard Biesheuvel wrote:
> diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
> index 10684b17d0bd..b6d51b4d5ce1 100644
> --- a/drivers/pci/quirks.c
> +++ b/drivers/pci/quirks.c
> @@ -3556,9 +3556,16 @@ static void pci_do_fixups(struct pci_dev *dev, stru
On Tue, Jan 02, 2018 at 08:05:46PM +0000, Ard Biesheuvel wrote:
> diff --git a/arch/arm/include/asm/jump_label.h
> b/arch/arm/include/asm/jump_label.h
> index e12d7d096fc0..7b05b404063a 100644
> --- a/arch/arm/include/asm/jump_label.h
> +++ b/arch/arm/include/asm/jump_label.h
> @@ -45,5 +45,32 @@
On Fri, Jan 05, 2018 at 06:01:33PM +0000, Ard Biesheuvel wrote:
> On 5 January 2018 at 17:58, Catalin Marinas wrote:
> > On Tue, Jan 02, 2018 at 08:05:46PM +0000, Ard Biesheuvel wrote:
> >> diff --git a/arch/arm/include/asm/jump_label.h
> >> b/arch/arm/include/
safe syscalls.
For arm64:
Acked-by: Catalin Marinas
On Thu, Aug 24, 2017 at 04:37:45PM -0400, Roy Pledge wrote:
> --- a/drivers/soc/fsl/qbman/bman_ccsr.c
> +++ b/drivers/soc/fsl/qbman/bman_ccsr.c
[...]
> @@ -201,6 +202,38 @@ static int fsl_bman_probe(struct platform_device *pdev)
> return -ENODEV;
> }
>
> + /*
> + * If
On Thu, Aug 24, 2017 at 04:37:47PM -0400, Roy Pledge wrote:
> Updates the QMan and BMan device tree bindings for reserved memory
> nodes. This makes the reserved memory allocation compatible with
> the shared-dma-pool usage.
>
> Signed-off-by: Roy Pledge
> ---
> Documentation/devicetree/bindings
On Thu, Aug 24, 2017 at 04:37:49PM -0400, Roy Pledge wrote:
> From: Claudiu Manoil
>
> Not relevant and arch dependent. Overkill for PPC.
>
> Signed-off-by: Claudiu Manoil
> Signed-off-by: Roy Pledge
> ---
> drivers/soc/fsl/qbman/dpaa_sys.h | 4
> 1 file changed, 4 deletions(-)
>
> diff
On Thu, Aug 24, 2017 at 04:37:51PM -0400, Roy Pledge wrote:
> diff --git a/drivers/soc/fsl/qbman/bman.c b/drivers/soc/fsl/qbman/bman.c
> index ff8998f..e31c843 100644
> --- a/drivers/soc/fsl/qbman/bman.c
> +++ b/drivers/soc/fsl/qbman/bman.c
> @@ -154,7 +154,7 @@ struct bm_mc {
> };
>
> struct b
On Thu, Sep 14, 2017 at 07:07:50PM +0000, Roy Pledge wrote:
> On 9/14/2017 10:00 AM, Catalin Marinas wrote:
> > On Thu, Aug 24, 2017 at 04:37:51PM -0400, Roy Pledge wrote:
> >> @@ -123,23 +122,34 @@ static int bman_portal_probe(struct platform_device
> >> *pdev)
> &
(finally getting around to looking at this series, sorry for the delay)
On Tue, Apr 09, 2024 at 09:17:54AM +0300, Baruch Siach wrote:
> From: Catalin Marinas
>
> Hardware DMA limit might not be a power of 2. When RAM range starts above
> 0, say 4GB, DMA limit of 30 bits should e
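A small worked example of that point, using the numbers in the text (RAM base at 4GB, a 30-bit DMA limit): the resulting 5GB boundary is not a power of two, so it cannot be described by a single zone_dma_bits value.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t ram_base  = 4ULL << 30;	  /* RAM starts at 4GB */
	uint64_t dma_span  = 1ULL << 30;	  /* 30-bit DMA limit  */
	uint64_t dma_limit = ram_base + dma_span; /* ends at 5GB       */

	printf("DMA limit = %#llx, power of two? %s\n",
	       (unsigned long long)dma_limit,
	       (dma_limit & (dma_limit - 1)) ? "no" : "yes");
	return 0;
}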
On Tue, Apr 09, 2024 at 09:17:55AM +0300, Baruch Siach wrote:
> of_dma_get_max_cpu_address() returns the highest CPU address that
> devices can use for DMA. The implicit assumption is that all CPU
> addresses below that limit are suitable for DMA. However the
> 'dma-ranges' property this code uses
On Tue, Apr 09, 2024 at 09:17:57AM +0300, Baruch Siach wrote:
> Current code using zone_dma_bits assumes that all addresses in the
> bits mask range are suitable for DMA. For some existing platforms this
> assumption is not correct. The DMA range might have a non-zero lower limit.
[...]
> @@ -59,7 +60,7 @
On Tue, May 28, 2024 at 12:24:57PM +0530, Amit Daniel Kachhap wrote:
> On 5/3/24 18:31, Joey Gouly wrote:
> > diff --git a/arch/arm64/include/asm/mman.h b/arch/arm64/include/asm/mman.h
> > index 5966ee4a6154..ecb2d18dc4d7 100644
> > --- a/arch/arm64/include/asm/mman.h
> > +++ b/arch/arm64/include/a
rotect() call.
> +
> fault = __do_page_fault(mm, vma, addr, mm_flags, vm_flags, regs);
You'll need to rebase this on 6.10-rcX since this function disappeared.
Otherwise the patch looks fine.
Reviewed-by: Catalin Marinas
On Fri, May 03, 2024 at 02:01:23PM +0100, Joey Gouly wrote:
> This indicates if the system supports POE. This is a CPUCAP_BOOT_CPU_FEATURE
> as the boot CPU will enable POE if it has it, so secondary CPUs must also
> have this feature.
>
> Signed-off-by: Joey Gouly
> Cc: Cat
On Fri, May 03, 2024 at 02:01:23PM +0100, Joey Gouly wrote:
> This indicates if the system supports POE. This is a CPUCAP_BOOT_CPU_FEATURE
> as the boot CPU will enable POE if it has it, so secondary CPUs must also
> have this feature.
>
> Signed-off-by: Joey Gouly
> Cc: Cat
On Fri, Jun 21, 2024 at 06:01:52PM +0100, Catalin Marinas wrote:
> On Fri, May 03, 2024 at 02:01:23PM +0100, Joey Gouly wrote:
> > This indicates if the system supports POE. This is a CPUCAP_BOOT_CPU_FEATURE
> > as the boot CPU will enable POE if it has it, so secondary CPUs mus
On Fri, May 03, 2024 at 02:01:24PM +0100, Joey Gouly wrote:
> POR_EL0 is a register that can be modified by userspace directly,
> so it must be context switched.
>
> Signed-off-by: Joey Gouly
> Cc: Catalin Marinas
> Cc: Will Deacon
Reviewed-by: Catalin Marinas
On Fri, May 03, 2024 at 02:01:28PM +0100, Joey Gouly wrote:
> Expose a HWCAP and ID_AA64MMFR3_EL1_S1POE to userspace, so they can be used to
> check if the CPU supports the feature.
>
> Signed-off-by: Joey Gouly
> Cc: Catalin Marinas
> Cc: Will Deacon
Reviewed-by: Catalin Marinas
On Fri, May 03, 2024 at 02:01:29PM +0100, Joey Gouly wrote:
> To make it easier to share the generic PKEYs flags, move the MTE flag.
>
> Signed-off-by: Joey Gouly
> Cc: Catalin Marinas
> Cc: Will Deacon
Acked-by: Catalin Marinas