On Tue, Oct 07, 2014 at 09:00:48AM +0100, Lee Jones wrote:
> On Mon, 06 Oct 2014, Guenter Roeck wrote:
> > --- a/drivers/mfd/ab8500-sysctrl.c
> > +++ b/drivers/mfd/ab8500-sysctrl.c
> > @@ -6,6 +6,7 @@
>
> [...]
>
> > +static int ab8500_power_off(struct notifier_block *this, unsigned long unu
On Tue, Oct 07, 2014 at 06:28:34AM +0100, Guenter Roeck wrote:
> Register with kernel poweroff handler instead of setting pm_power_off
> directly.
>
> Cc: Catalin Marinas
> Cc: Will Deacon
> Signed-off-by: Guenter Roeck
> ---
> arch/arm64/kernel/psci.c | 2 +-
>
55,8 +153,7 @@ void machine_power_off(void)
> {
> local_irq_disable();
> smp_send_stop();
> - if (pm_power_off)
> - pm_power_off();
> + do_kernel_poweroff();
> }
Acked-by: Catalin Marinas
For arm64:
Acked-by: Catalin Marinas
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev
On Tue, Jul 07, 2015 at 01:03:40PM -0400, Eric B Munson wrote:
> diff --git a/arch/arm/kernel/calls.S b/arch/arm/kernel/calls.S
> index 05745eb..514e77b 100644
> --- a/arch/arm/kernel/calls.S
> +++ b/arch/arm/kernel/calls.S
> @@ -397,6 +397,9 @@
> /* 385 */ CALL(sys_memfd_create)
>
Christoph,
On 12 August 2015 at 08:05, Christoph Hellwig wrote:
> Signed-off-by: Christoph Hellwig
> ---
> include/asm-generic/dma-mapping-common.h | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/include/asm-generic/dma-mapping-common.h b/include/asm-generic/dm
On Fri, Jun 13, 2014 at 08:12:08AM +0100, Denis Kirjanov wrote:
> On 6/12/14, Catalin Marinas wrote:
> > On Thu, Jun 12, 2014 at 01:00:57PM +0100, Denis Kirjanov wrote:
> >> On 6/12/14, Denis Kirjanov wrote:
> >> > On 6/12/14, Catalin Marinas wrote:
> >
On 13 Jun 2014, at 22:44, Benjamin Herrenschmidt
wrote:
> On Fri, 2014-06-13 at 09:56 +0100, Catalin Marinas wrote:
>
>> OK, so that's the DART table allocated via alloc_dart_table(). Is
>> dart_tablebase removed from the kernel linear mapping after allocation?
>
On Mon, Jan 27, 2014 at 06:08:17AM +, Nicolas Pitre wrote:
> ARM and ARM64 are the only two architectures implementing
> arch_cpu_idle_prepare() simply to call local_fiq_enable().
>
> We have secondary_start_kernel() already calling local_fiq_enable() and
> this is done a second time in arch_c
On Mon, Jan 27, 2014 at 03:51:02PM +, Nicolas Pitre wrote:
> On Mon, 27 Jan 2014, Catalin Marinas wrote:
>
> > On Mon, Jan 27, 2014 at 06:08:17AM +, Nicolas Pitre wrote:
> > > ARM and ARM64 are the only two architectures implementing
> > > arch_cpu
On 6 February 2014 14:16, Nicolas Pitre wrote:
> The core idle loop now takes care of it.
>
> Signed-off-by: Nicolas Pitre
Acked-by: Catalin Marinas
On Mon, Sep 14, 2015 at 04:04:37PM +1000, Michael Ellerman wrote:
> On Sun, 2015-09-13 at 21:36 +0300, Denis Kirjanov wrote:
> > During the MSI bitmap test on boot kmemleak spews the following trace:
> >
> > unreferenced object 0xc0016e86c900 (size 64):
> > comm "swapper/0", pid 1, jiffies
On Mon, Sep 14, 2015 at 07:36:49PM +1000, Michael Ellerman wrote:
> On Mon, 2015-09-14 at 10:15 +0100, Catalin Marinas wrote:
> > You could add some flag to struct msi_bitmap based on mem_init_done to
> > be able to reclaim some slab memory later. If the bitmap is small and
>
On Tue, Sep 15, 2015 at 08:14:08PM +0300, Denis Kirjanov wrote:
> diff --git a/arch/powerpc/include/asm/msi_bitmap.h b/arch/powerpc/include/asm/msi_bitmap.h
> index 97ac3f4..9a1d2fb 100644
> --- a/arch/powerpc/include/asm/msi_bitmap.h
> +++ b/arch/powerpc/include/asm/msi_bitmap.h
> @@ -19,6 +19,
ns
> from slab and memblock so we can properly free/handle
> memory in msi_bitmap_free().
>
> Signed-off-by: Denis Kirjanov
Reviewed-by: Catalin Marinas
end_ro_after_init);
I'm not a fan of this approach but I couldn't come up with anything
better. I was hoping we could check for PageReserved() in scan_block()
but on arm64 it ends up not scanning the .bss at all.
Until another user appears, I'm ok with this patch.
Acked-by: Catalin Marinas
On Thu, Mar 21, 2019 at 12:15:46AM +1100, Michael Ellerman wrote:
> Catalin Marinas writes:
> > On Wed, Mar 13, 2019 at 10:57:17AM -0400, Qian Cai wrote:
> >> @@ -1531,7 +1547,14 @@ static void kmemleak_scan(void)
> >>
> >>/* data/bss scanning */
>
ow partial freeing via
the kmemleak_free_part() in the powerpc kvm_free_tmp() function.
Acked-by: Michael Ellerman (powerpc)
Reported-by: Qian Cai
Signed-off-by: Catalin Marinas
---
Posting as a proper patch following the inlined one here:
http://lkml.kernel.org/r/201903201
On Mon, Apr 01, 2019 at 12:51:48PM +0100, Vincenzo Frascino wrote:
> diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
> index 2d419006ad43..47ba72345739 100644
> --- a/arch/arm64/kernel/vdso.c
> +++ b/arch/arm64/kernel/vdso.c
> @@ -245,6 +245,8 @@ void update_vsyscall(struct timekee
ll.tbl | 4
> arch/xtensa/kernel/syscalls/syscall.tbl | 4
> 16 files changed, 65 insertions(+), 1 deletion(-)
For arm64:
Acked-by: Catalin Marinas
> ccmp w0, #CLOCK_MONOTONIC_RAW, #0x4, ne
> - b.ne 1f
> + b.ne 2f
>
> - ldr x2, 5f
> - b 2f
> -1:
> +1: /* Get hrtimer_res */
> + ldr x2, [vdso_data, #CLOCK_REALTIME_RES]
And here we need an "ldr w2, ..." since hrtimer_res is u32.
With the above (which Will can fix up):
Reviewed-by: Catalin Marinas
On Fri, Apr 03, 2020 at 01:58:31AM +0100, Al Viro wrote:
> On Thu, Apr 02, 2020 at 11:35:57AM -0700, Kees Cook wrote:
> > Yup, I think it's a weakness of the ARM implementation and I'd like to
> > not extend it further. AFAIK we should never nest, but I would not be
> > surprised at all if we did.
On Thu, Oct 31, 2019 at 05:58:53PM +0100, Christoph Hellwig wrote:
> On Thu, Oct 31, 2019 at 05:22:59PM +0100, Nicolas Saenz Julienne wrote:
> > OK, I see what you mean now. It's wrong indeed.
> >
> > The trouble is the ZONE_DMA series[1] in arm64, also due for v5.5, will be
> > affected by this p
On Mon, May 11, 2020 at 08:51:15AM +0100, Will Deacon wrote:
> On Sun, May 10, 2020 at 09:54:41AM +0200, Christoph Hellwig wrote:
> > The second argument is the end "pointer", not the length.
> >
> > Signed-off-by: Christoph Hellwig
> > ---
> > arch/arm64/kernel/machine_kexec.c | 1 +
> > 1 file
On Mon, May 11, 2020 at 09:15:55PM +1000, Michael Ellerman wrote:
> Qian Cai writes:
> > kvmppc_pmd_alloc() and kvmppc_pte_alloc() allocate some memory but then
> > pud_populate() and pmd_populate() will use __pa() to reference the newly
> > allocated memory. The same is in xive_native_provision_p
On Mon, May 11, 2020 at 07:43:30AM -0400, Qian Cai wrote:
> On May 11, 2020, at 7:15 AM, Michael Ellerman wrote:
> > There is kmemleak_alloc_phys(), which according to the docs can be used
> > for tracking a phys address.
> >
> > Did you try that?
>
> Catalin, feel free to give your thoughts her
(catching up with emails)
On Wed, 11 Jul 2018 at 00:40, Benjamin Herrenschmidt
wrote:
> On Tue, 2018-07-10 at 17:17 +0200, Paul Menzel wrote:
> > On a the IBM S822LC (8335-GTA) with Ubuntu 18.04 I built Linux master
> > – 4.18-rc4+, commit 092150a2 (Merge branch 'for-linus'
> > of git://git.kerne
On Tue, Dec 04, 2018 at 07:38:24PM -0800, Douglas Anderson wrote:
> Douglas Anderson (4):
> kgdb: Remove irq flags from roundup
> kgdb: Fix kgdb_roundup_cpus() for arches who used smp_call_function()
> kgdb: Don't round up a CPU that failed rounding up before
> kdb: Don't back trace on a cp
Hi Doug,
On Fri, Dec 07, 2018 at 10:40:24AM -0800, Doug Anderson wrote:
> On Fri, Dec 7, 2018 at 9:42 AM Catalin Marinas
> wrote:
> > On Tue, Dec 04, 2018 at 07:38:24PM -0800, Douglas Anderson wrote:
> > > Douglas Anderson (4):
> > > kgdb: Remove irq flags
> configured this way then architecturally it isn't allowed to have a
> large page at this level, and any code using these page walking macros
> is implicitly relying on the page size/number of levels being the same as
> the kernel. So it is safe to reuse this for p?d_large() as it
unsigned long attrs)
> {
> - if (!dev_is_dma_coherent(dev) || (attrs & DMA_ATTR_WRITE_COMBINE))
> - return pgprot_writecombine(prot);
> - return prot;
> + return pgprot_writecombine(prot);
> }
For arm64:
Acked-by: Catalin Marinas
n 64-bit kernels, so that argument is no
> longer very strong.
>
> Assigning the number lets us use the system call on 64-bit kernels as well
> as providing a more consistent set of syscalls across architectures.
>
> Signed-off-by: Arnd Bergmann
For the arm64 part:
Acked-by: Catalin Marinas
NR_migrate_pages 400
> __SYSCALL(__NR_migrate_pages, compat_sys_migrate_pages)
> +#define __NR_kexec_file_load 401
> +__SYSCALL(__NR_kexec_file_load, sys_kexec_file_load)
For arm64:
Acked-by: Catalin Marinas
uching the 32-bit architectures twice.
>
> Signed-off-by: Arnd Bergmann
For arm64:
Acked-by: Catalin Marinas
they
> pass only counts elapsed time, not time since the epoch. They
> will be dealt with later.
>
> Signed-off-by: Arnd Bergmann
Acked-by: Catalin Marinas
(as long as compat follows the arm32 syscall numbers)
On Mon, Jan 14, 2019 at 01:59:00PM +0100, David Hildenbrand wrote:
> This will be done by free_reserved_page().
>
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Bhupesh Sharma
> Cc: James Morse
> Cc: Marc Zyngier
> Cc: Dave Kleikamp
> Cc: Mark Rutland
> Cc: And
arking pages as PG_reserved is not necessary, they are
> already in the desired state (otherwise they would have been handed over
> to the buddy as free pages and bad things would happen).
>
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: James Morse
> Cc: Bhupesh Sharma
> Cc
On Mon, Jan 21, 2019 at 10:03:53AM +0200, Mike Rapoport wrote:
> diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c
> index ae34e3a..2c61ea4 100644
> --- a/arch/arm64/mm/numa.c
> +++ b/arch/arm64/mm/numa.c
> @@ -237,6 +237,10 @@ static void __init setup_node_data(int nid, u64 start_pfn, u6
For the arm64 bits in this series:
Acked-by: Catalin Marinas
On Mon, Jan 27, 2020 at 09:11:53PM -0500, Qian Cai wrote:
> On Jan 27, 2020, at 8:28 PM, Anshuman Khandual
> wrote:
> > This adds tests which will validate architecture page table helpers and
> > other accessors in their compliance with expected generic MM semantics.
> > This will help various ar
On Tue, Jan 28, 2020 at 02:07:10PM -0500, Qian Cai wrote:
> On Jan 28, 2020, at 12:47 PM, Catalin Marinas wrote:
> > The primary goal here is not finding regressions but having clearly
> > defined semantics of the page table accessors across architectures. x86
> > and arm6
On Tue, Jan 28, 2020 at 06:57:53AM +0530, Anshuman Khandual wrote:
> This gets build and run when CONFIG_DEBUG_VM_PGTABLE is selected along with
> CONFIG_VM_DEBUG. Architectures willing to subscribe this test also need to
> select CONFIG_ARCH_HAS_DEBUG_VM_PGTABLE which for now is limited to x86 and
On Wed, 2010-07-14 at 00:04 +0100, Grant Likely wrote:
> - It still doesn't resolve dependencies. A solver would help with this.
> For the time being I work around the problem by running the generated
> config through 'oldconfig' and looking for differences. If the files
> differ (ignoring
On Fri, 2010-07-16 at 19:46 +0100, Linus Torvalds wrote:
> On Fri, Jul 16, 2010 at 11:40 AM, Nicolas Pitre wrote:
> >
> > DOH.
>
> Well, it's possible that the correct approach is a mixture.
>
> Automatically do the trivial cases (recursive selects, dependencies
> that are simple or of the form
On Fri, 2010-07-16 at 21:17 +0100, Grant Likely wrote:
> On Fri, Jul 16, 2010 at 2:09 PM, Catalin Marinas
> wrote:
> > On Fri, 2010-07-16 at 19:46 +0100, Linus Torvalds wrote:
> >> On Fri, Jul 16, 2010 at 11:40 AM, Nicolas Pitre wrote:
> >> >
> >> >
hardware but as long as someone sorts out things
like _edata or other PPC-specific allocators which aren't currently
tracked by kmemleak, I'm OK with the original patch:
Acked-by: Catalin Marinas
--
Catalin
On Thu, 2009-07-16 at 17:43 +1000, Michael Ellerman wrote:
> On Thu, 2009-07-16 at 11:25 +1000, Michael Ellerman wrote:
> > Very lightly tested, doesn't crash the kernel.
> >
> > Signed-off-by: Michael Ellerman
> > ---
> >
> > It doesn't look like we actually need to add any support in the
> > a
On Fri, 2009-07-17 at 10:29 +1000, Michael Ellerman wrote:
> On Thu, 2009-07-16 at 18:52 +0100, Catalin Marinas wrote:
> > On Thu, 2009-07-16 at 17:43 +1000, Michael Ellerman wrote:
> > > On Thu, 2009-07-16 at 11:25 +1000, Michael Ellerman wrote:
> > > > Very lig
On Fri, 2009-07-17 at 18:32 +1000, Michael Ellerman wrote:
> On Fri, 2009-07-17 at 09:26 +0100, Catalin Marinas wrote:
> > On Fri, 2009-07-17 at 10:29 +1000, Michael Ellerman wrote:
> > > The wrinkle is that lmb never frees, so by definition it can't leak :)
> >
>
d by kmemleak, ie. slab allocations etc.
Looks alright to me (though I haven't tested it). You can add a
Reviewed-by: Catalin Marinas
--
Catalin
On Fri, 2009-08-14 at 17:56 +1000, Benjamin Herrenschmidt wrote:
> On Thu, 2009-08-13 at 16:40 +0100, Catalin Marinas wrote:
> > On Thu, 2009-08-13 at 13:01 +1000, Michael Ellerman wrote:
> > > We don't actually want kmemleak to track the lmb allocations, so we
> > &
On Fri, 2009-08-14 at 12:49 -0700, David Miller wrote:
> From: Benjamin Herrenschmidt
> Date: Fri, 14 Aug 2009 17:56:40 +1000
>
> > On Thu, 2009-08-13 at 16:40 +0100, Catalin Marinas wrote:
> >> On Thu, 2009-08-13 at 13:01 +1000, Michael Ellerman wrote:
> >> >
On Mon, Sep 02, 2019 at 11:44:43AM +1000, Michael Ellerman wrote:
> Stephen Rothwell writes:
> > Hi all,
> >
> > Today's linux-next merge of the powerpc tree got a conflict in:
> >
> > arch/Kconfig
> >
> > between commit:
> >
> > 5cf896fb6be3 ("arm64: Add support for relocating the kernel with
On Mon, Oct 14, 2019 at 08:31:02PM +0200, Nicolas Saenz Julienne wrote:
> the Raspberry Pi 4 offers up to 4GB of memory, of which only the first
> is DMA capable device wide. This forces us to use of bounce buffers,
> which are currently not very well supported by ARM's custom DMA ops.
> Among othe
On Tue, Oct 15, 2019 at 09:48:22AM +0200, Nicolas Saenz Julienne wrote:
> A little off topic but I was wondering if you have a preferred way to refer to
> the arm architecture in a way that it unambiguously excludes arm64 (for
> example
> arm32 would work).
arm32 should be fine. Neither arm64 nor
On Thu, Sep 21, 2023 at 05:35:54PM +0100, Ryan Roberts wrote:
> On 21/09/2023 17:30, Andrew Morton wrote:
> > On Thu, 21 Sep 2023 17:19:59 +0100 Ryan Roberts
> > wrote:
> >> Ryan Roberts (8):
> >> parisc: hugetlb: Convert set_huge_pte_at() to take vma
> >> powerpc: hugetlb: Convert set_huge_p
by the
> perf folks.
> - Map Powerpc to sys_ni_syscall (Rick Edgecombe)
> ---
> arch/alpha/kernel/syscalls/syscall.tbl | 1 +
> arch/arm/tools/syscall.tbl | 1 +
> arch/arm64/include/asm/unistd.h | 2 +-
> arch/arm64/include/asm/unistd32.h | 2 ++
For arm64 (compat):
Acked-by: Catalin Marinas
> arch/arm64/kernel/efi.c | 4
> arch/arm64/kernel/image-vars.h| 2 ++
It's more Ard's thing and he reviewed it already but if you need another
ack:
Acked-by: Catalin Marinas
On Fri, Nov 10, 2023 at 08:19:23PM +0530, Aneesh Kumar K.V wrote:
> Some architectures can now support EXEC_ONLY mappings and I am wondering
> what get_user_pages() on those addresses should return. Earlier
> PROT_EXEC implied PROT_READ and pte_access_permitted() returned true for
> that. But arm64
On Thu, May 18, 2023 at 02:59:34PM +0800, Yicong Yang wrote:
> From: Barry Song
>
> on x86, batched and deferred tlb shootdown has led to 90%
> performance increase on tlb shootdown. on arm64, HW can do
> tlb shootdown without software IPI. But sync tlbi is still
> quite expensive.
[...]
> .../
On Thu, Jun 29, 2023 at 05:31:36PM +0100, Catalin Marinas wrote:
> On Thu, May 18, 2023 at 02:59:34PM +0800, Yicong Yang wrote:
> > From: Barry Song
> >
> > on x86, batched and deferred tlb shootdown has led to 90%
> > performance increase on tlb shootdown. on arm64,
On Mon, Jul 10, 2023 at 04:39:14PM +0800, Yicong Yang wrote:
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 7856c3a3e35a..f0ce8208c57f 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -96,6 +96,7 @@ config ARM64
> select ARCH_SUPPORTS_NUMA_BALANCING
> se
318-2-khand...@linux.vnet.ibm.com/]
> Signed-off-by: Yicong Yang
> [Rebase and fix incorrect return value type]
> Reviewed-by: Kefeng Wang
> Reviewed-by: Anshuman Khandual
> Reviewed-by: Barry Song
> Reviewed-by: Xin Hao
> Tested-by: Punit Agrawal
Reviewed-by: Catalin Marinas
> Tested-by: Yicong Yang
> Tested-by: Xin Hao
> Tested-by: Punit Agrawal
> Signed-off-by: Barry Song
> Signed-off-by: Yicong Yang
> Reviewed-by: Kefeng Wang
> Reviewed-by: Xin Hao
> Reviewed-by: Anshuman Khandual
Reviewed-by: Catalin Marinas
64 may
> only need a synchronization barrier(dsb) here rather than
> a full mm flush. So add arch_flush_tlb_batched_pending() to
> allow an arch-specific implementation here. This intends no
> functional changes on x86 since it still does a full mm flush on
> x86.
>
> Signed-off-by: Yicong Yang
Reviewed-by: Catalin Marinas
dsb(ish);
> +}
Nitpick: as an additional patch, I'd add some comment for these two
functions that the TLBI has already been issued and only a DSB is needed
to synchronise its effect on the other CPUs.
Reviewed-by: Catalin Marinas
D(mm));
> __tlbi(vale1is, addr);
> __tlbi_user(vale1is, addr);
> + mmu_notifier_invalidate_range(mm, uaddr & PAGE_MASK,
> +       (uaddr & PAGE_MASK) + PAGE_SIZE);
Nitpick: we have PAGE_ALIGN() for this.
For arm64:
Acked-by: Catalin Marinas
sh_tlb_range(struct
> vm_area_struct *vma,
> scale++;
> }
> dsb(ish);
> - mmu_notifier_invalidate_range(vma->vm_mm, start, end);
> + mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start, end);
> }
For arm64:
Acked-by: Catalin Marinas
On Mon, Mar 27, 2023 at 02:12:56PM +0200, Arnd Bergmann wrote:
> Another difference that I do not address here is what cache invalidation
> does for partical cache lines. On arm32, arm64 and powerpc, a partial
> cache line always gets written back before invalidation in order to
> ensure that data
As a consequence, neither configuration is actually safe to use in a
> general-purpose kernel that is used on both MPCore systems and ARM1176
> with prefetching enabled.
As the author of this terrible hack (created under duress ;))
Acked-by: Catalin Marinas
IIRC, RWFO is working in combinat
On Tue, Apr 04, 2023 at 06:50:01AM -0500, Justin Forbes wrote:
> On Tue, Apr 4, 2023 at 2:22 AM Mike Rapoport wrote:
> > On Wed, Mar 29, 2023 at 10:55:37AM -0500, Justin Forbes wrote:
> > > On Sat, Mar 25, 2023 at 1:09 AM Mike Rapoport wrote:
> > > >
> > > > From: "Mike Rapoport (IBM)"
> > > >
>
On Tue, Apr 18, 2023 at 03:05:57PM -0700, Andrew Morton wrote:
> On Wed, 12 Apr 2023 18:27:08 +0100 Catalin Marinas
> wrote:
> > > It sounds nice in theory. In practice, EXPERT hides too much. When you
> > > flip expert, you expose over a 175ish new config options which
On Tue, Apr 25, 2023 at 11:09:58AM -0500, Justin Forbes wrote:
> On Tue, Apr 18, 2023 at 5:22 PM Andrew Morton
> wrote:
> > On Wed, 12 Apr 2023 18:27:08 +0100 Catalin Marinas
> > wrote:
> > > > It sounds nice in theory. In practice, EXPERT hides too much. Whe
p_task_struct' [-Werror=missing-prototypes]
>
> There are already prototypes in a number of architecture specific headers
> that have addressed those warnings before, but it's much better to have
> these in a single place so the warning no longer shows up anywhere.
>
> Signed-off-by: Arnd Bergmann
For arm64:
Acked-by: Catalin Marinas
On Tue, May 09, 2023 at 09:43:47PM -0700, Hugh Dickins wrote:
> In rare transient cases, not yet made possible, pte_offset_map() and
> pte_offset_map_lock() may not find a page table: handle appropriately.
>
> Signed-off-by: Hugh Dickins
Acked-by: Catalin Marinas
>
> Signed-off-by: Hugh Dickins
Acked-by: Catalin Marinas
> Fix this by providing it in only one place that is always visible.
>
> Signed-off-by: Arnd Bergmann
Acked-by: Catalin Marinas
Hi Stephen,
On Tue, Jun 13, 2023 at 04:21:19PM +1000, Stephen Rothwell wrote:
> After merging the mm tree, today's linux-next build (powerpc
> ppc44x_defconfig) failed like this:
>
> In file included from arch/powerpc/include/asm/page.h:247,
> from arch/powerpc/include/asm/thread
with the
ARCH_KMALLOC_MINALIGN series?
Thank you.
Catalin Marinas (3):
powerpc: Move the ARCH_DMA_MINALIGN definition to asm/cache.h
microblaze: Move the ARCH_{DMA,SLAB}_MINALIGN definitions to
asm/cache.h
sh: Move the ARCH_DMA_MINALIGN definition to asm/cache.h
arch/microblaze/includ
The powerpc architecture defines ARCH_DMA_MINALIGN in asm/page_32.h and
only if CONFIG_NOT_COHERENT_CACHE is enabled (32-bit platforms only).
Move this macro to asm/cache.h to allow a generic ARCH_DMA_MINALIGN
definition in linux/cache.h without redefine errors/warnings.
Signed-off-by: Catalin
The sh architecture defines ARCH_DMA_MINALIGN in asm/page.h. Move it to
asm/cache.h to allow a generic ARCH_DMA_MINALIGN definition in
linux/cache.h without redefine errors/warnings.
Signed-off-by: Catalin Marinas
Cc: Yoshinori Sato
Cc: Rich Felker
Cc: John Paul Adrian Glaubitz
Cc: linux
The microblaze architecture defines ARCH_DMA_MINALIGN in asm/page.h.
Move it to asm/cache.h to allow a generic ARCH_DMA_MINALIGN definition
in linux/cache.h without redefine errors/warnings.
While at it, also move ARCH_SLAB_MINALIGN to asm/cache.h for
consistency.
Signed-off-by: Catalin Marinas
On Tue, Jun 13, 2023 at 04:42:40PM +, Christophe Leroy wrote:
>
>
> Le 13/06/2023 à 17:52, Catalin Marinas a écrit :
> > Hi,
> >
> > The ARCH_KMALLOC_MINALIGN reduction series defines a generic
> > ARCH_DMA_MINALIGN in linux/cache.h:
> >
> > ht
On Mon, Jun 12, 2023 at 02:04:10PM -0700, Vishal Moola (Oracle) wrote:
> As part of the conversions to replace pgtable constructor/destructors with
> ptdesc equivalents, convert various page table functions to use ptdescs.
>
> Signed-off-by: Vishal Moola (Oracle)
Acked-by: Catalin Marinas
---
> arch/arm64/kernel/process.c | 4 ----
Acked-by: Catalin Marinas
dress, so no need kern_addr_valid(),
> let's remove unneeded kern_addr_valid() completely.
>
> Signed-off-by: Kefeng Wang
For arm64:
Acked-by: Catalin Marinas
On Thu, Feb 15, 2024 at 10:31:51AM +, Ryan Roberts wrote:
> Core-mm needs to be able to advance the pfn by an arbitrary amount, so
> override the new pte_advance_pfn() API to do so.
>
> Signed-off-by: Ryan Roberts
Acked-by: Catalin Marinas
be the same.
>
> This will benefit us when we shortly introduce the transparent contpte
> support. In this case, ptep_get() will become more complex so we now
> have all the code abstracted through it.
>
> Tested-by: John Hubbard
> Signed-off-by: Ryan Roberts
Acked-by: Catalin Marinas
since those call sites are acting on behalf of
> core-mm and should continue to call into the public set_ptes() rather
> than the arch-private __set_ptes().
>
> Tested-by: John Hubbard
> Signed-off-by: Ryan Roberts
Acked-by: Catalin Marinas
transparent contpte work. We won't have a private version of
> ptep_clear() so let's convert it to directly call ptep_get_and_clear().
>
> Tested-by: John Hubbard
> Signed-off-by: Ryan Roberts
Acked-by: Catalin Marinas
ar_young
> - ptep_clear_flush_young
> - ptep_set_wrprotect
> - ptep_set_access_flags
>
> Tested-by: John Hubbard
> Signed-off-by: Ryan Roberts
Acked-by: Catalin Marinas
been discussed that __flush_tlb_page() may be wrong though.
> Regardless, both will be resolved separately if needed.
>
> Reviewed-by: David Hildenbrand
> Tested-by: John Hubbard
> Signed-off-by: Ryan Roberts
Acked-by: Catalin Marinas
istent and the only variation allowed is the dirty/young
state to be passed to the orig_pte returned. The original pte may have
been updated by the time this loop finishes but I don't think it
matters, it wouldn't be any different than reading a single pte and
returning it while it is being updated.
If you can make this easier to parse (in a few years time) with an
additional patch adding some more comments, that would be great. For
this patch:
Reviewed-by: Catalin Marinas
--
Catalin
wrprotect a whole contpte block without unfolding is
> possible thanks to the tightening of the Arm ARM in respect to the
> definition and behaviour when 'Misprogramming the Contiguous bit'. See
> section D21194 at https://developer.arm.com/documentation/102105/ja-07/
>
> Tested-by: John Hubbard
> Signed-off-by: Ryan Roberts
Acked-by: Catalin Marinas
the contpte. This significantly reduces unfolding
> operations, reducing the number of tlbis that must be issued.
>
> Tested-by: John Hubbard
> Signed-off-by: Ryan Roberts
Acked-by: Catalin Marinas
>
> Acked-by: Mark Rutland
> Reviewed-by: David Hildenbrand
> Tested-by: John Hubbard
> Signed-off-by: Ryan Roberts
Acked-by: Catalin Marinas
is called by them, as __always_inline. This is worth ~1% on the
> fork() microbenchmark with order-0 folios (the common case).
>
> Acked-by: Mark Rutland
> Signed-off-by: Ryan Roberts
Acked-by: Catalin Marinas
> perform the checks when an individual PTE is modified via mprotect
> (ptep_modify_prot_commit() -> set_pte_at() -> set_ptes(nr=1)) and only
> when we are setting the final PTE in a contpte-aligned block.
>
> Signed-off-by: Ryan Roberts
Acked-by: Catalin Marinas
On Fri, Feb 16, 2024 at 12:53:43PM +, Ryan Roberts wrote:
> On 16/02/2024 12:25, Catalin Marinas wrote:
> > On Thu, Feb 15, 2024 at 10:31:59AM +, Ryan Roberts wrote:
> >> arch/arm64/mm/contpte.c | 285 +++
> >
> > Ni
On Fri, Feb 16, 2024 at 12:53:43PM +, Ryan Roberts wrote:
> On 16/02/2024 12:25, Catalin Marinas wrote:
> > On Thu, Feb 15, 2024 at 10:31:59AM +, Ryan Roberts wrote:
> >> +pte_t contpte_ptep_get_lockless(pte_t *orig_ptep)
> >> +{
> >> + /*
> >>