* Ryan Roberts [250530 12:50]:
...
> >
> >
> > These wrappers are terrible for readability and annoying for argument
> > lists too.
>
> Agreed.
>
> >
> > Could we do something like the pgtbl_mod_mask or zap_details and pass
> > through a struct or one unsigned int for create and lazy_mmu?
>
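For illustration only (none of these names are an existing kernel API), the
suggestion of folding the separate booleans into a single flags word, in the
spirit of pgtbl_mod_mask, might look roughly like this; apply_pte_levels()
stands in for the existing page-table walking code:

	/* Hypothetical sketch: one flags word instead of create/lazy_mmu wrappers. */
	#define APPLY_PGTBL_CREATE	0x1	/* allocate missing page table levels */
	#define APPLY_PGTBL_LAZY_MMU	0x2	/* run the callback in lazy mmu mode */

	static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
					 unsigned long size, pte_fn_t fn, void *data,
					 unsigned int flags)
	{
		int err;

		if (flags & APPLY_PGTBL_LAZY_MMU)
			arch_enter_lazy_mmu_mode();

		err = apply_pte_levels(mm, addr, size, fn, data,
				       flags & APPLY_PGTBL_CREATE);

		if (flags & APPLY_PGTBL_LAZY_MMU)
			arch_leave_lazy_mmu_mode();

		return err;
	}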
+cc Jann who is a specialist in all things page table-y and especially scary
edge cases :)
On Fri, May 30, 2025 at 03:04:38PM +0100, Ryan Roberts wrote:
> Hi All,
>
> I recently added support for lazy mmu mode on arm64. The series is now in
> Linus's tree so should be in v6.16-rc1. But during testing in linux-next we
> found some ugly corners (unexpected nesting).
* Ryan Roberts [250530 10:05]:
> Lazy mmu mode applies to the current task and permits pte modifications
> to be deferred and updated at a later time in a batch to improve
> performance. apply_to_page_range() calls its callback in lazy mmu mode
> and some of those callbacks call into the page allocator to either
> allocate or free pages.
On 30/05/2025 17:23, Liam R. Howlett wrote:
> * Ryan Roberts [250530 10:05]:
>> Lazy mmu mode applies to the current task and permits pte modifications
>> to be deferred and updated at a later time in a batch to improve
>> performance. apply_to_page_range() calls its callback in lazy mmu mode
>> and some of those callbacks call into the page allocator to either
>> allocate or free pages.
> On 30 May 2025, at 5:14 AM, Stephen Rothwell wrote:
>
> Hi all,
>
> On Tue, 13 May 2025 20:28:09 +1000 Stephen Rothwell
> wrote:
>>
>> After merging the powerpc tree, today's linux-next build (htmldocs)
>> produced this warning:
>>
>> Documentation/arch/powerpc/htm.rst: WARNING: documen
On Fri, May 30, 2025 at 6:45 PM Ryan Roberts wrote:
> On 30/05/2025 17:26, Jann Horn wrote:
> > On Fri, May 30, 2025 at 4:04 PM Ryan Roberts wrote:
> >> pagemap_scan_pmd_entry() was previously modifying ptes while in lazy mmu
> >> mode, then performing tlb maintenance for the modified ptes, then
> >> leaving lazy mmu mode.
On 30/05/2025 17:26, Jann Horn wrote:
> On Fri, May 30, 2025 at 4:04 PM Ryan Roberts wrote:
>> pagemap_scan_pmd_entry() was previously modifying ptes while in lazy mmu
>> mode, then performing tlb maintenance for the modified ptes, then
>> leaving lazy mmu mode. But any pte modifications during lazy mmu mode
>> may be deferred until arch_leave_lazy_mmu_mode().
On Fri, May 30, 2025 at 4:04 PM Ryan Roberts wrote:
> pagemap_scan_pmd_entry() was previously modifying ptes while in lazy mmu
> mode, then performing tlb maintenance for the modified ptes, then
> leaving lazy mmu mode. But any pte modifications during lazy mmu mode
> may be deferred until arch_leave_lazy_mmu_mode(), inverting the required
> ordering between the pte modification and the tlb maintenance.
On Fri, May 30, 2025 at 04:23:31PM +0100, Conor Dooley wrote:
> On Wed, May 28, 2025 at 11:43:59AM -0400, Frank Li wrote:
> > On Mon, May 26, 2025 at 04:54:30PM +0100, Conor Dooley wrote:
> > > On Thu, May 22, 2025 at 05:39:50PM -0400, Frank Li wrote:
> > > > Add vf610 reset controller, which is used to reboot the system, to fix
> > > > the below CHECK_DTB warnings:
On Fri, May 30, 2025 at 06:34:04AM -0500, Bjorn Helgaas wrote:
> On Fri, May 30, 2025 at 09:16:59AM +0530, Manivannan Sadhasivam wrote:
> > On Wed, May 28, 2025 at 05:35:00PM -0500, Bjorn Helgaas wrote:
> > > On Thu, May 08, 2025 at 12:40:33PM +0530, Manivannan Sadhasivam wrote:
> > > > The PCI link, when down, needs to be recovered to bring it back.
On 30/05/2025 15:47, Lorenzo Stoakes wrote:
> +cc Jann who is a specialist in all things page table-y and especially scary
> edge cases :)
>
> On Fri, May 30, 2025 at 03:04:38PM +0100, Ryan Roberts wrote:
>> Hi All,
>>
>> I recently added support for lazy mmu mode on arm64. The series is now in
>> Linus's tree so should be in v6.16-rc1.
On Wed, May 28, 2025 at 11:43:59AM -0400, Frank Li wrote:
> On Mon, May 26, 2025 at 04:54:30PM +0100, Conor Dooley wrote:
> > On Thu, May 22, 2025 at 05:39:50PM -0400, Frank Li wrote:
> > > Add vf610 reset controller, which is used to reboot the system, to fix
> > > the below CHECK_DTB warnings:
> > >
> > > ar
Introduce new arch_in_lazy_mmu_mode() API, which returns true if the
calling context is currently in lazy mmu mode or false otherwise. Each
arch that supports lazy mmu mode must provide an implementation of this
API.
The API will shortly be used to prevent accidental lazy mmu mode nesting
when per
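A minimal sketch of what an arch implementation of this API might look like,
using a per-task thread flag; TIF_LAZY_MMU is shown as an assumption here, and
the real arm64 patch may track the state differently:

	static inline void arch_enter_lazy_mmu_mode(void)
	{
		set_thread_flag(TIF_LAZY_MMU);		/* assumed flag */
	}

	static inline void arch_leave_lazy_mmu_mode(void)
	{
		/* the arch would flush any deferred pte updates here */
		clear_thread_flag(TIF_LAZY_MMU);
	}

	static inline bool arch_in_lazy_mmu_mode(void)
	{
		return test_thread_flag(TIF_LAZY_MMU);
	}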
Lazy mmu mode applies to the current task and permits pte modifications
to be deferred and updated at a later time in a batch to improve
performance. tlb_next_batch() is called in lazy mmu mode as follows:
zap_pte_range
  arch_enter_lazy_mmu_mode
  do_zap_pte_range
    zap_present_ptes
      zap_p
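Purely as an illustration of one way such an allocation site could avoid
nesting (this is not the actual fix), it could drop out of lazy mmu mode
around the call using the arch_in_lazy_mmu_mode() helper introduced in this
series; alloc_batch_page() is a made-up name:

	static struct page *alloc_batch_page(void)
	{
		struct page *page;
		bool lazy = arch_in_lazy_mmu_mode();

		/* Illustrative only: flush deferred pte updates and leave lazy
		 * mmu mode so nothing the allocator does can nest inside it. */
		if (lazy)
			arch_leave_lazy_mmu_mode();

		page = alloc_pages(GFP_NOWAIT | __GFP_NOWARN, 0);

		if (lazy)
			arch_enter_lazy_mmu_mode();

		return page;
	}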
Commit 491344301b25 ("arm64/mm: Permit lazy_mmu_mode to be nested") made
the arm64 implementation of lazy_mmu_mode tolerant to nesting. But
subsequent commits have fixed the core code to ensure that lazy_mmu_mode
never gets nested (as originally intended). Therefore we can revert this
commit and re
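With nesting disallowed again, the core code can assert the invariant using
the new helper; a sketch only, with a made-up wrapper name:

	static inline void lazy_mmu_mode_enter(void)
	{
		/* Nesting is not allowed; catch any caller that tries. */
		VM_WARN_ON_ONCE(arch_in_lazy_mmu_mode());
		arch_enter_lazy_mmu_mode();
	}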
Lazy mmu mode applies to the current task and permits pte modifications
to be deferred and updated at a later time in a batch to improve
performance. apply_to_page_range() calls its callback in lazy mmu mode
and some of those callbacks call into the page allocator to either
allocate or free pages.
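For reference, the shape of such a callback: apply_to_page_range() and
pte_fn_t are the real interfaces, while populate_pte_cb() is a made-up example
of a callback that calls into the page allocator:

	static int populate_pte_cb(pte_t *ptep, unsigned long addr, void *data)
	{
		struct mm_struct *mm = data;
		struct page *page = alloc_page(GFP_KERNEL);

		if (!page)
			return -ENOMEM;

		/* Runs with lazy mmu mode already entered by apply_to_page_range(),
		 * so the allocation above happened inside the lazy mmu section. */
		set_pte_at(mm, addr, ptep, mk_pte(page, PAGE_KERNEL));
		return 0;
	}

	/* caller: err = apply_to_page_range(mm, addr, size, populate_pte_cb, mm); */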
pagemap_scan_pmd_entry() was previously modifying ptes while in lazy mmu
mode, then performing tlb maintenance for the modified ptes, then
leaving lazy mmu mode. But any pte modifications during lazy mmu mode
may be deferred until arch_leave_lazy_mmu_mode(), inverting the required
ordering between the pte modification and the tlb maintenance.
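In sketch form, the ordering requirement: pte modifications made under lazy
mmu mode must be completed, by leaving lazy mmu mode, before the tlb
maintenance that depends on them. The function below is a generic
illustration, not the actual pagemap_scan_pmd_entry() diff:

	static void wrprotect_range(struct vm_area_struct *vma, pte_t *ptep,
				    unsigned long start, unsigned long end)
	{
		unsigned long addr;

		arch_enter_lazy_mmu_mode();
		for (addr = start; addr < end; addr += PAGE_SIZE, ptep++) {
			pte_t pte = ptep_get(ptep);

			/* the arch may defer this write until leave() below */
			set_pte_at(vma->vm_mm, addr, ptep, pte_wrprotect(pte));
		}
		/* Leave lazy mmu mode *before* flushing, so the deferred pte
		 * updates are visible by the time the tlb maintenance runs. */
		arch_leave_lazy_mmu_mode();
		flush_tlb_range(vma, start, end);
	}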
migrate_vma_collect_pmd() was previously modifying ptes while in lazy
mmu mode, then performing tlb maintenance for the modified ptes, then
leaving lazy mmu mode. But any pte modifications during lazy mmu mode
may be deferred until arch_leave_lazy_mmu_mode(), inverting the required
ordering between the pte modification and the tlb maintenance.
Hi All,
I recently added support for lazy mmu mode on arm64. The series is now in
Linus's tree so should be in v6.16-rc1. But during testing in linux-next we
found some ugly corners (unexpected nesting). I was able to fix those issues by
making the arm64 implementation more permissive (like the other implementations).
On Fri, May 30, 2025 at 09:16:59AM +0530, Manivannan Sadhasivam wrote:
> On Wed, May 28, 2025 at 05:35:00PM -0500, Bjorn Helgaas wrote:
> > On Thu, May 08, 2025 at 12:40:33PM +0530, Manivannan Sadhasivam wrote:
> > > The PCI link, when down, needs to be recovered to bring it back. But that
> > > ca
Hi Johannes:
Thanks for your feedback. I will drop it.
On Mon, 2025-05-26 at 16:20 +0800, Ai Chao wrote:
Hi Johannes:
for_each_child_of_node.
You still haven't explained why it's even correct.
johannes
The for_each_child_of_node() function is used to iterate over all child
nodes of a device node.
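For context, typical usage of the iterator (names are examples): each
iteration takes a reference on child and drops the previous one, so breaking
out early means the caller must do an of_node_put() itself, which is usually
the correctness question being raised in review.

	static int count_available_children(struct device_node *parent)
	{
		struct device_node *child;
		int n = 0;

		for_each_child_of_node(parent, child) {
			if (of_device_is_available(child))
				n++;
		}
		/* no of_node_put() needed here: the loop ran to completion */
		return n;
	}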
On 29.05.25 08:32, Alistair Popple wrote:
Previously dax pages were skipped by the pagewalk code as pud_special() or
vm_normal_page{_pmd}() would be false for DAX pages. Now that dax pages are
refcounted normally that is no longer the case, so add explicit checks to
skip them.
Is this really wh
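A minimal sketch of the kind of explicit skip being discussed, assuming a
pagewalk pmd_entry handler; the handler name is hypothetical, and the real
series may check at the page/folio level rather than the vma:

	static int example_pmd_entry(pmd_t *pmd, unsigned long addr,
				     unsigned long next, struct mm_walk *walk)
	{
		/* DAX pages are now refcounted like normal pages, so they are
		 * no longer filtered out by pud_special()/vm_normal_page_pmd();
		 * skip them explicitly before doing anything else. */
		if (vma_is_dax(walk->vma))
			return 0;

		/* ... normal pmd handling would go here ... */
		return 0;
	}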
On 29.05.25 08:32, Alistair Popple wrote:
Currently dax is the only user of pmd and pud mapped ZONE_DEVICE
pages. Therefore page walkers that want to exclude DAX pages can check
pmd_devmap or pud_devmap. However soon dax will no longer set PFN_DEV,
meaning dax pages are mapped as normal pages.
E
On 29.05.25 08:32, Alistair Popple wrote:
The PFN_MAP flag is no longer used for anything, so remove it. The
PFN_SG_CHAIN and PFN_SG_LAST flags never appear to have been used so
also remove them.
Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
---
With SPECIAL mentioned as well