return pfn_pte(page_to_pfn(page), pgprot);
> +}
> +#endif
> +#endif
> +
> /**
> * folio_maybe_dma_pinned - Report if a folio may be pinned for DMA.
> * @folio: The folio.
For s390:
Reviewed-by: Alexander Gordeev
Thanks!
> Reviewed-by: Alexander Gordeev
Sorry, I meant for s390.
I cannot judge the impact on the other archs.
_test_dirty(folio))
> + entry = pte_mkdirty(entry);
> if (unlikely(vmf_orig_pte_uffd_wp(vmf)))
> entry = pte_mkuffd_wp(entry);
> /* copy-on-write page */
Reviewed-by: Alexander Gordeev
Thanks!
On Tue, Feb 18, 2025 at 05:06:38PM +, Matthew Wilcox wrote:
...
> > With the above, the implicit dirtying of hugetlb PTEs (as a result of
> > mk_huge_pte() -> mk_pte()) in make_huge_pte() is removed:
> >
> > static pte_t make_huge_pte(struct vm_area_struct *vma, struct page *page,
> >
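A minimal kernel-context sketch of the consequence being described, assuming the generic hugetlb helpers (mk_huge_pte(), huge_pte_mkdirty(), huge_pte_mkwrite(), huge_pte_wrprotect()); the function name is illustrative and this is not the actual make_huge_pte() from the series:

/*
 * Hedged sketch: once mk_pte() no longer dirties the entry implicitly,
 * a writable hugetlb mapping has to dirty its PTE explicitly here.
 */
static pte_t make_huge_pte_sketch(struct vm_area_struct *vma,
				  struct page *page, bool writable)
{
	pte_t entry = mk_huge_pte(page, vma->vm_page_prot);

	if (writable)
		entry = huge_pte_mkwrite(huge_pte_mkdirty(entry));
	else
		entry = huge_pte_wrprotect(entry);

	return entry;
}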
On Mon, Feb 17, 2025 at 07:08:28PM +, Matthew Wilcox (Oracle) wrote:
Hi Matthew,
> If the first access to a folio is a read that is then followed by a
> write, we can save a page fault. s390 implemented this in their
> mk_pte() in commit abf09bed3cce ("s390/mm: implement software dirty
> bit
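A minimal sketch of the idea described in the cover letter, assuming the generic helpers mk_pte(), folio_test_dirty() and pte_mkdirty(); the function name is illustrative, not a hunk from the series:

/*
 * Hedged sketch: pre-dirty the PTE when the folio is already dirty, so
 * the later write does not take an extra fault just to set the dirty bit.
 */
static pte_t mk_pte_prefault_sketch(struct folio *folio, pgprot_t prot)
{
	pte_t entry = mk_pte(folio_page(folio, 0), prot);

	if (folio_test_dirty(folio))
		entry = pte_mkdirty(entry);
	return entry;
}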
On Mon, Feb 03, 2025 at 08:58:49AM +0200, Dmitry V. Levin wrote:
Hi Dmitry,
> PTRACE_SET_SYSCALL_INFO is a generic ptrace API that complements
> PTRACE_GET_SYSCALL_INFO by letting the ptracer modify details of
> system calls the tracee is blocked in.
...
FWIW, I am getting these on s390:
# ./to
On Wed, Jan 22, 2025 at 03:06:05PM +0100, Alexander Gordeev wrote:
Hi Kevin,
> On Wed, Jan 22, 2025 at 08:49:54AM +0100, Heiko Carstens wrote:
> > > > static inline pgd_t *pgd_alloc(struct mm_struct *mm)
> > > > {
> > > > - return (pgd_t *) crst_ta
On Wed, Jan 22, 2025 at 08:49:54AM +0100, Heiko Carstens wrote:
> > > static inline pgd_t *pgd_alloc(struct mm_struct *mm)
> > > {
> > > - return (pgd_t *) crst_table_alloc(mm);
> > > + unsigned long *table = crst_table_alloc(mm);
> > > +
> > > + if (!table)
> > > + return NULL;
> >
> >
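A minimal sketch of the pattern under discussion, assuming s390's crst_table_alloc(); the function name and the placement of any follow-up initialization are illustrative:

static inline pgd_t *pgd_alloc_sketch(struct mm_struct *mm)
{
	unsigned long *table = crst_table_alloc(mm);

	if (!table)
		return NULL;
	/* any constructor / accounting work goes here, after the NULL check */
	return (pgd_t *) table;
}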
On Fri, Jan 03, 2025 at 06:44:15PM +, Kevin Brodsky wrote:
Hi Kevin,
...
> diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h
> index 5fced6d3c36b..b19b6ed2ab53 100644
> --- a/arch/s390/include/asm/pgalloc.h
> +++ b/arch/s390/include/asm/pgalloc.h
> @@ -130,11 +130,
> tlb->mm->context.flush_mm = 1;
> tlb->freed_tables = 1;
> @@ -140,6 +141,7 @@ static inline void pud_free_tlb(struct mmu_gather *tlb,
> pud_t *pud,
> {
> if (mm_pud_folded(tlb->mm))
> return;
> + pagetable_pud_dtor(virt_to_ptdesc(pud));
> tlb->mm->context.flush_mm = 1;
> tlb->freed_tables = 1;
> tlb->cleared_p4ds = 1;
Acked-by: Alexander Gordeev
Thanks!
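Assembled from the hunks above as a hedged sketch (the tlb_remove_table() hand-off at the end is an assumption and may differ from the actual s390 code); it shows the PUD destructor running before the mmu_gather bookkeeping:

static inline void pud_free_tlb_sketch(struct mmu_gather *tlb, pud_t *pud,
				       unsigned long address)
{
	if (mm_pud_folded(tlb->mm))
		return;
	/* run the PUD destructor before queueing the table for freeing */
	pagetable_pud_dtor(virt_to_ptdesc(pud));
	tlb->mm->context.flush_mm = 1;
	tlb->freed_tables = 1;
	tlb->cleared_p4ds = 1;
	tlb_remove_table(tlb, pud);
}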
pagetable_pte_dtor_free(ptdesc);
> + pagetable_dtor_free(ptdesc);
> }
>
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> @@ -211,7 +205,7 @@ static void pte_free_now(struct rcu_head *head)
> {
> struct ptdesc *ptdesc = container_of(head, struct ptdesc, pt_rcu_head);
>
> - pagetable_pte_dtor_free(ptdesc);
> + pagetable_dtor_free(ptdesc);
> }
>
> void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
Acked-by: Alexander Gordeev
Thanks!
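A hedged sketch of the deferred-free side implied by pte_free_now() above, assuming the pt_rcu_head field shown in the hunk; not necessarily the exact s390 function:

void pte_free_defer_sketch(struct mm_struct *mm, pgtable_t pgtable)
{
	struct ptdesc *ptdesc = page_ptdesc(pgtable);

	/* pte_free_now() runs pagetable_dtor_free() after a grace period */
	call_rcu(&ptdesc->pt_rcu_head, pte_free_now);
}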
28 files changed, 62 insertions(+), 95 deletions(-)
...
For s390:
Acked-by: Alexander Gordeev
Thanks!
On Sun, Dec 22, 2024 at 07:15:37PM +0800, Guo Weikang wrote:
Hi Guo,
> Before SLUB initialization, various subsystems used memblock_alloc to
> allocate memory. In most cases, when memory allocation fails, an immediate
> panic is required. To simplify this behavior and reduce repetitive checks,
>
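A minimal sketch of the kind of wrapper the cover letter describes; the helper name is illustrative and may not match what the series actually adds:

static inline void *memblock_alloc_or_panic_sketch(phys_addr_t size,
						   phys_addr_t align)
{
	void *ptr = memblock_alloc(size, align);

	if (!ptr)
		panic("%s: failed to allocate %llu bytes\n", __func__,
		      (unsigned long long)size);
	return ptr;
}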
/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3001,6 +3001,12 @@ static inline void pagetable_dtor(struct ptdesc
> *ptdesc)
> lruvec_stat_sub_folio(folio, NR_PAGETABLE);
> }
>
> +static inline void pagetable_dtor_free(struct ptdesc *ptdesc)
> +{
> + pagetable_dtor(ptdesc);
> + pagetable_free(ptdesc);
> +}
> +
> static inline bool pagetable_pte_ctor(struct ptdesc *ptdesc)
> {
> struct folio *folio = ptdesc_folio(ptdesc);
For s390:
Acked-by: Alexander Gordeev
Thanks!
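A hedged usage sketch: with the helper above, an arch-level pte_free() collapses the destructor/free pair into a single call (the function name here is illustrative):

static inline void pte_free_sketch(struct mm_struct *mm, pgtable_t pte)
{
	pagetable_dtor_free(page_ptdesc(pte));
}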
es the actual freeing of
> these
> + * pages.
> *
> * MMU_GATHER_RCU_TABLE_FREE
> *
> @@ -207,6 +208,16 @@ struct mmu_table_batch {
> #define MAX_TABLE_BATCH \
> ((PAGE_SIZE - sizeof(struct mmu_table_batch)) / sizeof(void *))
>
> +#ifndef __HAV
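For scale, a hedged back-of-the-envelope: assuming a 4 KiB PAGE_SIZE, 8-byte pointers and roughly 24 bytes for struct mmu_table_batch (rcu_head plus nr plus padding), MAX_TABLE_BATCH works out to about (4096 - 24) / 8 = 509 tables per batch; the exact figure depends on the architecture and config.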
On Mon, Jan 06, 2025 at 09:34:55PM +0800, Qi Zheng wrote:
> OK, will change the subject and description to:
>
> s390: pgtable: also move pagetable_dtor() of PxD to pagetable_dtor_free()
>
> To unify the PxD and PTE TLB free path, also move the pagetable_dtor() of
> PMD|PUD|P4D to pagetable_dtor_f
On Mon, Jan 06, 2025 at 07:05:16PM +0800, Qi Zheng wrote:
> > I understand that you want to sort p.._free_tlb() routines, but please
>
> Yes, I thought it was a minor change, so I just did it.
>
> > do not move the code around or make a separate follow-up patch.
>
> Well, if you have a strong op
On Mon, Jan 06, 2025 at 07:02:17PM +0800, Qi Zheng wrote:
> > On Mon, Dec 30, 2024 at 05:07:47PM +0800, Qi Zheng wrote:
> > > To unify the PxD and PTE TLB free path, also move the pagetable_dtor() of
> > > PMD|PUD|P4D to __tlb_remove_table().
> >
> > The above and Subject are still incorrect: page
On Mon, Jan 06, 2025 at 06:55:58PM +0800, Qi Zheng wrote:
> > > +static inline void pagetable_dtor(struct ptdesc *ptdesc)
> > > +{
> > > + struct folio *folio = ptdesc_folio(ptdesc);
> > > +
> > > + ptlock_free(ptdesc);
> > > + __folio_clear_pgtable(folio);
> > > + lruvec_stat_sub_folio(folio, NR_P
On Mon, Dec 30, 2024 at 05:07:47PM +0800, Qi Zheng wrote:
> To unify the PxD and PTE TLB free path, also move the pagetable_dtor() of
> PMD|PUD|P4D to __tlb_remove_table().
The above and Subject are still incorrect: pagetable_dtor() is
called from pagetable_dtor_free(), not from __tlb_remove_table
On Mon, Dec 30, 2024 at 05:07:42PM +0800, Qi Zheng wrote:
> The pagetable_p*_dtor() are exactly the same except for the handling of
> ptlock. If we make ptlock_free() handle the case where ptdesc->ptl is
> NULL and remove VM_BUG_ON_PAGE() from pmd_ptlock_free(), we can unify
> pagetable_p*_dtor() i
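A hedged sketch of the unification being described, assuming the split-ptlock (ALLOC_SPLIT_PTLOCKS) configuration and the existing page_ptl_cachep cache; not the exact patch:

static inline void ptlock_free_sketch(struct ptdesc *ptdesc)
{
	/* tolerate tables that never allocated a ptl (e.g. PUD/P4D) */
	if (ptdesc->ptl)
		kmem_cache_free(page_ptl_cachep, ptdesc->ptl);
}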
On Mon, Dec 30, 2024 at 05:07:41PM +0800, Qi Zheng wrote:
> Like PMD and PTE level page table, also add statistics for PUD and P4D
> page table.
...
> diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
> index e95b2c8081eb8..b946964afce8e 100644
> --- a/arch/s390/include/asm/tlb
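A hedged sketch of the accounting this adds, mirroring the existing PTE/PMD constructors (the helper name is illustrative):

static inline void pagetable_pud_ctor_sketch(struct ptdesc *ptdesc)
{
	struct folio *folio = ptdesc_folio(ptdesc);

	__folio_set_pgtable(folio);
	lruvec_stat_add_folio(folio, NR_PAGETABLE);
}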
On Wed, Aug 14, 2024 at 04:44:24PM +0100, Matthew Wilcox (Oracle) wrote:
Hi Matthew,
> I believe the test for PageDirty() is no longer needed. The
> commit adding it was abf09bed3cce with the rationale that this
> avoided faults for tmpfs and shmem pages. shmem does not mark
> newly allocated f
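A hedged before/after sketch of the mk_pte() simplification being argued for, reconstructed from the rationale above rather than copied from the patch:

/* before: s390 pre-dirtied writable PTEs for already-dirty pages */
static inline pte_t mk_pte_old_sketch(struct page *page, pgprot_t pgprot)
{
	pte_t pte = pfn_pte(page_to_pfn(page), pgprot);

	if (pte_write(pte) && PageDirty(page))
		pte = pte_mkdirty(pte);
	return pte;
}

/* after: build the entry from pfn and protection bits alone */
static inline pte_t mk_pte_new_sketch(struct page *page, pgprot_t pgprot)
{
	return pfn_pte(page_to_pfn(page), pgprot);
}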
On Tue, Jul 05, 2022 at 05:44:06PM +0200, Peter Zijlstra wrote:
Hi Peter,
> Sven, does all this still reproduce if you take out
> CONFIG_HAVE_MARCH_Z196_FEATURES ?
Yes, it hits.
On Tue, Jun 28, 2022 at 10:39:59PM -0500, Eric W. Biederman wrote:
> Steven Rostedt writes:
>
> > On Tue, 28 Jun 2022 17:42:22 -0500
> > "Eric W. Biederman" wrote:
> >
> >> diff --git a/kernel/ptrace.c b/kernel/ptrace.c
> >> index 156a99283b11..cb85bcf84640 100644
> >> --- a/kernel/ptrace.c
> >>
On Sat, Jun 25, 2022 at 11:34:46AM -0500, Eric W. Biederman wrote:
> I haven't gotten as far as reproducing this but I have started giving
> this issue some thought.
>
> This entire thing smells like a memory barrier is missing somewhere.
> However by definition the lock implementations in linux p
On Tue, Jun 21, 2022 at 09:02:05AM -0500, Eric W. Biederman wrote:
> Alexander Gordeev writes:
>
> > On Thu, May 05, 2022 at 01:26:45PM -0500, Eric W. Biederman wrote:
> >> From: Peter Zijlstra
> >>
> >> Currently ptrace_stop() / do_signal_stop() rely on
On Thu, May 05, 2022 at 01:26:45PM -0500, Eric W. Biederman wrote:
> From: Peter Zijlstra
>
> Currently ptrace_stop() / do_signal_stop() rely on the special states
> TASK_TRACED and TASK_STOPPED resp. to keep unique state. That is, this
> state exists only in task->__state and nowhere else.
>
>