On 18.07.25 14:43, Lorenzo Stoakes wrote:
On Thu, Jul 17, 2025 at 10:03:44PM +0200, David Hildenbrand wrote:
On 17.07.25 21:55, Lorenzo Stoakes wrote:
On Thu, Jul 17, 2025 at 08:51:51PM +0100, Lorenzo Stoakes wrote:
@@ -721,37 +772,21 @@ struct page *vm_normal_page_pmd(struct vm_area_struct
On 18.07.25 12:47, Lorenzo Stoakes wrote:
On Thu, Jul 17, 2025 at 10:14:33PM +0200, David Hildenbrand wrote:
On 17.07.25 22:03, Lorenzo Stoakes wrote:
On Thu, Jul 17, 2025 at 01:52:11PM +0200, David Hildenbrand wrote:
Let's introduce vm_normal_page_pud(), which ends up being fairly simple
Yeah sorry I was in 'what locks do we need' mode and hadn't shifted back here,
but I guess the intent is that the caller _must_ hold this lock.
I know it's nitty and annoying (sorry!) but as asserting seems to not be a
possibility here, could we spell these out as a series of points like:
/*
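A points-style comment, as suggested above, might read roughly like this (hypothetical wording, not taken from the actual patch):

```c
/*
 * The caller must hold:
 *  (1) the mmap lock (read or write) or the VMA lock, so the VMA
 *      cannot go away underneath us, and
 *  (2) the page table lock for the entry being translated, so the
 *      entry cannot be modified or zapped concurrently.
 *
 * As asserting this is not possible here, this comment is
 * documentation only.
 */
```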
On 18.07.25 12:41, Lorenzo Stoakes wrote:
On Thu, Jul 17, 2025 at 10:31:28PM +0200, David Hildenbrand wrote:
On 17.07.25 20:29, Lorenzo Stoakes wrote:
On Thu, Jul 17, 2025 at 01:52:08PM +0200, David Hildenbrand wrote:
The huge zero folio is refcounted (+mapcounted -- is that a word
On 18.07.25 09:59, Demi Marie Obenour wrote:
On 7/18/25 03:44, David Hildenbrand wrote:
On 18.07.25 00:06, Demi Marie Obenour wrote:
On 7/17/25 07:52, David Hildenbrand wrote:
print_bad_pte() looks like something that should actually be a WARN
or similar, but historically it apparently has
On 18.07.25 00:06, Demi Marie Obenour wrote:
On 7/17/25 07:52, David Hildenbrand wrote:
print_bad_pte() looks like something that should actually be a WARN
or similar, but historically it apparently has proven to be useful to
detect corruption of page tables even on production systems -- report
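The behavior being discussed -- report the corruption and keep running rather than BUG out, while not flooding the log -- can be sketched with a small user-space model. All names here are illustrative, not the kernel's:

```c
#include <stdio.h>

/* Model of the print_bad_pte() rationale: on detecting a corrupt
 * entry, report it and keep the system running, but rate-limit so
 * a stream of corruption does not flood the log. */
#define MODEL_REPORT_LIMIT 3

static int model_reports_emitted;

/* Returns 1 if the report was printed, 0 if suppressed. */
static int model_report_bad_entry(unsigned long addr, unsigned long val)
{
    if (model_reports_emitted >= MODEL_REPORT_LIMIT)
        return 0;
    model_reports_emitted++;
    fprintf(stderr, "bad entry at %#lx: %#lx\n", addr, val);
    return 1;
}
```

The real kernel helper uses ratelimiting over a time window rather than a hard cap; a plain counter keeps the model short.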
On 17.07.25 20:29, Lorenzo Stoakes wrote:
On Thu, Jul 17, 2025 at 01:52:08PM +0200, David Hildenbrand wrote:
The huge zero folio is refcounted (+mapcounted -- is that a word?)
differently than "normal" folios, similarly (but different) to the ordinary
shared zeropage.
Yeah, I sort
On 17.07.25 22:03, Lorenzo Stoakes wrote:
On Thu, Jul 17, 2025 at 01:52:11PM +0200, David Hildenbrand wrote:
Let's introduce vm_normal_page_pud(), which ends up being fairly simple
because of our new common helpers and there not being a PUD-sized zero
folio.
Use vm_normal_page_pud
-/*
- * vm_normal_page -- This function gets the "struct page" associated with a pte.
+/**
+ * vm_normal_page_pfn() - Get the "struct page" associated with a PFN in a
+ * non-special page table entry.
This is a bit nebulous/confusing, I mean you'll get PTE entries with PT
that apparently
it can be useful in the real world.
Signed-off-by: David Hildenbrand
---
mm/memory.c | 120
1 file changed, 94 insertions(+), 26 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 173eb6267e0ac..08d16ed7b4cc7 1
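The decision vm_normal_page() and friends make -- "does this non-special entry map a page we may touch via struct page?" -- can be modeled in a few lines of user-space C. This is a simplified sketch of the concept, not the kernel's implementation; all names and the flag layout are made up for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

/* With ARCH_HAS_PTE_SPECIAL, a "special" bit marks entries that
 * must not be treated as normal pages (shared zeropage, VM_PFNMAP
 * mappings, ...). Illustrative model only. */
#define MODEL_PTE_SPECIAL (1u << 0)

struct model_pte { uint64_t pfn; unsigned flags; };

/* Returns true if the entry maps a "normal" page the caller may
 * access via its struct page; false for special mappings or PFNs
 * with no memmap. */
static bool model_pte_normal(struct model_pte pte, uint64_t max_pfn)
{
    if (pte.flags & MODEL_PTE_SPECIAL)
        return false;   /* zeropage, PFN map, ... */
    if (pte.pfn >= max_pfn)
        return false;   /* no struct page behind this PFN */
    return true;
}
```

The series under discussion unifies this check across PTE, PMD, and PUD levels; the model above only captures the common shape of the test.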
On 17.07.25 21:55, Lorenzo Stoakes wrote:
On Thu, Jul 17, 2025 at 08:51:51PM +0100, Lorenzo Stoakes wrote:
@@ -721,37 +772,21 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
print_bad_page_map(vma, addr, pmd_val(pmd), NULL);
ret
ucing vm_normal_folio_pud() until really used.
Reviewed-by: Oscar Salvador
Signed-off-by: David Hildenbrand
---
include/linux/mm.h | 2 ++
mm/memory.c | 27 +++
mm/pagewalk.c | 20 ++--
3 files changed, 39 insertions(+), 10 deletions(-)
diff --git a/in
().
Add kerneldoc for all involved functions.
No functional change intended.
Reviewed-by: Oscar Salvador
Signed-off-by: David Hildenbrand
---
mm/memory.c | 183 +++-
1 file changed, 109 insertions(+), 74 deletions(-)
diff --git a/mm/memory.
Let's clean it all further up.
No functional change intended.
Reviewed-by: Oscar Salvador
Reviewed-by: Alistair Popple
Signed-off-by: David Hildenbrand
---
mm/huge_memory.c | 36 +---
1 file changed, 13 insertions(+), 23 deletions(-)
diff --git
ng to insert a PMD
mapping a folio through dax_fault_iter()->vmf_insert_folio_pmd().
So, it sounds reasonable to not handle huge zero folios differently
to inserting PMDs mapping folios when there already is something mapped.
Reviewed-by: Alistair Popple
Signed-off-by: David Hildenbran
l Hocko
Cc: Zi Yan
Cc: Baolin Wang
Cc: Nico Pache
Cc: Ryan Roberts
Cc: Dev Jain
Cc: Barry Song
Cc: Jann Horn
Cc: Pedro Falcato
Cc: Hugh Dickins
Cc: Oscar Salvador
Cc: Lance Yang
David Hildenbrand (9):
mm/huge_memory: move more common code into insert_pmd()
mm/huge_memory: mov
01233f5867
[ 77.944077] addr:7fd84bb1c000 vm_flags:08100071 anon_vma: ...
[ 77.945186] pgd:10a89f067 p4d:10a89f067 pud:10e5a2067 pmd:105327067
Not using pgdp_get(), because that does not work properly on some arm
configs where pgd_t is an array. Note that we are dumping all levels
even
mentation, and add a comment in the code where XEN ends
up performing the pte_mkspecial() through a hypercall. More details can
be found in commit 923b2919e2c3 ("xen/gntdev: mark userspace PTEs as
special on x86 PV guests").
Cc: David Vrabel
Reviewed-by: Oscar Salvador
Signed-off-by
shared zeropage.
For now, the huge zero folio is not marked as special yet, although
vm_normal_page_pmd() really wants to treat it as special. We'll change
that next.
Reviewed-by: Oscar Salvador
Signed-off-by: David Hildenbrand
---
mm/huge_memory.c | 8 +---
1 file changed, 5 inser
vm_normal_page().
While at it, update the doc regarding the shared zero folios.
Reviewed-by: Oscar Salvador
Signed-off-by: David Hildenbrand
---
mm/huge_memory.c | 5 -
mm/memory.c | 14 +-
2 files changed, 13 insertions(+), 6 deletions(-)
diff --git a/mm/huge_memory.c b/m
Let's clean it all further up.
No functional change intended.
Reviewed-by: Oscar Salvador
Reviewed-by: Alistair Popple
Signed-off-by: David Hildenbrand
---
mm/huge_memory.c | 72
1 file changed, 24 insertions(+), 48 deletions(-)
diff --
On 17.07.25 10:38, Alistair Popple wrote:
On Tue, Jul 15, 2025 at 03:23:45PM +0200, David Hildenbrand wrote:
Let's convert to vmf_insert_folio_pmd().
There is a theoretical change in behavior: in the unlikely case there is
already something mapped, we'll now still call trace_dax_pmd
On 17.07.25 00:27, Andrew Morton wrote:
On Wed, 16 Jul 2025 10:47:29 +0200 David Hildenbrand wrote:
However the series rejects due to the is_huge_zero_pmd ->
is_huge_zero_pfn changes in Luiz's "mm: introduce snapshot_page() v3"
series, so could we please have a redo agai
On 16.07.25 01:31, Andrew Morton wrote:
On Tue, 15 Jul 2025 15:23:41 +0200 David Hildenbrand wrote:
Based on mm/mm-new. I dropped the CoW PFNMAP changes for now, still
working on a better way to sort all that out cleanly.
Cleanup and unify vm_normal_page_*() handling, also marking the
huge
On 15.07.25 15:23, David Hildenbrand wrote:
print_bad_pte() looks like something that should actually be a WARN
or similar, but historically it apparently has proven to be useful to
detect corruption of page tables even on production systems -- report
the issue and keep the system running to
On 04.05.25 09:15, Andrew Morton wrote:
On Sun, 4 May 2025 08:47:45 +0200 David Hildenbrand wrote:
Methinks max_nr really wants to be unsigned long.
We only batch within a single PTE table, so an integer was sufficient.
The unsigned value is the result of a discussion with Ryan regarding
On 04.05.25 03:28, Andrew Morton wrote:
On Fri, 2 May 2025 23:50:19 +0200 Petr Vaněk wrote:
On XEN PV, folio_pte_batch() can incorrectly batch beyond the end of a
folio due to a corner case in pte_advance_pfn(). Specifically, when the
PFN following the folio maps to an invalidated MFN,
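The fix discussed here clamps the batch to the folio, so batching can never run past the folio's last page even if the PFN after it happens to look contiguous. A hedged user-space sketch of that folio-bounded batching (illustrative only, not the kernel's folio_pte_batch()):

```c
#include <stdint.h>

/* Count consecutive entries whose PFNs increase by one, but never
 * look past max_nr entries. Callers pass max_nr clamped to the
 * number of pages remaining in the folio, which is what prevents
 * the XEN PV corner case of batching into the next mapping. */
static int model_pte_batch(const uint64_t *pfns, int max_nr)
{
    int nr = 1;

    while (nr < max_nr && pfns[nr] == pfns[nr - 1] + 1)
        nr++;
    return nr;
}
```

For example, with PFNs {10, 11, 12, 13} and a folio covering only the first two pages, a caller passes max_nr = 2 and gets 2 back, even though the third PFN appears contiguous.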
On 13.03.25 18:35, Nico Pache wrote:
On Thu, Mar 13, 2025 at 2:22 AM David Hildenbrand wrote:
On 13.03.25 00:04, Nico Pache wrote:
On Wed, Mar 12, 2025 at 4:19 PM David Hildenbrand wrote:
On 12.03.25 01:06, Nico Pache wrote:
Add NR_BALLOON_PAGES counter to track memory used by balloon
On 13.03.25 00:04, Nico Pache wrote:
On Wed, Mar 12, 2025 at 4:19 PM David Hildenbrand wrote:
On 12.03.25 01:06, Nico Pache wrote:
Add NR_BALLOON_PAGES counter to track memory used by balloon drivers and
expose it through /proc/meminfo and other memory reporting interfaces.
In
On 13.03.25 08:20, Michael S. Tsirkin wrote:
On Wed, Mar 12, 2025 at 11:19:06PM +0100, David Hildenbrand wrote:
On 12.03.25 01:06, Nico Pache wrote:
Add NR_BALLOON_PAGES counter to track memory used by balloon drivers and
expose it through /proc/meminfo and other memory reporting interfaces
On 12.03.25 01:06, Nico Pache wrote:
Add NR_BALLOON_PAGES counter to track memory used by balloon drivers and
expose it through /proc/meminfo and other memory reporting interfaces.
In balloon_page_enqueue_one(), we perform a
__count_vm_event(BALLOON_INFLATE)
and in balloon_page_list_dequeue
On 12.03.25 21:11, Nico Pache wrote:
On Wed, Mar 12, 2025 at 12:57 AM Michael S. Tsirkin wrote:
On Tue, Mar 11, 2025 at 06:06:59PM -0600, Nico Pache wrote:
Update the NR_BALLOON_PAGES counter when pages are added to or
removed from the VMware balloon.
Signed-off-by: Nico Pache
---
drivers
On 03.03.25 13:33, Ryan Roberts wrote:
On 03/03/2025 11:52, David Hildenbrand wrote:
On 02.03.25 15:55, Ryan Roberts wrote:
Commit 49147beb0ccb ("x86/xen: allow nesting of same lazy mode") was
added as a solution for a core-mm code change where
arch_[enter|leave]_lazy_mmu_mode() sta
gher level (which has now been done) and remove this x86-specific
solution.
Fixes: 49147beb0ccb ("x86/xen: allow nesting of same lazy mode")
Does this patch here deserve this tag? IIUC, it's rather a cleanup now
that it was properly fixed elsewhere.
Signed-off-by: Ryan Rober
On 03.03.25 11:22, Ryan Roberts wrote:
On 03/03/2025 08:52, David Hildenbrand wrote:
On 03.03.25 09:49, David Hildenbrand wrote:
On 02.03.25 15:55, Ryan Roberts wrote:
The docs, implementations and use of arch_[enter|leave]_lazy_mmu_mode()
is a bit of a mess (to put it politely). There are a
On 03.03.25 09:49, David Hildenbrand wrote:
On 02.03.25 15:55, Ryan Roberts wrote:
The docs, implementations and use of arch_[enter|leave]_lazy_mmu_mode()
is a bit of a mess (to put it politely). There are a number of issues
related to nesting of lazy mmu regions and confusion over whether the
leave_lazy_mmu_mode();
}
#define set_ptes set_ptes
Acked-by: David Hildenbrand
--
Cheers,
David / dhildenb
ch);
tb->active = 1;
}
@@ -64,6 +66,7 @@ void arch_leave_lazy_mmu_mode(void)
if (tb->tlb_nr)
flush_tlb_pending();
tb->active = 0;
+ preempt_enable();
}
static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr,
Acked-by: David Hildenbrand
--
Cheers,
David / dhildenb
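The pairing the sparc diff above establishes -- enter disables preemption and activates batching, leave flushes pending work and re-enables -- can be modeled with plain counters standing in for preempt_disable()/preempt_enable() and flush_tlb_pending(). A sketch under those assumptions, not the arch code itself:

```c
/* Toy model of arch_{enter,leave}_lazy_mmu_mode() pairing. */
static int model_preempt_count;  /* stands in for the preempt counter */
static int model_active;         /* tb->active */
static int model_pending;        /* tb->tlb_nr */
static int model_flushes;        /* flush_tlb_pending() invocations */

static void model_enter_lazy(void)
{
    model_preempt_count++;       /* preempt_disable() */
    model_active = 1;
}

static void model_queue_flush(void)
{
    model_pending++;             /* a deferred TLB operation */
}

static void model_leave_lazy(void)
{
    if (model_pending) {
        model_flushes++;         /* flush_tlb_pending() */
        model_pending = 0;
    }
    model_active = 0;
    model_preempt_count--;       /* preempt_enable() */
}
```

The invariant the patch cares about is that enter/leave are balanced, so the preempt count returns to zero and nothing stays queued after leave.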
ct an immediate
mode pte modification but it would end up deferred.
arch-specific fixes to conform to the new spec will proceed this one.
These issues were spotted by code review and I have no evidence of
issues being reported in the wild.
All looking good to me!
Acked-by: David Hildenbrand
--
Cheers,
David / dhildenb
Let's clean that up a bit and prepare for depending on
CONFIG_SPLIT_PMD_PTLOCKS in other Kconfig options.
More cleanups would be reasonable (like the arch-specific "depends on"
for CONFIG_SPLIT_PTE_PTLOCKS), but we'll leave that for another day.
Signed-off-by: David Hildenbra
ts to copy from the 8xx approach of
supporting such unusual ways of mapping hugetlb folios aware that it gets
tricky once multiple page tables are involved.
Signed-off-by: David Hildenbrand
---
arch/powerpc/mm/pgtable.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/arch/powerpc/mm/pgtabl
Sharing page tables between processes but falling back to per-MM page
table locks cannot possibly work.
So, let's make sure that we do have split PMD locks by adding a new
Kconfig option and letting that depend on CONFIG_SPLIT_PMD_PTLOCKS.
Signed-off-by: David Hildenbrand
---
fs/Kc
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: "H. Peter Anvin"
Cc: Alexander Viro
Cc: Christian Brauner
David Hildenbrand (3):
mm: turn USE_SPLIT_PTE_PTLOCKS / USE_SPLIT_PTE_PTLOCKS into Kconfig
options
mm/hugetlb: enforce that PMD PT sha
On 26.06.24 00:43, Andrew Morton wrote:
afaict we're in decent state to move this series into mm-stable. I've
tagged the following issues:
https://lkml.kernel.org/r/80532f73e52e2c21fdc9aac7bce24aefb76d11b0.ca...@linux.intel.com
https://lkml.kernel.org/r/30b5d493-b7c2-4e63-86c1-dcc73d21d...@redh
On 11.06.24 21:19, Andrew Morton wrote:
On Tue, 11 Jun 2024 12:06:56 +0200 David Hildenbrand wrote:
On 07.06.24 11:09, David Hildenbrand wrote:
In preparation for further changes, let's teach __free_pages_core()
about the differences of memory hotplug handling.
Move the memory ho
On 11.06.24 21:41, Tim Chen wrote:
On Fri, 2024-06-07 at 11:09 +0200, David Hildenbrand wrote:
In preparation for further changes, let's teach __free_pages_core()
about the differences of memory hotplug handling.
Move the memory hotplug specific handling from generic_online_page
On 11.06.24 21:19, Andrew Morton wrote:
On Tue, 11 Jun 2024 12:06:56 +0200 David Hildenbrand wrote:
On 07.06.24 11:09, David Hildenbrand wrote:
In preparation for further changes, let's teach __free_pages_core()
about the differences of memory hotplug handling.
Move the memory ho
On 07.06.24 11:09, David Hildenbrand wrote:
In preparation for further changes, let's teach __free_pages_core()
about the differences of memory hotplug handling.
Move the memory hotplug specific handling from generic_online_page() to
__free_pages_core(), use adjust_managed_page_count() o
On 07.06.24 11:09, David Hildenbrand wrote:
We currently initialize the memmap such that PG_reserved is set and the
refcount of the page is 1. In virtio-mem code, we have to manually clear
that PG_reserved flag to make memory offlining with partially hotplugged
memory blocks possible
On 11.06.24 09:45, Oscar Salvador wrote:
On Mon, Jun 10, 2024 at 10:56:02AM +0200, David Hildenbrand wrote:
There are fortunately not that many left.
I'd even say marking them (vmemmap) reserved is more wrong than right: note
that ordinary vmemmap pages after memory hotplug are not res
On 10.06.24 06:29, Oscar Salvador wrote:
On Fri, Jun 07, 2024 at 11:09:38AM +0200, David Hildenbrand wrote:
We currently have a hack for virtio-mem in place to handle memory
offlining with PageOffline pages for which we already adjusted the
managed page count.
Let's enlighten memory offl
On 10.06.24 06:23, Oscar Salvador wrote:
On Fri, Jun 07, 2024 at 11:09:37AM +0200, David Hildenbrand wrote:
We currently initialize the memmap such that PG_reserved is set and the
refcount of the page is 1. In virtio-mem code, we have to manually clear
that PG_reserved flag to make memory
On 10.06.24 06:03, Oscar Salvador wrote:
On Fri, Jun 07, 2024 at 11:09:36AM +0200, David Hildenbrand wrote:
In preparation for further changes, let's teach __free_pages_core()
about the differences of memory hotplug handling.
Move the memory hotplug specific handling from generic_online
On 07.06.24 11:09, David Hildenbrand wrote:
In preparation for further changes, let's teach __free_pages_core()
about the differences of memory hotplug handling.
Move the memory hotplug specific handling from generic_online_page() to
__free_pages_core(), use adjust_managed_page_count() o
We currently have a hack for virtio-mem in place to handle memory
offlining with PageOffline pages for which we already adjusted the
managed page count.
Let's enlighten memory offlining code so we can get rid of that hack,
and document the situation.
Signed-off-by: David Hilden
fano Stabellini
Cc: Oleksandr Tyshchenko
Cc: Alexander Potapenko
Cc: Marco Elver
Cc: Dmitry Vyukov
David Hildenbrand (3):
mm: pass meminit_context to __free_pages_core()
mm/memory_hotplug: initialize memmap of !ZONE_DEVICE with
PageOffline() instead of PageReserved()
mm/memory_hotplug:
.
We'll leave the ZONE_DEVICE case alone for now.
Signed-off-by: David Hildenbrand
---
drivers/hv/hv_balloon.c | 5 ++---
drivers/virtio/virtio_mem.c | 18 --
drivers/xen/balloon.c | 9 +++--
include/linux/page-flags.h | 12 +---
mm/memory_hotplug.c
emory freed via memblock
cannot currently use adjust_managed_page_count().
Signed-off-by: David Hildenbrand
---
mm/internal.h | 3 ++-
mm/kmsan/init.c | 2 +-
mm/memory_hotplug.c | 9 +
mm/mm_init.c | 4 ++--
mm/page_alloc.c | 17 +++--
5 files change
eviewed-by: David Hildenbrand
--
Cheers,
David / dhildenb
ck->offset == 0) {
+if (xen_mr_is_memory(block->mr)) {
return xen_map_cache(block->mr, addr, len, lock, lock,
is_write);
}
I'd have moved that into a separate patch, because this is not a simple
abstraction here.
On 30.04.24 18:49, Edgar E. Iglesias wrote:
From: "Edgar E. Iglesias"
Propagate MR and is_write to xen_map_cache().
I'm pretty sure the patch subject is missing a "to" :)
This is in preparation for adding support for grant mappings.
No functional change.
Revie
On 14.11.23 15:38, Philippe Mathieu-Daudé wrote:
physmem.c doesn't use any declaration from "hw/xen/xen.h",
it only requires "sysemu/xen.h" and "system/xen-mapcache.h".
Suggested-by: David Woodhouse
Signed-off-by: Philippe Mathieu-Daudé
---
Reviewed-by: Da
: Philippe Mathieu-Daudé
---
Reviewed-by: David Hildenbrand
--
Cheers,
David / dhildenb
On 27.06.23 22:13, Hugh Dickins wrote:
On Tue, 27 Jun 2023, David Hildenbrand wrote:
On 27.06.23 06:44, Hugh Dickins wrote:
On Mon, 26 Jun 2023, Vishal Moola (Oracle) wrote:
The MM subsystem is trying to shrink struct page. This patchset
introduces a memory descriptor for page table tracking
On 27.06.23 06:44, Hugh Dickins wrote:
On Mon, 26 Jun 2023, Vishal Moola (Oracle) wrote:
The MM subsystem is trying to shrink struct page. This patchset
introduces a memory descriptor for page table tracking - struct ptdesc.
...
39 files changed, 686 insertions(+), 455 deletions(-)
I don'
On 21.06.23 18:35, Joel Upham wrote:
Sorry, this was sent in error when I did the git send-email for the
folder. This was before I broke each patch down (after looking at the
Qemu submission guidance). This is my first time sending a patch in this
way, so thanks for the understanding. This patc
On 20.06.23 19:24, Joel Upham wrote:
Inexpressive patch subject and non-existent patch description. I have no
clue what this is supposed to do, except that it involves q35 and Xen, I
guess?
---
hw/acpi/ich9.c | 22 +-
hw/acpi/pcihp.c | 6 +-
hw/core/mac
On 13.06.23 18:19, Edgecombe, Rick P wrote:
On Tue, 2023-06-13 at 10:44 +0300, Mike Rapoport wrote:
Previous patches have done the first step, so next move the callers
that
don't have a VMA to pte_mkwrite_novma(). Also do the same for
I hear x86 maintainers asking to drop "previous patches" ;-)
: xen-devel@lists.xenproject.org
Cc: linux-a...@vger.kernel.org
Cc: linux...@kvack.org
Signed-off-by: Rick Edgecombe
---
Acked-by: David Hildenbrand
--
Cheers,
David / dhildenb
On 18.04.23 23:33, Vishal Moola wrote:
On Tue, Apr 18, 2023 at 8:45 AM David Hildenbrand wrote:
On 17.04.23 22:50, Vishal Moola (Oracle) wrote:
s390 uses page->index to keep track of page tables for the guest address
space. In an attempt to consolidate the usage of page fields in s
On 17.04.23 22:50, Vishal Moola (Oracle) wrote:
s390 uses page->index to keep track of page tables for the guest address
space. In an attempt to consolidate the usage of page fields in s390,
replace _pt_pad_2 with _pt_s390_gaddr to replace page->index in gmap.
This will help with the splitting o
On 01.03.23 08:03, Christophe Leroy wrote:
Le 27/02/2023 à 23:29, Rick Edgecombe a écrit :
The x86 Control-flow Enforcement Technology (CET) feature includes a new
type of memory called shadow stack. This shadow stack memory has some
unusual properties, which requires some core mm changes to f
nux-s...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: sparcli...@vger.kernel.org
Cc: linux...@lists.infradead.org
Cc: xen-devel@lists.xenproject.org
Cc: linux-a...@vger.kernel.org
Cc: linux...@kvack.org
Tested-by: Pengfei Xu
Suggested-by: David Hildenbrand
Signed-off-by: Rick Edgecombe
---
Hi Non-x86 A
...@vger.kernel.org
Cc: xen-devel@lists.xenproject.org
Cc: linux-a...@vger.kernel.org
Cc: linux...@kvack.org
Tested-by: Pengfei Xu
Suggested-by: David Hildenbrand
Signed-off-by: Rick Edgecombe
Acked-by: David Hildenbrand
Do we also have to care about pmd_mkwrite() ?
--
Thanks,
David / dhildenb
: linux-arm-ker...@lists.infradead.org
Cc: linux-s...@vger.kernel.org
Cc: xen-devel@lists.xenproject.org
Cc: linux-a...@vger.kernel.org
Cc: linux...@kvack.org
Tested-by: Pengfei Xu
Suggested-by: David Hildenbrand
Signed-off-by: Rick Edgecombe
I think it's a little weird that it's th
On 01.09.22 16:23, Kent Overstreet wrote:
> On Thu, Sep 01, 2022 at 10:05:03AM +0200, David Hildenbrand wrote:
>> On 31.08.22 21:01, Kent Overstreet wrote:
>>> On Wed, Aug 31, 2022 at 12:47:32PM +0200, Michal Hocko wrote:
>>>> On Wed 31-08-22 11:19:48, Mel Gorman wr
On 31.08.22 21:01, Kent Overstreet wrote:
> On Wed, Aug 31, 2022 at 12:47:32PM +0200, Michal Hocko wrote:
>> On Wed 31-08-22 11:19:48, Mel Gorman wrote:
>>> Whatever asking for an explanation as to why equivalent functionality
>>> cannot not be created from ftrace/kprobe/eBPF/whatever is reasonable
On 07.04.22 14:06, Juergen Gross wrote:
> Since commit 6aa303defb74 ("mm, vmscan: only allocate and reclaim from
> zones with pages managed by the buddy allocator") only zones with free
> memory are included in a built zonelist. This is problematic when e.g.
> all memory of a zone has been balloone
On 07.04.22 14:04, Michal Hocko wrote:
> On Thu 07-04-22 13:58:44, David Hildenbrand wrote:
> [...]
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index 3589febc6d31..130a2feceddc 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_all
On 07.04.22 13:40, Michal Hocko wrote:
> On Thu 07-04-22 13:17:19, Juergen Gross wrote:
>> On 07.04.22 13:07, Michal Hocko wrote:
>>> On Thu 07-04-22 12:45:41, Juergen Gross wrote:
On 07.04.22 12:34, Michal Hocko wrote:
> Ccing Mel
>
> On Thu 07-04-22 11:32:21, Juergen Gross wrote:
ne, &zonerefs[nr_zones++]);
> check_highest_zone(zone_type);
> }
Let's see if we have to find another way to properly handle fadump.
Acked-by: David Hildenbrand
--
Thanks,
David / dhildenb
On 07.04.22 10:50, Juergen Gross wrote:
> On 07.04.22 10:23, David Hildenbrand wrote:
>> On 06.04.22 15:32, Juergen Gross wrote:
>>> When onlining a new memory page in a guest the Xen balloon driver is
>>> adding it to the ballooned pages instead making it available
On 06.04.22 15:32, Juergen Gross wrote:
> When onlining a new memory page in a guest the Xen balloon driver is
> adding it to the ballooned pages instead making it available to be
> used immediately. This is meant to enable to add a new upper memory
> limit to a guest via hotplugging memory, withou
virtio_mem)" when creating the vmcore header) and
a recent dracut version (including the virtio_mem module in the kdump
initrd).
[1] https://lkml.kernel.org/r/20210526093041.8800-1-da...@redhat.com
[2] https://github.com/dracutdevs/dracut/pull/1157
Signed-off-by: David Hildenbrand
---
Let's prepare for a new virtio-mem kdump mode in which we don't actually
hot(un)plug any memory but only observe the state of device blocks.
Signed-off-by: David Hildenbrand
---
drivers/virtio/virtio_mem.c | 13 ++---
1 file changed, 10 insertions(+), 3 deletions(-)
di
Let's prepare for a new virtio-mem kdump mode in which we don't actually
hot(un)plug any memory but only observe the state of device blocks.
Signed-off-by: David Hildenbrand
---
drivers/virtio/virtio_mem.c | 87 +++--
1 file changed, 45 inserti
Let's prepare for a new virtio-mem kdump mode in which we don't actually
hot(un)plug any memory but only observe the state of device blocks.
Signed-off-by: David Hildenbrand
---
drivers/virtio/virtio_mem.c | 81 -
1 file changed, 44 inserti
istering a callback after the vmcore has already been
opened (warn and essentially read only zeroes from that point on).
Signed-off-by: David Hildenbrand
---
arch/x86/kernel/aperture_64.c | 13 -
arch/x86/xen/mmu_hvm.c | 11 ++--
fs/proc/vmcore.c
The callback should deal with errors internally, it doesn't make sense to
expose these via pfn_is_ram(). We'll rework the callbacks next. Right now
we consider errors as if "it's RAM"; no functional change.
Signed-off-by: David Hildenbrand
---
fs/proc/vmcore.c | 8 ++
Boris Ostrovsky
Signed-off-by: David Hildenbrand
---
arch/x86/xen/mmu_hvm.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/xen/mmu_hvm.c b/arch/x86/xen/mmu_hvm.c
index d1b38c77352b..6ba8826dcdcc 100644
--- a/arch/x86/xen/mmu_hvm.c
+++ b/arch/x86/xen/mmu_hvm.c
@@ -22,8 +