Re: [PATCH 1/1] hv_balloon: Enable hot-add for memblock sizes > 128 Mbytes

2024-04-26 Thread David Hildenbrand
On 11.03.24 19:12, mhkelle...@gmail.com wrote: From: Michael Kelley The Hyper-V balloon driver supports hot-add of memory in addition to ballooning. The current code hot-adds in fixed-size chunks of 128 Mbytes (the constant HA_CHUNK in the code). While this works in Hyper-V VMs with 64 Gbytes or
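
A minimal sketch of the direction described above (my own illustration under stated assumptions, not the driver's actual diff): derive the hot-add chunk size from the platform's memory block size at runtime instead of hard-coding 128 Mbytes, so guests configured with larger memblock sizes still hot-add in properly aligned units.

#include <linux/memory.h>   /* memory_block_size_bytes() */
#include <linux/mm.h>       /* PAGE_SIZE */

/* Hypothetical helper: pages per hot-add chunk, derived at runtime. */
static unsigned long hv_pages_per_ha_chunk(void)
{
    /*
     * memory_block_size_bytes() reports the memblock size Linux uses,
     * which may be 256 MiB, 2 GiB, etc. rather than the 128 Mbytes
     * assumed by the fixed HA_CHUNK constant.
     */
    return memory_block_size_bytes() / PAGE_SIZE;
}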

Re: [PATCH 1/1] hv_balloon: Enable hot-add for memblock sizes > 128 Mbytes

2024-04-30 Thread David Hildenbrand
On 29.04.24 17:30, Michael Kelley wrote:
From: Michael Kelley
Sent: Friday, April 26, 2024 9:36 AM

@@ -505,8 +505,9 @@ enum hv_dm_state {
 static __u8 recv_buffer[HV_HYP_PAGE_SIZE];
 static __u8 balloon_up_send_buffer[HV_HYP_PAGE_SIZE];
+static unsigned long ha_chunk_pgs;

Why not stick t

Re: [PATCH v2 1/2] hv_balloon: Use kernel macros to simplify open coded sequences

2024-05-02 Thread David Hildenbrand
, which is the case here.
Signed-off-by: Michael Kelley
---
Changes in v2:
* No changes. This is a new patch that goes with v2 of patch 2 of this series.

Reviewed-by: David Hildenbrand

--
Cheers,
David / dhildenb

Re: [PATCH v2 2/2] hv_balloon: Enable hot-add for memblock sizes > 128 MiB

2024-05-02 Thread David Hildenbrand
of 256 MiB and 2 GiB shows correct operation.
Signed-off-by: Michael Kelley
---
Changes in v2:
* Change new global variable name from ha_chunk_pgs to ha_pages_in_chunk [David Hildenbrand]
* Use kernel macros ALIGN(), ALIGN_DOWN(), and umin() to simplify code and reduce references to HA_CHUNK
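
A sketch of the kind of simplification the changelog describes (hypothetical helper, not the actual v2 diff): express the per-iteration hot-add size in terms of ha_pages_in_chunk using ALIGN(), ALIGN_DOWN(), and umin() rather than open-coded HA_CHUNK arithmetic.

#include <linux/align.h>    /* ALIGN(), ALIGN_DOWN() */
#include <linux/minmax.h>   /* umin() */

static unsigned long ha_pages_in_chunk;  /* pages per hot-add chunk (v2 name) */

/* Hypothetical helper: how many pages to hot-add in this iteration. */
static unsigned long next_hotadd_pages(unsigned long start_pfn,
                                       unsigned long remaining_pgs)
{
    /* Chunk-align the start and end of the outstanding request... */
    unsigned long chunk_start = ALIGN_DOWN(start_pfn, ha_pages_in_chunk);
    unsigned long chunk_end = ALIGN(start_pfn + remaining_pgs,
                                    ha_pages_in_chunk);

    /* ...and hot-add at most one chunk at a time. */
    return umin(chunk_end - chunk_start, ha_pages_in_chunk);
}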

[PATCH v1 0/3] mm/memory_hotplug: use PageOffline() instead of PageReserved() for !ZONE_DEVICE

2024-06-07 Thread David Hildenbrand
Cc: Stefano Stabellini
Cc: Oleksandr Tyshchenko
Cc: Alexander Potapenko
Cc: Marco Elver
Cc: Dmitry Vyukov

David Hildenbrand (3):
  mm: pass meminit_context to __free_pages_core()
  mm/memory_hotplug: initialize memmap of !ZONE_DEVICE with PageOffline() instead of PageReserved()
  mm/memory_hotplug:

[PATCH v1 1/3] mm: pass meminit_context to __free_pages_core()

2024-06-07 Thread David Hildenbrand
memory freed via memblock cannot currently use adjust_managed_page_count().
Signed-off-by: David Hildenbrand
---
 mm/internal.h       |  3 ++-
 mm/kmsan/init.c     |  2 +-
 mm/memory_hotplug.c |  9 +
 mm/mm_init.c        |  4 ++--
 mm/page_alloc.c     | 17 +++--
 5 files change

[PATCH v1 2/3] mm/memory_hotplug: initialize memmap of !ZONE_DEVICE with PageOffline() instead of PageReserved()

2024-06-07 Thread David Hildenbrand
We'll leave the ZONE_DEVICE case alone for now.
Signed-off-by: David Hildenbrand
---
 drivers/hv/hv_balloon.c     |  5 ++---
 drivers/virtio/virtio_mem.c | 18 --
 drivers/xen/balloon.c       |  9 +++--
 include/linux/page-flags.h  | 12 +---
 mm/memory_hotplug.c
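
A much-simplified sketch of the idea in this patch (not the patch itself; the helpers below are hypothetical and only illustrate the flag choice): pages of hotplugged memory that are not exposed to the buddy get PageOffline() in the memmap, so generic code no longer has to treat them as PG_reserved.

#include <linux/mm.h>
#include <linux/page-flags.h>   /* __SetPageOffline(), PageOffline() */

/* Hypothetical helper: mark a hotplugged range as logically offline. */
static void mark_range_offline(struct page *page, unsigned long nr_pages)
{
    unsigned long i;

    for (i = 0; i < nr_pages; i++)
        __SetPageOffline(page + i);   /* instead of SetPageReserved() */
}

/* Hypothetical helper: is the whole range still logically offline? */
static bool range_is_offline(struct page *page, unsigned long nr_pages)
{
    unsigned long i;

    for (i = 0; i < nr_pages; i++)
        if (!PageOffline(page + i))
            return false;
    return true;
}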

[PATCH v1 3/3] mm/memory_hotplug: skip adjust_managed_page_count() for PageOffline() pages when offlining

2024-06-07 Thread David Hildenbrand
We currently have a hack for virtio-mem in place to handle memory offlining with PageOffline pages for which we already adjusted the managed page count. Let's enlighten memory offlining code so we can get rid of that hack, and document the situation. Signed-off-by: David Hilden
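
A rough sketch of the accounting rule this patch describes (hypothetical helper, not the mm/memory_hotplug.c diff): when offlining a range, pages that are PageOffline() were already removed from the managed page count by their owner (e.g. a balloon driver), so only the remaining pages are subtracted.

#include <linux/mm.h>
#include <linux/page-flags.h>

/* Hypothetical helper: managed-page accounting when offlining a range. */
static void account_offlined_range(struct page *page, unsigned long nr_pages)
{
    unsigned long i, managed = 0;

    for (i = 0; i < nr_pages; i++)
        if (!PageOffline(page + i))   /* owner already accounted these */
            managed++;

    adjust_managed_page_count(page, -(long)managed);
}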

Re: [PATCH v1 1/3] mm: pass meminit_context to __free_pages_core()

2024-06-07 Thread David Hildenbrand
On 07.06.24 11:09, David Hildenbrand wrote: In preparation for further changes, let's teach __free_pages_core() about the differences of memory hotplug handling. Move the memory hotplug specific handling from generic_online_page() to __free_pages_core(), use adjust_managed_page_count() o
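
A condensed sketch of the interface change being quoted (the signature and the move of adjust_managed_page_count() follow the changelog; the body here is illustrative only): __free_pages_core() gains a meminit_context argument, and only the hotplug path does the managed-page accounting that used to live in generic_online_page().

#include <linux/mm.h>
#include <linux/mmzone.h>   /* enum meminit_context: MEMINIT_EARLY, MEMINIT_HOTPLUG */

void __free_pages_core(struct page *page, unsigned int order,
                       enum meminit_context context)
{
    unsigned long nr_pages = 1UL << order;

    if (context == MEMINIT_HOTPLUG) {
        /*
         * Hotplugged memory: account it as managed here rather than
         * in generic_online_page().
         */
        adjust_managed_page_count(page, nr_pages);
    }
    /* else: early-boot accounting is handled via memblock */

    /* ...then hand the pages to the buddy allocator (omitted). */
}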

Re: [PATCH v1 1/3] mm: pass meminit_context to __free_pages_core()

2024-06-10 Thread David Hildenbrand
On 10.06.24 06:03, Oscar Salvador wrote: On Fri, Jun 07, 2024 at 11:09:36AM +0200, David Hildenbrand wrote: In preparation for further changes, let's teach __free_pages_core() about the differences of memory hotplug handling. Move the memory hotplug specific handling from generic_online

Re: [PATCH v1 2/3] mm/memory_hotplug: initialize memmap of !ZONE_DEVICE with PageOffline() instead of PageReserved()

2024-06-10 Thread David Hildenbrand
On 10.06.24 06:23, Oscar Salvador wrote: On Fri, Jun 07, 2024 at 11:09:37AM +0200, David Hildenbrand wrote: We currently initialize the memmap such that PG_reserved is set and the refcount of the page is 1. In virtio-mem code, we have to manually clear that PG_reserved flag to make memory

Re: [PATCH v1 3/3] mm/memory_hotplug: skip adjust_managed_page_count() for PageOffline() pages when offlining

2024-06-10 Thread David Hildenbrand
On 10.06.24 06:29, Oscar Salvador wrote: On Fri, Jun 07, 2024 at 11:09:38AM +0200, David Hildenbrand wrote: We currently have a hack for virtio-mem in place to handle memory offlining with PageOffline pages for which we already adjusted the managed page count. Let's enlighten memory offl

Re: [PATCH v1 2/3] mm/memory_hotplug: initialize memmap of !ZONE_DEVICE with PageOffline() instead of PageReserved()

2024-06-11 Thread David Hildenbrand
On 11.06.24 09:45, Oscar Salvador wrote: On Mon, Jun 10, 2024 at 10:56:02AM +0200, David Hildenbrand wrote: There are fortunately not that many left. I'd even say marking them (vmemmap) reserved is more wrong than right: note that ordinary vmemmap pages after memory hotplug are not res

Re: [PATCH v1 2/3] mm/memory_hotplug: initialize memmap of !ZONE_DEVICE with PageOffline() instead of PageReserved()

2024-06-11 Thread David Hildenbrand
On 07.06.24 11:09, David Hildenbrand wrote: We currently initialize the memmap such that PG_reserved is set and the refcount of the page is 1. In virtio-mem code, we have to manually clear that PG_reserved flag to make memory offlining with partially hotplugged memory blocks possible

Re: [PATCH v1 1/3] mm: pass meminit_context to __free_pages_core()

2024-06-11 Thread David Hildenbrand
On 07.06.24 11:09, David Hildenbrand wrote: In preparation for further changes, let's teach __free_pages_core() about the differences of memory hotplug handling. Move the memory hotplug specific handling from generic_online_page() to __free_pages_core(), use adjust_managed_page_count() o

Re: [PATCH v1 1/3] mm: pass meminit_context to __free_pages_core()

2024-06-11 Thread David Hildenbrand
On 11.06.24 21:19, Andrew Morton wrote: On Tue, 11 Jun 2024 12:06:56 +0200 David Hildenbrand wrote: On 07.06.24 11:09, David Hildenbrand wrote: In preparation for further changes, let's teach __free_pages_core() about the differences of memory hotplug handling. Move the memory ho

Re: [PATCH v1 1/3] mm: pass meminit_context to __free_pages_core()

2024-06-11 Thread David Hildenbrand
On 11.06.24 21:41, Tim Chen wrote: On Fri, 2024-06-07 at 11:09 +0200, David Hildenbrand wrote: In preparation for further changes, let's teach __free_pages_core() about the differences of memory hotplug handling. Move the memory hotplug specific handling from generic_online_page

Re: [PATCH v1 1/3] mm: pass meminit_context to __free_pages_core()

2024-06-12 Thread David Hildenbrand
On 11.06.24 21:19, Andrew Morton wrote: On Tue, 11 Jun 2024 12:06:56 +0200 David Hildenbrand wrote: On 07.06.24 11:09, David Hildenbrand wrote: In preparation for further changes, let's teach __free_pages_core() about the differences of memory hotplug handling. Move the memory ho

Re: [PATCH v1 0/3] mm/memory_hotplug: use PageOffline() instead of PageReserved() for !ZONE_DEVICE

2024-06-25 Thread David Hildenbrand
On 26.06.24 00:43, Andrew Morton wrote: afaict we're in decent state to move this series into mm-stable. I've tagged the following issues: https://lkml.kernel.org/r/80532f73e52e2c21fdc9aac7bce24aefb76d11b0.ca...@linux.intel.com https://lkml.kernel.org/r/30b5d493-b7c2-4e63-86c1-dcc73d21d...@redh

Re: [RFC 1/5] meminfo: add a per node counter for balloon drivers

2025-03-13 Thread David Hildenbrand
On 13.03.25 18:35, Nico Pache wrote: On Thu, Mar 13, 2025 at 2:22 AM David Hildenbrand wrote: On 13.03.25 00:04, Nico Pache wrote: On Wed, Mar 12, 2025 at 4:19 PM David Hildenbrand wrote: On 12.03.25 01:06, Nico Pache wrote: Add NR_BALLOON_PAGES counter to track memory used by balloon

Re: [RFC 1/5] meminfo: add a per node counter for balloon drivers

2025-03-12 Thread David Hildenbrand
On 12.03.25 01:06, Nico Pache wrote: Add NR_BALLOON_PAGES counter to track memory used by balloon drivers and expose it through /proc/meminfo and other memory reporting interfaces. In balloon_page_enqueue_one(), we perform a __count_vm_event(BALLOON_INFLATE) and in balloon_page_list_dequeue
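
A sketch of the accounting proposed in this RFC (NR_BALLOON_PAGES is the new, not-yet-merged counter; the wrapper name is mine): bump the per-node state alongside the existing BALLOON_INFLATE/BALLOON_DEFLATE vm events.

#include <linux/mm.h>
#include <linux/vmstat.h>   /* __count_vm_event(), mod_node_page_state() */

/* Hypothetical wrapper around the proposed per-node accounting. */
static void balloon_account_page(struct page *page, bool inflate)
{
    if (inflate) {
        __count_vm_event(BALLOON_INFLATE);
        mod_node_page_state(page_pgdat(page), NR_BALLOON_PAGES, 1);
    } else {
        __count_vm_event(BALLOON_DEFLATE);
        mod_node_page_state(page_pgdat(page), NR_BALLOON_PAGES, -1);
    }
}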

Re: [RFC 1/5] meminfo: add a per node counter for balloon drivers

2025-03-13 Thread David Hildenbrand
On 13.03.25 00:04, Nico Pache wrote: On Wed, Mar 12, 2025 at 4:19 PM David Hildenbrand wrote: On 12.03.25 01:06, Nico Pache wrote: Add NR_BALLOON_PAGES counter to track memory used by balloon drivers and expose it through /proc/meminfo and other memory reporting interfaces. In

Re: [RFC 1/5] meminfo: add a per node counter for balloon drivers

2025-03-13 Thread David Hildenbrand
On 13.03.25 08:20, Michael S. Tsirkin wrote: On Wed, Mar 12, 2025 at 11:19:06PM +0100, David Hildenbrand wrote: On 12.03.25 01:06, Nico Pache wrote: Add NR_BALLOON_PAGES counter to track memory used by balloon drivers and expose it through /proc/meminfo and other memory reporting interfaces

Re: [RFC 4/5] vmx_balloon: update the NR_BALLOON_PAGES state

2025-03-12 Thread David Hildenbrand
On 12.03.25 21:11, Nico Pache wrote: On Wed, Mar 12, 2025 at 12:57 AM Michael S. Tsirkin wrote: On Tue, Mar 11, 2025 at 06:06:59PM -0600, Nico Pache wrote: Update the NR_BALLOON_PAGES counter when pages are added to or removed from the VMware balloon. Signed-off-by: Nico Pache --- drivers