On 30.06.25 19:05, Lorenzo Stoakes wrote:
On Mon, Jun 30, 2025 at 02:59:50PM +0200, David Hildenbrand wrote:
Let's factor it out, simplifying the calling code.
The assumption is that flush_dcache_page() is not required for
movable_ops pages: as documented for flush_dcache_folio(), it really
only applies when the kernel wrote to pagecache pages / pages in
highmem. movable_ops callbacks should be handling flushing
caches if ever required.
But we've not changed this, have we? The flush_dcache_folio() invocation seems
to happen the same way now as before? Did I miss something?
I think that before this change, we would have called it for movable_ops
pages as well:
if (rc == MIGRATEPAGE_SUCCESS) {
	if (__folio_test_movable(src)) {
		...
	}
	...
	if (likely(!folio_is_zone_device(dst)))
		flush_dcache_folio(dst);
}
Now, we no longer do that for movable_ops pages.
For balloon pages, we're not copying anything, so there is never anything
to flush from the dcache.
For zsmalloc, we do the copy in zs_object_copy() through kmap_local.
I think we could be running with HIGHMEM, so I wonder if we should just do
a flush_dcache_page() in zs_object_copy().
At least, looking at memcpy_to_page() in highmem.h, that looks like it
might be the right thing to do.
So likely I'll add a patch before this one that will do the
flush_dcache_page() in there.
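Roughly something like the following (untested sketch; zs_object_copy()
can cross page boundaries while copying, so the flush would have to happen
for each destination page -- and I'm assuming "d_page" is the destination
page variable in there):

	/* After copying into the kmap_local'ed destination page ... */
	flush_dcache_page(d_page);

The exact placement will need a closer look when writing that patch.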
Note that we can now change folio_mapping_flags() to folio_test_anon()
to make it clearer, because movable_ops pages will never take that path.
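Illustrating the point (not the literal hunk from the patch): both anon and
movable_ops pages set bits in PAGE_MAPPING_FLAGS, so previously we had to
use the broader check, and now the narrower one suffices:

	/* Before: true for anon *and* movable_ops folios. */
	if (folio_mapping_flags(src))
		...

	/* After: movable_ops folios never reach this path, only anon remains. */
	if (folio_test_anon(src))
		...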
Reviewed-by: Zi Yan <z...@nvidia.com>
Signed-off-by: David Hildenbrand <da...@redhat.com>
Have scrutinised this a lot and it seems correct to me, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoa...@oracle.com>
---
mm/migrate.c | 82 ++++++++++++++++++++++++++++------------------------
1 file changed, 45 insertions(+), 37 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index d97f7cd137e63..0898ddd2f661f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -159,6 +159,45 @@ static void putback_movable_ops_page(struct page *page)
folio_put(folio);
}
+/**
+ * migrate_movable_ops_page - migrate an isolated movable_ops page
+ * @page: The isolated page.
+ *
+ * Migrate an isolated movable_ops page.
+ *
+ * If the src page was already released by its owner, the src page is
+ * un-isolated (putback) and migration succeeds; the migration core will be the
+ * owner of both pages.
+ *
+ * If the src page was not released by its owner and the migration was
+ * successful, the owner of the src page and the dst page are swapped and
+ * the src page is un-isolated.
+ *
+ * If migration fails, the ownership stays unmodified and the src page
+ * remains isolated: migration may be retried later or the page can be putback.
+ *
+ * TODO: migration core will treat both pages as folios and lock them before
+ * this call to unlock them after this call. Further, the folio refcounts on
+ * src and dst are also released by migration core. These pages will not be
+ * folios in the future, so that must be reworked.
+ *
+ * Returns MIGRATEPAGE_SUCCESS on success, otherwise a negative error
+ * code.
+ */
Love these comments you're adding!!
+static int migrate_movable_ops_page(struct page *dst, struct page *src,
+ enum migrate_mode mode)
+{
+ int rc = MIGRATEPAGE_SUCCESS;
Maybe worth asserting src, dst locking?
We already have these sanity checks in move_to_new_folio() right now
(the next patch moves them further out).

Not sure how reasonable these sanity checks are in these internal
helpers: e.g., after we call move_to_new_folio() we will unlock both
folios, which will blow up if the folios aren't locked.
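If we do decide we want them here, something like this at the top of
migrate_movable_ops_page() should do (untested):

	VM_WARN_ON_ONCE_PAGE(!PageLocked(src), src);
	VM_WARN_ON_ONCE_PAGE(!PageLocked(dst), dst);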
--
Cheers,
David / dhildenb