[PATCH v6 01/10] memory-failure: fetch compound_head after pgmap_pfn_valid()

2021-11-24 Thread Joao Martins
memory_failure_dev_pagemap() at the moment assumes base pages (e.g.
dax_lock_page()).  For a devmap with compound pages, fetch the
compound_head in case a tail-page memory failure is being handled.

Currently this is a nop, but with the advent of compound pages in
dev_pagemap it allows memory_failure_dev_pagemap() to keep working.
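
As an aside, a minimal userspace sketch of the head/tail resolution this
change relies on (illustrative only; the kernel derives the head from
struct page metadata rather than from pfn ranges, and head_of() below is
a made-up helper):

#include <stdio.h>

/* Resolve a pfn inside a compound range back to its head pfn. */
static unsigned long head_of(unsigned long pfn, unsigned long head_pfn,
                             unsigned int order)
{
        unsigned long nr = 1UL << order;

        if (pfn >= head_pfn && pfn < head_pfn + nr)
                return head_pfn;
        return pfn;     /* base page: the page is its own head */
}

int main(void)
{
        /* e.g. a poisoned pfn inside an order-9 (2M) compound page */
        printf("%#lx\n", head_of(0x100123, 0x100000, 9)); /* 0x100000 */
        return 0;
}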

Reported-by: Jane Chu 
Signed-off-by: Joao Martins 
Reviewed-by: Naoya Horiguchi 
Reviewed-by: Dan Williams 
Reviewed-by: Muchun Song 
---
 mm/memory-failure.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 8f0ee5b08696..f5749db8fad3 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1600,6 +1600,12 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
goto out;
}
 
+   /*
+* Pages instantiated by device-dax (not filesystem-dax)
+* may be compound pages.
+*/
+   page = compound_head(page);
+
/*
 * Prevent the inode from being freed while we are interrogating
 * the address_space, typically this would be handled by
-- 
2.17.2




[PATCH v6 02/10] mm/page_alloc: split prep_compound_page into head and tail subparts

2021-11-24 Thread Joao Martins
Split the utility function prep_compound_page() into head and tail
counterparts, and use them accordingly.

This is in preparation for sharing the storage for compound page
metadata.

Signed-off-by: Joao Martins 
Acked-by: Mike Kravetz 
Reviewed-by: Dan Williams 
Reviewed-by: Muchun Song 
---
 mm/page_alloc.c | 30 --
 1 file changed, 20 insertions(+), 10 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 58490fa8948d..ba096f731e36 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -727,23 +727,33 @@ void free_compound_page(struct page *page)
free_the_page(page, compound_order(page));
 }
 
+static void prep_compound_head(struct page *page, unsigned int order)
+{
+   set_compound_page_dtor(page, COMPOUND_PAGE_DTOR);
+   set_compound_order(page, order);
+   atomic_set(compound_mapcount_ptr(page), -1);
+   if (hpage_pincount_available(page))
+   atomic_set(compound_pincount_ptr(page), 0);
+}
+
+static void prep_compound_tail(struct page *head, int tail_idx)
+{
+   struct page *p = head + tail_idx;
+
+   p->mapping = TAIL_MAPPING;
+   set_compound_head(p, head);
+}
+
 void prep_compound_page(struct page *page, unsigned int order)
 {
int i;
int nr_pages = 1 << order;
 
__SetPageHead(page);
-   for (i = 1; i < nr_pages; i++) {
-   struct page *p = page + i;
-   p->mapping = TAIL_MAPPING;
-   set_compound_head(p, page);
-   }
+   for (i = 1; i < nr_pages; i++)
+   prep_compound_tail(page, i);
 
-   set_compound_page_dtor(page, COMPOUND_PAGE_DTOR);
-   set_compound_order(page, order);
-   atomic_set(compound_mapcount_ptr(page), -1);
-   if (hpage_pincount_available(page))
-   atomic_set(compound_pincount_ptr(page), 0);
+   prep_compound_head(page, order);
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
-- 
2.17.2




[PATCH v6 04/10] mm/memremap: add ZONE_DEVICE support for compound pages

2021-11-24 Thread Joao Martins
Add a new @vmemmap_shift property for struct dev_pagemap which specifies that a
devmap is composed of a set of compound pages of order @vmemmap_shift, instead of
base pages. When a compound page devmap is requested, all but the first
page are initialised as tail pages instead of order-0 pages.

For certain ZONE_DEVICE users like device-dax which have a fixed page size,
this creates an opportunity to optimize GUP and GUP-fast walkers, treating
it the same way as THP or hugetlb pages.

Additionally, commit 7118fc2906e2 ("hugetlb: address ref count racing in
prep_compound_gigantic_page") removed set_page_count() because setting
the page ref count to zero was redundant. devmap pages don't come from
the page allocator, though, and only the head page refcount is used for
compound pages, hence initialize the tail page count to zero.
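
For illustration, a small userspace sketch of the arithmetic the new
field implies (assumed order-9/2M compound pages on 4K base pages; not
kernel code):

#include <stdio.h>

int main(void)
{
        unsigned long vmemmap_shift = 9;        /* device-dax 2M align */
        unsigned long vmemmap_nr = 1UL << vmemmap_shift;
        unsigned long first_pfn = 0x100000, end_pfn = 0x140000;
        unsigned long nr_pfns = end_pfn - first_pfn;

        /* the pfn walk now steps one compound page at a time */
        printf("step: %lu pfns per iteration\n", vmemmap_nr);
        /* one percpu ref per compound page instead of per base page */
        printf("refs: %lu (was %lu)\n", nr_pfns >> vmemmap_shift, nr_pfns);
        return 0;
}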

Signed-off-by: Joao Martins 
Reviewed-by: Dan Williams 
---
 include/linux/memremap.h | 11 +++
 mm/memremap.c| 12 ++--
 mm/page_alloc.c  | 38 +-
 3 files changed, 54 insertions(+), 7 deletions(-)

diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 119f130ef8f1..aaf85bda093b 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -99,6 +99,11 @@ struct dev_pagemap_ops {
  * @done: completion for @internal_ref
  * @type: memory type: see MEMORY_* in memory_hotplug.h
  * @flags: PGMAP_* flags to specify defailed behavior
+ * @vmemmap_shift: structural definition of how the vmemmap page metadata
+ *  is populated, specifically the metadata page order.
+ * A zero value (default) uses base pages as the vmemmap metadata
+ * representation. A bigger value will set up compound struct pages
+ * of the requested order value.
  * @ops: method table
  * @owner: an opaque pointer identifying the entity that manages this
  * instance.  Used by various helpers to make sure that no
@@ -114,6 +119,7 @@ struct dev_pagemap {
struct completion done;
enum memory_type type;
unsigned int flags;
+   unsigned long vmemmap_shift;
const struct dev_pagemap_ops *ops;
void *owner;
int nr_range;
@@ -130,6 +136,11 @@ static inline struct vmem_altmap *pgmap_altmap(struct dev_pagemap *pgmap)
return NULL;
 }
 
+static inline unsigned long pgmap_vmemmap_nr(struct dev_pagemap *pgmap)
+{
+   return 1 << pgmap->vmemmap_shift;
+}
+
 #ifdef CONFIG_ZONE_DEVICE
 bool pfn_zone_device_reserved(unsigned long pfn);
 void *memremap_pages(struct dev_pagemap *pgmap, int nid);
diff --git a/mm/memremap.c b/mm/memremap.c
index 84de22c14567..3afa246eb1ab 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -102,11 +102,11 @@ static unsigned long pfn_end(struct dev_pagemap *pgmap, int range_id)
return (range->start + range_len(range)) >> PAGE_SHIFT;
 }
 
-static unsigned long pfn_next(unsigned long pfn)
+static unsigned long pfn_next(struct dev_pagemap *pgmap, unsigned long pfn)
 {
-   if (pfn % 1024 == 0)
+   if (pfn % (1024 << pgmap->vmemmap_shift))
cond_resched();
-   return pfn + 1;
+   return pfn + pgmap_vmemmap_nr(pgmap);
 }
 
 /*
@@ -130,7 +130,7 @@ bool pfn_zone_device_reserved(unsigned long pfn)
 }
 
 #define for_each_device_pfn(pfn, map, i) \
-   for (pfn = pfn_first(map, i); pfn < pfn_end(map, i); pfn = pfn_next(pfn))
+   for (pfn = pfn_first(map, i); pfn < pfn_end(map, i); pfn = pfn_next(map, pfn))
 
 static void dev_pagemap_kill(struct dev_pagemap *pgmap)
 {
@@ -315,8 +315,8 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
memmap_init_zone_device(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
PHYS_PFN(range->start),
PHYS_PFN(range_len(range)), pgmap);
-   percpu_ref_get_many(pgmap->ref, pfn_end(pgmap, range_id)
-   - pfn_first(pgmap, range_id));
+   percpu_ref_get_many(pgmap->ref, (pfn_end(pgmap, range_id)
+   - pfn_first(pgmap, range_id)) >> pgmap->vmemmap_shift);
return 0;
 
 err_add_memory:
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f7f33c83222f..ea537839816e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6605,6 +6605,35 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
}
 }
 
+static void __ref memmap_init_compound(struct page *head,
+  unsigned long head_pfn,
+  unsigned long zone_idx, int nid,
+  struct dev_pagemap *pgmap,
+  unsigned long nr_pages)
+{
+   unsigned long pfn, end_pfn = head_pfn + nr_pages;
+   unsigned int order = pgmap->vmemmap_shift;
+
+   __SetPageHead(head);
+   for (pfn = head_pfn + 1; pfn < end_pfn; pfn++) {
+   struct page *page = pfn_to_page(pfn);
+
+   __init_zone_device_page

[PATCH v6 00/10] mm, device-dax: Introduce compound pages in devmap

2021-11-24 Thread Joao Martins
Changes since v5[9]:

* Keep @dev on the previous line to improve readability on 
patch 5 (Christoph Hellwig)
* Document is_static() function to clarify what are static and
dynamic dax regions in patch 7 (Christoph Hellwig)
* Deduce @f_mapping and @pgmap from vmf->vma->vm_file to reduce
the number of arguments of set_{page,compound}_mapping() in last
patch (Christoph Hellwig)
* Factor out @mapping initialization to a separate helper ([new] patch 8)
and rename set_page_mapping() to dax_set_mapping() in the process.
* Remove set_compound_mapping() and instead adjust dax_set_mapping()
to handle @vmemmap_shift case on the last patch. This greatly
simplifies the last patch, and addresses a similar comment by Christoph
on having an earlier return. No functional change on the changes
to dax_set_mapping compared to its earlier version so I retained
Dan's Rb on last patch.
* Initialize the mapping prior to inserting the PTE/PMD/PUD as opposed
to after the fact. ([new] patch 9, Jason Gunthorpe)

Patches 8 and 9 are new (small cleanups) in v6.
Patches 6 - 9 are the ones missing Rb tags.

---

This series converts device-dax to use compound pages, and moves away from the
'struct page per basepage on PMD/PUD' that is done today. Doing so 1) unlocks
a few noticeable improvements in unpin_user_pages() and makes the
device-dax+altmap case 4x faster at pinning (numbers below and in the last
patch), and 2) as mentioned in various other threads, it is an important step
towards cleaning up ZONE_DEVICE refcounting.

I've split the compound-pages-on-devmap part from the rest based on recent
discussions on pending and planned future devmap work[5][6]. There is consensus
that device-dax should be using compound pages to represent its PMD/PUDs just
like HugeTLB and THP, and that leads to less specialization of the dax parts.
I will pursue the rest of the work in parallel once this part is merged, in
particular the GUP-{slow,fast} improvements[7] and the tail struct page
deduplication memory savings part[8].

To summarize what the series does:

Patch 1: Prepare hwpoisoning to work with dax compound pages.

Patches 2-3: Split the current utility function prep_compound_page()
into head and tail counterparts and use those two helpers where appropriate,
to take advantage of caches being warm after __init_single_page(). This is
used when initializing the ZONE_DEVICE memmap as we bring up device-dax
namespaces.

Patches 4-10: Add devmap support for compound pages in device-dax.
memmap_init_zone_device() initializes its metadata as compound pages, and a
new devmap property known as @vmemmap_shift is introduced which outlines how
the vmemmap is structured (defaults to base pages as done today). The property
essentially describes the page order of the metadata.
While at it, do a few cleanups in device-dax in patches 5-9.
Finally, derive the device-dax devmap @vmemmap_shift from its own @align
property. @vmemmap_shift defaults to 0 (which is today's case of base pages in
devmap, like fsdax and the others) and the usage of a compound devmap is
optional. Starting with device-dax (*not* fsdax) we enable it by default.
There are a few pinning improvements, particularly for the unpinning case and
altmap, and unpin_user_page_range_dirty_lock() becomes just as effective as it
is for THP/hugetlb[0] pages.

$ gup_test -f /dev/dax1.0 -m 16384 -r 10 -S -a -n 512 -w
(pin_user_pages_fast 2M pages) put:~71 ms -> put:~22 ms
[altmap]
(pin_user_pages_fast 2M pages) get:~524ms put:~525 ms -> get: ~127ms put:~71ms

 $ gup_test -f /dev/dax1.0 -m 129022 -r 10 -S -a -n 512 -w
(pin_user_pages_fast 2M pages) put:~513 ms -> put:~188 ms
[altmap with -m 127004]
(pin_user_pages_fast 2M pages) get:~4.1 secs put:~4.12 secs -> get:~1sec put:~563ms

Tested on x86 with 1TB+ of pmem (alongside registering it with RDMA with and
without altmap), alongside gup_test selftests with dynamic dax regions and
static dax regions. Coupled with ndctl unit tests for dynamic dax devices
that exercise all of this. Note, for dynamic dax regions I had to revert
commit 8aa83e6395 ("x86/setup: Call early_reserve_memory() earlier"), it
is a known issue that this commit broke efi_fake_mem=.

Patches apply on top of linux-next tag next-20211124 (commit 4b74e088fef6).

Thanks for all the review so far.

As always, comments and suggestions are very much appreciated!

Older Changelog,

v4[4] -> v5[9]:

* Remove patches 8-14 as they will go in 2 separate (parallel) series;
* Rename @geometry to @vmemmap_shift (Christoph Hellwig)
* Make @vmemmap_shift an order rather than nr of pages (Christoph Hellwig)
* Consequently remove helper pgmap_geometry_order() as it's no longer
needed, in place of accessing directly the structure member [Patch 4 and 8]
* Rename pgmap_geometry() to pgmap_vmemmap_nr() in patches 4 and 8;
* Remove usage of pgmap_geometry() in favour of testing
  @vmemmap_shift for non-zero directly in

[PATCH v6 03/10] mm/page_alloc: refactor memmap_init_zone_device() page init

2021-11-24 Thread Joao Martins
Move the struct page init to a helper function, __init_zone_device_page().

This is in preparation for sharing the storage for compound page
metadata.

Signed-off-by: Joao Martins 
Reviewed-by: Dan Williams 
---
 mm/page_alloc.c | 74 +++--
 1 file changed, 41 insertions(+), 33 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ba096f731e36..f7f33c83222f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6565,6 +6565,46 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
 }
 
 #ifdef CONFIG_ZONE_DEVICE
+static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
+ unsigned long zone_idx, int nid,
+ struct dev_pagemap *pgmap)
+{
+
+   __init_single_page(page, pfn, zone_idx, nid);
+
+   /*
+* Mark page reserved as it will need to wait for onlining
+* phase for it to be fully associated with a zone.
+*
+* We can use the non-atomic __set_bit operation for setting
+* the flag as we are still initializing the pages.
+*/
+   __SetPageReserved(page);
+
+   /*
+* ZONE_DEVICE pages union ->lru with a ->pgmap back pointer
+* and zone_device_data.  It is a bug if a ZONE_DEVICE page is
+* ever freed or placed on a driver-private list.
+*/
+   page->pgmap = pgmap;
+   page->zone_device_data = NULL;
+
+   /*
+* Mark the block movable so that blocks are reserved for
+* movable at startup. This will force kernel allocations
+* to reserve their blocks rather than leaking throughout
+* the address space during boot when many long-lived
+* kernel allocations are made.
+*
+* Please note that MEMINIT_HOTPLUG path doesn't clear memmap
+* because this is done early in section_activate()
+*/
+   if (IS_ALIGNED(pfn, pageblock_nr_pages)) {
+   set_pageblock_migratetype(page, MIGRATE_MOVABLE);
+   cond_resched();
+   }
+}
+
 void __ref memmap_init_zone_device(struct zone *zone,
   unsigned long start_pfn,
   unsigned long nr_pages,
@@ -6593,39 +6633,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
for (pfn = start_pfn; pfn < end_pfn; pfn++) {
struct page *page = pfn_to_page(pfn);
 
-   __init_single_page(page, pfn, zone_idx, nid);
-
-   /*
-* Mark page reserved as it will need to wait for onlining
-* phase for it to be fully associated with a zone.
-*
-* We can use the non-atomic __set_bit operation for setting
-* the flag as we are still initializing the pages.
-*/
-   __SetPageReserved(page);
-
-   /*
-* ZONE_DEVICE pages union ->lru with a ->pgmap back pointer
-* and zone_device_data.  It is a bug if a ZONE_DEVICE page is
-* ever freed or placed on a driver-private list.
-*/
-   page->pgmap = pgmap;
-   page->zone_device_data = NULL;
-
-   /*
-* Mark the block movable so that blocks are reserved for
-* movable at startup. This will force kernel allocations
-* to reserve their blocks rather than leaking throughout
-* the address space during boot when many long-lived
-* kernel allocations are made.
-*
-* Please note that MEMINIT_HOTPLUG path doesn't clear memmap
-* because this is done early in section_activate()
-*/
-   if (IS_ALIGNED(pfn, pageblock_nr_pages)) {
-   set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-   cond_resched();
-   }
+   __init_zone_device_page(page, pfn, zone_idx, nid, pgmap);
}
 
pr_info("%s initialised %lu pages in %ums\n", __func__,
-- 
2.17.2




[PATCH v6 05/10] device-dax: use ALIGN() for determining pgoff

2021-11-24 Thread Joao Martins
Rather than calculating @pgoff manually with an open-coded mask, switch to ALIGN().
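
For reference, a userspace model of what the kernel's ALIGN() computes:
it rounds up to the next multiple of a power-of-two boundary, so it
matches the open-coded mask whenever the address is already aligned to
fault_size (assumed values below):

#include <stdio.h>

#define ALIGN(x, a)     (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

int main(void)
{
        unsigned long fault_size = 1UL << 21;           /* 2M PMD fault */
        unsigned long addr = 0x7f0000200000UL;          /* 2M-aligned */

        printf("%#lx\n", ALIGN(addr, fault_size));      /* 0x7f0000200000 */
        printf("%#lx\n", addr & ~(fault_size - 1));     /* same result */
        return 0;
}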

Suggested-by: Dan Williams 
Signed-off-by: Joao Martins 
Reviewed-by: Dan Williams 
---
 drivers/dax/device.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index dd8222a42808..0b82159b3564 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -234,8 +234,8 @@ static vm_fault_t dev_dax_huge_fault(struct vm_fault *vmf,
 * mapped. No need to consider the zero page, or racing
 * conflicting mappings.
 */
-   pgoff = linear_page_index(vmf->vma, vmf->address
-   & ~(fault_size - 1));
+   pgoff = linear_page_index(vmf->vma,
+   ALIGN(vmf->address, fault_size));
for (i = 0; i < fault_size / PAGE_SIZE; i++) {
struct page *page;
 
-- 
2.17.2




[PATCH v6 07/10] device-dax: ensure dev_dax->pgmap is valid for dynamic devices

2021-11-24 Thread Joao Martins
Right now, only static dax regions have a valid @pgmap pointer in their
struct dev_dax. The dynamic dax case, however, does not.

In preparation for device-dax compound devmap support, make sure that the
dev_dax pgmap field is set after it has been allocated and initialized.

Dynamic dax devices have their @pgmap allocated at probe() and it is
managed by devm (in contrast to static dax regions, where a pgmap is
provided and dax core kfrees it). So, in addition to ensuring a valid
@pgmap, clear the pgmap when the dynamic dax device is released, to avoid
the same pgmap ranges being re-requested across multiple region device
reconfigurations.

Add a static_dev_dax() helper and use it in dev_dax_probe() to make the
initialization differences between dynamic and static regions more
explicit. While at it, consolidate the ranges initialization where we
allocate the @pgmap for the dynamic dax region case. Also take the
opportunity to document the differences between static and dynamic dax
regions.

Suggested-by: Dan Williams 
Signed-off-by: Joao Martins 
---
 drivers/dax/bus.c| 32 
 drivers/dax/bus.h|  1 +
 drivers/dax/device.c | 29 +
 3 files changed, 54 insertions(+), 8 deletions(-)

diff --git a/drivers/dax/bus.c b/drivers/dax/bus.c
index 6cc4da4c713d..a22350e822fa 100644
--- a/drivers/dax/bus.c
+++ b/drivers/dax/bus.c
@@ -129,11 +129,35 @@ ATTRIBUTE_GROUPS(dax_drv);
 
 static int dax_bus_match(struct device *dev, struct device_driver *drv);
 
+/*
+ * Static dax regions are regions created by an external subsystem
+ * nvdimm where a single range is assigned. Its boundaries are by the external
+ * subsystem and are usually limited to one physical memory range. For example,
+ * for PMEM it is usually defined by NVDIMM Namespace boundaries (i.e. a
+ * single contiguous range)
+ *
+ * On dynamic dax regions, the assigned region can be partitioned by dax core
+ * into multiple subdivisions. A subdivision is represented into one
+ * /dev/daxN.M device composed by one or more potentially discontiguous ranges.
+ *
+ * When allocating a dax region, drivers must set whether it's static
+ * (IORESOURCE_DAX_STATIC).  On static dax devices, the @pgmap is pre-assigned
+ * to dax core when calling devm_create_dev_dax(), whereas in dynamic dax
+ * devices it is NULL but afterwards allocated by dax core on device ->probe().
+ * Care is needed to make sure that dynamic dax devices are torn down with a
+ * cleared @pgmap field (see kill_dev_dax()).
+ */
 static bool is_static(struct dax_region *dax_region)
 {
return (dax_region->res.flags & IORESOURCE_DAX_STATIC) != 0;
 }
 
+bool static_dev_dax(struct dev_dax *dev_dax)
+{
+   return is_static(dev_dax->region);
+}
+EXPORT_SYMBOL_GPL(static_dev_dax);
+
 static u64 dev_dax_size(struct dev_dax *dev_dax)
 {
u64 size = 0;
@@ -363,6 +387,14 @@ void kill_dev_dax(struct dev_dax *dev_dax)
 
kill_dax(dax_dev);
unmap_mapping_range(inode->i_mapping, 0, 0, 1);
+
+   /*
+* Dynamic dax region have the pgmap allocated via dev_kzalloc()
+* and thus freed by devm. Clear the pgmap to not have stale pgmap
+* ranges on probe() from previous reconfigurations of region devices.
+*/
+   if (!static_dev_dax(dev_dax))
+   dev_dax->pgmap = NULL;
 }
 EXPORT_SYMBOL_GPL(kill_dev_dax);
 
diff --git a/drivers/dax/bus.h b/drivers/dax/bus.h
index 1e946ad7780a..4acdfee7dd59 100644
--- a/drivers/dax/bus.h
+++ b/drivers/dax/bus.h
@@ -48,6 +48,7 @@ int __dax_driver_register(struct dax_device_driver *dax_drv,
__dax_driver_register(driver, THIS_MODULE, KBUILD_MODNAME)
 void dax_driver_unregister(struct dax_device_driver *dax_drv);
 void kill_dev_dax(struct dev_dax *dev_dax);
+bool static_dev_dax(struct dev_dax *dev_dax);
 
 #if IS_ENABLED(CONFIG_DEV_DAX_PMEM_COMPAT)
 int dev_dax_probe(struct dev_dax *dev_dax);
diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index 038816b91af6..630de5a795b0 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -398,18 +398,34 @@ int dev_dax_probe(struct dev_dax *dev_dax)
void *addr;
int rc, i;
 
-   pgmap = dev_dax->pgmap;
-   if (dev_WARN_ONCE(dev, pgmap && dev_dax->nr_range > 1,
-   "static pgmap / multi-range device conflict\n"))
-   return -EINVAL;
+   if (static_dev_dax(dev_dax))  {
+   if (dev_dax->nr_range > 1) {
+   dev_warn(dev,
+   "static pgmap / multi-range device conflict\n");
+   return -EINVAL;
+   }
+
+   pgmap = dev_dax->pgmap;
+   } else {
+   if (dev_dax->pgmap) {
+   dev_warn(dev,
+"dynamic-dax with pre-populated page map\n");
+   return -EINVAL;
+   }
 
-   if (!pgmap) {
pgmap = devm_kzalloc(dev,
struct_size(

[PATCH v6 08/10] device-dax: factor out page mapping initialization

2021-11-24 Thread Joao Martins
Move initialization of page->mapping into a separate helper.

This is in preparation for moving the mapping setup to before the
page table entry is inserted, and for tidying up compound page
handling in one helper.
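
To make the index math concrete, a userspace sketch of what the helper
computes for a PMD-sized fault (assumed addresses; linear_page_index()
is modeled by the open-coded expression below):

#include <stdio.h>

#define PAGE_SHIFT      12
#define PAGE_SIZE       (1UL << PAGE_SHIFT)

int main(void)
{
        unsigned long vm_start = 0x7f0000000000UL, vm_pgoff = 0;
        unsigned long fault_addr = 0x7f0000200000UL;    /* 2M-aligned */
        unsigned long fault_size = 1UL << 21;
        unsigned long nr_pages = fault_size / PAGE_SIZE;
        unsigned long pgoff = ((fault_addr - vm_start) >> PAGE_SHIFT) + vm_pgoff;
        unsigned long i;

        /* each base page in the fault gets a consecutive ->index */
        for (i = 0; i < nr_pages; i++)
                if (i < 2 || i == nr_pages - 1)
                        printf("page[%lu].index = %lu\n", i, pgoff + i);
        return 0;
}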

Signed-off-by: Joao Martins 
---
 drivers/dax/device.c | 45 ++--
 1 file changed, 23 insertions(+), 22 deletions(-)

diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index 630de5a795b0..9c87927d4bc2 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -73,6 +73,27 @@ __weak phys_addr_t dax_pgoff_to_phys(struct dev_dax *dev_dax, pgoff_t pgoff,
return -1;
 }
 
+static void dax_set_mapping(struct vm_fault *vmf, pfn_t pfn,
+ unsigned long fault_size)
+{
+   unsigned long i, nr_pages = fault_size / PAGE_SIZE;
+   struct file *filp = vmf->vma->vm_file;
+   pgoff_t pgoff;
+
+   pgoff = linear_page_index(vmf->vma,
+   ALIGN(vmf->address, fault_size));
+
+   for (i = 0; i < nr_pages; i++) {
+   struct page *page = pfn_to_page(pfn_t_to_pfn(pfn) + i);
+
+   if (page->mapping)
+   continue;
+
+   page->mapping = filp->f_mapping;
+   page->index = pgoff + i;
+   }
+}
+
 static vm_fault_t __dev_dax_pte_fault(struct dev_dax *dev_dax,
struct vm_fault *vmf, pfn_t *pfn)
 {
@@ -224,28 +245,8 @@ static vm_fault_t dev_dax_huge_fault(struct vm_fault *vmf,
rc = VM_FAULT_SIGBUS;
}
 
-   if (rc == VM_FAULT_NOPAGE) {
-   unsigned long i;
-   pgoff_t pgoff;
-
-   /*
-* In the device-dax case the only possibility for a
-* VM_FAULT_NOPAGE result is when device-dax capacity is
-* mapped. No need to consider the zero page, or racing
-* conflicting mappings.
-*/
-   pgoff = linear_page_index(vmf->vma,
-   ALIGN(vmf->address, fault_size));
-   for (i = 0; i < fault_size / PAGE_SIZE; i++) {
-   struct page *page;
-
-   page = pfn_to_page(pfn_t_to_pfn(pfn) + i);
-   if (page->mapping)
-   continue;
-   page->mapping = filp->f_mapping;
-   page->index = pgoff + i;
-   }
-   }
+   if (rc == VM_FAULT_NOPAGE)
+   dax_set_mapping(vmf, pfn, fault_size);
dax_read_unlock(id);
 
return rc;
-- 
2.17.2




[PATCH v6 06/10] device-dax: use struct_size()

2021-11-24 Thread Joao Martins
Use the struct_size() helper for the size of a struct with a variable
array member at the end, rather than calculating it manually.
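
A userspace sketch of what struct_size() expands to for a trailing array
member (hypothetical struct; the real kernel macro also guards against
arithmetic overflow, which is omitted here):

#include <stdio.h>
#include <stddef.h>

struct pgmap_like {
        int nr_range;
        struct { unsigned long start, end; } ranges[1];
};

#define struct_size_sketch(p, member, n) \
        (sizeof(*(p)) + sizeof((p)->member[0]) * (n))

int main(void)
{
        struct pgmap_like *p = NULL;    /* only used inside sizeof */
        size_t nr_range = 4;

        /* matches sizeof(*pgmap) + sizeof(struct range) * (nr_range - 1) */
        printf("%zu bytes\n", struct_size_sketch(p, ranges, nr_range - 1));
        return 0;
}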

Suggested-by: Dan Williams 
Signed-off-by: Joao Martins 
---
 drivers/dax/device.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index 0b82159b3564..038816b91af6 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -404,8 +404,9 @@ int dev_dax_probe(struct dev_dax *dev_dax)
return -EINVAL;
 
if (!pgmap) {
-   pgmap = devm_kzalloc(dev, sizeof(*pgmap) + sizeof(struct range)
-   * (dev_dax->nr_range - 1), GFP_KERNEL);
+   pgmap = devm_kzalloc(dev,
+   struct_size(pgmap, ranges, dev_dax->nr_range - 1),
+   GFP_KERNEL);
if (!pgmap)
return -ENOMEM;
pgmap->nr_range = dev_dax->nr_range;
-- 
2.17.2




[PATCH v6 09/10] device-dax: set mapping prior to vmf_insert_pfn{,_pmd,pud}()

2021-11-24 Thread Joao Martins
Normally, the @page mapping is set prior to inserting the page into a
page table entry. Make device-dax adhere to the same ordering, rather
than setting the mapping after the PTE is inserted.

The address_space never changes and it is always associated with the
same inode and underlying pages. So, the page mapping is set once but
cleared when the struct pages are removed/freed (i.e. after
{devm_}memunmap_pages()).

Suggested-by: Jason Gunthorpe 
Signed-off-by: Joao Martins 
---
 drivers/dax/device.c | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index 9c87927d4bc2..0ef9fecec005 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -121,6 +121,8 @@ static vm_fault_t __dev_dax_pte_fault(struct dev_dax *dev_dax,
 
*pfn = phys_to_pfn_t(phys, PFN_DEV|PFN_MAP);
 
+   dax_set_mapping(vmf, *pfn, fault_size);
+
return vmf_insert_mixed(vmf->vma, vmf->address, *pfn);
 }
 
@@ -161,6 +163,8 @@ static vm_fault_t __dev_dax_pmd_fault(struct dev_dax *dev_dax,
 
*pfn = phys_to_pfn_t(phys, PFN_DEV|PFN_MAP);
 
+   dax_set_mapping(vmf, *pfn, fault_size);
+
return vmf_insert_pfn_pmd(vmf, *pfn, vmf->flags & FAULT_FLAG_WRITE);
 }
 
@@ -203,6 +207,8 @@ static vm_fault_t __dev_dax_pud_fault(struct dev_dax *dev_dax,
 
*pfn = phys_to_pfn_t(phys, PFN_DEV|PFN_MAP);
 
+   dax_set_mapping(vmf, *pfn, fault_size);
+
return vmf_insert_pfn_pud(vmf, *pfn, vmf->flags & FAULT_FLAG_WRITE);
 }
 #else
@@ -245,8 +251,6 @@ static vm_fault_t dev_dax_huge_fault(struct vm_fault *vmf,
rc = VM_FAULT_SIGBUS;
}
 
-   if (rc == VM_FAULT_NOPAGE)
-   dax_set_mapping(vmf, pfn, fault_size);
dax_read_unlock(id);
 
return rc;
-- 
2.17.2




[PATCH v6 10/10] device-dax: compound devmap support

2021-11-24 Thread Joao Martins
Use the newly added compound devmap facility, which maps the assigned dax
ranges as compound pages at a page size of @align.

dax devices are created with a fixed @align (huge page size) which is also
enforced at mmap() of the device. Faults consequently happen at the @align
specified at creation time, and that does not change throughout the dax
device's lifetime. MCEs unmap a whole dax huge page, and splits likewise
occur at the configured page size.
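
For reference, a userspace sketch of the @vmemmap_shift derivation done
in dev_dax_probe() below, i.e. order_base_2(align >> PAGE_SHIFT),
assuming 4K base pages:

#include <stdio.h>

#define PAGE_SHIFT      12

/* ceil(log2(n)), like the kernel's order_base_2() for n >= 1 */
static unsigned int order_base_2_sketch(unsigned long n)
{
        unsigned int order = 0;

        while ((1UL << order) < n)
                order++;
        return order;
}

int main(void)
{
        unsigned long aligns[] = { 1UL << 21, 1UL << 30 };      /* 2M, 1G */
        int i;

        for (i = 0; i < 2; i++)
                printf("align %#lx -> vmemmap_shift %u\n", aligns[i],
                       order_base_2_sketch(aligns[i] >> PAGE_SHIFT));
        return 0;
}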

Performance measured by gup_test improves considerably for
unpin_user_pages() and altmap with NVDIMMs:

$ gup_test -f /dev/dax1.0 -m 16384 -r 10 -S -a -n 512 -w
(pin_user_pages_fast 2M pages) put:~71 ms -> put:~22 ms
[altmap]
(pin_user_pages_fast 2M pages) get:~524ms put:~525 ms -> get: ~127ms put:~71ms

 $ gup_test -f /dev/dax1.0 -m 129022 -r 10 -S -a -n 512 -w
(pin_user_pages_fast 2M pages) put:~513 ms -> put:~188 ms
[altmap with -m 127004]
(pin_user_pages_fast 2M pages) get:~4.1 secs put:~4.12 secs -> get:~1sec put:~563ms

... as well as unpin_user_page_range_dirty_lock() being just as effective
as THP/hugetlb[0] pages.

[0] 
https://lore.kernel.org/linux-mm/20210212130843.13865-5-joao.m.mart...@oracle.com/

Signed-off-by: Joao Martins 
Reviewed-by: Dan Williams 
---
 drivers/dax/device.c | 9 +
 1 file changed, 9 insertions(+)

diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index 0ef9fecec005..9b51108aea91 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -78,14 +78,20 @@ static void dax_set_mapping(struct vm_fault *vmf, pfn_t pfn,
 {
unsigned long i, nr_pages = fault_size / PAGE_SIZE;
struct file *filp = vmf->vma->vm_file;
+   struct dev_dax *dev_dax = filp->private_data;
pgoff_t pgoff;
 
+   /* mapping is only set on the head */
+   if (dev_dax->pgmap->vmemmap_shift)
+   nr_pages = 1;
+
pgoff = linear_page_index(vmf->vma,
ALIGN(vmf->address, fault_size));
 
for (i = 0; i < nr_pages; i++) {
struct page *page = pfn_to_page(pfn_t_to_pfn(pfn) + i);
 
+   page = compound_head(page);
if (page->mapping)
continue;
 
@@ -445,6 +451,9 @@ int dev_dax_probe(struct dev_dax *dev_dax)
}
 
pgmap->type = MEMORY_DEVICE_GENERIC;
+   if (dev_dax->align > PAGE_SIZE)
+   pgmap->vmemmap_shift =
+   order_base_2(dev_dax->align >> PAGE_SHIFT);
addr = devm_memremap_pages(dev, pgmap);
if (IS_ERR(addr))
return PTR_ERR(addr);
-- 
2.17.2




Re: [PATCH v6 00/10] mm, device-dax: Introduce compound pages in devmap

2021-11-24 Thread Dan Williams
On Wed, Nov 24, 2021 at 11:10 AM Joao Martins  wrote:
>
> Changes since v5[9]:
>
> * Keep @dev on the previous line to improve readability on
> patch 5 (Christoph Hellwig)
> * Document is_static() function to clarify what are static and
> dynamic dax regions in patch 7 (Christoph Hellwig)
> * Deduce @f_mapping and @pgmap from vmf->vma->vm_file to reduce
> the number of arguments of set_{page,compound}_mapping() in last
> patch (Christoph Hellwig)
> * Factor out @mapping initialization to a separate helper ([new] patch 8)
> and rename set_page_mapping() to dax_set_mapping() in the process.
> * Remove set_compound_mapping() and instead adjust dax_set_mapping()
> to handle @vmemmap_shift case on the last patch. This greatly
> simplifies the last patch, and addresses a similar comment by Christoph
> on having an earlier return. No functional change on the changes
> to dax_set_mapping compared to its earlier version so I retained
> Dan's Rb on last patch.
> * Initialize the mapping prior to inserting the PTE/PMD/PUD as opposed
> to after the fact. ([new] patch 9, Jason Gunthorpe)
>

Looks good Joao, I was about to ping Christoph and Jason to make sure
their review comments are fully addressed before pulling this into my
dax tree, but I see Andrew has already picked this up. I'm ok for this
to go through -mm.

It might end up colliding with some of the DAX cleanups that are
brewing, but if that happens I might apply them to resolve conflicts
and ask Andrew to drop them out of -mm. We can cross that bridge
later.

Thanks for all the effort on this Joao, it's a welcome improvement.



Re: [PATCH v6 00/10] mm, device-dax: Introduce compound pages in devmap

2021-11-24 Thread Andrew Morton
On Wed, 24 Nov 2021 14:30:56 -0800 Dan Williams  wrote:

> It might end up colliding with some of the DAX cleanups that are
> brewing, but if that happens I might apply them to resolve conflicts
> and ask Andrew to drop them out of -mm. We can cross that bridge
> later.

Yep, not a problem.



Re: [PATCH v6 04/10] mm/memremap: add ZONE_DEVICE support for compound pages

2021-11-24 Thread Christoph Hellwig
On Wed, Nov 24, 2021 at 07:09:59PM +, Joao Martins wrote:
> Add a new @vmemmap_shift property for struct dev_pagemap which specifies that a
> devmap is composed of a set of compound pages of order @vmemmap_shift, instead of
> base pages. When a compound page devmap is requested, all but the first
> page are initialised as tail pages instead of order-0 pages.

Please wrap commit log lines after 73 characters.

>  #define for_each_device_pfn(pfn, map, i) \
> - for (pfn = pfn_first(map, i); pfn < pfn_end(map, i); pfn = pfn_next(pfn))
> + for (pfn = pfn_first(map, i); pfn < pfn_end(map, i); pfn = pfn_next(map, pfn))

It would be nice to fix up this long line while you're at it.

>  static void dev_pagemap_kill(struct dev_pagemap *pgmap)
>  {
> @@ -315,8 +315,8 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
>   memmap_init_zone_device(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
>   PHYS_PFN(range->start),
>   PHYS_PFN(range_len(range)), pgmap);
> - percpu_ref_get_many(pgmap->ref, pfn_end(pgmap, range_id)
> - - pfn_first(pgmap, range_id));
> + percpu_ref_get_many(pgmap->ref, (pfn_end(pgmap, range_id)
> + - pfn_first(pgmap, range_id)) >> pgmap->vmemmap_shift);

In the Linux coding style the - goes ointo the first line.

But it would be really nice to clean this up with a helper ala pfn_len
anyway:

percpu_ref_get_many(pgmap->ref,
pfn_len(pgmap, range_id) >> pgmap->vmemmap_shift);
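
One possible shape for such a helper, sketched against the existing
pfn_first()/pfn_end() helpers in mm/memremap.c (a sketch only, not part
of the series as posted; callers would still apply the
>> pgmap->vmemmap_shift as above):

static unsigned long pfn_len(struct dev_pagemap *pgmap, int range_id)
{
        /* number of device pfns covered by this pgmap range */
        return pfn_end(pgmap, range_id) - pfn_first(pgmap, range_id);
}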