On 11/25/21 06:11, Christoph Hellwig wrote:
> On Wed, Nov 24, 2021 at 07:09:59PM +0000, Joao Martins wrote:
>> Add a new @vmemmap_shift property for struct dev_pagemap which
>> specifies that a devmap is composed of a set of compound pages of
>> order @vmemmap_shift, instead of base pages. When a compound page
>> devmap is requested, all but the first page are initialised as tail
>> pages instead of order-0 pages.
> 
> Please wrap commit log lines after 73 characters.
> 
Fixed.

>>  #define for_each_device_pfn(pfn, map, i) \
>> -    for (pfn = pfn_first(map, i); pfn < pfn_end(map, i); pfn = pfn_next(pfn))
>> +    for (pfn = pfn_first(map, i); pfn < pfn_end(map, i); pfn = pfn_next(map, pfn))
> 
> It would be nice to fix up this long line while you're at it.
> 
OK -- I am going to assume it's enough to move the pfn = pfn_next(...)
clause onto the next line.

>>  static void dev_pagemap_kill(struct dev_pagemap *pgmap)
>>  {
>> @@ -315,8 +315,8 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
>>      memmap_init_zone_device(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
>>                              PHYS_PFN(range->start),
>>                              PHYS_PFN(range_len(range)), pgmap);
>> -    percpu_ref_get_many(pgmap->ref, pfn_end(pgmap, range_id)
>> -                    - pfn_first(pgmap, range_id));
>> +    percpu_ref_get_many(pgmap->ref, (pfn_end(pgmap, range_id)
>> +                    - pfn_first(pgmap, range_id)) >> pgmap->vmemmap_shift);
> 
> In the Linux coding style the - goes into the first line.
> 
> But it would be really nice to clean this up with a helper ala pfn_len
> anyway:
> 
>       percpu_ref_get_many(pgmap->ref,
>                           pfn_len(pgmap, range_id) >> pgmap->vmemmap_shift);
> 
OK, I moved the computation to a helper.

I've staged your comments (see the diff below for this patch), plus wrapped
the commit message to 73 columns (I've also double-checked, and this patch
seems to be the only one making that mistake).

I'll wait a couple of days before following up with v7, in case you have
further comments on the other patches.

diff --git a/mm/memremap.c b/mm/memremap.c
index 3afa246eb1ab..d591f3aa8884 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -109,6 +109,12 @@ static unsigned long pfn_next(struct dev_pagemap *pgmap, unsigned long pfn)
        return pfn + pgmap_vmemmap_nr(pgmap);
 }

+static unsigned long pfn_len(struct dev_pagemap *pgmap, unsigned long range_id)
+{
+       return (pfn_end(pgmap, range_id) -
+               pfn_first(pgmap, range_id)) >> pgmap->vmemmap_shift;
+}
+
 /*
  * This returns true if the page is reserved by ZONE_DEVICE driver.
  */
@@ -130,7 +136,8 @@ bool pfn_zone_device_reserved(unsigned long pfn)
 }

 #define for_each_device_pfn(pfn, map, i) \
-       for (pfn = pfn_first(map, i); pfn < pfn_end(map, i); pfn = pfn_next(map, pfn))
+       for (pfn = pfn_first(map, i); pfn < pfn_end(map, i); \
+            pfn = pfn_next(map, pfn))

 static void dev_pagemap_kill(struct dev_pagemap *pgmap)
 {
@@ -315,8 +322,7 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
        memmap_init_zone_device(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
                                PHYS_PFN(range->start),
                                PHYS_PFN(range_len(range)), pgmap);
-       percpu_ref_get_many(pgmap->ref, (pfn_end(pgmap, range_id)
-                       - pfn_first(pgmap, range_id)) >> pgmap->vmemmap_shift);
+       percpu_ref_get_many(pgmap->ref, pfn_len(pgmap, range_id));
        return 0;

 err_add_memory:
