determine how much do_fault_around() will
attempt to read when processing a fault.
These comments should have been updated when fault_around_pages() and
fault_around_mask() were removed in commit
aecd6f44266c13b8709245b21ded2d19291ab070.
Signed-off-by: William Kucharski
Reviewed-by: Larry Bassel
ing on for some time now.
Thanks,
William Kucharski
e code elsewhere performs additional checks and does the actual THP
mapping, not an all-encompassing go/no go check for THP mapping.
Thanks,
William Kucharski
a cascade of messages,
within reason; once there are so many that they overflow the dmesg buffer,
they're of limited usefulness.
Something like a pr_alert() that could rate limit to a preset value, perhaps
a default of fifty or so, could prove quite useful indeed without being an
all-or-none choice.
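As a sketch of the idea (the helper name and the counter are hypothetical,
not an existing kernel API):

	#include <linux/atomic.h>
	#include <linux/printk.h>

	#define CAPPED_ALERT_MAX	50

	static atomic_t capped_alert_count = ATOMIC_INIT(0);

	/* Emit at most CAPPED_ALERT_MAX alerts, then go quiet. */
	#define pr_alert_capped(fmt, ...)					\
	do {									\
		if (atomic_inc_return(&capped_alert_count) <= CAPPED_ALERT_MAX)\
			pr_alert(fmt, ##__VA_ARGS__);				\
	} while (0)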
William Kucharski
> On Nov 27, 2018, at 9:50 AM, Vlastimil Babka wrote:
>
> On 11/27/18 3:50 PM, William Kucharski wrote:
>>
>> I was just double checking that this was meant to be more of a check done
>> before code elsewhere performs additional checks and does the actual
ve, something like:
+	/*
+	 * Check for a locked page first, as a speculative
+	 * reference may adversely influence page migration.
+	 */
Reviewed-by: William Kucharski
/* enum zone_stat_item counters */
> "nr_free_pages",
> "nr_zone_inactive_anon",
> "nr_zone_active_anon",
> --
> 2.19.1
>
> Signed-off-by: Emre Ates
Reviewed-by: William Kucharski
Reviewed-by: William Kucharski
> On Nov 29, 2018, at 4:44 AM, Yongkai Wu wrote:
>
> A stack trace was triggered by VM_BUG_ON_PAGE(page_mapcount(page),
> page) in free_huge_page(). Unfortunately, the page->mapping field
> was set to NULL before this test. This made it
-	VM_UNUSED2=2,	/* was; int: Linear or sqrt() swapout for hogs */
+	VM_UNUSED2=2,	/* was: int: Linear or sqrt() swapout for hogs */
Reviewed-by: William Kucharski
of a check
or BUG_ON().
It's a generic math check for overflow, so it should work with any address.
Reviewed-by: William Kucharski
where to start looking for the problem.
Reviewed-by: William Kucharski
. Sparse will detect any attempts to return a
>> value which is not a VM_FAULT code.
>>
>> VM_FAULT_SET_HINDEX and VM_FAULT_GET_HINDEX values are changed
>> to avoid conflict with other VM_FAULT codes.
>>
>> Signed-off-by: Souptick Joarder
>
> Any further comment on this patch ?
Reviewed-by: William Kucharski
+ n);
I'm being paranoid, but is it possible this routine could ever be passed "n"
set to zero?
If so, it will erroneously abort indicating a wrapped address as (n - 1) wraps
to ULONG_MAX.
Easily fixed via:
if ((n != 0) && (ptr + (n - 1) < ptr))
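To illustrate in userspace (the values are arbitrary; the wrap is what
matters, and in practice the sum compares below ptr):

	#include <stdio.h>

	int main(void)
	{
		const char *ptr = (const char *)0x1000;
		unsigned long n = 0;

		/* (n - 1) wraps to ULONG_MAX, so the unguarded test fires. */
		if (ptr + (n - 1) < ptr)
			printf("false positive: n == 0 flagged as a wrap\n");

		/* The guarded test short-circuits before the wrap matters. */
		if ((n != 0) && (ptr + (n - 1) < ptr))
			printf("not reached for n == 0\n");

		return 0;
	}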
William Kucharski
> On Nov 14, 2018, at 4:09 AM, David Laight wrote:
>
> From: William Kucharski
>> Sent: 14 November 2018 10:35
>>
>>> On Nov 13, 2018, at 5:51 PM, Isaac J. Manjarres
>>> wrote:
>>>
>>> diff --git a/mm/usercopy.c b/mm/usercopy.c
ld be possible with
additional effort to allow for mapping using PUD-sized pages.
Q: What about the use of non-PMD large page sizes (on non-x86 architectures)?
A: I haven't looked into that; I don't have an answer as to how to best
map a page that wasn't sized to be a PMD
> On May 17, 2018, at 1:57 AM, Michal Hocko wrote:
>
> [CCing Kirill and fs-devel]
>
> On Mon 14-05-18 07:12:13, William Kucharski wrote:
>> One of the downsides of THP as currently implemented is that it only supports
>> large page mappings for anonymous pages.
> On May 17, 2018, at 9:23 AM, Matthew Wilcox wrote:
>
> I'm certain it is. The other thing I believe is true that we should be
> able to share page tables (my motivation is thousands of processes each
> mapping the same ridiculously-sized file). I was hoping this prototype
> would have code
> /*
> * Withdraw the table only after we mark the pmd entry invalid.
> --
Looks good.
Reviewed-by: William Kucharski
less instruction trace to sift through when debugging pesky issues.
>
> Signed-off-by: Vineet Gupta
I would rather see 256 as a #define somewhere than as a magic number
sprinkled around arch/arc/kernel/troubleshoot.c.
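Something like this, perhaps (the name is purely illustrative):

	/* Length of the buffers used for the register and trace dumps. */
	#define ARC_TROUBLESHOOT_BUF_LEN	256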
Still, that's what the existing code does, so I suppose it's OK.
Otherwise the change looks good.
Reviewed-by: William Kucharski
+ /* Drop uffd context if remap feature not enabled */
> + vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
> + vma->vm_flags &= ~(VM_UFFD_WP | VM_UFFD_MISSING);
> }
> }
>
> --
> 2.17.1
>
Looks good.
Reviewed-by: William Kucharski
> 	try_to_free_swap(page);
>
> - SetPageDirty(page);
> unlock_page(page);
> put_page(page);
>
> --
> 2.18.1
>
Since try_to_free_swap() can return 0 under certain error conditions, you
should check for a return status of 1 before calling unlock_page() and
put_page().
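In other words, a sketch of the suggested ordering only (what to do on
failure is left to the surrounding code):

	/* Drop our references only if the swap entry was actually freed. */
	if (try_to_free_swap(page)) {
		unlock_page(page);
		put_page(page);
	}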
Reviewed-by: William Kucharski
> On Dec 20, 2018, at 1:31 PM, Qian Cai wrote:
>
> diff --git a/mm/page_ext.c b/mm/page_ext.c
> index ae44f7adbe07..d76fd51e312a 100644
> --- a/mm/page_ext.c
> +++ b/mm/page_ext.c
> @@ -399,9 +399,8 @@ void __init page_ext_init(void)
>* -pfn-->
>
> On Jan 2, 2019, at 8:14 PM, Shakeel Butt wrote:
>
> countersize = COUNTER_OFFSET(tmp.nentries) * nr_cpu_ids;
> - newinfo = vmalloc(sizeof(*newinfo) + countersize);
> + newinfo = __vmalloc(sizeof(*newinfo) + countersize, GFP_KERNEL_ACCOUNT,
> + PAGE_KERNE
[ code that dereferences "mapping" without further checks ]
}
I see nothing wrong with your second check other than the few extra
instructions it executes, but depending upon how often
transparent_hugepage_enabled() is called there may be at least theoretical
performance concerns.
William Kucharski
william.kuchar...@oracle.com
;
>> }
>>
>> /*
>
> I suppose so.
>
> That function seems too clever for its own good :(. I wonder if these
> branch-avoiding tricks are really worthwhile.
At the very least I'd like to see some comments added as to why that approach
was taken for the sake of future maintainers.
William Kucharski
_t vs. the C conditional mostly to be more self-documenting?
The compiler-generated assembly between the two versions seems mostly a wash.
William Kucharski
[netperf-style results; the final column is throughput in 10^6 bits/sec]
65536  64  60.00  36937716  0  315.20
65536      60.00  16838656      143.69
William Kucharski
william.kuchar...@oracle.com
you think modification of the
interrupt delay would achieve better results?
William Kucharski
> On May 14, 2018, at 9:19 AM, Christopher Lameter wrote:
>
> Cool. This could be controlled by the faultaround logic right? If we get
> fault_around_bytes up to huge page size then it is reasonable to use a
> huge page directly.
It isn't presently but certainly could be; for the prototype it
> On Feb 28, 2019, at 1:33 AM, Andrey Ryabinin wrote:
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index a9852ed7b97f..2d081a32c6a8 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1614,8 +1614,8 @@ static __always_inline void update_lru_sizes(struct
> lruvec *lruvec,
>
> }
>
> -/*
> - *
> On Feb 28, 2019, at 11:22 AM, Andrew Morton wrote:
>
> I don't think so. This kerneldoc comment was missing its leading /**.
> The patch fixes that.
That makes sense; it had looked like just an extraneous asterisk.
> On Mar 14, 2019, at 7:30 AM, Jan Kara wrote:
>
> Well I have some crash reports couple years old and they are not from QA
> departments. So I'm pretty confident there are real users that use this in
> production... and just reboot their machine in case it crashes.
Do you know what the use c
If you need it:
Reviewed-by: William Kucharski
Does this still happen on 5.1-rc2?
Do you have any idea what max_low_pfn gets set to on your system at boot
time?
From the screen shot I'm guessing it MIGHT be 0x373fe, but it's hard to know
for sure.
> On Mar 21, 2019, at 2:22 PM, Meelis Roos wrote:
>
> I tried to debug another problem
The dmesg output you posted confirms that max_low_pfn is indeed 0x373fe, and
it appears that the value of phys_mem being checked may be 0x3f401ff1, which
translates to pfn 0x3f401, at least if what's still in registers can be
believed.
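For reference, assuming 4 KiB pages (PAGE_SHIFT == 12):

	unsigned long pfn = 0x3f401ff1UL >> 12;	/* == 0x3f401 */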
Since that is indeed greater than max_low_pfn, VIRTUAL_BUG
Looks good to me.
Reviewed-by: William Kucharski
> On Aug 10, 2020, at 5:53 PM, ira.we...@intel.com wrote:
>
> From: Ira Weiny
>
> While reviewing Protection Key Supervisor support it was pointed out
> that using a counter to track static branch enable was an anti-pattern
I like this, it reminds me of the changes I proposed a few years ago to try
to automatically map read-only text regions of appropriate sizes and
alignment with THPs.
My concern had always been whether commercial software and distro vendors
would buy into supplying the appropriate linker flags when
LGTM.
Reviewed-by: William Kucharski
> On Mar 3, 2021, at 3:25 PM, Matthew Wilcox (Oracle)
> wrote:
>
> If the I/O completed successfully, the page will remain Uptodate,
> even if it is subsequently truncated. If the I/O completed with an error,
> this check would cause u
Sounds good.
Reviewed-by: William Kucharski
> On Apr 6, 2021, at 11:48 AM, Collin Fijalkovich
> wrote:
>
> Instrumenting filemap_nr_thps_inc() should be sufficient for ensuring
> writable file mappings will not be THP-backed.
>
> If filemap_nr_thps_dec() in unaccount_
Looks good to me and I like the cleanup.
For the series:
Reviewed-by: William Kucharski
> On Apr 16, 2021, at 5:15 PM, Matthew Wilcox (Oracle)
> wrote:
>
> [I'm told that patches 2-6 did not make it to the list; resending and
> cc'ing lkml this time]
>
> Whi
Correct an error where setting /proc/sys/net/rds/tcp/rds_tcp_rcvbuf would
instead modify the socket's sk_sndbuf and would leave sk_rcvbuf untouched.
Signed-off-by: William Kucharski
---
net/rds/tcp.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/rds/tcp.c b/ne
I saw a similar change a few years ago with my prototype:
https://lore.kernel.org/linux-mm/5bb682e1-dd52-4aa9-83e9-def091e0c...@oracle.com/
the key being a very nice drop in iTLB-load-misses, so it looks like your code
is
having the right effect.
What about the call to filemap_nr_thps_dec() i
Nice cleanup, IMHO.
Reviewed-by: William Kucharski
> On Mar 9, 2021, at 12:57 PM, Matthew Wilcox (Oracle)
> wrote:
>
> My UEK-derived config has 1030 files depending on pagemap.h before
> this change. Afterwards, just 326 files need to be rebuilt when I
> touch pagemap.h.
Looks good, just one super minor nit inline.
Reviewed-by: William Kucharski
> On Mar 10, 2021, at 6:51 AM, Matthew Wilcox (Oracle)
> wrote:
>
> There's no need to give the page an address_space. Leaving the
> page->mapping as NULL will cause the VM to handle set_page_dirty()
> the same way that it's handled now, and that was the only reason to
> set the address_space in the first place.
>
> Signed-off-by: Matthew Wilcox (Oracle)
> Reviewed-by: Christoph Hellwig
>
> On Mar 19, 2019, at 10:33 PM, Jerome Glisse wrote:
>
> So i believe best we could do is send a SIGBUS to the process that has
> GUPed a range of a file that is being truncated this would match what
> we do for CPU access. There is no reason access through GUP should be
> handled any different
order' argument to __page_cache_alloc() and
do_read_cache_page(). Ensure the allocated pages are compound pages.
William Kucharski (1):
Add filemap_huge_fault() to attempt to satisfy page faults on
memory-mapped read-only text pages using THP when possible.
fs/afs/dir.c|
Add an 'order' argument to __page_cache_alloc() and
do_read_cache_page(). Ensure the allocated pages are compound pages.
Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: William Kucharski
Reported-by: kbuild test robot
---
fs/afs/dir.c| 2 +-
fs/btrfs/compressi
Add filemap_huge_fault() to attempt to satisfy page
faults on memory-mapped read-only text pages using THP when possible.
Signed-off-by: William Kucharski
---
include/linux/mm.h | 2 +
mm/Kconfig | 15 ++
mm/filemap.c | 398 +++--
mm
> On Sep 3, 2019, at 1:15 PM, Michal Hocko wrote:
>
> Then I would suggest mentioning all this in the changelog so that the
> overall intention is clear. It is also up to you fs developers to find a
> consensus on how to move forward. I have brought that up mostly because
> I really hate seein
> On Sep 3, 2019, at 5:57 AM, Michal Hocko wrote:
>
> On Mon 02-09-19 03:23:40, William Kucharski wrote:
>> Add an 'order' argument to __page_cache_alloc() and
>> do_read_cache_page(). Ensure the allocated pages are compound pages.
>
> Why do we need
Add an 'order' argument to __page_cache_alloc() and
do_read_cache_page(). Ensure the allocated pages are compound pages.
Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: William Kucharski
Reported-by: kbuild test robot
---
fs/afs/dir.c| 2 +-
fs/btrfs/compressi
Add filemap_huge_fault() to attempt to satisfy page
faults on memory-mapped read-only text pages using THP when possible.
Signed-off-by: William Kucharski
---
include/linux/mm.h | 2 +
mm/Kconfig | 15 ++
mm/filemap.c | 337 +++--
mm
):
Add an 'order' argument to __page_cache_alloc() and
do_read_cache_page(). Ensure the allocated pages are compound pages.
William Kucharski (1):
Add filemap_huge_fault() to attempt to satisfy page faults on
memory-mapped read-only text pages using THP when possible.
fs
> On Oct 1, 2019, at 4:45 AM, Kirill A. Shutemov wrote:
>
> On Tue, Sep 24, 2019 at 05:52:13PM -0700, Matthew Wilcox wrote:
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index cbe7d0619439..670a1780bd2f 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -563,8 +563,6
On 10/1/19 5:32 AM, Kirill A. Shutemov wrote:
On Tue, Oct 01, 2019 at 05:21:26AM -0600, William Kucharski wrote:
On Oct 1, 2019, at 4:45 AM, Kirill A. Shutemov wrote:
On Tue, Sep 24, 2019 at 05:52:13PM -0700, Matthew Wilcox wrote:
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index
> On Oct 1, 2019, at 8:20 AM, Kirill A. Shutemov wrote:
>
> On Tue, Oct 01, 2019 at 06:18:28AM -0600, William Kucharski wrote:
>>
>>
>> On 10/1/19 5:32 AM, Kirill A. Shutemov wrote:
>>> On Tue, Oct 01, 2019 at 05:21:26AM -0600, William Kucharski wrote:
>
> On Feb 13, 2019, at 4:59 PM, Matthew Wilcox wrote:
>
> I believe the direction is clear. It needs people to do the work.
> We're critically short of reviewers. I got precious little review of
> the original XArray work, which made Andrew nervous and delayed its
> integration. Now I'm gett
t in rejecting adding
> memory areas resulting in a memory size above the allowed limit.
>
> Signed-off-by: Juergen Gross
> Acked-by: Ingo Molnar
Reviewed-by: William Kucharski
> On Feb 12, 2019, at 4:57 PM, Yu Zhao wrote:
>
> It seems to me it's perfectly fine to use fields of xas directly,
> and it's being done this way throughout the file.
Fair enough.
Reviewed-by: William Kucharski
> On Feb 15, 2019, at 10:02 AM, Steven Price wrote:
>
> The pte_hole() callback is called at multiple levels of the page tables.
> Code dumping the kernel page tables needs to know at what depth
> the missing entry is. Add this as an extra parameter to pte_hole().
> When the depth isn't k
> On Jan 23, 2019, at 5:09 AM, Jann Horn wrote:
>
> AFAICS this only applies to switch statements (because they jump to a
> case and don't execute stuff at the start of the block), not blocks
> after if/while/... .
It bothers me that we are going out of our way to deprecate valid C constructs
> On Jan 20, 2019, at 1:13 AM, Yang Fan wrote:
>
> The variable 'addr' is redundant in arch_get_unmapped_area_topdown(),
> just use parameter 'addr0' directly. Then remove the const qualifier
> of the parameter, and change its name to 'addr'.
>
> Signed-off-by: Yang Fan
These seem similar
he balloon driver is up
> + * it will remove that restriction again.
> + */
> + max_mem_size = xen_e820_table.entries[i].addr +
> +xen_e820_table.entries[i].size;
> +#endif
> }
>
>
ast is added, so it seems better to
match current usage elsewhere in the kernel.
Reviewed-by: William Kucharski
This seems very reasonable, but if the code is just going to panic if the
allocation fails, why not call memblock_alloc_node() instead?
If there is a reason we'd prefer to call memblock_alloc_node_nopanic(), I'd
like to see pgdat->nodeid printed in the panic message as well.
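Roughly the two options, as a sketch ("ptr", "size" and "nid" are
placeholders; the calls follow the memblock API of that era):

	/* Panicking variant: the failure message comes from memblock itself. */
	ptr = memblock_alloc_node(size, SMP_CACHE_BYTES, nid);

	/* Non-panicking variant: we can put the node id in the message. */
	ptr = memblock_alloc_node_nopanic(size, SMP_CACHE_BYTES, nid);
	if (!ptr)
		panic("Cannot allocate %lu bytes for node %d\n",
		      (unsigned long)size, nid);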
> On Jan 17, 2019, at 4:26 AM, Mike Rapoport wrote:
>
> On Thu, Jan 17, 2019 at 03:19:35AM -0700, William Kucharski wrote:
>>
>> This seems very reasonable, but if the code is just going to panic if the
>> allocation fails, why not call memblock_alloc_node() inst
> On Jan 22, 2019, at 1:06 AM, Juergen Gross wrote:
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index b9a667d36c55..7fc2a87110a3 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -96,10 +96,16 @@ void mem_hotplug_done(void)
> cpus_read_unlock();
> }
>
> On May 9, 2019, at 9:03 PM, Huang, Ying wrote:
>
> Yang Shi writes:
>
>> On 5/9/19 7:12 PM, Huang, Ying wrote:
>>>
>>> How about to change this to
>>>
>>>
>>> nr_reclaimed += hpage_nr_pages(page);
>>
>> Either is fine to me. Is this faster than "1 << compound_order(page)"?
>
>
> On Apr 4, 2019, at 1:23 AM, Huang Shijie wrote:
>
>
> + * This function is different from the get_user_pages_unlocked():
> + * The @pages may has different page order with the result
> + * got by get_user_pages_unlocked().
> + *
I suggest a slight rewrite of the comment, somethin
> On May 10, 2019, at 10:36 AM, Matthew Wilcox wrote:
>
> Please don't. That embeds the knowledge that we can only swap out either
> normal pages or THP sized pages. I'm trying to make the VM capable of
> supporting arbitrary-order pages, and this would be just one more place
> to fix.
>
standard readpage() routine defined for the address_space.
*
*/
3) Patch 5/4?
Otherwise it looks good.
Reviewed-by: William Kucharski
> On May 1, 2019, at 11:34 AM, Christoph Hellwig wrote:
>
> Fix the callback 9p passes to read_cache_page to actually have the
> proper type expect
> On May 2, 2019, at 7:04 AM, Christoph Hellwig wrote:
>
> Except that we don't pass v9fs_vfs_readpage as the filler any more,
> we now pass v9fs_fid_readpage.
True, so never mind. :-)
On 8/1/19 6:36 AM, Kirill A. Shutemov wrote:
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-#define HPAGE_PMD_SHIFT PMD_SHIFT
-#define HPAGE_PMD_SIZE ((1UL) << HPAGE_PMD_SHIFT)
-#define HPAGE_PMD_MASK (~(HPAGE_PMD_SIZE - 1))
-
-#define HPAGE_PUD_SHIFT PUD_SHIFT
-#define HPAGE_PUD_SIZE ((1UL) << HPAGE_
> On Aug 5, 2019, at 7:28 AM, Kirill A. Shutemov wrote:
>
>>
>> Is there different terminology you'd prefer to see me use here to clarify
>> this?
>
> My point is that maybe we should just use ~HPAGE_P?D_MASK in code. The new
> HPAGE_P?D_OFFSET doesn't add much for readability in my opinion.
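For reference, the two spellings compute the same offset, assuming
HPAGE_PMD_OFFSET would have been defined as (HPAGE_PMD_SIZE - 1):

	off = addr & ~HPAGE_PMD_MASK;	/* open-coded, as Kirill suggests */
	off = addr & HPAGE_PMD_OFFSET;	/* the proposed macro */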
Add an 'order' argument to __page_cache_alloc() and
do_read_cache_page(). Ensure the allocated pages are compound pages.
Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: William Kucharski
Reported-by: kbuild test robot
---
fs/afs/dir.c| 2 +-
fs/btrfs/compressi
to enable submission as an independent
patch
2. Inadvertent tab spacing and comment changes were reverted
Changes since v1:
1. Fix improperly generated patch for v1 PATCH 1/2
Matthew Wilcox (1):
mm: Allow the page cache to allocate large pages
William Kucharski (1):
Add filemap_huge_fault
Add filemap_huge_fault() to attempt to satisfy page
faults on memory-mapped read-only text pages using THP when possible.
Signed-off-by: William Kucharski
---
include/linux/huge_mm.h | 16 ++-
include/linux/mm.h | 6 +
mm/Kconfig | 15 ++
mm/filemap.c| 300
On 7/31/19 2:35 AM, Song Liu wrote:
Could you please explain how to test/try this? Would it automatically map
all executables to THPs?
Until there is filesystem support you can't actually try this, though I have
tested it through some hacks during development and am also working on some
o
> On Aug 6, 2019, at 5:12 AM, Kirill A. Shutemov wrote:
>
> IIUC, you are missing ->vm_pgoff from the picture. The newly allocated
> page must land into page cache aligned on HPAGE_PMD_NR boundary. In other
> words you cannot have huge page with ->index, let say, 1.
>
> VMA is only suitable f
"if" can be
made the branch and jump leg, though in reality optimization is much more
complex than that.
Still, the unlikely() call is also nicely self-documenting as to what the
expected outcome is.
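For instance:

	/* Documents for both compiler and reader which leg is the cold path. */
	if (unlikely(ret != 0))
		goto out_err;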
Reviewed-by: William Kucharski
I suspect I'm being massively pedantic here, but the comments for
atomic_pte_lookup() note:
* Only supports Intel large pages (2MB only) on x86_64.
* ZZZ - hugepage support is incomplete
That makes me wonder how many systems using this hardware are actually
configured with CONFIG_HUGETLB
> On Mar 21, 2019, at 3:21 AM, Baoquan He wrote:
It appears as is so often the case that the usage has far outpaced the
documentation and -EEXIST may be the proper code to return.
The correct answer here may be to modify the documentation to note the
additional semantic, though if the usage i
> On Mar 21, 2019, at 4:35 AM, Michal Hocko wrote:
>
> I am sorry to be snarky but hasn't this generated way much more email
> traffic than it really deserves? A simply and trivial clean up in the
> beginning that was it, right?
That's rather the point; that it did generate a fair amount of e
n't run across any problems and have been hammering the code for over
five days
without issue; all my testing was with transparent_hugepage/enabled set to
"always."
Tested-by: William Kucharski
Reviewed-by: William Kucharski
@mapping: Mapping.
> * @index: Index.
> * @max_scan: Maximum range to search.
> --
> 2.21.0
>
Reviewed-by: William Kucharski
> On Jan 9, 2019, at 8:08 PM, Yu Zhao wrote:
>
> find_get_pages_range() and find_get_pages_range_tag() already
> correctly increment reference count on head when seeing compound
> page, but they may still use page index from tail. Page index
> from tail is always zero, so these functions don't
Just a few grammar corrections since this is going into Documentation:
> On Jan 9, 2019, at 12:14 PM, Yang Shi wrote:
>
> Add desprition of wipe_on_offline interface in cgroup documents.
Add a description of the wipe_on_offline interface to the cgroup documents.
> Cc: Michal Hocko
> Cc: Johan
->size + 1 : 1;
>
> /* Allocate memory for new array of thresholds */
> - new = kmalloc(sizeof(*new) + size * sizeof(struct mem_cgroup_threshold),
> - GFP_KERNEL);
> + new = kmalloc(struct_size(new, entries, size), GFP_KERNEL);
> if (!new) {
> ret = -ENOMEM;
> goto unlock;
> --
> 2.20.1
>
Reviewed-by: William Kucharski
Except for [PATCH v4 36/36], which I can't approve for obvious reasons:
Reviewed-by: William Kucharski
Really nice improvements here.
Reviewed-by: William Kucharski
> On Aug 24, 2020, at 9:16 AM, Matthew Wilcox (Oracle)
> wrote:
>
> As promised earlier [1], here are the patches which I would like to
> merge into 5.11 to support THPs. They depend on that earlier series.
> I
Looks good to me; I really like the addition of the "end" parameter to
find_get_entries() and the conversion of pagevec_lookup_entries().
For the series:
Reviewed-by: William Kucharski
> On Sep 14, 2020, at 7:00 AM, Matthew Wilcox (Oracle)
> wrote:
>
> The critical p
AM, Matthew Wilcox wrote:
>
> From: William Kucharski
>
> When we have the opportunity to use transparent huge pages to map a
> file, we want to follow the same rules as DAX.
>
> Signed-off-by: William Kucharski
> [Inline __thp_get_unmapped_area() into thp_get_unmap
> On Oct 21, 2020, at 6:49 PM, Matthew Wilcox wrote:
>
> On Wed, Oct 21, 2020 at 08:30:18PM -0400, Qian Cai wrote:
>> Today's linux-next starts to trigger this wondering if anyone has any clue.
>
> I've seen that occasionally too. I changed that BUG_ON to VM_BUG_ON_PAGE
> to try to get a clu
| 21 +++-
> mm/memcontrol.c | 25 ++
> mm/mincore.c | 28 ++--
> mm/shmem.c| 15 +
> mm/swap_state.c | 31 +++++
> 10 files changed, 98 insertions(+), 98 deletions(-)
For the series:
Reviewed-by: William Kucharski
--
> mm/swap.c | 38 +-
> mm/truncate.c | 33 +++-
> 6 files changed, 45 insertions(+), 137 deletions(-)
Very nice cleanups and the code makes more sense, thanks.
Reviewed-by: William Kucharski
> On Sep 9, 2020, at 7:27 AM, David Hildenbrand wrote:
>
> On 09.09.20 15:14, Jason Gunthorpe wrote:
>> On Wed, Sep 09, 2020 at 01:32:44PM +0100, Matthew Wilcox wrote:
>>
>>> But here's the thing ... we already allow
>>> mmap(MAP_POPULATE | MAP_HUGETLB | MAP_HUGE_1GB)
>>>
>>> So if we're
I really like that as it’s self-documenting and anyone debugging it can see
what is actually being used at a glance.
> On Sep 20, 2020, at 09:15, Matthew Wilcox wrote:
>
> On Fri, Sep 18, 2020 at 02:45:25PM +0200, Christoph Hellwig wrote:
>> Add a flag to force processing a syscall as a compat
Is there any reason to worry about supporting PUD-sized uprobe pages if
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD is defined? I would prefer
not to bake in the assumption that "huge" means PMD-sized and more than
it already is.
For example, if CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD is configu