On 03/31/2015 11:48 AM, Michal Hocko wrote:
On Fri 27-03-15 15:23:50, Nishanth Aravamudan wrote:
On 27.03.2015 [13:17:59 -0700], Dave Hansen wrote:
On 03/27/2015 12:28 PM, Nishanth Aravamudan wrote:
@@ -2585,7 +2585,7 @@ static bool pfmemalloc_watermark_ok(pg_data_t *pgdat)
for (i =
't cause any major harm. The likelihood of that situation, as
well, in a non-reserved memory setup like the one I described, seems
exceedingly low.
OK, I guess when a reasonably configured system has nothing to reclaim,
it's already busted and throttling won't change much.
Consider
On 06/11/2015 09:34 PM, Andrew Morton wrote:
On Thu, 11 Jun 2015 15:21:30 -0400 Eric B Munson wrote:
Ditto mlockall(MCL_ONFAULT) followed by munlock(). I'm not sure
that even makes sense but the behaviour should be understood and
tested.
I have extended the kselftest for lock-on-fault to tr
On 06/22/2015 04:18 PM, Eric B Munson wrote:
On Mon, 22 Jun 2015, Michal Hocko wrote:
On Fri 19-06-15 12:43:33, Eric B Munson wrote:
On Fri, 19 Jun 2015, Michal Hocko wrote:
On Thu 18-06-15 16:30:48, Eric B Munson wrote:
On Thu, 18 Jun 2015, Michal Hocko wrote:
[...]
Wouldn't it be much m
On 06/15/2015 04:43 PM, Eric B Munson wrote:
Note that the semantics of MAP_LOCKED can be subtly surprising:
"mlock(2) fails if the memory range cannot get populated to guarantee
that no future major faults will happen on the range.
mmap(MAP_LOCKED) on the other hand silently succeeds even if the
e") and
b360edb43f8e ("mm, mempolicy: migrate_to_node should only migrate to node").
To prevent further mistakes, this patch renames the function to
alloc_pages_prefer_node() and documents it together with alloc_pages_node().
Signed-off-by: Vlastimil Babka
Cc: Mel Gorman
Cc:
On 07/10/2015 06:19 PM, Eric B Munson wrote:
On Fri, 10 Jul 2015, Jonathan Corbet wrote:
On Thu, 9 Jul 2015 14:46:35 -0400
Eric B Munson wrote:
One other question...if I call mlock2(MLOCK_ONFAULT) on a range that
already has resident pages, I believe that those pages will not be locked
until
On 07/21/2015 09:59 PM, Eric B Munson wrote:
With the refactored mlock code, introduce new system calls for mlock,
munlock, and munlockall. The new calls will allow the user to specify
what lock states are being added or cleared. mlock2 and munlock2 are
trivial at the moment, but a follow on pa
On 07/21/2015 09:59 PM, Eric B Munson wrote:
The cost of faulting in all memory to be locked can be very high when
working with large mappings. If only portions of the mapping will be
used this can incur a high penalty for locking.
For the example of a large file, this is the usage pattern for
On 07/21/2015 11:31 PM, David Rientjes wrote:
> On Tue, 21 Jul 2015, Vlastimil Babka wrote:
>
>> The function alloc_pages_exact_node() was introduced in 6484eb3e2a81 ("page
>> allocator: do not check NUMA node ID when the caller knows the node is
>> valid"
On 07/22/2015 08:43 PM, Eric B Munson wrote:
> On Wed, 22 Jul 2015, Vlastimil Babka wrote:
>
>>
>> Hi,
>>
>> I think you should include a complete description of which
>> transitions for vma states and mlock2/munlock2 flags applied on them
>> are vali
On 07/23/2015 10:27 PM, David Rientjes wrote:
On Thu, 23 Jul 2015, Christoph Lameter wrote:
The only possible downside would be existing users of
alloc_pages_node() that are calling it with an offline node. Since it's a
VM_BUG_ON() that would catch that, I think it should be changed to a
VM_WA
On 07/24/2015 11:28 PM, Eric B Munson wrote:
...
Changes from V4:
Drop all architectures for new sys call entries except x86[_64] and MIPS
Drop munlock2 and munlockall2
Make VM_LOCKONFAULT a modifier to VM_LOCKED only to simplify bookkeeping
Adjust tests to match
Hi, thanks for considering m
On 07/27/2015 03:35 PM, Eric B Munson wrote:
On Mon, 27 Jul 2015, Vlastimil Babka wrote:
On 07/24/2015 11:28 PM, Eric B Munson wrote:
...
Changes from V4:
Drop all architectures for new sys call entries except x86[_64] and MIPS
Drop munlock2 and munlockall2
Make VM_LOCKONFAULT a modifier to
On 07/27/2015 04:54 PM, Eric B Munson wrote:
On Mon, 27 Jul 2015, Vlastimil Babka wrote:
We do actually have an MCL_LOCKED, we just call it MCL_CURRENT. Would
you prefer that I match the name in mlock2() (add MLOCK_CURRENT
instead)?
Hm it's similar but not exactly the same, be
On 07/28/2015 01:17 PM, Michal Hocko wrote:
[I am sorry but I didn't get to this sooner.]
On Mon 27-07-15 10:54:09, Eric B Munson wrote:
Now that VM_LOCKONFAULT is a modifier to VM_LOCKED and
cannot be specified independently, it might make more sense to mirror
that relationship to userspace.
On 07/28/2015 03:49 PM, Eric B Munson wrote:
On Tue, 28 Jul 2015, Michal Hocko wrote:
[...]
The only
remaining question I have is should we have 2 new mlockall flags so that
the caller can explicitly set VM_LOCKONFAULT in the mm->def_flags vs
locking all current VMAs on fault. I ask because
On 07/29/2015 12:45 PM, Michal Hocko wrote:
>> In a much less
>> likely corner case, it is not possible in the current setup to request
>> all current VMAs be VM_LOCKONFAULT and all future be VM_LOCKED.
>
> Vlastimil has already pointed that out. MCL_FUTURE doesn't clear
> MCL_CURRENT. I was quite
offline node will now be checked for
DEBUG_VM builds. Since it's not fatal if the node has been previously online,
and this patch may expose some existing buggy callers, change the VM_BUG_ON
in __alloc_pages_node() to VM_WARN_ON.
Signed-off-by: Vlastimil Babka
Acked-by: David Rientjes
---
numa_mem_id() is able to handle allocation from CPUs on memory-less nodes,
so it's a more robust fallback than the currently used numa_node_id().
Suggested-by: Christoph Lameter
Signed-off-by: Vlastimil Babka
Acked-by: David Rientjes
Acked-by: Mel Gorman
---
include/linux/gfp.h | 5 +++
ges, except temporarily hiding
potentially buggy callers. Restricting the checks in alloc_pages_node() is
left for the next patch which can in turn expose more existing buggy callers.
Signed-off-by: Vlastimil Babka
Cc: Mel Gorman
Cc: David Rientjes
Cc: Greg Thelen
Cc: Aneesh Kumar K.V
Cc: Chri
On 07/30/2015 07:58 PM, Christoph Lameter wrote:
> On Thu, 30 Jul 2015, Vlastimil Babka wrote:
>
>> --- a/mm/slob.c
>> +++ b/mm/slob.c
>> void *page;
>>
>> -#ifdef CONFIG_NUMA
>> -if (node != NUMA_NO_NODE)
>> -page = alloc_
On 31.7.2015 23:25, David Rientjes wrote:
> On Thu, 30 Jul 2015, Vlastimil Babka wrote:
>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index aa58a32..56355f2 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -2469,7 +2469,7 @@ khuge
On 07/30/2015 07:41 PM, Johannes Weiner wrote:
On Thu, Jul 30, 2015 at 06:34:31PM +0200, Vlastimil Babka wrote:
numa_mem_id() is able to handle allocation from CPUs on memory-less nodes,
so it's a more robust fallback than the currently used numa_node_id().
Won't it fall through t
.
Signed-off-by: Eric B Munson
Cc: Michal Hocko
Cc: Vlastimil Babka
Acked-by: Vlastimil Babka
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev
LT in either mlockall()
invocation.
munlock() will unconditionally clear both vma flags. munlockall()
unconditionally clears both VMA flags on all VMAs and in the
mm->def_flags field.
Signed-off-by: Eric B Munson
Cc: Michal Hocko
Cc: Vlastimil Babka
The logic seems ok, althoug
On 03/03/2016 08:01 AM, Li Zhang wrote:
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -293,13 +293,20 @@ static inline bool update_defer_init(pg_data_t *pgdat,
> unsigned long pfn, unsigned long zone_end,
> unsigned long *nr_initial
fdd0] [c0b70924].inode_init+0xa4/0xf0
> [c12bfe60] [c0b706a0].vfs_caches_init+0x80/0x144
> [c12bfef0] [c0b35208].start_kernel+0x40c/0x4e0
> [c12bff90] [c0008cfc]start_here_common+0x20/0x4a4
> Mem-Info:
>
> Acked-by: Mel Gorma
t temporarily hiding
potentially buggy callers. Restricting the checks in alloc_pages_node() is
left for the next patch which can in turn expose more existing buggy callers.
Signed-off-by: Vlastimil Babka
Acked-by: Johannes Weiner
Cc: Mel Gorman
Cc: David Rientjes
Cc: Greg Thelen
Cc: Aneesh Kumar K.V
offline node will now be checked for
DEBUG_VM builds. Since it's not fatal if the node has been previously online,
and this patch may expose some existing buggy callers, change the VM_BUG_ON
in __alloc_pages_node() to VM_WARN_ON.
Signed-off-by: Vlastimil Babka
Acked-by: David Rientjes
Ack
: Christoph Lameter
Signed-off-by: Vlastimil Babka
Acked-by: David Rientjes
Acked-by: Mel Gorman
Acked-by: Christoph Lameter
---
v3: better commit message
include/linux/gfp.h | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index
On 2/27/19 3:47 PM, Aneesh Kumar K.V wrote:
> This patch adds PF_MEMALLOC_NOCMA which make sure any allocation in that
> context
> is marked non-movable and hence cannot be satisfied by CMA region.
>
> This is useful with get_user_pages_longterm where we want to take a page pin
> by
> migrating
On 3/1/19 2:21 PM, Alexandre Ghiti wrote:
> I collected mistakes here: domain name expired and no mailing list added :)
> Really sorry about that, I missed the whole discussion (if any).
> Could someone forward it to me (if any) ? Thanks !
Bounced you David and Mike's discussion (4 messages total)
On 3/6/19 8:00 PM, Alexandre Ghiti wrote:
> This condition allows defining alloc_contig_range, so simplify
> it into a more accurate naming.
>
> Suggested-by: Vlastimil Babka
> Signed-off-by: Alexandre Ghiti
Acked-by: Vlastimil Babka
(you could have sent this with my ack f
On 2/27/20 5:00 PM, Sachin Sant wrote:
>
>
>> On 27-Feb-2020, at 5:42 PM, Michal Hocko wrote:
>>
>> A very good hint indeed. I would do this
>> diff --git a/include/linux/topology.h b/include/linux/topology.h
>> index eb2fe6edd73c..d9f1b6737e4d 100644
>> --- a/include/linux/topology.h
>> +++ b/
On 3/2/20 7:47 AM, Anshuman Khandual wrote:
> There are many platforms with exact same value for VM_DATA_DEFAULT_FLAGS
> This creates a default value for VM_DATA_DEFAULT_FLAGS in line with the
> existing VM_STACK_DEFAULT_FLAGS. While here, also define some more macros
> with standard VMA access fla
On 3/2/20 7:47 AM, Anshuman Khandual wrote:
> There are many places where all basic VMA access flags (read, write, exec)
> are initialized or checked against as a group. One such example is during
> page fault. Existing vma_is_accessible() wrapper already creates the notion
> of VMA accessibility a
ts.infradead.org
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: linux-s...@vger.kernel.org
> Cc: de...@driverdev.osuosl.org
> Cc: linux...@kvack.org
> Cc: linux-ker...@vger.kernel.org
> Signed-off-by: Anshuman Khandual
Reviewed-by: Vlastimil Babka
Thanks.
On 3/12/20 9:23 AM, Sachin Sant wrote:
>
>
>> On 12-Mar-2020, at 10:57 AM, Srikar Dronamraju
>> wrote:
>>
>> * Michal Hocko [2020-03-11 12:57:35]:
>>
>>> On Wed 11-03-20 16:32:35, Srikar Dronamraju wrote:
A Powerpc system with multiple possible nodes and with CONFIG_NUMA
enabled al
On 3/12/20 2:14 PM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-12 10:30:50]:
>
>> On 3/12/20 9:23 AM, Sachin Sant wrote:
>> >> On 12-Mar-2020, at 10:57 AM, Srikar Dronamraju
>> >> wrote:
>> >> * Michal Hocko [2020-03-11 12:57:35]:
>
On 3/12/20 5:13 PM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-12 14:51:38]:
>
>> > * Vlastimil Babka [2020-03-12 10:30:50]:
>> >
>> >> On 3/12/20 9:23 AM, Sachin Sant wrote:
>> >> >> On 12-Mar-2020, at 10:57 AM, Srikar Dronamra
On 3/13/20 12:12 PM, Srikar Dronamraju wrote:
> * Michael Ellerman [2020-03-13 21:48:06]:
>
>> Sachin Sant writes:
>> >> The patch below might work. Sachin can you test this? I tried faking up
>> >> a system with a memoryless node zero but couldn't get it to even start
>> >> booting.
>> >>
>> >
On 3/13/20 12:04 PM, Srikar Dronamraju wrote:
>> I lost all the memory about it. :)
>> Anyway, how about this?
>>
>> 1. make node_present_pages() safer
>> static inline unsigned long node_present_pages(int nid)
>> {
>> if (!node_online(nid)) return 0;
>> return (NODE_DATA(nid)->node_present_pages);
>> }
>>
>
> Ye
On 3/17/20 2:17 PM, Srikar Dronamraju wrote:
> Currently while allocating a slab for an offline node, we use its
> associated node_numa_mem to search for a partial slab. If we don't find
> a partial slab, we try allocating a slab from the offline node using
> __alloc_pages_node. However this is boun
On 3/16/20 10:06 AM, Michal Hocko wrote:
> On Thu 12-03-20 17:41:58, Vlastimil Babka wrote:
> [...]
>> with nid present in:
>> N_POSSIBLE - pgdat might not exist, node_to_mem_node() must return some
>> online
>
> I would rather have a dummy pgdat for those
On 3/17/20 2:45 PM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-17 14:34:25]:
>
>> On 3/17/20 2:17 PM, Srikar Dronamraju wrote:
>> > Currently while allocating a slab for an offline node, we use its
>> > associated node_numa_mem to search for a partial
On 3/17/20 3:51 PM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-17 14:53:26]:
>
>> >> >
>> >> > Mitigate this by allocating the new slab from the node_numa_mem.
>> >>
>> >> Are you sure this is really needed and the othe
On 3/17/20 12:53 PM, Bharata B Rao wrote:
> On Tue, Mar 17, 2020 at 02:56:28PM +0530, Bharata B Rao wrote:
>> Case 1: 2 node NUMA, node0 empty
>>
>> # numactl -H
>> available: 2 nodes (0-1)
>> node 0 cpus:
>> node 0 size: 0 MB
>> node 0 free: 0 MB
>> node 1 cpus: 0
On 3/17/20 5:25 PM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-17 16:56:04]:
>
>>
>> I wonder why do you get a memory leak while Sachin in the same situation [1]
>> gets a crash? I don't understand anything anymore.
>
> Sachin was testing on linux-
On 3/18/20 4:20 AM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-17 17:45:15]:
>>
>> Yes, that Kirill's patch was about the memcg shrinker map allocation. But the
>> patch hunk that Bharata posted as a "hack" that fixes the problem, it follows
>>
accessing the pgdat
>> structure. Fix the same for node_spanned_pages() too.
>>
>> Cc: Andrew Morton
>> Cc: linux...@kvack.org
>> Cc: Mel Gorman
>> Cc: Michael Ellerman
>> Cc: Sachin Sant
>> Cc: Michal Hocko
>> Cc: Christopher Lameter
>
m/088b5996-faae-8a56-ef9c-5b567125a...@suse.cz/
Reported-by: Sachin Sant
Reported-by: Bharata B Rao
Debugged-by: Srikar Dronamraju
Signed-off-by: Vlastimil Babka
Cc: Mel Gorman
Cc: Michael Ellerman
Cc: Michal Hocko
Cc: Christopher Lameter
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Joonsoo Kim
Cc
On 3/18/20 5:06 PM, Bharata B Rao wrote:
> On Wed, Mar 18, 2020 at 03:42:19PM +0100, Vlastimil Babka wrote:
>> This is a PowerPC platform with following NUMA topology:
>>
>> available: 2 nodes (0-1)
>> node 0 cpus:
>> node 0 size: 0 MB
>> node 0 free: 0 MB
[1]
https://lore.kernel.org/linux-next/3381cd91-ab3d-4773-ba04-e7a072a63...@linux.vnet.ibm.com/
[2]
https://lore.kernel.org/linux-mm/fff0e636-4c36-ed10-281c-8cdb0687c...@virtuozzo.com/
[3] https://lore.kernel.org/linux-mm/20200317092624.gb22...@in.ibm.com/
[4]
https://lore.kernel.org/linux-mm/088b599
On 3/19/20 1:32 AM, Michael Ellerman wrote:
> Seems like a nice solution to me
Thanks :)
>> 8<
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 17dc00e33115..1d4f2d7a0080 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -1511,7 +1511,7 @@ static inline struct page *alloc_slab_page(struct
On 3/19/20 9:52 AM, Sachin Sant wrote:
>
>> OK how about this version? It's somewhat ugly, but important is that the fast
>> path case (c->page exists) is unaffected and another common case (c->page is
>> NULL, but node is NUMA_NO_NODE) is just one extra check - impossible to
>> avoid at
>> some
On 3/19/20 2:26 PM, Sachin Sant wrote:
>
>
>> On 19-Mar-2020, at 6:53 PM, Vlastimil Babka wrote:
>>
>> On 3/19/20 9:52 AM, Sachin Sant wrote:
>>>
>>>> OK how about this version? It's somewhat ugly, but important is that the
>>>>
On 3/19/20 3:05 PM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-19 14:47:58]:
>
>> 8<
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 17dc00e33115..7113b1f9cd77 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -1973,8 +1973,6 @@
On 3/20/20 4:42 AM, Bharata B Rao wrote:
> On Thu, Mar 19, 2020 at 02:47:58PM +0100, Vlastimil Babka wrote:
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 17dc00e33115..7113b1f9cd77 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -1973,8 +1973,6 @@ static void
On 3/20/20 8:46 AM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-19 15:10:19]:
>
>> On 3/19/20 3:05 PM, Srikar Dronamraju wrote:
>> > * Vlastimil Babka [2020-03-19 14:47:58]:
>> >
>>
>> No, but AFAICS, such node values are already han
tested-by: Sachin Sant
Reported-by: PUVICHAKRAVARTHY RAMACHANDRAN
Tested-by: Bharata B Rao
Debugged-by: Srikar Dronamraju
Signed-off-by: Vlastimil Babka
Fixes: a561ce00b09e ("slub: fall back to node_to_mem_node() node if allocating
on memoryless node")
Cc: sta...@vger.kernel.org
Cc: Mel G
On 4/17/20 6:53 PM, Michal Suchánek wrote:
> Hello,
Hi, thanks for reproducing on latest upstream!
> instrumenting the kernel with the following patch
>
> ---
> mm/slub.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index d6787bbe0248..d40995d5f8ff 100644
>
On 4/21/20 10:39 AM, Nicolai Stange wrote:
> Hi
>
> [adding some drivers/char/random folks + LKML to CC]
>
> Vlastimil Babka writes:
>
>> On 4/17/20 6:53 PM, Michal Suchánek wrote:
>>> Hello,
>>
>> Hi, thanks for reproducing on latest upstr
On 10/16/18 12:33 AM, Joel Fernandes wrote:
> On Mon, Oct 15, 2018 at 02:42:09AM -0700, Christoph Hellwig wrote:
>> On Fri, Oct 12, 2018 at 06:31:58PM -0700, Joel Fernandes (Google) wrote:
>>> Android needs to mremap large regions of memory during memory management
>>> related operations.
>>
>> Jus
On 10/16/18 9:43 PM, Joel Fernandes wrote:
> On Tue, Oct 16, 2018 at 01:29:52PM +0200, Vlastimil Babka wrote:
>> On 10/16/18 12:33 AM, Joel Fernandes wrote:
>>> On Mon, Oct 15, 2018 at 02:42:09AM -0700, Christoph Hellwig wrote:
>>>> On Fri, Oct 12, 2018 at 06:31:58PM
On 1/17/19 7:39 PM, Alexandre Ghiti wrote:
> From: Alexandre Ghiti
>
> On systems without CMA or (MEMORY_ISOLATION && COMPACTION) activated but
> that support gigantic pages, boottime reserved gigantic pages can not be
> freed at all. This patch simply enables the possibility to hand back
> thos
On 2/13/19 8:30 PM, Dave Hansen wrote:
>> -#if (defined(CONFIG_MEMORY_ISOLATION) && defined(CONFIG_COMPACTION)) ||
>> defined(CONFIG_CMA)
>> +#ifdef CONFIG_COMPACTION_CORE
>> static __init int gigantic_pages_init(void)
>> {
>> /* With compaction or CMA we can allocate gigantic pages at runt
to make it more accurate: this value being false
> does not mean that the system cannot use gigantic pages, it just means that
> runtime allocation of gigantic pages is not supported, one can still
> allocate boottime gigantic pages if the architecture supports it.
>
> Sig
andard gfp flags and callers can pass __GFP_ZERO to get zeroed buffer,
> what has already been an issue: see commit dd65a941f6ba ("arm64:
> dma-mapping: clear buffers allocated with FORCE_CONTIGUOUS flag").
>
> Signed-off-by: Marek Szyprowski
Acked-by: Vlastimil Babka
mapping: clear buffers allocated with FORCE_CONTIGUOUS flag").
>
> Signed-off-by: Marek Szyprowski
Acked-by: Vlastimil Babka
: linux-ker...@vger.kernel.org
> Cc: linux...@kvack.org
> Signed-off-by: Anshuman Khandual
Some comment for the function wouldn't hurt, but perhaps it is self-explanatory
enough.
Acked-by: Vlastimil Babka
rg
> Acked-by: Geert Uytterhoeven
> Acked-by: Guo Ren
> Signed-off-by: Anshuman Khandual
Acked-by: Vlastimil Babka
el.org
> Cc: linux-a...@vger.kernel.org
> Cc: linux...@kvack.org
> Signed-off-by: Anshuman Khandual
Meh, why is there _page in the function's name... but too many users to bother
changing it now, I guess.
Acked-by: Vlastimil Babka
On 2/26/20 7:41 PM, Michal Hocko wrote:
> On Wed 26-02-20 18:25:28, Cristopher Lameter wrote:
>> On Mon, 24 Feb 2020, Michal Hocko wrote:
>>
>>> Hmm, nasty. Is there any reason why kmalloc_node behaves differently
>>> from the page allocator?
>>
>> The page allocator will do the same thing if you p
On 2/26/20 10:45 PM, Vlastimil Babka wrote:
>
>
> if (node == NUMA_NO_NODE)
> page = alloc_pages(flags, order);
> else
> page = __alloc_pages_node(node, flags, order);
>
> So yeah looks like SLUB's kmalloc_node() is supposed to behave like the
> page allo
On 8/20/19 4:30 AM, Christoph Hellwig wrote:
> On Mon, Aug 19, 2019 at 07:46:00PM +0200, David Sterba wrote:
>> Another thing that is lost is the slub debugging support for all
>> architectures, because get_zeroed_pages lacking the red zones and sanity
>> checks.
>>
>> I find working with raw page
On 7/11/23 12:35, Leon Romanovsky wrote:
>
> On Mon, Feb 27, 2023 at 09:35:59AM -0800, Suren Baghdasaryan wrote:
>
> <...>
>
>> Laurent Dufour (1):
>> powerc/mm: try VMA lock-based page fault handling first
>
> Hi,
>
> This series and specifically the commit above broke docker over PPC.
> It
On 7/25/23 14:51, Matthew Wilcox wrote:
> On Tue, Jul 25, 2023 at 01:24:03PM +0300, Kirill A . Shutemov wrote:
>> On Tue, Jul 18, 2023 at 04:44:53PM -0700, Sean Christopherson wrote:
>> > diff --git a/mm/compaction.c b/mm/compaction.c
>> > index dbc9f86b1934..a3d2b132df52 100644
>> > --- a/mm/compa
On 7/19/23 01:44, Sean Christopherson wrote:
> Signed-off-by: Sean Christopherson
Process wise this will probably be frowned upon when done separately, so I'd
fold it in the patch using the export, seems to be the next one.
> ---
> security/security.c | 1 +
> 1 file changed, 1 insertion(+)
>
On 7/26/23 13:20, Nikunj A. Dadhania wrote:
> Hi Sean,
>
> On 7/24/2023 10:30 PM, Sean Christopherson wrote:
>> On Mon, Jul 24, 2023, Nikunj A. Dadhania wrote:
>>> On 7/19/2023 5:14 AM, Sean Christopherson wrote:
This is the next iteration of implementing fd-based (instead of vma-based)
folios are also unevictable - it is the
case for guest memfd folios.
Also incorporate comment update suggested by Matthew.
Fixes: 3424873596ce ("mm: Add AS_UNMOVABLE to mark mapping as completely
unmovable")
Signed-off-by: Vlastimil Babka
---
Feel free to squash into 3424873596ce.
mm/co
On 7/25/23 14:51, Matthew Wilcox wrote:
> On Tue, Jul 25, 2023 at 01:24:03PM +0300, Kirill A . Shutemov wrote:
>> On Tue, Jul 18, 2023 at 04:44:53PM -0700, Sean Christopherson wrote:
>> > diff --git a/mm/compaction.c b/mm/compaction.c
>> > index dbc9f86b1934..a3d2b132df52 100644
>> > --- a/mm/compa
On 9/6/23 01:56, Sean Christopherson wrote:
> On Fri, Sep 01, 2023, Vlastimil Babka wrote:
>> As Kirill pointed out, mapping can be removed under us due to
>> truncation. Test it under folio lock as already done for the async
>> compaction / dirty folio case. To prevent loc
folios are also unevictable. To enforce
that expectation, make mapping_set_unmovable() also set AS_UNEVICTABLE.
Also incorporate comment update suggested by Matthew.
Fixes: 3424873596ce ("mm: Add AS_UNMOVABLE to mark mapping as completely
unmovable")
Signed-off-by: Vlastimil Bab
CONFIG_SLAB=y remove the line so those also
switch to SLUB. Regressions due to the switch should be reported to
linux-mm and slab maintainers.
[1] https://lore.kernel.org/all/4b9fc9c6-b48c-198f-5f80-811a44737...@suse.cz/
[2] https://lwn.net/Articles/932201/
Signed-off-by: Vlastimil Babka
---
arch/arc
On 5/23/23 11:22, Geert Uytterhoeven wrote:
> Hi Vlastimil,
>
> Thanks for your patch!
>
> On Tue, May 23, 2023 at 11:12 AM Vlastimil Babka wrote:
>> As discussed at LSF/MM [1] [2] and with no objections raised there,
>> deprecate the SLAB allocator. Rename the
On 5/24/23 02:29, David Rientjes wrote:
> On Tue, 23 May 2023, Vlastimil Babka wrote:
>
>> As discussed at LSF/MM [1] [2] and with no objections raised there,
>> deprecate the SLAB allocator. Rename the user-visible option so that
>> users with CONFIG_SLAB=y get a new
On 9/2/22 01:26, Suren Baghdasaryan wrote:
> On Thu, Sep 1, 2022 at 1:58 PM Kent Overstreet
> wrote:
>>
>> On Thu, Sep 01, 2022 at 10:34:48AM -0700, Suren Baghdasaryan wrote:
>> > Resending to fix the issue with the In-Reply-To tag in the original
>> > submission at [4].
>> >
>> > This is a proof
On 9/28/22 04:28, Suren Baghdasaryan wrote:
> On Sun, Sep 11, 2022 at 2:35 AM Vlastimil Babka wrote:
>>
>> On 9/2/22 01:26, Suren Baghdasaryan wrote:
>> >
>> >>
>> >> Two complaints so far:
>> >> - I don't like the vma_mark_locked(
CKUP @
>> __update_freelist_slow+0x74/0x90
>
> Sorry, the bug can be fixed by this patch from Vlastimil Babka:
>
> https://lore.kernel.org/all/83ff4b9e-94f1-8b35-1233-3dd414ea4...@suse.cz/
The current -next should be fixed, the fix was folded to the preparatory
commit, which
On 11/2/23 16:46, Paolo Bonzini wrote:
> On Thu, Nov 2, 2023 at 4:38 PM Sean Christopherson wrote:
>> Actually, looking that this again, there's not actually a hard dependency on
>> THP.
>> A THP-enabled kernel _probably_ gives a higher probability of using
>> hugepages,
>> but mostly because T
On 10/8/20 11:49 AM, Christophe Leroy wrote:
In a 10 years old commit
(https://github.com/linuxppc/linux/commit/d069cb4373fe0d451357c4d3769623a7564dfa9f),
powerpc 8xx has
made the handling of PTE accessed bit conditional to CONFIG_SWAP.
Since then, this has been extended to some other powerpc va
pages when page
allocation debug is enabled.
Signed-off-by: Mike Rapoport
Reviewed-by: David Hildenbrand
Acked-by: Kirill A. Shutemov
Acked-by: Vlastimil Babka
But, the "enable" param is hideous. I would rather have map and unmap variants
(and just did the same split for page pois
,invalid}_noflush().
Still, add a pr_warn() so that future changes in set_memory APIs will not
silently break hibernation.
Signed-off-by: Mike Rapoport
Acked-by: Rafael J. Wysocki
Reviewed-by: David Hildenbrand
Acked-by: Kirill A. Shutemov
Acked-by: Vlastimil Babka
The bool param is a bit
On 11/3/20 5:20 PM, Mike Rapoport wrote:
From: Mike Rapoport
Subject should have "on DEBUG_PAGEALLOC" ?
The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never
fail. With this assumption it wouldn't be safe to allow general usage of
this function.
Moreover, some architec
On 11/8/20 7:57 AM, Mike Rapoport wrote:
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1428,21 +1428,19 @@ static bool is_debug_pagealloc_cache(struct kmem_cache
*cachep)
return false;
}
-#ifdef CONFIG_DEBUG_PAGEALLOC
static void slab_kernel_map(struct kmem_cache *cachep, void *objp, int m
to separate it. So let's prepare for non-anon
> tests by renaming to "cow".
>
> Signed-off-by: David Hildenbrand
Acked-by: Vlastimil Babka
nbrand
> ---
> mm/huge_memory.c | 3 ---
> mm/hugetlb.c | 5 -
> mm/memory.c | 23 ---
> 3 files changed, 20 insertions(+), 11 deletions(-)
Reviewed-by: Vlastimil Babka
; This is a preparation for reliable R/O long-term pinning of pages in
> private mappings, whereby we want to make sure that we will never break
> COW in a read-only private mapping.
>
> Signed-off-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
> ---
> mm/memory.c | 8
private mappings last.
>
> While at it, use folio-based functions instead of page-based functions
> where we touch the code either way.
>
> Signed-off-by: David Hildenbrand
Reviewed-by: Vlastimil Babka