pgprot_t vm_get_page_prot(unsigned long vm_flags)
With that sorted out, feel free to add:
Reviewed-by: Vlastimil Babka
Thanks!
Overall, this patch does not introduce any functional change.
>
> Signed-off-by: Lorenzo Stoakes
Reviewed-by: Vlastimil Babka
any functional change.
>
> Signed-off-by: Lorenzo Stoakes
Reviewed-by: Vlastimil Babka
Thanks!
On 12/4/24 09:59, Oscar Salvador wrote:
> On Tue, Dec 03, 2024 at 08:19:02PM +0100, David Hildenbrand wrote:
>> It was always set using "GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL",
>> and I removed the same flag combination in #2 from memory offline code, and
>> we do have the exact same thing
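For illustration, a minimal sketch (the helper name is hypothetical) of the hard-coded migration-target allocation being discussed; GFP_USER, __GFP_MOVABLE and __GFP_RETRY_MAYFAIL are the real gfp flags named above:

	static struct page *offline_migrate_target_sketch(int nid)
	{
		gfp_t gfp = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;

		/* allocate an order-0 target page for migration */
		return alloc_pages_node(nid, gfp, 0);
	}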
On 12/3/24 20:19, David Hildenbrand wrote:
> On 03.12.24 15:24, Vlastimil Babka wrote:
>> On 12/3/24 15:12, David Hildenbrand wrote:
>>> On 03.12.24 14:55, Vlastimil Babka wrote:
>>> likely the thing we are assuming here is that we are migrating a page, and
>>>
On 12/3/24 10:47, David Hildenbrand wrote:
> alloc_contig_pages()->alloc_contig_range() now supports __GFP_ZERO,
> so let's use that instead to resolve our TODO.
>
> Signed-off-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
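For illustration, a minimal sketch (helper name hypothetical) of what resolving the TODO looks like: request zeroed pages directly instead of clearing them after allocation. alloc_contig_pages() and __GFP_ZERO are the real kernel APIs:

	static struct page *grab_zeroed_range_sketch(unsigned long nr_pages,
						     int nid)
	{
		/* __GFP_ZERO is now honored by alloc_contig_range() */
		return alloc_contig_pages(nr_pages, GFP_KERNEL | __GFP_ZERO,
					  nid, NULL);
	}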
> be converting (powernv/memtrace) next won't trigger this.
>
> Signed-off-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
> ---
> mm/page_alloc.c | 9 +++++----
> 1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/pa
On 12/3/24 15:12, David Hildenbrand wrote:
> On 03.12.24 14:55, Vlastimil Babka wrote:
>> On 12/3/24 10:47, David Hildenbrand wrote:
>>> It's all a bit complicated for alloc_contig_range(). For example, we don't
>>> support many flags, so let's start bailing
or
> compaction/migration exactly once. Update the documentation of the
> gfp_mask parameter for alloc_contig_range() and alloc_contig_pages().
>
> Acked-by: Zi Yan
> Signed-off-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
> + /*
> + * Flags to control page compact
On 12/3/24 10:47, David Hildenbrand wrote:
> The flags are no longer used, we can stop passing them to
> isolate_single_pageblock().
>
> Reviewed-by: Zi Yan
> Signed-off-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
> ---
> mm/page_isolation.c | 8 +++-
On 12/3/24 10:47, David Hildenbrand wrote:
> The single user is in page_alloc.c.
>
> Reviewed-by: Zi Yan
> Signed-off-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
> ---
> mm/internal.h | 4
> mm/page_alloc.c | 5 ++---
> 2 files changed, 2 in
On 12/3/24 10:47, David Hildenbrand wrote:
> The parameter is unused, so let's stop passing it.
>
> Reviewed-by: Zi Yan
> Signed-off-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
irectly.
>
> The changes were done using the following Coccinelle semantic patch.
> This semantic patch is designed to ignore cases where the callback
> function is used in another way.
Thanks, LGTM!
For the series:
Acked-by: Vlastimil Babka
> position r.p;
> @@
>
> - call_rcu(&e->f,cb@p)
> + kfree_rcu(e,f)
>
> @r1a depends on !s@
> type T;
> identifier x,r.cb;
> @@
>
> - cb(...) {
> (
> - kmem_cache_free(...);
> |
> - T x = ...;
> - kmem_cache_free(...,x);
> |
On 7/24/24 15:53, Paul E. McKenney wrote:
> On Mon, Jul 15, 2024 at 10:39:38PM +0200, Vlastimil Babka wrote:
>> On 6/21/24 11:32 AM, Uladzislau Rezki wrote:
>> > On Wed, Jun 19, 2024 at 11:28:13AM +0200, Vlastimil Babka wrote:
>> > One question. Maybe it is already l
To: Sven Schnelle
> To: Yoshinori Sato
> To: Rich Felker
> To: John Paul Adrian Glaubitz
> To: David S. Miller
> To: Andreas Larsson
> To: Thomas Gleixner
> To: Ingo Molnar
> To: Borislav Petkov
> To: Dave Hansen
> To: x...@kernel.org
> To: H. Peter Anvin
On 6/21/24 11:32 AM, Uladzislau Rezki wrote:
> On Wed, Jun 19, 2024 at 11:28:13AM +0200, Vlastimil Babka wrote:
> One question. Maybe it is already late but it is better to ask rather than
> not.
>
> What do you think if we have a small discussion about it on the LPC 2024 as a
On 6/19/24 11:51 AM, Uladzislau Rezki wrote:
> On Tue, Jun 18, 2024 at 09:48:49AM -0700, Paul E. McKenney wrote:
>> On Tue, Jun 18, 2024 at 11:31:00AM +0200, Uladzislau Rezki wrote:
>> > > On 6/17/24 8:42 PM, Uladzislau Rezki wrote:
>> > > >> +
>> > > >> + s = container_of(work, struct kmem_cac
On 6/18/24 7:53 PM, Paul E. McKenney wrote:
> On Tue, Jun 18, 2024 at 07:21:42PM +0200, Vlastimil Babka wrote:
>> On 6/18/24 6:48 PM, Paul E. McKenney wrote:
>> > On Tue, Jun 18, 2024 at 11:31:00AM +0200, Uladzislau Rezki wrote:
>> >> > On 6/17/2
On 6/18/24 6:48 PM, Paul E. McKenney wrote:
> On Tue, Jun 18, 2024 at 11:31:00AM +0200, Uladzislau Rezki wrote:
>> > On 6/17/24 8:42 PM, Uladzislau Rezki wrote:
>> > >> +
>> > >> + s = container_of(work, struct kmem_cache, async_destroy_work);
>> > >> +
>> > >> + // XXX use the real kme
On 6/17/24 8:54 PM, Paul E. McKenney wrote:
> On Mon, Jun 17, 2024 at 07:23:36PM +0200, Vlastimil Babka wrote:
>> On 6/17/24 6:12 PM, Paul E. McKenney wrote:
>>> On Mon, Jun 17, 2024 at 05:10:50PM +0200, Vlastimil Babka wrote:
>>>> On 6/13/24 2:22 PM, Jason A. Donenfe
On 6/17/24 7:04 PM, Jason A. Donenfeld wrote:
>>> Vlastimil, this is just checking a boolean (which could be
>>> unlikely()'d), which should have pretty minimal overhead. Is that
>>> alright with you?
>>
>> Well I doubt we can just set and check it without any barriers? The
>> completion of the las
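For illustration, a minimal sketch (names hypothetical) of the barrier concern raised here: a flag set on one CPU and checked lock-free on another needs at least release/acquire ordering, e.g. via smp_store_release()/smp_load_acquire():

	static bool destruction_done;

	static void mark_destruction_done(void)
	{
		/* writer: order all prior teardown before publishing the flag */
		smp_store_release(&destruction_done, true);
	}

	static bool destruction_is_done(void)
	{
		/* reader: pairs with the release store above */
		return smp_load_acquire(&destruction_done);
	}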
On 6/17/24 8:42 PM, Uladzislau Rezki wrote:
>> +
>> +	s = container_of(work, struct kmem_cache, async_destroy_work);
>> +
>> +	// XXX use the real kmem_cache_free_barrier() or similar thing here
> It implies that we need to introduce kfree_rcu_barrier(), a new API, which I
> wanted to avoid i
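For illustration, a minimal sketch of the deferred-destroy pattern under discussion; the struct layout is hypothetical (not the real struct kmem_cache), but container_of() and the workqueue calls are the standard kernel APIs:

	struct cache_sketch {
		const char *name;
		struct work_struct async_destroy_work;
	};

	static void async_destroy_workfn(struct work_struct *work)
	{
		struct cache_sketch *s =
			container_of(work, struct cache_sketch,
				     async_destroy_work);

		/* XXX: would need kfree_rcu_barrier() or similar here
		 * before actually tearing down 's' */
	}

	/* queueing side:
	 *	INIT_WORK(&s->async_destroy_work, async_destroy_workfn);
	 *	schedule_work(&s->async_destroy_work);
	 */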
On 6/17/24 6:12 PM, Paul E. McKenney wrote:
> On Mon, Jun 17, 2024 at 05:10:50PM +0200, Vlastimil Babka wrote:
>> On 6/13/24 2:22 PM, Jason A. Donenfeld wrote:
>> > On Wed, Jun 12, 2024 at 08:38:02PM -0700, Paul E. McKenney wrote:
>> >> o Make the current kmem_cache
On 6/17/24 6:33 PM, Jason A. Donenfeld wrote:
> On Mon, Jun 17, 2024 at 6:30 PM Uladzislau Rezki wrote:
>> Here if an "err" is less than "0" it means there are still objects,
>> whereas "is_destroyed" is set to "true", which does not match the
>> comment:
>>
>> "Destruction happens when no object
On 6/13/24 2:22 PM, Jason A. Donenfeld wrote:
> On Wed, Jun 12, 2024 at 08:38:02PM -0700, Paul E. McKenney wrote:
>> o Make the current kmem_cache_destroy() asynchronously wait for
>> all memory to be returned, then complete the destruction.
>> (This gets rid of a valuable debugging te
On 6/14/24 9:33 PM, Jason A. Donenfeld wrote:
> On Fri, Jun 14, 2024 at 02:35:33PM +0200, Uladzislau Rezki wrote:
>> +	/* Should a destroy process be deferred? */
>> +	if (s->flags & SLAB_DEFER_DESTROY) {
>> +		list_move_tail(&s->list, &slab_caches_defer_destroy);
>> +		sc
On 6/6/24 3:32 PM, Erhard Furtner wrote:
> On Thu, 6 Jun 2024 09:24:56 +0200
> "Vlastimil Babka (SUSE)" wrote:
>
>> Besides the zpool commit which might have just pushed the machine over the
>> edge, but it was probably close to it already. I've noticed a mo
On 6/6/24 1:41 AM, Yosry Ahmed wrote:
> On Wed, Jun 5, 2024 at 4:04 PM Erhard Furtner wrote:
>
> I am personally leaning toward (c), but I want to hear the opinions of
> other people here. Yu, Vlastimil, Johannes, Nhat? Anyone else?
Besides the zpool commit which might have just pushed the machi
On 6/4/24 8:01 PM, Yosry Ahmed wrote:
> On Tue, Jun 4, 2024 at 10:54 AM Yu Zhao wrote:
>> There was a lot of user memory in the DMA zone. So at a point the
>> highmem zone was full and allocation fallback happened.
>>
>> The problem with zone fallback is that recent allocations go into
>> lower zo
On 6/4/24 1:24 AM, Yosry Ahmed wrote:
> On Mon, Jun 3, 2024 at 3:13 PM Erhard Furtner wrote:
>>
>> On Sun, 2 Jun 2024 20:03:32 +0200
>> Erhard Furtner wrote:
>>
>> > On Sat, 1 Jun 2024 00:01:48 -0600
>> > Yu Zhao wrote:
>> >
>> > > The OOM kills on both kernel versions seem to be reasonable to m
On 11/2/23 16:46, Paolo Bonzini wrote:
> On Thu, Nov 2, 2023 at 4:38 PM Sean Christopherson wrote:
>> Actually, looking at this again, there's not actually a hard dependency on
>> THP.
>> A THP-enabled kernel _probably_ gives a higher probability of using
>> hugepages,
>> but mostly because T
CKUP @
>> __update_freelist_slow+0x74/0x90
>
> Sorry, the bug can be fixed by this patch from Vlastimil Babka:
>
> https://lore.kernel.org/all/83ff4b9e-94f1-8b35-1233-3dd414ea4...@suse.cz/
The current -next should be fixed, the fix was folded into the preparatory
commit, which
folios are also unevictable. To enforce
that expectation, make mapping_set_unmovable() also set AS_UNEVICTABLE.
Also incorporate the comment update suggested by Matthew.
Fixes: 3424873596ce ("mm: Add AS_UNMOVABLE to mark mapping as completely unmovable")
Signed-off-by: Vlastimil Babka
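For illustration, a minimal sketch of the helper after the change described above; the exact body is an assumption, but mapping_set_unevictable() and AS_UNMOVABLE are the real APIs involved:

	static inline void mapping_set_unmovable(struct address_space *mapping)
	{
		/* unmovable (e.g. guest memfd) folios must also never be
		 * scanned for eviction */
		mapping_set_unevictable(mapping);
		set_bit(AS_UNMOVABLE, &mapping->flags);
	}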
On 9/6/23 01:56, Sean Christopherson wrote:
> On Fri, Sep 01, 2023, Vlastimil Babka wrote:
>> As Kirill pointed out, mapping can be removed under us due to
>> truncation. Test it under folio lock as already done for the async
>> compaction / dirty folio case. To prevent loc
On 7/25/23 14:51, Matthew Wilcox wrote:
> On Tue, Jul 25, 2023 at 01:24:03PM +0300, Kirill A . Shutemov wrote:
>> On Tue, Jul 18, 2023 at 04:44:53PM -0700, Sean Christopherson wrote:
>> > diff --git a/mm/compaction.c b/mm/compaction.c
>> > index dbc9f86b1934..a3d2b132df52 100644
>> > --- a/mm/compa
folios are also unevictable - it is the
case for guest memfd folios.
Also incorporate comment update suggested by Matthew.
Fixes: 3424873596ce ("mm: Add AS_UNMOVABLE to mark mapping as completely unmovable")
Signed-off-by: Vlastimil Babka
---
Feel free to squash into 3424873596ce.
mm/co
On 7/26/23 13:20, Nikunj A. Dadhania wrote:
> Hi Sean,
>
> On 7/24/2023 10:30 PM, Sean Christopherson wrote:
>> On Mon, Jul 24, 2023, Nikunj A. Dadhania wrote:
>>> On 7/19/2023 5:14 AM, Sean Christopherson wrote:
This is the next iteration of implementing fd-based (instead of vma-based)
On 7/19/23 01:44, Sean Christopherson wrote:
> Signed-off-by: Sean Christopherson
Process-wise this will probably be frowned upon when done separately, so I'd
fold it into the patch using the export, which seems to be the next one.
> ---
> security/security.c | 1 +
> 1 file changed, 1 insertion(+)
>
On 7/11/23 12:35, Leon Romanovsky wrote:
>
> On Mon, Feb 27, 2023 at 09:35:59AM -0800, Suren Baghdasaryan wrote:
>
> <...>
>
>> Laurent Dufour (1):
>> powerc/mm: try VMA lock-based page fault handling first
>
> Hi,
>
> This series and specifically the commit above broke docker over PPC.
> It
On 5/24/23 02:29, David Rientjes wrote:
> On Tue, 23 May 2023, Vlastimil Babka wrote:
>
>> As discussed at LSF/MM [1] [2] and with no objections raised there,
>> deprecate the SLAB allocator. Rename the user-visible option so that
>> users with CONFIG_SLAB=y get a new
On 5/23/23 11:22, Geert Uytterhoeven wrote:
> Hi Vlastimil,
>
> Thanks for your patch!
>
> On Tue, May 23, 2023 at 11:12 AM Vlastimil Babka wrote:
>> As discussed at LSF/MM [1] [2] and with no objections raised there,
>> deprecate the SLAB allocator. Rename the
CONFIG_SLAB=y remove the line so those also
switch to SLUB. Regressions due to the switch should be reported to
linux-mm and slab maintainers.
[1] https://lore.kernel.org/all/4b9fc9c6-b48c-198f-5f80-811a44737...@suse.cz/
[2] https://lwn.net/Articles/932201/
Signed-off-by: Vlastimil Babka
---
arch/arc
On 1/9/23 21:53, Suren Baghdasaryan wrote:
> rw_semaphore is a sizable structure of 40 bytes and consumes
> considerable space for each vm_area_struct. However vma_lock has
> two important specifics which can be used to replace rw_semaphore
> with a simpler structure:
> 1. Readers never wait. They
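For illustration, a minimal sketch (names hypothetical, not the actual vma_lock patch) of a lock where readers never wait: readers make a single atomic attempt and fall back to the mmap_lock path on failure, so no wait queue is needed on the read side:

	struct small_vma_lock {
		atomic_t count;	/* >= 0: readers, < 0: writer holds it */
	};

	static bool vma_read_trylock_sketch(struct small_vma_lock *l)
	{
		/* fails instead of blocking when a writer is active */
		return atomic_inc_unless_negative(&l->count);
	}

	static void vma_read_unlock_sketch(struct small_vma_lock *l)
	{
		atomic_dec(&l->count);
	}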
lock limit of existing setups.
>
> For example, a VM running with VFIO could run into the memlock limit and
> fail to run. However, we essentially had the same behavior already in
> commit 17839856fd58 ("gup: document and work around "COW can break either
> way" issue") which got merged into some enterprise distros, and there were
> not any such complaints. So most probably, we're fine.
>
> Signed-off-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
also handles it
> correctly, for example, splitting the huge zeropage on FAULT_FLAG_UNSHARE
> such that we can handle FAULT_FLAG_UNSHARE on the PTE level.
>
> This change is a requirement for reliable long-term R/O pinning in
> COW mappings.
>
> Signed-off-by: David Hildenb
> Let's just split (->zap) + fallback in that case.
>
> This is a preparation for more generic FAULT_FLAG_UNSHARE support in
> COW mappings.
>
> Signed-off-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
Nits:
> ---
> mm/memory.c | 24 +++-
private mappings last.
>
> While at it, use folio-based functions instead of page-based functions
> where we touch the code either way.
>
> Signed-off-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
> This is a preparation for reliable R/O long-term pinning of pages in
> private mappings, whereby we want to make sure that we will never break
> COW in a read-only private mapping.
>
> Signed-off-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
> ---
> mm/memory.c | 8
> Signed-off-by: David Hildenbrand
> ---
> mm/huge_memory.c | 3 ---
> mm/hugetlb.c | 5 -
> mm/memory.c | 23 ---
> 3 files changed, 20 insertions(+), 11 deletions(-)
Reviewed-by: Vlastimil Babka
to separate it. So let's prepare for non-anon
> tests by renaming to "cow".
>
> Signed-off-by: David Hildenbrand
Acked-by: Vlastimil Babka
On 9/28/22 04:28, Suren Baghdasaryan wrote:
> On Sun, Sep 11, 2022 at 2:35 AM Vlastimil Babka wrote:
>>
>> On 9/2/22 01:26, Suren Baghdasaryan wrote:
>> >
>> >>
>> >> Two complaints so far:
>> >> - I don't like the vma_mark_locked(
On 9/2/22 01:26, Suren Baghdasaryan wrote:
> On Thu, Sep 1, 2022 at 1:58 PM Kent Overstreet
> wrote:
>>
>> On Thu, Sep 01, 2022 at 10:34:48AM -0700, Suren Baghdasaryan wrote:
>> > Resending to fix the issue with the In-Reply-To tag in the original
>> > submission at [4].
>> >
>> > This is a proof
On 3/29/22 18:43, David Hildenbrand wrote:
> Let's test that __HAVE_ARCH_PTE_SWP_EXCLUSIVE works as expected.
>
> Signed-off-by: David Hildenbrand
Acked-by: Vlastimil Babka
> ---
> mm/debug_vm_pgtable.c | 15 +++
> 1 file changed, 15 insertions(
offset bits.
>
> Note: R/O FOLL_GET references were never really reliable, especially
> when taking one on a shared page and then writing to the page (e.g., GUP
> after fork()). FOLL_GET, including R/W references, were never really
> reliable once fork was involved (e.g., GUP befor
On 11/29/21 23:08, Zi Yan wrote:
> On 23 Nov 2021, at 12:32, Vlastimil Babka wrote:
>
>> On 11/23/21 17:35, Zi Yan wrote:
>>> On 19 Nov 2021, at 10:15, Zi Yan wrote:
>>>>>> From what my understanding, cma required alignment of
>>>>>> max(
On 11/23/21 17:35, Zi Yan wrote:
> On 19 Nov 2021, at 10:15, Zi Yan wrote:
From what my understanding, cma required alignment of
max(MAX_ORDER - 1, pageblock_order), because when MIGRATE_CMA was
introduced,
__free_one_page() does not prevent merging two different pageblocks, wh
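For illustration, the alignment rule quoted above as a small sketch (the helper name is hypothetical):

	static unsigned long cma_align_pfn_sketch(unsigned long pfn)
	{
		unsigned long order = max_t(unsigned long,
					    MAX_ORDER - 1, pageblock_order);

		/* round up to the stricter of the two boundaries */
		return ALIGN(pfn, 1UL << order);
	}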
On 11/15/21 20:37, Zi Yan wrote:
> From: Zi Yan
>
> Hi David,
>
> You suggested to make alloc_contig_range() deal with pageblock_order instead
> of
> MAX_ORDER - 1 and get rid of MAX_ORDER - 1 dependency in virtio_mem[1]. This
> patchset is my attempt to achieve that. Please take a look and let
On 11/8/20 7:57 AM, Mike Rapoport wrote:
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1428,21 +1428,19 @@ static bool is_debug_pagealloc_cache(struct kmem_cache *cachep)
 	return false;
 }
-#ifdef CONFIG_DEBUG_PAGEALLOC
 static void slab_kernel_map(struct kmem_cache *cachep, void *objp, int m
On 11/3/20 5:20 PM, Mike Rapoport wrote:
From: Mike Rapoport
Subject should have "on DEBUG_PAGEALLOC" ?
The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never
fail. With this assumption it wouldn't be safe to allow general usage of
this function.
Moreover, some architec
,invalid}_noflush().
Still, add a pr_warn() so that future changes in set_memory APIs will not
silently break hibernation.
Signed-off-by: Mike Rapoport
Acked-by: Rafael J. Wysocki
Reviewed-by: David Hildenbrand
Acked-by: Kirill A. Shutemov
Acked-by: Vlastimil Babka
The bool param is a bit
pages when page
allocation debug is enabled.
Signed-off-by: Mike Rapoport
Reviewed-by: David Hildenbrand
Acked-by: Kirill A. Shutemov
Acked-by: Vlastimil Babka
But, the "enable" param is hideous. I would rather have map and unmap variants
(and just did the same split for page pois
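For illustration, a minimal sketch of the suggested split: two explicit helpers instead of one function taking a bool enable parameter. __kernel_map_pages() and debug_pagealloc_enabled_static() are the real APIs; the helper names here are an assumption:

	static inline void debug_pagealloc_map_sketch(struct page *page,
						      int numpages)
	{
		if (debug_pagealloc_enabled_static())
			__kernel_map_pages(page, numpages, 1);
	}

	static inline void debug_pagealloc_unmap_sketch(struct page *page,
							int numpages)
	{
		if (debug_pagealloc_enabled_static())
			__kernel_map_pages(page, numpages, 0);
	}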
On 10/8/20 11:49 AM, Christophe Leroy wrote:
In a 10-year-old commit
(https://github.com/linuxppc/linux/commit/d069cb4373fe0d451357c4d3769623a7564dfa9f),
powerpc 8xx has
made the handling of PTE accessed bit conditional to CONFIG_SWAP.
Since then, this has been extended to some other powerpc va
On 4/21/20 10:39 AM, Nicolai Stange wrote:
> Hi
>
> [adding some drivers/char/random folks + LKML to CC]
>
> Vlastimil Babka writes:
>
>> On 4/17/20 6:53 PM, Michal Suchánek wrote:
>>> Hello,
>>
>> Hi, thanks for reproducing on latest upstr
On 4/17/20 6:53 PM, Michal Suchánek wrote:
> Hello,
Hi, thanks for reproducing on latest upstream!
> instrumenting the kernel with the following patch
>
> ---
> mm/slub.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index d6787bbe0248..d40995d5f8ff 100644
>
Tested-by: Sachin Sant
Reported-by: PUVICHAKRAVARTHY RAMACHANDRAN
Tested-by: Bharata B Rao
Debugged-by: Srikar Dronamraju
Signed-off-by: Vlastimil Babka
Fixes: a561ce00b09e ("slub: fall back to node_to_mem_node() node if allocating on memoryless node")
Cc: sta...@vger.kernel.org
Cc: Mel G
On 3/20/20 8:46 AM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-19 15:10:19]:
>
>> On 3/19/20 3:05 PM, Srikar Dronamraju wrote:
>> > * Vlastimil Babka [2020-03-19 14:47:58]:
>> >
>>
>> No, but AFAICS, such node values are already han
On 3/20/20 4:42 AM, Bharata B Rao wrote:
> On Thu, Mar 19, 2020 at 02:47:58PM +0100, Vlastimil Babka wrote:
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 17dc00e33115..7113b1f9cd77 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -1973,8 +1973,6 @@ static void
On 3/19/20 3:05 PM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-19 14:47:58]:
>
>> 8<
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 17dc00e33115..7113b1f9cd77 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -1973,8 +1973,6 @@
On 3/19/20 2:26 PM, Sachin Sant wrote:
>
>
>> On 19-Mar-2020, at 6:53 PM, Vlastimil Babka wrote:
>>
>> On 3/19/20 9:52 AM, Sachin Sant wrote:
>>>
>>>> OK how about this version? It's somewhat ugly, but the important thing is that the
>>>>
On 3/19/20 9:52 AM, Sachin Sant wrote:
>
>> OK how about this version? It's somewhat ugly, but the important thing is
>> that the fast path case (c->page exists) is unaffected and another common
>> case (c->page is NULL, but node is NUMA_NO_NODE) is just one extra check -
>> impossible to avoid at
>> some
On 3/19/20 1:32 AM, Michael Ellerman wrote:
> Seems like a nice solution to me
Thanks :)
>> 8<
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 17dc00e33115..1d4f2d7a0080 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -1511,7 +1511,7 @@ static inline struct page *alloc_slab_page(struct
[1] https://lore.kernel.org/linux-next/3381cd91-ab3d-4773-ba04-e7a072a63...@linux.vnet.ibm.com/
[2] https://lore.kernel.org/linux-mm/fff0e636-4c36-ed10-281c-8cdb0687c...@virtuozzo.com/
[3] https://lore.kernel.org/linux-mm/20200317092624.gb22...@in.ibm.com/
[4] https://lore.kernel.org/linux-mm/088b599
On 3/18/20 5:06 PM, Bharata B Rao wrote:
> On Wed, Mar 18, 2020 at 03:42:19PM +0100, Vlastimil Babka wrote:
>> This is a PowerPC platform with following NUMA topology:
>>
>> available: 2 nodes (0-1)
>> node 0 cpus:
>> node 0 size: 0 MB
>> node 0 free: 0 MB
m/088b5996-faae-8a56-ef9c-5b567125a...@suse.cz/
Reported-by: Sachin Sant
Reported-by: Bharata B Rao
Debugged-by: Srikar Dronamraju
Signed-off-by: Vlastimil Babka
Cc: Mel Gorman
Cc: Michael Ellerman
Cc: Michal Hocko
Cc: Christopher Lameter
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Joonsoo Kim
Cc
accessing the pgdat
>> structure. Fix the same for node_spanned_pages() too.
>>
>> Cc: Andrew Morton
>> Cc: linux...@kvack.org
>> Cc: Mel Gorman
>> Cc: Michael Ellerman
>> Cc: Sachin Sant
>> Cc: Michal Hocko
>> Cc: Christopher Lameter
>
On 3/18/20 4:20 AM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-17 17:45:15]:
>>
>> Yes, that Kirill's patch was about the memcg shrinker map allocation. But the
>> patch hunk that Bharata posted as a "hack" that fixes the problem, it follows
>>
On 3/17/20 5:25 PM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-17 16:56:04]:
>
>>
>> I wonder why do you get a memory leak while Sachin in the same situation [1]
>> gets a crash? I don't understand anything anymore.
>
> Sachin was testing on linux-
On 3/17/20 12:53 PM, Bharata B Rao wrote:
> On Tue, Mar 17, 2020 at 02:56:28PM +0530, Bharata B Rao wrote:
>> Case 1: 2 node NUMA, node0 empty
>>
>> # numactl -H
>> available: 2 nodes (0-1)
>> node 0 cpus:
>> node 0 size: 0 MB
>> node 0 free: 0 MB
>> node 1 cpus: 0
On 3/17/20 3:51 PM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-17 14:53:26]:
>
>> >> >
>> >> > Mitigate this by allocating the new slab from the node_numa_mem.
>> >>
>> >> Are you sure this is really needed and the othe
On 3/17/20 2:45 PM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-17 14:34:25]:
>
>> On 3/17/20 2:17 PM, Srikar Dronamraju wrote:
>> > Currently while allocating a slab for a offline node, we use its
>> > associated node_numa_mem to search for a partial
On 3/16/20 10:06 AM, Michal Hocko wrote:
> On Thu 12-03-20 17:41:58, Vlastimil Babka wrote:
> [...]
>> with nid present in:
>> N_POSSIBLE - pgdat might not exist, node_to_mem_node() must return some
>> online
>
> I would rather have a dummy pgdat for those
On 3/17/20 2:17 PM, Srikar Dronamraju wrote:
> Currently while allocating a slab for an offline node, we use its
> associated node_numa_mem to search for a partial slab. If we don't find
> a partial slab, we try allocating a slab from the offline node using
> __alloc_pages_node. However this is boun
On 3/13/20 12:04 PM, Srikar Dronamraju wrote:
>> I lost all the memory about it. :)
>> Anyway, how about this?
>>
>> 1. make node_present_pages() safer
>> static inline unsigned long node_present_pages(int nid)
>> {
>> 	if (!node_online(nid))
>> 		return 0;
>> 	return NODE_DATA(nid)->node_present_pages;
>> }
>>
>
> Ye
On 3/13/20 12:12 PM, Srikar Dronamraju wrote:
> * Michael Ellerman [2020-03-13 21:48:06]:
>
>> Sachin Sant writes:
>> >> The patch below might work. Sachin can you test this? I tried faking up
>> >> a system with a memoryless node zero but couldn't get it to even start
>> >> booting.
>> >>
>> >
On 3/12/20 5:13 PM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-12 14:51:38]:
>
>> > * Vlastimil Babka [2020-03-12 10:30:50]:
>> >
>> >> On 3/12/20 9:23 AM, Sachin Sant wrote:
>> >> >> On 12-Mar-2020, at 10:57 AM, Srikar Dronamra
On 3/12/20 2:14 PM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-12 10:30:50]:
>
>> On 3/12/20 9:23 AM, Sachin Sant wrote:
>> >> On 12-Mar-2020, at 10:57 AM, Srikar Dronamraju
>> >> wrote:
>> >> * Michal Hocko [2020-03-11 12:57:35]:
On 3/12/20 9:23 AM, Sachin Sant wrote:
>
>
>> On 12-Mar-2020, at 10:57 AM, Srikar Dronamraju
>> wrote:
>>
>> * Michal Hocko [2020-03-11 12:57:35]:
>>
>>> On Wed 11-03-20 16:32:35, Srikar Dronamraju wrote:
A Powerpc system with multiple possible nodes and with CONFIG_NUMA
enabled al
ts.infradead.org
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: linux-s...@vger.kernel.org
> Cc: de...@driverdev.osuosl.org
> Cc: linux...@kvack.org
> Cc: linux-ker...@vger.kernel.org
> Signed-off-by: Anshuman Khandual
Reviewed-by: Vlastimil Babka
Thanks.
On 3/2/20 7:47 AM, Anshuman Khandual wrote:
> There are many places where all basic VMA access flags (read, write, exec)
> are initialized or checked against as a group. One such example is during
> page fault. Existing vma_is_accessible() wrapper already creates the notion
> of VMA accessibility a
On 3/2/20 7:47 AM, Anshuman Khandual wrote:
> There are many platforms with exact same value for VM_DATA_DEFAULT_FLAGS
> This creates a default value for VM_DATA_DEFAULT_FLAGS in line with the
> existing VM_STACK_DEFAULT_FLAGS. While here, also define some more macros
> with standard VMA access fla
On 2/27/20 5:00 PM, Sachin Sant wrote:
>
>
>> On 27-Feb-2020, at 5:42 PM, Michal Hocko wrote:
>>
>> A very good hint indeed. I would do this
>> diff --git a/include/linux/topology.h b/include/linux/topology.h
>> index eb2fe6edd73c..d9f1b6737e4d 100644
>> --- a/include/linux/topology.h
>> +++ b/
On 2/26/20 10:45 PM, Vlastimil Babka wrote:
>
>
> 	if (node == NUMA_NO_NODE)
> 		page = alloc_pages(flags, order);
> 	else
> 		page = __alloc_pages_node(node, flags, order);
>
> So yeah looks like SLUB's kmalloc_node() is supposed to behave like the
> page allo
On 2/26/20 7:41 PM, Michal Hocko wrote:
> On Wed 26-02-20 18:25:28, Christopher Lameter wrote:
>> On Mon, 24 Feb 2020, Michal Hocko wrote:
>>
>>> Hmm, nasty. Is there any reason why kmalloc_node behaves differently
>>> from the page allocator?
>>
>> The page allocator will do the same thing if you p
el.org
> Cc: linux-a...@vger.kernel.org
> Cc: linux...@kvack.org
> Signed-off-by: Anshuman Khandual
Meh, why is there _page in the function's name... but too many users to bother
changing it now, I guess.
Acked-by: Vlastimil Babka
rg
> Acked-by: Geert Uytterhoeven
> Acked-by: Guo Ren
> Signed-off-by: Anshuman Khandual
Acked-by: Vlastimil Babka
: linux-ker...@vger.kernel.org
> Cc: linux...@kvack.org
> Signed-off-by: Anshuman Khandual
Some comment for the function wouldn't hurt, but perhaps it is self-explanatory
enough.
Acked-by: Vlastimil Babka
On 8/20/19 4:30 AM, Christoph Hellwig wrote:
> On Mon, Aug 19, 2019 at 07:46:00PM +0200, David Sterba wrote:
>> Another thing that is lost is the slub debugging support for all
>> architectures, because get_zeroed_pages is lacking the red zones and sanity
>> checks.
>>
>> I find working with raw page