On 10/4/23 5:18 AM, Verma, Vishal L wrote:
> On Tue, 2023-10-03 at 09:34 +0530, Aneesh Kumar K V wrote:
>> On 9/29/23 2:00 AM, Vishal Verma wrote:
>>> Large amounts of memory managed by the kmem driver may come in via CXL,
>>> and it is often desirable to have the memmap
On 9/29/23 2:00 AM, Vishal Verma wrote:
> Large amounts of memory managed by the kmem driver may come in via CXL,
> and it is often desirable to have the memmap for this memory on the new
> memory itself.
>
> Enroll kmem-managed memory for memmap_on_memory semantics if the dax
> region originates
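For reference, a minimal sketch of what such a hot-add path looks like, assuming the add_memory_driver_managed() interface plus an mhp_memmap_on_memory() gating helper; this is an illustration of the idea, not the posted patch:

#include <linux/memory_hotplug.h>
#include <linux/range.h>

/* Hot-add one dax range with its memmap hosted on the new memory
 * itself, so large CXL/NVDIMM-backed ranges do not consume regular
 * RAM for their struct pages. */
static int kmem_add_range(int mgid, struct range *range, const char *res_name)
{
	mhp_t mhp_flags = MHP_NID_IS_MGID;

	if (mhp_memmap_on_memory())	/* assumed gating helper */
		mhp_flags |= MHP_MEMMAP_ON_MEMORY;

	return add_memory_driver_managed(mgid, range->start,
					 range_len(range), res_name,
					 mhp_flags);
}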
Vishal Verma writes:
> The MHP_MEMMAP_ON_MEMORY flag for hotplugged memory is currently
> restricted to 'memblock_size' chunks of memory being added. Adding a
> larger span of memory precludes memmap_on_memory semantics.
>
> For users of hotplug such as kmem, large amounts of memory might get
> a
David Hildenbrand writes:
> On 16.06.23 00:00, Vishal Verma wrote:
>> With DAX memory regions originating from CXL memory expanders or
>> NVDIMMs, the kmem driver may be hot-adding huge amounts of system memory
>> on a system without enough 'regular' main memory to support the memmap
>> for it. T
nt nid, struct resource *res, mhp_t mhp_flags)
>	 * Self hosted memmap array
>	 */
>	if (mhp_flags & MHP_MEMMAP_ON_MEMORY) {
> -		if (!mhp_supports_memmap_on_memory(size)) {
> +		if (!mhp_supports_memmap_on_memory(size,
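For context, a simplified sketch of the single-argument check this diff extends, reconstructed from memory of mm/memory_hotplug.c of that era rather than quoted from it:

static bool mhp_supports_memmap_on_memory(unsigned long size)
{
	unsigned long nr_vmemmap_pages = size / PAGE_SIZE;
	unsigned long vmemmap_size = nr_vmemmap_pages * sizeof(struct page);
	unsigned long remaining_size = size - vmemmap_size;

	/* Only a single memory-block-sized add qualifies; this is the
	 * restriction the series relaxes for larger spans. */
	return mhp_memmap_on_memory() &&
	       size == memory_block_size_bytes() &&
	       IS_ALIGNED(vmemmap_size, PMD_SIZE) &&
	       IS_ALIGNED(remaining_size, pageblock_nr_pages << PAGE_SHIFT);
}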
On Fri, 2022-02-25 at 12:08 +0530, kajoljain wrote:
>
>
> On 2/25/22 11:25, Nageswara Sastry wrote:
> >
> >
> > On 17/02/22 10:03 pm, Kajol Jain wrote:
> > >
> > >
> > > Changelog
> >
> > Tested these patches with the automated tests at
> > avocado-misc-tests/perf/perf_nmem.py
> > URL:
On 4/16/21 2:39 PM, Andy Shevchenko wrote:
On Fri, Apr 16, 2021 at 01:28:21PM +0530, Aneesh Kumar K.V wrote:
On 4/15/21 7:16 PM, Andy Shevchenko wrote:
Parse to and export from UUID's own type before dereferencing.
This also fixes a wrong comment (a little-endian UUID is something else)
and should
,unit-guid as the iset cookie")
Fixes: 259a948c4ba1 ("powerpc/pseries/scm: Use a specific endian format for storing uuid from the device tree")
Cc: Oliver O'Halloran
Cc: Aneesh Kumar K.V
Signed-off-by: Andy Shevchenko
---
Not tested
arch/powerpc/platforms/pseries/papr_scm
namespace before this patch.
Fixes: 43001c52b603 ("powerpc/papr_scm: Use ibm,unit-guid as the iset cookie")
Fixes: 259a948c4ba1 ("powerpc/pseries/scm: Use a specific endian format for storing uuid from the device tree")
Cc: Oliver O'Halloran
Cc: Aneesh Kumar K.V
Signed-off-by: Andy Shevchenko
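A sketch of the direction of the fix, using the kernel's uuid_parse(), export_uuid() and get_unaligned_le64() helpers; the wrapper function below is illustrative, not the exact papr_scm hunk:

#include <linux/errno.h>
#include <linux/uuid.h>
#include <asm/unaligned.h>

/* Parse the textual UUID into a uuid_t first, then read the two
 * 64-bit halves of the raw bytes in an explicit (little-endian)
 * format, instead of dereferencing the string buffer directly. */
static int cookie_from_uuid(const char *uuid_str, u64 *c1, u64 *c2)
{
	uuid_t uuid;
	u8 raw[UUID_SIZE];

	if (uuid_parse(uuid_str, &uuid))
		return -EINVAL;

	export_uuid(raw, &uuid);
	*c1 = get_unaligned_le64(&raw[0]);
	*c2 = get_unaligned_le64(&raw[8]);
	return 0;
}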
Christophe Leroy writes:
> When probe_kernel_read_inst() was created, it was to mimic
> probe_kernel_read() function.
>
> Since then, probe_kernel_read() has been renamed
> copy_from_kernel_nofault().
>
> Rename probe_kernel_read_inst() into copy_from_kernel_nofault_inst().
At first glance I rea
Christophe Leroy writes:
> flush_coherent_icache() can use any valid address as mentioned
> by the comment.
>
> Use PAGE_OFFSET as base address. This allows removing the
> user access stuff.
>
> Signed-off-by: Christophe Leroy
> ---
> arch/powerpc/mm/mem.c | 13 +
> 1 file changed,
Tyler Hicks writes:
> The alignment constraint for namespace creation in a region was
> increased, from 2M to 16M, for non-PowerPC architectures in v5.7 with
> commit 2522afb86a8c ("libnvdimm/region: Introduce an 'align'
> attribute"). The thought behind the change was that region alignment
> sho
On 2/12/21 8:45 PM, Jens Axboe wrote:
On 2/11/21 11:59 PM, Aneesh Kumar K.V wrote:
Hi,
I am trying to establish the behaviour we should expect when passing a
buffer with memory keys attached to io_uring syscalls. As shown in the
test below
/*
* gcc -Wall -O2 -D_GNU_SOURCE -o pkey_uring
Hi,
I am trying to establish the behaviour we should expect when passing a
buffer with memory keys attached to io_uring syscalls. As shown in the
test below
/*
* gcc -Wall -O2 -D_GNU_SOURCE -o pkey_uring pkey_uring.c -luring
*/
#include
#include
#include
#include
#include
#include
#incl
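A self-contained sketch of the experiment being described, assuming liburing and glibc's pkey_alloc()/pkey_mprotect() wrappers; the file read and the buffer size are arbitrary choices:

/* gcc -Wall -O2 -D_GNU_SOURCE -o pkey_uring_sketch pkey_uring_sketch.c -luring */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	size_t len = 4096;
	int fd, pkey, ret;
	void *buf;

	fd = open("/etc/hostname", O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* Attach a pkey that denies all userspace access to the buffer. */
	pkey = pkey_alloc(0, PKEY_DISABLE_ACCESS);
	if (pkey < 0) {
		perror("pkey_alloc");
		return 1;
	}
	if (pkey_mprotect(buf, len, PROT_READ | PROT_WRITE, pkey)) {
		perror("pkey_mprotect");
		return 1;
	}
	ret = io_uring_queue_init(8, &ring, 0);
	if (ret < 0) {
		fprintf(stderr, "queue_init: %s\n", strerror(-ret));
		return 1;
	}
	/* The kernel, not this thread, copies into the pkey-protected
	 * buffer; whether that read succeeds or fails is exactly the
	 * semantic question raised above. */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, fd, buf, len, 0);
	io_uring_submit(&ring);
	ret = io_uring_wait_cqe(&ring, &cqe);
	if (!ret) {
		printf("io_uring read returned %d\n", cqe->res);
		io_uring_cqe_seen(&ring, cqe);
	}
	io_uring_queue_exit(&ring);
	return 0;
}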
returning true when [start;end[ is not fully
> contained inside [floor;ceiling[
>
Reviewed-by: Aneesh Kumar K.V
> Signed-off-by: Christophe Leroy
> ---
> arch/powerpc/mm/hugetlbpage.c | 56 ---
> 1 file changed, 19 insertions(+), 37 deletions(-
Christophe Leroy writes:
> search_exception_tables() is a heavy operation; we have to avoid it.
> When KUAP is selected, we'll know the fault has been blocked by KUAP.
> Otherwise, it behaves just as if the address was already in the TLBs
> and no fault was generated.
>
> Signed-off-by: Christop
Christophe Leroy writes:
> On 08/12/2020 at 14:00, Aneesh Kumar K.V wrote:
>> On 12/8/20 2:07 PM, Christophe Leroy wrote:
>>> search_exception_tables() is a heavy operation; we have to avoid it.
>>> When KUAP is selected, we'll know the fault has been bl
On 12/8/20 2:07 PM, Christophe Leroy wrote:
search_exception_tables() is a heavy operation; we have to avoid it.
When KUAP is selected, we'll know the fault has been blocked by KUAP.
Otherwise, it behaves just as if the address was already in the TLBs
and no fault was generated.
Signed-off-by:
Christophe Leroy writes:
> On 12/10/2020 at 17:39, Christophe Leroy wrote:
>> On the same principle as commit 773edeadf672 ("powerpc/mm: Add mask
>> of possible MMU features"), add mask for MMU features that are
>> always there in order to optimise out dead branches.
>>
>> Signed-off-by: Chris
Hi Michal,
On 10/15/20 8:16 PM, Michal Suchánek wrote:
Hello,
On Thu, Feb 06, 2020 at 12:25:18AM -0300, Leonardo Bras wrote:
On Thu, 2020-02-06 at 00:08 -0300, Leonardo Bras wrote:
gup_pgd_range(addr, end, gup_flags, pages, &nr);
- local_irq_enable();
+
On 10/13/20 3:45 PM, Michael Ellerman wrote:
Christophe Leroy writes:
On 13/10/2020 at 09:23, Aneesh Kumar K.V wrote:
Christophe Leroy writes:
CPU_FTR_NODSISRALIGN has not been used since
commit 31bfdb036f12 ("powerpc: Use instruction emulation
infrastructure to handle alignment f
Christophe Leroy writes:
> CPU_FTR_NODSISRALIGN has not been used since
> commit 31bfdb036f12 ("powerpc: Use instruction emulation
> infrastructure to handle alignment faults")
>
> Remove it.
>
> Signed-off-by: Christophe Leroy
> ---
> arch/powerpc/include/asm/cputable.h | 22 ++
On 10/8/20 10:32 PM, Linus Torvalds wrote:
On Thu, Oct 8, 2020 at 2:27 AM Aneesh Kumar K.V
wrote:
In copy_present_page, after we mark the pte non-writable, we should
check for previous dirty bit updates and make sure we don't lose the dirty
bit on reset.
No, we'll just remove tha
Cc: John Hubbard
Cc: linux...@kvack.org
Cc: linux-kernel@vger.kernel.org
Cc: Andrew Morton
Cc: Jan Kara
Cc: Michal Hocko
Cc: Kirill Shutemov
Cc: Hugh Dickins
Cc: Linus Torvalds
Signed-off-by: Aneesh Kumar K.V
---
mm/memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
Cc: Jason Gunthorpe
Cc: John Hubbard
Cc: linux...@kvack.org
Cc: linux-kernel@vger.kernel.org
Cc: Andrew Morton
Cc: Jan Kara
Cc: Michal Hocko
Cc: Kirill Shutemov
Cc: Hugh Dickins
Cc: Linus Torvalds
Signed-off-by: Aneesh Kumar K.V
---
mm/memory.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git
On 9/22/20 2:22 PM, Anshuman Khandual wrote:
On 09/22/2020 09:33 AM, Aneesh Kumar K.V wrote:
On 9/21/20 2:51 PM, kernel test robot wrote:
Greeting,
FYI, we noticed the following commit (built with gcc-9):
commit: e2aad6f1d232b457ea6a3194992dd4c0a83534a5 ("mm/debug_vm_pgtable/locks:
On 9/21/20 2:51 PM, kernel test robot wrote:
Greeting,
FYI, we noticed the following commit (built with gcc-9):
commit: e2aad6f1d232b457ea6a3194992dd4c0a83534a5 ("mm/debug_vm_pgtable/locks: take
correct page table lock")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
BUG:sleeping_function_called_from_invalid_context_at_mm/page_alloc.c | 0
> | 10 |
> +--+++
>
>
> If you fix the issue, kindly add following tag
> Reported-by: kernel test robot
>
How about this?
From a654324a2d09c61b9fb271b550f543ef7b09a
On 9/2/20 1:41 PM, Christophe Leroy wrote:
Le 02/09/2020 à 05:23, Aneesh Kumar K.V a écrit :
Christophe Leroy writes:
The following random segfault is observed from time to time with
map_hugetlb selftest:
root@localhost:~# ./map_hugetlb 1 19
524288 kB hugepages
Mapping 1 Mbytes
not be done in hugetlb_free_pgd_range(), it
> must be done in hugetlb_free_pte_range().
>
Reviewed-by: Aneesh Kumar K.V
> Fixes: b250c8c08c79 ("powerpc/8xx: Manage 512k huge pages as standard pages.")
> Cc: sta...@vger.kernel.org
> Signed-off-by: Christophe Leroy
>
Peter Zijlstra writes:
> For SMP systems using IPI based TLB invalidation, looking at
> current->active_mm is entirely reasonable. This then presents the
> following race condition:
>
>
> 	CPU0			CPU1
>
> 	flush_tlb_mm(mm)	use_mm(mm)
>
> tsk-
Mike Kravetz writes:
> On 7/19/20 11:22 PM, Anshuman Khandual wrote:
>>
>>
>> On 07/17/2020 10:32 PM, Mike Kravetz wrote:
>>> On 7/16/20 10:02 PM, Anshuman Khandual wrote:
On 07/16/2020 11:55 PM, Mike Kravetz wrote:
> >From 17c8f37afbf42fe7412e6eebb3619c6e0b7e1c3c Mon Sep 17
Vlastimil Babka writes:
> On 7/8/20 9:41 AM, Michal Hocko wrote:
>> On Wed 08-07-20 16:16:02, Joonsoo Kim wrote:
>>> On Tue, Jul 07, 2020 at 01:22:31PM +0200, Vlastimil Babka wrote:
>>>
>>> Simply, I call memalloc_nocma_{save,restore} in new_non_cma_page(). It
>>> would not cause any problem.
>>
On 6/25/20 10:16 PM, Mike Kravetz wrote:
On 6/25/20 5:01 AM, Aneesh Kumar K.V wrote:
Mike Kravetz writes:
On 6/24/20 2:26 AM, Bibo Mao wrote:
When set_pmd_at is called in function do_huge_pmd_anonymous_page,
new tlb entry can be added by software on MIPS platform.
Here add
Mike Kravetz writes:
> On 6/24/20 2:26 AM, Bibo Mao wrote:
>> When set_pmd_at is called in function do_huge_pmd_anonymous_page,
>> new tlb entry can be added by software on MIPS platform.
>>
>> Here add update_mmu_cache_pmd when pmd entry is set, and
>> update_mmu_cache_pmd is defined as empty e
'struct papr_scm_priv.health'
> that's an instance of 'struct nd_papr_pdsm_health' to cache the health
> information of a nvdimm. As a result functions drc_pmem_query_health()
> and flags_show() are updated to populate and use this new struct
> instead of a u64 integer t
and performs sanity tests on them. A new function
> papr_scm_service_pdsm() is introduced and is called from
> papr_scm_ndctl() when a PDSM request is received via the ND_CMD_CALL
> command from libnvdimm.
>
Reviewed-by: Aneesh Kumar K.V
> Cc: "Aneesh Kumar K . V"
> Cc: Dan Williams
> Cc: Michael Ellerman
> Cc: Ira Weiny
> Signed-off-by: Vaibhav Jain
> ---
-aneesh
y
> the new sysfs attribute 'papr/flags' is also introduced at
> Documentation/ABI/testing/sysfs-bus-papr-scm.
>
> [1] commit 58b278f568f0 ("powerpc: Provide initial documentation for
> PAPR hcalls")
>
Reviewed-by: Aneesh Kumar K.V
> Cc: "Anee
Vaibhav Jain writes:
+
> +/* Papr-scm-header + payload expected with ND_CMD_CALL ioctl from libnvdimm */
> +struct nd_pdsm_cmd_pkg {
> + struct nd_cmd_pkg hdr; /* Package header containing sub-cmd */
> + __s32 cmd_status; /* Out: Sub-cmd status returned back */
> + __
On 10/14/19 7:22 PM, Kirill A. Shutemov wrote:
On Sun, Oct 13, 2019 at 11:43:10PM -0700, John Hubbard wrote:
On 10/13/19 11:12 PM, kbuild test robot wrote:
Hi John,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on linus/master]
[cannot apply to v5.4-rc3 next-201910
On 10/4/19 2:33 PM, David Hildenbrand wrote:
On 04.10.19 11:00, David Hildenbrand wrote:
On 03.10.19 18:48, Aneesh Kumar K.V wrote:
On 10/1/19 8:33 PM, David Hildenbrand wrote:
On 01.10.19 16:57, David Hildenbrand wrote:
On 01.10.19 16:40, David Hildenbrand wrote:
From: "Aneesh Kuma
On 10/1/19 8:33 PM, David Hildenbrand wrote:
On 01.10.19 16:57, David Hildenbrand wrote:
On 01.10.19 16:40, David Hildenbrand wrote:
From: "Aneesh Kumar K.V"
With altmap, all the resource pfns are not initialized. While initializing
pfn, altmap reserve space is skipped. Hence whe
>
> Cc: Andrew Morton
> Cc: Oscar Salvador
> Cc: David Hildenbrand
> Cc: Michal Hocko
> Cc: Pavel Tatashin
> Cc: Dan Williams
> Fixes: d0dc12e86b31 ("mm/memory_hotplug: optimize memory hotplug")
> Reported-by: Aneesh Kumar K.V
> Signed-off-by: David Hil
David Hildenbrand writes:
> @@ -134,11 +134,12 @@ void memunmap_pages(struct dev_pagemap *pgmap)
> 	mem_hotplug_begin();
> +	remove_pfn_range_from_zone(page_zone(pfn_to_page(pfn)), pfn,
> +				   PHYS_PFN(resource_size(res)));
That should be part of PATCH 3?
>
On 9/20/19 9:21 PM, Qiujun Huang wrote:
__get_user_pages_fast tries to walk the page table, but the
hugepage pte is replaced by a hwpoison swap entry via the MCA path.
...
Can you describe this in more detail? I guess you are facing the issue
with respect to a PUD-level PTE entry that got updated by hwpoison
tch/11133445/
> [2] https://raw.githubusercontent.com/cailca/linux-mm/master/powerpc.config
Sorry for breaking the build. How about?
commit ea15fd8b5489e2c8e9f1b96d67248a7428ffb74a
Author: Aneesh Kumar K.V
Date: Fri Sep 20 19:47:56 2019 +0530
powerpc/book3s/nvdimm: Fix build error with n
On 9/18/19 5:01 PM, Michael Ellerman wrote:
"Naveen N. Rao" writes:
Michael Ellerman wrote:
"Gautham R. Shenoy" writes:
From: "Gautham R. Shenoy"
Also, since we expose [S]PURR through sysfs, any tools that make use of
that directly are also affected due to this.
But again if we
to lpar.c
you can use
Reviewed-by: Aneesh Kumar K.V
for the series.
-aneesh
On 9/17/19 2:25 AM, Leonardo Bras wrote:
If a process (qemu) with a lot of CPUs (128) tries to munmap() a large chunk
of memory (496GB) mapped with THP, it takes an average of 275 seconds,
which can cause a lot of problems to the load (in qemu case, the guest
will lock for this time).
Trying to fi
On 9/12/19 12:13 AM, Dan Carpenter wrote:
On Wed, Sep 11, 2019 at 08:48:59AM -0700, Dan Williams wrote:
+Coding Style Addendum
+---------------------
+libnvdimm expects multi-line statements to be double indented. I.e.
+
+	if (x...
+			&& ...y) {
That looks horrible
On 9/13/19 12:56 AM, Laurent Dufour wrote:
On 12/09/2019 at 16:44, Aneesh Kumar K.V wrote:
Laurent Dufour writes:
+
+	idx = 2;
+	while (idx < len) {
+		unsigned int block_size = local_buffer[idx++];
+		unsigned int npsize;
+
+		if (!block_size)
+			break;
Christophe Leroy writes:
> Add support for GENERIC_EARLY_IOREMAP.
>
> Let's define 16 slots of 256Kbytes each for early ioremap.
>
> Signed-off-by: Christophe Leroy
> ---
> arch/powerpc/Kconfig | 1 +
> arch/powerpc/include/asm/Kbuild | 1 +
> arch/powerpc/include/asm/fixmap.h
Laurent Dufour writes:
> The PAPR document specifies the TLB Block Invalidate Characteristics which
> indicates which (base page size, page size) pairs are supported by the
> H_BLOCK_REMOVE hcall.
>
> A new set of features is added to the mmu_psize_def structure to record per
> base page size whic
On 8/30/19 5:37 PM, Laurent Dufour wrote:
Instead of calling H_BLOCK_REMOVE all the time when the feature is
exhibited, call this hcall only when the (base page size, page size) pair
is supported, as reported by the TLB Invalidate Characteristics.
supported is not actually what we are checking
On 8/30/19 5:37 PM, Laurent Dufour wrote:
The PAPR document specifies the TLB Block Invalidate Characteristics which
indicates which (base page size, page size) pairs are supported by the
H_BLOCK_REMOVE hcall.
A new set of features is added to the mmu_psize_def structure to record per
base page s
On 8/30/19 5:37 PM, Laurent Dufour wrote:
Since the commit ba2dd8a26baa ("powerpc/pseries/mm: call H_BLOCK_REMOVE"),
the call to H_BLOCK_REMOVE is always done if the feature is exhibited.
On some systems, the hypervisor may not support all the combinations of
segment base page size and page size.
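The shape of the resulting check, as a sketch; the field and helper names below are hypothetical stand-ins (only mmu_psize_defs itself is real):

/* Hypothetical names throughout: consult a per-(base page size,
 * actual page size) capability, recorded from the PAPR "TLB Block
 * Invalidate Characteristics", before preferring H_BLOCK_REMOVE. */
static void flush_hpte_range(unsigned long vpn, int npages,
			     int bpsize, int apsize)
{
	if (mmu_psize_defs[bpsize].hblk_supported & (1U << apsize))
		do_block_remove(vpn, npages, bpsize, apsize);	/* hypothetical */
	else
		do_bulk_remove(vpn, npages, bpsize, apsize);	/* hypothetical */
}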
On 8/30/19 5:37 PM, Laurent Dufour wrote:
Before reading the HPTE encoding values we initialize all of them to -1 (an
invalid value) so we can later detect the initialized ones.
Signed-off-by: Laurent Dufour
---
arch/powerpc/mm/book3s64/hash_utils.c | 8 ++--
1 file changed, 6 ins
Oscar Salvador writes:
> On Mon, 2019-07-15 at 21:41 +0530, Aneesh Kumar K.V wrote:
>> Oscar Salvador writes:
>>
>> > Since [1], shrink_{zone,node}_span work on PAGES_PER_SUBSECTION
>> > granularity.
>> > The problem is that deac
Oscar Salvador writes:
> Since [1], shrink_{zone,node}_span work on PAGES_PER_SUBSECTION granularity.
> The problem is that deactivation of the section occurs later on in
> sparse_remove_section, so pfn_valid()->pfn_section_valid() will always return
> true before we deactivate the {sub}section.
The problem is that we zero section_mem_map, so the last early_section()
> will always report false and the section will not be removed.
>
> Fix this by checking whether a section is early or not at function
> entry.
>
Reviewed-by: Aneesh Kumar K.V
> Fixes: mmotm ("mm/sparsemem: Support sub
On 7/9/19 7:50 AM, Oliver O'Halloran wrote:
On Tue, Jul 9, 2019 at 12:22 AM Aneesh Kumar K.V
wrote:
Christophe Leroy writes:
*snip*
+	if (IS_ENABLED(CONFIG_PPC64))
+		isync();
}
Was checking with Michael about why we need that extra isync. Michael
pointed this cam
Christophe Leroy writes:
> This patch drops the assembly PPC64 version of flush_dcache_range()
> and re-uses the PPC32 static inline version.
>
> With GCC 8.1, the following code is generated:
>
> void flush_test(unsigned long start, unsigned long stop)
> {
> flush_dcache_range(start, stop)
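For reference, a simplified sketch of what the shared static inline looks like after the series (reconstructed from memory, not a verbatim copy of arch/powerpc/include/asm/cacheflush.h):

static inline void flush_dcache_range(unsigned long start, unsigned long stop)
{
	unsigned long shift = l1_dcache_shift();
	unsigned long bytes = l1_dcache_bytes();
	void *addr = (void *)(start & ~(bytes - 1));
	unsigned long size = stop - (unsigned long)addr + (bytes - 1);
	unsigned long i;

	if (IS_ENABLED(CONFIG_PPC64))
		mb();	/* sync: order prior stores before the flush */

	for (i = 0; i < size >> shift; i++, addr += bytes)
		dcbf(addr);	/* data cache block flush */

	mb();	/* sync: ensure the flushes complete */
}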
some reviewed-bys
>
> [1]:
> https://lore.kernel.org/lkml/155977186863.2443951.9036044808311959913.st...@dwillia2-desk3.amr.corp.intel.com/
You can add Tested-by: Aneesh Kumar K.V
for ppc64.
BTW even after this series we have the kernel crash mentioned in the
below email on reconfig
Dan Williams writes:
> At namespace creation time there is the potential for the "expected to
> be zero" fields of a 'pfn' info-block to be filled with indeterminate
> data. While the kernel buffer is zeroed on allocation it is immediately
> overwritten by nd_pfn_validate() filling it with the cu
Dan Williams writes:
> Teach devm_memremap_pages() about the new sub-section capabilities of
> arch_{add,remove}_memory(). Effectively, just replace all usage of
> align_start, align_end, and align_size with res->start, res->end, and
> resource_size(res). The existing sanity check will still make
Dan Williams writes:
> Allow sub-section sized ranges to be added to the memmap.
> populate_section_memmap() takes an explicit pfn range rather than
> assuming a full section, and those parameters are plumbed all the way
> through to vmemmap_populate(). There should be no sub-section usage in
> cu
Dan Williams writes:
> On Fri, Jun 14, 2019 at 9:18 AM Aneesh Kumar K.V
> wrote:
>>
>> On 6/14/19 9:05 PM, Oscar Salvador wrote:
>> > On Fri, Jun 14, 2019 at 02:28:40PM +0530, Aneesh Kumar K.V wrote:
>> >> Can you check with this change on ppc64. I
On 6/14/19 10:38 PM, Jeff Moyer wrote:
"Aneesh Kumar K.V" writes:
On 6/14/19 10:06 PM, Dan Williams wrote:
On Fri, Jun 14, 2019 at 9:26 AM Aneesh Kumar K.V
wrote:
Why not let the arch decide the SUBSECTION_SHIFT and default to one subsection per
section if arch is not
On 6/14/19 10:06 PM, Dan Williams wrote:
On Fri, Jun 14, 2019 at 9:26 AM Aneesh Kumar K.V
wrote:
Why not let the arch decide the SUBSECTION_SHIFT and default to one subsection per
section if arch is not enabled to work with subsection.
Because that keeps the implementation from ever
On 6/14/19 10:06 PM, Dan Williams wrote:
On Fri, Jun 14, 2019 at 9:26 AM Aneesh Kumar K.V
wrote:
On 6/14/19 9:52 PM, Dan Williams wrote:
On Fri, Jun 14, 2019 at 9:18 AM Aneesh Kumar K.V
wrote:
On 6/14/19 9:05 PM, Oscar Salvador wrote:
On Fri, Jun 14, 2019 at 02:28:40PM +0530, Aneesh
On 6/14/19 9:52 PM, Dan Williams wrote:
On Fri, Jun 14, 2019 at 9:18 AM Aneesh Kumar K.V
wrote:
On 6/14/19 9:05 PM, Oscar Salvador wrote:
On Fri, Jun 14, 2019 at 02:28:40PM +0530, Aneesh Kumar K.V wrote:
Can you check with this change on ppc64. I haven't reviewed this series yet.
On 6/14/19 9:05 PM, Oscar Salvador wrote:
On Fri, Jun 14, 2019 at 02:28:40PM +0530, Aneesh Kumar K.V wrote:
Can you check with this change on ppc64. I haven't reviewed this series yet.
I did limited testing with the change. Before merging this I need to go
through the full series again
Qian Cai writes:
> 1) offline is busted [1]. It looks like test_pages_in_a_zone() missed the same
> pfn_section_valid() check.
>
> 2) powerpc booting is generating endless warnings [2]. In vmemmap_populated()
> at
> arch/powerpc/mm/init_64.c, I tried to change PAGES_PER_SECTION to
> PAGES_PER_S
Pingfan Liu writes:
> As for FOLL_LONGTERM, it is checked in the slow path
> __gup_longterm_unlocked(). But it is not checked in the fast path, which
> means a possible leak of CMA pages to the long-term pinning requirement through
> this crack.
Shouldn't we disallow FOLL_LONGTERM with get_user_pages f
Two architectures that use arch-specific MMAP flags are powerpc and sparc.
We still have a few flag values common across them and other architectures.
Consolidate this in mman-common.h.
Also update the comment to indicate where to find HugeTLB specific reserved
values
Signed-off-by: Aneesh Kumar
On 6/4/19 12:56 PM, Pingfan Liu wrote:
The PF_MEMALLOC_NOCMA is set by memalloc_nocma_save(), which is finally
cast to ~__GFP_MOVABLE. So __get_user_pages_locked() will get pages from
non-CMA areas and pin them. There is no need to call
check_and_migrate_cma_pages().
That is not completely correct.
ch/powerpc/include/uapi/asm/mman.h, I am moving the #define to
asm-generic/mman-common.h. Two architectures using mman-common.h directly are
sparc and powerpc. We should be able to consolidate more #defines to
mman-common.h. That can be done as a separate patch.
Signed-off-by: Aneesh Kumar K.V
-
On 5/20/19 8:25 PM, Nicholas Piggin wrote:
Bharata B Rao's on May 21, 2019 12:29 am:
On Mon, May 20, 2019 at 01:50:35PM +0530, Bharata B Rao wrote:
On Mon, May 20, 2019 at 05:00:21PM +1000, Nicholas Piggin wrote:
Bharata B Rao's on May 20, 2019 3:56 pm:
On Mon, May 20, 2019 at 02:48:35PM +100
x1f0
> ? __switch_to_asm+0x40/0x70
> __handle_mm_fault+0x3f6/0x1370
> ? __switch_to_asm+0x34/0x70
> ? __switch_to_asm+0x40/0x70
> handle_mm_fault+0xda/0x200
> __do_page_fault+0x249/0x4f0
> do_page_fault+0x32/0x110
> ? page_fault+0x8/0x30
> page
Christophe Leroy writes:
> Now that slice_mask_for_size() is in mmu.h, the mm_ctx_slice_mask_xxx()
> are not needed anymore, so drop them. Note that the 8xx ones where
> not used anyway.
>
Reviewed-by: Aneesh Kumar K.V
> Signed-off-by: Christophe Leroy
> ---
> arc
2 deletions(-)
> delete mode 100644 arch/powerpc/include/asm/nohash/32/mmu.h
> delete mode 100644 arch/powerpc/include/asm/nohash/64/mmu.h
>
> --
> 2.13.3
Looks good. You can add for the series
Reviewed-by: Aneesh Kumar K.V
mm/debug.c: In function ‘dump_mm’:
include/linux/kern_levels.h:5:18: warning: format ‘%llx’ expects argument of
type ‘long long unsigned int’, but argument 19 has type ‘long int’ [-Wformat=]
~~~^
Signed-off-by: Aneesh Kumar K.V
---
mm/debug.c | 2 +-
1 file changed, 1 insertion
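The warning is a plain format/type mismatch; the general shape of the fix is to make the specifier and the argument agree (an illustrative fragment, not the actual dump_mm hunk):

	long nr;	/* stands in for argument 19, which is a 'long int' */

	pr_emerg("... %lx\n", nr);		/* match the format to the type */
	pr_emerg("... %llx\n", (long long)nr);	/* or cast the argument up */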
Andrew Morton writes:
> On Thu, 21 Mar 2019 09:36:10 +0530 "Aneesh Kumar K.V"
> wrote:
>
>> MADV_DONTNEED is handled with mmap_sem taken in read mode.
>> We call page_mkclean without holding mmap_sem.
>>
>> MADV_DONTNEED implies that pages in the r
one. Avoid doing
that while marking the page clean.
Keep the sequence the same for dax too, even though we don't support MADV_DONTNEED
for dax mappings
Signed-off-by: Aneesh Kumar K.V
---
fs/dax.c | 2 +-
mm/rmap.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/dax.c b
On 3/6/19 5:14 PM, Michal Suchánek wrote:
On Wed, 06 Mar 2019 14:47:33 +0530
"Aneesh Kumar K.V" wrote:
Dan Williams writes:
On Thu, Feb 28, 2019 at 1:40 AM Oliver wrote:
On Thu, Feb 28, 2019 at 7:35 PM Aneesh Kumar K.V
wrote:
Also even if the user decided to not use TH
Dan Williams writes:
> On Thu, Feb 28, 2019 at 1:40 AM Oliver wrote:
>>
>> On Thu, Feb 28, 2019 at 7:35 PM Aneesh Kumar K.V
>> wrote:
>> >
>> > Add a flag to indicate the ability to do huge page dax mapping. On
>> > architecture
>> > li
On 2/28/19 3:10 PM, Oliver wrote:
On Thu, Feb 28, 2019 at 7:35 PM Aneesh Kumar K.V
wrote:
Add a flag to indicate the ability to do huge page dax mapping. On architectures
like ppc64, the hypervisor can disable huge page support in the guest. In
such a case, we should not enable huge page dax
On 2/28/19 2:51 PM, Jan Kara wrote:
On Thu 28-02-19 14:05:21, Aneesh Kumar K.V wrote:
Architectures like ppc64 use the deposited page table to store hardware
page table slot information. Make sure we deposit a page table when
using zero page at the pmd level for hash.
Without this we hit
On 2/28/19 3:10 PM, Jan Kara wrote:
On Thu 28-02-19 14:05:22, Aneesh Kumar K.V wrote:
Add a flag to indicate the ability to do huge page dax mapping. On architectures
like ppc64, the hypervisor can disable huge page support in the guest. In
such a case, we should not enable huge page dax mapping
+0x2c/0x50 [nd_pmem]
dax_copy_from_iter+0x40/0x70
dax_iomap_actor+0x134/0x360
iomap_apply+0xfc/0x1b0
dax_iomap_rw+0xac/0x130
ext4_file_write_iter+0x254/0x460 [ext4]
__vfs_write+0x120/0x1e0
vfs_write+0xd8/0x220
SyS_write+0x6c/0x110
system_call+0x3c/0x130
Signed-off-by: Aneesh Kumar K.V
or -4
NOTE: The patch also uses
echo never > /sys/kernel/mm/transparent_hugepage/enabled
to disable dax huge page mapping.
Signed-off-by: Aneesh Kumar K.V
---
TODO:
* Add Fixes: tag
include/linux/huge_mm.h | 4 +++-
mm/huge_memory.c| 4
2 files changed, 7 insertions(+), 1 deletion(
because we access hpas in real mode
and we can't do that struct page * to pfn conversion in real mode.
Reviewed-by: Michael Ellerman
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/mmu_context_iommu.c | 125 +---
1 file changed, 38 insertions(+), 87 deletions(-)
di
PF_MEMALLOC_NOCMA ensures
that we avoid unnecessary page migration later.
Suggested-by: Andrea Arcangeli
Reviewed-by: Andrea Arcangeli
Signed-off-by: Aneesh Kumar K.V
---
include/linux/sched.h | 1 +
include/linux/sched/mm.h | 48 +---
2 files changed, 41 insertions
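A sketch of the save/restore pairing the series adds, assuming the memalloc_nocma_save()/memalloc_nocma_restore() helpers introduced here:

#include <linux/sched/mm.h>

/* Allocations made between save/restore skip CMA, so pages that
 * will be long-term pinned never land in a CMA region and need no
 * migration later. */
static long longterm_pin_example(void)
{
	unsigned int flags = memalloc_nocma_save();
	long rc = 0;

	/* ... fault in / allocate the pages to be pinned here ... */

	memalloc_nocma_restore(flags);
	return rc;
}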
memory is backed by CMA
region, it becomes unmovable resulting in fragmenting the CMA and possibly
preventing other guests from allocating a large enough hash page table.
NOTE: We allocate the new page without using __GFP_THISNODE
Signed-off-by: Aneesh Kumar K.V
---
include/linux/hugetlb.h | 2
.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/mmu_context_iommu.c | 24 +++-
1 file changed, 7 insertions(+), 17 deletions(-)
diff --git a/arch/powerpc/mm/mmu_context_iommu.c b/arch/powerpc/mm/mmu_context_iommu.c
index 85b4e9f5c615..e7a9c4f6bfca 100644
--- a/arch
* Move the hugetlb check before transhuge check
* Use compound head page when isolating hugetlb page
*** BLURB HERE ***
Aneesh Kumar K.V (4):
mm/cma: Add PF flag to force non cma alloc
mm: Update get_user_pages_longterm to migrate pages allocated from CMA
region
powerpc/mm/iommu: Allow mi
Andrew Morton writes:
> [patch 1/4]: OK. I guess. Was this worth consuming our last PF_ flag?
That was done based on a request from Andrea, and it also helps by avoiding
allocating pages from the CMA region when we know we are going to
migrate them out anyway. So yes, this helps.
> [patch 2/4]: un