Donet Tom writes:
> A vmemmap altmap is a device-provided region used to provide
> backing storage for struct pages. For each namespace, the altmap
> should belong to that same namespace. If the namespaces are
> created unaligned, there is a chance that the section vmemmap
> start address could a
Donet Tom writes:
> On 3/3/25 18:32, Aneesh Kumar K.V wrote:
>> Donet Tom writes:
>>
>>> A vmemmap altmap is a device-provided region used to provide
>>> backing storage for struct pages. For each namespace, the altmap
>>> should belong to that same na
Dave Hansen writes:
> On 9/11/24 08:01, Kevin Brodsky wrote:
>> On 22/08/2024 17:10, Joey Gouly wrote:
>>> @@ -371,6 +382,9 @@ int copy_thread(struct task_struct *p, const struct
>>> kernel_clone_args *args)
>>> if (system_supports_tpidr2())
>>> p->thread.tpidr2_e
Sean Christopherson writes:
> On Thu, Aug 01, 2024, Aneesh Kumar K.V wrote:
>> Sean Christopherson writes:
>>
>> > Disallow copying MTE tags to guest memory while KVM is dirty logging, as
>> > writing guest memory without marking the gfn as dirty in the memsl
Sean Christopherson writes:
> Disallow copying MTE tags to guest memory while KVM is dirty logging, as
> writing guest memory without marking the gfn as dirty in the memslot could
> result in userspace failing to migrate the updated page. Ideally (maybe?),
> KVM would simply mark the gfn as dirt
Michael Ellerman writes:
> Aneesh is stepping down from powerpc maintenance.
>
> Signed-off-by: Michael Ellerman
Acked-by: Aneesh Kumar K.V (Arm)
> ---
> MAINTAINERS | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 7c1214
Michael Ellerman writes:
> Aneesh's IBM address no longer works, switch to his preferred kernel.org
> address.
>
> Signed-off-by: Michael Ellerman
Acked-by: Aneesh Kumar K.V (Arm)
> ---
> MAINTAINERS | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
s will see the new aligned value of the memory limit.
Signed-off-by: Aneesh Kumar K.V (IBM)
---
arch/powerpc/kernel/prom.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 7451bedad1f4..b8f764453eaa 100644
.
Cc: Mahesh Salgaonkar
Signed-off-by: Aneesh Kumar K.V (IBM)
---
arch/powerpc/kernel/fadump.c | 16
1 file changed, 16 deletions(-)
diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index d14eda1e8589..4e768d93c6d4 100644
--- a/arch/powerpc/kernel
. This alignment value will work for both
hash and radix translations.
Signed-off-by: Aneesh Kumar K.V (IBM)
---
arch/powerpc/kernel/prom.c | 7 +--
arch/powerpc/kernel/prom_init.c | 4 ++--
2 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kernel/prom.c b/arch
PAGE_ALIGN(memparse(p, &p));
> +#endif
> DBG("memory limit = 0x%llx\n", memory_limit);
>
> return 0;
> --
> 2.43.0
Can you try this change?
commit bc55e1aa71f545cff31e1eccdb4a2e39df84
Author: Aneesh Kumar K.V (IBM)
Date: Fri Mar 8 14:45:26 2024 +053
On 3/7/24 5:13 PM, Michael Ellerman wrote:
> Hi Mahesh,
>
> Mahesh Salgaonkar writes:
>> nmi_enter()/nmi_exit() touches per cpu variables which can lead to kernel
>> crash when invoked during real mode interrupt handling (e.g. early HMI/MCE
>> interrupt handler) if percpu allocation comes from vm
On 3/2/24 4:53 AM, Michael Ellerman wrote:
> Hi Joel,
>
> Joel Savitz writes:
>> On 64-bit powerpc, usage of a non-16MB-aligned value for the mem= kernel
>> cmdline parameter results in a system hang at boot.
>
> Can you give us any more details on that? It might be a bug we can fix.
>
>> For e
Michael Ellerman writes:
> Kunwu Chan writes:
>> Thanks for the reply.
>>
>> On 2024/2/26 18:49, Michael Ellerman wrote:
>>> Kunwu Chan writes:
This part has been commented out since commit 6d492ecc6489
("powerpc/THP: Add code to handle HPTE faults for hugepages"),
about 11 years ago.
On 2/20/24 8:16 AM, Andrew Morton wrote:
> On Mon, 29 Jan 2024 13:43:39 +0530 "Aneesh Kumar K.V"
> wrote:
>
>>> return (pud_val(pud) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;
>>> }
>>> #endif
>>>
>>> #ifdef CONFIG_HAVE_
00 HSRR1
> CFAR
> LPCR 00020400
> PTCR DAR DSISR
>
> kernel:trap=0xffea | pc=0x100 | msr=0x1000
>
> This patch updates kvmppc_set_arch_compat() to use the host PVR valu
Madhavan Srinivasan writes:
> reg.h is updated with Power11 pvr. pvr_mask value of 0x0F07
> means we are arch v3.1 compliant.
>
If it is called arch v3.1, it will conflict with:
#define PVR_ARCH_31 0x0f06
>This is used by phyp and
> kvm when booting as a pseries guest to detect a
On 1/29/24 12:23 PM, Anshuman Khandual wrote:
>
>
> On 1/29/24 11:56, Aneesh Kumar K.V wrote:
>> On 1/29/24 11:52 AM, Anshuman Khandual wrote:
>>>
>>>
>>> On 1/29/24 11:30, Aneesh Kumar K.V (IBM) wrote:
>>>> Architectures like powerpc add d
On 1/29/24 11:52 AM, Anshuman Khandual wrote:
>
>
> On 1/29/24 11:30, Aneesh Kumar K.V (IBM) wrote:
>> Architectures like powerpc add debug checks to ensure we find only devmap
>> PUD pte entries. These debug checks are only done with CONFIG_DEBUG_VM.
>> This patch
tests+0x1b4/0x334
[c4a2fa40] [c206db34] debug_vm_pgtable+0xcbc/0x1c48
[c4a2fc10] [c000fd28] do_one_initcall+0x60/0x388
Fixes: 27af67f35631 ("powerpc/book3s64/mm: enable transparent pud hugepage")
Signed-off-by: Aneesh Kumar K.V (IBM)
---
mm/debug_v
On 1/25/24 3:16 PM, Kunwu Chan wrote:
> This part has been commented out for about 17 years.
> If there are no plans to enable this part code in the future,
> we can remove this dead code.
>
> Signed-off-by: Kunwu Chan
> ---
> arch/powerpc/include/asm/book3s/64/mmu-hash.h | 22 ---
>
Amit Machhiwal writes:
> Currently, rebooting a pseries nested qemu-kvm guest (L2) results in
> below error as L1 qemu sends PVR value 'arch_compat' == 0 via
> ppc_set_compat ioctl. This triggers a condition failure in
> kvmppc_set_arch_compat() resulting in an EINVAL.
>
> qemu-system-ppc64: Unab
David Hildenbrand writes:
>>>
>> If high bits are used for
>> something else, then we might produce a garbage PTE on overflow, but that
>> shouldn't really matter I concluded for folio_pte_batch() purposes, we'd
>> not detect "belongs to this folio batch" either way.
>>>
>>> Exactly
David Hildenbrand writes:
> On 23.01.24 12:38, Ryan Roberts wrote:
>> On 23/01/2024 11:31, David Hildenbrand wrote:
>
>> If high bits are used for
>> something else, then we might produce a garbage PTE on overflow, but that
>> shouldn't really matter I concluded for folio_pte_batc
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
Hi Linus,
Please pull powerpc fixes for 6.8:
The following changes since commit d2441d3e8c0c076d0a2e705fa235c76869a85140:
MAINTAINERS: powerpc: Add Aneesh & Naveen (2023-12-13 22:35:57 +1100)
are available in the git repository at:
https:/
Michael Ellerman writes:
> #ifdef CONFIG_PPC64
> int boot_cpu_hwid = -1;
> @@ -492,12 +493,26 @@ void __init smp_setup_cpu_maps(void)
> avail = !of_property_match_string(dn,
> "enable-method", "spin-table");
>
> - c
Michael Ellerman writes:
> +static int assign_threads(unsigned cpu, unsigned int nthreads, bool avail,
>
Maybe rename 'avail' to 'present'?
> + const __be32 *hw_ids)
> +{
> + for (int i = 0; i < nthreads && cpu < nr_cpu_ids; i++) {
> + __be32 hwi
Haren Myneni writes:
> VAS allocate, modify and deallocate HCALLs return
> H_LONG_BUSY_ORDER_1_MSEC or H_LONG_BUSY_ORDER_10_MSEC for busy
> delay and expects OS to reissue HCALL after that delay. But using
> msleep() will often sleep at least 20 msecs even though the
> hypervisor suggests OS rei
Luming Yu writes:
> Before we have powerpc to use the generic entry infrastructure,
> the call to fire user return notifier is made temporarily in powerpc
> entry code.
>
It is still not clear what will be registered as a user return notifier.
Can you summarize that here?
>
> Signed-off-by: Lumin
Vaibhav Jain writes:
> Hi Aneesh,
>
> "Aneesh Kumar K.V" writes:
>
>
>>> Yes, agreed, and that's a nice suggestion. However ATM the hypervisor
>>> supporting Nestedv2 doesn't have support for this hcall. In future
>>> once we have support fo
Vaibhav Jain writes:
> Hi Aneesh,
>
> Thanks for looking into this patch. My responses inline below:
>
> "Aneesh Kumar K.V (IBM)" writes:
>
>> Vaibhav Jain writes:
>>
>>> From: Jordan Niethe
>>>
>>> An L0 must invalidate the L
Nicholas Miehlbradt writes:
> Splits the vmalloc region into four. The first quarter is the new
> vmalloc region, the second is used to store shadow metadata and the
> third is used to store origin metadata. The fourth quarter is unused.
>
Do we support KMSAN for both hash and radix? If hash is
Nicholas Miehlbradt writes:
> Functions which walk the stack read parts of the stack which cannot be
> instrumented by KMSAN e.g. the backchain. Disable KMSAN sanitization of
> these functions to prevent false positives.
>
Is the annotation needed to avoid uninitialized access check when
reading
Srikar Dronamraju writes:
> If there are shared processor LPARs, underlying Hypervisor can have more
> virtual cores to handle than actual physical cores.
>
> Starting with Power 9, a big core (aka SMT8 core) has 2 nearly
> independent thread groups. On a shared processors LPARs, it helps to
> pa
Gaurav Batra writes:
> When the kdump kernel tries to copy dump data over SR-IOV, the LPAR
> panics due to a NULL pointer exception.
>
> Here is the complete stack
>
> [ 19.944378] Kernel attempted to read user page (0) - exploit attempt?
> (uid: 0)^M
> [ 19.944388] BUG: Kernel NULL pointer dereferenc
d be helpful if you could include the details mentioned in your
reply in the commit message. Specifically, provide information
about the over-provisioned config and if you plan to send another
update, please remove the additional changes in the printk_once section.
Reviewed-by: Aneesh Kumar K.V (IBM)
Thank you.
-aneesh
On 12/11/23 9:26 AM, Vaibhav Jain wrote:
> Hi Aneesh,
>
> Thanks for looking into this patch. My responses inline:
>
> "Aneesh Kumar K.V (IBM)" writes:
>
>
>> Maybe we should use
>> firmware_has_feature(FW_FEATURE_H_COPY_TOFROM_GUEST))?
>>
From: "Aneesh Kumar K.V (IBM)"
This reverts commit 1abce0580b89 ("powerpc/64s: Fix __pte_needs_flush()
false positive warning")
The previous patch dropped the usage of _PAGE_PRIVILEGED with PAGE_NONE.
Hence this check can be dropped.
Signed-off-by: Aneesh Kumar K.V (IBM
From: "Aneesh Kumar K.V (IBM)"
There used to be a dependency on _PAGE_PRIVILEGED with pte_savedwrite.
But that got dropped by
commit 6a56ccbcf6c6 ("mm/autonuma: use can_change_(pte|pmd)_writable() to
replace savedwrite")
With the change in this patch numa fault pte (pte_pro
On 11/22/23 4:05 PM, Sourabh Jain wrote:
> Hello Michael,
>
>
> On 22/11/23 10:47, Michael Ellerman wrote:
>> Aneesh Kumar K V writes:
>> ...
>>> I am not sure whether we need to add all the complexity to enable
>>> supporting different fadump kerne
IBLE has been set to 'N'
>
> Reported-by: Srikar Dronamraju
> Suggested-by: Aneesh Kumar K V
> Suggested-by: Michael Ellerman
> Signed-off-by: Vishal Chourasia
>
> v1: https://lore.kernel.org/all/20231114082046.6018-1-vish...@linux.ibm.com
> ---
> During the configu
On 11/17/23 10:03 AM, Sourabh Jain wrote:
> Hi Aneesh,
>
> Thanks for reviewing the patch.
>
> On 15/11/23 10:14, Aneesh Kumar K.V wrote:
>> Sourabh Jain writes:
>>
>>
>>
>>> diff --git a/arch/powerpc/include/asm/fadump-internal.h
>>&g
On 11/15/23 5:23 PM, Vishal Chourasia wrote:
>
> On 15/11/23 1:39 pm, Aneesh Kumar K.V wrote:
>> Vishal Chourasia writes:
>>
>>> This patch modifies the ARCH_HIBERNATION_POSSIBLE option to ensure that it
>>> correctly depends on these PowerPC configurat
Vishal Chourasia writes:
> This patch modifies the ARCH_HIBERNATION_POSSIBLE option to ensure that it
> correctly depends on these PowerPC configurations being enabled. As a result,
> it prevents the HOTPLUG_CPU from being selected when the required dependencies
> are not satisfied.
>
> This chan
Srikar Dronamraju writes:
> If there are shared processor LPARs, underlying Hypervisor can have more
> virtual cores to handle than actual physical cores.
>
> Starting with Power 9, a big core (aka SMT8 core) has 2 nearly
> independent thread groups. On a shared processors LPARs, it helps to
> pa
Srikar Dronamraju writes:
> PowerVM systems configured in shared processors mode have some unique
> challenges. Some device-tree properties will be missing on a shared
> processor. Hence some sched domains may not make sense for shared processor
> systems.
>
> Most shared processor systems are ov
Srikar Dronamraju writes:
> If there are shared processor LPARs, underlying Hypervisor can have more
> virtual cores to handle than actual physical cores.
>
> Starting with Power 9, a big core (aka SMT8 core) has 2 nearly
> independent thread groups. On a shared processors LPARs, it helps to
> pa
Sourabh Jain writes:
> diff --git a/arch/powerpc/include/asm/fadump-internal.h
> b/arch/powerpc/include/asm/fadump-internal.h
> index 27f9e11eda28..7be3d8894520 100644
> --- a/arch/powerpc/include/asm/fadump-internal.h
> +++ b/arch/powerpc/include/asm/fadump-internal.h
> @@ -42,7 +42,25 @@
On 11/14/23 3:16 PM, Srikar Dronamraju wrote:
> * Aneesh Kumar K.V [2023-11-14 12:42:19]:
>
>> No functional change in this patch. A helper is added to find if
>> vcpu is dispatched by hypervisor. Use that instead of opencoding.
>> Also clarify some of the comments.
>
On 11/14/23 2:53 PM, Shrikanth Hegde wrote:
>
>
> On 11/14/23 12:42 PM, Aneesh Kumar K.V wrote:
>> No functional change in this patch. A helper is added to find if
>> vcpu is dispatched by hypervisor. Use that instead of opencoding.
>> Also clarify some of the co
No functional change in this patch. A helper is added to find if
vcpu is dispatched by hypervisor. Use that instead of opencoding.
Also clarify some of the comments.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/paravirt.h | 33 ++---
1 file changed, 25
age fault
path")
explains the details.
Also revert commit 1abce0580b89 ("powerpc/64s: Fix __pte_needs_flush() false
positive warning")
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 9 +++--
arch/powerpc/include/asm/book3s/64/tlbflush.h | 9 ++---
On 11/13/23 5:17 PM, Nicholas Piggin wrote:
> On Mon Nov 13, 2023 at 8:45 PM AEST, Aneesh Kumar K V wrote:
>>>> diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
>>>> b/arch/powerpc/mm/book3s64/hash_utils.c
>>>> index ad2afa08e62e..b2eda22195f0 100
On 11/13/23 3:46 PM, Nicholas Piggin wrote:
> On Thu Nov 2, 2023 at 11:23 PM AEST, Aneesh Kumar K.V wrote:
>> There used to be a dependency on _PAGE_PRIVILEGED with pte_savedwrite.
>> But that got dropped by
>> commit 6a56ccbcf6c6 ("mm/autonuma: use can_change_(pte|pmd)
Christophe Leroy writes:
> Le 07/11/2023 à 14:34, Aneesh Kumar K.V a écrit :
>> Christophe Leroy writes:
>>
>>> Le 31/10/2023 à 11:15, Aneesh Kumar K.V a écrit :
>>>> Christophe Leroy writes:
>>
>>
>> We are adding the pte flags
On 11/10/23 8:23 PM, Jason Gunthorpe wrote:
> On Fri, Nov 10, 2023 at 08:19:23PM +0530, Aneesh Kumar K.V wrote:
>>
>> Hello,
>>
>> Some architectures can now support EXEC_ONLY mappings and I am wondering
>> what get_user_pages() on those addresses should r
Hello,
Some architectures can now support EXEC_ONLY mappings and I am wondering
what get_user_pages() on those addresses should return. Earlier
PROT_EXEC implied PROT_READ and pte_access_permitted() returned true for
that. But arm64 does have this explicit comment that says
/*
* p??_access_pe
Christophe Leroy writes:
> Le 31/10/2023 à 11:15, Aneesh Kumar K.V a écrit :
>> Christophe Leroy writes:
>>
>>> pte_user() is now only used in pte_access_permitted() to check
>>> access on vmas. User flag is cleared to make a page unreadable.
>>>
>&
On 11/6/23 6:53 PM, Christophe Leroy wrote:
>
>
> Le 02/11/2023 à 06:39, Aneesh Kumar K.V a écrit :
>> Christophe Leroy writes:
>>
>>> Introduce PAGE_EXECONLY_X macro which provides exec-only rights.
>>> The _X may be seen as redundant with the EXECONLY
leared (no-access). This also removes pte_user() from
book3s/64.
pte_access_permitted() now checks for _PAGE_EXEC because we now support
EXECONLY mappings.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 23 +---
arch/powerpc/mm/book3s64/ha
Christophe Leroy writes:
> Introduce PAGE_EXECONLY_X macro which provides exec-only rights.
> The _X may be seen as redundant with the EXECONLY but it helps
> keep consistency; all macros having the EXEC right have _X.
>
> And put it next to PAGE_NONE as PAGE_EXECONLY_X is
> somehow PAGE_NONE + E
Christophe Leroy writes:
> pte_user() is now only used in pte_access_permitted() to check
> access on vmas. User flag is cleared to make a page unreadable.
>
> So rename it pte_read() and remove pte_user() which isn't used
> anymore.
>
> For the time being it checks _PAGE_USER but in the near fut
>
> if (ret == H_SUCCESS)
> return retbuf[0];
>
There is no functional change in this patch. It clarifies that the
hcall expects buf to be in big-endian format while retbuf contains
native-endian values.
Not sure why this was not picked up.
Reviewed-by: Aneesh Kumar K.V
Hari Bathini writes:
> patch_instruction() entails setting up pte, patching the instruction,
> clearing the pte and flushing the tlb. If multiple instructions need
> to be patched, every instruction would have to go through the above
> drill unnecessarily. Instead, introduce patch_instructions()
1 ("powerpc: implement the new page table range API")
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/pgtable.c | 32 ++--
1 file changed, 22 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index 3ba9fe4116
Aneesh Kumar K V writes:
> On 10/18/23 11:25 AM, Christophe Leroy wrote:
>>
>>
>> Le 18/10/2023 à 06:55, Aneesh Kumar K.V a écrit :
>>> With commit 9fee28baa601 ("powerpc: implement the new page table range
>>> API") we added set_ptes to power
On 10/18/23 11:25 AM, Christophe Leroy wrote:
>
>
> Le 18/10/2023 à 06:55, Aneesh Kumar K.V a écrit :
>> With commit 9fee28baa601 ("powerpc: implement the new page table range
>> API") we added set_ptes to powerpc architecture but the implementation
>> miss
e expensive tlb invalidate which
is not needed when you are setting up the pte for the first time. See
commit 56eecdb912b5 ("mm: Use ptep/pmdp_set_numa() for updating
_PAGE_NUMA bit") for more details
Fixes: 9fee28baa601 ("powerpc: implement the new page table range API")
Signed-
Erhard Furtner writes:
> On Thu, 12 Oct 2023 20:54:13 +0100
> "Matthew Wilcox (Oracle)" wrote:
>
>> Dave Woodhouse reported that we now nest calls to
>> arch_enter_lazy_mmu_mode(). That was inadvertent, but in principle we
>> should allow it. On further investigation, Juergen already fixed it
Erhard Furtner writes:
> On Fri, 06 Oct 2023 11:04:15 +0530
> "Aneesh Kumar K.V" wrote:
>
>> Can you check this change?
>>
>> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
>> index 3ba9fe411604..6d144fedd557 100644
>
Hi,
Erhard Furtner writes:
> Greetings!
>
> Kernel 6.5.5 boots fine on my PowerMac G5 11,2 but kernel 6.6-rc3 fails to
> boot with following dmesg shown on the OpenFirmware console (transcribed
> screenshot):
> I bisected the issue and got 9fee28baa601f4dbf869b1373183b312d2d5ef3d as 1st
>
Aditya Gupta writes:
> On Wed, Sep 20, 2023 at 05:45:36PM +0530, Aneesh Kumar K.V wrote:
>> Aditya Gupta writes:
>>
>> > Since below commit, address mapping for vmemmap has changed for Radix
>> > MMU, where address mapping is stored in kernel page table its
Aditya Gupta writes:
> Since below commit, address mapping for vmemmap has changed for Radix
> MMU, where address mapping is stored in kernel page table itself,
> instead of earlier used 'vmemmap_list'.
>
> commit 368a0590d954 ("powerpc/book3s64/vmemmap: switch radix to use
> a different
On 8/28/23 1:16 PM, Aneesh Kumar K.V wrote:
> With CONFIG_SPARSEMEM disabled the below kernel build error is observed.
>
> arch/powerpc/mm/init_64.c:477:38: error: use of undeclared identifier
> 'SECTION_SIZE_BITS'
>
> CONFIG_MEMORY_HOTPLUG depends on CONFIG_SPARSEM
can still map them using a 256MB memory block size.
Fixes: 4d15721177d5 ("powerpc/mm: Cleanup memory block size probing")
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/init_64.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/mm/init_64
ck size probing")
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/init_64.c | 19 +++
1 file changed, 15 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index fcda46c2b8df..e3d7379ef480 100644
--- a/arch/powerpc/mm/init_64.c
+
On 8/25/23 12:39 PM, kernel test robot wrote:
> tree: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
> head: b9bbbf4979073d5536b7650decd37fcb901e6556
> commit: 4d15721177d539d743fcf31d7bb376fb3b81aeb6 [84/128] powerpc/mm: Cleanup
> memory block size probing
> config: po
block
size, we require 4 pages to map the vmemmap pages. In order to align things
correctly we end up adding a reserve of 28 pages, i.e., for every 4096
pages 28 pages get reserved.
Reviewed-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/Kconfig | 1
: Michal Hocko
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
drivers/base/memory.c | 27 +
include/linux/memory.h | 8 ++-
mm/memory_hotplug.c| 54 ++
3 files changed, 52 insertions(+), 37 deletions(-)
diff
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
.../admin-guide/mm/memory-hotplug.rst | 12 ++
mm/memory_hotplug.c | 120 +++---
2 files changed, 113 insertions(+), 19 deletions(-)
diff --git a/Documentation/admin-guide/mm/memory
Some architectures would want different restrictions. Hence add an
architecture-specific override.
The PMD_SIZE check is moved there.
Acked-by: Michal Hocko
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
mm/memory_hotplug.c | 24
1 file changed, 20
If not supported, fall back to not using memmap on memory. This avoids
the need for callers to do the fallback.
Acked-by: Michal Hocko
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
drivers/acpi/acpi_memhotplug.c | 3 +--
include/linux/memory_hotplug.h | 3 ++-
mm
Instead of adding menu entry with all supported architectures, add
mm/Kconfig variable and select the same from supported architectures.
No functional change in this patch.
Acked-by: Michal Hocko
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
arch/arm64/Kconfig | 4
we remove the memory we can find the altmap details which are needed
on some architectures.
* rebase to latest linus tree
Aneesh Kumar K.V (6):
mm/memory_hotplug: Simplify ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE kconfig
mm/memory_hotplug: Allow memmap on memory hotplug request to fallback
mm
On 8/8/23 12:05 AM, David Hildenbrand wrote:
> On 07.08.23 14:41, David Hildenbrand wrote:
>> On 07.08.23 14:27, Michal Hocko wrote:
>>> On Sat 05-08-23 19:54:23, Aneesh Kumar K V wrote:
>>> [...]
>>>> Do you see a need for firmware-managed memory to be h
On 8/3/23 5:00 PM, Michal Hocko wrote:
> On Thu 03-08-23 11:24:08, David Hildenbrand wrote:
> [...]
>>> would be readable only when the block is offline and it would reallocate
>>> vmemmap on the change. Makes sense? Are there any risks? Maybe pfn
>>> walkers?
>>
>> The question is: is it of any re
On 8/2/23 9:24 PM, David Hildenbrand wrote:
> On 02.08.23 17:50, Michal Hocko wrote:
>> On Wed 02-08-23 10:15:04, Aneesh Kumar K V wrote:
>>> On 8/1/23 4:20 PM, Michal Hocko wrote:
>>>> On Tue 01-08-23 14:58:29, Aneesh Kumar K V wrote:
>>>>> On 8/1/23
On 8/2/23 4:40 AM, Verma, Vishal L wrote:
> On Tue, 2023-08-01 at 10:11 +0530, Aneesh Kumar K.V wrote:
>> With memmap on memory, some architecture needs more details w.r.t altmap
>> such as base_pfn, end_pfn, etc to unmap vmemmap memory. Instead of
>> computing them again wh
On 8/1/23 4:20 PM, Michal Hocko wrote:
> On Tue 01-08-23 14:58:29, Aneesh Kumar K V wrote:
>> On 8/1/23 2:28 PM, Michal Hocko wrote:
>>> On Tue 01-08-23 10:11:16, Aneesh Kumar K.V wrote:
>>>> Allow updating memmap_on_memory mode after the kernel boot. Memory
>>&
On 8/1/23 2:28 PM, Michal Hocko wrote:
> On Tue 01-08-23 10:11:16, Aneesh Kumar K.V wrote:
>> Allow updating memmap_on_memory mode after the kernel boot. Memory
>> hotplug done after the mode update will use the new memmap_on_memory
>> value.
>
> Well, this is a user
also be more than the section size.
Reviewed-by: Reza Arbab
Signed-off-by: Aneesh Kumar K.V
---
.../admin-guide/kernel-parameters.txt | 3 +++
arch/powerpc/kernel/setup_64.c| 23 +++
arch/powerpc/mm/init_64.c | 17 ++
3
block size value.
Add workaround to force 256MB memory block size if device driver managed
memory such as GPU memory is present. This helps to add GPU memory
that is not aligned to 1G.
Co-developed-by: Reza Arbab
Signed-off-by: Reza Arbab
Signed-off-by: Aneesh Kumar K.V
---
Changes from v3
: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
drivers/base/memory.c | 27 +
include/linux/memory.h | 8 ++
mm/memory_hotplug.c| 55 ++
3 files changed, 53 insertions(+), 37 deletions(-)
diff --git a/drivers/base
Allow updating memmap_on_memory mode after the kernel boot. Memory
hotplug done after the mode update will use the new memmap_on_memory
value.
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
mm/memory_hotplug.c | 33 +
1 file changed, 17
block
size, we require 4 pages to map the vmemmap pages. In order to align things
correctly we end up adding a reserve of 28 pages, i.e., for every 4096
pages 28 pages get reserved.
Reviewed-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/Kconfig | 1
Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
.../admin-guide/mm/memory-hotplug.rst | 12 ++
mm/memory_hotplug.c | 120 +++---
2 files changed, 113 insertions(+), 19 deletions(-)
diff --git a/Documentation/admin-guide/mm/memory-hotplug.rst
b
Some architectures would want different restrictions. Hence add an
architecture-specific override.
The PMD_SIZE check is moved there.
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
mm/memory_hotplug.c | 24
1 file changed, 20 insertions(+), 4
If not supported, fall back to not using memmap on memory. This avoids
the need for callers to do the fallback.
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
drivers/acpi/acpi_memhotplug.c | 3 +--
include/linux/memory_hotplug.h | 3 ++-
mm/memory_hotplug.c| 13