es not introduce any functional change.
>
> Signed-off-by: Lorenzo Stoakes
Reviewed-by: Oscar Salvador
--
Oscar Salvador
SUSE Labs
ot introduce any functional change.
>
> Signed-off-by: Lorenzo Stoakes
Acked-by: Oscar Salvador
Heh, for one second I thought this was about to convert vm_fault->flags to
the actual enum fault_flag (which wouldn't be bad if you ask me, because
"unsigned int flags" is not
> At the moment this is simply an incongruity, however in future we plan to
> change this type and therefore this change is a critical requirement for
> doing so.
>
> Overall, this patch does not introduce any functional change.
>
> Signed-off-by: Lorenzo Stoakes
Revi
On Wed, Dec 04, 2024 at 10:28:39AM +0100, David Hildenbrand wrote:
> On 04.12.24 10:15, Oscar Salvador wrote:
> > On Wed, Dec 04, 2024 at 10:03:28AM +0100, Vlastimil Babka wrote:
> > > On 12/4/24 09:59, Oscar Salvador wrote:
> > > > On Tue, Dec 03, 2024 at 08:19:02PM
On Wed, Dec 04, 2024 at 10:03:28AM +0100, Vlastimil Babka wrote:
> On 12/4/24 09:59, Oscar Salvador wrote:
> > On Tue, Dec 03, 2024 at 08:19:02PM +0100, David Hildenbrand wrote:
> >> It was always set using "GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL",
>
ch is not
part of the cpuset of the task that originally allocated it, thus violating the
policy? Isn't that a problem?
--
Oscar Salvador
SUSE Labs
ed; the caller we'll
> be converting (powernv/memtrace) next won't trigger this.
>
> Signed-off-by: David Hildenbrand
Reviewed-by: Oscar Salvador
--
Oscar Salvador
SUSE Labs
flags used for
> compaction/migration exactly once. Update the documentation of the
> gfp_mask parameter for alloc_contig_range() and alloc_contig_pages().
>
> Acked-by: Zi Yan
> Signed-off-by: David Hildenbrand
Reviewed-by: Oscar Salvador
--
Oscar Salvador
SUSE Labs
On Tue, Dec 03, 2024 at 10:47:27AM +0100, David Hildenbrand wrote:
> The flags are no longer used, we can stop passing them to
> isolate_single_pageblock().
>
> Reviewed-by: Zi Yan
> Signed-off-by: David Hildenbrand
Reviewed-by: Oscar Salvador
--
Oscar Salvador
SUSE Labs
On Tue, Dec 03, 2024 at 10:47:29AM +0100, David Hildenbrand wrote:
> The single user is in page_alloc.c.
>
> Reviewed-by: Zi Yan
> Signed-off-by: David Hildenbrand
Reviewed-by: Oscar Salvador
--
Oscar Salvador
SUSE Labs
On Tue, Dec 03, 2024 at 10:47:28AM +0100, David Hildenbrand wrote:
> The parameter is unused, so let's stop passing it.
>
> Reviewed-by: Zi Yan
> Signed-off-by: David Hildenbrand
Reviewed-by: Oscar Salvador
--
Oscar Salvador
SUSE Labs
ll code up some tests, and once you have the pmd_* stuff on 8xx we
can give it a shot.
Thanks!
--
Oscar Salvador
SUSE Labs
;
> + }
You mentioned that we need to bail out otherwise only the first PxD would be
updated.
In the comment you say that mm will take care of making the page young
or dirty.
Does this mean that the PxDs underneath will not have their bits updated?
--
Oscar Salvador
SUSE Labs
ng off rails when unifying hugetlb and normal walkers.
test_8mbp_hugepage() could do some checks with pmd_* operations, print
the results, and then compare them with those that check_hugetlb_entry()
would give us.
If everything is alright, both results should be the same.
I can write the tests up, so we run some sort of smoketests.
So yes, I do think that this is a good initiative.
Thanks a lot, Christophe
--
Oscar Salvador
SUSE Labs
On Tue, Jun 11, 2024 at 11:20:01AM -0400, Peter Xu wrote:
> On Tue, Jun 11, 2024 at 05:08:45PM +0200, Oscar Salvador wrote:
> > The problem is that we do not have spare bits for 8xx to mark these ptes
> > as cont-ptes or mark them pte as 8MB, so I do not see a clear path on how
>
On Tue, Jun 11, 2024 at 10:17:30AM -0400, Peter Xu wrote:
> Oscar,
>
> On Tue, Jun 11, 2024 at 11:34:23AM +0200, Oscar Salvador wrote:
> > Which means that they would be caught in the following code:
> >
> > ptl = pmd_huge_lock(pmd, vma);
> >
that instead of this patch, we have one implementing pmd_leaf
and pmd_leaf_size for 8MB hugepages on 8xx, as that takes us closer to our
goal of
unifying hugetlb.
[1] https://github.com/leberus/linux/tree/hugetlb-pagewalk-v2
--
Oscar Salvador
SUSE Labs
t; vm_area_struct *vma,
>
> return pte_alloc_huge(mm, pmd, addr);
> }
> -#endif
Did not notice this before.
This belongs to the previous patch.
--
Oscar Salvador
SUSE Labs
ing on the walk_page API to get rid of hugetlb
specific hooks basing it on this patchset.
Thanks a lot for this work Christophe
--
Oscar Salvador
SUSE Labs
On Mon, May 27, 2024 at 03:30:13PM +0200, Christophe Leroy wrote:
> All targets have now opted out of CONFIG_ARCH_HAS_HUGEPD so
> remove left over code.
>
> Signed-off-by: Christophe Leroy
Acked-by: Oscar Salvador
--
Oscar Salvador
SUSE Labs
> I have 0 clue about this code. What would happen if we do not bail out?
> >
>
> In that case the pte_xchg() in the while () will only set ACCESS or
> DIRTY bit on the first PxD entry, not on all cont-PxD entries.
I see, thanks for explaining.
--
Oscar Salvador
SUSE Labs
On Mon, May 27, 2024 at 03:30:14PM +0200, Christophe Leroy wrote:
> powerpc was the only user of CONFIG_ARCH_HAS_HUGEPD and doesn't
> use it anymore, so remove all related code.
>
> Signed-off-by: Christophe Leroy
Acked-by: Oscar Salvador
--
Oscar Salvador
SUSE Labs
On Wed, May 29, 2024 at 10:14:15AM +, Christophe Leroy wrote:
>
>
> Le 29/05/2024 à 12:09, Oscar Salvador a écrit :
> > On Wed, May 29, 2024 at 09:49:48AM +, Christophe Leroy wrote:
> >> Doesn't really matter if it's PUD or PMD at this point. On a 32 b
)
> 1011 4 Gbytes (e500v2 only) (Shift 32)
You say hugepages start at 2MB (shift 21), but you also say that the
smallest hugepage Linux supports is 4MB (shift 22)?
--
Oscar Salvador
SUSE Labs
e) + (nr << PFN_PTE_SHIFT));
> }
>
> And when I called it with nr = PMD_SIZE / PAGE_SIZE = 2M / 4k = 512, as
> we have PFN_PTE_SHIFT = 24, I got 512 << 24 = 0
Ah, I missed that trickery with the types.
Thanks!
--
Oscar Salvador
SUSE Labs
urn 1;
> + if ((access & _PAGE_WRITE) && !(old_pte & _PAGE_DIRTY))
> + return 1;
I have 0 clue about this code. What would happen if we do not bail out?
--
Oscar Salvador
SUSE Labs
__set_pte_at(mm, addr, ptep, pte, 0);
> + pte = __pte(pte_val(pte) + ((unsigned long long)pdsize /
> PAGE_SIZE << PFN_PTE_SHIFT));
You can use pte_advance_pfn() here? Just have
nr = pdsize / PAGE_SIZE
pte_advance_pfn(pte, nr)
(the PFN_PTE_SHIFT is applied inside pte_advance_pfn()).
Which 'sz's can we have here? You mentioned that e500 supports
4M, 16M, 64M, 256M and 1G.
Which of these can be huge?
--
Oscar Salvador
SUSE Labs
PMD entries will point to that page table.
>
> The PMD entries also get a flag to tell it is addressing an 8M page,
> this is required for the HW tablewalk assistance.
>
> Signed-off-by: Christophe Leroy
> Reviewed-by: Oscar Salvador
> ---
...
> +#define __HAVE_ARCH_
> +#define _PAGE_HSIZE_MSK (_PAGE_U0 | _PAGE_U1 | _PAGE_U2 | _PAGE_U3)
> +#define _PAGE_HSIZE_SHIFT14
Add a comment above explaining which P*_SHIFT we need to cover with these
4 bits.
--
Oscar Salvador
SUSE Labs
> - pte = huge_ptep_get(ptep);
> + pte = huge_ptep_get(vma->mm, addr, ptep);
I looked again and I stumbled upon this.
It should have been "vma->vm_mm".
--
Oscar Salvador
SUSE Labs
of the entry.
>
> So huge_ptep_get() will need to know either the size of the page
> or get the pmd.
>
> In order to be consistent with huge_ptep_get_and_clear(), give
> mm and address to huge_ptep_get().
>
> Signed-off-by: Christophe Leroy
Reviewed-by: Oscar Salvador
--
Oscar Salvador
SUSE Labs
d handling in
> hugetlb rework") we now have the vma in gup_hugepte() so we now pass
> vma->vm_mm
I did not notice, thanks.
--
Oscar Salvador
SUSE Labs
,
but other than that looks good to me, so FWIW:
Reviewed-by: Oscar Salvador
Just a nit below:
> +#define __HAVE_ARCH_HUGE_PTEP_GET
> +static inline pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr,
> pte_t *ptep)
> +{
> + if (ptep_is_8m_pmdp(mm, addr, ptep))
>
-by: Christophe Leroy
Reviewed-by: Oscar Salvador
--
Oscar Salvador
SUSE Labs
gtable.c:394:15: note: declared here
>
> This is due to pmd_offset() being a no-op in that case.
>
> So rework it for powerpc/32 so that pXd_offset() are used on real
> pointers and not on on-stack copies.
>
> Signed-off-by: Christophe Leroy
Reviewed-by: Oscar Salvador
--
Oscar Salvador
SUSE Labs
GEPD, which should not be the case anymore for 8xx after
patch#8, and since 8xx is the only one that will use the mm parameter from
huge_ptep_get, we are all good.
--
Oscar Salvador
SUSE Labs
age size param to
> set_huge_pte_at()")
> Signed-off-by: Christophe Leroy
Reviewed-by: Oscar Salvador
> ---
> arch/powerpc/mm/nohash/8xx.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/mm/nohash/8xx.c b/arch/powerpc/mm/nohash/8xx.c
> i
nto the patch that makes pmd_leaf() not always return
false, but no strong feelings:
Reviewed-by: Oscar Salvador
--
Oscar Salvador
SUSE Labs
the
> core.
>
> Signed-off-by: Christophe Leroy
thanks, this looks much cleaner.
Reviewed-by: Oscar Salvador
--
Oscar Salvador
SUSE Labs
PMD/PUD instead of HUGEPD
> powerpc/mm: Remove hugepd leftovers
> mm: Remove CONFIG_ARCH_HAS_HUGEPD
I glanced over it and it looks much better, not having to fiddle with other arch
code and generic declarations is a big plus.
I plan to do a proper review tomorrow.
Thanks for working on this Christophe!
--
Oscar Salvador
SUSE Labs
On Sat, May 25, 2024 at 06:44:06AM +, Christophe Leroy wrote:
> No, all have cont-PMD but only 8xx handles pages greater than PMD_SIZE
> as cont-PTE instead of cont-PMD.
Yes, sorry, I managed to confuse myself. It is obvious from the code.
--
Oscar Salvador
SUSE Labs
r12, r11, r12;
You add the offset to pgdir?
> + lwz r11, 4(r12);/* Get pgd/pmd entry */ \
What is at offset 4?
--
Oscar Salvador
SUSE Labs
rom shift field.
>
> Also remove the inc field, which is unused.
>
> Signed-off-by: Christophe Leroy
Reviewed-by: Oscar Salvador
--
Oscar Salvador
SUSE Labs
SZ_8M)
> return NULL;
Since this function is the core for allocating huge pages, I think it would
benefit from a comment at the top explaining the possible layouts,
e.g.: who can have cont-{P4D,PUD,PMD}, etc.
A brief explanation of the possible schemes for all powerpc platforms.
That would help people looking into this in the future.
--
Oscar Salvador
SUSE Labs
offset(p4dp, ea);
> + pmdp = pmd_offset(pudp, ea);
I would drop a comment on top explaining that these are no-ops on 32-bit,
otherwise it might not be obvious to people why this distinction between
64-bit and 32-bit exists.
Other than that looks good to me
--
Oscar Salvador
SUSE Labs
On Fri, May 17, 2024 at 09:00:02PM +0200, Christophe Leroy wrote:
> On 8xx, only the shift field is used in struct mmu_psize_def
>
> Remove other fields and related macros.
>
> Signed-off-by: Christophe Leroy
Reviewed-by: Oscar Salvador
--
Oscar Salvador
SUSE Labs
Reviewed-by: Oscar Salvador
--
Oscar Salvador
SUSE Labs
4c9b93e..59f0d7706d2f 100644
> --- a/arch/powerpc/mm/pgtable.c
> +++ b/arch/powerpc/mm/pgtable.c
> +void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
> + pte_t pte, unsigned long sz)
> +{
> + pmd_t *pmdp = pmd_off(mm, addr);
> +
> + pte = set_pte_filter(pte, addr);
> +
> + if (sz == SZ_8M) {
> + __set_huge_pte_at(pmdp, pte_offset_kernel(pmdp, 0),
> pte_val(pte));
> + __set_huge_pte_at(pmdp, pte_offset_kernel(pmdp + 1, 0),
> pte_val(pte) + SZ_4M);
You also mentioned that this would slightly change after you drop
patch#0 and patch#1.
The only comment I have right now would be to add a little comment
explaining the layout (the replication of 1024 entries), or just
something like "see comment from number_of_cells_per_pte".
--
Oscar Salvador
SUSE Labs
ry
> appreciated if you can help have a look at this series.
I am not a powerpc developer, but I plan to keep reviewing this series
today and next week.
thanks
--
Oscar Salvador
SUSE Labs
his page mapped as usual, so if ProcB re-access it, that will not
trigger a fault (because the page is still mapped in its pagetables).
--
Oscar Salvador
SUSE Labs
hen we would not need all these 'sz' parameters scattered.
Can that work?
PS: Do you know a way to emulate an 8xx VM? qemu does not seem to
support it.
Thanks
--
Oscar Salvador
SUSE Labs
nsigned long addr)
pte = ptep_get_lockless(ptep);
if (pte_present(pte))
- size = pte_leaf_size(pte);
+ size = pmd_pte_leaf_size(pmd, pte);
pte_unmap(ptep);
#endif /* CONFIG_HAVE_GUP_FAST */
--
Oscar Salvador
SUSE Labs
On Tue, May 21, 2024 at 10:48:21AM +1000, Michael Ellerman wrote:
> Yeah I can. Does it actually cause a bug at runtime (I assume so)?
No, currently set_huge_pte_at() from 8xx ignores the 'sz' parameter.
But it will be used after this series.
--
Oscar Salvador
SUSE Labs
On Mon, May 20, 2024 at 04:31:39PM +, Christophe Leroy wrote:
> Hi Oscar, hi Michael,
>
> Le 20/05/2024 à 11:14, Oscar Salvador a écrit :
> > On Fri, May 17, 2024 at 09:00:00PM +0200, Christophe Leroy wrote:
> >> set_huge_pte_at() expects the real page size
dc8 ("mm: hugetlb: add huge page size param to
> set_huge_pte_at()")
> Signed-off-by: Christophe Leroy
Reviewed-by: Oscar Salvador
AFAICS, this fixup is not related to the series, right? (yes, you will use
the parameter later)
I would have it at the very beginning of the series.
>
break;
> }
> if (unlikely(pmd_none(dst_pmdval)) &&
> - unlikely(__pte_alloc(dst_mm, dst_pmd))) {
> + unlikely(__pte_alloc(dst_mm, dst_pmd, PAGE_SIZE))) {
> err = -ENOMEM;
> break;
> }
> @@ -1687,7 +1687,7 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx,
> unsigned long dst_start,
> err = -ENOENT;
> break;
> }
> - if (unlikely(__pte_alloc(mm, src_pmd))) {
> + if (unlikely(__pte_alloc(mm, src_pmd,
> PAGE_SIZE))) {
> err = -ENOMEM;
> break;
> }
> --
> 2.44.0
>
--
Oscar Salvador
SUSE Labs
yout
of hugepd,
expected layout after the work, etc.
I think it would help in reviewing this series.
Thanks!
[1] https://github.com/linuxppc/wiki/wiki/Huge-pages
--
Oscar Salvador
SUSE Labs
creaming,
but he still wants to be able to trigger force_sig_mceerr().
--
Oscar Salvador
SUSE Labs
", because I want to make it clear that pte marker can used in any
> form, so itself shouldn't imply anything..
I think it would make more sense if we had a separate marker for swapin
errors?
I mean, deep down, they do not mean the same as poison, right?
Then you can choose which events get to be silent because you do not
care, and which ones need to scream loud.
I think swapin errors belong to the latter. At the very least, a heads-up
about why a process is getting killed is appreciated, IMHO.
--
Oscar Salvador
SUSE Labs
marker, so we do not have any
means to differentiate between the two of them.
Would it make sense to create yet another pte marker type to split that
up? Because when I look at VM_FAULT_HWPOISON, I get reminded of MCE
stuff, and that does not hold here.
--
Oscar Salvador
SUSE Labs
fix is to make sure we update high_memory on memory hotplug.
> This is similar to what x86 does in commit 3072e413e305 ("mm/memory_hotplug:
> introduce add_pages")
>
> Fixes: ffa0b64e3be5 ("powerpc: Fix virt_addr_valid() for 64-bit Book3E &
> 32-bit"
u resolve the conflict, or would you rather have me send
v2 with the amendment?
--
Oscar Salvador
SUSE Labs
Hi Michael,
It's done [1].
thanks!
[1]
http://patchwork.ozlabs.org/project/linuxppc-dev/patch/20220411074934.4632-1-osalva...@suse.de/
--
Oscar Salvador
SUSE Labs
the proper numa<->cpu mapping,
so cpu_to_node() in cpu_up() returns the right node and
try_online_node() can do its work.
Signed-off-by: Oscar Salvador
Reviewed-by: Srikar Dronamraju
Tested-by: Geetika Moolchandani
---
arch/powerpc/include/asm/topology.h | 8 ++--
stage in start_secondary:
start_secondary:
set_numa_node(numa_cpu_lookup_table[cpu])
But we do not really care, as we already know the
CPU <-> NUMA associativity back in find_and_online_cpu_nid(),
so let us make use of that and set the proper numa<->cpu mapping,
so cpu_to_node() in cpu_
ot a subset of the NODE domain
>
> Fixes: 09f49dca570a ("mm: handle uninitialized numa nodes gracefully")
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: linux...@kvack.org
> Cc: Michal Hocko
> Cc: Michael Ellerman
> Reported-by: Geetika Moolchandani
> Signed-off-by: Sri
> + } else
> + outer_end = begin_pfn + 1;
> }
I think there are cases we could optimize for. If the page has already been
split into pageblocks by the outer_start loop, we could skip this outer_end
logic altogether.
E.g.: an order-10 page split into two pageblocks. There's nothing else
to be done, right? We could skip this.
--
Oscar Salvador
SUSE Labs
allocate the second part.
Yeah, I see, I was a bit slow there, but I see the point now.
Thanks David
--
Oscar Salvador
SUSE Labs
em, if those
pfn turn out to be actually unmovable?
--
Oscar Salvador
SUSE Labs
> Cc: linux-arm-ker...@lists.infradead.org
> Cc: linux-ker...@vger.kernel.org
> Cc: linux-i...@vger.kernel.org
> Cc: linux-m...@vger.kernel.org
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: linux-ri...@lists.infradead.org
> Cc: linux-s...@vger.kernel.org
> Cc: linux...@vger.kernel.org
> Cc: sparcli...@vger.kernel.org
> Cc: linux...@kvack.org
> Signed-off-by: David Hildenbrand
Reviewed-by: Oscar Salvador
--
Oscar Salvador
SUSE Labs
we want to remark that somehow in the changelog,
so it is crystal clear that by the time node_dev_init() gets called,
we have already set the nodes online.
Anyway, just saying, it is fine as is.
--
Oscar Salvador
SUSE Labs
one
of the functions we used to call before expects the nodes not to be
there for some weird reason).
So, no functional change, right?
This certainly looks like an improvement.
--
Oscar Salvador
SUSE Labs
On Tue, Jan 25, 2022 at 02:19:46PM +0100, Oscar Salvador wrote:
> I know that this has been discussed previously, and the cover-letter already
> mentions it, but I think it would be great to have some sort of information
> about the problem in the commit message as well, so people do not have to go
> and find it somewhere else.
--
Oscar Salvador
SUSE Labs
On 2022-01-19 20:06, Zi Yan wrote:
From: Zi Yan
has_unmovable_pages() is only used in mm/page_isolation.c. Move it from
mm/page_alloc.c and make it static.
Signed-off-by: Zi Yan
Reviewed-by: Oscar Salvador
--
Oscar Salvador
SUSE Labs
pfn])?
If so, shouldn't the names be reversed?
--
Oscar Salvador
SUSE Labs
if (ret)
> - return ret;
> - }
> -
> - return __add_pages(nid, start_pfn, nr_pages, params);
> -}
> -
> -void arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
> -{
> - unsigned long start_pfn = start >> PAGE_SHIFT;
> - unsigned long nr_pages = size >> PAGE_SHIFT;
> -
> - __remove_pages(start_pfn, nr_pages, altmap);
> -}
> -#endif
> -
> int kernel_set_to_readonly __read_mostly;
>
> static void mark_nxdata_nx(void)
> --
> 2.31.1
>
>
--
Oscar Salvador
SUSE Labs
On Wed, Sep 29, 2021 at 04:35:59PM +0200, David Hildenbrand wrote:
> These functions no longer exist.
>
> Signed-off-by: David Hildenbrand
Reviewed-by: Oscar Salvador
> ---
> include/linux/memory_hotplug.h | 3 ---
> 1 file changed, 3 deletions(-)
>
> di
er.
>
> Signed-off-by: David Hildenbrand
Reviewed-by: Oscar Salvador
> ---
> Documentation/core-api/memory-hotplug.rst | 3 --
> .../zh_CN/core-api/memory-hotplug.rst | 4 ---
> include/linux/memory.h| 1 -
> mm/memory_hotplug.c
t, dropping the "BROKEN" dependency to
> make clear that we are not going to support it again. Next, we'll remove
> some HIGHMEM leftovers from memory hotplug code to clean up.
>
> Signed-off-by: David Hildenbrand
Reviewed-by: Oscar Salvador
> ---
> mm/Kconfi
> Signed-off-by: David Hildenbrand
Acked-by: Oscar Salvador
> ---
> arch/powerpc/include/asm/machdep.h| 2 +-
> arch/powerpc/kernel/setup_64.c| 2 +-
> arch/powerpc/platforms/powernv/setup.c| 4 ++--
> arch/powerpc/platforms/pseries/setup.c
d X86_64_ACPI_NUMA (obviously) only supports x86-64:
> config X86_64_ACPI_NUMA
> def_bool y
> depends on X86_64 && NUMA && ACPI && PCI
>
> Let's just remove the CONFIG_X86_64_ACPI_NUMA dependency, as it does no
> longer mak
select ARCH_ENABLE_HUGEPAGE_MIGRATION if x86_64 && HUGETLB_PAGE &&
> MIGRATION
> select ARCH_ENABLE_MEMORY_HOTPLUG if X86_64 || (X86_32 && HIGHMEM)
> select ARCH_ENABLE_MEMORY_HOTREMOVE if MEMORY_HOTPLUG
> + select ARCH_ENABLE_THP_MIGRATION if x86_64 && TRANSPARENT_HUGEPAGE
you need s/x86_64/X86_64/, otherwise we are left with no migration :-)
--
Oscar Salvador
SUSE L3
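The s/x86_64/X86_64/ comment matters because Kconfig symbol names are case-sensitive, and the symbol defined by arch/x86 is X86_64, so the lowercase spelling never matches and the select is silently dead. A sketch of the corrected line, assuming the surrounding Kconfig hunk quoted above:

```kconfig
select ARCH_ENABLE_THP_MIGRATION if X86_64 && TRANSPARENT_HUGEPAGE
```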
On Wed, Nov 11, 2020 at 03:53:22PM +0100, David Hildenbrand wrote:
> Suggested-by: Michal Hocko
> Cc: Michael Ellerman
> Cc: Benjamin Herrenschmidt
> Cc: Paul Mackerras
> Cc: Rashmica Gupta
> Cc: Andrew Morton
> Cc: Mike Rapoport
> Cc: Michal Hocko
> Cc: Osc
c: Paul Mackerras
> Cc: Rashmica Gupta
> Cc: Andrew Morton
> Cc: Mike Rapoport
> Cc: Michal Hocko
> Cc: Oscar Salvador
> Cc: Wei Yang
> Signed-off-by: David Hildenbrand
Reviewed-by: Oscar Salvador
--
Oscar Salvador
SUSE L3
On Wed, Nov 11, 2020 at 03:53:20PM +0100, David Hildenbrand wrote:
> The single caller (arch_remove_linear_mapping()) prints a proper warning
> when this function fails. No need to eventually crash the kernel - let's
> drop this WARN_ON.
>
> Suggested-by: Oscar Salvador
&g
mutex_lock(&linear_mapping_mutex);
> ret = remove_section_mapping(start, start + size);
> + mutex_unlock(&linear_mapping_mutex);
> WARN_ON_ONCE(ret);
My expertise in this area is low, so bear with me.
Why do we not need to protect flush_dcache_range_chunked and
vm_un
> Cc: Benjamin Herrenschmidt
> Cc: Paul Mackerras
> Cc: Rashmica Gupta
> Cc: Andrew Morton
> Cc: Mike Rapoport
> Cc: Michal Hocko
> Cc: Oscar Salvador
> Cc: Wei Yang
> Signed-off-by: David Hildenbrand
Reviewed-by: Oscar Salvador
--
Oscar Salvador
SUSE L3
kerras
> Cc: Rashmica Gupta
> Signed-off-by: David Hildenbrand
Reviewed-by: Oscar Salvador
--
Oscar Salvador
SUSE L3
ul Mackerras
> Cc: Rashmica Gupta
> Cc: Andrew Morton
> Cc: Mike Rapoport
> Cc: Michal Hocko
> Cc: Oscar Salvador
> Cc: Wei Yang
> Signed-off-by: David Hildenbrand
Reviewed-by: Oscar Salvador
--
Oscar Salvador
SUSE L3
on
> Cc: Mike Rapoport
> Cc: Michal Hocko
> Cc: Oscar Salvador
> Cc: Wei Yang
> Signed-off-by: David Hildenbrand
Reviewed-by: Oscar Salvador
> ---
> arch/powerpc/mm/mem.c | 5 -
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/arch/po
; > > error.
> >
> > Dumb question, but should not we do this for other arches as well?
>
> It seems arm64 and s390 already do that.
> x86 could have its arch_add_memory() improved though :)
Right, I only stared at x86 and saw it did not have it.
I guess we want to
ough the patches that did
lack review (#6-#10).
I hope this helps in moving the series forward, although Michal's review
would be great as well.
--
Oscar Salvador
SUSE L3
On Sun, Oct 06, 2019 at 10:56:46AM +0200, David Hildenbrand wrote:
> Let's drop the basically unused section stuff and simplify.
>
> Also, let's use a shorter variant to calculate the number of pages to
> the next section boundary.
>
> Cc: Andrew Morton
> Cc: Osc
On Sun, Oct 06, 2019 at 10:56:45AM +0200, David Hildenbrand wrote:
> Get rid of the unnecessary local variables.
>
> Cc: Andrew Morton
> Cc: Oscar Salvador
> Cc: David Hildenbrand
> Cc: Michal Hocko
> Cc: Pavel Tatashin
> Cc: Dan Williams
> Cc: Wei Yang
> Si
On Sun, Oct 06, 2019 at 10:56:44AM +0200, David Hildenbrand wrote:
> If we have holes, the holes will automatically get detected and removed
> once we remove the next bigger/smaller section. The extra checks can
> go.
>
> Cc: Andrew Morton
> Cc: Oscar Salvador
> Cc: Mich
On Sun, Oct 06, 2019 at 10:56:43AM +0200, David Hildenbrand wrote:
> With shrink_pgdat_span() out of the way, we now always have a valid
> zone.
>
> Cc: Andrew Morton
> Cc: Oscar Salvador
> Cc: David Hildenbrand
> Cc: Michal Hocko
> Cc: Pavel Tatashin
> Cc: D
at it, calculate the pfn in memunmap_pages() only once.
>
> Cc: Andrew Morton
> Cc: David Hildenbrand
> Cc: Oscar Salvador
> Cc: Michal Hocko
> Cc: Pavel Tatashin
> Cc: Dan Williams
> Signed-off-by: David Hildenbrand
Looks good to me, it is fine as long as we do not
Anyway, for this one:
Reviewed-by: Oscar Salvador
off-topic: I __think__ we really need to trim the CC list.
> ---
> arch/arm64/mm/mmu.c| 4 +---
> arch/ia64/mm/init.c| 4 +---
> arch/powerpc/mm/mem.c | 3 +--
> arch/s390/mm/init.c
minder
--
Oscar Salvador
SUSE L3