mmu_linear_psize is set only once at boot on 64e, is not necessarily the correct size of the linear map pages, and is never used anywhere. Remove it.
Signed-off-by: Nicholas Piggin <npig...@gmail.com>
---
 arch/powerpc/mm/nohash/tlb.c | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/arch/powerpc/mm/nohash/tlb.c b/arch/powerpc/mm/nohash/tlb.c
index 5872f69141d5..8c1523ae7f7f 100644
--- a/arch/powerpc/mm/nohash/tlb.c
+++ b/arch/powerpc/mm/nohash/tlb.c
@@ -150,7 +150,6 @@ static inline int mmu_get_tsize(int psize)
  */
 #ifdef CONFIG_PPC64
-int mmu_linear_psize;	/* Page size used for the linear mapping */
 int mmu_pte_psize;	/* Page size used for PTE pages */
 int mmu_vmemmap_psize;	/* Page size used for the virtual mem map */
 int book3e_htw_mode;	/* HW tablewalk? Value is PPC_HTW_* */
@@ -655,14 +654,6 @@ static void early_init_this_mmu(void)
 
 static void __init early_init_mmu_global(void)
 {
-	/* XXX This will have to be decided at runtime, but right
-	 * now our boot and TLB miss code hard wires it. Ideally
-	 * we should find out a suitable page size and patch the
-	 * TLB miss code (either that or use the PACA to store
-	 * the value we want)
-	 */
-	mmu_linear_psize = MMU_PAGE_1G;
-
 	/* XXX This should be decided at runtime based on supported
 	 * page sizes in the TLB, but for now let's assume 16M is
 	 * always there and a good fit (which it probably is)
-- 
2.23.0
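
(Not part of the patch, just a quick sanity check one might run from a kernel
tree to confirm the nohash copy of the symbol really has no remaining users;
the search is scoped to the nohash directory because the hash MMU side keeps
its own, separate mmu_linear_psize which this patch does not touch.)

  $ git grep -wn mmu_linear_psize arch/powerpc/mm/nohash/

Before the patch this should show only the definition and the single boot-time
assignment removed above; with the patch applied it should come back empty.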