Hi!
On Sat, May 09, 2020 at 03:03:40PM +1000, Paul Mackerras wrote:
> + __asm__ volatile("mtmsrd %3,0; ldcix %0,%1,%2; mtmsrd %4,0"
> + : "=r" (val) : "b" (potato_uart_base), "r" (offset),
> +"r" (msr & ~MSR_DR), "r" (msr));
That should be "=&r" (val).
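A minimal sketch of the accessor with the suggested constraint applied (the function name and the mfmsr() usage are assumptions based on the quoted hunk, not the actual patch):

static u64 potato_uart_read(unsigned int offset)
{
	u64 val;
	unsigned long msr = mfmsr();

	/* "=&r" (earlyclobber) keeps val out of the input registers:
	 * val is written by ldcix before the saved msr in %4 is
	 * consumed by the final mtmsrd, so they must not share a
	 * register. */
	__asm__ volatile("mtmsrd %3,0; ldcix %0,%1,%2; mtmsrd %4,0"
			 : "=&r" (val)
			 : "b" (potato_uart_base), "r" (offset),
			   "r" (msr & ~MSR_DR), "r" (msr));
	return val;
}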
On 2020-05-09 23:48, Greg KH wrote:
On Sat, May 09, 2020 at 06:30:56PM -0700, rana...@codeaurora.org wrote:
On 2020-05-06 02:48, Greg KH wrote:
> On Mon, Apr 27, 2020 at 08:26:01PM -0700, Raghavendra Rao Ananta wrote:
> > Potentially, hvc_open() can be called in parallel when two tasks call
> >
On 2020-05-11 00:23, rana...@codeaurora.org wrote:
On 2020-05-09 23:48, Greg KH wrote:
On Sat, May 09, 2020 at 06:30:56PM -0700, rana...@codeaurora.org
wrote:
On 2020-05-06 02:48, Greg KH wrote:
> On Mon, Apr 27, 2020 at 08:26:01PM -0700, Raghavendra Rao Ananta wrote:
> > Potentially, hvc_open(
On Sun, May 10, 2020 at 9:57 AM Christoph Hellwig wrote:
> The function currently known as flush_icache_user_range only operates
> on a single page. Rename it to flush_icache_user_page as we'll need
> the name flush_icache_user_range for something else soon.
>
> Signed-off-by: Christoph Hellwig
On Sun, May 10, 2020 at 9:57 AM Christoph Hellwig wrote:
> Rename the current flush_icache_range to flush_icache_user_range as
> per commit ae92ef8a4424 ("PATCH] flush icache in correct context") there
> seems to be an assumption that it operates on user addresses. Add a
> flush_icache_range arou
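A hedged sketch of the shape this rename implies (names per the commit message; the wrapper body is an assumption, not the patch itself):

/* old body, now explicitly for user addresses */
void flush_icache_user_range(unsigned long start, unsigned long end);

/* new kernel-address entry point wrapping it */
static inline void flush_icache_range(unsigned long start, unsigned long end)
{
	flush_icache_user_range(start, end);
}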
On Mon, May 11, 2020 at 12:23:58AM -0700, rana...@codeaurora.org wrote:
> On 2020-05-09 23:48, Greg KH wrote:
> > On Sat, May 09, 2020 at 06:30:56PM -0700, rana...@codeaurora.org wrote:
> > > On 2020-05-06 02:48, Greg KH wrote:
> > > > On Mon, Apr 27, 2020 at 08:26:01PM -0700, Raghavendra Rao Anant
On Sun, May 10, 2020 at 9:57 AM Christoph Hellwig wrote:
>
> flush_icache_range generally operates on kernel addresses, but for some
> reason m68k needed a set_fs override. Move that into the m68k code
> instead of keeping it in the module loader.
>
> Signed-off-by: Christoph Hellwig
Reviewed-by
On Mon, May 11, 2020 at 12:34:44AM -0700, rana...@codeaurora.org wrote:
> On 2020-05-11 00:23, rana...@codeaurora.org wrote:
> > On 2020-05-09 23:48, Greg KH wrote:
> > > On Sat, May 09, 2020 at 06:30:56PM -0700, rana...@codeaurora.org
> > > wrote:
> > > > On 2020-05-06 02:48, Greg KH wrote:
> > >
Hi Christoph,
On Sun, May 10, 2020 at 9:55 AM Christoph Hellwig wrote:
> none of which really are used by a typical MMU enabled kernel, as a.out can
> only be built for alpha and m68k to start with.
Quoting myself:
"I think it's safe to assume no one still runs a.out binaries on m68k."
http://lo
[+James and Catalin]
On Sun, May 10, 2020 at 09:54:41AM +0200, Christoph Hellwig wrote:
> The second argument is the end "pointer", not the length.
>
> Signed-off-by: Christoph Hellwig
> ---
> arch/arm64/kernel/machine_kexec.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm64
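A minimal sketch of the bug class being fixed (identifier names are placeholders, not taken from the patch):

static void flush_reboot_code(unsigned long dst, size_t len)
{
	/* wrong: flush_icache_range(dst, len);
	 * the second argument is an end address, not a length */
	flush_icache_range(dst, dst + len);
}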
allmodconfig
parisc defconfig
powerpc allyesconfig
powerpc allmodconfig
powerpc defconfig
powerpc rhel-kconfig
i386 allnoconfig
i386 randconfig-a006-20200511
i386 randconfig-a005-20200511
i386 randconfig-a003-20200511
i386 randconfig-a001-20200511
i386 randconfig-a004-20200511
i386 randconfig-a002-20200508
x86_64 randconfig-a006-20200508
x86_64 randconfig-a004-20200508
x86_64 randconfig-a002-20200508
i386
This option increases the number of SLB misses by limiting the number of
kernel SLB entries and by increasing the flushing of cached lookaside
information. This helps stress-test difficult-to-hit paths in the kernel.
Signed-off-by: Nicholas Piggin
---
v3:
- Fix compile error found by kbuild test robot
.../admin-gui
This option increases the number of hash misses by limiting the number of
kernel HPT entries, by accessing the address immediately after installing
the PTE, then removing it again (except for CI entries, which must not be
accessed; these are removed on the next hash fault).
This helps str
On 5/8/20 11:44 AM, Paolo Bonzini wrote:
> So in general I'd say the sources/values model holds up. We certainly
> want to:
>
> - switch immediately to callbacks instead of the type constants (so that
> core statsfs code only does signed/unsigned)
>
> - add a field to distinguish cumulative an
Nicholas Piggin writes:
> Excerpts from kbuild test robot's message of May 9, 2020 1:13 pm:
>> Hi Nicholas,
>>
>> I love your patch! Yet something to improve:
>
> ...
>
>> 1419 #if defined(CONFIG_4xx) || defined(CONFIG_BOOKE)
>> 1420 pr_cont("DEAR: "REG" ESR: "REG"
On Fri, May 8, 2020 at 12:36 AM wrote:
>
> From: Wen Xiong
>
> Several device drivers hit EEH (Extended Error Handling) when triggering
> kdump on Pseries PowerVM. This patch implements a reset of the PHBs
> in the PCI generic code. A PHB reset stops all PCI transactions from the previous
> kernel. We have
Returning from an interrupt or syscall to a signal handler currently
begins execution directly at the handler's entry point, with LR set to
the address of the sigreturn trampoline. When the signal handler
function returns, it runs the trampoline. It looks like this:
# interrupt at user address
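The described flow can be observed from user space with a plain handler; a minimal sketch (standard POSIX code, nothing here is from the patch):

#include <signal.h>
#include <unistd.h>

static void handler(int sig)
{
	/* Entered directly at the function entry point; the kernel has
	 * set LR to the sigreturn trampoline, so the final blr of this
	 * function runs the trampoline, which issues rt_sigreturn to
	 * restore the interrupted context. */
	write(1, "in handler\n", 11);
}

int main(void)
{
	struct sigaction sa = { 0 };

	sa.sa_handler = handler;
	sigaction(SIGUSR1, &sa, NULL);
	raise(SIGUSR1);		/* interrupt at a user address */
	return 0;
}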
On Mon, May 11, 2020 at 08:51:15AM +0100, Will Deacon wrote:
> On Sun, May 10, 2020 at 09:54:41AM +0200, Christoph Hellwig wrote:
> > The second argument is the end "pointer", not the length.
> >
> > Signed-off-by: Christoph Hellwig
> > ---
> > arch/arm64/kernel/machine_kexec.c | 1 +
> > 1 file
Qian Cai writes:
> kvmppc_pmd_alloc() and kvmppc_pte_alloc() allocate some memory but then
> pud_populate() and pmd_populate() will use __pa() to reference the newly
> allocated memory. The same is in xive_native_provision_pages().
>
> Since kmemleak is unable to track the physical memory resultin
Doing KASAN page allocation in MMU_init is too early: the kernel doesn't
yet have access to the entire memory space, and memblock_alloc() fails
when the kernel is a bit big.
Do it from kasan_init() instead.
Fixes: 2edb16efc899 ("powerpc/32: Add KASAN support")
Cc: sta...@vger.kernel.org
Signed-off-by
Commit 45ff3c559585 ("powerpc/kasan: Fix parallel loading of
modules.") added spinlocks to manage parallel module loading.
Since then commit 47febbeeec44 ("powerpc/32: Force KASAN_VMALLOC for
modules") converted the module loading to KASAN_VMALLOC.
The spinlocking has then become unneeded and ca
In case (k_start & PAGE_MASK) doesn't equal k_start, 'va' will never be
NULL although 'block' is NULL.
Check the return of memblock_alloc() directly instead of
the resulting address in the loop.
Fixes: 509cd3f2b473 ("powerpc/32: Simplify KASAN init")
Signed-off-by: Christophe Leroy
---
arch/po
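A minimal sketch of the fix's shape (the function name and the map_kernel_page() call stand in for the real loop body; variable names follow the message):

static int __init kasan_map_block(unsigned long k_start, unsigned long k_end)
{
	void *block = memblock_alloc(k_end - k_start, PAGE_SIZE);
	unsigned long k_cur;

	if (!block)		/* test the allocation itself... */
		return -ENOMEM;

	for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
		/* ...not va, which only equals NULL when block is NULL
		 * and k_start is page aligned */
		void *va = block + k_cur - k_start;

		map_kernel_page(k_cur, __pa(va), PAGE_KERNEL);
	}
	return 0;
}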
At the time being, KASAN_SHADOW_END is 0x100000000, which
is 0 in 32-bit representation.
This leads to a couple of issues:
- kasan_remap_early_shadow_ro() does nothing because the comparison
k_cur < k_end is always false.
- In ptdump, address comparison for markers display fails and the
marker's
The main purpose of this big series is to:
- reorganise huge page handling to avoid using mm_slices.
- use huge pages to map kernel memory on the 8xx.
The 8xx supports 4 page sizes: 4k, 16k, 512k and 8M.
It uses two-level page tables: the PGD has 1024 entries, each entry
covering a 4M address space. Th
Display the size of areas mapped with BATs.
For that, the size display for pages is refactored.
Signed-off-by: Christophe Leroy
---
v2: Add missing include of linux/seq_file.h (Thanks to kbuild test robot)
---
arch/powerpc/mm/ptdump/bats.c | 4
arch/powerpc/mm/ptdump/ptdump.c | 23 +++
Display BAT flags the same way as page flags: rwx and wimg
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/ptdump/bats.c | 37 ++-
1 file changed, 15 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc/mm/ptdump/bats.c b/arch/powerpc/mm/ptdump/bats.c
ind
Reorder flags in a more logical way:
- Page size (huge) first
- User
- RWX
- Present
- WIMG
- Special
- Dirty and Accessed
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/ptdump/8xx.c| 30 +++---
arch/powerpc/mm/ptdump/shared.c | 30 +++---
kasan_remap_early_shadow_ro() and kasan_unmap_early_shadow_vmalloc()
are both updating the early shadow mapping: the first one sets
the mapping read-only while the other clears the mapping.
Refactor and create kasan_update_early_region()
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/kasan
In order to have all flags fit on an 80-char-wide screen,
reduce the flags to 1 char (2 where ambiguous).
No cache is 'i'
User is 'ur' (Supervisor would be sr)
Shared (for 8xx) becomes 'sh' (it was 'user' when not shared but
that was ambiguous because that's not entirely right)
Signed-off-by: Chr
For platforms using shared.c (4xx, Book3e, Book3s/32),
also handle the _PAGE_COHERENT flag which corresponds to the
M bit of the WIMG flags.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/ptdump/shared.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/arch/powerpc/mm/ptdump/shared.c
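A hedged sketch of such an entry, following the flag_array pattern of the powerpc ptdump code (the display string is an assumption; a later patch in this series shortens the flags to one character):

{
	.mask	= _PAGE_COHERENT,
	.val	= _PAGE_COHERENT,
	.set	= "m",
	.clear	= " ",
},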
In order to properly display information regardless of the page size,
it is necessary to take the real page size into account.
Signed-off-by: Christophe Leroy
Fixes: cabe8138b23c ("powerpc: dump as a single line areas mapping a single
physical page.")
Cc: sta...@vger.kernel.org
---
v3: Fixed sizes w
In order to allow sub-arches to allocate KASAN regions using optimised
methods (Huge pages on 8xx, BATs on BOOK3S, ...), declare
kasan_init_region() weak.
Also make kasan_init_shadow_page_tables() accessible from outside,
so that it can be called from the specific kasan_init_region()
functions if nee
The 8xx is about to map kernel linear space and IMMR using huge
pages.
In order to display those pages properly, ptdump needs to handle
hugepd tables at PGD level.
For the time being do it only at PGD level. Further patches may
add handling of hugepd tables at lower level for other platforms
when
Mapping RO data as ROX is not an issue since that data
cannot be modified to introduce an exploit.
PPC64 accepts having RO data mapped ROX, as a trade-off
between kernel size and strictness of protection.
On PPC32, kernel size is even more critical as the amount of
memory is usually small.
Dependin
Allocate static page tables for the fixmap area. This allows
setting mappings through page tables before memblock is ready.
That's needed to use early_ioremap() early and to use standard
page mappings with fixmap.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/fixmap.h | 4
a
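A short usage sketch of what this enables (the probe function and address are placeholders; early_ioremap()/early_iounmap() are the generic helpers):

void __init early_peek(phys_addr_t pa)
{
	void __iomem *regs = early_ioremap(pa, SZ_4K);

	if (!regs)
		return;
	/* device registers are readable before paging_init() */
	(void)readl(regs);
	early_iounmap(regs, SZ_4K);
}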
Setting init mem to NX shall depend on sinittext being mapped by
block, not on stext being mapped by block.
Setting text and rodata to RO shall depend on stext being mapped by
block, not on sinittext being mapped by block.
Fixes: 63b2bc619565 ("powerpc/mm/32s: Use BATs for STRICT_KERNEL_RWX")
Cc:
Only 40x still uses PTE_ATOMIC_UPDATES.
40x cannot select CONFIG_PTE_64BIT.
Drop handling of PTE_ATOMIC_UPDATES:
- In nohash/64
- In nohash/32 for CONFIG_PTE_64BIT
Keep PTE_ATOMIC_UPDATES only for nohash/32 for !CONFIG_PTE_64BIT
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/
When CONFIG_PTE_64BIT is set, pte_update() operates on
'unsigned long long'
When CONFIG_PTE_64BIT is not set, pte_update() operates on
'unsigned long'
In asm/page.h, we have pte_basic_t which is 'unsigned long long'
when CONFIG_PTE_64BIT is set and 'unsigned long' otherwise.
Refactor pte_update()
When CONFIG_PTE_64BIT is set, pte_update() operates on
'unsigned long long'
When CONFIG_PTE_64BIT is not set, pte_update() operates on
'unsigned long'
In asm/page.h, we have pte_basic_t which is 'unsigned long long'
when CONFIG_PTE_64BIT is set and 'unsigned long' otherwise.
Refactor pte_update()
On PPC32, __ptep_test_and_clear_young() takes the mm->context.id
In preparation for standardising pte_update() params between PPC32 and
PPC64, __ptep_test_and_clear_young() needs mm instead of mm->context.id.
Replace context param by mm.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/as
PPC64 takes 3 additional parameters compared to PPC32:
- mm
- address
- huge
These 3 parameters will be needed in order to perform different
actions depending on the page size on the 8xx.
Make pte_update() prototype identical for PPC32 and PPC64.
This allows dropping an #ifdef in huge_ptep_get_an
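A hedged sketch of the unified shape (the body is an illustration, not the patch; on PPC32 the extra arguments can simply be ignored):

static inline pte_basic_t pte_update(struct mm_struct *mm, unsigned long addr,
				     pte_t *p, unsigned long clr,
				     unsigned long set, int huge)
{
	pte_basic_t old = pte_val(*p);

	/* mm, addr and huge let the 8xx pick the page-size handling */
	*p = __pte((old & ~(pte_basic_t)clr) | set);
	return old;
}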
pte_update() is a bit special for the 8xx. At the time
being, that's an #ifdef inside the nohash/32 pte_update().
As we are going to make it even more special in the coming
patches, create a dedicated version for pte_update() for 8xx.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm
Commit 55c8fc3f4930 ("powerpc/8xx: reintroduce 16K pages with HW
assistance") redefined pte_t as a struct of 4 pte_basic_t, because
in 16K pages mode there are four identical entries in the page table.
But hugepd entries for 8M pages require only one entry of size
pte_basic_t. So there is no point
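The layout in question, as introduced by the quoted commit (16k mode packs four identical hardware entries into one pte_t, while an 8M hugepd slot needs a single pte_basic_t):

typedef struct { pte_basic_t pte, pte1, pte2, pte3; } pte_t;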
CONFIG_8xx_COPYBACK was there to help disable copyback cache mode
for debugging hardware. But nobody will design new boards with 8xx now.
All 8xx platforms select it, so make it the default and remove
the option.
Also remove the Mx_RESETVAL values which are pretty useless and hide
the real value
Prepare ITLB handler to handle _PAGE_HUGE when CONFIG_HUGETLBFS
is enabled. This means that the L1 entry has to be kept in r11
until L2 entry is read, in order to insert _PAGE_HUGE into it.
Also move pgd_offset helpers before pte_update() as they
will be needed there in next patch.
Signed-off-by:
At the time being, 512k huge pages are handled through hugepd page
tables. The PMD entry is flagged as a hugepd pointer and it
means that only 512k hugepages can be managed in that 4M block.
However, the hugepd table has the same size as a normal page
table, and 512k entries can therefore be nested
Pinned TLBs cannot be modified when the MMU is enabled.
Create a function to rewrite the pinned TLB entries with MMU off.
To set pinned TLB, we have to turn off MMU, disable pinning,
do a TLB flush (either with tlbie or tlbia), then reprogram
the TLB entries, enable pinning and turn the MMU back on.
If us
PPC_PIN_TLB options are dedicated to the 8xx, move them into
the 8xx Kconfig.
While we are at it, add some text to explain what it does.
Signed-off-by: Christophe Leroy
---
arch/powerpc/Kconfig | 20 ---
arch/powerpc/platforms/8xx/Kconfig | 41 +
As the 8xx now manages 512k pages in standard page tables,
it doesn't need CONFIG_PPC_MM_SLICES anymore.
Don't select it anymore and remove all related code.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 64
arch/powerpc/include/asm/noha
Only early debug requires IMMR to be mapped early.
No need to set it up and pin it in assembly. Map it
through page tables at udbg init when necessary.
If CONFIG_PIN_TLB_IMMR is selected, pin it once we
don't need the 32 Mb pinned RAM anymore.
Signed-off-by: Christophe Leroy
---
v2: Disable TLB
At startup, map 32 Mbytes of memory through 4 pages of 8M,
and pin them unconditionally. They need to be pinned because
KASAN is using page tables early and the TLBs might be
dynamically replaced otherwise.
Remove RSV4I flag after installing mappings unless
CONFIG_PIN_TLB_ is selected.
Signed-off-by: Christophe Leroy
512k pages are now standard pages, so only 8M pages
are hugepte.
No more handling of normal page tables through hugepd allocation
and freeing, and hugepte helpers can also be simplified.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h | 7 +++
arch/powe
Up to now, linear and IMMR mappings are managed via huge TLB entries
through specific code directly in TLB miss handlers. This implies
some patching of the TLB miss handlers at startup, and a lot of
dedicated code.
Remove all this specific dedicated code.
For now we are back to normal handling vi
The code to setup linear and IMMR mapping via huge TLB entries is
not called anymore. Remove it.
Also remove the handling of removed code exits in the perf driver.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 8 +-
arch/powerpc/kernel/head_8xx.S
Now that space has been freed next to the DTLB miss handler,
its associated DTLB perf handling can be brought back in
the same place.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/head_8xx.S | 23 +++
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a
Similar to PPC64, accept mapping RO data as ROX as a trade-off
between security and memory usage.
Having RO data executable is not a high risk as RO data can't be
modified to forge an exploit.
Signed-off-by: Christophe Leroy
---
arch/powerpc/Kconfig | 26
Now that linear and IMMR dedicated TLB handling is gone, kernel
boundary address comparison is similar in ITLB miss handler and
in DTLB miss handler.
Create a macro named compare_to_kernel_boundary.
When TASK_SIZE is strictly below 0x80000000 and PAGE_OFFSET is
above 0x80000000, it is enough to c
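A hedged C rendering of that comparison (the real macro is assembly; the constants are from the message):

/* With TASK_SIZE <= 0x80000000 <= PAGE_OFFSET, the top address bit
 * alone separates user addresses from kernel addresses. */
static inline bool addr_is_kernel(unsigned long addr)
{
	return addr >= 0x80000000;
}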
Add a function to early map kernel memory using huge pages.
For 512k pages, just use standard page tables and map them in using 512k
pages.
For 8M pages, create a hugepd table and populate the two PGD
entries with it.
This function can only be used to create page tables at startup. Once
the regular SL
Map the IMMR area with a single 512k huge page.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/nohash/8xx.c | 8 ++--
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/mm/nohash/8xx.c b/arch/powerpc/mm/nohash/8xx.c
index 570ab2114a73..fb31a0c1c2a4 100644
--- a/a
Map linear memory space with 512k and 8M pages whenever
possible.
Three mappings are performed:
- One for kernel text
- One for RO data
- One for the rest
Separating the mappings is done to be able to update the
protection later when using STRICT_KERNEL_RWX.
The ITLB miss handler now needs to als
Pinned TLBs are 8M. Now that there is no strict boundary anymore
between text and RO data, it is possible to use 8M pinned executable
TLB that covers both text and RO data.
When PIN_TLB_DATA or PIN_TLB_TEXT is selected, enforce 8M RW data
alignment and allow STRICT_KERNEL_RWX.
Signed-off-by: Chris
DEBUG_PAGEALLOC only manages RW data.
Text and RO data can still be mapped with hugepages and pinned TLB.
In order to map with hugepages, also enforce a 512kB data alignment
minimum. That's a trade-off between size and speed, taking into
account that DEBUG_PAGEALLOC is a debug option. Anyway the a
Implement a kasan_init_region() dedicated to 8xx that
allocates KASAN regions using huge pages.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/kasan/8xx.c| 74 ++
arch/powerpc/mm/kasan/Makefile | 1 +
2 files changed, 75 insertions(+)
create mode 100644
DEBUG_PAGEALLOC only manages RW data.
Text and RO data can still be mapped with BATs.
In order to map with BATs, also enforce data alignment. Set
by default to 256M, which is a good compromise for keeping
enough BATs available for KASAN and IMMR as well.
Signed-off-by: Christophe Leroy
---
arch/powerpc/Kcon
Implement a kasan_init_region() dedicated to book3s/32 that
allocates KASAN regions using BATs.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/kasan.h | 1 +
arch/powerpc/mm/kasan/Makefile| 1 +
arch/powerpc/mm/kasan/book3s_32.c | 57 +++
Srikar Dronamraju writes:
> * Christopher Lameter [2020-05-02 22:55:16]:
>
>> On Fri, 1 May 2020, Srikar Dronamraju wrote:
>>
>> > - for_each_present_cpu(cpu)
>> > - numa_setup_cpu(cpu);
>> > + for_each_possible_cpu(cpu) {
>> > + /*
>> > + * Powerpc with CONFIG_NUMA
On Mon, May 11, 2020 at 09:15:55PM +1000, Michael Ellerman wrote:
> Qian Cai writes:
> > kvmppc_pmd_alloc() and kvmppc_pte_alloc() allocate some memory but then
> > pud_populate() and pmd_populate() will use __pa() to reference the newly
> > allocated memory. The same is in xive_native_provision_p
Start using the dcbstps; phwsync; sequence for flushing persistent memory ranges.
The new instructions are implemented as a variant of dcbf and hwsync and on
POWER9 they will be executed as those instructions.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/cacheflush.h | 31
POWER10 introduces two new variants of dcbf instructions (dcbstps and dcbfps)
that can be used to write modified locations back to persistent storage.
Additionally, POWER10 also introduces phwsync and plwsync which can be used
to establish order of these writes to persistent storage.
This patch ex
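A minimal sketch of the flush sequence (assumes an assembler that accepts the POWER10 mnemonics; the actual patch may emit the opcodes via macros):

static inline void pmem_flush_range(unsigned long start, unsigned long stop)
{
	unsigned long bytes = l1_dcache_bytes();
	unsigned long addr;

	/* push each modified line toward persistent storage... */
	for (addr = start & ~(bytes - 1); addr < stop; addr += bytes)
		asm volatile("dcbstps 0, %0" : : "r" (addr) : "memory");
	/* ...then order the writes to persistence */
	asm volatile("phwsync" : : : "memory");
}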
randconfig-a005-20200511
x86_64 randconfig-a003-20200511
x86_64 randconfig-a006-20200511
x86_64 randconfig-a004-20200511
x86_64 randconfig-a001-20200511
x86_64 randconfig-a002-20200511
i386 randconfig-a005-20200509
> On May 11, 2020, at 7:15 AM, Michael Ellerman wrote:
>
> There is kmemleak_alloc_phys(), which according to the docs can be used
> for tracking a phys address.
>
> Did you try that?
Catalin, feel free to give your thoughts here.
My understanding is that it seems the doc is a bit misleadin
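The helper Michael points at is a real kmemleak API; a minimal usage sketch for the pattern under discussion (the wrapper name is hypothetical; signature per <linux/kmemleak.h> of that era):

#include <linux/gfp.h>
#include <linux/kmemleak.h>

static void track_pgtable_page(void *va, size_t size)
{
	/* Register the range by physical address, so kmemleak can still
	 * account for it after pud_populate()/pmd_populate() store only
	 * __pa(va). min_count = 0: never report the object itself. */
	kmemleak_alloc_phys(__pa(va), size, 0, GFP_KERNEL);
}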
From: Nicholas Piggin
This option increases the number of SLB misses by limiting the number
of kernel SLB entries and by increasing the flushing of cached lookaside
information. This helps stress-test difficult-to-hit paths in the
kernel.
Reported-by: kbuild test robot
Signed-off-by: Nicholas Piggin
From: Nicholas Piggin
This option increases the number of hash misses by limiting the number
of kernel HPT entries, by accessing the address immediately after
installing the PTE, then removing it again (except for CI entries,
which must not be accessed; these are removed on the next hash
Hi Marek,
On Mon, May 11, 2020 at 08:36:41AM +0200, Marek Szyprowski wrote:
> Hi Mike,
>
> On 08.05.2020 19:42, Mike Rapoport wrote:
> > On Fri, May 08, 2020 at 08:53:27AM +0200, Marek Szyprowski wrote:
> >> On 07.05.2020 18:11, Mike Rapoport wrote:
> >>> On Thu, May 07, 2020 at 02:16:56PM +0200,
On Mon, May 11, 2020 at 09:40:39AM +0200, Geert Uytterhoeven wrote:
> On Sun, May 10, 2020 at 9:57 AM Christoph Hellwig wrote:
> >
> > flush_icache_range generally operates on kernel addresses, but for some
> > reason m68k needed a set_fs override. Move that into the m68k code
> > instead of keepi
On Mon, May 11, 2020 at 09:46:17AM +0200, Geert Uytterhoeven wrote:
> Hi Christoph,
>
> On Sun, May 10, 2020 at 9:55 AM Christoph Hellwig wrote:
> > none of which really are used by a typical MMU enabled kernel, as a.out can
> > only be built for alpha and m68k to start with.
>
> Quoting myself:
On Mon, May 11, 2020 at 12:00:14PM +0100, Catalin Marinas wrote:
> Anyway, I think Christoph's patch needs to go in with a fixes tag:
>
> Fixes: d28f6df1305a ("arm64/kexec: Add core kexec support")
> Cc: # 4.8.x-
>
> and we'll change these functions/helpers going forward for arm64.
>
> Happy to
Hi Christoph,
On Mon, May 11, 2020 at 5:11 PM Christoph Hellwig wrote:
> On Mon, May 11, 2020 at 09:40:39AM +0200, Geert Uytterhoeven wrote:
> > On Sun, May 10, 2020 at 9:57 AM Christoph Hellwig wrote:
> > >
> > > flush_icache_range generally operates on kernel addresses, but for some
> > > reas
Hi Christoph,
On Mon, May 11, 2020 at 5:14 PM Christoph Hellwig wrote:
> On Mon, May 11, 2020 at 09:46:17AM +0200, Geert Uytterhoeven wrote:
> > On Sun, May 10, 2020 at 9:55 AM Christoph Hellwig wrote:
> > > none of which really are used by a typical MMU enabled kernel, as a.out
> > > can
> > >
On Sun, May 10, 2020 at 09:54:42AM +0200, Christoph Hellwig wrote:
> __flush_icache_user_range is not used in modular code, so unexport it.
>
> Signed-off-by: Christoph Hellwig
> ---
> arch/mips/mm/cache.c | 1 -
> 1 file changed, 1 deletion(-)
applied to mips-next.
Thomas.
--
Crap can work.
On Mon, May 11, 2020 at 05:25:45PM +0200, Geert Uytterhoeven wrote:
> > Do you want to drop the:
> >
> > select HAVE_AOUT if MMU
> >
> > for m68k then?
>
> If that helps to reduce maintenance, it's fine for me.
> That leaves alpha as the sole user?
Yes. And for alpha it isn't classic Linux a
On Mon, May 11, 2020 at 05:24:30PM +0200, Geert Uytterhoeven wrote:
> > Btw, do you know what part of flush_icache_range relied on set_fs?
> > Do any of the m68k maintainers have an idea how to handle that in
> > a nicer way when we can split the implementations?
>
> arch/m68k/mm/cache.c:virt_to_p
Hi Jonathan, I think the remaining sticky point is this one:
On 11/05/20 19:02, Jonathan Adams wrote:
> I think I'd characterize this slightly differently; we have a set of
> statistics which are essentially "in parallel":
>
> - a variety of statistics, N CPUs they're available for, or
> - a
* David Hildenbrand [2020-05-08 15:42:12]:
Hi David,
Thanks for the steps to try out.
> >
> > #! /bin/bash
> > sudo x86_64-softmmu/qemu-system-x86_64 \
> > --enable-kvm \
> > -m 4G,maxmem=20G,slots=2 \
> > -smp sockets=2,cores=2 \
> > -numa node,nodeid=0,cpus=0-1,mem=4G -numa no
On 5/10/20 8:14 PM, Anshuman Khandual wrote:
> On 05/09/2020 03:52 AM, Mike Kravetz wrote:
>> On 5/7/20 8:07 PM, Anshuman Khandual wrote:
>>
>> Did you try building without CONFIG_HUGETLB_PAGE defined? I'm guessing
>
> Yes I did for multiple platforms (s390, arm64, ia64, x86, powerpc etc).
>
>>
Hello,
Kajol Jain writes:
> Function 'read_sys_info_pseries()' is added to get system parameter
> values like number of sockets and chips per socket,
> and it gets these details via rtas_call with token
> "PROCESSOR_MODULE_INFO".
>
> In case an LPAR migrates from one system to another, the system
> parame
On 5/7/20 8:07 PM, Anshuman Khandual wrote:
> There are multiple similar definitions for arch_clear_hugepage_flags() on
> various platforms. Let's just add its generic fallback definition for
> platforms that do not override. This helps reduce code duplication.
>
> Cc: Russell King
> Cc: Catalin M
Hi,
Kajol Jain writes:
> diff --git a/arch/powerpc/perf/hv-24x7.c b/arch/powerpc/perf/hv-24x7.c
> index 48e8f4b17b91..8cf242aad98f 100644
> --- a/arch/powerpc/perf/hv-24x7.c
> +++ b/arch/powerpc/perf/hv-24x7.c
> @@ -20,6 +20,7 @@
> #include
> #include
>
> +#include
> #include "hv-24x7.h"
Paul Mackerras writes:
> Microwatt is an FPGA-based implementation of the Power ISA. It
> currently only implements little-endian 64-bit mode, and does
> not (yet) support SMP.
... or FP or VSX or Altivec?
What about transactional memory?
> This adds a new machine type to support FPGA-based SoC
Paul Mackerras writes:
> Currently microwatt-based SoCs come with a "potato" UART. This
> adds udbg support for the potato UART, giving us both an early
> debug console, and a runtime console using the hvc-udbg support.
Can y'all get a real UART?
There's more code here than in the platform itse
Hi Mark, Nicolin
On Wed, May 6, 2020 at 10:33 AM Shengjiu Wang wrote:
>
> Hi
>
> On Fri, May 1, 2020 at 6:23 PM Mark Brown wrote:
> >
> > On Fri, May 01, 2020 at 04:12:05PM +0800, Shengjiu Wang wrote:
> > > The difference for esai on imx8qm is that DMA device is EDMA.
> > >
> > > EDMA requires t
Hello Paul, thanks for the reply!
On Thu, 2020-04-09 at 10:27 +1000, Paul Mackerras wrote:
> On Wed, Apr 08, 2020 at 10:21:29PM +1000, Michael Ellerman wrote:
> > We should be able to just allocate the rtas_args on the stack, it's only
> > ~80 odd bytes. And then we can use rtas_call_unlocked() wh
On Thu, May 07, 2020 at 08:10:37AM -0500, wenxi...@linux.vnet.ibm.com wrote:
> From: Wen Xiong
>
> Several device drivers hit EEH (Extended Error Handling) when triggering
> kdump on Pseries PowerVM. This patch implements a reset of the PHBs
> in the PCI generic code. A PHB reset stops all PCI transacti