Balamuruhan S wrote:
ld instruction should have 14 bit immediate field (DS) concatenated with
0b00 on the right, encode it accordingly. Introduce macro `IMM_DS()`
to encode DS form instructions with 14 bit immediate field.
Fixes: 4ceae137bdab ("powerpc: emulate_step() tests for load/store instr
On Fri, Mar 13, 2020 at 8:38 PM, Vlastimil Babka wrote:
>
> On 3/13/20 12:04 PM, Srikar Dronamraju wrote:
> >> I lost all the memory about it. :)
> >> Anyway, how about this?
> >>
> >> 1. make node_present_pages() safer
> >> static inline node_present_pages(nid)
> >> {
> >> if (!node_online(nid)) return 0
On Mon, Mar 16, 2020 at 10:53 AM Aneesh Kumar K.V
wrote:
>
> On 3/4/20 2:17 PM, Pingfan Liu wrote:
> > At present, plpar_hcall(H_SCM_BIND_MEM, ...) takes a very long time, so
> > if dumping to fsdax, it will take a very long time.
> >
>
>
> that should be fixed by
>
> faa6d21153fd11e139dd880044521
On Sun 15-03-20 14:20:05, Cristopher Lameter wrote:
> On Wed, 11 Mar 2020, Srikar Dronamraju wrote:
>
> > Currently Linux kernel with CONFIG_NUMA on a system with multiple
> > possible nodes, marks node 0 as online at boot. However in practice,
> > there are systems which have node 0 as memoryles
On Thu 12-03-20 17:41:58, Vlastimil Babka wrote:
[...]
> with nid present in:
> N_POSSIBLE - pgdat might not exist, node_to_mem_node() must return some online
I would rather have a dummy pgdat for those. Have a look at
$ git grep "NODE_DATA.*->" | wc -l
63
Who knows how many else we have there.
WANG Wenhu writes:
> Include "linux/of_address.h" to fix the compile error for
> mpc85xx_l2ctlr_of_probe() when compiling fsl_85xx_cache_sram.c.
>
> CC arch/powerpc/sysdev/fsl_85xx_l2ctlr.o
> arch/powerpc/sysdev/fsl_85xx_l2ctlr.c: In function ‘mpc85xx_l2ctlr_of_probe’:
> arch/powerpc/sysdev
( Please use c...@kaod.org. I hardly use c...@fr.ibm.com.)
On 3/15/20 11:37 PM, Ram Pai wrote:
> XIVE interrupt controller maintains a Event-Queue(EQ) page. This page is
> used to communicate events with the Hypervisor/Qemu.
Here is an alternative for the above :
XIVE interrupt controller us
From: Michael Ellerman
Date: 2020-03-16 17:41:12
To:WANG Wenhu ,Benjamin Herrenschmidt
,Paul Mackerras ,WANG Wenhu
,Allison Randal ,Richard Fontana
,Greg Kroah-Hartman ,Thomas
Gleixner
,linuxppc-dev@lists.ozlabs.org,linux-ker...@vger.kernel.org
cc: triv...@kernel.org,ker...@vivo.com,stable
ping...
On 2020/3/6 14:40, Jason Yan wrote:
This is a try to implement KASLR for Freescale BookE64 which is based on
my earlier implementation for Freescale BookE32:
https://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=131718&state=*
The implementation for Freescale BookE64 is similar as
In case (k_start & PAGE_MASK) doesn't equal (k_start), 'va' will never be
NULL although 'block' is NULL.
Check the return of memblock_alloc() directly instead of
the resulting address in the loop.
Fixes: 509cd3f2b473 ("powerpc/32: Simplify KASAN init")
Signed-off-by: Christophe Leroy
---
arch/po
With CONFIG_KASAN_VMALLOC, new page tables are created at the time
shadow memory for the vmalloc area is unmapped. If some parts of the
page table still have entries to the zero page shadow memory, the
entries are wrongly marked RW.
With CONFIG_KASAN_VMALLOC, almost the entire kernel address space
is m
The main purpose of this big series is to:
- reorganise huge page handling to avoid using mm_slices.
- use huge pages to map kernel memory on the 8xx.
The 8xx supports 4 page sizes: 4k, 16k, 512k and 8M.
It uses 2 Level page tables, PGD having 1024 entries, each entry
covering 4M address space. Th
Doing KASAN page allocation in MMU_init is too early: the kernel doesn't
have access yet to the entire memory space, and memblock_alloc() fails
when the kernel is a bit big.
Do it from kasan_init() instead.
Fixes: 2edb16efc899 ("powerpc/32: Add KASAN support")
Cc: sta...@vger.kernel.org
Signed-off-by
At present, KASAN_SHADOW_END is 0x100000000, which
is 0 in a 32-bit representation.
This leads to a couple of issues:
- kasan_remap_early_shadow_ro() does nothing because the comparison
k_cur < k_end is always false.
- In ptdump, address comparison for markers display fails and the
marker's
For platforms using shared.c (4xx, Book3e, Book3s/32),
also handle the _PAGE_COHERENT flag, which corresponds to the
M bit of the WIMG flags.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/ptdump/shared.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/arch/powerpc/mm/ptdump/shared.c
kasan_remap_early_shadow_ro() and kasan_unmap_early_shadow_vmalloc()
are both updating the early shadow mapping: the first one sets
the mapping read-only while the other clears the mapping.
Refactor and create kasan_update_early_region()
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/kasan
In order to allow sub-arches to allocate KASAN regions using optimised
methods (Huge pages on 8xx, BATs on BOOK3S, ...), declare
kasan_init_region() weak.
Also make kasan_init_shadow_page_tables() accessible from outside,
so that it can be called from the specific kasan_init_region()
functions if nee
Display the size of areas mapped with BATs.
For that, the size display for pages is refactored.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/ptdump/bats.c | 4
arch/powerpc/mm/ptdump/ptdump.c | 23 ++-
arch/powerpc/mm/ptdump/ptdump.h | 2 ++
3 files changed
Reorder flags in a more logical way:
- Page size (huge) first
- User
- RWX
- Present
- WIMG
- Special
- Dirty and Accessed
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/ptdump/8xx.c| 30 +++---
arch/powerpc/mm/ptdump/shared.c | 30 +++---
In order to have all flags fit on a 80 chars wide screen,
reduce the flags to 1 char (2 where ambiguous).
No cache is 'i'
User is 'ur' (Supervisor would be sr)
Shared (for 8xx) becomes 'sh' (it was 'user' when not shared but
that was ambiguous because that's not entirely right)
Signed-off-by: Chr
Display BAT flags the same way as page flags: rwx and wimg
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/ptdump/bats.c | 37 ++-
1 file changed, 15 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc/mm/ptdump/bats.c b/arch/powerpc/mm/ptdump/bats.c
ind
Commit 45ff3c559585 ("powerpc/kasan: Fix parallel loading of
modules.") added spinlocks to manage parallel module loading.
Since then commit 47febbeeec44 ("powerpc/32: Force KASAN_VMALLOC for
modules") converted the module loading to KASAN_VMALLOC.
The spinlocking has then become unneeded and ca
In order to properly display information regardless of the page size,
it is necessary to take into account real page size.
Signed-off-by: Christophe Leroy
Fixes: cabe8138b23c ("powerpc: dump as a single line areas mapping a single
physical page.")
Cc: sta...@vger.kernel.org
---
arch/powerpc/mm/
Mapping RO data as ROX is not an issue since that data
cannot be modified to introduce an exploit.
PPC64 accepts to have RO data mapped ROX, as a trade off
between kernel size and strictness of protection.
On PPC32, kernel size is even more critical as amount of
memory is usually small.
Dependin
Allocate static page tables for the fixmap area. This allows
setting mappings through page tables before memblock is ready.
That's needed to use early_ioremap() early and to use standard
page mappings with fixmap.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/fixmap.h | 4
a
Setting init mem to NX shall depend on sinittext being mapped by
block, not on stext being mapped by block.
Setting text and rodata to RO shall depend on stext being mapped by
block, not on sinittext being mapped by block.
Fixes: 63b2bc619565 ("powerpc/mm/32s: Use BATs for STRICT_KERNEL_RWX")
Cc:
Only 40x still uses PTE_ATOMIC_UPDATES.
40x cannot select CONFIG_PTE_64BIT.
Drop handling of PTE_ATOMIC_UPDATES:
- In nohash/64
- In nohash/32 for CONFIG_PTE_64BIT
Keep PTE_ATOMIC_UPDATES only for nohash/32 for !CONFIG_PTE_64BIT
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/
The 8xx is about to map kernel linear space and IMMR using huge
pages.
In order to display those pages properly, ptdump needs to handle
hugepd tables at PGD level.
For the time being do it only at PGD level. Further patches may
add handling of hugepd tables at lower level for other platforms
when
When CONFIG_PTE_64BIT is set, pte_update() operates on
'unsigned long long'
When CONFIG_PTE_64BIT is not set, pte_update() operates on
'unsigned long'
In asm/page.h, we have pte_basic_t which is 'unsigned long long'
when CONFIG_PTE_64BIT is set and 'unsigned long' otherwise.
Refactor pte_update()
On PPC32, __ptep_test_and_clear_young() takes mm->context.id.
In preparation of standardising pte_update() params between PPC32 and
PPC64, __ptep_test_and_clear_young() needs mm instead of mm->context.id.
Replace the context param by mm.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/as
Commit 55c8fc3f4930 ("powerpc/8xx: reintroduce 16K pages with HW
assistance") redefined pte_t as a struct of 4 pte_basic_t, because
in 16K pages mode there are four identical entries in the page table.
But hugepd entries for 8M pages require only one entry of size
pte_basic_t. So there is no point
CONFIG_8xx_COPYBACK was there to help disable copyback cache mode
when debugging hardware. But nobody will design new boards with 8xx now.
All 8xx platforms select it, so make it the default and remove
the option.
Also remove the Mx_RESETVAL values which are pretty useless and hide
the real value
pte_update() is a bit special for the 8xx. For the time
being, that's an #ifdef inside the nohash/32 pte_update().
As we are going to make it even more special in the coming
patches, create a dedicated version for pte_update() for 8xx.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm
PPC64 takes 3 additional parameters compared to PPC32:
- mm
- address
- huge
These 3 parameters will be needed in order to perform different
actions depending on the page size on the 8xx.
Make pte_update() prototype identical for PPC32 and PPC64.
This allows dropping an #ifdef in huge_ptep_get_an
Prepare ITLB handler to handle _PAGE_HUGE when CONFIG_HUGETLBFS
is enabled. This means that the L1 entry has to be kept in r11
until L2 entry is read, in order to insert _PAGE_HUGE into it.
Also move pgd_offset helpers before pte_update() as they
will be needed there in next patch.
Signed-off-by:
512k pages are now standard pages, so only 8M pages
are hugepte.
No more handling of normal page tables through hugepd allocation
and freeing, and hugepte helpers can also be simplified.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h | 7 +++
arch/powe
At present, 512k huge pages are handled through hugepd page
tables. The PMD entry is flagged as a hugepd pointer and it
means that only 512k hugepages can be managed in that 4M block.
However, the hugepd table has the same size as a normal page
table, and 512k entries can therefore be nested
As the 8xx now manages 512k pages in standard page tables,
it doesn't need CONFIG_PPC_MM_SLICES anymore.
Don't select it anymore and remove all related code.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 64
arch/powerpc/include/asm/noha
PPC_PIN_TLB options are dedicated to the 8xx, move them into
the 8xx Kconfig.
While we are at it, add some text to explain what it does.
Signed-off-by: Christophe Leroy
---
arch/powerpc/Kconfig | 20 ---
arch/powerpc/platforms/8xx/Kconfig | 41 +
Only early debug requires IMMR to be mapped early.
No need to set it up and pin it in assembly. Map it
through page tables at udbg init when necessary.
If CONFIG_PIN_TLB_IMMR is selected, pin it once we
don't need the 32 Mb pinned RAM anymore.
Signed-off-by: Christophe Leroy
---
arch/powerpc/k
Pinned TLBs are not easy to modify when the MMU is enabled.
Create a small function to update a pinned TLB entry with MMU off.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 3 ++
arch/powerpc/kernel/head_8xx.S | 44
2 file
The code to setup linear and IMMR mapping via huge TLB entries is
not called anymore. Remove it.
Also remove the handling of removed code exits in the perf driver.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 8 +-
arch/powerpc/kernel/head_8xx.S
Up to now, linear and IMMR mappings are managed via huge TLB entries
through specific code directly in TLB miss handlers. This implies
some patching of the TLB miss handlers at startup, and a lot of
dedicated code.
Remove all this specific dedicated code.
For now we are back to normal handling vi
Now that space has been freed next to the DTLB miss handler,
its associated DTLB perf handling can be brought back to
the same place.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/head_8xx.S | 23 +++
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a
At startup, map 32 Mbytes of memory through 4 pages of 8M,
and pin them unconditionally. They need to be pinned because
KASAN is using page tables early and the TLBs might be
dynamically replaced otherwise.
Remove RSV4I flag after installing mappings unless
CONFIG_PIN_TLB_ is selected.
Signed
Similar to PPC64, accept to map RO data as ROX as a trade-off
between security and memory usage.
Having RO data executable is not a high risk as RO data can't be
modified to forge an exploit.
Signed-off-by: Christophe Leroy
---
arch/powerpc/Kconfig | 26
Now that linear and IMMR dedicated TLB handling is gone, kernel
boundary address comparison is similar in ITLB miss handler and
in DTLB miss handler.
Create a macro named compare_to_kernel_boundary.
When TASK_SIZE is strictly below 0x80000000 and PAGE_OFFSET is
above 0x80000000, it is enough to c
Add a function to early map kernel memory using huge pages.
For 512k pages, just use standard page table and map in using 512k
pages.
For 8M pages, create a hugepd table and populate the two PGD
entries with it.
This function can only be used to create page tables at startup. Once
the regular SL
Map the IMMR area with a single 512k huge page.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/nohash/8xx.c | 8 ++--
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/mm/nohash/8xx.c b/arch/powerpc/mm/nohash/8xx.c
index 81ddcd9554e1..57e0c7496a6a 100644
--- a/a
Map linear memory space with 512k and 8M pages whenever
possible.
Three mappings are performed:
- One for kernel text
- One for RO data
- One for the rest
Separating the mappings is done to be able to update the
protection later when using STRICT_KERNEL_RWX.
The ITLB miss handler now need to als
Pinned TLBs are 8M. Now that there is no strict boundary anymore
between text and RO data, it is possible to use 8M pinned executable
TLB that covers both text and RO data.
When PIN_TLB_DATA or PIN_TLB_TEXT is selected, enforce 8M RW data
alignment and allow STRICT_KERNEL_RWX.
Signed-off-by: Chris
DEBUG_PAGEALLOC only manages RW data.
Text and RO data can still be mapped with hugepages and pinned TLB.
In order to map with hugepages, also enforce a 512kB data alignment
minimum. That's a trade-off between size and speed, taking into
account that DEBUG_PAGEALLOC is a debug option. Anyway the a
DEBUG_PAGEALLOC only manages RW data.
Text and RO data can still be mapped with BATs.
In order to map with BATs, also enforce data alignment. Set
by default to 256M which is a good compromise for keeping
enough BATs for also KASAN and IMMR.
Signed-off-by: Christophe Leroy
---
arch/powerpc/Kcon
Implement a kasan_init_region() dedicated to book3s/32 that
allocates KASAN regions using BATs.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/kasan.h | 1 +
arch/powerpc/mm/kasan/Makefile| 1 +
arch/powerpc/mm/kasan/book3s_32.c | 57 +++
Implement a kasan_init_region() dedicated to 8xx that
allocates KASAN regions using huge pages.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/kasan/8xx.c| 74 ++
arch/powerpc/mm/kasan/Makefile | 1 +
2 files changed, 75 insertions(+)
create mode 100644
Some opencapi FPGA images allow controlling whether the FPGA should be
reloaded on the next adapter reset. If it is supported, the image specifies it
through a Vendor Specific DVSEC in the config space of function 0.
Signed-off-by: Philippe Bergheaud
---
Changelog:
v2:
- refine ResetReload debug mess
From: Daniel Axtens
Currently we set up the paca after parsing the device tree for CPU
features. Prior to that, r13 contains random data, which means there
is random data in r13 while we're running the generic dt parsing code.
This random data varies depending on whether we boot through a vmlinu
The previous commit reduced the amount of code that is run before we
setup a paca. However there are still a few remaining functions that
run with no paca, or worse, with an arbitrary value in r13 that will
be used as a paca pointer.
In particular the stack protector canary is stored in the paca,
Hi Haren,
If I understand correctly, to test these, I need to apply both this
series and your VAS userspace page fault handling series - is that
right?
Kind regards,
Daniel
> Power9 processor supports Virtual Accelerator Switchboard (VAS) which
> allows kernel and userspace to send compression r
Hi Pratik,
Please could you resend this with a more meaningful subject line and
move the Fixes: line to immediately above your signed-off-by?
Thanks!
Regards,
Daniel
> The patch avoids allocating cpufreq_policy on stack hence fixing frame
> size overflow in 'powernv_cpufreq_work_fn'
>
> Signed-
Hi Daniel,
Sure thing I'll re-send them. Rookie mistake, my bad.
Thanks for pointing it out!
Regards,
Pratik
On 16/03/20 6:35 pm, Daniel Axtens wrote:
Hi Pratik,
Please could you resend this with a more meaningful subject line and
move the Fixes: line to immediately above your signed-off-by?
The patch avoids allocating cpufreq_policy on stack hence fixing frame
size overflow in 'powernv_cpufreq_work_fn'
Fixes: 227942809b52 ("cpufreq: powernv: Restore cpu frequency to policy->cur on
unthrottling")
Signed-off-by: Pratik Rajesh Sampat
---
drivers/cpufreq/powernv-cpufreq.c | 13 +++
On 09/03/2020 at 09:57, Ravi Bangoria wrote:
So far, powerpc Book3S code has been written with an assumption of only
one watchpoint. But future power architecture is introducing a second
watchpoint register (DAWR). Even though this patchset does not enable the
2nd DAWR, it makes the infrastructure
On Wed 11-03-20 13:30:22, David Hildenbrand wrote:
> The name is misleading. Let's just name it like the online_type name we
> expose to user space ("online").
I would disagree the name is misleading. It just says that you want to
online and keep the zone type. Nothing I would insist on though.
>
On Wed 11-03-20 13:30:23, David Hildenbrand wrote:
> I have no idea why we have to start at -1.
Because this is how the offline state offline was represented
originally.
> Just treat 0 as the special
> case. Clarify a comment (which was wrong, when we come via
> device_online() the first time, th
On Wed 11-03-20 13:30:24, David Hildenbrand wrote:
> Let's use a simple array which we can reuse soon. While at it, move the
> string->mmop conversion out of the device hotplug lock.
>
> Cc: Greg Kroah-Hartman
> Cc: Andrew Morton
> Cc: Michal Hocko
> Cc: Oscar Salvador
> Cc: "Rafael J. Wysocki
On 16.03.20 16:12, Michal Hocko wrote:
> On Wed 11-03-20 13:30:22, David Hildenbrand wrote:
>> The name is misleading. Let's just name it like the online_type name we
>> expose to user space ("online").
>
> I would disagree the name is misleading. It just says that you want to
> online and keep th
On Wed 11-03-20 13:30:25, David Hildenbrand wrote:
[...]
> diff --git a/arch/powerpc/platforms/powernv/memtrace.c
> b/arch/powerpc/platforms/powernv/memtrace.c
> index d6d64f8718e6..e15a600cfa4d 100644
> --- a/arch/powerpc/platforms/powernv/memtrace.c
> +++ b/arch/powerpc/platforms/powernv/memtrac
On Wed 11-03-20 13:30:26, David Hildenbrand wrote:
> For now, distributions implement advanced udev rules to essentially
> - Don't online any hotplugged memory (s390x)
> - Online all memory to ZONE_NORMAL (e.g., most virt environments like
> hyperv)
> - Online all memory to ZONE_MOVABLE in case t
On 16.03.20 16:24, Michal Hocko wrote:
> On Wed 11-03-20 13:30:25, David Hildenbrand wrote:
> [...]
>> diff --git a/arch/powerpc/platforms/powernv/memtrace.c
>> b/arch/powerpc/platforms/powernv/memtrace.c
>> index d6d64f8718e6..e15a600cfa4d 100644
>> --- a/arch/powerpc/platforms/powernv/memtrace.c
On Mon 16-03-20 16:34:06, David Hildenbrand wrote:
[...]
> Best I can do here is to also always online all memory.
Yes that sounds like a cleaner solution than having a condition that
doesn't make much sense at first glance.
--
Michal Hocko
SUSE Labs
Currently, the log-level of show_stack() depends on the platform
realization. It creates situations where the headers are printed with
a lower or higher log level than the stacktrace itself (depending on
the platform or user).
Furthermore, it forces the logic decision from the user onto the
architecture side. In resu
On 16.03.20 16:31, Michal Hocko wrote:
> On Wed 11-03-20 13:30:26, David Hildenbrand wrote:
>> For now, distributions implement advanced udev rules to essentially
>> - Don't online any hotplugged memory (s390x)
>> - Online all memory to ZONE_NORMAL (e.g., most virt environments like
>> hyperv)
>>
https://bugzilla.kernel.org/show_bug.cgi?id=206669
--- Comment #12 from John Paul Adrian Glaubitz (glaub...@physik.fu-berlin.de)
---
Another crash:
watson login: [17667512263.751484] BUG: Unable to handle kernel data access at
0xc00ff06e4838
[17667512263.751507] Faulting instruction address:
On Mon, Mar 16, 2020 at 04:05:01PM +0100, Christophe Leroy wrote:
> Some book3s (e300 family for instance, I think G2 as well) already have
> a DABR2 in addition to DABR.
The original "G2" (meaning 603 and 604) do not have DABR2. The newer
"G2" (meaning e300) does have it. e500 and e600 do not
On 3/14/20 9:18 AM, Nicholas Piggin wrote:
Ganesh Goudar's on March 14, 2020 12:04 am:
MCE handling on pSeries platform fails as recent rework to use common
code for pSeries and PowerNV in machine check error handling tries to
access per-cpu variables in realmode. The per-cpu variables may be
Add files to access the powerpc NX-GZIP engine in user space.
Signed-off-by: Bulent Abali
Signed-off-by: Raphael Moreira Zinsly
---
.../selftests/powerpc/nx-gzip/inc/crb.h | 170 ++
.../selftests/powerpc/nx-gzip/inc/nx-gzip.h | 27 +++
.../powerpc/nx-gzip/inc/nx-helpers
This patch series is intended to test the power8 and power9 Nest
Accelerator (NX) GZIP engine that is being introduced by
https://lists.ozlabs.org/pipermail/linuxppc-dev/2020-March/205659.html
More information about how to access the NX can be found in that patch, also a
complete userspace library
Add files to be able to compress and decompress files using the
powerpc NX-GZIP engine.
Signed-off-by: Bulent Abali
Signed-off-by: Raphael Moreira Zinsly
---
.../powerpc/nx-gzip/inc/copy-paste.h | 54 ++
.../selftests/powerpc/nx-gzip/inc/nx_dbg.h| 95 +++
.../selftests/powerpc/nx
Add a compression testcase for the powerpc NX-GZIP engine.
Signed-off-by: Bulent Abali
Signed-off-by: Raphael Moreira Zinsly
---
.../selftests/powerpc/nx-gzip/Makefile| 21 +
.../selftests/powerpc/nx-gzip/gzfht_test.c| 475 ++
.../selftests/powerpc/nx-gzip/gzip_vas.
Include a decompression testcase for the powerpc NX-GZIP
engine.
Signed-off-by: Bulent Abali
Signed-off-by: Raphael Moreira Zinsly
---
.../selftests/powerpc/nx-gzip/Makefile|7 +-
.../selftests/powerpc/nx-gzip/gunz_test.c | 1058 +
2 files changed, 1062 insertion
Include a README file with the instructions to use the
testcases at selftests/powerpc/nx-gzip.
Signed-off-by: Bulent Abali
Signed-off-by: Raphael Moreira Zinsly
---
.../powerpc/nx-gzip/99-nx-gzip.rules | 1 +
.../testing/selftests/powerpc/nx-gzip/README | 44 +++
2 fi
Changes to v2:
- Removed excessive pr_cont("\n") (nits by Senozhatsky)
- Leave backtrace debugging messages with pr_debug()
(noted by Russell King and Will Deacon)
- Correct microblaze_unwind_inner() declaration
(Thanks to Michal Simek and kbuild test robot)
- Fix copy'n'paste typo in show_stac
On 12/03/2020 23.28, Li Yang wrote:
> Fixes the following sparse warnings:
>
[snip]
>
> Also removed the unnecessary clearing for kzalloc'ed structure.
Please don't mix that in the same patch, do it in a preparatory patch.
That makes reviewing much easier.
>
> /* Get PRAM base */
>
On 12/03/2020 23.28, Li Yang wrote:
> The QE code was previously only supported on big-endian PowerPC systems
> that use the same endian as the QE device. The endian transfer code is
> not really exercised. Recent updates extended the QE drivers to
> little-endian ARM/ARM64 systems which makes th
On Mon, 2020-03-16 at 15:07 -0300, Raphael Moreira Zinsly wrote:
> This patch series are intended to test the power8 and power9 Nest
> Accelerator (NX) GZIP engine that is being introduced by
> https://lists.ozlabs.org/pipermail/linuxppc-dev/2020-March/205659.html
> More information about how to ac
On Tue, 2020-03-17 at 00:04 +1100, Daniel Axtens wrote:
> Hi Haren,
>
> If I understand correctly, to test these, I need to apply both this
> series and your VAS userspace page fault handling series - is that
> right?
Daniel,
Yes, This patch series enables GZIP engine and provides user space AP
Nick Desaulniers writes:
> Hello ppc friends, did this get picked up into -next yet?
Not yet.
It's in my next-test, but it got stuck there because some subsequent
patches caused some CI errors that I had to debug.
I'll push it to next today.
cheers
> On Thu, Mar 12, 2020 at 8:35 PM Nathan Cha
Hi Christophe,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on next-20200316]
[cannot apply to powerpc/next v5.6-rc6 v5.6-rc5 v5.6-rc4 v5.6-rc6]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to
Pratik Rajesh Sampat writes:
> Parse the device tree for nodes self-save, self-restore and populate
> support for the preferred SPRs based what was advertised by the device
> tree.
These should be documented in:
Documentation/devicetree/bindings/powerpc/opal/power-mgt.txt
> diff --git a/arch/p
Hi Christophe,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on next-20200316]
[cannot apply to powerpc/next v5.6-rc6 v5.6-rc5 v5.6-rc4 v5.6-rc6]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also
Haren Myneni writes:
> Process close windows after its requests are completed. In multi-thread
> applications, child can open a window but release FD will not be called
> upon its exit. Parent thread will be closing it later upon its exit.
What if the parent exits first?
> The parent can also se
Haren Myneni writes:
> For each fault CRB, update fault address in CRB (fault_storage_addr)
> and translation error status in CSB so that user space can touch the
> fault address and resend the request. If the user space passed invalid
> CSB address send signal to process with SIGSEGV.
>
> Signed-
On 16/03/2020 at 19:43, Segher Boessenkool wrote:
On Mon, Mar 16, 2020 at 04:05:01PM +0100, Christophe Leroy wrote:
Some book3s (e300 family for instance, I think G2 as well) already have
a DABR2 in addition to DABR.
The original "G2" (meaning 603 and 604) do not have DABR2. The newer
"G
Patchset fixes the inconsistent results we are getting when
we run multiple 24x7 events.
Patchset adds json file metric support for the hv_24x7 socket/chip level
events. "hv_24x7" pmu interface events needs system dependent parameter
like socket/chip/core. For example, hv_24x7 chip level events ne
From: Jiri Olsa
Adding expr_ prefix for parse_ctx and parse_id,
to straighten out the expr* namespace.
There's no functional change.
Signed-off-by: Jiri Olsa
---
tools/perf/tests/expr.c | 4 ++--
tools/perf/util/expr.c| 10 +-
tools/perf/util/expr.h| 12 ++--
From: Jiri Olsa
Adding expr_scanner_ctx object to hold user data
for the expr scanner. Currently it holds only
start_token, Kajol Jain will use it to hold 24x7
runtime param.
Signed-off-by: Jiri Olsa
---
tools/perf/util/expr.c | 6 --
tools/perf/util/expr.h | 4
tools/perf/util/expr
Commit 2b206ee6b0df ("powerpc/perf/hv-24x7: Display change in counter
values") made the driver print the _change_ in the counter value rather
than the raw value for 24x7 counters. In case of transactions, the event
count is set to 0 at the beginning of the transaction. It also sets
the event's prev_count to the r
For hv_24x7 socket/chip level events, specific chip-id to which
the data requested should be added as part of pmu events.
But number of chips/socket in the system details are not exposed.
Patch implements read_sys_info_pseries() to get system
parameter values like number of sockets and chips per s