Nicholas Piggin writes:
> Excerpts from Nicholas Piggin's message of April 20, 2020 10:17 am:
>> Excerpts from Aneesh Kumar K.V's message of April 19, 2020 11:53 pm:
>>> As per the ISA, context synchronizing instructions are needed before
and after the SPRN_AMR update. Use isync before and the
The PAPR standard[1][3] provides mechanisms to query the health and
performance stats of an NVDIMM via various hcalls, as described in
Ref[2]. Until now these stats were neither available nor exposed to
user-space tools like 'ndctl'. This is partly due to the PAPR platform
not having support for ACPI
Implement support for fetching nvdimm health information via the
H_SCM_HEALTH hcall as documented in Ref[1]. The hcall returns a pair
of 64-bit big-endian integers, the bitwise AND of which is then stored
in 'struct papr_scm_priv' and subsequently partially exposed to
user-space via newly introduced dimm s
Add documentation to 'papr_hcalls.rst' describing the bitmap flags
that are returned from the H_SCM_HEALTH hcall as per the PAPR-SCM
specification.
Cc: Dan Williams
Cc: Michael Ellerman
Cc: "Aneesh Kumar K . V"
Signed-off-by: Vaibhav Jain
---
Changelog:
v5..v6
* New patch in the series
---
Docum
Introduce support for PAPR nvDimm Specific Methods (PDSM) in the papr_scm
module and add the command family to the whitelist of NVDIMM command
sets. Also advertise support for ND_CMD_CALL in the dimm
command mask and implement the necessary scaffolding in the module to
handle the ND_CMD_CALL ioctl and PDSM
This patch implements support for the PDSM request 'PAPR_SCM_PDSM_HEALTH',
which returns a newly introduced 'struct nd_papr_pdsm_health' instance
containing dimm health information back to user space in response to
ND_CMD_CALL. This functionality is implemented in the newly introduced
papr_scm_get_health() t
On 19.04.20 09:51, Tianjia Zhang wrote:
> In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
> structure. For historical reasons, many kvm-related function
> parameters retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time.
> This patch does a unified cl
On 3/27/20 1:18 AM, Kim Phillips wrote:
On 3/26/20 5:19 AM, maddy wrote:
On 3/18/20 11:05 PM, Kim Phillips wrote:
Hi Maddy,
On 3/17/20 1:50 AM, maddy wrote:
On 3/13/20 4:08 AM, Kim Phillips wrote:
On 3/11/20 11:00 AM, Ravi Bangoria wrote:
On 3/6/20 3:36 AM, Kim Phillips wrote:
On 3/3/
On 2020/4/19 16:24, Xiaoyao Li wrote:
On 4/19/2020 3:30 PM, Tianjia Zhang wrote:
The compiler reported the following compilation errors:
arch/x86/kvm/svm/sev.c: In function ‘sev_pin_memory’:
arch/x86/kvm/svm/sev.c:361:3: error: implicit declaration of function
‘release_pages’ [-Werror=implic
Hi Dan / Mpe,
I have sent out a v6 of this patch set that addresses your review
comments so far. I have also added a new doc patch to the patchset that
adds documentation for the PAPR_SCM_HEALTH hcall specification.
Please review the new patchset at
https://lore.kernel.org/linux-nvd
On 2020/4/20 15:07, Christian Borntraeger wrote:
On 19.04.20 09:51, Tianjia Zhang wrote:
In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
structure. For historical reasons, many kvm-related function
parameters retain the 'kvm_run' and 'kvm_vcpu' parameters
At times, memory ranges have to be looked up during early boot, before
the kernel is ready for dynamic memory allocation. In fact, a
reserved-ranges lookup is needed during FADump memory reservation.
Without accounting for reserved-ranges when reserving memory for FADump,
MPIPL boot fails wit
Commit 0962e8004e97 ("powerpc/prom: Scan reserved-ranges node for
memory reservations") enabled support for parsing the reserved-ranges
DT node and reserving kernel memory falling within these ranges for F/W
purposes. Memory reserved for FADump should not overlap with these
ranges as it could corrupt memory mea
On 20/04/20 10:50 AM, Mahesh J Salgaonkar wrote:
> On 2020-03-11 01:57:10 Wed, Hari Bathini wrote:
>> Commit 0962e8004e97 ("powerpc/prom: Scan reserved-ranges node for
>> memory reservations") enabled support to parse reserved-ranges DT
>> node and reserve kernel memory falling in these ranges f
gpr2 is not a parameter of kuap_check(); it doesn't exist.
Use gpr instead.
Fixes: a68c31fc01ef ("powerpc/32s: Implement Kernel Userspace Access
Protection")
Cc: sta...@vger.kernel.org
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/32/kup.h | 2 +-
1 file changed, 1 insert
Hi Christoph,
On Tue, Apr 14, 2020 at 3:22 PM Christoph Hellwig wrote:
> Open code it in __bpf_map_area_alloc, which is the only caller. Also
> clean up __bpf_map_area_alloc to have a single vmalloc call with
> slightly different flags instead of the current two different calls.
>
> For this to
Hi Christoph,
On Tue, Apr 14, 2020 at 3:21 PM Christoph Hellwig wrote:
> Just use __vmalloc_node instead, which gets an extra argument. To be
> able to use __vmalloc_node in all callers, make it available outside
> of vmalloc and implement it in nommu.c.
>
> Signed-off-by: Christoph Hellwig
>
On 4/20/20 9:07 AM, Christian Borntraeger wrote:
>
>
> On 19.04.20 09:51, Tianjia Zhang wrote:
>> In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
>> structure. For historical reasons, many kvm-related function
>> parameters retain the 'kvm_run' and 'kvm_vcpu' pa
On 2020/4/20 18:32, maobibo wrote:
On 04/19/2020 03:51 PM, Tianjia Zhang wrote:
In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
structure. For historical reasons, many kvm-related function
parameters retain the 'kvm_run' and 'kvm_vcpu' parameters at the sa
On 04/19/2020 03:51 PM, Tianjia Zhang wrote:
> In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
> structure. For historical reasons, many kvm-related function
> parameters retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time.
> This patch does a unifi
From: Jiri Olsa
Adding expr_ prefix for parse_ctx and parse_id, to straighten out the
expr* namespace.
There's no functional change.
Signed-off-by: Jiri Olsa
Cc: Alexander Shishkin
Cc: Andi Kleen
Cc: Anju T Sudhakar
Cc: Benjamin Herrenschmidt
Cc: Greg Kroah-Hartman
Cc: Jin Yao
Cc: Joe Ma
From: Jiri Olsa
Add the expr_scanner_ctx object to hold user data for the expr scanner.
Currently it holds only start_token, Kajol Jain will use it to hold 24x7
runtime param.
Signed-off-by: Jiri Olsa
Cc: Alexander Shishkin
Cc: Andi Kleen
Cc: Anju T Sudhakar
Cc: Benjamin Herrenschmidt
Cc: G
From: Kajol Jain
This patch refactors the metricgroup__add_metric function, moving part of
it to a new function, metricgroup__add_metric_param. No logic change.
Signed-off-by: Kajol Jain
Acked-by: Jiri Olsa
Cc: Alexander Shishkin
Cc: Andi Kleen
Cc: Anju T Sudhakar
Cc: Benjamin Herrenschmidt
Problem Summary:
Slow termination of a KVM guest with a large guest RAM config, due to a
large number of IPIs caused by clearing level 1 PTE (THP) entries.
This is shown in the stack trace below.
- qemu-system-ppc [kernel.vmlinux][k] smp_call_function_many
- smp_call_
Fetch the pkey from the vma instead of the linux page table. Also document
the fact that in some cases the pkey returned in siginfo won't be the same
as the one we took the key fault on. Even with a linux page table walk, we
can end up in a similar scenario.
Cc: Ram Pai
Signed-off-by: Aneesh Kumar K.V
---
arch/p
If multiple threads in userspace keep changing the protection keys
mapping a range, there can be a scenario where the kernel takes a key
fault but the pkey value found in the siginfo struct is a permissive
one. This can confuse userspace, as shown in the test case below.
/* use this to control the
This makes the pte_present check stricter by additionally checking the
_PAGE_PTE bit. A level 1 pte pointer (THP pte) can be switched to a
pointer to a level 0 pte page table page by the following two operations:
1) THP split.
2) madvise(MADV_DONTNEED) in parallel to a page fault.
A lockless page table wal
This is only used with init_mm currently. Walking init_mm is much simpler
because we don't need to handle concurrent page table updates like for
other mm_context
Signed-off-by: Aneesh Kumar K.V
---
.../include/asm/book3s/64/tlbflush-hash.h| 3 +--
arch/powerpc/kernel/pci_64.c |
Don't fetch the pte value using a lockless page table walk. Instead use
the value from the caller. hash_preload is called with the ptl lock held,
so it is safe to use the pte_t address directly.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/book3s64/hash_utils.c | 27 +--
A lockless page table walk should be safe against parallel THP collapse, THP
split and madvise(MADV_DONTNEED)/parallel fault. This patch makes sure the
kernel won't reload the pteval when checking for different conditions. The
patch also adds a check for pte_present to make sure the kernel is indeed
read_user_stack_slow is called with interrupts soft disabled, and it copies
contents from the page which we find mapped to a specific address. To convert
a userspace address to a pfn, the kernel now uses a lockless page table walk.
The kernel needs to make sure the pfn value read remains stable and is n
These functions can get called in realmode. Hence use the low-level
arch_spin_lock, which is safe to call in realmode.
Cc: Suraj Jitindar Singh
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/kvm/book3s_hv_rm_mmu.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arc
The locking rules for walking the partition scoped table are different from
those for the process scoped table. Hence add a helper for the secondary
linux page table walk, and also add a check whether we are holding the
right locks.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/kvm_book3s_64.h | 13 +++
The locking rules for walking the nested shadow linux page table are
different from those for the process scoped table. Hence add a helper for
the nested page table walk, and also add a check whether we are holding
the right locks.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/kvm/book3s_hv_nested.c | 28
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/kvm_book3s_64.h | 16
1 file changed, 16 insertions(+)
diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h
b/arch/powerpc/include/asm/kvm_book3s_64.h
index 2860521992b6..1ca1f6495012 100644
--- a/arch/powerpc/includ
Update kvmppc_hv_handle_set_rc to use find_kvm_nested_guest_pte and
find_kvm_secondary_pte
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/kvm_book3s.h| 2 +-
arch/powerpc/include/asm/kvm_book3s_64.h | 3 +++
arch/powerpc/kvm/book3s_64_mmu_radix.c | 18 +-
ar
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/kvm/book3s_64_mmu_hv.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c
b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 6404df613ea3..121146f5a331 100644
--- a/arch/powerpc/kvm/book3s_6
We now depend on kvm->mmu_lock
Cc: Alexey Kardashevskiy
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/kvm/book3s_64_vio_hv.c | 38 +++--
1 file changed, 9 insertions(+), 29 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c
b/arch/powerpc/kvm/book3s_64_vi
Since kvmppc_do_h_enter can get called in realmode, use the low-level
arch_spin_lock, which is safe to call in realmode.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/kvm/book3s_64_mmu_hv.c | 5 ++---
arch/powerpc/kvm/book3s_hv_rm_mmu.c | 22 ++
2 files changed, 8 insertio
The current code just holds the rmap lock to ensure parallel page table
updates are prevented. That is not sufficient. The kernel should also
check whether an mmu_notifier callback was running in parallel.
Cc: Alexey Kardashevskiy
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/kvm/book3s_64_vio_hv.c | 30
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/kvm/book3s_64_mmu_radix.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c
b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 7b18b298f513..8ce3191cc801 100644
--- a/arch/powerpc/kvm
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/kvm/book3s_hv_rm_mmu.c | 32 ++---
1 file changed, 11 insertions(+), 21 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 83e987fecf97..3b168c69d503 100644
--- a/arch
This adds a _PAGE_PTE check and makes sure we validate the pte value
returned via find_kvm_host_pte.
NOTE: this also treats _PAGE_INVALID as a software valid bit.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/kvm_book3s_64.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-
Now that all the lockless page table walks are careful w.r.t. the PTE
address returned, we can revert
commit 13bd817bb884 ("powerpc/thp: Serialize pmd clear against a linux page
table walk.")
We also drop the equivalent IPI from other pte update routines. We still keep
the IPI in the hash pmdp collaps
We will use this in a later patch to do a tlb flush when clearing pmd entries.
Cc: kir...@shutemov.name
Cc: a...@linux-foundation.org
Signed-off-by: Aneesh Kumar K.V
---
arch/s390/include/asm/pgtable.h | 4 ++--
include/asm-generic/pgtable.h | 4 ++--
mm/huge_memory.c| 4 ++--
3 fi
MADV_DONTNEED holds mmap_sem in read mode and that implies a
parallel page fault is possible and the kernel can end up with a level 1 PTE
entry (THP entry) converted to a level 0 PTE entry without flushing
the THP TLB entry.
Most architectures including POWER have issues with kernel instantiating
On Fri, 2020-04-17 at 15:47 +1000, Alexey Kardashevskiy wrote:
>
> On 17/04/2020 11:26, Russell Currey wrote:
> >
> > For what it's worth this sounds like a good idea to me, it just sounds
> > tricky to implement. You're adding another layer of complexity on top
> > of EEH (well, making things l
> On Apr 12, 2020, at 3:48 PM, Mike Rapoport wrote:
>
> From: Baoquan He
>
> When called during boot the memmap_init_zone() function checks if each PFN
> is valid and actually belongs to the node being initialized using
> early_pfn_valid() and early_pfn_in_nid().
>
> Each such check may cos
On Mon, Apr 20, 2020 at 5:06 AM Wang Wenhu wrote:
>
> A generic User-Kernel interface that allows a misc device created
> by it to support file-operations of ioctl and mmap to access SRAM
> memory from user level. Different kinds of SRAM allocation and free
> APIs could be registered by specific SR
On Sun, Apr 19, 2020 at 08:05:38PM -0700, Wang Wenhu wrote:
> A generic User-Kernel interface that allows a misc device created
> by it to support file-operations of ioctl and mmap to access SRAM
> memory from user level. Different kinds of SRAM allocation and free
> APIs could be registered by spec
m/cailca/linux-mm/master/arm64.config
[ 54.172683][T1] UBSAN: shift-out-of-bounds in
./include/linux/hugetlb.h:555:34
[ 54.180411][T1] shift exponent 4294967285 is too large for 64-bit type
'unsigned long'
[ 54.15][T1] CPU: 130 PID: 1 Comm: swapper/0
In the same way as already done on PPC32, drop the __get_datapage()
function and use the get_datapage inline macro instead.
See commit ec0895f08f99 ("powerpc/vdso32: inline __get_datapage()")
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/vdso64/cacheflush.S | 9
arch/powerpc/kerne
The VDSO datapage and the text pages are always located immediately
next to each other, so it can be hardcoded without an indirection
through __kernel_datapage_offset.
In order to ease things, move the data page in front like other
arches; that way there is no need to know the size of the library
t
This is the seventh version of a series to switch powerpc VDSO to
generic C implementation.
Main changes since v6 are:
- Added -fasynchronous-unwind-tables in CFLAGS
- Split patch 2 in two parts
- Split patch 5 (which was patch 4) in two parts
This series applies on today's powerpc/merge branch.
The \tmp param is not used anymore, remove it.
Signed-off-by: Christophe Leroy
---
v7: New patch, split out of preceding patch
---
arch/powerpc/include/asm/vdso_datapage.h | 2 +-
arch/powerpc/kernel/vdso32/cacheflush.S | 2 +-
arch/powerpc/kernel/vdso32/datapage.S | 4 ++--
arch/power
cpu_relax() needs to be in asm/vdso/processor.h to be used by
the C VDSO generic library.
Move it there.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/processor.h | 10 ++
arch/powerpc/include/asm/vdso/processor.h | 23 +++
2 files changed, 25 inse
Signed-off-by: Christophe Leroy
---
drivers/tty/sysrq.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/tty/sysrq.c b/drivers/tty/sysrq.c
index 5e0d0813da55..a0760bcd7a97 100644
--- a/drivers/tty/sysrq.c
+++ b/drivers/tty/sysrq.c
@@ -74,6 +74,7 @@ int sysrq_mask(void)
Prepare for switching the VDSO to the generic C implementation in a
following patch. Here, we:
- Modify __get_datapage() to take an offset
- Prepare the helpers to call the C VDSO functions
- Prepare the required callbacks for the C VDSO functions
- Prepare the clocksource.h files to define VDSO_ARCH_CLOCKMO
For VDSO32 on PPC64, we create a fake 32-bit config, on the same
principle as the MIPS architecture, in order to get the correct parts of
the different asm header files.
With the C VDSO, the performance is slightly lower, but it is worth
it as it will ease maintenance and evolution, and also brings c
On Mon, Apr 20, 2020 at 02:31:58PM +1000, Nicholas Piggin wrote:
> Excerpts from Rich Felker's message of April 20, 2020 2:09 pm:
> > On Mon, Apr 20, 2020 at 12:32:21PM +1000, Nicholas Piggin wrote:
> >> Excerpts from Rich Felker's message of April 20, 2020 11:34 am:
> >> > On Mon, Apr 20, 2020 at
On 4/14/20 2:56 PM, Greg Kroah-Hartman wrote:
On Tue, Apr 14, 2020 at 02:43:00PM +0200, Emanuele Giuseppe Esposito wrote:
A bunch of code is duplicated between debugfs and tracefs, unify it to the
simplefs library.
The code is very similar, except that dentry and inode creation are unified
i
On 4/14/20 3:01 PM, Greg Kroah-Hartman wrote:
On Tue, Apr 14, 2020 at 02:42:58PM +0200, Emanuele Giuseppe Esposito wrote:
It is a common special case for new_inode to initialize the
time to the current time and the inode to get_next_ino().
Introduce a core function that does it and use it thr
On 4/16/20 8:44 AM, Luis Chamberlain wrote:
On Tue, Apr 14, 2020 at 02:42:55PM +0200, Emanuele Giuseppe Esposito wrote:
aa_mk_null_file is using simple_pin_fs/simple_release_fs with local
variables as arguments, for what would amount to a simple
vfs_kern_mount/mntput pair if everything was in
On 4/16/20 8:59 AM, Luis Chamberlain wrote:
On Tue, Apr 14, 2020 at 02:42:54PM +0200, Emanuele Giuseppe Esposito wrote:
This series of patches introduce wrappers for functions,
arguments simplification in functions calls and most importantly
groups duplicated code in a single header, simplefs
On Mon, Apr 20, 2020 at 03:57:48PM +0200, Emanuele Giuseppe Esposito wrote:
>
>
> On 4/14/20 2:56 PM, Greg Kroah-Hartman wrote:
> > On Tue, Apr 14, 2020 at 02:43:00PM +0200, Emanuele Giuseppe Esposito wrote:
> > > A bunch of code is duplicated between debugfs and tracefs, unify it to the
> > > si
On 20/04/20 16:28, Greg Kroah-Hartman wrote:
>> I assume you meant a new file. These new functions are used only by a few
>> filesystems, and I didn't want to include them in vmlinux unconditionally,
>> so I introduced simplefs.c and CONFIG_SIMPLEFS instead of extending libfs.c.
>> In this way only
h:555:34
> [ 54.180411][T1] shift exponent 4294967285 is too large for 64-bit type
> 'unsigned long'
> [ 54.15][T1] CPU: 130 PID: 1 Comm: swapper/0 Not tainted
> 5.7.0-rc2-next-20200420 #1
> [ 54.197284][T1] Hardware name: HPE Apollo 70
_ALIGN_UP() is specific to powerpc
ALIGN() is generic and does the same
Replace _ALIGN_UP() by ALIGN()
Signed-off-by: Christophe Leroy
---
drivers/ps3/ps3-lpm.c | 6 +++---
drivers/vfio/pci/vfio_pci_nvlink2.c | 2 +-
drivers/video/fbdev/ps3fb.c | 4 ++--
sound/ppc/snd_ps3.
_ALIGN_UP() is specific to powerpc
ALIGN() is generic and does the same
Replace _ALIGN_UP() by ALIGN()
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/iommu.h | 4 ++--
arch/powerpc/kernel/head_booke.h | 2 +-
arch/powerpc/kernel/nvram_64.c |
These three powerpc macros have been replaced by
equivalent generic macros and are not used anymore.
Remove them.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/page.h | 7 ---
1 file changed, 7 deletions(-)
diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/
_ALIGN() is specific to powerpc
ALIGN() is generic and does the same
Replace _ALIGN() by ALIGN()
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/32/pgtable.h | 2 +-
arch/powerpc/include/asm/nohash/32/pgtable.h | 2 +-
arch/powerpc/kernel/prom_init.c | 8 ---
_ALIGN_DOWN() is specific to powerpc
ALIGN_DOWN() is generic and does the same
Replace _ALIGN_DOWN() by ALIGN_DOWN()
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/32/pgtable.h | 2 +-
arch/powerpc/include/asm/nohash/32/pgtable.h | 2 +-
arch/powerpc/kernel/pci_64.c
without worrying about
> warning messages.
>
> Signed-off-by: Mike Kravetz
> Acked-by: Mina Almasry
When I build an arm64 kernel on today's next-20200420 and ran that in
qemu I got the following output [1]:
...
[ 311.326817][T1] kobject: 'drivers' ((ptrval
On Mon, Apr 20, 2020 at 08:50:30AM +0200, Michal Suchánek wrote:
> Hello,
>
> On Mon, Apr 20, 2020 at 04:15:39PM +1000, Michael Ellerman wrote:
> > Michal Suchánek writes:
...
> >
> >
> > And I've just hit it with your config on a machine here, but the crash
> > is different:
> That does not lo
> >
> > Reverting this series fixed many undefined behaviors on arm64 with the
> > config,
> >
> > https://raw.githubusercontent.com/cailca/linux-mm/master/arm64.config
> >
> > [ 54.172683][T1] UBSAN: shift-out-of-bounds in
> > ./includ
On Mon, Apr 20, 2020 at 6:56 PM Christophe Leroy
wrote:
>
> This is the seventh version of a series to switch powerpc VDSO to
> generic C implementation.
>
> Main changes since v6 are:
> - Added -fasynchronous-unwind-tables in CFLAGS
> - Split patch 2 in two parts
> - Split patch 5 (which was patc
> > Reverting this series fixed many undefined behaviors on arm64 with the
> > config,
> >
> > https://raw.githubusercontent.com/cailca/linux-mm/master/arm64.config
> >
> > [ 54.172683][T1] UBSAN: shift-out-of-bounds in
> > ./include/linux/hugetlb.h:555:
On Tue, Dec 10, 2019 at 12:35:14AM -0500, George Spelvin wrote:
> ... in boot_init_stack_canary().
>
> This is the archetypical example of where the extra security of
> get_random_bytes() is wasted. The canary is only important as
> long as it's stored in __stack_chk_guard.
>
> It's also a great
This patch series is intended to test the POWER9 Nest
Accelerator (NX) GZIP engine that is being introduced by
https://lists.ozlabs.org/pipermail/linuxppc-dev/2020-March/205659.html
More information about how to access the NX can be found in that patch,
also a complete userspace library and more
Add files to access the powerpc NX-GZIP engine in user space.
Signed-off-by: Bulent Abali
Signed-off-by: Raphael Moreira Zinsly
---
.../selftests/powerpc/nx-gzip/include/crb.h | 155 ++
.../selftests/powerpc/nx-gzip/include/nx.h| 38 +
.../powerpc/nx-gzip/include/vas-
Add files to be able to compress and decompress files using the
powerpc NX-GZIP engine.
Signed-off-by: Bulent Abali
Signed-off-by: Raphael Moreira Zinsly
---
.../powerpc/nx-gzip/include/copy-paste.h | 56 ++
.../powerpc/nx-gzip/include/nx_dbg.h | 95 +++
.../selftests/powerpc/nx
Add a compression testcase for the powerpc NX-GZIP engine.
Signed-off-by: Bulent Abali
Signed-off-by: Raphael Moreira Zinsly
---
tools/testing/selftests/powerpc/Makefile | 1 +
.../selftests/powerpc/nx-gzip/Makefile| 8 +
.../selftests/powerpc/nx-gzip/gzfht_test.c| 433
Include a decompression testcase for the powerpc NX-GZIP
engine.
Signed-off-by: Bulent Abali
Signed-off-by: Raphael Moreira Zinsly
---
.../selftests/powerpc/nx-gzip/Makefile|2 +-
.../selftests/powerpc/nx-gzip/gunz_test.c | 1028 +
.../selftests/powerpc/nx-gzip/n
Include a README file with the instructions to use the
testcases at selftests/powerpc/nx-gzip.
Signed-off-by: Bulent Abali
Signed-off-by: Raphael Moreira Zinsly
---
.../powerpc/nx-gzip/99-nx-gzip.rules | 1 +
.../testing/selftests/powerpc/nx-gzip/README | 45 +++
2 fi
On Tue, 14 Apr 2020 14:56:06 +0800, Shengjiu Wang wrote:
> EASRC (Enhanced Asynchronous Sample Rate Converter) is a new
> IP module found on i.MX8MN.
>
> Signed-off-by: Shengjiu Wang
> ---
> .../devicetree/bindings/sound/fsl,easrc.yaml | 101 ++
> 1 file changed, 101 insertions(
* Nicholas Piggin [2020-04-20 12:08:36 +1000]:
> Excerpts from Rich Felker's message of April 20, 2020 11:29 am:
> > Also, allowing patching of executable pages is generally frowned upon
> > these days because W^X is a desirable hardening property.
>
> Right, it would want to be write-protected afte
On 4/20/20 1:29 PM, Anders Roxell wrote:
> On Mon, 20 Apr 2020 at 20:23, Mike Kravetz wrote:
>> On 4/20/20 8:34 AM, Qian Cai wrote:
>>>
>>> Reverting this series fixed many undefined behaviors on arm64 with the
>>> config,
>> While rearranging the code (patch 3 in series), I made the incorrect
>>
On Tue, 14 Apr 2020 18:48:26 +0200
Mauro Carvalho Chehab wrote:
> Patches 1 to 5 contain changes to the documentation toolset:
>
> - The first 3 patches help to reduce a lot the number of reported
> kernel-doc issues, by making the tool more smart.
>
> - Patches 4 and 5 are meant to partially
The two patches here force AER and DPC to honor the Host Bridge's Native
AER/DPC settings. This is under the assumption that when these bits are
set, Firmware-First AER/DPC should not be in use for these ports. This
assumption seems to be true in ACPI, which explicitly clears these capability
Some platforms have a mix of ports whose capabilities can be negotiated
by _OSC, and some ports which are not described by ACPI and instead
managed by Native drivers. The existing Firmware-First HEST model can
incorrectly tag these Native, Non-ACPI ports as Firmware-First managed
ports by advertisi
The existing portdrv model prevents DPC services without either OS
control (_OSC) granted to AER services, a Host Bridge requesting Native
AER, or using one of the 'pcie_ports=' parameters of 'native' or
'dpc-native'.
The DPC port service driver itself will also fail to probe if the kernel
assumes
On 4/7/20 1:47 AM, Gautham R. Shenoy wrote:
> From: "Gautham R. Shenoy"
>
> Hi,
>
> This is the fifth version of the patches to track and expose idle PURR
> and SPURR ticks. These patches are required by tools such as lparstat
> to compute system utilization for capacity planning purposes.
>
>
On Mon, 20 Apr 2020 at 23:43, Mike Kravetz wrote:
>
> On 4/20/20 1:29 PM, Anders Roxell wrote:
> > On Mon, 20 Apr 2020 at 20:23, Mike Kravetz wrote:
> >> On 4/20/20 8:34 AM, Qian Cai wrote:
> >>>
> >>> Reverting this series fixed many undefined behaviors on arm64 with the
> >>> config,
> >> While
Hi Christophe,
I love your patch! Yet something to improve:
[auto build test ERROR on powerpc/next]
[also build test ERROR on robh/for-next char-misc/char-misc-testing
tip/locking/core linus/master v5.7-rc2 next-20200420]
[if your patch is applied to the wrong git tree, please drop us a note to
On Mon, 20 Apr 2020 at 18:37, Christophe Leroy wrote:
>
> _ALIGN_UP() is specific to powerpc
> ALIGN() is generic and does the same
>
> Replace _ALIGN_UP() by ALIGN()
>
> Signed-off-by: Christophe Leroy
I was curious, so I expanded out the kernel one. Here's the diff:
- (((addr)+((size)-1))&(~(
On Mon, 20 Apr 2020 at 18:38, Christophe Leroy wrote:
>
> _ALIGN_DOWN() is specific to powerpc
> ALIGN_DOWN() is generic and does the same
>
> Replace _ALIGN_DOWN() by ALIGN_DOWN()
This one is a bit less obvious. It becomes (leaving the typeof's alone
for clarity):
-((addr)&(~((typeof(addr))(siz