Suraj Jitindar Singh writes:
> The host process table base is stored in the partition table by calling
> the function native_register_process_table(). Currently this just sets
> the entry in memory and is missing a subsequent cache invalidation
> instruction. Any update to the partition table should be followed by a cache invalidation.
On Fri, Jul 21, 2017 at 09:57:39PM -0700, Haren Myneni wrote:
>
> Configure CRB is moved to nx842_configure_crb() so that it can
> be used for icswx and VAS exec functions. VAS function will be
> added later with P9 support.
>
> Signed-off-by: Haren Myneni
Your patch does not apply against cryp
Hi Pavel,
[auto build test ERROR on mmotm/master]
[also build test ERROR on v4.13-rc3 next-20170802]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
url:
https://github.com/0day-ci/linux/commits/Pavel-Tatashin/complete-deferred-page
Hi Pavel,
[auto build test ERROR on mmotm/master]
[also build test ERROR on v4.13-rc3]
[cannot apply to next-20170802]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
url:
https://github.com/0day-ci/linux/commits/Pavel-Tatashin/complete
Hi Maddy,
I've gone over this series a few times and it looks pretty good
to me. I'd like others to have a look before I do any more
bikeshedding of it :)
Just with this one there are still a couple of places where this
is comparing the entire mask and not the LINUX bit:
> @@ -156,7 +156,7 @@ s
The host process table base is stored in the partition table by calling
the function native_register_process_table(). Currently this just sets
the entry in memory and is missing a subsequent cache invalidation
instruction. Any update to the partition table should be followed by a
cache invalidation.
QEIC was supported only on PowerPC, and depended on PPC.
Now it is supported on other platforms, so remove the PPCisms.
Signed-off-by: Zhao Qiang
---
arch/powerpc/platforms/83xx/km83xx.c | 1 -
arch/powerpc/platforms/83xx/misc.c | 1 -
arch/powerpc/platforms/83xx/mpc832x_mds.c
qeic_of_init() just gets the qeic device_node from the dtb and calls
qe_ic_init(), passing the device_node to it.
So merge qeic_of_init() into qe_ic_init(), so that the qeic node is
obtained in qe_ic_init().
Signed-off-by: Zhao Qiang
---
drivers/irqchip/irq-qeic.c | 90 --
incl
The qe_ic init code from a variety of platforms is redundant;
merge it into a common function and put it in irqchip/irq-qeic.c.
For non-p1021_mds mpc85xx_mds boards, use "qe_ic_init(np, 0,
qe_ic_cascade_low_mpic, qe_ic_cascade_high_mpic);" instead of
"qe_ic_init(np, 0, qe_ic_cascade_muxed_mpic
Move the driver from drivers/soc/fsl/qe to drivers/irqchip, and
merge qe_ic.h and qe_ic.c into irq-qeic.c.
Signed-off-by: Zhao Qiang
---
MAINTAINERS | 6 ++
drivers/irqchip/Makefile | 1 +
drivers/{soc/fsl/qe/qe_ic.c => irqchip/irq
QEIC is supported on more than just powerpc boards, so remove the PPCisms.
changelog:
Changes for v8:
- use IRQCHIP_DECLARE() instead of subsys_initcall in qeic driver
- remove include/soc/fsl/qe/qe_ic.h
Changes for v9:
- rebase
- fix the compile issue whe
Force use of the soft_enabled_set() wrapper to update paca->soft_enabled
wherever possible. Also add a new wrapper function, soft_enabled_set_return(),
to force the paca->soft_enabled updates.
Signed-off-by: Madhavan Srinivasan
---
arch/powerpc/include/asm/hw_irq.h | 14 ++
arch/p
Local atomic operations are fast and highly reentrant per-CPU counters,
used for percpu variable updates. Local atomic operations only guarantee
variable modification atomicity with respect to the CPU which owns the data,
and they need to be executed in a preemption-safe way.
Here is the design of this patch
To support disabling and enabling of irqs with PMI, a set of
new powerpc_local_irq_pmu_save() and powerpc_local_irq_restore()
functions is added. powerpc_local_irq_save() is implemented
by adding a new soft_disable_mask manipulation function,
soft_disable_mask_or_return().
Local_irq_pmu_* macros ar
A new Kconfig option, "CONFIG_IRQ_DEBUG_SUPPORT", is added, with WARN_ONs
to alert on invalid transitions. The code under CONFIG_TRACE_IRQFLAGS
in arch_local_irq_restore() is also moved under the new Kconfig option.
Reviewed-by: Nicholas Piggin
Signed-off-by: Madhavan Srinivasan
---
arch/powerpc/Kconfig | 4
Two new bit mask fields are introduced: "IRQ_DISABLE_MASK_PMU" to support
the masking of PMIs, and "IRQ_DISABLE_MASK_ALL" to aid interrupt masking checks.
A couple of new irq #defines, "PACA_IRQ_PMI" and "SOFTEN_VALUE_0xf0*", are added
for use in the exception code to check for PMI interrupts.
In the masked_int
To support the addition of a "bitmask" to the MASKABLE_* macros,
factor out the EXCEPTION_PROLOG_1 macro.
Make explicit the interrupt masking supported
by a given interrupt handler. The patch correspondingly
extends the MASKABLE_* macros with an additional parameter.
The "bitmask" parameter is passed to SOFTEN_T
Currently we use both EXCEPTION_PROLOG_1 and __EXCEPTION_PROLOG_1
in the MASKABLE_* macros. As a cleanup, this patch makes MASKABLE_*
use only __EXCEPTION_PROLOG_1. There is no logic change.
Signed-off-by: Madhavan Srinivasan
---
arch/powerpc/include/asm/exception-64s.h | 6 +++---
1 file ch
"paca->soft_enabled" is used as a flag to mask some interrupts.
Currently supported flag values and their details:

soft_enabled    MSR[EE]
0               0        Disabled (PMI and HMI not masked)
1               1        Enabled

"paca->soft_enabled" is initialized to 1 to make the interrupts
Rename paca->soft_enabled to paca->soft_disable_mask, as
it is no longer used as a flag for interrupt state.
Signed-off-by: Madhavan Srinivasan
---
arch/powerpc/include/asm/hw_irq.h | 24
arch/powerpc/include/asm/kvm_ppc.h | 2 +-
arch/powerpc/include/asm/paca.h |
In powerpc book3s, the arch_local_irq_disable() function is not "void",
unlike on other arches. And the only user of this function is
arch_local_irq_save().
This patch modifies arch_local_irq_save() and makes arch_local_irq_disable()
use arch_local_irq_save() instead.
Suggested-by: Nicholas Piggin
Minor cleanup to use a helper function for manipulating
the paca->soft_enabled variable.
Suggested-by: Nicholas Piggin
Signed-off-by: Madhavan Srinivasan
---
arch/powerpc/include/asm/hw_irq.h | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/include/asm/hw_irq.h
Add new soft_enabled_* manipulation functions and implement
the arch_local_* functions using the soft_enabled_* wrappers.
Signed-off-by: Madhavan Srinivasan
---
arch/powerpc/include/asm/hw_irq.h | 32 ++--
1 file changed, 14 insertions(+), 18 deletions(-)
diff --git a/arch/powerpc/
Move set_soft_enabled() from powerpc/kernel/irq.c to
asm/hw_irq.h, to force updates to paca->soft_enabled to be
done via this access function. Add a "memory" clobber
to hint to the compiler, since paca->soft_enabled memory is the target
here.
Renaming it soft_enabled_set() will make the
namespaces work better as p
Two #defines, IRQ_ENABLED and IRQ_DISABLED,
are added to be used when updating paca->soft_enabled.
Replace the hardcoded values used when updating
paca->soft_enabled with the IRQ_[EN/DIS]ABLED #defines.
No logic change.
Signed-off-by: Madhavan Srinivasan
---
arch/powerpc/include/asm/exception-64s.h | 2 +-
Local atomic operations are fast and highly reentrant per-CPU counters,
used for percpu variable updates. Local atomic operations only guarantee
variable modification atomicity with respect to the CPU which owns the data,
and they need to be executed in a preemption-safe way.
Here is the design of the patch set
On Thu, 2017-08-03 at 10:01 +1000, Michael Ellerman wrote:
> Benjamin Herrenschmidt writes:
>
> > On Wed, 2017-08-02 at 18:43 +0200, Cédric Le Goater wrote:
> > > If xive_find_target_in_mask() fails to find a cpu, the fuzz value used
> > > in xive_pick_irq_target() is decremented and reused in th
On Thu, 2017-08-03 at 10:19 +1000, Michael Ellerman wrote:
> arch/powerpc/kernel/entry_32.S:(.text+0x7ac): undefined reference to
> `do_break'
>
> For now I've just wrapped the asm above in:
>
> +#if !(defined(CONFIG_4xx) || defined(CONFIG_BOOKE) ||
> defined(CONFIG_PPC_8xx))
>
> But would b
From fd0abf5c61b6041fdb75296e8580b86dc91d08d6 Mon Sep 17 00:00:00 2001
From: Benjamin Herrenschmidt
Date: Tue, 1 Aug 2017 20:54:41 -0500
Subject: [PATCH] powerpc: xive: ensure active irqd when setting affinity
Ensure irqd is active before attempting to set affinity. This should
make the set affi
Benjamin Herrenschmidt writes:
> On legacy 6xx 32-bit processors, we checked for the DABR match bit
> in DSISR from do_page_fault(), in the middle of a pile of ifdef's
> because all other CPU types do it in assembly prior to calling
> do_page_fault. Fix that.
>
> Signed-off-by: Benjamin Herrenschm
Paul Clarke writes:
> Coincidentally, I just saw a developer stumble upon this within the last
> week. Could this be pushed upstream soon?
acme's tree is upstream for perf.
I assume you mean into Linus' tree? If so this should land in 4.14.
cheers
Benjamin Herrenschmidt writes:
> On Wed, 2017-08-02 at 18:43 +0200, Cédric Le Goater wrote:
>> If xive_find_target_in_mask() fails to find a cpu, the fuzz value used
>> in xive_pick_irq_target() is decremented and reused in the last
>> returning call to xive_find_target_in_mask(). This can result
"Aneesh Kumar K.V" writes:
> Michael Ellerman writes:
>
>> On 64-bit book3s, with the hash MMU, we currently define the kernel
>> virtual space (vmalloc, ioremap etc.), to be 16T in size. This is a
>> leftover from pre v3.7 when our user VM was also 16T.
>>
>> Of that 16T we split it 50/50, with
On Wed, 2017-08-02 at 14:42 -0300, Thiago Jung Bauermann wrote:
> Mimi Zohar writes:
>
> > On Thu, 2017-07-06 at 19:17 -0300, Thiago Jung Bauermann wrote:
> >> --- a/security/integrity/ima/ima_appraise.c
> >> +++ b/security/integrity/ima/ima_appraise.c
> >> @@ -200,18 +200,40 @@ int ima_read_xatt
On Wed, 2017-08-02 at 18:43 +0200, Cédric Le Goater wrote:
> If xive_find_target_in_mask() fails to find a cpu, the fuzz value used
> in xive_pick_irq_target() is decremented and reused in the last
> returning call to xive_find_target_in_mask(). This can result in such
> WARNINGs if the initial fuz
On Tue, Jul 18, 2017 at 04:43:21PM -0500, Rob Herring wrote:
> Now that we have a custom printf format specifier, convert users of
> full_name to use %pOF instead. This is preparation to remove storing
> of the full path string for each node.
>
> Signed-off-by: Rob Herring
> Cc: Thomas Petazzoni
On Tue, Aug 1, 2017 at 8:29 PM, Michael Ellerman wrote:
> Currently KERN_IO_START is defined as:
>
> #define KERN_IO_START (KERN_VIRT_START + (KERN_VIRT_SIZE >> 1))
>
> Although it looks like a constant, both the components are actually
> variables, to allow us to have a different value between
The wf_sensor_ops structures are only stored in the ops field of a
wf_sensor structure, which is declared as const. Thus the
wf_sensor_ops structures themselves can be const.
Done with the help of Coccinelle.
//
@r disable optional_qualifier@
identifier i;
position p;
@@
static struct wf_sensor
Coincidentally, I just saw a developer stumble upon this within the last
week. Could this be pushed upstream soon?
PC
On 08/02/2017 10:06 AM, Arnaldo Carvalho de Melo wrote:
> Em Wed, Aug 02, 2017 at 08:12:16PM +0530, Naveen N. Rao escreveu:
>> Before patch:
>> $ uname -m
>> ppc64le
>
> Tha
Soon vmemmap_alloc_block() will no longer zero the block, so zero memory
at its call sites for everything except struct pages. Struct page memory
is zeroed by struct page initialization.
Signed-off-by: Pavel Tatashin
Reviewed-by: Steven Sistare
Reviewed-by: Daniel Jordan
Reviewed-by: Bob Picco
A new variant of memblock_virt_alloc_* allocations:
memblock_virt_alloc_try_nid_raw()
- Does not zero the allocated memory
- Does not panic if request cannot be satisfied
Signed-off-by: Pavel Tatashin
Reviewed-by: Steven Sistare
Reviewed-by: Daniel Jordan
Reviewed-by: Bob Picco
---
in
There is an existing use-after-free bug when deferred struct pages are
enabled:
The memblock_add() allocates memory for the memory array if more than
128 entries are needed. See comment in e820__memblock_setup():
* The bootstrap memblock region count maximum is 128 entries
* (INIT_MEMBLOCK_REGI
When CONFIG_DEBUG_VM is enabled, this patch sets all the memory that is
returned by memblock_virt_alloc_try_nid_raw() to ones, to ensure that no
places expect zeroed memory.
Signed-off-by: Pavel Tatashin
Reviewed-by: Steven Sistare
Reviewed-by: Daniel Jordan
Reviewed-by: Bob Picco
---
mm/memb
Add struct page zeroing as a part of initialization of other fields in
__init_single_page().
Signed-off-by: Pavel Tatashin
Reviewed-by: Steven Sistare
Reviewed-by: Daniel Jordan
Reviewed-by: Bob Picco
---
include/linux/mm.h | 9 +
mm/page_alloc.c | 1 +
2 files changed, 10 insertio
To optimize the performance of struct page initialization,
vmemmap_populate() will no longer zero memory.
We must explicitly zero the memory that is allocated by vmemmap_populate()
for kasan, as this memory does not go through struct page initialization
path.
Signed-off-by: Pavel Tatashin
Review
Clients can call alloc_large_system_hash() with the flag HASH_ZERO to specify
that the memory that was allocated for the system hash needs to be zeroed;
otherwise the memory does not need to be zeroed, and the client will initialize
it.
If memory does not need to be zeroed, call the new
memblock_virt_alloc_raw()
Replace allocators in sparse-vmemmap with the non-zeroing version. This way,
we get the performance improvement from zeroing the memory in parallel
when struct pages are zeroed.
Signed-off-by: Pavel Tatashin
Reviewed-by: Steven Sistare
Reviewed-by: Daniel Jordan
Reviewed-by: Bob Picco
---
mm/s
In deferred_init_memmap() where all deferred struct pages are initialized
we have a check like this:
if (page->flags) {
VM_BUG_ON(page_zone(page) != zone);
goto free_range;
}
This way we check whether the current deferred page has already been
initialized. It wor
Without the deferred struct page feature (CONFIG_DEFERRED_STRUCT_PAGE_INIT),
flags and other fields in struct pages are never changed prior to first
initializing struct pages by going through __init_single_page().
With deferred struct page feature enabled there is a case where we set some
fields pr
Struct pages are initialized by going through __init_single_page(). Since
the existing physical memory in memblock is represented in memblock.memory
list, struct page for every page from this list goes through
__init_single_page().
The second memblock list: memblock.reserved, manages the allocated
Changelog:
v3 - v2
- Rewrote code to zero struct pages in __init_single_page() as
suggested by Michal Hocko
- Added code to handle issues related to accessing struct page
memory before they are initialized.
v2 - v1
- Addressed David Miller comments about one change per patch:
* Splited cha
Add an optimized mm_zero_struct_page(), so struct pages are zeroed without
calling memset(). We do eight regular stores, thus avoiding the cost of a membar.
Signed-off-by: Pavel Tatashin
Reviewed-by: Steven Sistare
Reviewed-by: Daniel Jordan
Reviewed-by: Bob Picco
---
arch/sparc/include/asm/pgtable_6
Remove duplicated code by using the common functions
vmemmap_pud_populate and vmemmap_pgd_populate.
Signed-off-by: Pavel Tatashin
Reviewed-by: Steven Sistare
Reviewed-by: Daniel Jordan
Reviewed-by: Bob Picco
---
arch/sparc/mm/init_64.c | 23 ++-
1 file changed, 6 insertions(+
With the hash memory model, all TLBIs become global when the cxl
driver is active, i.e. as soon as one context is open.
It is theoretically possible to send a TLBI with the wrong scope as
there's currently no memory barrier between when the driver is marked
as in use, and attaching a context to the
The PSL and XSL need to see all TLBIs pertinent to the memory contexts
used on the adapter. For the hash memory model, it is done by making
all TLBIs global as soon as the cxl driver is in use. For radix, we
need something similar, but we can refine and only convert to global
the invalidations for
Introduce a new 'flags' attribute per context and define its first bit
to be a marker requiring all TLBIs for that context to be broadcast
globally. Once that marker is set on a context, it cannot be removed.
Such a marker is useful for memory contexts used by devices behind the
NPU and CAPP/PSL
capi2 and opencapi require that the TLB invalidations sent for
addresses used on the cxl adapter or opencapi device be global, as
there's a translation cache in the PSL (for capi2) or NPU (for
opencapi). The CAPP, on behalf of the PSL, and NPU snoop the power bus.
This is not new: for the hash
If tracing is enabled and you get into xmon, the tracing buffer
continues to be updated, causing possible loss of data and unnecessary
tracing information coming from xmon functions.
This patch simply disables tracing when entering xmon, and re-enables it
if the kernel is resumed (with 'x').
Sign
Current xmon 'dt' command dumps the tracing buffer for all the CPUs,
which makes it very hard to read, given that most
powerpc machines currently have many CPUs. Other than that, the CPU
lines are interleaved in the ftrace log.
This new option just dumps the ftrace buffer for the curre
> -Original Message-
> From: Julia Lawall [mailto:julia.law...@lip6.fr]
> Sent: Wednesday, August 02, 2017 10:29 AM
> To: Leo Li
> Cc: kernel-janit...@vger.kernel.org; Felipe Balbi ; Greg
> Kroah-Hartman ; linux-...@vger.kernel.org;
> linuxppc-dev@lists.ozlabs.org; linux-ker...@vger.kern
Em Wed, Aug 02, 2017 at 10:46:17AM -0700, Sukadev Bhattiprolu escreveu:
> Hi Arnaldo,
>
> Please pull some updates/cleanups to the POWER9 PMU events.
>
> The following changes since commit 81e3d8b2af2e7417f1d5164aab5c1a75955e8a5d:
>
> perf trace beautify ioctl: Beautify perf ioctl's 'cmd' arg
Exclude core xmon files from ftrace (along with an xmon xive helper
outside of xmon/) to minimize impact of ftrace while within xmon.
Before patch:
root@ubuntu:/sys/kernel/debug/tracing# cat available_filter_functions | grep
-i xmon
xmon_xive_do_dump
xmon_dbgfs_get
xmon_print_symbol
xmo
When DLPAR adding or removing memory, we need to check the device
offline status before trying to online/offline the memory. This is
needed because calls to device_online() and device_offline() will return
non-zero for memory that is already online or offline, respectively.
This update resolves two sc
Hi Michael,
In Rob's reply to your email, he said:
I'd like to move towards dropping 'linux,phandle' including changing
dtc to stop generating both properties by default. Perhaps we should
just be more explicit that we are doing that. Stop exposing it first and
then change how phand
Declare bin_attribute structures as const as they are only passed as an
argument to the function sysfs_create_bin_file. This argument is of
type const, so declare the structure as const.
Signed-off-by: Bhumika Goyal
---
arch/powerpc/platforms/powernv/opal-flash.c | 2 +-
arch/powerpc/sysdev/mv64
Hi Arnaldo,
Please pull some updates/cleanups to the POWER9 PMU events.
The following changes since commit 81e3d8b2af2e7417f1d5164aab5c1a75955e8a5d:
perf trace beautify ioctl: Beautify perf ioctl's 'cmd' arg (2017-08-01
13:33:50 -0300)
are available in the git repository at:
https://github
Mimi Zohar writes:
> On Thu, 2017-07-06 at 19:17 -0300, Thiago Jung Bauermann wrote:
>> --- a/security/integrity/ima/ima_appraise.c
>> +++ b/security/integrity/ima/ima_appraise.c
>> @@ -200,18 +200,40 @@ int ima_read_xattr(struct dentry *dentry,
>> */
>> int ima_appraise_measurement(enum ima_
Hello Mimi,
Thanks for your review!
The patch at the end of the email implements your suggestions, what do
you think?
Mimi Zohar writes:
> On Thu, 2017-07-06 at 19:17 -0300, Thiago Jung Bauermann wrote:
>> A separate struct evm_hmac_xattr is introduced, with the original
>> definition of evm_i
If xive_find_target_in_mask() fails to find a cpu, the fuzz value used
in xive_pick_irq_target() is decremented and reused in the last
returning call to xive_find_target_in_mask(). This can result in such
WARNINGs if the initial fuzz value is zero:
[0.094480] WARNING: CPU: 10 PID: 1 at
..
On Tue, 1 Aug 2017 11:46:46 -0700
"Paul E. McKenney" wrote:
> On Mon, Jul 31, 2017 at 04:27:57PM +0100, Jonathan Cameron wrote:
> > On Mon, 31 Jul 2017 08:04:11 -0700
> > "Paul E. McKenney" wrote:
> >
> > > On Mon, Jul 31, 2017 at 12:08:47PM +0100, Jonathan Cameron wrote:
> > > > On Fri, 28
On 2017/08/02 11:58AM, Breno Leitao wrote:
> If tracing is enabled and you get into xmon, the tracing buffer
> continues to be updated, causing possible loss of data and unnecessary
> tracing information coming from xmon functions.
>
> This patch simple disables tracing when entering xmon, and ree
On 2017/08/02 11:58AM, Breno Leitao wrote:
> Current xmon 'dt' command dumps the tracing buffer for all the CPUs,
> which makes it very hard to read due to the fact that most of
> powerpc machines currently have many CPUs. Other than that, the CPU
> lines are interleaved in the ftrace log.
>
> Thi
qe_ep0_desc is only passed as the second argument to qe_ep_init, which is
const, so qe_ep0_desc can be const too.
Done with the help of Coccinelle.
Signed-off-by: Julia Lawall
---
I got a lot of warnings when compiling this file, but none seemed to be
related to the change.
drivers/usb/gadget
Hi Breno,
[auto build test ERROR on powerpc/next]
[also build test ERROR on v4.13-rc3 next-20170802]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
url:
https://github.com/0day-ci/linux/commits/Breno-Leitao/powerpc-xmon-Dump-ftrace-buffers
Em Wed, Aug 02, 2017 at 08:12:16PM +0530, Naveen N. Rao escreveu:
> Before patch:
> $ uname -m
> ppc64le
Thanks, applied,
- Arnaldo
> $ ./perf script -s ./scripts/python/syscall-counts.py
> Install the audit-libs-python package to get syscall names.
> For example:
> # apt-get insta
If tracing is enabled and you get into xmon, the tracing buffer
continues to be updated, causing possible loss of data and unnecessary
tracing information coming from xmon functions.
This patch simply disables tracing when entering xmon, and re-enables it
if the kernel is resumed (with 'x').
Signe
On 08/02/2017 05:55 AM, Daniel Henrique Barboza wrote:
>
> On 08/01/2017 11:39 AM, Daniel Henrique Barboza wrote:
>>
>> On 08/01/2017 11:05 AM, Nathan Fontenot wrote:
>>
>>> At this point I don't think we need this patch to disable auto online
>>> for ppc64. I would be curious if this is still bro
On Wed, Aug 02, 2017 at 06:51:24PM +0530, Naveen N. Rao wrote:
> On 2017/08/01 11:21AM, Breno Leitao wrote:
> > Hi Naveen,
> >
> > On Tue, Aug 01, 2017 at 12:10:24PM +0530, Naveen N. Rao wrote:
> > > On 2017/07/31 02:22PM, Breno Leitao wrote:
> > > > If tracing is enabled and you get into xmon, th
Before patch:
$ uname -m
ppc64le
$ ./perf script -s ./scripts/python/syscall-counts.py
Install the audit-libs-python package to get syscall names.
For example:
# apt-get install python-audit (Ubuntu)
# yum install audit-libs-python (Fedora)
etc.
Press control+C to stop and
> arch/powerpc/net/bpf: Basic EBPF support
Perhaps:
powerpc/bpf: Set JIT memory read-only
On 2017/08/01 09:25PM, Balbir Singh wrote:
> Signed-off-by: Balbir Singh
> ---
> arch/powerpc/net/bpf_jit_comp64.c | 13 +
> 1 file changed, 1 insertion(+), 12 deletions(-)
>
> diff --git a/ar
get_pteptr() and __mapin_ram_chunk() are only used locally,
so make them static.
Signed-off-by: Christophe Leroy
---
v3: no change
arch/powerpc/include/asm/book3s/32/pgtable.h | 3 ---
arch/powerpc/include/asm/nohash/32/pgtable.h | 3 ---
arch/powerpc/mm/pgtable_32.c | 4 ++--
__set_fixmap() uses __fix_to_virt() and then does the boundary checks
by itself. Instead, we can use fix_to_virt(), which does the
verification at build time. For this, we need to use it inline
so that GCC can see the real value of idx at build time.
In the meantime, we remove the 'fixmaps' variable.
T
This patch implements STRICT_KERNEL_RWX on PPC32.
As for CONFIG_DEBUG_PAGEALLOC, it deactivates BAT and LTLB mappings
in order to allow page protection setup at the level of each page.
As BAT/LTLB mappings are deactivated, there might be a performance
impact.
Signed-off-by: Christophe Leroy
---
As seen below, although the init sections have been freed, the
associated memory area is still marked as executable in the
page tables.
~ dmesg
[5.860093] Freeing unused kernel memory: 592K (c057 - c0604000)
~ cat /sys/kernel/debug/kernel_page_tables
---[ Start of kernel VM ]---
0xc0
__change_page_attr() uses flush_tlb_page().
flush_tlb_page() uses tlbie instruction, which also invalidates
pinned TLBs, which is not what we expect.
This patch modifies the implementation to use flush_tlb_kernel_range()
instead. This will make use of tlbia which will preserve pinned TLBs.
Signed
This patch set implements STRICT_KERNEL_RWX on Powerpc32
after fixing a few issues related to kernel code page protection.
At the end, we take the opportunity to get rid of some unnecessary/outdated
fixmap stuff.
Changes from v2 to v3:
* Rebased on latest linux-powerpc/merge branch
* Function rem
On 2017/08/01 11:21AM, Breno Leitao wrote:
> Hi Naveen,
>
> On Tue, Aug 01, 2017 at 12:10:24PM +0530, Naveen N. Rao wrote:
> > On 2017/07/31 02:22PM, Breno Leitao wrote:
> > > If tracing is enabled and you get into xmon, the tracing buffer
> > > continues to be updated, causing possible loss of da
Hi Aneesh,
On Wed, Aug 02, 2017 at 11:10:15AM +0530, Aneesh Kumar K.V wrote:
> +static int plpar_bluk_remove(unsigned long *param, int index, unsigned long
> slot,
s/bluk/bulk/ :-)
Segher
On Sat, Jul 29, 2017 at 03:24:28PM +0800, SZ Lin wrote:
> Fix styling WARNINGs and Errors of tpm_ibmvtpm.c driver by using checkpatch.pl
The changes are great, but you should revise the patch series so that you
explain in each commit what goes wrong instead of copy-pasting the
checkpatch output, and why
On 14/07/2017 at 08:51, Michael Ellerman wrote:
Currently even with STRICT_KERNEL_RWX we leave the __init text marked
executable after init, which is bad.
Add a hook to mark it NX (no-execute) before we free it, and implement
it for radix and hash.
Note that we use __init_end as the end add
On 08/01/2017 11:39 AM, Daniel Henrique Barboza wrote:
On 08/01/2017 11:05 AM, Nathan Fontenot wrote:
At this point I don't think we need this patch to disable auto online
for ppc64. I would be curious if this is still broken with the latest
mainline code though.
If the auto_online featu
On Wed, Aug 2, 2017 at 8:09 PM, Aneesh Kumar K.V
wrote:
> Balbir Singh writes:
>
>> Add support for set_memory_xx routines. With the STRICT_KERNEL_RWX
>> feature support we got support for changing the page permissions
>> for pte ranges. This patch adds support for both radix and hash
>> so that
Balbir Singh writes:
> Add support for set_memory_xx routines. With the STRICT_KERNEL_RWX
> feature support we got support for changing the page permissions
> for pte ranges. This patch adds support for both radix and hash
> so that we can change their permissions via set/clear masks.
>
> A new h
Thiago Jung Bauermann writes:
> Michael Ellerman writes:
>
>> Thiago Jung Bauermann writes:
>>> Ram Pai writes:
>> ...
+
+ /* We got one, store it and use it from here on out */
+ if (need_to_set_mm_pkey)
+ mm->context.execute_only_pkey = execute_only_pkey;
On 02/08/2017 at 10:10, Christophe LEROY wrote:
On 02/08/2017 at 09:31, Aneesh Kumar K.V wrote:
Christophe LEROY writes:
Hi,
On 28/07/2017 at 07:01, Aneesh Kumar K.V wrote:
With commit aa888a74977a8 ("hugetlb: support larger than MAX_ORDER")
we added support for allocating giganti