christophe leroy writes:
> On 01/06/2017 at 16:30, Aneesh Kumar K.V wrote:
>> With commit aa888a74977a8 ("hugetlb: support larger than MAX_ORDER") we added
>> support for allocating gigantic hugepages via kernel command line. Switch
>> ppc64 arch specific code to use that.
>
> Is it only ppc64
Michael Ellerman writes:
> Tyrel Datwyler writes:
>> On 06/01/2017 09:42 AM, christophe leroy wrote:
>>> On 01/06/2017 at 16:30, Aneesh Kumar K.V wrote:
diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
index 6981a52b3887
The dw_pcie_host_ops structures are never modified. Constify these
structures such that these can be write-protected.
Signed-off-by: Jisheng Zhang
---
drivers/pci/dwc/pci-dra7xx.c | 2 +-
drivers/pci/dwc/pci-exynos.c | 2 +-
drivers/pci/dwc/pci-imx6.c | 2 +-
driv
On Fri, 2016-08-05 at 11:28:05 UTC, Christophe Leroy wrote:
> Signed-off-by: Christophe Leroy
Applied to powerpc next, thanks.
https://git.kernel.org/powerpc/c/362957c27ed0d9ff485d3266ed22d9
cheers
On Fri, 2017-03-10 at 10:37:01 UTC, Christophe Leroy wrote:
> IRQ 0 is a valid HW interrupt. So get_irq() shall return 0 when
> there is no irq, instead of returning irq_linear_revmap(... ,0)
>
> Fixes: f2a0bd3753dad ("[POWERPC] 8xx: powerpc port of core CPM PIC")
> Signed-off-by: Christophe Leroy
On Thu, 2017-03-16 at 08:55:45 UTC, Christophe Leroy wrote:
> Simultaneous interrupts often occur, for instance with a
> double Ethernet attachment. With the current
> implementation, we suffer the cost of kernel entry/exit for each
> interrupt.
>
> This patch introduces a loop i
On Wed, 2017-04-19 at 12:56:24 UTC, Christophe Leroy wrote:
> Function store_updates_sp() checks whether the faulting
> instruction is a store updating r1. Therefore we can limit its calls
> to store exceptions.
>
> This patch is an improvement of commit a7a9dcd882a67 ("powerpc: Avoid
> taking a
On Fri, 2017-04-21 at 11:18:46 UTC, Christophe Leroy wrote:
> With the ffs() function as defined in arch/powerpc/include/asm/bitops.h
> GCC will not optimise the code in case of constant parameter, as shown
> by the small example below.
...
>
> In addition, when reading the generated vmlinux, we c
On Mon, 2017-05-22 at 09:34:23 UTC, Hari Bathini wrote:
> With commit 11550dc0a00b ("powerpc/fadump: reuse crashkernel parameter
> for fadump memory reservation"), 'fadump_reserve_mem=' parameter is
> deprecated in favor of 'crashkernel=' parameter. Add a warning if
> 'fadump_reserve_mem=' is still
On Tue, 2017-05-23 at 23:45:59 UTC, Matt Brown wrote:
> The xor_vmx.c file is used for the RAID5 xor operations. In these functions
> altivec is enabled to run the operation and then disabled. However, due to
> compiler instruction reordering, altivec instructions are being run before
> enable_altiv
On Thu, 2017-05-25 at 03:36:50 UTC, Balbir Singh wrote:
> The check in hpte_find() should be < and not <= for PAGE_OFFSET
>
> Signed-off-by: Balbir Singh
Applied to powerpc next, thanks.
https://git.kernel.org/powerpc/c/e63739b1687ea37390e5463f43d054
cheers
On Sat, 2017-05-27 at 15:46:15 UTC, Michal Suchanek wrote:
> - log an error message when registration fails and no error code listed
> in the switch is returned
> - translate the hv error code to posix error code and return it from
> fw_register
> - return the posix error code from fw_register
On Mon, 2017-05-29 at 15:32:06 UTC, Christophe Leroy wrote:
> This function has not been used since commit 9494a1e8428ea
> ("powerpc: use generic fixmap.h")
>
> Signed-off-by: Christophe Leroy
Applied to powerpc next, thanks.
https://git.kernel.org/powerpc/c/9affa9e228d3ece66ed322909e84cf
cheers
On Fri, 2017-06-02 at 07:30:27 UTC, Hari Bathini wrote:
> By default, 5% of system RAM is reserved for preserving boot memory.
> Alternatively, a user can specify the amount of memory to reserve.
> See Documentation/powerpc/firmware-assisted-dump.txt for details. In
> addition to the memory reserve
Christophe LEROY writes:
> On 02/06/2017 at 11:26, Michael Ellerman wrote:
>> Christophe Leroy writes:
>>
>>> Only the get_user() in store_updates_sp() has to be done outside
>>> the mm semaphore. All the comparison can be done within the semaphore,
>>> so only when really needed.
>>>
>>> As
Christophe LEROY writes:
> On 02/06/2017 at 14:11, Benjamin Herrenschmidt wrote:
>> On Fri, 2017-06-02 at 11:39 +0200, Christophe LEROY wrote:
>>> The difference between get_user() and __get_user() is that get_user()
>>> performs an access_ok() in addition.
>>>
>>> Doesn't access_ok() only chec
On Mon, 2017-06-05 at 20:21 +1000, Michael Ellerman wrote:
> On Thu, 2017-03-16 at 08:55:45 UTC, Christophe Leroy wrote:
> > Simultaneous interrupts often occur, for instance with a
> > double Ethernet attachment. With the current
> > implementation, we suffer the cost of kernel e
Rather than doing this, the base should just be split for an ELF
interpreter like PaX. It makes sense for a standalone executable to be
as low in the address space as possible. Doing that ASAP fixes issues
like this and opens up the possibility of fixing stack mapping ASLR
entropy on various archit
Power9 has In-Memory-Collection (IMC) infrastructure which contains
various Performance Monitoring Units (PMUs) at Nest level (these are
on-chip but off-core), Core level and Thread level.
The Nest PMU counters are handled by a Nest IMC microcode which runs
in the OCC (On-Chip Controller) complex.
From: Madhavan Srinivasan
Create a new header file to add the data structures and
macros needed for In-Memory Collection (IMC) counter support.
Signed-off-by: Anju T Sudhakar
Signed-off-by: Hemant Kumar
Signed-off-by: Madhavan Srinivasan
---
arch/powerpc/include/asm/imc-pmu.h | 99 ++
Code to create platform device for the IMC counters.
Platform devices are created based on the IMC compatibility
string.
A new config flag "CONFIG_HV_PERF_IMC_CTRS" is added to contain the
IMC counter changes.
Signed-off-by: Anju T Sudhakar
Signed-off-by: Hemant Kumar
Signed-off-by: Madhavan Srinivasa
From: Madhavan Srinivasan
Parse device tree to detect IMC units. Traverse through each IMC unit
node to find supported events and corresponding unit/scale files (if any).
The device tree for IMC counters starts at the node "imc-counters".
This node contains all the IMC PMU nodes and event nodes
Device tree IMC driver code parses the IMC units and their events. It
passes the information to IMC pmu code which is placed in powerpc/perf
as "imc-pmu.c".
Patch adds a set of generic imc pmu related event functions to be
used by each imc pmu unit. Add code to setup format attribute and to
regis
From: Madhavan Srinivasan
This patch adds support for detection of core IMC events along with the
Nest IMC events. It adds a new domain IMC_DOMAIN_CORE and it is determined
w
Code to add support for detection of thread IMC events. It adds a new
domain IMC_DOMAIN_THREAD and it is determined with the help of the
"type" property in the imc device-tree.
Signed-off-by: Anju T Sudhakar
Signed-off-by: Hemant Kumar
Signed-off-by: Madhavan Srinivasan
---
arch/powerpc/includ
Code to add PMU functions required for event initialization,
read, update, add, del etc. for thread IMC PMU. Thread IMC PMUs are used
for per-task monitoring.
Adds cpumask attribute to be used by each IMC pmu. For nest PMUs,
only one CPU (any online CPU) from each chip is designated to read counters.
On CPU hotplug, dying CPU is checked to see whether it i
From: Madhavan Srinivasan
Code to add PMU function to initialize a core IMC event. It also
adds cpumask initialization function for core IMC PMU. For
i
Code to add support for thread IMC on cpuhotplug.
When a cpu goes offline, the LDBAR for that cpu is disabled, and when it comes
back online the previous LDBAR value is written back to the LDBAR for that cpu.
To register the hotplug functions for thread_imc, a new state
CPUHP_AP_PERF_POWERPC_THRE
NF_CT_PROTO_{SCTP,UDPLITE,DCCP} can't be set to 'm' anymore, since they
have been redefined as 'bool': fix defconfig for linkstation, mvme5100 and
ppc6xx platforms accordingly.
Signed-off-by: Davide Caratti
---
arch/powerpc/configs/linkstation_defconfig | 2 +-
arch/powerpc/configs/mvme5100_defc
Currently tsk->thread.load_tm is not initialized in the task creation
and can contain garbage on a new task.
This is an undesired behaviour, since it affects the timing to enable
and disable the transactional memory laziness (disabling and enabling
the MSR TM bit, which affects TM reclaim and rech
A deadlock problem was detected in the hot-add CPU operation when
modifying a Shared CPU Topology configuration on powerpc. The system
build is the latest 4.12 pull from Github plus the following patches:
[PATCH 0/2] powerpc/dlpar: Correct display of hot-add/hot-remove CPUs and
memory
This build appears to be using V3 of the patch. V4 of the patch corrected the
placement
of the local variables with respect to '#ifdef CONFIG_PPC_SPLPAR'.
On 06/03/2017 10:13 PM, kbuild test robot wrote:
> Hi Michael,
>
> [auto build test ERROR on powerpc/next]
> [also build test ERROR on v4.12
On 05/06/2017 at 12:45, Michael Ellerman wrote:
Christophe LEROY writes:
On 02/06/2017 at 11:26, Michael Ellerman wrote:
Christophe Leroy writes:
Only the get_user() in store_updates_sp() has to be done outside
the mm semaphore. All the comparison can be done within the semaphore,
so
On Monday, June 5, 2017 4:54 AM, Jisheng Zhang wrote:
>
> The dw_pcie_host_ops structures are never modified. Constify these
> structures such that these can be write-protected.
>
> Signed-off-by: Jisheng Zhang
Acked-by: Jingoo Han
Best regards,
Jingoo Han
> ---
> drivers/pci/dwc/pci-dra7xx
Hi Breno,
Looks good to me.
> Currently tsk->thread.load_tm is not initialized in the task creation
> and can contain garbage on a new task.
>
> This is an undesired behaviour, since it affects the timing to enable
> and disable the transactional memory laziness (disabling and enabling
> the MSR
Memory protection keys enable an application to protect its
address space from inadvertent access or corruption from
itself.
The overall idea:
A process allocates a key and associates it with
an address range within its address space.
The process then can dynamically set read/wri
Rearrange PTE bits to free up bits 3, 4, 5 and 6 for
memory keys. Bit 3, 4, 5, 6 and 57 shall be used for memory
keys.
The patch does the following change to the 64K PTE format
H_PAGE_BUSY moves from bit 3 to bit 7
H_PAGE_F_SECOND which occupied bit 4 moves to the second part
o
Sys_pkey_alloc() allocates and returns available pkey
Sys_pkey_free() frees up the key.
A total of 32 keys are supported on powerpc. However keys 0, 1 and 31
are reserved. So effectively we have 29 keys.
Signed-off-by: Ram Pai
---
arch/powerpc/Kconfig | 15
arch/powerpc/
Signed-off-by: Ram Pai
---
arch/powerpc/include/asm/processor.h | 5 +
arch/powerpc/kernel/process.c| 18 ++
2 files changed, 23 insertions(+)
diff --git a/arch/powerpc/include/asm/processor.h
b/arch/powerpc/include/asm/processor.h
index a2123f2..1f714df 100644
---
This system call, associates the pkey with the pte of all
pages corresponding to the given address range.
Signed-off-by: Ram Pai
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 22 ++-
arch/powerpc/include/asm/mman.h | 29 +
arch/powerpc/include/asm/pkeys.h
Map the PTE protection key bits to the HPTE key protection bits,
while creating HPTE entries.
Signed-off-by: Ram Pai
---
arch/powerpc/include/asm/book3s/64/mmu-hash.h | 5 +
arch/powerpc/include/asm/pkeys.h | 7 +++
arch/powerpc/mm/hash_utils_64.c | 5 +
Handle Data and Instruction exceptions caused by memory
protection-key.
Signed-off-by: Ram Pai
---
arch/powerpc/include/asm/mmu_context.h | 12 +
arch/powerpc/include/asm/pkeys.h | 9
arch/powerpc/include/asm/reg.h | 10 +++-
arch/powerpc/kernel/exceptions-64s.S | 2 +-
The value of the AMR register at the time of the exception
is made available in gp_regs[PT_AMR] of the siginfo.
This field can be used to reprogram the permission bits of
any valid protection key.
Similarly the value of the key, whose protection got violated,
is made available at si_pkey field of
Enable STRICT_KERNEL_RWX for PPC64/BOOK3S
These patches enable RX mappings of kernel text.
rodata is mapped RX as well, as a trade-off; there
are more details in the patch description
As a prerequisite for R/O text, patch_instruction
is moved over to using a separate mapping that
allows write to k
Today our patching happens via direct copy and
patch_instruction. The patching code is well
contained in the sense that copying bits are limited.
While considering implementation of CONFIG_STRICT_RWX,
the first requirement is to create another mapping
that will allow for patching. We create the
arch_arm/disarm_probe use direct assignment for copying
instructions, replace them with patch_instruction
Signed-off-by: Balbir Singh
---
arch/powerpc/kernel/kprobes.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kpro
With text moving to read-only, migrate optprobes to using
the patch_instruction infrastructure. Without this optprobes
will fail and complain.
Signed-off-by: Balbir Singh
---
arch/powerpc/kernel/optprobes.c | 58 ++---
1 file changed, 37 insertions(+), 21 delet
Move from mwrite() to patch_instruction() for xmon for
breakpoint addition and removal.
Signed-off-by: Balbir Singh
---
arch/powerpc/xmon/xmon.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index a728e19..08e367e 1
For CONFIG_STRICT_KERNEL_RWX align __init_begin to 16M.
We use 16M since it's the larger of 2M on radix and 16M
on hash for our linear mapping. The plan is to have
.text, .rodata and everything up to __init_begin marked
as RX. Note we still have executable read only data.
We could further align read
PAPR has pp0 in bit 55; currently we assumed that bit
pp0 is bit 0 (all bits in IBM order). This patch fixes
the pp0 bits for both these routines that use H_PROTECT
Signed-off-by: Balbir Singh
---
arch/powerpc/platforms/pseries/lpar.c | 13 +++--
1 file changed, 11 insertions(+), 2 delet
With hash we update the bolted pte to mark it read-only. We rely
on the MMU_FTR_KERNEL_RO to generate the correct permissions
for read-only text. The radix implementation currently just
prints a warning.
Signed-off-by: Balbir Singh
---
arch/powerpc/include/asm/book3s/64/hash.h | 3
We have the basic support in the form of patching R/O
text sections, linker scripts for extending alignment
of text data. We've also got mark_rodata_ro()
Signed-off-by: Balbir Singh
---
arch/powerpc/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/K
The patch splits the linear page mapping such that
the ones with kernel text are mapped as 2M and others
are mapped with the largest possible size - 1G. The downside
of this is that we split a 1G mapping into 512 2M mappings
for the kernel, but in the absence of that we cannot support
R/O areas in
Daniel Axtens writes:
> Hi Breno,
>
> Looks good to me.
>
>> Currently tsk->thread.load_tm is not initialized in the task creation
>> and can contain garbage on a new task.
>>
>> This is an undesired behaviour, since it affects the timing to enable
>> and disable the transactional memory laziness
There are two cases outside the normal address space management
where a CPU's local TLB is to be flushed:
1. Host boot; in case something has left stale entries in the
TLB (e.g., kexec).
2. Machine check; to clean corrupted TLB entries.
CPU state restore from deep idle states also flushes
Currently we map the whole linear mapping with PAGE_KERNEL_X. Instead we
should check if the page overlaps the kernel text and only then add
PAGE_KERNEL_X.
Note that we still use 1G pages if they're available, so this will
typically still result in a 1G executable page at KERNELBASE. So this fix i
On Tue, Jun 6, 2017 at 3:48 PM, Michael Ellerman wrote:
> Currently we map the whole linear mapping with PAGE_KERNEL_X. Instead we
> should check if the page overlaps the kernel text and only then add
> PAGE_KERNEL_X.
>
> Note that we still use 1G pages if they're available, so this will
> typical