Michael Ellerman writes:
> Stephen Rothwell writes:
>
>> Hi all,
>>
>> Changes since 20170530:
>>
>> The mfd tree gained a build failure so I used the version from
>> next-20170530.
>>
>> The drivers-x86 tree gained the same build failure as the mfd tree so
>> I used the version from next-20170530.
Stephen Rothwell writes:
> Hi Michael,
>
> On Thu, 01 Jun 2017 16:07:51 +1000 Michael Ellerman
> wrote:
>>
>> Stephen Rothwell writes:
>>
>> > Changes since 20170530:
>> >
>> > Non-merge commits (relative to Linus' tree): 3325
>> > 3598 files changed, 135000 insertions(+), 72065 deletions(-)
Hello Wolfram,
On Tue, May 23, 2017 at 3:34 PM, Javier Martinez Canillas
wrote:
>
> This series is a follow-up to patch [0] that added an OF device ID table
> to the at24 EEPROM driver. As you suggested [1], this version instead of
> adding entries for every used tuple, only adds a single
> entr
On Wed 31-05-17 23:35:48, Pasha Tatashin wrote:
> >OK, so why can't we make zero_struct_page 8x 8B stores, other arches
> >would do memset. You said it would be slower but would that be
> >measurable? I am sorry to be so persistent here but I would be really
> >happier if this didn't depend on the
Michael Bringmann writes:
> On 05/29/2017 12:32 AM, Michael Ellerman wrote:
>> Reza Arbab writes:
>>
>>> On Fri, May 26, 2017 at 01:46:58PM +1000, Michael Ellerman wrote:
Reza Arbab writes:
> On Thu, May 25, 2017 at 04:19:53PM +1000, Michael Ellerman wrote:
>> The commit mess
On Saturday 27 May 2017 09:16 PM, Michal Suchanek wrote:
- log an error message when registration fails and no error code listed
in the switch is returned
- translate the hv error code to a POSIX error code and return it from
fw_register
- return the POSIX error code from fw_register to
Nicholas Piggin writes:
> On Tue, 30 May 2017 16:28:09 +1000
> Michael Ellerman wrote:
>
>> From: Nicholas Piggin
>>
>> Provide a dt_cpu_ftrs= cmdline option to disable the dt_cpu_ftrs CPU
>> feature discovery, and fall back to the "cputable" based version.
>>
>> Also allow control of advertising unknown features to userspace
Porting PPC to libdw only needs an architecture-specific hook to move
the register state from perf to libdw.
The ARM and x86 architectures already use libdw, and it is useful to
have as much common code for the unwinder as possible. Mark Wielaard
has contributed a frame-based unwinder to libdw, s
This is v2 of some of the patches from:
https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg11.html
The first two patches should probably go to -stable. The others can go
into v4.12.
Thanks,
Naveen
Naveen N. Rao (4):
powerpc/kprobes: Pause function_graph tracing during jprobes handling
This fixes a crash when function_graph and jprobes are used together.
This is essentially commit 237d28db036e ("ftrace/jprobes/x86: Fix
conflict between jprobes and function graph tracing"), but for powerpc.
Jprobes breaks function_graph tracing since the jprobe hook needs to use
jprobe_return(),
ftrace_caller() depends on a modified regs->nip to detect if a certain
function has been livepatched. However, with KPROBES_ON_FTRACE, it is
possible for regs->nip to have been modified by the kprobes pre_handler
(jprobes, for instance). In this case, we do not want to invoke the
livepatch_handler
For DYNAMIC_FTRACE_WITH_REGS, we should be passing in the original set
of registers in pt_regs, to capture the state _before_ ftrace_caller.
However, we are instead passing the stack pointer *after* allocating a
stack frame in ftrace_caller. Fix this by saving the proper value of r1
in pt_regs. Als
Disable ftrace when we enter xmon so as not to clobber/pollute the
trace buffer. In addition, we cannot have function_graph enabled while
in xmon since we use setjmp/longjmp which confuses the function call
history maintained by function_graph.
Signed-off-by: Naveen N. Rao
---
v2: Disable ftrace,
On Thu, 2017-05-11 at 11:24:41 UTC, Nicholas Piggin wrote:
> Provide a dt_cpu_ftrs= cmdline option to disable the dt_cpu_ftrs CPU
> feature discovery, and fall back to the "cputable" based version.
>
> Also allow control of advertising unknown features to userspace with
> this parameter, and r
On Mon, 2017-05-22 at 20:44:37 UTC, Michael Bringmann wrote:
> When adding or removing memory, the aa_index (affinity value) for the
> memblock must also be converted to match the endianness of the rest
> of the 'ibm,dynamic-memory' property. Otherwise, subsequent retrieval
> of the attribute will
On Wed, 2017-05-24 at 08:01:55 UTC, Christophe Leroy wrote:
> of_mm_gpiochip_add_data() generates an Oops for NULL pointer dereference.
>
> of_mm_gpiochip_add_data() calls mm_gc->save_regs() before
> setting the data, therefore ->save_regs() cannot use gpiochip_get_data()
>
> Fixes: 937daafca774b
On Mon, 2017-05-29 at 01:53:10 UTC, Michael Ellerman wrote:
> We are running low on CPU feature bits, so we only want to use them when
> it's really necessary.
>
> CPU_FTR_SUBCORE is only used in one place, and only in C, so we don't
> need it in order to make asm patching work. It can only be set
On Mon, 2017-05-29 at 10:26:07 UTC, Michael Ellerman wrote:
> If a process dumps core while it has SPU contexts active then we have
> code to also dump information about the SPU contexts.
>
> Unfortunately it's been broken for 3 1/2 years, and we didn't notice. In
> commit 7b1f4020d0d1 ("spufs: ge
For ppc64, we want to call this function when we are not running as a guest.
Also, if we failed to allocate hugepages, let the user know.
Signed-off-by: Aneesh Kumar K.V
---
include/linux/hugetlb.h | 1 +
mm/hugetlb.c | 5 ++++-
2 files changed, 5 insertions(+), 1 deletion(-)
diff --gi
With commit aa888a74977a8 ("hugetlb: support larger than MAX_ORDER") we added
support for allocating gigantic hugepages via kernel command line. Switch
ppc64 arch specific code to use that.
W.r.t FSL support, we now limit our allocation range using
BOOTMEM_ALLOC_ACCESSIBLE.
We use the kernel com
Now that we made sure that lockless walk of the linux page table is mostly limited
to the current task (current->mm->pgdir), we can update the THP update sequence to
only send IPIs to the cpus on which this task has run. This helps reduce the IPI
overhead on systems with a large number of CPUs.
W.r.t kvm eve
Add newer helpers to make the usage simpler. It is always recommended to use
find_current_mm_pte() for walking the page table. Where it cannot be used, it
should be documented why the given usage of __find_linux_pte() is safe against a
parallel THP split.
For now we have KVM code use __find_linux_pte(). Thi
We use the mm cpumask for serializing against lockless page table walks. Anybody
doing a lockless page table walk is expected to disable irqs, and only the
cpus in the mm cpumask are expected to do the lockless walk. This ensures that
a THP split can send IPIs to only the cpus in the mm cpumask, to make sure there
a
Supporting 512TB requires us to do an order-3 allocation for the level 1 page
table (pgd). This results in page allocation failures with certain workloads.
For now, limit the 4k linux page size config to 64TB.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/hash-4k.h | 2 +-
arch/po
> > I am curious as to what is going on there. Do you have the output from
> > these failed allocations?
>
> I thought the relevant output was in my mail. I did skip the Mem-Info
> dump, since that just seemed noise in this case: we know memory can get
> fragmented. What more output are you loo
drivers/watchdog/wdrtas.c is of interest to linuxppc maintainers.
Signed-off-by: Murilo Opsfelder Araujo
---
MAINTAINERS | 1 +
1 file changed, 1 insertion(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 9609ca6..3f05927 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -7670,6 +7670,7 @@ F:
On 01/06/2017 at 16:30, Aneesh Kumar K.V wrote:
With commit aa888a74977a8 ("hugetlb: support larger than MAX_ORDER") we added
support for allocating gigantic hugepages via kernel command line. Switch
ppc64 arch specific code to use that.
Is it only ppc64? Your patch removes things defined
fadump sets up crash memory ranges to be used for creating PT_LOAD
program headers in elfcore header. Memory chunk RMA_START through
boot memory area size is added as the first memory range because
firmware, at the time of crash, moves this memory chunk to different
location specified during fadump
To register fadump, boot memory area - the size of low memory chunk that
is required for a kernel to boot successfully when booted with restricted
memory, is assumed to have no holes. But this memory area is currently
not protected from hot-remove operations. So, fadump could fail to
re-register af
fadump fails to register when there are holes in boot memory area.
Provide a helpful error message to the user in such a case.
Signed-off-by: Hari Bathini
---
* No changes since v2.
arch/powerpc/kernel/fadump.c | 36
1 file changed, 36 insertions(+)
diff
On Thu, 1 Jun 2017, Hugh Dickins wrote:
> CONFIG_SLUB_DEBUG_ON=y. My SLAB|SLUB config options are
>
> CONFIG_SLUB_DEBUG=y
> # CONFIG_SLUB_MEMCG_SYSFS_ON is not set
> # CONFIG_SLAB is not set
> CONFIG_SLUB=y
> # CONFIG_SLAB_FREELIST_RANDOM is not set
> CONFIG_SLUB_CPU_PARTIAL=y
> CONFIG_SLABINFO=y
On Thu, 1 Jun 2017, Christoph Lameter wrote:
>
> Ok so debugging was off but the slab cache has a ctor callback which
> mandates that the free pointer cannot use the free object space when
> the object is not in use. Thus the size of the object must be increased to
> accommodate the freepointer.
T
Hi Aleksa,
On Thu, Jun 1, 2017 at 7:38 PM, Aleksa Sarai wrote:
> --- a/arch/alpha/include/uapi/asm/ioctls.h
> +++ b/arch/alpha/include/uapi/asm/ioctls.h
> @@ -94,6 +94,7 @@
> #define TIOCSRS485 _IOWR('T', 0x2F, struct serial_rs485)
> #define TIOCGPTN _IOR('T',0x30, unsigned int) /* Ge
From: Stephen Rothwell
Date: Wed, 31 May 2017 15:43:37 +1000
> asm-generic/socket.h already has an exception for the differences that
> powerpc needs, so just include it after defining the differences.
>
> Signed-off-by: Stephen Rothwell
> ---
> arch/powerpc/include/uapi/asm/socket.h | 92
> +
On 06/01/2017 09:42 AM, christophe leroy wrote:
>
>
> On 01/06/2017 at 16:30, Aneesh Kumar K.V wrote:
>> With commit aa888a74977a8 ("hugetlb: support larger than MAX_ORDER") we added
>> support for allocating gigantic hugepages via kernel command line. Switch
>> ppc64 arch specific code to use
Around 95% of memory is reserved by the fadump/capture kernel. All this
memory is freed, one page at a time, on writing '1' to the node
/sys/kernel/fadump_release_mem. On systems with large memory, this
can take a long time to complete, leading to soft lockup warning
messages. To avoid this, add resche
On 06/01/2017 01:00 PM, Aleksa Sarai wrote:
--- a/arch/alpha/include/uapi/asm/ioctls.h
+++ b/arch/alpha/include/uapi/asm/ioctls.h
@@ -94,6 +94,7 @@
#define TIOCSRS485 _IOWR('T', 0x2F, struct serial_rs485)
#define TIOCGPTN _IOR('T',0x30, unsigned int) /* Get Pty Number (of
pty-mux d
Hello,
The hypervisor interface to access 24x7 performance counters (which collect
performance information from system power on to system power off) has been
extended in POWER9 adding new fields to the request and result element
structures.
Also, results for some domains now return more than one
H_GET_24X7_CATALOG_PAGE needs to be passed the version number from the first
catalog page obtained previously. This is a 64-bit number, but
create_events_from_catalog truncates it to 32 bits.
This worked on POWER8, but POWER9 actually uses the upper bits so the call
fails with H_P3 because
hv-24x7.h has a comment mentioning that result_buffer->results can't be
indexed as a normal array because it may contain results of variable sizes,
so fix the loop in h_24x7_event_commit_txn to take the variation into
account when iterating through results.
Another problem in that loop is that it
The H_GET_24X7_CATALOG_PAGE hcall can return a signed error code, so fix
this in the code.
The H_GET_24X7_DATA hcall can return a signed error code, so fix this in
the code. Also, don't truncate it to 32 bits to use as the return value for
make_24x7_request. In case of error h_24x7_event_commit_txn pas
request_buffer can hold 254 requests, so if it already has that number of
entries we can't add a new one.
Also, define a constant to show where the number comes from.
Fixes: e3ee15dc5d19 ("powerpc/perf/hv-24x7: Define add_event_to_24x7_request()")
Signed-off-by: Thiago Jung Bauermann
---
Notes:
POWER9 introduces a new version of the hypervisor API to access the 24x7
perf counters. The new version changed some of the structures used for
requests and results.
Signed-off-by: Thiago Jung Bauermann
---
arch/powerpc/perf/hv-24x7.c| 145 +++--
arch/powe
make_24x7_request already calls log_24x7_hcall if it fails, so callers
don't have to do it again.
In fact, since the latter is now only called from the former, there's no
need for a separate log_24x7_hcall anymore so remove it.
Signed-off-by: Thiago Jung Bauermann
---
arch/powerpc/perf/hv-24x7.
There's an H24x7_DATA_BUFFER_SIZE constant, so use it in init_24x7_request.
There's also an HV_PERF_DOMAIN_MAX constant, so use it in
h_24x7_event_init. This makes the comment above the check redundant,
so remove it.
In add_event_to_24x7_request, a statement is terminated with a comma
instead of
On POWER9 SMT8 the 24x7 API returns two result elements for physical core
and virtual CPU events and we need to add their counts to get the final
result.
Signed-off-by: Thiago Jung Bauermann
---
arch/powerpc/perf/hv-24x7.c | 58 ++---
1 file changed, 44 in
On Thu, Jun 01, 2017 at 07:36:31PM +1000, Michael Ellerman wrote:
I don't think that's what the patch does. It just marks 32 (!?) nodes
as online. Or if you're talking about reverting 3af229f2071f that
leaves you with 256 possible nodes. Both of which are wasteful.
To be clear, with Balbir's s
When opening the slave end of a PTY, it is not possible for userspace to
safely ensure that /dev/pts/$num is actually a slave (in cases where the
mount namespace in which devpts was mounted is controlled by an
untrusted process). In addition, there are several unresolvable
race conditions if usersp
--- a/arch/alpha/include/uapi/asm/ioctls.h
+++ b/arch/alpha/include/uapi/asm/ioctls.h
@@ -94,6 +94,7 @@
#define TIOCSRS485 _IOWR('T', 0x2F, struct serial_rs485)
#define TIOCGPTN _IOR('T',0x30, unsigned int) /* Get Pty Number (of
pty-mux device) */
#define TIOCSPTLCK _IOW('T',0
Tyrel Datwyler writes:
> On 06/01/2017 09:42 AM, christophe leroy wrote:
>> On 01/06/2017 at 16:30, Aneesh Kumar K.V wrote:
>>> diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
>>> b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
>>> index 6981a52b3887..67766e60a6b6 100644
>>> --- a/ar
Hugh Dickins writes:
> On Thu, 1 Jun 2017, Christoph Lameter wrote:
>>
>> Ok so debugging was off but the slab cache has a ctor callback which
>> mandates that the free pointer cannot use the free object space when
>> the object is not in use. Thus the size of the object must be increased to
>>
Nicholas Piggin writes:
> The i-side 0111b case was missed by 7b9f71f974 ("powerpc/64s: POWER9
> machine check handler").
>
> It is possible to trigger this exception by branching to a foreign real
> address (bits [8:12] != 0) with instruction relocation off, and verify
> the exception cause is f
On Fri, 02 Jun 2017 13:14:40 +1000
Michael Ellerman wrote:
> Nicholas Piggin writes:
>
> > The i-side 0111b case was missed by 7b9f71f974 ("powerpc/64s: POWER9
> > machine check handler").
> >
> > It is possible to trigger this exception by branching to a foreign real
> > address (bits [8:12] !
In commit 8c272261194d ("powerpc/numa: Enable USE_PERCPU_NUMA_NODE_ID"), we
switched to the generic implementation of cpu_to_node(), which uses a percpu
variable to hold the NUMA node for each CPU.
Unfortunately we neglected to notice that we use cpu_to_node() in the allocation
of our percpu areas
On 06/01/2017 04:36 AM, Michael Ellerman wrote:
> Michael Bringmann writes:
>
>> On 05/29/2017 12:32 AM, Michael Ellerman wrote:
>>> Reza Arbab writes:
>>>
On Fri, May 26, 2017 at 01:46:58PM +1000, Michael Ellerman wrote:
> Reza Arbab writes:
>
>> On Thu, May 25, 2017 at 04:1
On Fri, 2 Jun 2017 15:14:47 +1000
Michael Ellerman wrote:
> In commit 8c272261194d ("powerpc/numa: Enable USE_PERCPU_NUMA_NODE_ID"), we
> switched to the generic implementation of cpu_to_node(), which uses a percpu
> variable to hold the NUMA node for each CPU.
>
> Unfortunately we neglected to