eeh_handle_special_event() is called when an EEH event is detected but
can't be narrowed down to a specific PE. This function looks through
every PE to find one in an erroneous state, then calls the regular event
handler eeh_handle_normal_event() once it knows which PE has an error.
However, if e
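As a rough illustration of that scan, here is a compilable userspace
sketch; struct eeh_pe, the EEH_PE_ISOLATED flag and the flattened list
walk are simplified stand-ins for the real kernel structures, not the
actual EEH API.

#include <stdio.h>
#include <stddef.h>

#define EEH_PE_ISOLATED 0x1 /* PE is in an erroneous (isolated) state */

struct eeh_pe {
	unsigned int addr;
	unsigned int state;
	struct eeh_pe *next;	/* flattened stand-in for the PE tree */
};

/* Regular recovery path, once the failing PE is known. */
static void eeh_handle_normal_event(struct eeh_pe *pe)
{
	printf("recovering PE#%x\n", pe->addr);
}

/* Walk every PE; hand any PE found in an error state to the
 * regular event handler. */
static void eeh_handle_special_event(struct eeh_pe *pes)
{
	struct eeh_pe *pe;

	for (pe = pes; pe; pe = pe->next)
		if (pe->state & EEH_PE_ISOLATED)
			eeh_handle_normal_event(pe);
}

int main(void)
{
	struct eeh_pe pe1 = { 0x2, 0, NULL };
	struct eeh_pe pe0 = { 0x1, EEH_PE_ISOLATED, &pe1 };

	eeh_handle_special_event(&pe0);
	return 0;
}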
Remove unnecessary tags in eeh_handle_normal_event(), and add function
comments for eeh_handle_normal_event() and eeh_handle_special_event().
The only functional difference is that in the case of a PE reaching the
maximum number of failures, rather than one message telling you of this
and suggesti
v5:
---
Mainly, the v5 patch series is updated according to
comments from Jiri Olsa.
The kernel part has no functional change; it just
resolves a merge issue.
In userspace, the functions for branch type counting and
branch type name resolving are moved to the new files:
It is often useful to know the branch types while analyzing branch
data. For example, a call is very different from a conditional branch.
Currently we have to look it up in the binary, but the binary may later
be unavailable, and even when the binary is available the user has to
spend some time on it. It is ver
Perf already has support for disassembling the branch instruction
and using the branch type for filtering. The patch just records
the branch type in perf_branch_entry.
Before recording, the patch converts the x86 branch type to the
common branch type.
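For illustration, a minimal userspace sketch of such a conversion; the
enum values below are illustrative stand-ins, not the exact kernel
definitions from this series.

#include <stdio.h>

/* x86-specific branch types (stand-ins) */
enum { X86_BR_CALL, X86_BR_RET, X86_BR_JCC, X86_BR_IND_JMP };
/* common branch types recorded in perf_branch_entry (stand-ins) */
enum { PERF_BR_UNKNOWN, PERF_BR_CALL, PERF_BR_RET, PERF_BR_COND, PERF_BR_IND };

static int common_branch_type(int x86_type)
{
	switch (x86_type) {
	case X86_BR_CALL:	return PERF_BR_CALL;
	case X86_BR_RET:	return PERF_BR_RET;
	case X86_BR_JCC:	return PERF_BR_COND;
	case X86_BR_IND_JMP:	return PERF_BR_IND;
	default:		return PERF_BR_UNKNOWN;
	}
}

int main(void)
{
	/* a conditional jump maps to the common PERF_BR_COND */
	printf("%d\n", common_branch_type(X86_BR_JCC));
	return 0;
}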
Change log
--
v5: Just fix the merge err
The option tells the kernel to save the branch type during sampling.
One example:
perf record -g --branch-filter any,save_type
Change log
--
v5: Not changed.
Signed-off-by: Jin Yao
---
tools/perf/Documentation/perf-record.txt | 1 +
tools/perf/util/parse-branch-options.c | 1 +
2 f
The branch info such as predicted/cycles/... are printed at the
callchain entries.
For example: perf report --branch-history --no-children --stdio
--1.07%--main div.c:39 (predicted:52.4% cycles:1 iterations:17)
main div.c:44 (predicted:52.4% cycles:1)
main div.c:42
Create new util/branch.c and util/branch.h to contain the common
branch functions, such as:
branch_type_count(): count the number of branches of each type
branch_type_name(): return the name of a branch type
The branch type is saved in branch_flags.
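A compilable userspace model of the two helpers named above; struct
branch_type_stat and the PERF_BR_* values are simplified for
illustration.

#include <stdio.h>
#include <stdint.h>

enum { PERF_BR_COND, PERF_BR_CALL, PERF_BR_RET, PERF_BR_MAX };

struct branch_type_stat {
	uint64_t counts[PERF_BR_MAX];
};

/* branch_type_count(): count one occurrence of a branch type */
static void branch_type_count(struct branch_type_stat *st, int type)
{
	if (type >= 0 && type < PERF_BR_MAX)
		st->counts[type]++;
}

/* branch_type_name(): return the printable name of a branch type */
static const char *branch_type_name(int type)
{
	static const char * const names[PERF_BR_MAX] = {
		"COND", "CALL", "RET",
	};

	return (type >= 0 && type < PERF_BR_MAX) ? names[type] : "N/A";
}

int main(void)
{
	struct branch_type_stat st = { { 0 } };

	branch_type_count(&st, PERF_BR_CALL);
	printf("%s: %llu\n", branch_type_name(PERF_BR_CALL),
	       (unsigned long long)st.counts[PERF_BR_CALL]);
	return 0;
}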
Change log
--
v5: It's a new patch in v5 patch
Show the branch type statistics at the end of perf report --stdio.
For example:
perf report --stdio
JCC forward: 27.6%
JCC backward: 10.0%
CROSS_4K: 0.0%
CROSS_2M: 14.3%
JCC: 37.6%
JMP: 0.0%
IND_JMP: 6.5%
CALL: 26.6%
IND_CALL: 0.0%
Show branch type in callchain entry. The branch type is printed
with other LBR information (such as cycles/abort/...).
For example:
perf report --branch-history --stdio --no-children
--23.56%--main div.c:42 (RET CROSS_2M cycles:2)
compute_flag div.c:28 (cycles:2)
compute_flag
Excerpts from David Laight's message of April 18, 2017 18:22:
From: Naveen N. Rao
Sent: 12 April 2017 11:58
...
+kprobe_opcode_t *kprobe_lookup_name(const char *name)
+{
...
+ char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];
+ const char *modsym;
+ bool dot_appended = fa
From: Naveen N. Rao
> Sent: 19 April 2017 09:09
> To: David Laight; Michael Ellerman
> Cc: linux-ker...@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; Masami
> Hiramatsu; Ingo Molnar
> Subject: RE: [PATCH v2 1/5] kprobes: convert kprobe_lookup_name() to a
> function
>
> Excerpts from David Laig
Oliver O'Halloran writes:
> On Wed, Apr 19, 2017 at 2:46 AM, Rob Herring wrote:
>> On Mon, Apr 17, 2017 at 7:32 PM, Tyrel Datwyler
>> wrote:
>>> This patch introduces event tracepoints for tracking a device_nodes
>>> reference cycle as well as reconfig notifications generated in response
>>> to
This patch set implements CONFIG_DEBUG_RODATA on PPC32
after fixing a few issues related to kernel code page protection.
The second patch of the set was initially submitted standalone.
This new version takes into account Michael's comments. It is part
of the set because it is now based on fun
__change_page_attr() uses flush_tlb_page().
flush_tlb_page() uses the tlbie instruction, which also invalidates
pinned TLBs, which is not what we expect.
This patch modifies the implementation to use flush_tlb_kernel_range()
instead. This will make use of tlbia, which preserves pinned TLBs.
Signed
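The shape of the change, modeled with stub flush helpers; the stubs
and the function body below are illustrative, not the kernel
implementation.

#include <stdio.h>

/* stand-in for the kernel's range flush, which uses tlbia on 8xx */
static void flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
	printf("tlbia-based flush of %#lx-%#lx\n", start, end);
}

static void change_page_attr(unsigned long addr, unsigned long size)
{
	/* ... update the PTEs for [addr, addr + size) ... */

	/* was: flush_tlb_page() per page, whose tlbie also knocks out
	 * pinned TLB entries; the range flush preserves them */
	flush_tlb_kernel_range(addr, addr + size);
}

int main(void)
{
	change_page_attr(0xc0000000UL, 0x1000UL);
	return 0;
}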
As seen below, although the init sections have been freed, the
associated memory area is still marked as executable in the
page tables.
~ dmesg
[5.860093] Freeing unused kernel memory: 592K (c057 - c0604000)
~ cat /sys/kernel/debug/kernel_page_tables
---[ Start of kernel VM ]---
0xc0
This patch implements CONFIG_DEBUG_RODATA on PPC32.
As for CONFIG_DEBUG_PAGEALLOC, it deactivates BAT and LTLB mappings
in order to allow page protection setup at the level of each page.
As BAT/LTLB mappings are deactivated, there might be a performance
impact. For this reason, we keep it OFF by de
On 2017/04/19 08:48AM, David Laight wrote:
> From: Naveen N. Rao
> > Sent: 19 April 2017 09:09
> > To: David Laight; Michael Ellerman
> > Cc: linux-ker...@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; Masami
> > Hiramatsu; Ingo Molnar
> > Subject: RE: [PATCH v2 1/5] kprobes: convert kprobe_looku
Hi Balbir,
> > FTRACE is quite CPU consuming, shouldn't it really be on by
> > default ?
>
> It does some work at boot to NOP out function entry points at _mcount
> locations. Is that what you are referring to? Or the overhead of the
> code in terms of size? Most distro kernels have tracing on
On Wed, 2017-04-19 at 21:13 +1000, Anton Blanchard wrote:
> Hi Balbir,
>
> > > FTRACE is quite CPU consuming, shouldn't it really be on by
> > > default ?
> >
> > It does some work at boot to NOP out function entry points at _mcount
> > locations. Is that what you are referring to? Or the over
v2:
https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1375870.html
v3 changes:
- Patch 3/5 in the previous series ("powerpc: introduce a new helper to
obtain function entry points") has been dropped from this series and
will instead be posted as part of the KPROBES_ON_FTRACE patchse
commit 239aeba76409 ("perf powerpc: Fix kprobe and kretprobe handling
with kallsyms on ppc64le") changed how we use the offset field in struct
kprobe on ABIv2. perf now offsets from the GEP (Global entry point) if an
offset is specified and otherwise chooses the LEP (Local entry point).
Fix the sa
When a kprobe is being registered, we use the symbol_name field to
look up the address where the probe should be placed. Since this is a
user-provided field, let's ensure that the length of the string is
within expected limits.
Signed-off-by: Naveen N. Rao
---
include/linux/kprobes.h | 1 +
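A userspace sketch of such a bound check; KSYM_NAME_LEN mirrors the
kernel constant, while the struct and register_kprobe() below are
stubs for illustration.

#include <stdio.h>
#include <string.h>

#define KSYM_NAME_LEN 128

struct kprobe {
	const char *symbol_name;
};

static int register_kprobe(struct kprobe *p)
{
	/* reject user-supplied names too long to be a valid symbol */
	if (p->symbol_name &&
	    strnlen(p->symbol_name, KSYM_NAME_LEN) == KSYM_NAME_LEN)
		return -1;	/* -EINVAL in the kernel */

	/* ... look up the address and place the probe ... */
	return 0;
}

int main(void)
{
	struct kprobe p = { .symbol_name = "do_page_fault" };

	printf("register: %d\n", register_kprobe(&p));
	return 0;
}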
The macro is now pretty long and ugly on powerpc. In the light of
further changes needed here, convert it to a __weak variant to be
overridden with a nicer-looking function.
Suggested-by: Masami Hiramatsu
Acked-by: Masami Hiramatsu
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/kpr
On kprobe handler re-entry, try to emulate the instruction rather than
always single-stepping.
Acked-by: Ananth N Mavinakayanahalli
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes.c | 8
1 file changed, 8 insertions(+)
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/pow
Convert usage of strncpy()/strncat() to memcpy()/strlcat() for simpler
and safer string manipulation.
Reported-by: David Laight
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes.c | 11 +--
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/kernel/kpro
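For illustration, the pattern in compilable form; strlcat() is the
kernel/BSD helper, so a minimal local stand-in is defined here to keep
the sketch self-contained, and the buffer contents are made up.

#include <stdio.h>
#include <string.h>

/* minimal stand-in for the kernel's strlcat() */
static size_t my_strlcat(char *dst, const char *src, size_t size)
{
	size_t dlen = strlen(dst);

	if (size > dlen + 1)
		snprintf(dst + dlen, size - dlen, "%s", src);
	return dlen + strlen(src);
}

int main(void)
{
	char buf[16];

	/* was: strncpy(buf, ".", ...); strncat(buf, name, ...);
	 * memcpy() of a known length plus strlcat() is simpler and
	 * always NUL-terminates */
	memcpy(buf, ".", 2);	/* fixed prefix of known length */
	my_strlcat(buf, "symbol", sizeof(buf));
	printf("%s\n", buf);
	return 0;
}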
set_current_kprobe() already saves regs->msr into kprobe_saved_msr. Remove
the redundant save.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 067e9863bfdf..5c0a1
No functional changes.
Acked-by: Ananth N Mavinakayanahalli
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes.c | 52 ++-
1 file changed, 31 insertions(+), 21 deletions(-)
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobe
Move the stack setup and teardown code into ftrace_graph_caller().
This way, we don't incur the cost of setting it up unless function graph
is enabled for this function.
Also, remove the extraneous LR restore code after the function graph
stub. LR has previously been restored and neither livepat
Allow kprobes to be placed on ftrace _mcount() call sites. This
optimization avoids the use of a trap, by riding on ftrace
infrastructure.
This depends on HAVE_DYNAMIC_FTRACE_WITH_REGS which depends on
MPROFILE_KERNEL, which is only currently enabled on powerpc64le with
newer toolchains.
Based on
v3:
https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg116800.html
For v4, patch 5/6 is new and has been moved into this series. It has
also been updated to use strlcat() instead of strncat(). No other
changes.
Also, though patch 3/6 is generic, it needs to be carried in this
series as
From: Masami Hiramatsu
Skip preparing optprobe if the probe is ftrace-based, since anyway, it
must not be optimized (or already optimized by ftrace).
Tested-by: Naveen N. Rao
Signed-off-by: Masami Hiramatsu
---
kernel/kprobes.c | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
Pass the real LR to the ftrace handler. This is needed for
KPROBES_ON_FTRACE for the pre handlers.
Also, with KPROBES_ON_FTRACE, the link register may be updated by the
pre handlers or by a registered kretprobe. Honor the updated LR by
restoring it from pt_regs, rather than from the stack save area.
Li
kprobe_lookup_name() is specific to the kprobe subsystem and may not
always return the function entry point (in a subsequent patch for
KPROBES_ON_FTRACE). For looking up function entry points, introduce a
separate helper and use the same in optprobes.c
Signed-off-by: Naveen N. Rao
---
arch/power
KPROBES_ON_FTRACE avoids much of the overhead with regular kprobes as it
eliminates the need for a trap, as well as the need to emulate or
single-step instructions.
Though OPTPROBES provides us with similar performance, we have limited
optprobes trampoline slots. As such, when asked to probe at a
Function store_updates_sp() checks whether the faulting
instruction is a store updating r1. Therefore we can limit its calls
to store exceptions.
This patch is an improvement of commit a7a9dcd882a67 ("powerpc: Avoid
taking a data miss on every userspace instruction miss")
With the same microbenc
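A compilable model of that fast path; the DSISR store flag matches the
architecture bit, but the decode and the helper around it are
illustrative stand-ins.

#include <stdio.h>
#include <stdbool.h>

#define DSISR_ISSTORE 0x02000000 /* fault was caused by a store */

/* stand-in decode: is this insn a "store with update" to r1? */
static bool store_updates_sp(unsigned int insn)
{
	return (insn & 0x3f) == 0x25;	/* illustrative test only */
}

static bool may_grow_stack(unsigned long dsisr, unsigned int insn)
{
	/* only stores can be "store with update", so skip the
	 * instruction decode entirely for loads */
	return (dsisr & DSISR_ISSTORE) && store_updates_sp(insn);
}

int main(void)
{
	printf("%d\n", may_grow_stack(DSISR_ISSTORE, 0x25));
	return 0;
}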
This patchset is a split of a previous patch with the same title as
this summary. Comments from Michael are taken into account.
Christophe Leroy (5):
powerpc/mm: only call store_updates_sp() on stores in do_page_fault()
powerpc/mm: split store_updates_sp() in two parts in do_page_fault()
power
The result of (trap == 0x400) is already in is_exec.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/fault.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 9d21e5fd383d..b56bf472db6d 100644
--- a/arch/powerpc/mm/fau
Only the get_user() in store_updates_sp() has to be done outside
the mm semaphore. All the comparisons can be done within the semaphore,
so only when really needed.
As we got a DSI exception, the address pointed to by regs->nip is
obviously valid; otherwise we would have had an instruction exception.
S
Analysis of the assembly code shows that when using user_mode(regs),
at least the 'andi.' is redone all the time, and also
the 'lwz ,132(r31)' most of the time. With the new form, the 'is_user'
is mapped to cr4, then all further use of is_user results in just
things like 'beq cr4,218 '
Without the
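The shape of the optimisation in plain C: user_mode(regs) is evaluated
once into a local, so the compiler can keep the result in a condition
register (cr4 in the generated powerpc code) instead of re-testing MSR
bits at every use. The stubs below are illustrative.

#include <stdio.h>
#include <stdbool.h>

#define MSR_PR 0x4000UL	/* problem state = user mode */

struct pt_regs {
	unsigned long msr;
};

static bool user_mode(const struct pt_regs *regs)
{
	return regs->msr & MSR_PR;
}

static void do_page_fault(struct pt_regs *regs)
{
	const bool is_user = user_mode(regs);	/* tested once */

	if (is_user)
		printf("user fault\n");
	/* ... every later is_user check is just a flag test ... */
	if (!is_user)
		printf("kernel fault\n");
}

int main(void)
{
	struct pt_regs regs = { .msr = MSR_PR };

	do_page_fault(&regs);
	return 0;
}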
The 8xx has a dedicated exception for breakpoints, which directly
calls do_break().
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/fault.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 8d1639eee3af..400f2d0d42f8 100
Since last posting:
- accounted for some minor comments
- improved changelogs
- updated to powerpc next which includes Gautham's idle changes.
- Fixed CONFIG_RELOCATABLE build with the new patch 1.
- Also added the last patch which simplifies the DD1 workaround,
which is possible with HSPRG0 wake
The system reset idle handler system_reset_idle_common is relocated, so
relocation is not required to branch to kvm_start_guest. The superfluous
relocation does not result in incorrect code, but it does not compile
outside of exception-64s.S (with fixed section definitions).
Signed-off-by: Nichola
No functional change.
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/exceptions-64s.S | 26 +
arch/powerpc/kernel/idle_book3s.S | 73 +++-
2 files changed, 48 insertions(+), 51 deletions(-)
diff --git a/arch/
The POWER8 idle code has a neat trick of programming the power on engine
to restore a low bit into HSPRG0, so idle wakeup code can test and see
if it has been programmed this way and therefore lost all state. Restore
time can be reduced if winkle has not been reached.
However this messes with our
This reduces the number of nops for POWER8.
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/idle_book3s.S | 23 ++-
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/arch/powerpc/kernel/idle_book3s.S
b/arch/powerpc/kernel/i
The ISA specifies that power save wakeup due to a machine check exception can
cause a machine check interrupt (rather than the usual system reset
interrupt).
The machine check handler copes with this by doing low level machine
check recovery without restoring full state from idle, then queues up a
mach
In preparation for adding more bits to the core idle state word, move
the lock bit up, and unlock by flipping the lock bit rather than masking
off all but the thread bits.
Add branch hints for atomic operations while we're here.
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Nicholas Piggin
---
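A compilable C11 model of the scheme: the lock is a single high bit in
the per-core idle state word, taken with an atomic fetch-or and
released by flipping just that bit, leaving the thread bits untouched.
The bit positions are illustrative.

#include <stdio.h>
#include <stdatomic.h>

#define CORE_IDLE_LOCK_BIT	0x10000000u	/* moved above the thread bits */
#define CORE_IDLE_THREAD_BITS	0x000000ffu

static atomic_uint core_idle_state;

static unsigned int core_idle_lock(void)
{
	unsigned int old;

	/* spin until we are the one who set the lock bit */
	do {
		old = atomic_fetch_or(&core_idle_state, CORE_IDLE_LOCK_BIT);
	} while (old & CORE_IDLE_LOCK_BIT);
	return old;
}

static void core_idle_unlock(void)
{
	/* flip the lock bit off, rather than masking down to the
	 * thread bits */
	atomic_fetch_xor(&core_idle_state, CORE_IDLE_LOCK_BIT);
}

int main(void)
{
	atomic_init(&core_idle_state, CORE_IDLE_THREAD_BITS);
	core_idle_lock();
	core_idle_unlock();
	printf("%#x\n", atomic_load(&core_idle_state));
	return 0;
}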
When taking the core idle state lock, grab it immediately like a regular
lock, rather than adding more tests in there. Holding the lock keeps it
stable, so there is no need to do it while holding the reservation.
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Nicholas Piggin
---
arch/powerpc/ker
If not all threads were in winkle, full state loss recovery is not
necessary and can be avoided. A previous patch removed this optimisation
due to some complexity with the implementation. Re-implement it by
counting the number of threads in winkle with the per-core idle state.
Only restore full sta
The idle workaround does not need to load PACATOC, and it does not
need to be called within a nested function that requires LR to be
saved.
Load the PACATOC at entry to the idle wakeup. It does not matter which
PACA this comes from, so it's okay to call before the workaround. Then
apply the workar
"Paul E. McKenney" writes:
> On Thu, Apr 13, 2017 at 06:37:57PM +0200, Peter Zijlstra wrote:
>> On Thu, Apr 13, 2017 at 09:26:51AM -0700, Paul E. McKenney wrote:
>>
>> > ARCH_WEAK_RELEASE_ACQUIRE actually works both ways.
>> >
>> > To see this, imagine some strange alternate universe in which t
Balbir Singh writes:
> On Wed, 2017-04-19 at 21:13 +1000, Anton Blanchard wrote:
>> Hi Balbir,
>>
>> > > FTRACE is quite CPU consuming, shouldn't it really be on by
>> > > default ?
>> >
>> > It does some work at boot to NOP out function entry points at _mcount
>> > locations. Is that what y
Christophe Leroy writes:
> diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
> index 32509de6ce4c..4af81fb23653 100644
> --- a/arch/powerpc/kernel/ftrace.c
> +++ b/arch/powerpc/kernel/ftrace.c
> @@ -526,7 +526,9 @@ void ftrace_replace_code(int enable)
> */
> void arch_ft
On Wed, 19 Apr 2017 14:19:47 +1000
Michael Ellerman wrote:
> Michal Suchánek writes:
> > On Mon, 17 Apr 2017 20:43:02 +0530
> > Hari Bathini wrote:
> >> On Friday 14 April 2017 01:28 AM, Michal Suchánek wrote:
> >> > more (optional) properties cannot be added?
> >>
> >> Kernel change s
On Wed, Apr 19, 2017 at 11:48:14PM +0800, Jin Yao wrote:
SNIP
> +static int branch_type_str(struct branch_type_stat *stat,
> +char *bf, int bfsize)
> +{
> + int i, j = 0, printed = 0;
> + u64 total = 0;
> +
> + for (i = 0; i < PERF_BR_MAX; i++)
> +
On Wed, Apr 19, 2017 at 11:48:11PM +0800, Jin Yao wrote:
SNIP
> +
> static int counts_str_build(char *bf, int bfsize,
>u64 branch_count, u64 predicted_count,
>u64 abort_count, u64 cycles_count,
>u64 iter_count, u
On Wed, Apr 19, 2017 at 11:48:13PM +0800, Jin Yao wrote:
SNIP
> +static void branch_type_stat_display(FILE *fp, struct branch_type_stat *stat)
> +{
> + u64 total = 0;
> + int i;
> +
> + for (i = 0; i < PERF_BR_MAX; i++)
> + total += stat->counts[i];
> +
> + if (total =
On Wed, Apr 19, 2017 at 11:48:11PM +0800, Jin Yao wrote:
SNIP
> static int counts_str_build(char *bf, int bfsize,
>u64 branch_count, u64 predicted_count,
>u64 abort_count, u64 cycles_count,
>u64 iter_count, u64 s
On Wed, Apr 19, 2017 at 11:48:14PM +0800, Jin Yao wrote:
SNIP
> +static int count_str_printf(int index, const char *str,
> + char *bf, int bfsize)
> +{
> + int printed;
> +
> + printed = scnprintf(bf, bfsize,
> + "%s%s",
> + (index) ? " " : " (", str);
> +
> +
On Wed, Apr 19, 2017 at 11:48:13PM +0800, Jin Yao wrote:
SNIP
> +static void branch_type_stat_display(FILE *fp, struct branch_type_stat *stat)
> +{
> + u64 total = 0;
> + int i;
> +
> + for (i = 0; i < PERF_BR_MAX; i++)
> + total += stat->counts[i];
> +
> + if (total =
Le 19/04/2017 à 16:01, Michael Ellerman a écrit :
Christophe Leroy writes:
diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
index 32509de6ce4c..4af81fb23653 100644
--- a/arch/powerpc/kernel/ftrace.c
+++ b/arch/powerpc/kernel/ftrace.c
@@ -526,7 +526,9 @@ void ftrace_re
Add powerpc support for mmap_rnd_bits and mmap_rnd_compat_bits, which are two
sysctls that allow a user to configure the number of bits of randomness used for
ASLR.
Because of the way the Kconfig for ARCH_MMAP_RND_BITS is defined, we have to
construct at least the MIN value in Kconfig, vs in a hea
On Wednesday 19 April 2017 10:20 AM, Michael Ellerman wrote:
Peter Zijlstra writes:
On Tue, Apr 11, 2017 at 07:21:05AM +0530, Madhavan Srinivasan wrote:
From: Sukadev Bhattiprolu
perf_mem_data_src is a union that is initialized via the ->val field
and accessed via the bitmap fields. For
On Wed, 19 Apr 2017 18:21:02 +0530
"Naveen N. Rao" wrote:
> When a kprobe is being registered, we use the symbol_name field to
> look up the address where the probe should be placed. Since this is a
> user-provided field, let's ensure that the length of the string is
> within expected limits.
Wou
Hi Hemant,
[auto build test WARNING on linus/master]
[also build test WARNING on v4.11-rc7]
[cannot apply to next-20170419]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
url:
https://github.com/0day-ci/linux/commits/Anju-T-Sudhakar
BTW, as I pointed out, 5/7 and 6/7 should be merged since this actually
makes a meaningful change.
Thank you,
On Wed, 19 Apr 2017 18:21:05 +0530
"Naveen N. Rao" wrote:
> On kprobe handler re-entry, try to emulate the instruction rather than
> single stepping always.
>
> Acked-by: Ananth N Mavinakay
On Wed, 19 Apr 2017 18:21:04 +0530
"Naveen N. Rao" wrote:
Factor out the code to emulate an instruction into a try_to_emulate()
helper function. This makes ...
> No functional changes.
Thanks,
>
> Acked-by: Ananth N Mavinakayanahalli
> Signed-off-by: Naveen N. Rao
> ---
> arch/powerpc/kernel/kpro
On Wed, 19 Apr 2017 18:21:06 +0530
"Naveen N. Rao" wrote:
> set_current_kprobe() already saves regs->msr into kprobe_saved_msr. Remove
> the redundant save.
>
Looks good to me.
Reviewed-by: Masami Hiramatsu
Thank you,
> Signed-off-by: Naveen N. Rao
> ---
> arch/powerpc/kernel/kprobes.c |
On Wed, Apr 19, 2017 at 11:38:22PM +1000, Michael Ellerman wrote:
> "Paul E. McKenney" writes:
>
> > On Thu, Apr 13, 2017 at 06:37:57PM +0200, Peter Zijlstra wrote:
> >> On Thu, Apr 13, 2017 at 09:26:51AM -0700, Paul E. McKenney wrote:
> >>
> >> > ARCH_WEAK_RELEASE_ACQUIRE actually works both wa
From: Christophe Leroy
> By default, PPC8xx PINs an ITLB on the first 8M of memory in order
> to avoid any ITLB miss on kernel code.
> However, with some debug functions like DEBUG_PAGEALLOC and
> (soon to come) DEBUG_RODATA, the PINned TLB is invalidated soon
> after startup so ITLB misses start t
This is the first of a series of patches to build up a suitable kprobes
blacklist. This series only blacklists the exception vectors.
While I have more patches in the works to blacklist other symbols, I
wanted to get some early feedback on these two patches to ensure that
the approach is ok. So, I
Introduce __head_end to mark the end of the early fixed sections and use the
same to blacklist all exception handlers from kprobes.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/sections.h | 1 +
arch/powerpc/kernel/kprobes.c | 9 +
arch/powerpc/kernel/vmlinux.lds.S | 2 +
Blacklist all the exception common/OOL handlers as the kernel stack is
not yet set up, which means we can't take a trap at this point.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/head-64.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/include/asm/head-64.h
b/arch
Excerpts from Masami Hiramatsu's message of April 19, 2017 20:07:
On Wed, 19 Apr 2017 18:21:02 +0530
"Naveen N. Rao" wrote:
When a kprobe is being registered, we use the symbol_name field to
look up the address where the probe should be placed. Since this is a
user-provided field, let's ensure
Excerpts from Masami Hiramatsu's message of April 19, 2017 20:13:
BTW, as I pointed out, 5/7 and 6/7 should be merged since this actually
makes a meaningful change.
Yes, sorry if I wasn't clear in my previous reply in the (!) previous
patch series.
Since this has to go through the powerpc tree, I
The definition of smp_mb__after_unlock_lock() is currently smp_mb()
for CONFIG_PPC and a no-op otherwise. It would be better to instead
provide an architecture-selectable Kconfig option, and select the
strength of smp_mb__after_unlock_lock() based on that option. This
commit therefore creates ARC
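The shape of such a definition, as a self-contained sketch; the macro
spellings follow the description above and should not be read as the
merged code.

#include <stdio.h>

#define CONFIG_ARCH_WEAK_RELEASE_ACQUIRE 1	/* powerpc would select this */

#ifdef CONFIG_ARCH_WEAK_RELEASE_ACQUIRE
/* unlock+lock is not a full barrier on this arch: strengthen it */
#define smp_mb__after_unlock_lock()	smp_mb()
#else
/* unlock+lock already acts as a full barrier: no-op */
#define smp_mb__after_unlock_lock()	do { } while (0)
#endif

#define smp_mb()	printf("full barrier\n")	/* stub */

int main(void)
{
	smp_mb__after_unlock_lock();
	return 0;
}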
On Thu, 21 Apr 2016 13:48:42 +0200
Petr Mladek wrote:
> printk() takes some locks and could not be used in a safe way in NMI context.
I just found a problem with this solution. It kills ftrace dumps from
NMI context :-(
[ 1295.168495]<...>-67423 10dNh1 38217us : do_raw_spin_lock
<-_raw_s
On Wed, Apr 19, 2017 at 01:13:41PM -0400, Steven Rostedt wrote:
> On Thu, 21 Apr 2016 13:48:42 +0200
> Petr Mladek wrote:
>
> > printk() takes some locks and could not be used in a safe way in NMI context.
>
> I just found a problem with this solution. It kills ftrace dumps from
> NMI context :-(
>
On Wed, Apr 19, 2017 at 7:29 AM, Michael Ellerman wrote:
> Add powerpc support for mmap_rnd_bits and mmap_rnd_compat_bits, which are two
> sysctls that allow a user to configure the number of bits of randomness used
> for
> ASLR.
>
> Because of the way the Kconfig for ARCH_MMAP_RND_BITS is define
On 04/17/17 17:32, Tyrel Datwyler wrote:
> This patch introduces event tracepoints for tracking a device_nodes
> reference cycle as well as reconfig notifications generated in response
> to node/property manipulations.
>
> With the recent upstreaming of the refcount API several device_node
> under
On Tue, Apr 18, 2017 at 9:12 PM, Yongji Xie wrote:
> On 19 April 2017 at 09:47, Michael Ellerman wrote:
>> Bjorn Helgaas writes:
>>
>>> On Mon, Apr 17, 2017 at 4:36 PM, Bjorn Helgaas wrote:
From: Yongji Xie
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c
b/arch/powerpc/p
On 04/18/2017 07:31 PM, Frank Rowand wrote:
> On 04/18/17 18:31, Michael Ellerman wrote:
>> Frank Rowand writes:
>>
>>> On 04/17/17 17:32, Tyrel Datwyler wrote:
This patch introduces event tracepoints for tracking a device_nodes
reference cycle as well as reconfig notifications generated
Hi Michael,
On Wed, Apr 19, 2017 at 7:59 PM, Michael Ellerman wrote:
> Add powerpc support for mmap_rnd_bits and mmap_rnd_compat_bits, which are two
> sysctls that allow a user to configure the number of bits of randomness used
> for
> ASLR.
>
> Because of the way the Kconfig for ARCH_MMAP_RND_B
Le 19/04/2017 à 16:22, Christophe LEROY a écrit :
Le 19/04/2017 à 16:01, Michael Ellerman a écrit :
Christophe Leroy writes:
diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
index 32509de6ce4c..4af81fb23653 100644
--- a/arch/powerpc/kernel/ftrace.c
+++ b/arch/power
On 04/18/2017 07:49 PM, Steven Rostedt wrote:
> On Tue, 18 Apr 2017 18:42:32 -0700
> Frank Rowand wrote:
>
>> And of course the other issue with using tracepoints is the extra space
>> required to hold the tracepoint info. With the pr_debug() approach, the
>> space usage can be easily removed fo
This patch series enables DPAA1 QBMan devices for ARM and
ARM64 architectures. This allows the LS1043A and LS1046A to use
QBMan functionality.
Changes since v1:
Reworked private memory allocations to use shared-dma-pool on ARM platforms
Claudiu Manoil (2):
soc/fsl/qbman: Drop L1_CACHE_BYTES com
Use the shared-memory-pool mechanism for free buffer proxy record
area allocation.
Signed-off-by: Roy Pledge
---
drivers/soc/fsl/qbman/bman_ccsr.c | 35 ++-
drivers/soc/fsl/qbman/bman_priv.h | 3 +++
2 files changed, 37 insertions(+), 1 deletion(-)
diff --git a/
Use the shared-memory-pool mechanism for frame queue descriptor and
packed frame descriptor record area allocations.
Signed-off-by: Roy Pledge
---
drivers/soc/fsl/qbman/qman_ccsr.c | 136 +-
drivers/soc/fsl/qbman/qman_priv.h | 4 +-
drivers/soc/fsl/qbman/qma
Updates the QMan and BMan device tree bindings for reserved memory
nodes. This makes the reserved memory allocation compatible with
the shared-dma-pool usage.
Signed-off-by: Roy Pledge
---
Documentation/devicetree/bindings/soc/fsl/bman.txt | 11 ++-
Documentation/devicetree/bindings/soc
From: Madalin Bucur
Replace the PPC-specific set/clear_bits API with standard
bit twiddling so the driver is portable outside PPC.
Signed-off-by: Madalin Bucur
Signed-off-by: Claudiu Manoil
Signed-off-by: Roy Pledge
---
drivers/soc/fsl/qbman/bman.c | 2 +-
drivers/soc/fsl/qbman/qman.c | 8
From: Claudiu Manoil
Not relevant and arch dependent. Overkill for PPC.
Signed-off-by: Claudiu Manoil
Signed-off-by: Roy Pledge
---
drivers/soc/fsl/qbman/dpaa_sys.h | 4
1 file changed, 4 deletions(-)
diff --git a/drivers/soc/fsl/qbman/dpaa_sys.h b/drivers/soc/fsl/qbman/dpaa_sys.h
index
From: Madalin Bucur
Add revision 3.2 of the QBMan block. This is the version
for LS1043A and LS1046A SoCs.
Signed-off-by: Madalin Bucur
Signed-off-by: Roy Pledge
---
drivers/soc/fsl/qbman/qman_ccsr.c | 2 ++
drivers/soc/fsl/qbman/qman_priv.h | 1 +
2 files changed, 3 insertions(+)
diff --gi
Rework ioremap() for PPC and ARM. The PPC devices require a
non-coherent mapping while ARM will work with a non-cacheable/write-combine
mapping.
Signed-off-by: Roy Pledge
---
drivers/soc/fsl/qbman/bman_portal.c | 16 +---
drivers/soc/fsl/qbman/qman_portal.c | 16 +---
2 fi
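A sketch of that per-architecture mapping choice with stub ioremap
variants; the names and attribute strings below are illustrative, not
the driver's actual code.

#include <stdio.h>

static void *ioremap_stub(unsigned long addr, const char *attr)
{
	printf("map %#lx as %s\n", addr, attr);
	return (void *)addr;
}

static void *portal_map(unsigned long addr)
{
#ifdef CONFIG_PPC
	/* PPC portals require a non-coherent mapping */
	return ioremap_stub(addr, "non-coherent");
#else
	/* ARM/ARM64 work with a non-cacheable, write-combining mapping */
	return ioremap_stub(addr, "write-combine");
#endif
}

int main(void)
{
	portal_map(0xff000000UL);
	return 0;
}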
From: Madalin Bucur
Signed-off-by: Madalin Bucur
Signed-off-by: Claudiu Manoil
Signed-off-by: Roy Pledge
---
drivers/soc/fsl/qbman/bman.c | 22 ++
drivers/soc/fsl/qbman/qman.c | 38 ++
2 files changed, 60 insertions(+)
diff --git a/driv
From: Madalin Bucur
Signed-off-by: Madalin Bucur
Signed-off-by: Claudiu Manoil
[Stuart: changed to use ARCH_LAYERSCAPE]
Signed-off-by: Stuart Yoder
Signed-off-by: Roy Pledge
---
drivers/soc/fsl/qbman/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/soc/fsl/
From: Valentin Rothberg
The Kconfig symbol for 32-bit ARM is 'ARM', not 'ARM32'.
Signed-off-by: Valentin Rothberg
Signed-off-by: Claudiu Manoil
Signed-off-by: Roy Pledge
---
drivers/soc/fsl/qbman/dpaa_sys.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/soc/fsl/qb
From: Claudiu Manoil
Unlike PPC builds, ARM builds need the following headers
explicitly:
+#include <linux/io.h> for ioread32be()
+#include <linux/delay.h> for udelay()
Signed-off-by: Claudiu Manoil
Signed-off-by: Roy Pledge
---
drivers/soc/fsl/qbman/dpaa_sys.h | 2 ++
1 file changed, 2 insertions(+)
On 04/19/2017 03:13 AM, Michael Ellerman wrote:
> Oliver O'Halloran writes:
>
>> On Wed, Apr 19, 2017 at 2:46 AM, Rob Herring wrote:
>>> On Mon, Apr 17, 2017 at 7:32 PM, Tyrel Datwyler
>>> wrote:
This patch introduces event tracepoints for tracking a device_nodes
reference cycle as we
Hi Hemant,
[auto build test WARNING on linus/master]
[also build test WARNING on v4.11-rc7]
[cannot apply to next-20170419]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
url:
https://github.com/0day-ci/linux/commits/Anju-T-Sudhakar