emulate_step is the basic infrastructure used by a number of other
kernel facilities such as kprobes, hw-breakpoints (data breakpoints), etc.
In the case of kprobes, enabling emulation of load/store instructions
speeds up execution of the probed instruction. In the case of a kernel-space
breakpoint, ca
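To illustrate the kprobes point, here is a hedged sketch (not the actual arch/powerpc code): if emulate_step() can handle the probed instruction, the kprobe path can skip the single-step exception entirely. The helper name and the exact return-value convention below are assumptions.

```c
#include <linux/kprobes.h>
#include <asm/ptrace.h>
#include <asm/sstep.h>

/* Hypothetical helper: emulate the probed instruction if possible. */
static int try_emulate_probed_insn(struct kprobe *p, struct pt_regs *regs)
{
	unsigned int insn = *p->ainsn.insn;

	/*
	 * emulate_step() returns > 0 when it has fully emulated the
	 * instruction and updated regs (including regs->nip).
	 */
	if (emulate_step(regs, insn) > 0)
		return 1;

	/* Otherwise fall back to single-stepping the out-of-line copy. */
	return 0;
}
```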
emulate_step() uses a number of underlying kernel functions that were
initially not enabled for LE. This has been rectified since. So, fix
emulate_step() for LE for the corresponding instructions.
Reported-by: Anton Blanchard
Signed-off-by: Ravi Bangoria
---
arch/powerpc/lib/sstep.c | 20 --
Add a new selftest that tests emulate_step for Normal, Floating Point,
Vector and Vector Scalar load/store instructions. The test runs at boot
time if CONFIG_KPROBES_SANITY_TEST and CONFIG_PPC64 are set.
Sample log:
[0.762063] emulate_step smoke test: start.
[0.762219] emulate_step sm
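A minimal sketch of what such a boot-time smoke test could look like, assuming the pre-ppc_inst prototype int emulate_step(struct pt_regs *regs, unsigned int instr) and a return value of 1 on successful emulation; the function name and log text here are illustrative only.

```c
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <asm/ptrace.h>
#include <asm/sstep.h>

static int __init emulate_step_smoke_test(void)
{
	struct pt_regs regs;
	unsigned long mem = 0xdeadbeefUL;
	int stepped;

	memset(&regs, 0, sizeof(regs));
	regs.nip = (unsigned long)&emulate_step_smoke_test;
	regs.gpr[3] = (unsigned long)&mem;

	/* ld r5,0(r3) -- DS-form, primary opcode 58, RT=5, RA=3, DS=0 */
	stepped = emulate_step(&regs, 0xe8a30000);

	pr_info("emulate_step smoke test: ld %s\n",
		(stepped == 1 && regs.gpr[5] == mem) ? "PASS" : "FAIL");
	return 0;
}
late_initcall(emulate_step_smoke_test);
```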
commit 239aeba76409 ("perf powerpc: Fix kprobe and kretprobe handling
with kallsyms on ppc64le") changed how we use the offset field in struct
kprobe on ABIv2. perf now offsets from the GEP (Global entry point) if an
offset is specified and otherwise chooses the LEP (Local entry point).
Fix the sa
This helper will be used in a subsequent patch to emulate instructions
on re-entering the kprobe handler. No functional change.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes.c | 52 ++-
1 file changed, 31 insertions(+), 21 deletions(-)
diff
On kprobe handler re-entry, try to emulate the instruction rather than
always single-stepping.
As a related change, remove the duplicate saving of msr, as that is
already done in set_current_kprobe().
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes.c | 9 -
1 file changed, 8
On Tue, Feb 14, 2017 at 02:08:01PM +0530, Naveen N. Rao wrote:
> commit 239aeba76409 ("perf powerpc: Fix kprobe and kretprobe handling
> with kallsyms on ppc64le") changed how we use the offset field in struct
> kprobe on ABIv2. perf now offsets from the GEP (Global entry point) if an
> offset is s
On Tue, Feb 14, 2017 at 02:08:02PM +0530, Naveen N. Rao wrote:
> This helper will be used in a subsequent patch to emulate instructions
> on re-entering the kprobe handler. No functional change.
>
> Signed-off-by: Naveen N. Rao
Acked-by: Ananth N Mavinakayanahalli
On Tue, Feb 14, 2017 at 02:08:03PM +0530, Naveen N. Rao wrote:
> On kprobe handler re-entry, try to emulate the instruction rather than
> single stepping always.
>
> As a related change, remove the duplicate saving of msr as that is
> already done in set_current_kprobe()
>
> Signed-off-by: Naveen
Paolo Bonzini writes:
> On 10/02/2017 04:59, Stephen Rothwell wrote:
>> Hi all,
>>
>> Today's linux-next merge of the kvm tree got a conflict in:
>>
>> arch/powerpc/include/asm/head-64.h
>>
>> between commit:
>>
>> 852e5da99d15 ("powerpc/64s: Tidy up after exception handler rework")
>>
>
Hi Michael,
Can you please pull this patch.
Thanks,
Ravi
On Tuesday 22 November 2016 02:55 PM, Ravi Bangoria wrote:
> Xmon data-breakpoint feature is broken.
>
> Whenever a watchpoint match occurs, hw_breakpoint_handler will
> be called by do_break via the notifier chain mechanism. If watc
On 2017/02/14 01:32PM, Ravi Bangoria wrote:
> emulate_step() uses a number of underlying kernel functions that were
> initially not enabled for LE. This has been rectified since. So, fix
> emulate_step() for LE for the corresponding instructions.
>
> Reported-by: Anton Blanchard
> Signed-off-by:
On Tuesday 14 February 2017 02:17 PM, Naveen N. Rao wrote:
> On 2017/02/14 01:32PM, Ravi Bangoria wrote:
>> emulate_step() uses a number of underlying kernel functions that were
>> initially not enabled for LE. This has been rectified since. So, fix
>> emulate_step() for LE for the corresponding
Ravi Bangoria writes:
> emulate_step() uses a number of underlying kernel functions that were
> initially not enabled for LE. This has been rectified since.
When exactly? ie. which commit.
Should we backport this? ie. is it actually a bug people are hitting in
the real world much?
cheers
Ravi Bangoria writes:
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index fce05a3..5c5ae66 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -528,6 +528,8 @@ int __kprobes longjmp_break_handler(struct kprobe *p,
> struct pt_r
Ravi Bangoria writes:
> diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
> index 0e649d7..ddc879d 100644
> --- a/arch/powerpc/lib/Makefile
> +++ b/arch/powerpc/lib/Makefile
> @@ -33,3 +33,7 @@ obj-$(CONFIG_ALTIVEC) += xor_vmx.o
> CFLAGS_xor_vmx.o += -maltivec $(call cc-op
"Aneesh Kumar K.V" writes:
> On Tuesday 14 February 2017 11:19 AM, Michael Ellerman wrote:
>> "Aneesh Kumar K.V" writes:
>>
>>> Autonuma preserves the write permission across numa fault to avoid taking
>>> a writefault after a numa fault (Commit: b191f9b106ea " mm: numa: preserve
>>> PTE
>>> wr
Michael Neuling writes:
> On Thu, 2017-02-09 at 08:30 +0530, Aneesh Kumar K.V wrote:
>> With this, our protnone becomes a present pte with the READ/WRITE/EXEC bits
>> cleared.
>> By default we also set _PAGE_PRIVILEGED on such a pte. This is now used to help
>> us identify a protnone pte that was saved w
"Aneesh Kumar K.V" writes:
> diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
> b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
> index 0735d5a8049f..8720a406bbbe 100644
> --- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
> +++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
> @@ -16,
Thanks Michael,
On Tuesday 14 February 2017 03:50 PM, Michael Ellerman wrote:
> Ravi Bangoria writes:
>
>> emulate_step() uses a number of underlying kernel functions that were
>> initially not enabled for LE. This has been rectified since.
> When exactly? ie. which commit.
I found a couple of com
"Guilherme G. Piccoli" writes:
> Currently the xmon debugger is set only via kernel boot command-line.
> It's disabled by default, and can be enabled with "xmon=on" on the
> command-line. Also, xmon may be accessed via sysrq mechanism, but once
> we enter xmon via sysrq, it's kept enabled until
On Tuesday 14 February 2017 04:16 PM, Michael Ellerman wrote:
> Ravi Bangoria writes:
>
>> diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
>> index 0e649d7..ddc879d 100644
>> --- a/arch/powerpc/lib/Makefile
>> +++ b/arch/powerpc/lib/Makefile
>> @@ -33,3 +33,7 @@ obj-$(CONFIG_A
Pan Xinhui writes:
> On 2017/2/14 10:35, Nicholas Piggin wrote:
>> On Mon, 13 Feb 2017 19:00:42 -0200
>> "Guilherme G. Piccoli" wrote:
>>> * I had this patch partially done for some time, and after a discussion
> at the kernel Slack channel last week, I decided to rebase and fix
>>> some remainin
Currently the build breaks if CMA=n and SPAPR_TCE_IOMMU=y:
arch/powerpc/mm/mmu_context_iommu.c: In function ‘mm_iommu_get’:
arch/powerpc/mm/mmu_context_iommu.c:193:42: error: ‘MIGRATE_CMA’ undeclared
(first use in this function)
if (get_pageblock_migratetype(page) == MIGRATE_CMA) {
^~
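A sketch of one way to keep this compiling with CMA=n, assuming the generic is_migrate_cma_page() helper (which evaluates to false when CONFIG_CMA is not set) rather than comparing against MIGRATE_CMA directly; whether this matches the eventual fix is not claimed here.

```c
#include <linux/mm.h>
#include <linux/mmzone.h>

/* Hypothetical wrapper, for illustration only. */
static inline bool mm_iommu_page_is_cma(struct page *page)
{
	/*
	 * is_migrate_cma_page() compiles away to 'false' when CONFIG_CMA=n,
	 * so this configuration no longer references MIGRATE_CMA at all.
	 */
	return is_migrate_cma_page(page);
}
```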
Pan Xinhui writes:
> Once xmon is triggered, there is no interface to turn it off again.
> However, disable/enable xmon code flows already exist. More importantly, a
> system reset interrupt on PowerVM will fire an oops to make a dump. At
> that time, xmon should not be triggered.
>
> So add 'z' op
Michael Ellerman writes:
> "Aneesh Kumar K.V" writes:
>> diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
>> b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
>> index 0735d5a8049f..8720a406bbbe 100644
>> --- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
>> +++ b/arch/powerpc/include/
On Fri, 2016-12-09 at 00:07:35 UTC, David Gibson wrote:
> This adds the hypercall numbers and wrapper functions for the hash page
> table resizing hypercalls.
>
> It also adds a new firmware feature flag to track the presence of the
> HPT resizing calls.
>
> Signed-off-by: David Gibson
> Reviewe
On Thu, 2017-01-12 at 15:09:21 UTC, Wei Yongjun wrote:
> From: Wei Yongjun
>
> Fix typo in parameter description.
>
> Signed-off-by: Wei Yongjun
Applied to powerpc next, thanks.
https://git.kernel.org/powerpc/c/b0b5a76579ea62a9eeb720e71fdaa9
cheers
On Wed, 2017-02-01 at 22:52:42 UTC, Shailendra Singh wrote:
> The generic implementation of of_node_to_nid is exported with EXPORT_SYMBOL.
>
> The powerpc implementation added by the following commit is EXPORT_SYMBOL_GPL:
> commit 953039c8df7b ("[PATCH] powerpc: Allow devices to register with numa
> topology")
>
>
On Tue, 2017-02-07 at 10:01:01 UTC, Michael Ellerman wrote:
> Currently the opal_exit tracepoint usually shows the opcode as 0:
>
> -0 [047] d.h. 635.654292: opal_entry: opcode=63
> -0 [047] d.h. 635.654296: opal_exit: opcode=0 retval=0
> kopald-1209 [019] d... 636.420943: opa
On Tue, 2017-02-07 at 19:54:14 UTC, "Naveen N. Rao" wrote:
> kprobe_exceptions_notify() is no longer used on some architectures, such
> as arm[64] and powerpc. Introduce a weak variant for such
> architectures.
>
> Signed-off-by: Naveen N. Rao
> Acked-by: Masami Hiramatsu
Applied to pow
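The weak-variant pattern referred to above looks roughly like this; a sketch of the generic-code side only, with the exact placement in kernel/kprobes.c assumed.

```c
#include <linux/kprobes.h>
#include <linux/notifier.h>

/*
 * Default (weak) implementation for architectures that no longer need the
 * notifier; an architecture can still provide its own strong definition.
 */
int __weak kprobe_exceptions_notify(struct notifier_block *self,
				    unsigned long val, void *data)
{
	return NOTIFY_DONE;
}
```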
On Tue, 2017-02-07 at 19:54:16 UTC, "Naveen N. Rao" wrote:
> ... as the weak variant will do.
>
> Signed-off-by: Naveen N. Rao
Applied to powerpc next, thanks.
https://git.kernel.org/powerpc/c/0ddde5004d26c483c9e67005b2be5b
cheers
On Wed, 2017-02-08 at 08:57:29 UTC, Anju T wrote:
> From: "Naveen N. Rao"
>
> Introduce __PPC_SH64() as a 64-bit variant to encode shift field in some
> of the shift and rotate instructions operating on double-words. Convert
> some of the BPF instruction macros to use the same.
>
> Signed-off-by
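For reference, the split shift-field encoding works roughly as below. This is a sketch based on the ISA's MD/MDS-form doubleword rotates (sh bits 0-4 in the usual SH field, sh bit 5 at instruction bit 30); the exact macro bodies in ppc-opcode.h may differ.

```c
/* 32-bit shift field: 5-bit sh placed in instruction bits 16-20. */
#define __PPC_SH(s)	(((s) & 0x1f) << 11)

/*
 * 64-bit shift field: the low five bits go where __PPC_SH() puts them, and
 * the sixth bit (sh[5]) is encoded separately at instruction bit 30.
 */
#define __PPC_SH64(s)	(__PPC_SH(s) | (((s) & 0x20) >> 4))
```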
On Wed, 2017-02-08 at 09:50:51 UTC, Anju T wrote:
> Current infrastructure of kprobe uses the unconditional trap instruction
> to probe a running kernel. Optprobe allows kprobe to replace the trap with
> a branch instruction to a detour buffer. Detour buffer contains instructions
> to create an in
On Fri, 2017-02-10 at 01:16:59 UTC, Anton Blanchard wrote:
> From: Anton Blanchard
>
> The final paragraph of the help text is reversed - we want to
> enable this option by default, and disable it if the toolchain
> has a working -mprofile-kernel.
>
> Signed-off-by: Anton Blanchard
Applied to
On Fri, 2017-02-10 at 02:40:02 UTC, Michael Ellerman wrote:
> Currently we get a warning that _mcount() can't be versioned:
>
> WARNING: EXPORT symbol "_mcount" [vmlinux] version generation failed,
> symbol will not be versioned.
>
> Add a prototype to asm-prototypes.h to fix it.
>
> The prot
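The shape of such a fix is simply a C prototype that genksyms can see; a sketch, assuming the declaration lands in asm-prototypes.h as described.

```c
/* arch/powerpc/include/asm/asm-prototypes.h (sketch) */
#ifndef _ASM_POWERPC_ASM_PROTOTYPES_H
#define _ASM_POWERPC_ASM_PROTOTYPES_H

/* _mcount is implemented in assembly; a prototype lets its export be versioned. */
void _mcount(void);

#endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */
```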
On 14/02/2017 09:45, Michael Ellerman wrote:
>> If possible, please pull only up to "powerpc/64: Allow for relocation-on
>> interrupts from guest to host" and cherry-pick the top two patches
>> ("powerpc/64: CONFIG_RELOCATABLE support for hmi interrupts" and
>> "powerpc/powernv: Remove separate e
Hello Wei Yang,
The patch 9312bc5bab59: "powerpc/powernv: Support EEH reset for VF
PE" from Mar 4, 2016, leads to the following static checker warning:
arch/powerpc/platforms/powernv/eeh-powernv.c:1033 pnv_eeh_reset_vf_pe()
info: return a literal instead of 'ret'
arch/powerpc/pla
Hello Alistair Popple,
The patch a295af24d0d2: "powernv/opal: Convert opal message events to
opal irq domain" from May 15, 2015, leads to the following static
checker warning:
arch/powerpc/platforms/powernv/opal.c:297 opal_message_init()
info: return a literal instead of 'irq'
ar
On 02/13/2017 07:09 PM, Michael Ellerman wrote:
> Michael Ellerman writes:
>
>> In commit 88baa78d1f31 ("selftests: remove duplicated all and clean
>> target"), the "all" target was removed from individual Makefiles and
>> added to lib.mk.
>>
>> However the "all" target was added to lib.mk *after
This series attempts to clean up the page fault handler in the way it has
been done previously for the x86 architecture [1].
The goal is to manage the mmap_sem earlier and only in
do_page_fault(). This is done by handling the retry case earlier, before
handling the error case. This way the semaphore ca
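The reordering the series describes looks roughly like the sketch below; it is not the actual diff, it uses the 2017-era names (mmap_sem, mm_fault_error()), and it simplifies the vma checks.

```c
#include <linux/mm.h>
#include <linux/sched.h>
#include <asm/ptrace.h>

static int fault_core(struct pt_regs *regs, struct mm_struct *mm,
		      unsigned long address, unsigned int flags)
{
	struct vm_area_struct *vma;
	int fault;

retry:
	down_read(&mm->mmap_sem);
	vma = find_vma(mm, address);
	if (!vma || vma->vm_start > address) {
		up_read(&mm->mmap_sem);
		return SIGSEGV;
	}

	fault = handle_mm_fault(vma, address, flags);

	/* Handle the retry case before anything else. */
	if (unlikely(fault & VM_FAULT_RETRY)) {
		/* The mm core has already dropped mmap_sem for us. */
		if (fatal_signal_pending(current))
			return user_mode(regs) ? 0 : SIGSEGV;
		flags &= ~FAULT_FLAG_ALLOW_RETRY;
		flags |= FAULT_FLAG_TRIED;
		goto retry;
	}

	/* mmap_sem is now released in exactly one place... */
	up_read(&mm->mmap_sem);

	/* ...so the error path no longer needs to know about it. */
	if (unlikely(fault & VM_FAULT_ERROR))
		return mm_fault_error(regs, address, fault);

	return 0;
}
```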
Move the mmap_sem release into do_sigbus()'s unique caller: mm_fault_error().
No functional changes.
Signed-off-by: Laurent Dufour
---
arch/powerpc/mm/fault.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 62a50d
In do_page_fault(), if handle_mm_fault() returns VM_FAULT_RETRY, retry
the page fault handling before anything else.
This simplifies the handling of the mmap_sem lock in this part of
the code.
Signed-off-by: Laurent Dufour
---
arch/powerpc/mm/fault.c | 67
Since the fault retry is now handled earlier, we can release the
mmap_sem lock earlier too and remove the later unlocking previously done
in mm_fault_error().
Signed-off-by: Laurent Dufour
---
arch/powerpc/mm/fault.c | 19 ---
1 file changed, 4 insertions(+), 15 deletions(-)
diff --
On 14/02/2017 01:58, Pan Xinhui wrote:
>
>
> On 2017/2/14 10:35, Nicholas Piggin wrote:
>> On Mon, 13 Feb 2017 19:00:42 -0200
>>
>> xmon state changing after the first sysrq+x violates the principle of least
>> astonishment, so I think that should be fixed.
>>
> hi, Nick
> yes, as long as xmon is disable
On 14/02/2017 09:37, Michael Ellerman wrote:
> "Guilherme G. Piccoli" writes:
>
>> Currently the xmon debugger is set only via kernel boot command-line.
>> It's disabled by default, and can be enabled with "xmon=on" on the
>> command-line. Also, xmon may be accessed via sysrq mechanism, but once
kprobe_lookup_name() is specific to the kprobe subsystem and (after a
subsequent patch) may not always return the function entry point. For
looking up function entry points, introduce a separate helper and use
it in optprobes.c
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/code-pa
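A sketch of what such a helper could look like; the name below is hypothetical, and it leans on the existing kallsyms_lookup_name() and ppc_function_entry() helpers to resolve the local entry point on ABIv2.

```c
#include <linux/kallsyms.h>
#include <asm/code-patching.h>

/* Hypothetical name; the actual helper in the patch may differ. */
static void *lookup_function_entry(const char *name)
{
	unsigned long addr = kallsyms_lookup_name(name);

	if (!addr)
		return NULL;

	/* ppc_function_entry() skips the global-entry prologue on ABIv2. */
	return (void *)ppc_function_entry((void *)addr);
}
```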
KPROBES_ON_FTRACE avoids much of the overhead of regular kprobes as it
eliminates the need for a trap, as well as the need to emulate or
single-step instructions.
Though OPTPROBES provides us with similar performance, we have limited
optprobes trampoline slots. As such, when asked to probe at a
Allow kprobes to be placed on ftrace _mcount() call sites. This
optimization avoids the use of a trap, by riding on ftrace
infrastructure.
This depends on HAVE_DYNAMIC_FTRACE_WITH_REGS which depends on
MPROFILE_KERNEL, which is only currently enabled on powerpc64le with
newer toolchains.
Based on
On 15 February 2017 03:14:24 GMT+11:00, Shuah Khan
wrote:
>On 02/13/2017 07:09 PM, Michael Ellerman wrote:
>> Michael Ellerman writes:
>>
>>> In commit 88baa78d1f31 ("selftests: remove duplicated all and clean
>>> target"), the "all" target was removed from individual Makefiles and
>>> added
On Tue, 2017-02-14 at 16:39 +0300, Dan Carpenter wrote:
> Hello Wei Yang,
>
> The patch 9312bc5bab59: "powerpc/powernv: Support EEH reset for VF
> PE" from Mar 4, 2016, leads to the following static checker warning:
>
> arch/powerpc/platforms/powernv/eeh-powernv.c:1033 pnv_eeh_reset_vf_pe()
"Guilherme G. Piccoli" writes:
> On 14/02/2017 01:58, Pan Xinhui wrote:
>> On 2017/2/14 10:35, Nicholas Piggin wrote:
>>> On Mon, 13 Feb 2017 19:00:42 -0200
>>>
>>> xmon state changing after the first sysrq+x violates the principle of least
>>> astonishment, so I think that should be fixed.
>>>
>> hi, Nic
On Tue, 14 Feb 2017 21:59:23 +1100 Michael Ellerman
wrote:
> "Aneesh Kumar K.V" writes:
>
> > On Tuesday 14 February 2017 11:19 AM, Michael Ellerman wrote:
> >> "Aneesh Kumar K.V" writes:
> >>
> >>> Autonuma preserves the write permission across numa fault to avoid taking
> >>> a writefault a
"Aneesh Kumar K.V" writes:
> Michael Ellerman writes:
>
>> "Aneesh Kumar K.V" writes:
>>> diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
>>> b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
>>> index 0735d5a8049f..8720a406bbbe 100644
>>> --- a/arch/powerpc/include/asm/book3s/64/mmu-
Hi all,
Today's linux-next merge of the kvm tree got a conflict in:
arch/powerpc/kvm/book3s_hv_rm_xics.c
between commit:
ab9bad0ead9a ("powerpc/powernv: Remove separate entry for OPAL real mode
calls")
from the powerpc tree and commit:
21acd0e4df04 ("KVM: PPC: Book 3S: XICS: Don't lock
Once xmon is triggered by sysrq-x, it stays enabled afterwards even
if it was disabled during boot. This can cause a system reset interrupt
to fail to dump. So keep xmon in its original state after exit.
Signed-off-by: Pan Xinhui
---
arch/powerpc/xmon/xmon.c | 5 -
1 file changed, 4 insertio
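The shape of the fix, as a sketch inside arch/powerpc/xmon/xmon.c (so the existing xmon_init(), debugger() and get_irq_regs() calls are assumed to be in scope); the variable name is an assumption, not the patch's actual identifier.

```c
/* Tracks whether xmon was disabled before this sysrq entry (sketch only). */
static int xmon_off;

static void sysrq_handle_xmon(int key)
{
	int was_off = xmon_off;

	xmon_init(1);			/* enable xmon for this entry */
	debugger(get_irq_regs());
	if (was_off)
		xmon_init(0);		/* restore the original disabled state */
}
```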
Hi Mukesh,
> Move the return value check of 'opal_dump_info' to the proper place;
> previously all the dump info was unnecessarily filled in even on failure.
Acked-by: Jeremy Kerr
Thanks!
Jeremy
Hi Mukesh,
> Convert all explicit numeric return values to the more appropriate
> IRQ_HANDLED, which is the proper return value for an interrupt handler.
This looks good to me, but can you describe the effects of those changes
to the interrupt handler's return code? ie, what happened in the
erroneous
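For context, the conversion under discussion has this general shape (the handler name is hypothetical; only the return values are the point):

```c
#include <linux/interrupt.h>

static irqreturn_t example_dump_interrupt(int irq, void *data)
{
	/* ... acknowledge the event and queue the dump work here ... */

	/* Return irqreturn_t constants instead of raw integers. */
	return IRQ_HANDLED;	/* previously "return 0" or "return 1" */
}
```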
resize_hpt_release(), called once the HPT resize of a KVM guest is
completed (successfully or unsuccessfully), frees the state structure for
the resize. It is currently not safe to call with a NULL pointer.
However, one of the error paths in kvm_vm_ioctl_resize_hpt_commit() can
invoke it with a
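The fix being described amounts to an early NULL check; a sketch only, with the function's cleanup details elided/assumed:

```c
static void resize_hpt_release(struct kvm *kvm, struct kvm_resize_hpt *resize)
{
	/* Some error paths in kvm_vm_ioctl_resize_hpt_commit() pass NULL. */
	if (!resize)
		return;

	/* ... free the tentative HPT and the state structure as before ... */
	kvm->arch.resize_hpt = NULL;
	kfree(resize);
}
```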
On 2017/2/15 1:35 AM, Guilherme G. Piccoli wrote:
> On 14/02/2017 01:58, Pan Xinhui wrote:
>>
>>
>> On 2017/2/14 10:35, Nicholas Piggin wrote:
>>> On Mon, 13 Feb 2017 19:00:42 -0200
>>>
>>> xmon state changing after the first sysrq+x violates the principle of least
>>> astonishment, so I think that should be
On Wed, Feb 15, 2017 at 12:28:34AM +0530, Naveen N. Rao wrote:
> Allow kprobes to be placed on ftrace _mcount() call sites. This
> optimization avoids the use of a trap, by riding on ftrace
> infrastructure.
>
> This depends on HAVE_DYNAMIC_FTRACE_WITH_REGS which depends on
> MPROFILE_KERNEL, whic
Michael Ellerman writes:
> Vipin K Parashar writes:
>
>> OPAL returns OPAL_WRONG_STATE for XSCOM operations done to read the
>> FIR of any core which is sleeping or offline.
>
> OK.
>
> Do we know why Linux is causing that to happen?
>
> It's also returned from many of the XIVE routines if we're in the
Mukesh Ojha writes:
> Move the return value check of 'opal_dump_info' to the proper place;
> previously all the dump info was unnecessarily filled in even on failure.
>
> Signed-off-by: Mukesh Ojha
> ---
> arch/powerpc/platforms/powernv/opal-dump.c | 9 ++---
> 1 file changed, 6 insertions
Mukesh Ojha writes:
> Convert all explicit numeric return values to the more appropriate
> IRQ_HANDLED, which is the proper return value for an interrupt handler.
>
> Signed-off-by: Mukesh Ojha
> Reviewed-by: Vasant Hegde
> ---
> arch/powerpc/platforms/powernv/opal-dump.c | 9 +++--
> 1 file cha
Thank You Michael. :)
On Tuesday 14 February 2017 06:10 PM, Michael Ellerman wrote:
On Wed, 2017-02-08 at 09:50:51 UTC, Anju T wrote:
Current infrastructure of kprobe uses the unconditional trap instruction
to probe a running kernel. Optprobe allows kprobe to replace the trap with
a branch in
On Wednesday 15 February 2017 10:38 AM, Stewart Smith wrote:
Mukesh Ojha writes:
Convert all explicit numeric return values to the more appropriate
IRQ_HANDLED, which is the proper return value for an interrupt handler.
Signed-off-by: Mukesh Ojha
Reviewed-by: Vasant Hegde
---
arch/powerpc/pla
This series introduces a way for the PCI resource allocator to force
MMIO BARs not to share a PAGE_SIZE page. This is useful for the VFIO
driver, because the current VFIO implementation disallows mmap of
sub-page (size < PAGE_SIZE) MMIO BARs which may share the same page
with other BARs, for security reasons. Thu
In the case where a device's alignment is greater than its size,
we may get an incorrect size and alignment for its bus's memory
window in pbus_size_mem(). This patch fixes that case.
Signed-off-by: Yongji Xie
---
drivers/pci/setup-bus.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
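Conceptually, the fix means a resource whose required alignment exceeds its size must contribute its alignment (not just its size) to the bus window. A sketch of that idea, not the actual setup-bus.c diff:

```c
#include <linux/ioport.h>
#include <linux/kernel.h>

/* Sketch: how much window space a single BAR should account for. */
static resource_size_t window_contribution(struct resource *r,
					   resource_size_t align)
{
	/* Use the larger of the BAR's size and its required alignment. */
	return max(resource_size(r), align);
}
```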
When vfio passes through a PCI device whose MMIO BARs are
smaller than PAGE_SIZE, the guest will not handle the MMIO
accesses to those BARs, which leads to MMIO emulation in the host.
This is because vfio will not allow passthrough of a BAR's
MMIO page which may be shared with other BARs. Otherwise,
ther
Currently we reassign the alignment by extending resources' sizes in
pci_reassigndev_resource_alignment(). This could potentially break
some drivers, when a driver uses the size to locate registers
whose length is related to the size. Some examples are below:
- misc\Hpilo.c:
off = pci_resource_len(p
Hi Naveen,
On Wed, 15 Feb 2017 00:28:34 +0530
"Naveen N. Rao" wrote:
> diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
> index e51a045f3d3b..a8f414a0b141 100644
> --- a/arch/powerpc/kernel/optprobes.c
> +++ b/arch/powerpc/kernel/optprobes.c
> @@ -70,6 +70,9 @@ stat