On Wed, May 16, 2018 at 10:11:11AM +0530, Souptick Joarder wrote:
> On Thu, May 10, 2018 at 11:57 PM, Souptick Joarder
> wrote:
> > Use new return type vm_fault_t for fault handler
> > in struct vm_operations_struct. For now, this is
> > just documenting that the function returns a
> > VM_FAULT v
[Adding Mikey]
Ravi Bangoria wrote:
emulate_step() is not checking the runtime VSX feature flag before
emulating an instruction. This can cause a kernel oops when the kernel
is compiled with CONFIG_VSX=y but running on a machine where VSX is
not supported or disabled. E.g., while running emulate_step tests on
Hi!
The nice little inline i2c_8bit_addr_from_msg is not getting
enough use. This series improves the situation and drops a
bunch of lines in the process.
I have only compile-tested (that part fine, at least over here).
Changes since v1 https://lkml.org/lkml/2018/5/14/919
- Squashed patches
Because it looks neater.
Signed-off-by: Peter Rosin
---
drivers/i2c/algos/i2c-algo-bit.c | 4 +---
drivers/i2c/algos/i2c-algo-pca.c | 5 +
drivers/i2c/algos/i2c-algo-pcf.c | 8 ++--
3 files changed, 4 insertions(+), 13 deletions(-)
diff --git a/drivers/i2c/algos/i2c-algo-bit.c b/drivers
Because it looks neater.
For diolan, this allows factoring out some code that is now common
between if and else.
For eg20t, pch_i2c_writebytes is always called with a write in
msgs->flags, and pch_i2c_readbytes with a read.
For imx, i2c_imx_dma_write and i2c_imx_write are always called with a
wr
On Wed, May 16, 2018 at 02:48:29PM +1000, Stephen Rothwell wrote:
> Hi all,
>
> I have decided that any email sent to the linuxppc-dev mailing list
> that contains an HTML attachment (or is just an HTML email) will be
> rejected. The vast majority of such mail are spam (and I have to spend
> time
Hello Peter,
On Wed, May 16, 2018 at 09:16:47AM +0200, Peter Rosin wrote:
> Acked-by: Uwe Kleine-König [emf32 and imx]
s/emf/efm/
Best regards
Uwe
--
Pengutronix e.K. | Uwe Kleine-König|
Industrial Linux Solutions | http://www.pengutronix.
On Wed, May 16, 2018 at 12:38 PM, Paul Mackerras wrote:
> On Wed, May 16, 2018 at 10:11:11AM +0530, Souptick Joarder wrote:
>> On Thu, May 10, 2018 at 11:57 PM, Souptick Joarder
>> wrote:
>> > Use new return type vm_fault_t for fault handler
>> > in struct vm_operations_struct. For now, this is
From: Simon Guo
There is some room to optimize memcmp() in powerpc 64 bits version for
following 2 cases:
(1) Even if src/dst addresses are not 8-byte aligned at the beginning,
memcmp() can align them and go with the .Llong comparison mode without
falling back to the .Lshort comparison mode to compare b
From: Simon Guo
Currently the powerpc 64-bit memcmp() will fall back to .Lshort
(compare per byte mode) if either the src or dst address is not 8-byte aligned.
It can be optimized in 2 situations:
1) if both addresses have the same offset from an 8-byte boundary:
memcmp() can compare the
From: Simon Guo
This patch adds VMX primitives to do memcmp() when the compare size
exceeds 4K bytes. The KSM feature can benefit from this.
Test result with the following test program (replace the "^>" with ""):
--
># cat tools/testing/selftests/powerpc/stringloops/memcmp.c
>#include
>#include
>
From: Simon Guo
This patch is based on the previous VMX patch on memcmp().
To optimize ppc64 memcmp() with VMX instructions, we need to think about
the VMX penalty they bring: if the kernel uses VMX instructions, it needs
to save/restore the current thread's VMX registers. There are 32 x 128-bit
VMX re
From: Simon Guo
This patch reworks the memcmp_64 selftest so that it can
cover more test cases.
It adds test cases for:
- memcmp over 4K bytes in size.
- s1/s2 with different/random offsets within a 16-byte boundary.
- enter/exit_vmx_ops pairing.
Signed-off-by: Simon Guo
---
.../selftests/po
> -----Original Message-----
> From: Laurentiu Tudor
> Sent: Monday, May 14, 2018 7:10 PM
> To: Nipun Gupta ; robin.mur...@arm.com;
> will.dea...@arm.com; mark.rutl...@arm.com; catalin.mari...@arm.com
> Cc: h...@lst.de; gre...@linuxfoundation.org; j...@8bytes.org;
> robh...@kernel.org; m.szyprow.
On 08/05/2018 at 11:56, Aneesh Kumar K.V wrote:
Christophe Leroy writes:
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 1 +
arch/powerpc/mm/ioremap.c | 126 +++
2 files changed, 34 insertions(+), 93 dele
The purpose of this series is to implement hardware assistance for TLB table walk
on the 8xx.
The first part is to make L1 entries and L2 entries independent.
For that, we need to alter the ioremap functions in order to handle the GUARD attribute
at the PGD/PMD level.
Last part is to reuse PTE fragment implem
This reverts commit 4f94b2c7462d9720b2afa7e8e8d4c19446bb31ce.
That commit was buggy, as it used rlwinm instead of rlwimi.
Instead of fixing that bug, we revert the previous commit in order to
reduce the dependency between L1 entries and L2 entries
Signed-off-by: Christophe Leroy
---
arch/powerp
This patch is the first of a series that intends to make
io mappings common to PPC32 and PPC64.
It moves the ioremap/unmap functions into a new file called ioremap.c with
no other modification to the functions.
For the time being, the PPC32 and PPC64 parts get enclosed into #ifdef.
Following patches wi
Today, early ioremap maps from IOREMAP_BASE upwards on PPC64
and from IOREMAP_TOP downwards on PPC32.
This patch modifies the PPC32 behaviour to match PPC64.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/32/pgtable.h | 29 ++--
arch/power
__ioremap(), ioremap(), ioremap_wc() and ioremap_prot() are
very similar between PPC32 and PPC64; they can easily be
made common.
_PAGE_WRITE equals _PAGE_RW on PPC32
_PAGE_RO and _PAGE_HWWRITE are 0 on PPC64
iounmap() can also be made common by renaming the PPC32
iounmap() as __iounmap() then
Use the _ALIGN_DOWN macro instead of open coding it in the definition of VMALLOC_BASE
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/pgtable.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/nohash/32/pgtable.h
b/arch/powerpc/include/asm
On the 8xx, the GUARDED attribute of the pages is managed in the
L1 entry; therefore, to avoid having to copy it into the L1 entry
at each TLB miss, we have to set it in the PMD.
In order to allow this, this patch splits the VM alloc space into two
parts, one for VM alloc and non-guarded IO, and one for G
On the 8xx, the GUARDED attribute of the pages is managed in the
L1 entry; therefore, to avoid having to copy it into the L1 entry
at each TLB miss, we set it in the PMD.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/pte-8xx.h | 3 ++-
arch/powerpc/kernel/head_8xx.S
commit 1bc54c03117b9 ("powerpc: rework 4xx PTE access and TLB miss")
introduced non atomic PTE updates and started the work of removing
PTE updates in TLB miss handlers, but kept PTE_ATOMIC_UPDATES for the
8xx with the following comment:
/* Until my rework is finished, 8xx still needs atomic PTE up
Today, on the 8xx the TLB handlers do SW tablewalk by doing all
the calculation in ASM, in order to match the Linux page
table structure.
The 8xx offers hardware assistance which allows significant size
reduction of the TLB handlers, hence also reduces the time spent
in the handlers.
However
Each handler must not exceed 64 instructions to fit into the main
exception area.
Following the significant size reduction of TLB handler routines,
the side handlers can be brought back close to the main part.
In the worst case:
Main part of ITLB handler is 45 insn, side part is 9 insn ==> total 5
We can now use SPRN_M_TW in the DAR Fixup code, freeing
SPRN_SPRG_SCRATCH2
Then SPRN_SPRG_SCRATCH2 may be used for something else in the future.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/head_8xx.S | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/po
In order to allow the 8xx to handle pte_fragments, this patch
makes it common to PPC32 and PPC64 by moving the related code
to common files and by defining a new config item called
CONFIG_NEED_PTE_FRAG
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/mmu_context.h | 28 ++
In 16k page size mode, the 8xx needs only 4k for a page table.
This patch makes use of the pte_fragment functions in order
to avoid wasting memory space.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/mmu-8xx.h | 4 +++
arch/powerpc/include/asm/nohash/32/pgalloc.h | 44 ++
In order to simplify the handling of 8xx specific SW perf counters
in time critical exceptions, this patch moves the counters into
the beginning of memory. This is possible because .text is readable
and the counters are never modified outside of the handlers.
By doing this, we avoid having to set a second re
On 11/05/2018 at 08:01, Michael Ellerman wrote:
Christophe Leroy writes:
[...]
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
I needed:
+#include
Oops, yes it wa
On 11/05/2018 at 08:48, Michael Ellerman wrote:
Christophe Leroy writes:
The purpose of this series is to implement hardware assistance for TLB table walk
on the 8xx.
The first part is to make L1 entries and L2 entries independent.
For that, we need to alter the ioremap functions in order to handl
Init all present cpus for deep states instead of "all possible" cpus.
Init fails if a possible cpu is guarded, resulting in only
non-deep states being available for cpuidle/hotplug.
Signed-off-by: Akshay Adiga
---
arch/powerpc/platforms/powernv/idle.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
Deb McLemore writes:
> Problem being solved is when issuing a BMC soft poweroff during IPL,
> the poweroff was being lost so the machine would not poweroff.
>
> Opal messages were being received before the opal-power code
> registered its notifiers.
>
> Alternatives discussed (option #3 was chosen
On Mon, 2018-04-16 at 11:27:14 UTC, "Aneesh Kumar K.V" wrote:
> From: "Aneesh Kumar K.V"
>
> Only code movement and avoid #ifdef.
>
> Signed-off-by: Aneesh Kumar K.V
Series applied to powerpc next, thanks.
https://git.kernel.org/powerpc/c/59879d542a8e880391863d82cddf38
cheers
On Mon, 2018-04-16 at 14:39:02 UTC, Michael Ellerman wrote:
> The expected case for this test was wrong, the source of the alternate
> code sequence is:
>
> FTR_SECTION_ELSE
> 2: or 2,2,2
> PPC_LCMPI r3,1
> beq 3f
> blt 2b
> b 3f
> b
On Fri, 2018-04-20 at 17:32:39 UTC, Souptick Joarder wrote:
> Use new return type vm_fault_t for fault handler. For
> now, this is just documenting that the function returns
> a VM_FAULT value rather than an errno. Once all instances
> are converted, vm_fault_t will become a distinct type.
>
> Ref
On Wed, 2018-05-09 at 13:42:27 UTC, Michael Ellerman wrote:
> In commit e6a6928c3ea1 ("of/fdt: Convert FDT functions to use
> libfdt") (Apr 2014), the generic flat device tree code dropped support
> for flat device tree's older than version 0x10 (16).
>
> We still have code in our CPU scanning to
On Thu, 2018-05-10 at 13:09:13 UTC, Michael Ellerman wrote:
> Currently memtrace doesn't build if NUMA=n:
>
> In function 'memtrace_alloc_node':
> arch/powerpc/platforms/powernv/memtrace.c:134:6:
> error: the address of 'contig_page_data' will always evaluate as
> 'true'
> i
On Thu, 2018-05-10 at 15:54:39 UTC, Colin King wrote:
> From: Colin Ian King
>
> Trivial fix to spelling mistake in debug messages of a structure
> field name
>
> Signed-off-by: Colin Ian King
Applied to powerpc next, thanks.
https://git.kernel.org/powerpc/c/942cc40ae4354fee1e97137346434a
ch
On Sat, 2018-05-12 at 03:35:24 UTC, Nicholas Piggin wrote:
> The exec_target binary could segfault calling _exit(2) because r13
> is not set up properly (and libc looks at that when performing a
> syscall). Call SYS_exit using syscall(2) which doesn't seem to
> have this problem.
>
> Signed-off-by
On Mon, 2018-05-14 at 09:39:22 UTC, Alexey Kardashevskiy wrote:
> At the moment we assume that IODA2 and newer PHBs can always do 4K/64K/16M
> IOMMU pages, however this is not the case for POWER9 and now skiboot
> advertises the supported sizes via the device so we use that instead
> of hard coding
This series of patches improves the powerpc kbuild system. The
motivation was to be compatible with the new Kconfig scripting
language that Yamada-san has proposed here:
https://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild.git/?h=kconfig-shell-v3
I have tested on top of that t
Some 64-bit toolchains use the wrong ISA variant for compiling 32-bit
kernels, even with -m32. Debian's powerpc64le is one such case, and
that is because it is built with --with-cpu=power8.
So when cross compiling a 32-bit kernel with a 64-bit toolchain, set
-mcpu=powerpc initially, which is the
Switch VDSO32 build over to use CROSS32_COMPILE directly, and have
it pass in -m32 after the standard c_flags. This allows endianness
overrides to be removed and the endian and bitness flags moved into
standard flags variables.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/Makefile
The powerpc toolchain can compile combinations of 32/64 bit and
big/little endian, so it's convenient to consider, e.g.,
`CC -m64 -mbig-endian`
to be the C compiler for the purpose of invoking it to build target
artifacts. So overriding the CC variable to include these flags
works for this
This eliminates the workaround that requires disabling
-mprofile-kernel by default in Kconfig.
[ Note: this depends on
https://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild.git
kconfig-shell-v3 ]
Signed-off-by: Nicholas Piggin
---
Since v3:
- Moved a stray hunk back to patch 3
Stephen Rothwell writes:
> I have decided that any email sent to the linuxppc-dev mailing list
> that contains an HTML attachment (or is just an HTML email) will be
> rejected. The vast majority of such mail are spam (and I have to spend
> time dropping them manually at the moment) and, I presume
On Fri, May 11, 2018 at 02:04:46PM -0500, Uma Krishnan wrote:
> The following Oops may be encountered if the device is reset, i.e. EEH
> recovery, while there is heavy I/O traffic:
>
> 59:mon> t
> [c000200db64bb680] c00809264c40 cxlflash_queuecommand+0x3b8/0x500
>
Akshay Adiga writes:
> Init all present cpus for deep states instead of "all possible" cpus.
> Init fails if a possible cpu is guarded, resulting in only
> non-deep states being available for cpuidle/hotplug.
Should this also head to stable? It means that for single threaded
workloads, if you
On Fri, May 11, 2018 at 02:05:08PM -0500, Uma Krishnan wrote:
> The kernel log can get filled with debug messages from send_cmd_ioarrin()
> when dynamic debug is enabled for the cxlflash module and there is a lot
> of legacy I/O traffic.
>
> While these messages are necessary to debug issues that
On Fri, May 11, 2018 at 02:05:22PM -0500, Uma Krishnan wrote:
> When a superpipe process that makes use of virtual LUNs is terminated or
> killed abruptly, there is a possibility that the cxlflash driver could
> hang and deprive other operations on the adapter.
>
> The release fop registered to be
On Fri, May 11, 2018 at 02:05:51PM -0500, Uma Krishnan wrote:
> The new header file, backend.h, that was recently added is missing
> the include guards. This commit adds the guards.
>
> Signed-off-by: Uma Krishnan
Acked-by: Matthew R. Ochs
On Fri, May 11, 2018 at 02:06:05PM -0500, Uma Krishnan wrote:
> As a staging cleanup to support transport specific builds of the cxlflash
> module, relocate device dependent assignments to header files. This will
> avoid littering the core driver with conditional compilation logic.
>
> Signed-off-
On Fri, May 11, 2018 at 02:06:19PM -0500, Uma Krishnan wrote:
> Depending on the underlying transport, cxlflash has a dependency on either
> the CXL or OCXL drivers, which are enabled via their Kconfig option.
> Instead of having a module wide dependency on these config options, it is
> better to i
On 14/05/2018 at 10:27, Philippe Bergheaud wrote:
Failure to synchronize the tunneled operations does not prevent
the initialization of the cxl card. This patch reports the tunneled
operations status via /sys.
Signed-off-by: Philippe Bergheaud
---
Thanks for adding the sysfs documentation
On 05/14/2018 08:34 AM, Florian Weimer wrote:
>>> The initial PKRU value can currently be configured by the system
>>> administrator. I fear this approach has too many moving parts to be
>>> viable.
>>
>> Honestly, I think we should drop that option. I don’t see how we can
>> expect an administrat
On Mon, Apr 30, 2018 at 06:44:26PM -0400, Mathieu Desnoyers wrote:
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index c32a181a7cbb..ed21a777e8c6 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -223,6 +223,7 @@ config PPC
> select HAVE_SYSCALL_TRACEPOINTS
- On May 16, 2018, at 12:18 PM, Peter Zijlstra pet...@infradead.org wrote:
> On Mon, Apr 30, 2018 at 06:44:26PM -0400, Mathieu Desnoyers wrote:
>> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
>> index c32a181a7cbb..ed21a777e8c6 100644
>> --- a/arch/powerpc/Kconfig
>> +++ b/arch/pow
On Tue, May 08, 2018 at 02:40:46PM +0200, Florian Weimer wrote:
> On 05/08/2018 04:49 AM, Andy Lutomirski wrote:
> >On Mon, May 7, 2018 at 2:48 AM Florian Weimer wrote:
> >
> >>On 05/03/2018 06:05 AM, Andy Lutomirski wrote:
> >>>On Wed, May 2, 2018 at 7:11 PM Ram Pai wrote:
> >>>
> On Wed, Ma
On Wed, May 16, 2018 at 1:35 PM Ram Pai wrote:
> On Tue, May 08, 2018 at 02:40:46PM +0200, Florian Weimer wrote:
> > On 05/08/2018 04:49 AM, Andy Lutomirski wrote:
> > >On Mon, May 7, 2018 at 2:48 AM Florian Weimer
wrote:
> > >
> > >>On 05/03/2018 06:05 AM, Andy Lutomirski wrote:
> > >>>On Wed,
On Mon, May 14, 2018 at 02:01:23PM +0200, Florian Weimer wrote:
> On 05/09/2018 04:41 PM, Andy Lutomirski wrote:
> >Hmm. I can get on board with the idea that fork() / clone() /
> >pthread_create() are all just special cases of the idea that the thread
> >that*calls* them should have the right pk
On Wed, May 16, 2018 at 1:52 PM Ram Pai wrote:
> On Mon, May 14, 2018 at 02:01:23PM +0200, Florian Weimer wrote:
> > On 05/09/2018 04:41 PM, Andy Lutomirski wrote:
> > >Hmm. I can get on board with the idea that fork() / clone() /
> > >pthread_create() are all just special cases of the idea that
On Wed, May 16, 2018 at 01:37:46PM -0700, Andy Lutomirski wrote:
> On Wed, May 16, 2018 at 1:35 PM Ram Pai wrote:
>
> > On Tue, May 08, 2018 at 02:40:46PM +0200, Florian Weimer wrote:
> > > On 05/08/2018 04:49 AM, Andy Lutomirski wrote:
> > > >On Mon, May 7, 2018 at 2:48 AM Florian Weimer
> wrot
On Tue, Apr 10, 2018 at 08:34:37AM +0200, Christophe Leroy wrote:
> This reverts commit 6ad966d7303b70165228dba1ee8da1a05c10eefe.
>
> That commit was pointless, because csum_add() sums two 32 bits
> values, so the sum is 0x1fffffffe at the maximum.
> And then when adding upper part (1) and lower p
On Mon, May 07, 2018 at 02:20:11PM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> This patch reimplements non-SIMD LOAD/STORE instruction MMIO emulation
> with analyse_instr() input. It utilizes the BYTEREV/UPDATE/SIGNEXT
> properties exported by analyse_instr() and invokes
> kvmppc_h
On Mon, May 07, 2018 at 02:20:13PM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> This patch reimplements LOAD_FP/STORE_FP instruction MMIO emulation with
> analyse_instr() input. It utilizes the FPCONV/UPDATE properties exported by
> analyse_instr() and invokes kvmppc_handle_load(s)/
On Mon, May 07, 2018 at 02:20:06PM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> We already have analyse_instr() which analyzes instructions for the
> instruction type, size, additional flags, etc. What kvmppc_emulate_loadstore()
> did is somehow duplicated and it will be good
"Naveen N. Rao" writes:
> diff --git a/tools/testing/selftests/powerpc/utils.c
> b/tools/testing/selftests/powerpc/utils.c
> index d46916867a6f..c6b1d20ed3ba 100644
> --- a/tools/testing/selftests/powerpc/utils.c
> +++ b/tools/testing/selftests/powerpc/utils.c
> @@ -104,3 +111,149 @@ int pick_on
On Wed, May 16, 2018 at 04:13:16PM -0400, Mathieu Desnoyers wrote:
> - On May 16, 2018, at 12:18 PM, Peter Zijlstra pet...@infradead.org wrote:
>
> > On Mon, Apr 30, 2018 at 06:44:26PM -0400, Mathieu Desnoyers wrote:
> >> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> >> index c32a
On 16/05/18 14:48, Stephen Rothwell wrote:
Hi all,
I have decided that any email sent to the linuxppc-dev mailing list
that contains an HTML attachment (or is just an HTML email) will be
rejected. The vast majority of such mail are spam (and I have to spend
time dropping them manually at the mo
On Thu, May 17, 2018 at 09:52:07AM +1000, Paul Mackerras wrote:
> On Mon, May 07, 2018 at 02:20:13PM +0800, wei.guo.si...@gmail.com wrote:
> > From: Simon Guo
> >
> > This patch reimplements LOAD_FP/STORE_FP instruction MMIO emulation with
> > analyse_instr() input. It utilizes the FPCONV/UPDATE p
Hi Paul,
On Thu, May 17, 2018 at 09:49:18AM +1000, Paul Mackerras wrote:
> On Mon, May 07, 2018 at 02:20:11PM +0800, wei.guo.si...@gmail.com wrote:
> > From: Simon Guo
> >
> > This patch reimplements non-SIMD LOAD/STORE instruction MMIO emulation
> > with analyse_instr() input. It utilizes the BYT
The current asm statement in __patch_instruction() for the cache flushes
lacks a "volatile" qualifier and a memory clobber. That means
gcc can potentially move it around (or move the store done by put_user
past the flush).
Add both to ensure gcc doesn't play games.
Found by code inspectio
On Mon, May 14, 2018 at 02:04:10PM +1000, Michael Ellerman wrote:
[snip]
> OK good, in commit:
>
> c17b98cf6028 ("KVM: PPC: Book3S HV: Remove code for PPC970 processors") (Dec
> 2014)
>
> So we should be able to do the patch below.
>
> cheers
>
>
> diff --git a/arch/powerpc/include/asm/kvm_ho
tree: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git merge
head: 46cf23553743d51ea53be61efce633061fa47f17
commit: 900be8ab1549359ba980cfb042a043128204a963 [138/143] Automatic merge of
branches 'master', 'next' and 'fixes' into merge
config: powerpc-kilauea_defconfig (attache
In this change:
e2a800beac powerpc/hw_brk: Fix off by one error when validating DAWR region
end
We fixed setting the DAWR end point to its max value via
PPC_PTRACE_SETHWDEBUG. Unfortunately we broke PTRACE_SET_DEBUGREG when
setting a 512 byte aligned breakpoint.
PTRACE_SET_DEBUGREG currently s
Back when we first introduced the DAWR in this commit:
4ae7ebe952 powerpc: Change hardware breakpoint to allow longer ranges
We screwed up the constraint making it a 1024 byte boundary rather
than a 512. This makes the check overly permissive. Fortunately GDB is
the only real user and it always
Yes this needs to be sent to stable.
Fixes: d405a98c ("powerpc/powernv: Move cpuidle related code from setup.c
to new file")
On 05/15/2018 08:32 PM, Guenter Roeck wrote:
> On Thu, Mar 22, 2018 at 04:24:32PM +0530, Shilpasri G Bhat wrote:
>> This patch series adds support to enable/disable OCC based
>> inband-sensor groups at runtime. The environmental sensor groups are
>> managed in HWMON and the remaining platform spe
The imm field of a bpf instruction is a signed 32-bit integer.
For JIT bpf-to-bpf function calls, it stores the offset of the
start address of the callee's JITed image from __bpf_call_base.
For some architectures, such as powerpc64, this offset may be
as large as 64 bits and cannot be accommodated
This adds two new fields to struct bpf_prog_info. For
multi-function programs, these fields can be used to pass
a list of kernel symbol addresses for all functions in a
given program to userspace, using the bpf system call
with the BPF_OBJ_GET_INFO_BY_FD command.
When bpf_jit_kallsyms is en
Currently, we resolve the callee's address for a JITed function
call by using the imm field of the call instruction as an offset
from __bpf_call_base. If bpf_jit_kallsyms is enabled, we further
use this address to get the callee's kernel symbol's name.
For some architectures, such as powerpc64, th
Currently, for multi-function programs, we cannot get the JITed
instructions using the bpf system call's BPF_OBJ_GET_INFO_BY_FD
command. Because of this, userspace tools such as bpftool fail
to identify a multi-function program as being JITed or not.
With the JIT enabled and the test program runni
This adds support for bpf-to-bpf function calls in the powerpc64
JIT compiler. The JIT compiler converts the bpf call instructions
to native branch instructions. After a round of the usual passes,
the start addresses of the JITed images for the callee functions
are known. Finally, to fixup the bran
Syncing the bpf.h uapi header with tools so that struct
bpf_prog_info has the two new fields for passing on the
addresses of the kernel symbols corresponding to each
function in a JITed program.
Signed-off-by: Sandipan Das
---
tools/include/uapi/linux/bpf.h | 2 ++
1 file changed, 2 insertions(+
This patch series introduces the following:
[1] Support for bpf-to-bpf function calls in the powerpc64 JIT compiler.
[2] Provide a way for resolving function calls because of the way JITed
images are allocated in powerpc64.
[3] Fix to get JITed instruction dumps for multi-function programs f