On 2017/02/07 11:10AM, Michael Ellerman wrote:
> Michael Neuling writes:
>
> > This enables BCC (https://github.com/iovisor/bcc) on powernv.
> >
> > This adds 225KB to the vmlinux size.
> >
> > Signed-off-by: Michael Neuling
> > ---
> > arch/powerpc/configs/powernv_defconfig | 5 +
> > 1 file changed, 5 insertions(+)
On 2017/02/07 10:05AM, Masami Hiramatsu wrote:
> On Sat, 4 Feb 2017 01:09:49 +0530
> "Naveen N. Rao" wrote:
>
> > Hi Michael,
> > Thanks for the review! I'll defer to Anju on most of the aspects, but...
> >
> > On 2017/02/01 09:53PM, Michael Ellerman wrote:
> > > Anju T Sudhakar writes:
> > >
On 2017/2/7 2:46 PM, Eric Dumazet wrote:
On Mon, Feb 6, 2017 at 10:21 PM, panxinhui wrote:
Hi all,
I ran some netperf tests and got some benchmark results.
I have also attached my test script and the netperf results (Excel).
There are two machines: one runs netserver and the other runs the netperf
benchmark.
VFIO on sPAPR already implements guest memory pre-registration
when the entire guest RAM gets pinned. This can be used to translate
the physical address of a guest page containing the TCE list
from H_PUT_TCE_INDIRECT.
This makes use of the pre-registered memory API to access TCE list
pages in ord
For emulated devices it does not matter much if we get a broken TCE
halfway through handling a TCE list, but for VFIO it will matter as it has
more chances to fail, so we try to do our best and check as much as we
can before proceeding.
This separates a guest view table update from validation. No chang
This allows the host kernel to handle H_PUT_TCE, H_PUT_TCE_INDIRECT
and H_STUFF_TCE requests targeted at an IOMMU TCE table used for VFIO
without passing them to user space which saves time on switching
to user space and back.
This adds H_PUT_TCE/H_PUT_TCE_INDIRECT/H_STUFF_TCE handlers to KVM.
KVM tr
The guest-view TCE tables are per-KVM anyway (not per-VCPU), so pass kvm*
there. This will be used in the following patches where we will be
attaching VFIO containers to LIOBNs via ioctl() to KVM (rather than
to VCPU).
Signed-off-by: Alexey Kardashevskiy
Reviewed-by: David Gibson
---
arch/powerp
It does not make much sense to have KVM in book3s-64 and
not have the IOMMU bits for PCI pass-through support, as it costs little
and allows VFIO to function on book3s KVM.
Having IOMMU_API always enabled makes it unnecessary to have a lot of
"#ifdef IOMMU_API" in arch/powerpc/kvm/book3s_64_vio*. Wi
This adds a capability number for in-kernel support for VFIO on
SPAPR platform.
The capability will tell user space whether the in-kernel handlers of
H_PUT_TCE can handle VFIO-targeted requests or not. If not, user space
must not attempt to allocate a TCE table in the host kernel via
the KVM_CR
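User space would typically probe a capability like this via KVM_CHECK_EXTENSION; a minimal sketch, assuming the capability ends up exposed in <linux/kvm.h> as KVM_CAP_SPAPR_TCE_VFIO:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <linux/kvm.h>

	int main(void)
	{
		int kvm = open("/dev/kvm", O_RDWR);
		int ret;

		if (kvm < 0)
			return 1;

		/* KVM_CHECK_EXTENSION returns > 0 when the capability is present. */
		ret = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_SPAPR_TCE_VFIO);
		printf("in-kernel VFIO TCE handling: %ssupported\n", ret > 0 ? "" : "not ");

		close(kvm);
		return 0;
	}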
So far iommu_table objects were only used in virtual mode and had
a single owner. We are going to change this by implementing in-kernel
acceleration of DMA mapping requests. The proposed acceleration
will handle requests in real mode and KVM will keep references to tables.
This adds a kref to iomm
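The reference counting presumably follows the standard kref pattern; a self-contained sketch with illustrative names (not the actual iommu_table fields or helpers):

	#include <linux/kernel.h>
	#include <linux/kref.h>
	#include <linux/slab.h>

	struct example_table {			/* stand-in for a shared iommu_table */
		struct kref kref;
		/* ... table state ... */
	};

	static void example_table_release(struct kref *kref)
	{
		struct example_table *tbl = container_of(kref, struct example_table, kref);

		kfree(tbl);
	}

	static struct example_table *example_table_alloc(void)
	{
		struct example_table *tbl = kzalloc(sizeof(*tbl), GFP_KERNEL);

		if (tbl)
			kref_init(&tbl->kref);	/* first reference belongs to the creator */
		return tbl;
	}

	static void example_table_get(struct example_table *tbl)
	{
		kref_get(&tbl->kref);		/* e.g. taken when KVM starts using the table */
	}

	static void example_table_put(struct example_table *tbl)
	{
		kref_put(&tbl->kref, example_table_release);
	}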
At the moment iommu_table can be disposed by either calling
iommu_table_free() directly or it_ops::free(); the only implementation
of free() is in IODA2 - pnv_ioda2_table_free() - and it calls
iommu_table_free() anyway.
As we are going to have reference counting on tables, we need a unified
way o
In real mode, TCE tables are invalidated using special
cache-inhibited store instructions which are not available in
virtual mode
This defines and implements exchange_rm() callback. This does not
define set_rm/clear_rm/flush_rm callbacks as there is no user for those -
exchange/exchange_rm are onl
This makes mm_iommu_lookup() able to work in realmode by replacing
list_for_each_entry_rcu() (which can do debug stuff which can fail in
real mode) with list_for_each_entry_lockless().
This adds a realmode version of mm_iommu_ua_to_hpa() which adds an
explicit vmalloc'd-to-linear address conversion.
Un
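The first change above amounts to swapping the list iterator; a minimal sketch of a real-mode-safe lookup, using an illustrative descriptor struct rather than the actual mm_iommu_table_group_mem_t:

	#include <linux/rculist.h>

	struct reg_mem {			/* illustrative registered-memory descriptor */
		struct list_head next;
		unsigned long ua;		/* userspace address of the region */
		unsigned long entries;		/* region size in pages */
	};

	/*
	 * list_for_each_entry_lockless() does a plain lockless walk, avoiding the
	 * RCU debugging hooks of list_for_each_entry_rcu() which may fault when
	 * running in real mode.
	 */
	static struct reg_mem *reg_mem_lookup_rm(struct list_head *head,
						 unsigned long ua, unsigned long entries)
	{
		struct reg_mem *mem;

		list_for_each_entry_lockless(mem, head, next)
			if (mem->ua == ua && mem->entries == entries)
				return mem;

		return NULL;
	}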
This is my current queue of patches to add acceleration of TCE
updates in KVM.
This is based on 283725af0bd2, which was the tip of Linus' master tree
6 days ago.
Please comment. Thanks.
Changes:
v4:
* addressed comments from v3
* updated subject lines with correct component names
* regrouped the patchset i
If a container already has a group attached, attaching a new group
should just program already created IOMMU tables to the hardware via
the iommu_table_group_ops::set_window() callback.
However, commit 6f01cc692 ("vfio/spapr: Add a helper to create default DMA window")
did not just simplify the code but als
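The restored behaviour is conceptually just re-programming the container's existing tables into the newly attached group; a sketch along those lines, assuming the iommu_table_group_ops::set_window() callback signature and an illustrative container struct (not the actual vfio_iommu_spapr_tce code):

	#include <asm/iommu.h>

	struct example_container {		/* illustrative stand-in for the TCE container */
		struct iommu_table *tables[IOMMU_TABLE_GROUP_MAX_TABLES];
	};

	static long example_attach_existing_windows(struct example_container *container,
						     struct iommu_table_group *table_group)
	{
		long ret;
		int i;

		for (i = 0; i < IOMMU_TABLE_GROUP_MAX_TABLES; ++i) {
			struct iommu_table *tbl = container->tables[i];

			if (!tbl)
				continue;

			/* Program an already-created window into the new group's hardware. */
			ret = table_group->ops->set_window(table_group, i, tbl);
			if (ret)
				return ret;
		}

		return 0;
	}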
From: Benjamin Herrenschmidt
All entry points already read the MSR so they can easily do
the right thing.
Signed-off-by: Benjamin Herrenschmidt
Signed-off-by: Paul Mackerras
---
This version is rebased against the topic/ppc-kvm branch of the
powerpc git tree.
arch/powerpc/include/asm/opal.h
On Tue, 2017-02-07 at 09:47 +0530, Aneesh Kumar K.V wrote:
> > Benjamin Herrenschmidt writes:
>
> > That's 48 bits. I would keep the limit at 47 without some explicit
> > opt-in by applications. That's what users get on x86 and we know
> > some GPUs have limits there.
>
> The idea is to have lin
Benjamin Herrenschmidt writes:
> We have all sorts of variants of MMIO accessors for the real mode
> instructions. This creates a clean set of accessors based on the
> normal Linux naming conventions, replacing all occurrences of
> the old ones in the tree.
>
> I have purposefully removed the "out/in"
Hi mpe,
Any update on this series? I have also fixed the naming issue with patch 12,
and with this series applied,
"paca->soft_enabled" becomes "paca->soft_disabled_mask".
Kindly let me know your comments.
Maddy
On Monday 09 January 2017 07:06 PM, Madhavan Srinivasan wrote:
Local atomic operati
Benjamin Herrenschmidt writes:
> That's 48 bits. I would keep the limit at 47 without some explicit
> opt-in by applications. That's what users get on x86 and we know
> some GPUs have limits there.
The idea is to have Linux personality values that will limit the
effective address to different me
Nicholas Piggin writes:
> System reset is a non-maskable interrupt from Linux's point of view
> (occurs under local_irq_disable()), so it should use nmi_enter/exit.
...
> diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
> index 802aa6bbe97b..c65c88fb6482 100644
> --- a/arch/
That's 48 bits. I would keep the limit at 47 without some explicit
opt-in by applications. That's what users get on x86 and we know
some GPUs have limits there.
Cheers,
Ben.
From 5f32fd8fef0cb762ddb0939265455a0b3db2911b Mon Sep 17 00:00:00 2001
From: Benjamin Herrenschmidt
Date: Tue, 7 Feb 2017 15:01:55 +1100
Subject:
All entry points already read the MSR so they can easily do
the right thing.
Signed-off-by: Benjamin Herrenschmidt
---
v2. Test both IR and DR and
On Mon, 2017-02-06 at 20:51 -0600, Segher Boessenkool wrote:
> On Tue, Feb 07, 2017 at 01:17:59PM +1100, Benjamin Herrenschmidt
> wrote:
> > @@ -123,7 +149,6 @@ opal_tracepoint_entry:
> > ld r9,STK_REG(R30)(r1)
> > ld r10,STK_REG(R31)(r1)
> > LOAD_REG_ADDR(r11,opal_tracepoint_
Not-Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/hash-4k.h | 2 +-
arch/powerpc/include/asm/book3s/64/hash-64k.h | 2 +-
arch/powerpc/include/asm/page_64.h | 2 +-
arch/powerpc/include/asm/processor.h | 12 +++-
arch/powerpc/mm/slice.c
We still do a 19-bit context. For p4 and p5 we do a 65-bit VA.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/mmu-hash.h | 125 --
arch/powerpc/include/asm/mmu.h | 19 ++--
arch/powerpc/mm/mmu_context_book3s64.c | 8 +-
arch/p
This enables us to limit the max context based on platforms.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/mmu-hash.h | 39 ++---
arch/powerpc/include/asm/mmu_context.h | 2 -
arch/powerpc/kvm/book3s_64_mmu_host.c | 2 +-
arch/powerpc/mm/hash_uti
This avoids copying the slice_struct as a function return value.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/slice.c | 63 +++--
1 file changed, 29 insertions(+), 34 deletions(-)
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index
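The change is the usual C idiom of filling a caller-provided structure instead of returning it by value; a toy illustration (not the actual slice code):

	#include <linux/string.h>

	struct big_mask {			/* stand-in for the slice mask being copied around */
		unsigned long low;
		unsigned long high[4];
	};

	/* Returning the struct copies all of it back to the caller on every call... */
	static struct big_mask compute_mask_by_value(void)
	{
		struct big_mask m = { 0 };
		/* ... fill m ... */
		return m;
	}

	/* ...whereas an out-parameter lets the caller's copy be filled in place. */
	static void compute_mask(struct big_mask *m)
	{
		memset(m, 0, sizeof(*m));
		/* ... fill *m ... */
	}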
In a followup patch we want to increase the VA range, which will result
in us requiring high_slices to have more than 64 bits. To enable this,
convert high_slices to a bitmap. We keep the number of bits the same in this
patch and later change it to a larger value.
Signed-off-by: Aneesh Kumar K.V
---
arch/powe
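A sketch of what the conversion looks like with the generic bitmap helpers (illustrative names, and the width kept at 64 bits as this patch does):

	#include <linux/bitmap.h>

	#define EXAMPLE_HIGH_SLICES 64		/* kept at 64 here, grown in a later patch */

	struct example_slice_mask {
		unsigned short low_slices;
		DECLARE_BITMAP(high_slices, EXAMPLE_HIGH_SLICES);
	};

	/* Mark a contiguous run of high slices; the old code did
	 * "mask |= ((1ul << count) - 1) << start" on a plain u64. */
	static void example_mark_high_slices(struct example_slice_mask *m,
					     unsigned int start, unsigned int count)
	{
		bitmap_set(m->high_slices, start, count);
	}

	/* "Is every slice in m1 also set in m2?" - replaces "(m1 & m2) == m1". */
	static bool example_high_slices_subset(const struct example_slice_mask *m1,
					       const struct example_slice_mask *m2)
	{
		DECLARE_BITMAP(tmp, EXAMPLE_HIGH_SLICES);

		bitmap_and(tmp, m1->high_slices, m2->high_slices, EXAMPLE_HIGH_SLICES);
		return bitmap_equal(tmp, m1->high_slices, EXAMPLE_HIGH_SLICES);
	}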
This patch series updates ppc64 to use a 68-bit virtual address. The goal here
is to help us increase the effective address range to 512TB. I still haven't
come up with a mechanism to enable applications to selectively use addresses above
64TB (the current limit). The last patch in this series is ju
From: LiuHailong
Debug interrupts can be taken during regular program execution or during a
standard interrupt; the EA of the instruction causing the interrupt will be
kept in DSRR0.
The kernel will check whether this value is between [interrupt_base_book3e,
__end_interrupts].
However, when the kernel is built with CONFIG_RE
On Mon, Feb 06, 2017 at 04:58:16PM -0200, Thiago Jung Bauermann wrote:
> [ 447.714064] Querying DEAD? cpu 134 (134) shows 2
> cpu 0x86: Vector: 300 (Data Access) at [c7b0fd40]
> pc: 1ec3072c
> lr: 1ec2fee0
> sp: 1faf6bd0
> msr: 800102801000
> dar: 212d
On Tue, Feb 07, 2017 at 01:17:59PM +1100, Benjamin Herrenschmidt wrote:
> @@ -123,7 +149,6 @@ opal_tracepoint_entry:
> ld r9,STK_REG(R30)(r1)
> ld r10,STK_REG(R31)(r1)
> LOAD_REG_ADDR(r11,opal_tracepoint_return)
> - mfcr r12
> std r11,16(r1)
> stw
Bhupesh Sharma writes:
> powerpc: arch_mmap_rnd() uses hard-coded values, (23-PAGE_SHIFT) for
> 32-bit and (30-PAGE_SHIFT) for 64-bit, to generate the random offset
> for the mmap base address.
>
> This value represents a compromise between increased
> ASLR effectiveness and avoiding address-spac
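For reference, the logic being discussed is roughly the following (8MB of randomisation for 32-bit tasks, 1GB for 64-bit); a sketch of the current shape, not the proposed change:

	#include <linux/mm.h>
	#include <linux/random.h>
	#include <asm/thread_info.h>		/* is_32bit_task() */

	unsigned long example_arch_mmap_rnd(void)
	{
		unsigned long rnd;

		if (is_32bit_task())
			rnd = get_random_long() % (1UL << (23 - PAGE_SHIFT));	/* 8MB */
		else
			rnd = get_random_long() % (1UL << (30 - PAGE_SHIFT));	/* 1GB */

		return rnd << PAGE_SHIFT;	/* offset applied to the mmap base */
	}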
Hal Murray writes:
> b...@kernel.crashing.org said:
>> Ok, I do have one though somewhere with OS X on it. If you give me
>> instructions on how to test (I know near to nothing about ntpsec), I should
>> be able to compile and run it.
>
> I'm assuming you are already running the normal ntpd from
We have all sorts of variants of MMIO accessors for the real mode
instructions. This creates a clean set of accessors based on the
normal Linux naming conventions, replacing all occurrences of
the old ones in the tree.
I have purposefully removed the "out/in" variants in favor of
only including __raw v
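The real-mode variants boil down to the cache-inhibited load/store instructions; a sketch of the flavour of accessor involved, with illustrative names rather than whatever the patch finally settles on:

	#include <linux/types.h>
	#include <linux/compiler.h>

	/* Cache-inhibited 64-bit store/load, usable in real mode (powerpc64). */
	static inline void example_rm_writeq(u64 val, volatile void __iomem *paddr)
	{
		__asm__ __volatile__("stdcix %0,0,%1"
				     : : "r" (val), "r" (paddr) : "memory");
	}

	static inline u64 example_rm_readq(const volatile void __iomem *paddr)
	{
		u64 ret;

		__asm__ __volatile__("ldcix %0,0,%1"
				     : "=r" (ret) : "r" (paddr) : "memory");
		return ret;
	}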
All entry points already read the MSR so they can easily do
the right thing.
Signed-off-by: Benjamin Herrenschmidt
---
arch/powerpc/include/asm/opal.h | 7 ---
arch/powerpc/kernel/idle_book3s.S | 6 +--
arch/powerpc/kvm/book3s_hv_builtin.c | 13 +++--
arch
On Tue, Feb 07, 2017 at 12:09:27AM +0530, Aneesh Kumar K.V wrote:
> Without this we will always find the feature disabled
>
> Fixes: 984d7a1ec6 ("powerpc/mm: Fixup kernel read only mapping")
> Signed-off-by: Aneesh Kumar K.V
> ---
> arch/powerpc/include/asm/mmu.h | 1 +
> 1 file changed, 1 inser
Greg KH writes:
> On Mon, Feb 06, 2017 at 12:15:59PM +1100, Andrew Donnellan wrote:
>> On 27/01/17 11:57, Andrew Donnellan wrote:
>> > On 27/01/17 11:40, Michael Ellerman wrote:
>> > > Applied to powerpc next, thanks.
>> > >
>> > > https://git.kernel.org/powerpc/c/14a3ae34bfd0bcb1cc12d55b06a858
Thiago Jung Bauermann writes:
> When testing DLPAR CPU add/remove on a system under stress, pseries_cpu_die
> doesn't wait long enough for a CPU to die and the kernel ends up crashing:
>
> [ 446.143648] cpu 152 (hwid 152) Ready to die...
> [ 446.464057] cpu 153 (hwid 153) Ready to die...
> [ 4
On 11/01/17 12:09, Gavin Shan wrote:
The local variable @iov isn't used, so remove it.
Signed-off-by: Gavin Shan
Reviewed-by: Andrew Donnellan
--
Andrew Donnellan OzLabs, ADL Canberra
andrew.donnel...@au1.ibm.com IBM Australia Limited
On Mon, Feb 06, 2017 at 04:58:16PM -0200, Thiago Jung Bauermann wrote:
> This was reproduced in v4.10-rc6 as well, but I don't have a crash log
> handy for that version right now. Sorry.
>
This is the crash log of v4.10:
roselp4 login: [ 505.097727] sysrq: SysRq : Changing Loglevel
[ 505.097743
On Sat, 4 Feb 2017 01:09:49 +0530
"Naveen N. Rao" wrote:
> Hi Michael,
> Thanks for the review! I'll defer to Anju on most of the aspects, but...
>
> On 2017/02/01 09:53PM, Michael Ellerman wrote:
> > Anju T Sudhakar writes:
> >
> > > +static void optimized_callback(struct optimized_kprobe *op
Douglas Miller writes:
> Hi Michael,
>
> Yes, your patch seems a more complete solution. The idea of "d1", "d2",
> "d4", and "d8" commands is more what I needed and makes more sense to
> someone hitting xmon "cold". I'll work on getting your patch submitted.
>
>
> Question on these sorts of pat
We don't need asm/xics.h
Signed-off-by: Benjamin Herrenschmidt
---
arch/powerpc/platforms/powernv/opal-lpc.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/powerpc/platforms/powernv/opal-lpc.c
b/arch/powerpc/platforms/powernv/opal-lpc.c
index 1a8cd54..990e8b1 100644
--- a/arch/powerpc/
migrate_irqs() is used by some platforms to migrate interrupts
away from a CPU about to be offlined.
The current implementation had various issues such as not taking
the descriptor lock before manipulating it. This refactors it
a bit, fixing that problem at the same time.
Signed-off-by: Benjamin
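The locking fix is the standard pattern of holding desc->lock across the affinity check and update; a rough sketch using the generic IRQ APIs (not the actual patch, and assuming it runs on the offlining CPU with interrupts already disabled):

	#include <linux/cpumask.h>
	#include <linux/irq.h>
	#include <linux/irqdesc.h>
	#include <linux/irqnr.h>

	static void example_migrate_irqs_away(unsigned int dying_cpu)
	{
		unsigned int irq;

		for_each_active_irq(irq) {
			struct irq_desc *desc = irq_to_desc(irq);
			const struct cpumask *affinity;

			raw_spin_lock(&desc->lock);	/* irqs already off on this path */
			affinity = irq_data_get_affinity_mask(&desc->irq_data);
			if (cpumask_test_cpu(dying_cpu, affinity))
				irq_set_affinity_locked(&desc->irq_data,
							cpu_online_mask, false);
			raw_spin_unlock(&desc->lock);
		}
	}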
Otherwise KVM will fail to pass them through to the host
Signed-off-by: Benjamin Herrenschmidt
---
arch/powerpc/sysdev/xics/icp-opal.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/sysdev/xics/icp-opal.c
b/arch/powerpc/sysdev/xics/icp-opal.c
index c96c0c
The IPIs come in as HVI not EE, so we need to test the appropriate
SRR1 bits. The encoding is such that it won't have false positives
on P7 and P8 so we can just test it like that. We also need to handle
the icp-opal variant of the flush.
Signed-off-by: Benjamin Herrenschmidt
---
arch/powerpc/in
Michael Neuling writes:
> This enables BCC (https://github.com/iovisor/bcc) on powernv.
>
> This adds 225KB to the vmlinux size.
>
> Signed-off-by: Michael Neuling
> ---
> arch/powerpc/configs/powernv_defconfig | 5 +
> 1 file changed, 5 insertions(+)
>
> diff --git a/arch/powerpc/configs/p
On 02/06/2017 01:36 PM, Christophe JAILLET wrote:
> If 'dlpar_configure_connector()' fails, 'parent_dn' should be released as
> already done in the normal case.
>
> Signed-off-by: Christophe JAILLET
> ---
> arch/powerpc/platforms/pseries/mobility.c | 7 +--
> 1 file changed, 5 insertions(+),
If 'dlpar_configure_connector()' fails, 'parent_dn' should be released as
already done in the normal case.
Signed-off-by: Christophe JAILLET
---
arch/powerpc/platforms/pseries/mobility.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/platforms/pseries/mob
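The shape of the fix, as a sketch mirroring the flow described above (dlpar_configure_connector() is the pseries helper; the surrounding function here is illustrative):

	#include <linux/errno.h>
	#include <linux/kernel.h>
	#include <linux/of.h>

	/* From the pseries code; declared in the platform's local headers. */
	extern struct device_node *dlpar_configure_connector(__be32 drc_index,
							     struct device_node *parent);

	static int example_add_dt_node(__be32 parent_phandle, __be32 drc_index)
	{
		struct device_node *parent_dn, *dn;

		parent_dn = of_find_node_by_phandle(be32_to_cpu(parent_phandle));
		if (!parent_dn)
			return -ENOENT;

		dn = dlpar_configure_connector(drc_index, parent_dn);
		if (!dn) {
			of_node_put(parent_dn);	/* previously leaked on this error path */
			return -ENOENT;
		}

		/* ... attach dn to the tree ... */

		of_node_put(parent_dn);		/* the normal path already drops the reference */
		return 0;
	}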
On Fri, 2017-02-03 at 11:05:51 UTC, Michael Ellerman wrote:
> From: Benjamin Herrenschmidt
>
> It's a kernel-private macro; it doesn't belong there
>
> Signed-off-by: Benjamin Herrenschmidt
> Signed-off-by: Michael Ellerman
Series applied to powerpc next.
https://git.kernel.org/powerpc/c/2a
On 02/06/2017 07:50 AM, Douglas Miller wrote:
Hi Michael,
Yes, your patch seems a more complete solution. The idea of "d1",
"d2", "d4", and "d8" commands is more what I needed and makes more
sense to someone hitting xmon "cold". I'll work on getting your patch
submitted.
Question on these
On Wed, 2017-02-01 at 03:22:07 UTC, Andrew Donnellan wrote:
> Stub out the debugfs functions so that the build doesn't break when
> CONFIG_DEBUG_FS=n.
>
> Reported-by: Michael Ellerman
> Signed-off-by: Andrew Donnellan
> Acked-by: Ian Munsie
Applied to powerpc next, thanks.
https://git.kernel
On Tue, 2016-12-06 at 06:27:58 UTC, Andrew Donnellan wrote:
> The variable DISABLE_LATENT_ENTROPY_PLUGIN is defined when
> CONFIG_PAX_LATENT_ENTROPY is set. This is leftover from the original PaX
> version of the plugin code and doesn't actually exist. Change the condition
> to depend on CONFIG_GCC
On Mon, Feb 06, 2017 at 05:44:31PM +0100, Petr Mladek wrote:
> > > > @@ -347,22 +354,37 @@ static int __klp_enable_patch(struct klp_patch
> > > > *patch)
> > > >
> > > > pr_notice("enabling patch '%s'\n", patch->mod->name);
> > > >
> > > > + klp_init_transition(patch, KLP_PATCHED
On Thu, 26 Jan 2017 16:18:27 +0100
wrote:
> From: Mark Marshall
>
> The commit 7a654172161c ("mtd/ifc: Add support for IFC controller
> version 2.0") added support for version 2.0 of the IFC controller.
> The version 2.0 controller has the ECC status registers at a different
> location to the p
When testing DLPAR CPU add/remove on a system under stress, pseries_cpu_die
doesn't wait long enough for a CPU to die and the kernel ends up crashing:
[ 446.143648] cpu 152 (hwid 152) Ready to die...
[ 446.464057] cpu 153 (hwid 153) Ready to die...
[ 446.473525] cpu 154 (hwid 154) Ready to die.
Without this we will always find the feature disabled
Fixes: 984d7a1ec6 ("powerpc/mm: Fixup kernel read only mapping")
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/mmu.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/
On Fri 2017-02-03 14:39:16, Josh Poimboeuf wrote:
> On Thu, Feb 02, 2017 at 12:51:16PM +0100, Petr Mladek wrote:
> > !!! This is the right version. I am sorry again for the confusion. !!!
> >
> > > static int __klp_disable_patch(struct klp_patch *patch)
> > > {
> > > - struct klp_object *obj;
> >
On 03/02/2017 at 07:57, Vaibhav Jain wrote:
During an EEH event, when the cxl card is fenced and the card sysfs attribute
perst_reloads_same_image is set, the following warning message is seen in the
kernel logs:
[ 60.622727] Adapter context unlocked with 0 active contexts
[ 60.622762] [
On Fri, Feb 03, 2017 at 05:41:28PM +0100, Miroslav Benes wrote:
>
> Petr has already mentioned majority of things I too found out, so only
> couple of nits...
>
> > diff --git a/Documentation/ABI/testing/sysfs-kernel-livepatch
> > b/Documentation/ABI/testing/sysfs-kernel-livepatch
> > index da8
On Mon, Feb 06, 2017 at 07:10:48AM -0800, Paul E. McKenney wrote:
> On Mon, Feb 06, 2017 at 11:53:10AM +0530, Sachin Sant wrote:
> >
> > >>> I've seen it on tip. It looks like hot unplug goes really slow when
> > >>> there's running tasks on the CPU being taken down.
> > >>>
> > >>> What I did wa
On Mon, Feb 06, 2017 at 11:53:10AM +0530, Sachin Sant wrote:
>
> >>> I've seen it on tip. It looks like hot unplug goes really slow when
> >>> there's running tasks on the CPU being taken down.
> >>>
> >>> What I did was something like:
> >>>
> >>> taskset -p $((1<<1)) $$
> >>> for ((i=0; i<20
Hi Michael,
Yes, your patch seems a more complete solution. The idea of "d1", "d2",
"d4", and "d8" commands is more what I needed and makes more sense to
someone hitting xmon "cold". I'll work on getting your patch submitted.
Question on these sorts of patches (PPC only), do we submit initia
On Fri, Feb 03, 2017 at 05:10:28PM +1100, Benjamin Herrenschmidt wrote:
> When autonuma marks a PTE inaccessible it clears all the protection
> bits but leave the PTE valid.
>
> With the Radix MMU, an attempt at executing from such a PTE will
> take a fault with bit 35 of SRR1 set "SRR1_ISI_N_OR_G
This is a very basic test of the new cache shape AUXV entries. All it
does at the moment is look for the entries and error out if we don't
find all the ones we expect. Primarily intended for folks bringing up a
new chip to check that the cache info is making it all the way to
userspace correctly.
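A userspace check of this kind is just a matter of getauxval(); a minimal sketch, assuming the new powerpc cache entries are exposed under the AT_L*_CACHESIZE / AT_L*_CACHEGEOMETRY names:

	#include <stdio.h>
	#include <sys/auxv.h>
	#include <linux/auxvec.h>	/* pulls in the powerpc AT_L*_CACHE* values */

	int main(void)
	{
		static const unsigned long types[] = {
			AT_L1I_CACHESIZE, AT_L1I_CACHEGEOMETRY,
			AT_L1D_CACHESIZE, AT_L1D_CACHEGEOMETRY,
			AT_L2_CACHESIZE,  AT_L2_CACHEGEOMETRY,
			AT_L3_CACHESIZE,  AT_L3_CACHEGEOMETRY,
		};
		unsigned int i, missing = 0;

		for (i = 0; i < sizeof(types) / sizeof(types[0]); i++) {
			unsigned long val = getauxval(types[i]);

			if (!val) {	/* getauxval() returns 0 for absent entries */
				printf("auxv entry %lu missing\n", types[i]);
				missing++;
			} else {
				printf("auxv entry %lu = 0x%lx\n", types[i], val);
			}
		}

		return missing ? 1 : 0;
	}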
Refactor the AUXV routines so they are more composable. In a future test
we want to look for many AUXV entries and we don't want to have to read
/proc/self/auxv each time.
Signed-off-by: Michael Ellerman
---
tools/testing/selftests/powerpc/include/utils.h | 6 ++-
tools/testing/selftests/powerp
On Mon, Feb 06, 2017 at 12:15:59PM +1100, Andrew Donnellan wrote:
> On 27/01/17 11:57, Andrew Donnellan wrote:
> > On 27/01/17 11:40, Michael Ellerman wrote:
> > > Applied to powerpc next, thanks.
> > >
> > > https://git.kernel.org/powerpc/c/14a3ae34bfd0bcb1cc12d55b06a858
> >
> > Will fix the rem