Re: [PATCH v3 0/4] implement vcpu preempted check

2016-09-30 Thread Paolo Bonzini
> > > > Please consider s390 and (x86/arm) KVM. Once we have a few, more can
> > > > follow later, but I think it's important to not only have PPC support for
> > > > this.
> > >
> > > Actually the s390 preempted check via sigp sense running is available for
> > > all hypervisors (z/VM, LPAR and KVM), which implies everywhere, as you can
> > > no longer buy s390 systems without LPAR.
> > >
> > > As Heiko already pointed out, we could simply use a small inline function
> > > that calls cpu_is_preempted from arch/s390/lib/spinlock (or
> > > smp_vcpu_scheduled from smp.c)
> >
> > Sure, and I had vague memories of Heiko's email. This patch set however
> > completely fails to do that trivial hooking up.
> 
> sorry for that.
> I will try to work it out on x86.

x86 has no hypervisor support, and I'd like to understand the desired
semantics first, so I don't think it should block this series.  In
particular, there are at least the following choices:

1) exit to userspace (5,000-10,000 clock cycles best case) counts as
lock holder preemption

2) any time the vCPU thread is not running counts as lock holder
preemption

To implement the latter you'd need a hypercall or MSR (at least as
a slow path), because the KVM preempt notifier is only active
during the KVM_RUN ioctl.

Paolo


Re: [PATCH v3 0/4] implement vcpu preempted check

2016-09-30 Thread Pan Xinhui


hi, Paolo
thanks for your reply.

On 2016/9/30 14:58, Paolo Bonzini wrote:

>>>>> Please consider s390 and (x86/arm) KVM. Once we have a few, more can
>>>>> follow later, but I think it's important to not only have PPC support for
>>>>> this.
>>>>
>>>> Actually the s390 preempted check via sigp sense running is available for
>>>> all hypervisors (z/VM, LPAR and KVM), which implies everywhere, as you can
>>>> no longer buy s390 systems without LPAR.
>>>>
>>>> As Heiko already pointed out, we could simply use a small inline function
>>>> that calls cpu_is_preempted from arch/s390/lib/spinlock (or
>>>> smp_vcpu_scheduled from smp.c)
>>>
>>> Sure, and I had vague memories of Heiko's email. This patch set however
>>> completely fails to do that trivial hooking up.
>>
>> sorry for that.
>> I will try to work it out on x86.


> x86 has no hypervisor support, and I'd like to understand the desired
> semantics first, so I don't think it should block this series.  In


Once a guest does a hypercall or something similar (IOW, there is a
kvm_guest_exit), we treat it as lock holder preemption.
And PPC implements it this way.


> particular, there are at least the following choices:
>
> 1) exit to userspace (5,000-10,000 clock cycles best case) counts as
> lock holder preemption

just to avoid any misunderstanding:
you are saying that the guest does an IO operation, for example, and then exits
to QEMU, right?
Yes, in this scenario it's hard to guarantee that such an IO operation or
something like that could be finished in time.

> 2) any time the vCPU thread is not running counts as lock holder
> preemption
>
> To implement the latter you'd need a hypercall or MSR (at least as
> a slow path), because the KVM preempt notifier is only active
> during the KVM_RUN ioctl.


seems a little expensive. :(
How many clock cycles might it cost?

I am still looking for one shared struct between kvm and the guest kernel on x86,
so that every time kvm_guest_exit/enter is called, we store some info in it. Then
the guest kernel can quickly check whether a vcpu is running or not.

thanks
xinhui


> Paolo





Re: [PATCH v3 0/4] implement vcpu preempted check

2016-09-30 Thread Paolo Bonzini


On 30/09/2016 10:52, Pan Xinhui wrote:
>> x86 has no hypervisor support, and I'd like to understand the desired
>> semantics first, so I don't think it should block this series.  In
> 
> Once a guest does a hypercall or something similar (IOW, there is a
> kvm_guest_exit), we treat it as lock holder preemption.
> And PPC implements it this way.

Ok, good.

>> particular, there are at least the following choices:
>>
>> 1) exit to userspace (5,000-10,000 clock cycles best case) counts as
>> lock holder preemption
>>
>> 2) any time the vCPU thread is not running counts as lock holder
>> preemption
>>
>> To implement the latter you'd need a hypercall or MSR (at least as
>> a slow path), because the KVM preempt notifier is only active
>> during the KVM_RUN ioctl.
>
> seems a little expensive. :(
> How many clock cycles might it cost?

An MSR read is about 1500 clock cycles, but it need not be the fast path
(e.g. use a bit to check if the CPU is running; if not, use the MSR to
check if the CPU is in userspace but the CPU thread is scheduled).  But
it's not necessary if you are just matching PPC semantics.

Then the simplest thing is to use the kvm_steal_time struct, and add a
new field to it that replaces pad[0].  You can write a 0 to the flag in
record_steal_time (not preempted) and a 1 in kvm_arch_vcpu_put
(preempted).  record_steal_time is called before the VM starts running,
immediately after KVM_RUN and also after every sched_in.

If KVM doesn't implement the flag, it won't touch that field at all.  So
the kernel can write a 0, meaning "not preempted", and not care if the
hypervisor implements the flag or not: the answer will always be safe.

The pointer to the flag can be placed in a per-cpu u32*, and again if
the u32* is NULL that means "not preempted".
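
A rough sketch of that layout and the guest-side check (the field and
helper names here are illustrative, not a final ABI):

	/* guest/host shared per-vCPU area: the existing steal-time
	 * layout, with one pad slot repurposed as a "preempted" flag */
	struct kvm_steal_time {
		__u64 steal;
		__u32 version;
		__u32 flags;
		__u32 preempted;	/* 0 = not preempted, 1 = preempted */
		__u32 pad[11];
	};

	/* guest side: one pointer per CPU; NULL means the hypervisor
	 * does not implement the flag, so report "not preempted" */
	static DEFINE_PER_CPU(__u32 *, vcpu_preempt_flag);

	static bool vcpu_is_preempted(int cpu)
	{
		__u32 *flag = per_cpu(vcpu_preempt_flag, cpu);

		return flag && READ_ONCE(*flag);
	}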

Paolo


> I am still looking for one shared struct between kvm and the guest kernel on
> x86,
> so that every time kvm_guest_exit/enter is called, we store some info in it.
> Then the guest kernel can quickly check whether a vcpu is running or not.
> 
> thanks
> xinhui
> 
>> Paolo
>>
> 


Re: [PATCH v21 00/20] perf, tools: Add support for PMU events in JSON format

2016-09-30 Thread Jiri Olsa
On Thu, Sep 29, 2016 at 07:19:48PM -0300, Arnaldo Carvalho de Melo wrote:
> Em Tue, Sep 27, 2016 at 04:18:46PM +0200, Jiri Olsa escreveu:
> > On Mon, Sep 26, 2016 at 09:59:54AM -0700, Andi Kleen wrote:
> > > On Mon, Sep 26, 2016 at 12:03:43PM -0300, Arnaldo Carvalho de Melo wrote:
> > > > Em Mon, Sep 26, 2016 at 10:35:33AM +0200, Jiri Olsa escreveu:
> > > > > ping.. is that working for you? IMO we can include this
> > > > > as an additional patch to the set..
> > > > 
> > > > No, it doesn't: it fails to build on the first cross env I tried. Fixing
> > > > it now, resulting patch:
> > > 
> > > Yes it shouldn't be difficult to fix cross building. I don't think
> > > there are any fundamental problems.
> > 
> > right, how about attached patch
> > 
> > Arnaldo,
> > could you please try it on a cross build.. I still don't have a setup for
> > that :-\
> > 
> > thanks,
> > jirka
> 
> So, this makes it work for me in one of the cross build envs I have (all
> in https://hub.docker.com/r/acmel/) if I apply this patch on top:
> 
> diff --git a/tools/build/Makefile b/tools/build/Makefile
> index 653faee2a055..8332959fbca4 100644
> --- a/tools/build/Makefile
> +++ b/tools/build/Makefile
> @@ -42,7 +42,7 @@ $(OUTPUT)fixdep-in.o: FORCE
>   $(Q)$(MAKE) $(build)=fixdep
>  
>  $(OUTPUT)fixdep: $(OUTPUT)fixdep-in.o
> - $(QUIET_LINK)$(CC) $(LDFLAGS) -o $@ $<
> + $(QUIET_LINK)$(HOSTCC) $(LDFLAGS) -o $@ $<
>  
>  FORCE:
>  
> ---
> 
> I've broken up the patch into multiple ones, to first get fixdep
> working, then to move to jevents. I'm putting this on a
> tmp.perf/hostprog branch till I've tested it all.

looks great, thanks

jirka


Re: [PATCH v3 0/4] implement vcpu preempted check

2016-09-30 Thread Pan Xinhui



On 2016/9/30 17:08, Paolo Bonzini wrote:



> On 30/09/2016 10:52, Pan Xinhui wrote:
>>> x86 has no hypervisor support, and I'd like to understand the desired
>>> semantics first, so I don't think it should block this series.  In
>>
>> Once a guest does a hypercall or something similar (IOW, there is a
>> kvm_guest_exit), we treat it as lock holder preemption.
>> And PPC implements it this way.
>
> Ok, good.


>>> particular, there are at least the following choices:
>>>
>>> 1) exit to userspace (5,000-10,000 clock cycles best case) counts as
>>> lock holder preemption
>>>
>>> 2) any time the vCPU thread is not running counts as lock holder
>>> preemption
>>>
>>> To implement the latter you'd need a hypercall or MSR (at least as
>>> a slow path), because the KVM preempt notifier is only active
>>> during the KVM_RUN ioctl.
>>
>> seems a little expensive. :(
>> How many clock cycles might it cost?


> An MSR read is about 1500 clock cycles, but it need not be the fast path
> (e.g. use a bit to check if the CPU is running; if not, use the MSR to
> check if the CPU is in userspace but the CPU thread is scheduled).  But
> it's not necessary if you are just matching PPC semantics.
>
> Then the simplest thing is to use the kvm_steal_time struct, and add a
> new field to it that replaces pad[0].  You can write a 0 to the flag in
> record_steal_time (not preempted) and a 1 in kvm_arch_vcpu_put
> (preempted).  record_steal_time is called before the VM starts running,
> immediately after KVM_RUN and also after every sched_in.
>
> If KVM doesn't implement the flag, it won't touch that field at all.  So
> the kernel can write a 0, meaning "not preempted", and not care if the
> hypervisor implements the flag or not: the answer will always be safe.
>
> The pointer to the flag can be placed in a per-cpu u32*, and again if
> the u32* is NULL that means "not preempted".


really nice suggestion!  That's what I want :)

thanks
xinhui


> Paolo



>> I am still looking for one shared struct between kvm and the guest kernel on
>> x86,
>> so that every time kvm_guest_exit/enter is called, we store some info in it.
>> Then the guest kernel can quickly check whether a vcpu is running or not.
>>
>> thanks
>> xinhui
>>
>>> Paolo


Re: [PATCH] powerpc/fadump: Fix build break when CONFIG_PROC_VMCORE=n

2016-09-30 Thread Balbir Singh


On 30/09/16 10:51, Michael Ellerman wrote:
> The fadump code calls vmcore_cleanup() which only exists if
> CONFIG_PROC_VMCORE=y. We don't want to depend on CONFIG_PROC_VMCORE,
> because it's user selectable, so just wrap the call in an #ifdef.
> 
> Signed-off-by: Michael Ellerman 
> ---
>  arch/powerpc/kernel/fadump.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
> index b3a66d36..8d461303dd13 100644
> --- a/arch/powerpc/kernel/fadump.c
> +++ b/arch/powerpc/kernel/fadump.c
> @@ -1104,7 +1104,9 @@ static ssize_t fadump_release_memory_store(struct kobject *kobj,
>* Take away the '/proc/vmcore'. We are releasing the dump
>* memory, hence it will not be valid anymore.
>*/
> +#ifdef CONFIG_PROC_VMCORE
>   vmcore_cleanup();
> +#endif

I wonder if this should be fixed in crash_dump.h more generically.

Balbir Singh.


[PATCH 1/4] powerpc/configs: Enable VMX crypto

2016-09-30 Thread Anton Blanchard
From: Anton Blanchard 

We see big improvements with the VMX crypto functions (often 10x or more),
so enable it as a module.

Signed-off-by: Anton Blanchard 
---
 arch/powerpc/configs/powernv_defconfig | 2 ++
 arch/powerpc/configs/ppc64_defconfig   | 2 ++
 arch/powerpc/configs/pseries_defconfig | 2 ++
 3 files changed, 6 insertions(+)

diff --git a/arch/powerpc/configs/powernv_defconfig b/arch/powerpc/configs/powernv_defconfig
index dce352e..3f6226b 100644
--- a/arch/powerpc/configs/powernv_defconfig
+++ b/arch/powerpc/configs/powernv_defconfig
@@ -310,6 +310,8 @@ CONFIG_CRYPTO_TEA=m
 CONFIG_CRYPTO_TWOFISH=m
 CONFIG_CRYPTO_LZO=m
 CONFIG_CRYPTO_DEV_NX=y
+CONFIG_CRYPTO_DEV_VMX=y
+CONFIG_CRYPTO_DEV_VMX_ENCRYPT=m
 CONFIG_VIRTUALIZATION=y
 CONFIG_KVM_BOOK3S_64=m
 CONFIG_KVM_BOOK3S_64_HV=m
diff --git a/arch/powerpc/configs/ppc64_defconfig b/arch/powerpc/configs/ppc64_defconfig
index 0a8d250..861471d 100644
--- a/arch/powerpc/configs/ppc64_defconfig
+++ b/arch/powerpc/configs/ppc64_defconfig
@@ -347,6 +347,8 @@ CONFIG_CRYPTO_LZO=m
 # CONFIG_CRYPTO_ANSI_CPRNG is not set
 CONFIG_CRYPTO_DEV_NX=y
 CONFIG_CRYPTO_DEV_NX_ENCRYPT=m
+CONFIG_CRYPTO_DEV_VMX=y
+CONFIG_CRYPTO_DEV_VMX_ENCRYPT=m
 CONFIG_VIRTUALIZATION=y
 CONFIG_KVM_BOOK3S_64=m
 CONFIG_KVM_BOOK3S_64_HV=m
diff --git a/arch/powerpc/configs/pseries_defconfig b/arch/powerpc/configs/pseries_defconfig
index 654aeff..670b8d5 100644
--- a/arch/powerpc/configs/pseries_defconfig
+++ b/arch/powerpc/configs/pseries_defconfig
@@ -314,6 +314,8 @@ CONFIG_CRYPTO_LZO=m
 # CONFIG_CRYPTO_ANSI_CPRNG is not set
 CONFIG_CRYPTO_DEV_NX=y
 CONFIG_CRYPTO_DEV_NX_ENCRYPT=m
+CONFIG_CRYPTO_DEV_VMX=y
+CONFIG_CRYPTO_DEV_VMX_ENCRYPT=m
 CONFIG_VIRTUALIZATION=y
 CONFIG_KVM_BOOK3S_64=m
 CONFIG_KVM_BOOK3S_64_HV=m
-- 
2.7.4



[PATCH 2/4] powerpc/configs: Bump kernel ring buffer size on 64 bit configs

2016-09-30 Thread Anton Blanchard
From: Anton Blanchard 

When we issue a system reset, every CPU in the box prints an Oops,
including a backtrace. Each of these can be quite large (over 4kB)
and we may end up wrapping the ring buffer and losing important
information.

Bump the base size from 128kB to 256kB (CONFIG_LOG_BUF_SHIFT=18, i.e. 2^18
bytes) and the per CPU size from 4kB to 8kB (CONFIG_LOG_CPU_MAX_BUF_SHIFT=13).

Signed-off-by: Anton Blanchard 
---
 arch/powerpc/configs/powernv_defconfig | 2 ++
 arch/powerpc/configs/ppc64_defconfig   | 2 ++
 arch/powerpc/configs/pseries_defconfig | 2 ++
 3 files changed, 6 insertions(+)

diff --git a/arch/powerpc/configs/powernv_defconfig b/arch/powerpc/configs/powernv_defconfig
index 3f6226b..95b52b4 100644
--- a/arch/powerpc/configs/powernv_defconfig
+++ b/arch/powerpc/configs/powernv_defconfig
@@ -15,6 +15,8 @@ CONFIG_TASK_XACCT=y
 CONFIG_TASK_IO_ACCOUNTING=y
 CONFIG_IKCONFIG=y
 CONFIG_IKCONFIG_PROC=y
+CONFIG_LOG_BUF_SHIFT=18
+CONFIG_LOG_CPU_MAX_BUF_SHIFT=13
 CONFIG_NUMA_BALANCING=y
 CONFIG_CGROUPS=y
 CONFIG_MEMCG=y
diff --git a/arch/powerpc/configs/ppc64_defconfig b/arch/powerpc/configs/ppc64_defconfig
index 861471d..886b237 100644
--- a/arch/powerpc/configs/ppc64_defconfig
+++ b/arch/powerpc/configs/ppc64_defconfig
@@ -10,6 +10,8 @@ CONFIG_TASKSTATS=y
 CONFIG_TASK_DELAY_ACCT=y
 CONFIG_IKCONFIG=y
 CONFIG_IKCONFIG_PROC=y
+CONFIG_LOG_BUF_SHIFT=18
+CONFIG_LOG_CPU_MAX_BUF_SHIFT=13
 CONFIG_CGROUPS=y
 CONFIG_CPUSETS=y
 CONFIG_BLK_DEV_INITRD=y
diff --git a/arch/powerpc/configs/pseries_defconfig b/arch/powerpc/configs/pseries_defconfig
index 670b8d5..2a5dac7 100644
--- a/arch/powerpc/configs/pseries_defconfig
+++ b/arch/powerpc/configs/pseries_defconfig
@@ -15,6 +15,8 @@ CONFIG_TASK_XACCT=y
 CONFIG_TASK_IO_ACCOUNTING=y
 CONFIG_IKCONFIG=y
 CONFIG_IKCONFIG_PROC=y
+CONFIG_LOG_BUF_SHIFT=18
+CONFIG_LOG_CPU_MAX_BUF_SHIFT=13
 CONFIG_NUMA_BALANCING=y
 CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y
 CONFIG_CGROUPS=y
-- 
2.7.4



[PATCH 3/4] powerpc/configs: Change a few things from built in to modules

2016-09-30 Thread Anton Blanchard
From: Anton Blanchard 

Change a few devices and filesystems that are seldom used any more
from built-in to modules. This reduces our vmlinux by about 500kB.

Signed-off-by: Anton Blanchard 
---
 arch/powerpc/configs/powernv_defconfig | 14 +++---
 arch/powerpc/configs/ppc64_defconfig   | 14 +++---
 arch/powerpc/configs/pseries_defconfig | 14 +++---
 3 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/arch/powerpc/configs/powernv_defconfig b/arch/powerpc/configs/powernv_defconfig
index 95b52b4..b4744fe 100644
--- a/arch/powerpc/configs/powernv_defconfig
+++ b/arch/powerpc/configs/powernv_defconfig
@@ -97,7 +97,7 @@ CONFIG_BLK_DEV_IDECD=y
 CONFIG_BLK_DEV_GENERIC=y
 CONFIG_BLK_DEV_AMD74XX=y
 CONFIG_BLK_DEV_SD=y
-CONFIG_CHR_DEV_ST=y
+CONFIG_CHR_DEV_ST=m
 CONFIG_BLK_DEV_SR=y
 CONFIG_BLK_DEV_SR_VENDOR=y
 CONFIG_CHR_DEV_SG=y
@@ -109,7 +109,7 @@ CONFIG_SCSI_CXGB4_ISCSI=m
 CONFIG_SCSI_BNX2_ISCSI=m
 CONFIG_BE2ISCSI=m
 CONFIG_SCSI_MPT2SAS=m
-CONFIG_SCSI_SYM53C8XX_2=y
+CONFIG_SCSI_SYM53C8XX_2=m
 CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=0
 CONFIG_SCSI_IPR=y
 CONFIG_SCSI_QLA_FC=m
@@ -151,10 +151,10 @@ CONFIG_TUN=m
 CONFIG_VETH=m
 CONFIG_VIRTIO_NET=m
 CONFIG_VHOST_NET=m
-CONFIG_VORTEX=y
+CONFIG_VORTEX=m
 CONFIG_ACENIC=m
 CONFIG_ACENIC_OMIT_TIGON_I=y
-CONFIG_PCNET32=y
+CONFIG_PCNET32=m
 CONFIG_TIGON3=y
 CONFIG_BNX2X=m
 CONFIG_CHELSIO_T1=m
@@ -240,7 +240,7 @@ CONFIG_EXT2_FS_SECURITY=y
 CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS_POSIX_ACL=y
 CONFIG_EXT4_FS_SECURITY=y
-CONFIG_REISERFS_FS=y
+CONFIG_REISERFS_FS=m
 CONFIG_REISERFS_FS_XATTR=y
 CONFIG_REISERFS_FS_POSIX_ACL=y
 CONFIG_REISERFS_FS_SECURITY=y
@@ -255,10 +255,10 @@ CONFIG_NILFS2_FS=m
 CONFIG_AUTOFS4_FS=m
 CONFIG_FUSE_FS=m
 CONFIG_OVERLAY_FS=m
-CONFIG_ISO9660_FS=y
+CONFIG_ISO9660_FS=m
 CONFIG_UDF_FS=m
 CONFIG_MSDOS_FS=y
-CONFIG_VFAT_FS=y
+CONFIG_VFAT_FS=m
 CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
 CONFIG_TMPFS_POSIX_ACL=y
diff --git a/arch/powerpc/configs/ppc64_defconfig b/arch/powerpc/configs/ppc64_defconfig
index 886b237..447d733 100644
--- a/arch/powerpc/configs/ppc64_defconfig
+++ b/arch/powerpc/configs/ppc64_defconfig
@@ -92,7 +92,7 @@ CONFIG_BLK_DEV_AMD74XX=y
 CONFIG_BLK_DEV_IDE_PMAC=y
 CONFIG_BLK_DEV_IDE_PMAC_ATA100FIRST=y
 CONFIG_BLK_DEV_SD=y
-CONFIG_CHR_DEV_ST=y
+CONFIG_CHR_DEV_ST=m
 CONFIG_BLK_DEV_SR=y
 CONFIG_BLK_DEV_SR_VENDOR=y
 CONFIG_CHR_DEV_SG=y
@@ -105,7 +105,7 @@ CONFIG_BE2ISCSI=m
 CONFIG_SCSI_MPT2SAS=m
 CONFIG_SCSI_IBMVSCSI=y
 CONFIG_SCSI_IBMVFC=m
-CONFIG_SCSI_SYM53C8XX_2=y
+CONFIG_SCSI_SYM53C8XX_2=m
 CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=0
 CONFIG_SCSI_IPR=y
 CONFIG_SCSI_QLA_FC=m
@@ -151,10 +151,10 @@ CONFIG_NETCONSOLE=y
 CONFIG_TUN=m
 CONFIG_VIRTIO_NET=m
 CONFIG_VHOST_NET=m
-CONFIG_VORTEX=y
+CONFIG_VORTEX=m
 CONFIG_ACENIC=m
 CONFIG_ACENIC_OMIT_TIGON_I=y
-CONFIG_PCNET32=y
+CONFIG_PCNET32=m
 CONFIG_TIGON3=y
 CONFIG_BNX2X=m
 CONFIG_CHELSIO_T1=m
@@ -271,7 +271,7 @@ CONFIG_EXT2_FS_SECURITY=y
 CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS_POSIX_ACL=y
 CONFIG_EXT4_FS_SECURITY=y
-CONFIG_REISERFS_FS=y
+CONFIG_REISERFS_FS=m
 CONFIG_REISERFS_FS_XATTR=y
 CONFIG_REISERFS_FS_POSIX_ACL=y
 CONFIG_REISERFS_FS_SECURITY=y
@@ -286,10 +286,10 @@ CONFIG_NILFS2_FS=m
 CONFIG_AUTOFS4_FS=m
 CONFIG_FUSE_FS=m
 CONFIG_OVERLAY_FS=m
-CONFIG_ISO9660_FS=y
+CONFIG_ISO9660_FS=m
 CONFIG_UDF_FS=m
 CONFIG_MSDOS_FS=y
-CONFIG_VFAT_FS=y
+CONFIG_VFAT_FS=m
 CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
 CONFIG_TMPFS_POSIX_ACL=y
diff --git a/arch/powerpc/configs/pseries_defconfig b/arch/powerpc/configs/pseries_defconfig
index 2a5dac7..e8f950d 100644
--- a/arch/powerpc/configs/pseries_defconfig
+++ b/arch/powerpc/configs/pseries_defconfig
@@ -97,7 +97,7 @@ CONFIG_BLK_DEV_IDECD=y
 CONFIG_BLK_DEV_GENERIC=y
 CONFIG_BLK_DEV_AMD74XX=y
 CONFIG_BLK_DEV_SD=y
-CONFIG_CHR_DEV_ST=y
+CONFIG_CHR_DEV_ST=m
 CONFIG_BLK_DEV_SR=y
 CONFIG_BLK_DEV_SR_VENDOR=y
 CONFIG_CHR_DEV_SG=y
@@ -110,7 +110,7 @@ CONFIG_BE2ISCSI=m
 CONFIG_SCSI_MPT2SAS=m
 CONFIG_SCSI_IBMVSCSI=y
 CONFIG_SCSI_IBMVFC=m
-CONFIG_SCSI_SYM53C8XX_2=y
+CONFIG_SCSI_SYM53C8XX_2=m
 CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=0
 CONFIG_SCSI_IPR=y
 CONFIG_SCSI_QLA_FC=m
@@ -152,10 +152,10 @@ CONFIG_TUN=m
 CONFIG_VETH=m
 CONFIG_VIRTIO_NET=m
 CONFIG_VHOST_NET=m
-CONFIG_VORTEX=y
+CONFIG_VORTEX=m
 CONFIG_ACENIC=m
 CONFIG_ACENIC_OMIT_TIGON_I=y
-CONFIG_PCNET32=y
+CONFIG_PCNET32=m
 CONFIG_TIGON3=y
 CONFIG_BNX2X=m
 CONFIG_CHELSIO_T1=m
@@ -243,7 +243,7 @@ CONFIG_EXT2_FS_SECURITY=y
 CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS_POSIX_ACL=y
 CONFIG_EXT4_FS_SECURITY=y
-CONFIG_REISERFS_FS=y
+CONFIG_REISERFS_FS=m
 CONFIG_REISERFS_FS_XATTR=y
 CONFIG_REISERFS_FS_POSIX_ACL=y
 CONFIG_REISERFS_FS_SECURITY=y
@@ -258,10 +258,10 @@ CONFIG_NILFS2_FS=m
 CONFIG_AUTOFS4_FS=m
 CONFIG_FUSE_FS=m
 CONFIG_OVERLAY_FS=m
-CONFIG_ISO9660_FS=y
+CONFIG_ISO9660_FS=m
 CONFIG_UDF_FS=m
 CONFIG_MSDOS_FS=y
-CONFIG_VFAT_FS=y
+CONFIG_VFAT_FS=m
 CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
 CONFIG_TMPFS_POSIX_ACL=y
-- 
2.7.4



[PATCH 4/4] powerpc/configs: Enable Intel i40e on 64 bit configs

2016-09-30 Thread Anton Blanchard
From: Anton Blanchard 

We are starting to see i40e adapters in recent machines, so enable
it in our configs.

Signed-off-by: Anton Blanchard 
---
 arch/powerpc/configs/powernv_defconfig | 1 +
 arch/powerpc/configs/ppc64_defconfig   | 1 +
 arch/powerpc/configs/pseries_defconfig | 1 +
 3 files changed, 3 insertions(+)

diff --git a/arch/powerpc/configs/powernv_defconfig b/arch/powerpc/configs/powernv_defconfig
index b4744fe..d98b6eb 100644
--- a/arch/powerpc/configs/powernv_defconfig
+++ b/arch/powerpc/configs/powernv_defconfig
@@ -165,6 +165,7 @@ CONFIG_E1000=y
 CONFIG_E1000E=y
 CONFIG_IXGB=m
 CONFIG_IXGBE=m
+CONFIG_I40E=m
 CONFIG_MLX4_EN=m
 CONFIG_MYRI10GE=m
 CONFIG_QLGE=m
diff --git a/arch/powerpc/configs/ppc64_defconfig b/arch/powerpc/configs/ppc64_defconfig
index 447d733..58a98d4 100644
--- a/arch/powerpc/configs/ppc64_defconfig
+++ b/arch/powerpc/configs/ppc64_defconfig
@@ -167,6 +167,7 @@ CONFIG_E1000=y
 CONFIG_E1000E=y
 CONFIG_IXGB=m
 CONFIG_IXGBE=m
+CONFIG_I40E=m
 CONFIG_MLX4_EN=m
 CONFIG_MYRI10GE=m
 CONFIG_PASEMI_MAC=y
diff --git a/arch/powerpc/configs/pseries_defconfig b/arch/powerpc/configs/pseries_defconfig
index e8f950d..8a3bc01 100644
--- a/arch/powerpc/configs/pseries_defconfig
+++ b/arch/powerpc/configs/pseries_defconfig
@@ -168,6 +168,7 @@ CONFIG_E1000=y
 CONFIG_E1000E=y
 CONFIG_IXGB=m
 CONFIG_IXGBE=m
+CONFIG_I40E=m
 CONFIG_MLX4_EN=m
 CONFIG_MYRI10GE=m
 CONFIG_QLGE=m
-- 
2.7.4



Re: [PATCH v3 0/4] implement vcpu preempted check

2016-09-30 Thread Christian Borntraeger
On 09/30/2016 08:58 AM, Paolo Bonzini wrote:
>>>>> Please consider s390 and (x86/arm) KVM. Once we have a few, more can
>>>>> follow later, but I think it's important to not only have PPC support for
>>>>> this.
>>>>
>>>> Actually the s390 preempted check via sigp sense running is available for
>>>> all hypervisors (z/VM, LPAR and KVM), which implies everywhere, as you can
>>>> no longer buy s390 systems without LPAR.
>>>>
>>>> As Heiko already pointed out, we could simply use a small inline function
>>>> that calls cpu_is_preempted from arch/s390/lib/spinlock (or
>>>> smp_vcpu_scheduled from smp.c)
>>>
>>> Sure, and I had vague memories of Heiko's email. This patch set however
>>> completely fails to do that trivial hooking up.
>>
>> sorry for that.
>> I will try to work it out on x86.
> 
> x86 has no hypervisor support, and I'd like to understand the desired
> semantics first, so I don't think it should block this series.  In
> particular, there are at least the following choices:

I think the semantics can be slightly different for different architectures;
after all, it is still a heuristic to improve performance.
> 
> 1) exit to userspace (5,000-10,000 clock cycles best case) counts as
> lock holder preemption
> 
> 2) any time the vCPU thread is not running counts as lock holder
> preemption
> 
> To implement the latter you'd need a hypercall or MSR (at least as
> a slow path), because the KVM preempt notifier is only active
> during the KVM_RUN ioctl.

FWIW, the s390 implementation uses kvm_arch_vcpu_put/load as trigger
points for (un)setting CPUSTAT_RUNNING. Strictly speaking, an exit to
userspace is not preemption, but as KVM has no control over whether we are
being scheduled out while in QEMU, this is the compromise that seems to
work quite well for the s390 spinlock code (which checks the running state
before doing a yield hypercall).
In addition, an exit to QEMU is really a rare case.
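
In rough kernel-C, the arrangement is something like this (a sketch of the
idea only, not the literal s390 sources):

	/* host side: mirror the vCPU thread's scheduling state into the
	 * control block, so "sigp sense running status" reflects it */
	void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
	{
		atomic_or(CPUSTAT_RUNNING, &vcpu->arch.sie_block->cpuflags);
	}

	void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
	{
		atomic_andnot(CPUSTAT_RUNNING, &vcpu->arch.sie_block->cpuflags);
	}

	/* guest-side spinlock slow path: only pay for the yield hypercall
	 * when the lock holder's vCPU is not currently running */
	if (!smp_vcpu_scheduled(owner_cpu))
		smp_yield_cpu(owner_cpu);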



[PATCH 0/2] powerpc: stack protector (-fstack-protector) support

2016-09-30 Thread Christophe Leroy
Add HAVE_CC_STACKPROTECTOR to powerpc. This is copied from ARM.

Not tested on PPC64, compiles ok with ppc64_defconfig

Christophe Leroy (2):
  powerpc: initial stack protector (-fstack-protector) support
  powerpc/32: stack protector: change the canary value per task

 arch/powerpc/Kconfig  |  1 +
 arch/powerpc/include/asm/stackprotector.h | 38 +++
 arch/powerpc/kernel/Makefile  |  5 
 arch/powerpc/kernel/asm-offsets.c |  3 +++
 arch/powerpc/kernel/entry_32.S|  6 -
 arch/powerpc/kernel/process.c |  6 +
 6 files changed, 58 insertions(+), 1 deletion(-)
 create mode 100644 arch/powerpc/include/asm/stackprotector.h

-- 
2.1.0



[PATCH 1/2] powerpc: initial stack protector (-fstack-protector) support

2016-09-30 Thread Christophe Leroy
Partially copied from commit c743f38013aef ("ARM: initial stack protector
(-fstack-protector) support")

This is the very basic stuff without the changing canary upon
task switch yet.  Just the Kconfig option and a constant canary
value initialized at boot time.

Cc: Nicolas Pitre 
Signed-off-by: Christophe Leroy 
---
 arch/powerpc/Kconfig  |  1 +
 arch/powerpc/include/asm/stackprotector.h | 38 +++
 arch/powerpc/kernel/Makefile  |  5 
 arch/powerpc/kernel/process.c |  6 +
 4 files changed, 50 insertions(+)
 create mode 100644 arch/powerpc/include/asm/stackprotector.h

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 59e53f4..2947dab 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -162,6 +162,7 @@ config PPC
select HAVE_VIRT_CPU_ACCOUNTING
select HAVE_ARCH_HARDENED_USERCOPY
select HAVE_KERNEL_GZIP
+   select HAVE_CC_STACKPROTECTOR
 
 config GENERIC_CSUM
def_bool CPU_LITTLE_ENDIAN
diff --git a/arch/powerpc/include/asm/stackprotector.h b/arch/powerpc/include/asm/stackprotector.h
new file mode 100644
index 000..de00332
--- /dev/null
+++ b/arch/powerpc/include/asm/stackprotector.h
@@ -0,0 +1,38 @@
+/*
+ * GCC stack protector support.
+ *
+ * Stack protector works by putting predefined pattern at the start of
+ * the stack frame and verifying that it hasn't been overwritten when
+ * returning from the function.  The pattern is called stack canary
+ * and gcc expects it to be defined by a global variable called
+ * "__stack_chk_guard" on powerpc.  This unfortunately means that on SMP
+ * we cannot have a different canary value per task.
+ */
+
+#ifndef _ASM_STACKPROTECTOR_H
+#define _ASM_STACKPROTECTOR_H 1
+
+#include <linux/random.h>
+#include <linux/version.h>
+
+extern unsigned long __stack_chk_guard;
+
+/*
+ * Initialize the stackprotector canary value.
+ *
+ * NOTE: this must only be called from functions that never return,
+ * and it must always be inlined.
+ */
+static __always_inline void boot_init_stack_canary(void)
+{
+   unsigned long canary;
+
+   /* Try to get a semi random initial value. */
+   get_random_bytes(&canary, sizeof(canary));
+   canary ^= LINUX_VERSION_CODE;
+
+   current->stack_canary = canary;
+   __stack_chk_guard = current->stack_canary;
+}
+
+#endif /* _ASM_STACKPROTECTOR_H */
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index e59ed6a..4a62179 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -19,6 +19,11 @@ CFLAGS_init.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
 CFLAGS_btext.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
 CFLAGS_prom.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
 
+# -fstack-protector triggers protection checks in this code,
+# but it is being used too early to link to meaningful stack_chk logic.
+nossp_flags := $(call cc-option, -fno-stack-protector)
+CFLAGS_prom_init.o := $(nossp_flags)
+
 ifdef CONFIG_FUNCTION_TRACER
 # Do not trace early boot code
 CFLAGS_REMOVE_cputable.o = -mno-sched-epilog $(CC_FLAGS_FTRACE)
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index ce8a26a..ba8f32a 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -64,6 +64,12 @@
 #include 
 #include 
 
+#ifdef CONFIG_CC_STACKPROTECTOR
+#include <linux/stackprotector.h>
+unsigned long __stack_chk_guard __read_mostly;
+EXPORT_SYMBOL(__stack_chk_guard);
+#endif
+
 /* Transactional Memory debug */
 #ifdef TM_DEBUG_SW
 #define TM_DEBUG(x...) printk(KERN_INFO x)
-- 
2.1.0



[PATCH 2/2] powerpc/32: stack protector: change the canary value per task

2016-09-30 Thread Christophe Leroy
Partially copied from commit df0698be14c66 ("ARM: stack protector:
change the canary value per task")

A new random value for the canary is stored in the task struct whenever
a new task is forked.  This is meant to allow for different canary values
per task.  On powerpc, GCC expects the canary value to be found in a global
variable called __stack_chk_guard.  So this variable has to be updated
with the value stored in the task struct whenever a task switch occurs.

Because the variable GCC expects is global, this cannot work on SMP
unfortunately.  So, on SMP, the same initial canary value is kept
throughout, making this feature a bit less effective although it is still
useful.
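
In C terms, the update done by the entry_32.S hunk below boils down to this
(a sketch; "next" stands for the incoming task):

	/* on task switch, make GCC's global guard follow the next task */
	__stack_chk_guard = next->stack_canary;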

Cc: Nicolas Pitre 
Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/asm-offsets.c | 3 +++
 arch/powerpc/kernel/entry_32.S| 6 +-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index a51ae9b..ede2fc4 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -91,6 +91,9 @@ int main(void)
DEFINE(TI_livepatch_sp, offsetof(struct thread_info, livepatch_sp));
 #endif
 
+#ifdef CONFIG_CC_STACKPROTECTOR
+   DEFINE(TSK_STACK_CANARY, offsetof(struct task_struct, stack_canary));
+#endif
DEFINE(KSP, offsetof(struct thread_struct, ksp));
DEFINE(PT_REGS, offsetof(struct thread_struct, regs));
 #ifdef CONFIG_BOOKE
diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 3841d74..5742dbd 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -674,7 +674,11 @@ BEGIN_FTR_SECTION
mtspr   SPRN_SPEFSCR,r0 /* restore SPEFSCR reg */
 END_FTR_SECTION_IFSET(CPU_FTR_SPE)
 #endif /* CONFIG_SPE */
-
+#if defined(CONFIG_CC_STACKPROTECTOR) && !defined(CONFIG_SMP)
+   lwz r0,TSK_STACK_CANARY(r2)
+   lis r4,__stack_chk_guard@ha
+   stw r0,__stack_chk_guard@l(r4)
+#endif
lwz r0,_CCR(r1)
mtcrf   0xFF,r0
/* r3-r12 are destroyed -- Cort */
-- 
2.1.0



Re: [PATCH 0/2] powerpc: stack protector (-fstack-protector) support

2016-09-30 Thread Christophe LEROY



On 30/09/2016 18:26, Denis Kirjanov wrote:
> On Friday, September 30, 2016, Christophe Leroy <christophe.le...@c-s.fr> wrote:
>
>> Add HAVE_CC_STACKPROTECTOR to powerpc. This is copied from ARM.
>>
>> Not tested on PPC64, compiles ok with ppc64_defconfig
>
> Hi Christophe,
>
> are you going to test it on ppc64? If not, I can take it

Thanks Denis, you are welcome to test it.

I don't have any ppc64 target, I only have mpc8xx and mpc83xx, which both
are ppc32.

Christophe





>> Christophe Leroy (2):
>>   powerpc: initial stack protector (-fstack-protector) support
>>   powerpc/32: stack protector: change the canary value per task
>>
>>  arch/powerpc/Kconfig  |  1 +
>>  arch/powerpc/include/asm/stackprotector.h | 38 +++
>>  arch/powerpc/kernel/Makefile  |  5 
>>  arch/powerpc/kernel/asm-offsets.c |  3 +++
>>  arch/powerpc/kernel/entry_32.S|  6 -
>>  arch/powerpc/kernel/process.c |  6 +
>>  6 files changed, 58 insertions(+), 1 deletion(-)
>>  create mode 100644 arch/powerpc/include/asm/stackprotector.h
>>
>> --
>> 2.1.0



Re: [PATCH] dma/fsldma : Unmap region obtained by of_iomap

2016-09-30 Thread Vinod Koul
On Wed, Sep 28, 2016 at 04:15:11PM +0530, Arvind Yadav wrote:
> Free memory mapping, if probe is not successful.

Please use proper subsystem tags for patches. Hint: use git log to find that
out.

Applied after fixing the tag

-- 
~Vinod


Re: [PATCH 0/2] powerpc: stack protector (-fstack-protector) support

2016-09-30 Thread Benjamin Herrenschmidt
On Fri, 2016-09-30 at 18:38 +0200, Christophe LEROY wrote:
> I don't have any ppc64 target, I only have mpc8xx and mpc83xx which
> both 
> are ppc32

That's what qemu is for :-)

Cheers,
Ben.



Re: [PATCH 0/2] powerpc: stack protector (-fstack-protector) support

2016-09-30 Thread Denis Kirjanov
On Friday, September 30, 2016, Christophe Leroy wrote:

> Add HAVE_CC_STACKPROTECTOR to powerpc. This is copied from ARM.
>
> Not tested on PPC64, compiles ok with ppc64_defconfig


Hi Christophe,

are you going to test it on ppc64? If not, I can take it


>
> Christophe Leroy (2):
>   powerpc: initial stack protector (-fstack-protector) support
>   powerpc/32: stack protector: change the canary value per task
>
>  arch/powerpc/Kconfig  |  1 +
>  arch/powerpc/include/asm/stackprotector.h | 38
> +++
>  arch/powerpc/kernel/Makefile  |  5 
>  arch/powerpc/kernel/asm-offsets.c |  3 +++
>  arch/powerpc/kernel/entry_32.S|  6 -
>  arch/powerpc/kernel/process.c |  6 +
>  6 files changed, 58 insertions(+), 1 deletion(-)
>  create mode 100644 arch/powerpc/include/asm/stackprotector.h
>
> --
> 2.1.0
>
>