On 2016/11/16 18:23, Peter Zijlstra wrote:
On Wed, Nov 16, 2016 at 12:19:09PM +0800, Pan Xinhui wrote:
Hi, Peter.
I think we can avoid a function call in a simpler way. How about the below:
static inline bool vcpu_is_preempted(int cpu)
{
/* only set in the pv case */
if
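The snippet above is cut off by the archive, but the shape of the suggestion — dispatch through a pv ops slot that is only filled in on paravirt, with a cheap default otherwise — can be sketched as a user-space model. This is an illustration, not the kernel code; the struct name mirrors `pv_lock_ops`, and the toy backend is invented for the example.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Model of a pv ops table whose vcpu_is_preempted slot stays NULL
 * unless a paravirt backend (kvm, xen, ...) registers one. */
struct pv_lock_ops {
    bool (*vcpu_is_preempted)(int cpu);
};

static struct pv_lock_ops pv_lock_ops;  /* zero-initialized: bare metal */

static inline bool vcpu_is_preempted(int cpu)
{
    /* only set in the pv case */
    if (pv_lock_ops.vcpu_is_preempted)
        return pv_lock_ops.vcpu_is_preempted(cpu);
    return false;
}

/* A toy pv backend that claims odd-numbered vcpus are preempted. */
static bool toy_pv_preempted(int cpu)
{
    return (cpu & 1) != 0;
}

static void toy_register_pv(void)
{
    pv_lock_ops.vcpu_is_preempted = toy_pv_preempted;
}
```

Before a backend registers, every call falls through to the `false` default, so bare-metal callers pay only a NULL check.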
On 2016/11/15 23:47, Peter Zijlstra wrote:
On Wed, Nov 02, 2016 at 05:08:33AM -0400, Pan Xinhui wrote:
diff --git a/arch/x86/include/asm/paravirt_types.h
b/arch/x86/include/asm/paravirt_types.h
index 0f400c0..38c3bb7 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm
It allows us to partially update some status or field of one struct.
We can also save one kvm_read_guest_cached call if we just update one field
of the struct regardless of its current value.
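The point of the new helper is that writing one field at a known offset needs no prior read of the whole struct. A minimal user-space model of that idea, with a byte array standing in for cached guest memory (the real API is `kvm_write_guest_offset_cached`; the backing store and helper names here are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative layout of the shared per-vcpu record. */
struct steal_time {
    unsigned long long steal;
    unsigned int version;
    unsigned char preempted;
};

/* "Guest memory" backing store for one vcpu. */
static unsigned char guest_mem[sizeof(struct steal_time)];

/* Write `len` bytes of `data` at `offset` into the cached guest area,
 * touching nothing else -- no read-modify-write of the whole struct. */
static void write_guest_offset_cached(const void *data,
                                      unsigned offset, unsigned len)
{
    memcpy(guest_mem + offset, data, len);
}

static void mark_preempted(int preempted)
{
    unsigned char p = (unsigned char)preempted;
    write_guest_offset_cached(&p, offsetof(struct steal_time, preempted),
                              sizeof(p));
}
```

Only the `preempted` byte changes; the rest of the record is left exactly as it was, which is the read that the commit message says is saved.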
Signed-off-by: Pan Xinhui
Acked-by: Paolo Bonzini
---
include/linux/kvm_host.h | 2 ++
virt/kvm/kvm_main.c | 20
vcpu has been preempted.
Signed-off-by: Pan Xinhui
Acked-by: Radim Krčmář
Acked-by: Paolo Bonzini
---
Documentation/virtual/kvm/msr.txt | 9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/Documentation/virtual/kvm/msr.txt
b/Documentation/virtual/kvm/msr.txt
index 2a71c8f..ab2ab76 100644
spin loops upon the return value of
vcpu_is_preempted.
As the kernel now uses this interface, let's support it.
To deal with kernel and kvm/xen, add vcpu_is_preempted into struct
pv_lock_ops.
Then kvm or xen could provide their own implementation to support
vcpu_is_preempted.
Signed-off-by: Pan Xinhui
kvm_steal_time::preempted to indicate whether
one vcpu is running or not.
Signed-off-by: Pan Xinhui
Acked-by: Paolo Bonzini
---
arch/x86/include/uapi/asm/kvm_para.h | 4 +++-
arch/x86/kvm/x86.c | 16
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a
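On the guest side, the check described above reduces to reading the `preempted` flag the host left in the vcpu's shared steal-time area. A hedged user-space model of that read (field name follows `kvm_steal_time` from the patch; the array, `NR_VCPUS`, and the function name are stand-ins for the per-cpu machinery):

```c
#include <assert.h>
#include <stdbool.h>

#define NR_VCPUS 4

struct kvm_steal_time {
    unsigned char preempted;   /* set by the host while the vcpu is off-cpu */
};

/* Stand-in for the per-cpu steal_time areas shared with the host. */
static struct kvm_steal_time steal_time[NR_VCPUS];

/* Guest-side check: a vcpu is preempted iff the host flagged it. */
static bool kvm_vcpu_is_preempted(int cpu)
{
    return steal_time[cpu].preempted != 0;
}
```

The host clears the flag when it runs the vcpu again, so a spinner polling this sees preemption end without any hypercall.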
quick test (4 vcpus on 1 physical cpu doing a parallel build job
with "make -j 8") reduced system time by about 5% with this patch.
Signed-off-by: Juergen Gross
Signed-off-by: Pan Xinhui
---
arch/x86/xen/spinlock.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/ar
concurrent) | 3531.4 lpm | 3211.9 lpm
System Call Overhead | 10385653.0 lps | 10419979.0 lps
Signed-off-by: Pan Xinhui
Acked-by: Paolo Bonzini
---
arch/x86/kernel/kvm.c | 12
1 file changed, 12 insertions(+)
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
call_common
2.83% sched-messaging [kernel.vmlinux] [k] copypage_power7
2.64% sched-messaging [kernel.vmlinux] [k] rwsem_spin_on_owner
2.00% sched-messaging [kernel.vmlinux] [k] osq_lock
Suggested-by: Boqun Feng
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Acked-by: Pa
System Call Overhead | 10385653.0 lps | 10419979.0 lps
Christian Borntraeger (1):
s390/spinlock: Provide vcpu_is_preempted
Juergen Gross (1):
x86, xen: support vcpu preempted check
Pan Xinhui (9):
kernel/sched: introduce vcpu preempted check interface
locking/osq: Drop the overload of osq_lock()
kernel/locking: Drop the overl
->yield_count keeps zero on
PowerNV. So we can just skip the machine type check.
Suggested-by: Boqun Feng
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Pan Xinhui
---
arch/powerpc/include/asm/spinlock.h | 8
1 file changed, 8 insertions(+)
diff --git a/arch/powerpc/include/asm/spinloc
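The powerpc trick described above can be modeled briefly: the hypervisor keeps each vcpu's `yield_count` odd while that vcpu is preempted, and on bare-metal PowerNV the count simply stays zero (even), so one parity test covers both cases with no machine-type check. This is a user-space sketch under that assumption; the real code reads the big-endian count from the cpu's lppaca, which is elided here.

```c
#include <assert.h>
#include <stdbool.h>

#define NR_CPUS 4

/* Stand-in for lppaca_of(cpu)->yield_count; all zero on "PowerNV". */
static unsigned int yield_count[NR_CPUS];

static bool vcpu_is_preempted(int cpu)
{
    /* Odd yield_count => the vcpu is currently preempted.
     * Zero (and thus even) on bare metal => never reports preempted. */
    return (yield_count[cpu] & 1) != 0;
}
```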
.
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Acked-by: Paolo Bonzini
Tested-by: Juergen Gross
---
include/linux/sched.h | 12
1 file changed, 12 insertions(+)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 348f51b..44c1ce7 100644
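The sched.h patch above gives every caller a safe default: architectures that can tell whether a vcpu is running provide their own `vcpu_is_preempted`, and everyone else gets a "never preempted" fallback, so call sites need no #ifdefs. A sketch of that pattern, with a hypothetical caller added for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* Default interface: arch headers may define vcpu_is_preempted before
 * this point; if none did, fall back to "never preempted". */
#ifndef vcpu_is_preempted
# define vcpu_is_preempted(cpu) false
#endif

/* Hypothetical caller: a spin loop can use the interface
 * unconditionally on every architecture. */
static bool should_keep_spinning(int owner_cpu, bool resched_needed)
{
    return !resched_needed && !vcpu_is_preempted(owner_cpu);
}
```

On an arch without the override, the compiler folds the check away entirely, which is why the default costs nothing.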
From: Christian Borntraeger
This implements the s390 backend for commit
"kernel/sched: introduce vcpu preempted check interface"
by reworking the existing smp_vcpu_scheduled into
arch_vcpu_is_preempted. We can then also get rid of the
local cpu_is_preempted function by moving the
CIF_ENABLED_WAIT
essaging [kernel.vmlinux] [k] system_call
2.69% sched-messaging [kernel.vmlinux] [k] wait_consider_task
Signed-off-by: Pan Xinhui
Acked-by: Christian Borntraeger
Acked-by: Paolo Bonzini
Tested-by: Juergen Gross
---
kernel/locking/mutex.c | 13 +++--
kernel/locking/rwsem-xad
On 2016/10/30 00:52, Davidlohr Bueso wrote:
On Fri, 28 Oct 2016, Pan Xinhui wrote:
/*
* If we need to reschedule bail... so we can block.
+ * Use vcpu_is_preempted to detech lock holder preemption issue
^^ detect
ok. thanks for
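The reviewed hunk is the heart of the mutex/rwsem change: keep spinning only while the owner is both on-cpu and not preempted, otherwise bail so the waiter can block instead of burning cycles on a descheduled holder. A user-space model of that loop body, loosely following the shape of mutex_spin_on_owner (the struct layouts and helper names here are all part of the model, not kernel definitions):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct task {
    int cpu;
    bool on_cpu;
};

struct lock {
    struct task *owner;
};

/* Stand-in for the per-cpu preemption state a real backend would read. */
static bool vcpu_preempted[8];

static bool vcpu_is_preempted(int cpu)
{
    return vcpu_preempted[cpu];
}

/* Return true if one more spin iteration is still worthwhile. */
static bool owner_worth_spinning_on(struct lock *l)
{
    struct task *owner = l->owner;

    if (!owner || !owner->on_cpu)
        return false;
    /* Use vcpu_is_preempted to detect lock holder preemption and bail
     * so we can block instead of spinning on a descheduled vcpu. */
    if (vcpu_is_preempted(owner->cpu))
        return false;
    return true;
}
```

Without the preemption check, a spinner can wait out the holder's entire scheduling quantum; with it, the spinner gives up as soon as the holder's vcpu is off the physical cpu.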
On 2016/10/29 03:38, Konrad Rzeszutek Wilk wrote:
On Fri, Oct 28, 2016 at 04:11:16AM -0400, Pan Xinhui wrote:
change from v5:
split x86/kvm patch into guest/host part.
introduce kvm_write_guest_offset_cached.
fix some typos.
rebase patch onto 4.9.2
change from v4
On 2016/10/29 03:43, Konrad Rzeszutek Wilk wrote:
On Fri, Oct 28, 2016 at 04:11:26AM -0400, Pan Xinhui wrote:
From: Juergen Gross
Support the vcpu_is_preempted() functionality under Xen. This will
enhance lock performance on overcommitted hosts (more runnable vcpus
than physical cpus in the
On 2016/10/19 23:58, Juergen Gross wrote:
On 19/10/16 12:20, Pan Xinhui wrote:
change from v3:
add x86 vcpu preempted check patch
change from v2:
no code change, fix typos, update some comments
change from v1:
a simpler definition of default vcpu_is_preempted
skip