On Tue, 28 May 2019 at 13:16, Tao Xu wrote:
>
>
> On 27/05/2019 18:30, Peter Zijlstra wrote:
> > On Fri, May 24, 2019 at 03:56:35PM +0800, Tao Xu wrote:
> >> This patch adds support for the UMONITOR, UMWAIT and TPAUSE instructions
> >> in kvm; by default it does not expose them to the guest and provides a capab
>> + }
>> +
>
> Yes, the above suggestion is a much better approach. The code has probably
> changed from the time I wrote the first version. I will refresh with the
> above suggestion.
Do you mind sending a new version, since the merge window is closed?
Regards,
Wan
2017-11-14 16:15 GMT+08:00 Quan Xu :
>
>
> On 2017/11/14 15:12, Wanpeng Li wrote:
>>
>> 2017-11-14 15:02 GMT+08:00 Quan Xu :
>>>
>>>
>>> On 2017/11/13 18:53, Juergen Gross wrote:
>>>>
>>>> On 13/11/17 11:06, Quan Xu wrote:
>
> 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
> 29031.6 bit/s -- 76.1 %CPU
>
> 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0):
> 35787.7 bit/s -- 129.4 %CPU
>
> 3. w/ kvm dynamic poll:
> 35735.6 bit/s -- 200.0 %CPU
Actually we can reduce the CPU utilization by sleeping for a period
> Wouldn't a function pointer, maybe guarded
> by a static key, be enough? A further advantage would be that this would
> work on other architectures, too.
There is an "Adaptive halt-polling" mechanism which was merged upstream more
than two years ago; it avoids burdening the critical path and has already
been
2017-11-10 15:59 GMT+08:00 Peter Zijlstra :
> On Fri, Nov 10, 2017 at 10:07:56AM +0800, Wanpeng Li wrote:
>
>> >> Also, you should not put cpumask_t on stack, that's 'broken'.
>>
>> Thanks for pointing this out. I found a useful comment in arch/x86/kern
2017-11-10 0:00 GMT+08:00 Radim Krcmar :
> 2017-11-09 20:43+0800, Wanpeng Li:
>> 2017-11-07 4:26 GMT+08:00 Eduardo Valentin :
>> > Currently, the existing qspinlock implementation will fallback to
>> > test-and-set if the hypervisor has not set the PV_UNHALT flag.
>
Thanks for pointing this out. I found a useful comment in arch/x86/kernel/irq.c:
/* These two declarations are only used in check_irq_vectors_for_cpu_disable()
* below, which is protected by stop_machine(). Putting them on the stack
* results in a stack frame overflow. Dynamically allocating
,8 @@ void __init kvm_spinlock_init(void)
> {
> if (!kvm_para_available())
> return;
> + if (kvm_para_has_feature(KVM_FEATURE_PV_DEDICATED))
> + return;
> /* Does host kernel support KVM_FEATURE_PV_UNHALT? */
> if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
> return;
> --
> 2.7.4
>
You should also add a CPUID flag on the kvm side.
Regards,
Wanpeng Li
--
To unsubscribe from this list: send the line "unsubscribe linux-doc" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
execute mwait without an exit. Also, I have tested our patch "[RFC PATCH
> v2 0/7] x86/idle: add halt poll support",
> upstream linux, and idle=poll.
>
> the following is the result (which seems better than ever before, as I ran
> test case on a more powerful machine):
2017-09-01 14:44 GMT+08:00 Yang Zhang :
> On 2017/8/29 22:02, Wanpeng Li wrote:
>>>
>>> Here is the data we get when running benchmark netperf:
>>>
>>> 2. w/ patch:
>>>    halt_poll_threshold=1 -- 15803.89 bits/s -- 159.5 %CPU
>>
2017-09-01 14:32 GMT+08:00 Yang Zhang :
> On 2017/8/29 22:02, Wanpeng Li wrote:
>>>
>>> Here is the data we get when running benchmark netperf:
>>>
>>> 2. w/ patch:
>>>    halt_poll_threshold=1 -- 15803.89 bits/s -- 159.5 %CPU
>>
ption if the customer enables
polling in the Linux guest. Anyway, if the patchset is finally
accepted by the maintainer, I will introduce a generic adaptive
halt-polling framework in kvm to avoid duplicating the logic.
Regards,
Wanpeng Li
o get more and more complaints
> from our customers in both KVM and Xen compared to bare-metal. After
> investigation, the root cause is known to us: the big cost of message-passing
> workloads (David showed it at KVM Forum 2015)
>
> A typical message workload like below:
> vcpu 0
_task_running() to guest?
>
> I think vcpu_is_preempted is a good enough replacement.
For example, vcpu->arch.st.steal.preempted is 0 when the vCPU is scheduled
in and at vmentry; if several tasks are then enqueued on the same pCPU and
waiting on the CFS red-black tree, the guest should avoid polling in
2017-06-23 12:08 GMT+08:00 Yang Zhang :
> On 2017/6/22 19:50, Wanpeng Li wrote:
>>
>> 2017-06-22 19:22 GMT+08:00 root :
>>>
>>> From: Yang Zhang
>>>
>>> Some latency-intensive workloads will see an obvious performance
>>> drop when running
w/ this patchset and w/o the adaptive
halt-polling in kvm, and w/o this patchset and w/ the adaptive
halt-polling in kvm? In addition, both Linux and Windows guests can
benefit, as we have already done this in kvm.
Regards,
Wanpeng Li
> Yang Zhang (2):
> x86/idle: add halt poll for halt idle
>>
>> changes from v2:
>> - add a capability to allow host userspace to detect new kernels
>> - more documentation to clarify the semantics of the feature flag
>> and why it's useful
>> - svm support as suggested by Radim
>>
>> changes from v1:
> entry->eax |= (1 << KVM_FEATURE_STEAL_TIME);
>> >
>> > + if (this_cpu_has(X86_FEATURE_MWAIT))
>> > + entry->eax = (1 << KVM_FEATURE_MWAIT);
s/"="/"|="/; otherwise you clobber the feature bits set earlier.
Regards,
Wanpeng Li
be used and the effect of tuning the module parameters.
How about replacing "halt-polling" with "Adaptive halt-polling"? By the
way, thanks for your docs.
Regards,
Wanpeng Li
>
> Signed-off-by: Suraj Jitindar Singh
> ---
> Documentation/virtual/kvm/00-INDEX | 2 +
2016-08-20 0:21 GMT+08:00 Waiman Long :
> On 08/19/2016 01:57 AM, Wanpeng Li wrote:
>>
>> 2016-08-19 5:11 GMT+08:00 Waiman Long:
>>>
>>> When the count value is in between 0 and RWSEM_WAITING_BIAS, there
>>> are 2 possibilities.
>>> Either a w
However, RWSEM_WAITING_BIAS is equal to 0x, so both of these
cases are beyond RWSEM_WAITING_BIAS, right?
Regards,
Wanpeng Li