On 10/04/2012 06:14 PM, Avi Kivity wrote:
On 10/04/2012 12:56 PM, Raghavendra K T wrote:
On 10/03/2012 10:55 PM, Avi Kivity wrote:
On 10/03/2012 04:29 PM, Raghavendra K T wrote:
* Avi Kivity [2012-09-27 14:03:59]:
On 09/27/2012 01:23 PM, Raghavendra K T wrote:
[...]
2) looking at the re
On 10/04/2012 12:56 PM, Raghavendra K T wrote:
> On 10/03/2012 10:55 PM, Avi Kivity wrote:
>> On 10/03/2012 04:29 PM, Raghavendra K T wrote:
>>> * Avi Kivity [2012-09-27 14:03:59]:
>>>
On 09/27/2012 01:23 PM, Raghavendra K T wrote:
>>
>>> [...]
> 2) looking at the result (comparing A
On 10/03/2012 10:55 PM, Avi Kivity wrote:
On 10/03/2012 04:29 PM, Raghavendra K T wrote:
* Avi Kivity [2012-09-27 14:03:59]:
On 09/27/2012 01:23 PM, Raghavendra K T wrote:
[...]
2) looking at the result (comparing A & C), I do feel we have
significant overhead in iterating over vcpus (when compar
On 10/03/2012 04:29 PM, Raghavendra K T wrote:
> * Avi Kivity [2012-09-27 14:03:59]:
>
>> On 09/27/2012 01:23 PM, Raghavendra K T wrote:
>> >>
> [...]
>> > 2) looking at the result (comparing A & C), I do feel we have
>> > significant overhead in iterating over vcpus (when compared to even vmexit)
>> > s
* Avi Kivity [2012-09-27 14:03:59]:
> On 09/27/2012 01:23 PM, Raghavendra K T wrote:
> >>
[...]
> > 2) looking at the result (comparing A & C), I do feel we have
> > significant overhead in iterating over vcpus (when compared to even vmexit)
> > so we would still need the undercommit fix suggested by PeterZ (
On 09/28/2012 01:40 PM, Andrew Theurer wrote:
>>
>> >>
>> >> IIRC, with defer preemption:
>> >> we will have a hook in the spinlock/unlock path to measure the depth of locks held,
>> >> and share it with the host scheduler (maybe via MSRs now).
>> >> The host scheduler 'prefers' not to preempt a lock-holding vcpu. (or
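The preview cuts off here, but the shape of the proposed hook is clear enough to sketch. A minimal, hypothetical illustration follows; the per-cpu counter and MSR_KVM_LOCK_DEPTH are invented names, not an existing KVM interface:

/*
 * Hypothetical sketch of the "defer preemption" hook described above:
 * the guest counts spinlocks held per vcpu and exposes that depth to
 * the host, which can then prefer not to preempt while it is non-zero.
 */
#include <linux/percpu.h>
#include <asm/msr.h>

#define MSR_KVM_LOCK_DEPTH	0x4b564dff	/* placeholder MSR number, invented */

static DEFINE_PER_CPU(u64, lock_depth);

static inline void guest_lock_acquired(void)
{
	/* One more spinlock held on this vcpu: tell the host. */
	wrmsrl(MSR_KVM_LOCK_DEPTH, this_cpu_inc_return(lock_depth));
}

static inline void guest_lock_released(void)
{
	/* Depth returns to zero once no locks are held, so the host
	 * scheduler can stop deferring preemption of this vcpu. */
	wrmsrl(MSR_KVM_LOCK_DEPTH, this_cpu_dec_return(lock_depth));
}

Note that in this form every lock/unlock pays for an MSR write (a vmexit), which is presumably why the text above hedges with "(may be via MSRs now)"; a shared memory page would be a cheaper channel.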
On 09/28/2012 05:10 PM, Andrew Theurer wrote:
On Fri, 2012-09-28 at 11:08 +0530, Raghavendra K T wrote:
On 09/27/2012 05:33 PM, Avi Kivity wrote:
On 09/27/2012 01:23 PM, Raghavendra K T wrote:
[...]
Also there may be a lot of false positives (deferred preemptions even
when there is no cont
On Fri, 2012-09-28 at 06:40 -0500, Andrew Theurer wrote:
> It will be interesting to see how this behaves with very high lock
> activity in a guest. Once the scheduler defers preemption, is it for a
> fixed amount of time, or does it know to cut the deferral short as soon
> as the lock depth
On Fri, 2012-09-28 at 11:08 +0530, Raghavendra K T wrote:
> On 09/27/2012 05:33 PM, Avi Kivity wrote:
> > On 09/27/2012 01:23 PM, Raghavendra K T wrote:
> >>>
> >>> This gives us a good case for tracking preemption on a per-vm basis. As
> >>> long as we aren't preempted, we can keep the PLE window
On Fri, 2012-09-28 at 11:08 +0530, Raghavendra K T wrote:
>
> Peter, can I post your patch with your From/Signed-off-by in V2?
> Please let me know.
Yeah I guess ;-)
On 09/28/2012 11:15 AM, H. Peter Anvin wrote:
On 09/27/2012 10:38 PM, Raghavendra K T wrote:
+
+bool kvm_overcommitted()
+{
This better not be C...
I think you meant I should have had something like kvm_overcommitted(void)
(and perhaps a different function name),
or is that about the body of the function?
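For reference, hpa's objection is most likely the empty parameter list: in C, unlike C++, an empty list declares a function taking an unspecified number of arguments rather than none. The prototype should spell out void:

bool kvm_overcommitted(void);	/* explicit "no arguments" prototype in C */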
On 09/27/2012 10:38 PM, Raghavendra K T wrote:
+
+bool kvm_overcommitted()
+{
This better not be C...
-hpa
--
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel. I don't speak on their behalf.
On 09/27/2012 05:33 PM, Avi Kivity wrote:
On 09/27/2012 01:23 PM, Raghavendra K T wrote:
This gives us a good case for tracking preemption on a per-vm basis. As
long as we aren't preempted, we can keep the PLE window high, and also
return immediately from the handler without looking for candid
On Thu, 2012-09-27 at 14:03 +0200, Avi Kivity wrote:
> On 09/27/2012 01:23 PM, Raghavendra K T wrote:
> >>
> >> This gives us a good case for tracking preemption on a per-vm basis. As
> >> long as we aren't preempted, we can keep the PLE window high, and also
> >> return immediately from the handl
On 09/27/2012 01:23 PM, Raghavendra K T wrote:
>>
>> This gives us a good case for tracking preemption on a per-vm basis. As
>> long as we aren't preempted, we can keep the PLE window high, and also
>> return immediately from the handler without looking for candidates.
>
> 1) So do you think, def
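To make Avi's suggestion quoted above concrete, here is a rough, hypothetical sketch of per-VM preemption tracking driving the PLE window and the early return from the handler. The structure, field names and window bounds are invented for illustration and are not the actual KVM code:

#include <linux/kernel.h>
#include <linux/types.h>

#define PLE_WINDOW_MIN	 4096
#define PLE_WINDOW_MAX	65536

struct vm_ple_state {
	bool recently_preempted;	/* updated from a preempt notifier */
	u32  ple_window;		/* current pause-loop exiting window */
};

static void handle_pause_loop_exit(struct vm_ple_state *vm)
{
	if (!vm->recently_preempted) {
		/* Undercommit: widen the window and return at once,
		 * skipping the costly iteration over all vcpus. */
		vm->ple_window = min_t(u32, vm->ple_window * 2, PLE_WINDOW_MAX);
		return;
	}

	/* Overcommit: shrink the window and go look for a yield_to()
	 * candidate, as the handler does today. */
	vm->ple_window = max_t(u32, vm->ple_window / 2, PLE_WINDOW_MIN);
	/* ... iterate over vcpus and yield_to() a likely lock holder ... */
}

The point of the early return is that in the undercommit case there is usually no preempted lock holder to yield to, so scanning all vcpus only burns CPU.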
On 09/27/2012 03:58 PM, Andrew Jones wrote:
On Thu, Sep 27, 2012 at 03:19:45PM +0530, Raghavendra K T wrote:
On 09/25/2012 08:30 PM, Dor Laor wrote:
On 09/24/2012 02:02 PM, Raghavendra K T wrote:
On 09/24/2012 02:12 PM, Dor Laor wrote:
In order to help PLE and pvticketlock converge I thought
On 09/27/2012 02:06 PM, Avi Kivity wrote:
On 09/25/2012 03:40 PM, Raghavendra K T wrote:
On 09/24/2012 07:46 PM, Raghavendra K T wrote:
On 09/24/2012 07:24 PM, Peter Zijlstra wrote:
On Mon, 2012-09-24 at 18:59 +0530, Raghavendra K T wrote:
However Rik had a genuine concern in the cases where
On 09/27/2012 12:28 PM, Andrew Jones wrote:
>> No, I am not there yet.
>>
>> So in summary, we are suffering from inconsistent benchmark results
>> while measuring the benefit of our improvements in PLE/pvlock etc.
>
> Are you measuring the combined throughput of all running guests, or
> just lo
On 09/27/2012 11:49 AM, Raghavendra K T wrote:
On 09/25/2012 08:30 PM, Dor Laor wrote:
On 09/24/2012 02:02 PM, Raghavendra K T wrote:
On 09/24/2012 02:12 PM, Dor Laor wrote:
In order to help PLE and pvticketlock converge I thought that a small
test code should be developed to test this in a pr
On Thu, Sep 27, 2012 at 03:19:45PM +0530, Raghavendra K T wrote:
> On 09/25/2012 08:30 PM, Dor Laor wrote:
> >On 09/24/2012 02:02 PM, Raghavendra K T wrote:
> >>On 09/24/2012 02:12 PM, Dor Laor wrote:
> >>>In order to help PLE and pvticketlock converge I thought that a small
> >>>test code should b
On 09/26/2012 06:27 PM, Andrew Jones wrote:
On Mon, Sep 24, 2012 at 02:36:05PM +0200, Peter Zijlstra wrote:
On Mon, 2012-09-24 at 17:22 +0530, Raghavendra K T wrote:
On 09/24/2012 05:04 PM, Peter Zijlstra wrote:
On Fri, 2012-09-21 at 17:29 +0530, Raghavendra K T wrote:
In some special scenari
On 09/26/2012 05:57 PM, Konrad Rzeszutek Wilk wrote:
On Tue, Sep 25, 2012 at 05:00:30PM +0200, Dor Laor wrote:
On 09/24/2012 02:02 PM, Raghavendra K T wrote:
On 09/24/2012 02:12 PM, Dor Laor wrote:
In order to help PLE and pvticketlock converge I thought that a small
test code should be develo
On 09/25/2012 08:30 PM, Dor Laor wrote:
On 09/24/2012 02:02 PM, Raghavendra K T wrote:
On 09/24/2012 02:12 PM, Dor Laor wrote:
In order to help PLE and pvticketlock converge I thought that a small
test code should be developed to test this in a predictable,
deterministic way.
The idea is to ha
On 09/25/2012 03:40 PM, Raghavendra K T wrote:
> On 09/24/2012 07:46 PM, Raghavendra K T wrote:
>> On 09/24/2012 07:24 PM, Peter Zijlstra wrote:
>>> On Mon, 2012-09-24 at 18:59 +0530, Raghavendra K T wrote:
However Rik had a genuine concern in the cases where runqueue is not
equally distr
On Wed, 2012-09-26 at 15:39 +0200, Andrew Jones wrote:
> On Wed, Sep 26, 2012 at 03:26:11PM +0200, Peter Zijlstra wrote:
> > On Wed, 2012-09-26 at 15:20 +0200, Andrew Jones wrote:
> > > Wouldn't a clean solution be to promote a task's scheduler
> > > class to the spinner class when we PLE (or come
On Wed, Sep 26, 2012 at 03:26:11PM +0200, Peter Zijlstra wrote:
> On Wed, 2012-09-26 at 15:20 +0200, Andrew Jones wrote:
> > Wouldn't a clean solution be to promote a task's scheduler
> > class to the spinner class when we PLE (or come from some special
> > syscall for userspace spinlocks?)?
>
On Wed, 2012-09-26 at 15:20 +0200, Andrew Jones wrote:
> Wouldn't a clean solution be to promote a task's scheduler
> class to the spinner class when we PLE (or come from some special
> syscall for userspace spinlocks?)?
Userspace spinlocks are typically employed to avoid syscalls..
> That cla
On Mon, Sep 24, 2012 at 06:20:12PM +0200, Avi Kivity wrote:
> On 09/24/2012 06:03 PM, Peter Zijlstra wrote:
> > On Mon, 2012-09-24 at 17:51 +0200, Avi Kivity wrote:
> >> On 09/24/2012 03:54 PM, Peter Zijlstra wrote:
> >> > On Mon, 2012-09-24 at 18:59 +0530, Raghavendra K T wrote:
> >> >> However Ri
On Mon, Sep 24, 2012 at 02:36:05PM +0200, Peter Zijlstra wrote:
> On Mon, 2012-09-24 at 17:22 +0530, Raghavendra K T wrote:
> > On 09/24/2012 05:04 PM, Peter Zijlstra wrote:
> > > On Fri, 2012-09-21 at 17:29 +0530, Raghavendra K T wrote:
> > >> In some special scenarios like #vcpu<= #pcpu, PLE hand
On Tue, Sep 25, 2012 at 05:00:30PM +0200, Dor Laor wrote:
> On 09/24/2012 02:02 PM, Raghavendra K T wrote:
> >On 09/24/2012 02:12 PM, Dor Laor wrote:
> >>In order to help PLE and pvticketlock converge I thought that a small
> >>test code should be developed to test this in a predictable,
> >>determ
On 09/24/2012 02:02 PM, Raghavendra K T wrote:
On 09/24/2012 02:12 PM, Dor Laor wrote:
In order to help PLE and pvticketlock converge I thought that a small
test code should be developed to test this in a predictable,
deterministic way.
The idea is to have a guest kernel module that spawns a new
On 09/24/2012 07:46 PM, Raghavendra K T wrote:
On 09/24/2012 07:24 PM, Peter Zijlstra wrote:
On Mon, 2012-09-24 at 18:59 +0530, Raghavendra K T wrote:
However Rik had a genuine concern in the cases where runqueue is not
equally distributed and lockholder might actually be on a different run
que
On 09/24/2012 06:03 PM, Peter Zijlstra wrote:
> On Mon, 2012-09-24 at 17:51 +0200, Avi Kivity wrote:
>> On 09/24/2012 03:54 PM, Peter Zijlstra wrote:
>> > On Mon, 2012-09-24 at 18:59 +0530, Raghavendra K T wrote:
>> >> However Rik had a genuine concern in the cases where runqueue is not
>> >> equal
On Mon, 2012-09-24 at 17:51 +0200, Avi Kivity wrote:
> On 09/24/2012 03:54 PM, Peter Zijlstra wrote:
> > On Mon, 2012-09-24 at 18:59 +0530, Raghavendra K T wrote:
> >> However Rik had a genuine concern in the cases where runqueue is not
> >> equally distributed and lockholder might actually be on a
On 09/24/2012 03:54 PM, Peter Zijlstra wrote:
> On Mon, 2012-09-24 at 18:59 +0530, Raghavendra K T wrote:
>> However Rik had a genuine concern in the cases where runqueue is not
>> equally distributed and lockholder might actually be on a different run
>> queue but not running.
>
> Load should ev
On 09/24/2012 07:24 PM, Peter Zijlstra wrote:
On Mon, 2012-09-24 at 18:59 +0530, Raghavendra K T wrote:
However Rik had a genuine concern in the cases where runqueue is not
equally distributed and lockholder might actually be on a different run
queue but not running.
Load should eventually get
On Mon, 2012-09-24 at 18:59 +0530, Raghavendra K T wrote:
> However Rik had a genuine concern in the cases where runqueue is not
> equally distributed and lockholder might actually be on a different run
> queue but not running.
Load should eventually get distributed equally -- that's what the
loa
On 09/24/2012 06:06 PM, Peter Zijlstra wrote:
On Mon, 2012-09-24 at 17:22 +0530, Raghavendra K T wrote:
On 09/24/2012 05:04 PM, Peter Zijlstra wrote:
On Fri, 2012-09-21 at 17:29 +0530, Raghavendra K T wrote:
In some special scenarios like #vcpu<= #pcpu, PLE handler may
prove very costly, becau
On Mon, 2012-09-24 at 17:22 +0530, Raghavendra K T wrote:
> On 09/24/2012 05:04 PM, Peter Zijlstra wrote:
> > On Fri, 2012-09-21 at 17:29 +0530, Raghavendra K T wrote:
> >> In some special scenarios like #vcpu<= #pcpu, PLE handler may
> >> prove very costly, because there is no need to iterate over
On 09/24/2012 02:12 PM, Dor Laor wrote:
In order to help PLE and pvticketlock converge I thought that a small
test code should be developed to test this in a predictable,
deterministic way.
The idea is to have a guest kernel module that spawns a new thread each
time you write to a /sys/ entry
On 09/24/2012 05:04 PM, Peter Zijlstra wrote:
On Fri, 2012-09-21 at 17:29 +0530, Raghavendra K T wrote:
In some special scenarios like #vcpu<= #pcpu, PLE handler may
prove very costly, because there is no need to iterate over vcpus
and do unsuccessful yield_to burning CPU.
What's the costly th
On Fri, 2012-09-21 at 17:29 +0530, Raghavendra K T wrote:
> In some special scenarios like #vcpu <= #pcpu, PLE handler may
> prove very costly, because there is no need to iterate over vcpus
> and do unsuccessful yield_to burning CPU.
What's the costly thing? The vm-exit, the yield (which should
In order to help PLE and pvticketlock converge I thought that a small
test code should be developed to test this in a predictable,
deterministic way.
The idea is to have a guest kernel module that spawns a new thread each
time you write to a /sys/ entry.
Each such thread spins over a sp
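The preview is truncated, but the intent (each sysfs write spawns a thread that spins on a shared spinlock) is clear. A minimal sketch of such a guest test module could look like the following; the sysfs path, iteration count and hold time are made up here and are not Dor's actual test code:

#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/spinlock.h>
#include <linux/kobject.h>
#include <linux/sysfs.h>
#include <linux/delay.h>
#include <linux/err.h>

static DEFINE_SPINLOCK(test_lock);
static struct kobject *test_kobj;

static int spinner_fn(void *unused)
{
	int i;

	/* Spin over the shared lock for a while, then exit. */
	for (i = 0; i < 100000; i++) {
		spin_lock(&test_lock);
		udelay(10);		/* pretend to do work under the lock */
		spin_unlock(&test_lock);
	}
	return 0;
}

static ssize_t spawn_store(struct kobject *kobj, struct kobj_attribute *attr,
			   const char *buf, size_t count)
{
	/* Each write spawns one more spinner thread. */
	if (IS_ERR(kthread_run(spinner_fn, NULL, "ple-spinner")))
		return -ENOMEM;
	return count;
}

static struct kobj_attribute spawn_attr = __ATTR(spawn, 0200, NULL, spawn_store);

static int __init ple_test_init(void)
{
	test_kobj = kobject_create_and_add("ple_test", kernel_kobj);
	if (!test_kobj)
		return -ENOMEM;
	return sysfs_create_file(test_kobj, &spawn_attr.attr);
}

static void __exit ple_test_exit(void)
{
	kobject_put(test_kobj);
}

module_init(ple_test_init);
module_exit(ple_test_exit);
MODULE_LICENSE("GPL");

With something like this loaded, each "echo 1 > /sys/kernel/ple_test/spawn" adds one more spinner, so guest lock contention can be ramped up in a predictable, deterministic way. (A real test would also track the spawned threads and stop them on module unload.)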
On 09/21/2012 06:48 PM, Chegu Vinod wrote:
On 9/21/2012 4:59 AM, Raghavendra K T wrote:
In some special scenarios like #vcpu <= #pcpu, PLE handler may
prove very costly,
Yes.
because there is no need to iterate over vcpus
and do unsuccessful yield_to burning CPU.
An idea to solve this is:
1)
On 9/21/2012 4:59 AM, Raghavendra K T wrote:
In some special scenarios like #vcpu <= #pcpu, PLE handler may
prove very costly,
Yes.
because there is no need to iterate over vcpus
and do unsuccessful yield_to burning CPU.
An idea to solve this is:
1) As Avi had proposed we can modify hardwar
In some special scenarios like #vcpu <= #pcpu, the PLE handler may
prove very costly, because there is no need to iterate over vcpus
and attempt unsuccessful yield_to() calls, burning CPU.
An idea to solve this is:
1) As Avi had proposed, we can modify the hardware ple_window
dynamically to avoid frequent PL-exits. (IMH
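For illustration, the "avoid iterating over vcpus in undercommit" idea could look roughly like the sketch below. The heuristic is invented for the sketch; the patch's actual kvm_overcommitted() test is not visible in these previews:

#include <linux/kvm_host.h>

/* Invented heuristic: treat the VM as overcommitted only when it has
 * more vcpus than there are online physical cpus. */
static bool looks_overcommitted(struct kvm *kvm)
{
	return atomic_read(&kvm->online_vcpus) > num_online_cpus();
}

static void pause_loop_handler_sketch(struct kvm_vcpu *me)
{
	if (!looks_overcommitted(me->kvm))
		return;		/* undercommit: yield_to() has nothing useful to find */

	/* ... existing directed-yield iteration over kvm->vcpus ... */
}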