On Thu, May 31, 2007 at 02:15:34AM -0700, William Lee Irwin III wrote:
> Yes, the larger number of schedulable entities and hence slower
> convergence to groupwise weightings is a disadvantage of the flattening.
> A hybrid scheme seems reasonable enough.
Cool! This puts me back on track to implem
On Thu, May 31, 2007 at 02:03:53PM +0530, Srivatsa Vaddagiri wrote:
>> Its ->wait_runtime will drop less significantly, which lets it be
>> inserted in rb-tree much to the left of those 1000 tasks (and which
>> indirectly lets it gain back its fair share during subsequent
>> schedule cycles).
>> Hm
On Thu, May 31, 2007 at 02:03:53PM +0530, Srivatsa Vaddagiri wrote:
> Its ->wait_runtime will drop less significantly, which lets it be
> inserted in rb-tree much to the left of those 1000 tasks (and which indirectly
> lets it gain back its fair share during subsequent schedule cycles).
>
> Hmm ..
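A minimal sketch of the mechanism described above, assuming a
wait_runtime-style fairness key roughly along the lines of the 2007 CFS
patches; the struct and field names here are illustrative assumptions,
not the actual kernel code:

/*
 * Illustrative only: a task that has accumulated more ->wait_runtime
 * gets a smaller key and is therefore inserted further to the left in
 * the rb-tree, so it is picked again before tasks that have already
 * received their share.
 */
struct se_sketch {
	long long wait_runtime;		/* CPU time the task is still owed */
	long long fair_key;		/* rb-tree key; smaller = further left */
};

static void compute_fair_key(long long fair_clock, struct se_sketch *se)
{
	se->fair_key = fair_clock - se->wait_runtime;
}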
On Wed, May 30, 2007 at 11:36:47PM -0700, William Lee Irwin III wrote:
>> Temporarily, yes. All this only works when averaged out.
On Thu, May 31, 2007 at 02:03:53PM +0530, Srivatsa Vaddagiri wrote:
> So essentially when we calculate the delta_mine component for each of those
> 1000 tasks, we will fin
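A hedged sketch of the delta_mine idea referred to above: out of a raw
execution delta, only the task's weighted fraction counts as its "own"
time. The helper below is an illustration under that assumption, not
the actual CFS code:

/*
 * Illustrative: with 1000 runnable tasks of equal weight, each task's
 * own share of a one-tick delta_exec is roughly delta_exec / 1000; the
 * remainder is fair time owed to the other runnable tasks.
 */
static unsigned long long delta_mine_sketch(unsigned long long delta_exec,
					    unsigned long task_weight,
					    unsigned long total_weight)
{
	return delta_exec * task_weight / total_weight;
}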
On Wed, May 30, 2007 at 11:36:47PM -0700, William Lee Irwin III wrote:
> On Thu, May 31, 2007 at 11:18:28AM +0530, Srivatsa Vaddagiri wrote:
> > Hmm ..the fact that each task runs for a minimum of 1 tick seems to
> > complicate matters for me (when doing group fairness given a single
> > level hierarchy)
On Wed, May 30, 2007 at 09:09:26PM -0700, William Lee Irwin III wrote:
>> It's not all that tricky.
On Thu, May 31, 2007 at 11:18:28AM +0530, Srivatsa Vaddagiri wrote:
> Hmm ..the fact that each task runs for a minimum of 1 tick seems to
> complicate matters for me (when doing group fairness given a single level
> hierarchy)
On Wed, May 30, 2007 at 09:09:26PM -0700, William Lee Irwin III wrote:
> It's not all that tricky.
Hmm ..the fact that each task runs for a minimum of 1 tick seems to
complicate matters for me (when doing group fairness given a single
level hierarchy). A user with 1000 (or more) tasks can be u
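To make the concern concrete, a small illustrative calculation (the
tick length and task count are assumptions for the example): if every
runnable task must run for at least one tick, a user with many tasks
stretches the rotation that a competing user's single task has to wait
through.

/*
 * Illustrative arithmetic only: lower bound on one full rotation when
 * each of nr_tasks runnable tasks runs for at least one tick, e.g.
 * 1000 tasks * 1 ms/tick = 1000 ms before another user's single task
 * gets its next turn.
 */
static unsigned long min_round_ms(unsigned long nr_tasks,
				  unsigned long tick_ms)
{
	return nr_tasks * tick_ms;
}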
On Wed, May 30, 2007 at 01:13:59PM -0700, William Lee Irwin III wrote:
>> The step beyond was to show how nice numbers can be done with all that
>> hierarchical task grouping so they have global effects instead of
>> effects limited to the scope of the narrowest grouping hierarchy
>> containing the
On Wed, May 30, 2007 at 01:13:59PM -0700, William Lee Irwin III wrote:
> On Wed, May 30, 2007 at 10:44:05PM +0530, Srivatsa Vaddagiri wrote:
> > Hmm ..so do you think this weight decomposition can be used to flatten
> > the tree all the way to a single level in the case of cfs? That would mean we
> >
On Sat, May 26, 2007 at 08:41:12AM -0700, William Lee Irwin III wrote:
>> The smpnice affair is better phrased in terms of task weighting. It's
>> simple to honor nice in such an arrangement. First unravel the
>> grouping hierarchy, then weight by nice. This looks like
[...]
>> conveniently collaps
On Sat, May 26, 2007 at 08:41:12AM -0700, William Lee Irwin III wrote:
> The smpnice affair is better phrased in terms of task weighting. It's
> simple to honor nice in such an arrangement. First unravel the
> grouping hierarchy, then weight by nice. This looks like
>
> task   nice   hier1   hie
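A minimal sketch of "unravel the grouping hierarchy, then weight by
nice": a task's system-wide weight as the product of its nice-derived
weight and its fractional share at each level of the hierarchy. This is
an assumed reading of the (truncated) text above, not code from the
thread:

/*
 * Illustrative: level_fraction[i] is the share the task (or its group)
 * gets of its parent at hierarchy level i, e.g. 1.0 / nr_siblings for
 * equally weighted groups.  The product flattens the hierarchy into a
 * single per-task weight.
 */
double effective_weight(double nice_weight,
			const double *level_fraction, int levels)
{
	double w = nice_weight;
	int i;

	for (i = 0; i < levels; i++)
		w *= level_fraction[i];
	return w;
}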
William Lee Irwin III wrote:
On Wed, May 30, 2007 at 10:09:28AM +1000, Peter Williams wrote:
So what you're saying is that you think dynamic priority (or its
equivalent) should be used for load balancing instead of static priority?
It doesn't do much in other schemes, but when fairness is dire
On Mon, May 28, 2007 at 10:09:19PM +0530, Srivatsa Vaddagiri wrote:
> What do these task weights control? Timeslice primarily? If so, I am not
> sure how well it can co-exist with cfs then (unless you are planning to
> replace cfs with an equally good interactive/fair scheduler :)
> I would be very
William Lee Irwin III wrote:
>> Lag is the deviation of a task's allocated CPU time from the CPU time
>> it would be granted by the ideal fair scheduling algorithm (generalized
>> processor sharing; take the limit of RR with per-task timeslices
>> proportional to load weight as the scale factor approaches zero)
On Mon, May 28, 2007 at 10:09:19PM +0530, Srivatsa Vaddagiri wrote:
> On Fri, May 25, 2007 at 10:14:58AM -0700, Li, Tong N wrote:
> > is represented by a weight of 10. Inside the group, let's say the two
> > tasks, P1 and P2, have weights 1 and 2. Then the system-wide weight for
> > P1 is 10/3 and
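The arithmetic behind the 10/3 figure quoted above, written out as a
hedged sketch (the truncated text presumably continues with P2; by the
same rule P2 would get 10 * 2/3 = 20/3):

/*
 * Illustrative: a task's system-wide weight is its group's weight
 * scaled by the task's share within the group.
 * P1: 10 * 1 / (1 + 2) = 10/3;  P2: 10 * 2 / (1 + 2) = 20/3.
 */
double systemwide_weight(double group_weight, double task_weight,
			 double group_task_weight_sum)
{
	return group_weight * task_weight / group_task_weight_sum;
}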
William Lee Irwin III wrote:
William Lee Irwin III wrote:
Lag should be considered in lieu of load because lag
On Sun, May 27, 2007 at 11:29:51AM +1000, Peter Williams wrote:
What's the definition of lag here?
Lag is the deviation of a task's allocated CPU time from the CPU time
it would be granted by the ideal fair scheduling algorithm.
William Lee Irwin III wrote:
>> Lag should be considered in lieu of load because lag
On Sun, May 27, 2007 at 11:29:51AM +1000, Peter Williams wrote:
> What's the definition of lag here?
Lag is the deviation of a task's allocated CPU time from the CPU time
it would be granted by the ideal fair scheduling algorithm.
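A small sketch of the lag definition quoted above, under the assumption
that the task is runnable over the whole interval (so its ideal GPS
service is simply its weighted share of wall time); the names are
illustrative:

/*
 * Illustrative: positive lag means the task has received less CPU than
 * its ideal weighted share over the interval; negative lag means it
 * has run ahead of its share.
 */
static long long task_lag_ns(unsigned long long wall_ns,
			     unsigned long weight,
			     unsigned long total_weight,
			     unsigned long long actual_service_ns)
{
	unsigned long long ideal_ns = wall_ns * weight / total_weight;

	return (long long)(ideal_ns - actual_service_ns);
}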
Peter Williams wrote:
Srivatsa Vaddagiri wrote:
On Sat, May 26, 2007 at 10:17:42AM +1000, Peter Williams wrote:
I don't think that ignoring cpu affinity is an option. Setting the
cpu affinity of tasks is a deliberate policy action on the part of
the system administrator and has to be honoured
On 5/28/07, Peter Williams <[EMAIL PROTECTED]> wrote:
In any case, there's no point having cpu affinity if it's going to be
ignored. Maybe you could have two levels of affinity: 1. if set by
root it must be obeyed; and 2. if set by an ordinary user it can be
overridden if the best interests o
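A hedged sketch of the two-level affinity idea suggested above; no such
distinction exists in the kernel, so the enum and helper below are
purely hypothetical:

/*
 * Hypothetical: affinity set by root is hard and must be honoured;
 * affinity set by an ordinary user is soft and may be overridden by
 * the load balancer when fairness would otherwise suffer.
 */
enum affinity_strength { AFFINITY_NONE, AFFINITY_SOFT, AFFINITY_HARD };

static int balancer_may_override(enum affinity_strength s)
{
	return s != AFFINITY_HARD;
}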
Srivatsa Vaddagiri wrote:
On Sat, May 26, 2007 at 10:17:42AM +1000, Peter Williams wrote:
I don't think that ignoring cpu affinity is an option. Setting the cpu
affinity of tasks is a deliberate policy action on the part of the
system administrator and has to be honoured.
mmm ..but users c
On Sat, May 26, 2007 at 10:17:42AM +1000, Peter Williams wrote:
> I don't think that ignoring cpu affinity is an option. Setting the cpu
> affinity of tasks is a deliberate policy action on the part of the
> system administrator and has to be honoured.
mmm ..but users can set cpu affinity w/o
On Fri, May 25, 2007 at 10:14:58AM -0700, Li, Tong N wrote:
> Nice work, Vatsa. When I wrote the DWRR algorithm, I flattened the
> hierarchies into one level, so maybe that approach can be applied to
> your code as well. What I did is to maintain task and task group weights
> and reservations separately
William Lee Irwin III wrote:
Srivatsa Vaddagiri wrote:
Ingo/Peter, any thoughts here? CFS and smpnice are probably "broken"
with respect to such an example as the above, albeit for nice-based tasks.
On Sat, May 26, 2007 at 10:17:42AM +1000, Peter Williams wrote:
See above. I think that faced with cpu
Srivatsa Vaddagiri wrote:
>> Ingo/Peter, any thoughts here? CFS and smpnice are probably "broken"
>> with respect to such an example as the above, albeit for nice-based tasks.
On Sat, May 26, 2007 at 10:17:42AM +1000, Peter Williams wrote:
> See above. I think that faced with cpu affinity use by the sys
Srivatsa Vaddagiri wrote:
Good example :) USER2's single task will have to share its CPU with
USER1's 50 tasks (unless we modify the smpnice load balancer to
disregard cpu affinity that is - which I would not prefer to do).
I don't think that ignoring cpu affinity is an option. Setting the cpu
On Fri, May 25, 2007 at 08:18:56PM +0400, Kirill Korotaev wrote:
> 2 physical CPUs can't select the same VCPU at the same time,
> i.e. a VCPU can be running on only 1 PCPU at any moment,
> and vice versa: a PCPU can run only 1 VCPU at a given moment.
>
> So serialization is done when we need to assi
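A minimal sketch of the constraint described above (a VCPU runs on at
most one PCPU at a time, and vice versa), with the serialization point
noted in a comment; the structure and helper are assumptions for
illustration:

/*
 * Illustrative: the check-and-set below is the step that has to be
 * serialized (e.g. under a lock or with an atomic cmpxchg) so that two
 * physical CPUs cannot pick the same VCPU at once.
 */
struct vcpu_sketch {
	int pcpu;	/* -1 while the VCPU is not running on any PCPU */
};

static int assign_vcpu(struct vcpu_sketch *v, int pcpu)
{
	if (v->pcpu != -1)
		return -1;	/* already running on some PCPU */
	v->pcpu = pcpu;
	return 0;
}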
Srivatsa Vaddagiri wrote:
> On Fri, May 25, 2007 at 05:05:16PM +0400, Kirill Korotaev wrote:
>
>>>That way the scheduler would first pick a "virtual CPU" to schedule, and
>>>then pick a user from that virtual CPU, and then a task from the user.
>>
>>don't you mean vice versa:
>>first user to
On Fri, May 25, 2007 at 05:05:16PM +0400, Kirill Korotaev wrote:
> > That way the scheduler would first pick a "virtual CPU" to schedule, and
> > then pick a user from that virtual CPU, and then a task from the user.
>
> don't you mean vice versa:
> first user to schedule, then VCPU (which i