On Tue, Apr 24, 2007 at 10:55:45AM -0700, Christoph Lameter wrote:
> On Tue, 24 Apr 2007, Siddha, Suresh B wrote:
> > yes, we were planning to move this to a different percpu section, where
> > all the elements in this new section will be cacheline aligned (both
> > at the start, as well as the end)
>
>
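A minimal sketch of the layout Suresh describes, assuming the DEFINE_PER_CPU_SHARED_ALIGNED() helper and an invented statistics structure; the only point is that the variable lands in a per-cpu section whose members are cacheline aligned at both the start and the end, so neighbouring per-cpu data cannot share its line:

#include <linux/cache.h>
#include <linux/percpu.h>

/* Hypothetical per-cpu structure, used only to illustrate the layout. */
struct example_stats {
	unsigned long events;
	unsigned long misses;
} ____cacheline_aligned_in_smp;		/* size rounded up to a full cacheline */

/*
 * The SHARED_ALIGNED variant places the variable in a separate per-cpu
 * section whose members start on a cacheline boundary, so a remote CPU
 * writing this variable does not false-share with its neighbours.
 */
static DEFINE_PER_CPU_SHARED_ALIGNED(struct example_stats, cpu_example_stats);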
On Tue, 24 Apr 2007, Siddha, Suresh B wrote:
> On Tue, Apr 24, 2007 at 10:47:45AM -0700, Christoph Lameter wrote:
> > On Tue, 24 Apr 2007, Siddha, Suresh B wrote:
> > > Anyhow, this is a straightforward optimization and needs to be done. Do you
> > > have any specific concerns?
> >
> Yes there should not be contention on per cpu data in principle.
On Tue, Apr 24, 2007 at 10:47:45AM -0700, Christoph Lameter wrote:
> On Tue, 24 Apr 2007, Siddha, Suresh B wrote:
> > Anyhow, this is a straightforward optimization and needs to be done. Do you
> > have any specific concerns?
>
> Yes there should not be contention on per cpu data in principle. Th
On Tue, 24 Apr 2007, Siddha, Suresh B wrote:
> > 0.5% is usually in the noise ratio. Are you consistently seeing an
> > improvement or is that sporadic?
>
> No. This is consistent. I am waiting for the perf data on a much much bigger
> NUMA box.
>
> Anyhow, this is a straightforward optimization and needs to be done. Do you
> have any specific concerns?
On Tue, Apr 24, 2007 at 10:39:48AM -0700, Christoph Lameter wrote:
> On Fri, 20 Apr 2007, Siddha, Suresh B wrote:
>
> > > Last I checked it was workload-dependent, but there were things that
> > > hammer it. I mostly know of the remote wakeup issue, but there could
> > > be other things besides wakeups that do it, too.
On Fri, 20 Apr 2007, Siddha, Suresh B wrote:
> > Last I checked it was workload-dependent, but there were things that
> > hammer it. I mostly know of the remote wakeup issue, but there could
> > be other things besides wakeups that do it, too.
>
> remote wakeup was the main issue and the 0.5% improvement
On Fri, Apr 20, 2007 at 12:24:17PM -0700, Christoph Lameter wrote:
> On Wed, 18 Apr 2007, William Lee Irwin III wrote:
>
> >
> > Mark the runqueues cacheline_aligned_in_smp to avoid false sharing.
>
> False sharing for a per cpu data structure? Are we updating that
> structure from other processors?
On Fri, Apr 20, 2007 at 01:03:22PM -0700, William Lee Irwin III wrote:
> On Fri, 20 Apr 2007, William Lee Irwin III wrote:
> >> I'm not really convinced it's all that worthwhile of an optimization,
> >> essentially for the same reasons as you, but presumably there's a
> >> benchmark result somewhere that says it matters. I've just not seen it.
* William Lee Irwin III <[EMAIL PROTECTED]> wrote:
>> Suppose a table of nice weights like the following is tuned via
>> /proc/:
>>    nice   weight
>>     -20       21
>>      -1        2
>>       0        1
>>      19   0.0476
> > Essentially 1/(n+1) when n >= 0 and 1-n when n < 0.
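As a quick sanity check of the quoted rule (a throwaway userspace sketch, not kernel code; note that 1/(19+1) = 0.05, so the 0.0476 shown for nice 19 is only an approximate illustration of the curve's shape):

#include <stdio.h>

/* Weight rule as quoted: 1 - n for n < 0, 1/(n + 1) for n >= 0. */
static double nice_weight(int nice)
{
	return nice < 0 ? 1.0 - nice : 1.0 / (nice + 1);
}

int main(void)
{
	int levels[] = { -20, -1, 0, 19 };

	for (int i = 0; i < 4; i++)
		printf("nice %3d -> weight %.4f\n", levels[i], nice_weight(levels[i]));
	return 0;
}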
On Sat, Apr 21, 2007 at
Peter Williams wrote:
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
I retract this suggestion as it's a very bad idea. It introduces the
possibility of starvation via the poor sods at the bottom of the
queue having their "on CPU" forever postponed and we all know that
even
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
I retract this suggestion as it's a very bad idea. It introduces the
possibility of starvation via the poor sods at the bottom of the queue
having their "on CPU" forever postponed and we all know that even the
smallest possibility of starvation
* William Lee Irwin III <[EMAIL PROTECTED]> wrote:
> I suppose this is a special case of the dreaded priority inversion.
> What of, say, nice 19 tasks holding fs semaphores and/or mutexes that
> nice -19 tasks are waiting to acquire? Perhaps rt_mutex should be the
> default mutex implementation
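For reference, a minimal sketch of what using rt_mutex for such a lock looks like (the lock and function names are invented; mainline rt_mutex priority inheritance propagates real-time priorities, so extending the boost to nice levels, as suggested, would be an additional change):

#include <linux/rtmutex.h>

/* Hypothetical lock, standing in for an fs semaphore/mutex. */
static DEFINE_RT_MUTEX(example_fs_lock);

static void example_update_shared_state(void)
{
	/*
	 * While a higher-priority task blocks here, the current holder
	 * is boosted, which bounds the inversion window.
	 */
	rt_mutex_lock(&example_fs_lock);
	/* ... critical section ... */
	rt_mutex_unlock(&example_fs_lock);
}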
On Sat, Apr 21, 2007 at 09:54:01AM +0200, Ingo Molnar wrote:
> In practice they can starve a bit when one renices thousands of tasks,
> so i was thinking about the following special-case: to at least make
> them easily killable: if a nice 0 task sends a SIGKILL to a nice 19 task
> then we could
Peter Williams wrote:
Peter Williams wrote:
Ingo Molnar wrote:
your suggestion concentrates on the following scenario: if a task
happens to schedule in an 'unlucky' way and happens to hit a busy
period while there are many idle periods. Unless i misunderstood your
suggestion, that is the main
* Peter Williams <[EMAIL PROTECTED]> wrote:
> I retract this suggestion as it's a very bad idea. It introduces the
> possibility of starvation via the poor sods at the bottom of the queue
> having their "on CPU" forever postponed and we all know that even the
> smallest possibility of starvation
Peter Williams wrote:
William Lee Irwin III wrote:
William Lee Irwin III wrote:
On Sat, Apr 21, 2007 at 10:23:07AM +1000, Peter Williams wrote:
If some form of precise timer was used (instead) to trigger
pre-emption then, where there is more than one task with the same
expected "on CPU" time,
William Lee Irwin III wrote:
William Lee Irwin III wrote:
This essentially doesn't look correct because while you want to enforce
the CPU bandwidth allocation, this doesn't have much to do with that
apart from the CPU bandwidth appearing as a term. It's more properly
a rate of service as opposed
William Lee Irwin III wrote:
>> This essentially doesn't look correct because while you want to enforce
>> the CPU bandwidth allocation, this doesn't have much to do with that
>> apart from the CPU bandwidth appearing as a term. It's more properly
>> a rate of service as opposed to a time at which
William Lee Irwin III wrote:
On Fri, Apr 20, 2007 at 10:10:45AM +1000, Peter Williams wrote:
I have a suggestion I'd like to make that addresses both nice and
fairness at the same time. As I understand it, the basic principle behind
this scheduler is to work out a time by which a task should make it onto the CPU
On Fri, 20 Apr 2007, William Lee Irwin III wrote:
>> I'm not really convinced it's all that worthwhile of an optimization,
>> essentially for the same reasons as you, but presumably there's a
>> benchmark result somewhere that says it matters. I've just not seen it.
On Fri, Apr 20, 2007 at 12:44:5
On Fri, 20 Apr 2007, William Lee Irwin III wrote:
> I'm not really convinced it's all that worthwhile of an optimization,
> essentially for the same reasons as you, but presumably there's a
> benchmark result somewhere that says it matters. I've just not seen it.
If it is true that we frequently
Fri, Apr 20, 2007 at 12:24:17PM -0700, Christoph Lameter wrote:
>>> False sharing for a per cpu data structure? Are we updating that
>>> structure from other processors?
On Fri, 20 Apr 2007, William Lee Irwin III wrote:
>> Primarily in the load balancer, but also in wakeups.
On Fri, Apr 20, 2007
On Fri, 20 Apr 2007, William Lee Irwin III wrote:
> On Wed, 18 Apr 2007, William Lee Irwin III wrote:
> >>
> >> Mark the runqueues cacheline_aligned_in_smp to avoid false sharing.
>
> On Fri, Apr 20, 2007 at 12:24:17PM -0700, Christoph Lameter wrote:
> > False sharing for a per cpu data structure? Are we updating that
> > structure from other processors?
On Wed, 18 Apr 2007, William Lee Irwin III wrote:
>>
>> Mark the runqueues cacheline_aligned_in_smp to avoid false sharing.
On Fri, Apr 20, 2007 at 12:24:17PM -0700, Christoph Lameter wrote:
> False sharing for a per cpu data structure? Are we updating that
> structure from other processors?
On Wed, 18 Apr 2007, William Lee Irwin III wrote:
>
> Mark the runqueues cacheline_aligned_in_smp to avoid false sharing.
False sharing for a per cpu data structure? Are we updating that
structure from other processors?
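For context, the patch under discussion amounts to something like the sketch below (the struct fields are invented stand-ins; only the alignment annotation matters). As noted later in the thread, the runqueue is in fact written by other CPUs, primarily from the load balancer and on remote wakeups:

#include <linux/cache.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

/* Abridged stand-in for the real struct rq. */
struct rq_sketch {
	spinlock_t	lock;		/* taken by remote CPUs on wakeup/balance */
	unsigned long	nr_running;
	unsigned long	raw_weighted_load;
};

/*
 * Aligning each per-cpu runqueue to a cacheline boundary keeps remote
 * writes to it from dragging in whatever per-cpu variable happens to
 * sit next to it in the per-cpu area.
 */
static DEFINE_PER_CPU(struct rq_sketch, runqueues_sketch) ____cacheline_aligned_in_smp;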
* Peter Williams <[EMAIL PROTECTED]> wrote:
> BTW, given that I'm right and dynamic priorities have been dispensed
> with, what do you intend exporting (in their place) to user space for
> display by top and similar?
well i thought of only displaying static ones, i.e. like the current
patch does
Peter Williams wrote:
Ingo Molnar wrote:
- bugfix: use constant offset factor for nice levels instead of
sched_granularity_ns. Thus nice levels work even if someone sets
sched_granularity_ns to 0. NOTE: nice support is still naive, i'll
address the many nice level related suggestions in -v4.
On Fri, Apr 20, 2007 at 10:10:45AM +1000, Peter Williams wrote:
> I have a suggestion I'd like to make that addresses both nice and
> fairness at the same time. As I understand it, the basic principle behind
> this scheduler is to work out a time by which a task should make it onto
> the CPU and th
Peter Williams wrote:
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
- bugfix: use constant offset factor for nice levels instead of
sched_granularity_ns. Thus nice levels work even if someone sets
sched_granularity_ns to 0. NOTE: nice support is still naive, i'll
address the many nice level related suggestions in -v4.
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
- bugfix: use constant offset factor for nice levels instead of
sched_granularity_ns. Thus nice levels work even if someone sets
sched_granularity_ns to 0. NOTE: nice support is still naive, i'll
address the many nice level related suggestions in -v4.
On Fri, Apr 20, 2007 at 04:02:41PM +1000, Peter Williams wrote:
> Willy Tarreau wrote:
> >On Fri, Apr 20, 2007 at 10:10:45AM +1000, Peter Williams wrote:
> >>Ingo Molnar wrote:
> >>>- bugfix: use constant offset factor for nice levels instead of
> >>> sched_granularity_ns. Thus nice levels work even if someone sets
> >>> sched_granularity_ns to 0.
* Peter Williams <[EMAIL PROTECTED]> wrote:
> > - bugfix: use constant offset factor for nice levels instead of
> > sched_granularity_ns. Thus nice levels work even if someone sets
> > sched_granularity_ns to 0. NOTE: nice support is still naive, i'll
> > address the many nice level related suggestions in -v4.
Peter Williams wrote:
Willy Tarreau wrote:
On Fri, Apr 20, 2007 at 10:10:45AM +1000, Peter Williams wrote:
Ingo Molnar wrote:
- bugfix: use constant offset factor for nice levels instead of
sched_granularity_ns. Thus nice levels work even if someone sets
sched_granularity_ns to 0. NOTE: nice support is still naive, i'll
address the many nice level related suggestions in -v4.
Willy Tarreau wrote:
On Fri, Apr 20, 2007 at 10:10:45AM +1000, Peter Williams wrote:
Ingo Molnar wrote:
- bugfix: use constant offset factor for nice levels instead of
sched_granularity_ns. Thus nice levels work even if someone sets
sched_granularity_ns to 0. NOTE: nice support is still naive, i'll
address the many nice level related suggestions in -v4.
On Fri, Apr 20, 2007 at 10:10:45AM +1000, Peter Williams wrote:
> Ingo Molnar wrote:
> >
> > - bugfix: use constant offset factor for nice levels instead of
> > sched_granularity_ns. Thus nice levels work even if someone sets
> > sched_granularity_ns to 0. NOTE: nice support is still naive, i'll
> > address the many nice level related suggestions in -v4.
Ingo Molnar wrote:
- bugfix: use constant offset factor for nice levels instead of
sched_granularity_ns. Thus nice levels work even if someone sets
sched_granularity_ns to 0. NOTE: nice support is still naive, i'll
address the many nice level related suggestions in -v4.
I have a suggestion I'd like to make that addresses both nice and fairness at the same time.
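A purely hypothetical illustration of the changelog item quoted above (names and the constant are made up; the real CFS code differs): the per-nice-level offset is derived from a fixed constant rather than from the sched_granularity_ns tunable, so setting the tunable to 0 no longer collapses all nice levels onto the same key:

#include <linux/types.h>

/* Hypothetical fixed step per nice level (2 ms), independent of any tunable. */
#define NICE_KEY_STEP_NS	2000000ULL

/*
 * Before: the per-nice offset scaled with the granularity tunable, so a
 * granularity of 0 made every nice level behave identically.
 */
static inline u64 nice_offset_scaled(int nice, u64 granularity_ns)
{
	return (u64)(nice + 20) * granularity_ns;
}

/* After: a constant factor keeps nice levels distinct even at granularity 0. */
static inline u64 nice_offset_constant(int nice)
{
	return (u64)(nice + 20) * NICE_KEY_STEP_NS;
}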
* William Lee Irwin III <[EMAIL PROTECTED]> wrote:
> It appears to me that the following can be taken in for mainline (or
> rejected for mainline) independently of the rest of the cfs patch.
yeah - it's a patch written by Suresh, and this should already be in the
for-v2.6.22 -mm queue. See:
On Wed, Apr 18, 2007 at 07:50:17PM +0200, Ingo Molnar wrote:
> this is the third release of the CFS patchset (against v2.6.21-rc7), and
> can be downloaded from:
> http://redhat.com/~mingo/cfs-scheduler/
> this is a pure "fix reported regressions" release so there's much less
> churn:
>5 f