On Tue, Aug 14, 2007 at 05:23:00AM +0200, Nick Piggin wrote:
> On Mon, Aug 13, 2007 at 08:00:38PM -0700, Andrew Morton wrote:
> > Put it this way: if a 50% slowdown in context switch times yields a 5%
> > improvement in, say, balancing decisions then it's probably a net win.
> >
> > Guys, repeat after me: "context switch is not a fast path". Take
> > that benchmark and set fire to it.
From: Andrew Morton <[EMAIL PROTECTED]>
Date: Mon, 13 Aug 2007 20:00:38 -0700
> Guys, repeat after me: "context switch is not a fast path". Take
> that benchmark and set fire to it.
Nothing in this world is so absolute :-)
Regardless of the value of lat_ctx, we should thank it for showing that ...
On Mon, Aug 13, 2007 at 08:00:38PM -0700, Andrew Morton wrote:
> On Mon, 13 Aug 2007 14:30:31 +0200 Jens Axboe <[EMAIL PROTECTED]> wrote:
>
> > On Mon, Aug 06 2007, Nick Piggin wrote:
> > > > > What CPU did you get these numbers on? Do the indirect calls hurt
> > > > > much on those without an indirect predictor? (I'll try running
> > > > > some tests).
On Mon, 13 Aug 2007 14:30:31 +0200 Jens Axboe <[EMAIL PROTECTED]> wrote:
> On Mon, Aug 06 2007, Nick Piggin wrote:
> > > > What CPU did you get these numbers on? Do the indirect calls hurt much
> > > > on those without an indirect predictor? (I'll try running some tests).
> > >
> > > it was on an older Athlon64 X2. I never saw indirect calls really
> > > hurting on modern x86 CPUs ...
On Mon, Aug 06 2007, Nick Piggin wrote:
> > > What CPU did you get these numbers on? Do the indirect calls hurt much
> > > on those without an indirect predictor? (I'll try running some tests).
> >
> > it was on an older Athlon64 X2. I never saw indirect calls really
> > hurting on modern x86 CPUs ...
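
For readers outside the thread: the indirect calls in question come from
CFS dispatching scheduler operations through a per-class table of function
pointers (the kernel's struct sched_class), which turns each call site into
an indirect call. Below is a minimal user-space sketch of that pattern; the
struct and function names are illustrative stand-ins, not the kernel's
actual definitions.

#include <stdio.h>

/* Illustrative stand-in for a per-class ops table; the names here are
 * made up for the sketch, not taken from the kernel. */
struct sched_class_ops {
	int (*pick_next_task)(int cpu);
};

static int cfs_pick_next_task(int cpu)
{
	return cpu;	/* placeholder decision */
}

static const struct sched_class_ops cfs_ops = {
	.pick_next_task = cfs_pick_next_task,
};

/* Indirect call: the target is loaded from memory at run time, so a CPU
 * without an indirect-branch predictor may stall until it resolves. */
static int schedule_indirect(const struct sched_class_ops *ops, int cpu)
{
	return ops->pick_next_task(cpu);
}

/* Direct call: the target is fixed at link time and predicts trivially. */
static int schedule_direct(int cpu)
{
	return cfs_pick_next_task(cpu);
}

int main(void)
{
	printf("%d %d\n", schedule_indirect(&cfs_ops, 0), schedule_direct(0));
	return 0;
}
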
On Sat, Aug 04, 2007 at 08:50:37AM +0200, Ingo Molnar wrote:
>
> * Nick Piggin <[EMAIL PROTECTED]> wrote:
>
> > Oh good. Thanks for getting to the bottom of it. We have normally
> > disliked too much runtime tunables in the scheduler, so I assume these
> > are mostly going away or under a CONFIG option for 2.6.23? Or...?
* Nick Piggin <[EMAIL PROTECTED]> wrote:
> Oh good. Thanks for getting to the bottom of it. We have normally
> disliked too much runtime tunables in the scheduler, so I assume these
> are mostly going away or under a CONFIG option for 2.6.23? Or...?
yeah, they are all already under CONFIG_SCHED_DEBUG.
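
For context, when that debug option is enabled the tunables show up as
sysctls under /proc/sys/kernel/. A minimal sketch of inspecting them is
below; the file names are assumptions based on later mainline kernels (the
exact set varies by version), not a list taken from this thread.

#include <stdio.h>

int main(void)
{
	/* Assumed sysctl names; check your kernel version for the real set. */
	const char *tunables[] = {
		"/proc/sys/kernel/sched_latency_ns",
		"/proc/sys/kernel/sched_min_granularity_ns",
	};
	char buf[64];
	unsigned int i;

	for (i = 0; i < sizeof(tunables) / sizeof(tunables[0]); i++) {
		FILE *f = fopen(tunables[i], "r");
		if (!f)
			continue;	/* absent on kernels without the option */
		if (fgets(buf, sizeof(buf), f))
			printf("%s = %s", tunables[i], buf);
		fclose(f);
	}
	return 0;
}
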
On Thu, Aug 02, 2007 at 05:44:47PM +0200, Ingo Molnar wrote:
>
> * Nick Piggin <[EMAIL PROTECTED]> wrote:
>
> > > > > One thing to check out is whether the lmbench numbers are
> > > > > "correct". Especially on SMP systems, the lmbench numbers are
> > > > > actually *best* when the two processes run on the same CPU, even
> > > > > though that's not really at all the best scheduling ...
* Nick Piggin <[EMAIL PROTECTED]> wrote:
> > > > One thing to check out is whether the lmbench numbers are
> > > > "correct". Especially on SMP systems, the lmbench numbers are
> > > > actually *best* when the two processes run on the same CPU, even
> > > > though that's not really at all the best scheduling ...
On Thu, Aug 02, 2007 at 09:19:56AM +0200, Ingo Molnar wrote:
>
> * Nick Piggin <[EMAIL PROTECTED]> wrote:
>
> > > One thing to check out is whether the lmbench numbers are "correct".
> > > Especially on SMP systems, the lmbench numbers are actually *best*
> > > when the two processes run on the same CPU, even though that's not
> > > really at all the best scheduling ...
* Nick Piggin <[EMAIL PROTECTED]> wrote:
> > One thing to check out is whether the lmbench numbers are "correct".
> > Especially on SMP systems, the lmbench numbers are actually *best*
> > when the two processes run on the same CPU, even though that's not
> > really at all the best scheduling ...
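
The "bound to a single core" setup this exchange keeps referring to works
by setting the affinity mask before forking, so both processes inherit it
and every hand-off between them is a real context switch on one CPU. A
minimal sketch (Linux-specific; CPU 0 is chosen arbitrarily):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	cpu_set_t mask;

	CPU_ZERO(&mask);
	CPU_SET(0, &mask);	/* pin to CPU 0; any single CPU will do */
	if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
		perror("sched_setaffinity");
		return 1;
	}

	/* The child inherits the mask, so parent and child now share CPU 0
	 * and can only make progress by switching with each other. */
	if (fork() == 0) {
		/* child's half of the benchmark would go here */
		return 0;
	}
	/* parent's half of the benchmark would go here */
	return 0;
}
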
On Wed, Aug 01, 2007 at 07:31:26PM -0700, Linus Torvalds wrote:
>
>
> On Thu, 2 Aug 2007, Nick Piggin wrote:
> >
> > lmbench 3 lat_ctx context switching time with 2 processes bound to a
> > single core increases by between 25%-35% on my Core2 system (didn't do
> > enough runs to get more significance, but it is around 30%). The
> > problem bisected to the main CFS commit.
On Thu, 2 Aug 2007, Nick Piggin wrote:
>
> lmbench 3 lat_ctx context switching time with 2 processes bound to a
> single core increases by between 25%-35% on my Core2 system (didn't do
> enough runs to get more significance, but it is around 30%). The problem
> bisected to the main CFS commit.
Hi,
I didn't follow all of the scheduler debates and flamewars, so apologies
if this was already covered. Anyway.
lmbench 3 lat_ctx context switching time with 2 processes bound to a
single core increases by between 25%-35% on my Core2 system (didn't do
enough runs to get more significance, but it is around 30%). The problem
bisected to the main CFS commit.
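
To make the measurement concrete: the core idea of lat_ctx is a token
bounced between processes through pipes, so each round trip costs two
context switches. Below is a rough sketch of that idea, not lmbench's
actual code (lat_ctx additionally touches a configurable working set so
cache-refill costs are included, which this omits):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

#define ROUNDS 100000

int main(void)
{
	int p2c[2], c2p[2];	/* parent->child and child->parent pipes */
	char token = 'x';
	cpu_set_t mask;
	struct timeval t0, t1;
	int i;

	/* Pin to one CPU before forking; the child inherits the mask. */
	CPU_ZERO(&mask);
	CPU_SET(0, &mask);
	sched_setaffinity(0, sizeof(mask), &mask);

	if (pipe(p2c) != 0 || pipe(c2p) != 0) {
		perror("pipe");
		return 1;
	}

	if (fork() == 0) {	/* child: echo each token straight back */
		for (i = 0; i < ROUNDS; i++) {
			read(p2c[0], &token, 1);
			write(c2p[1], &token, 1);
		}
		_exit(0);
	}

	gettimeofday(&t0, NULL);
	for (i = 0; i < ROUNDS; i++) {	/* parent: send, wait for the echo */
		write(p2c[1], &token, 1);
		read(c2p[0], &token, 1);
	}
	gettimeofday(&t1, NULL);
	wait(NULL);

	/* each round trip is roughly two switches plus pipe overhead */
	double usec = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
	printf("~%.2f usec per context switch\n", usec / (2.0 * ROUNDS));
	return 0;
}
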