> -Original Message-
> From: [EMAIL PROTECTED] [mailto:linux-kernel-
> [EMAIL PROTECTED]] On Behalf Of Bill Davidsen
> Sent: Tuesday, 17 April 2007 21:38
> To: linux-kernel@vger.kernel.org
> Cc: Buytaert, Steven; [EMAIL PROTECTED]; linux-kernel@vger.kernel.org
> Subject:
> From: Bill Davidsen
>
> And having gotten same, are you going to code up what appears to be a
> solution, based on this feedback?
The feedback was helpful in verifying whether there are any arguments against
my approach. The real proof is in the pudding.
I'm running a kernel with these changes...
[EMAIL PROTECTED] wrote:
-Original Message-
Besides - but I guess you're aware of it - any randomized
algorithms tend to drive benchmarkers and performance analysts
crazy because their performance cannot be repeated. So it's usually
better to avoid them unless there is really no alternative.
On Thu, Apr 12, 2007 at 11:27:22PM +1000, Nick Piggin wrote:
> This one should be pretty rare (actually I think it is dead code in
> practice, due to the way the page allocator works).
> Avoiding sched_yield is a really good idea outside realtime scheduling.
> Since we have gone this far with the
On Thu, Apr 12, 2007 at 03:31:31PM +0200, Andi Kleen wrote:
> The only way I could think of to make sched_yield work the way they
> expect would be to define some way of gang scheduling and give
> sched_yield semantics that it preferably yields to other members
> of the gang.
> But it would be sti
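As a hypothetical illustration (not code from the thread), this is the sort of userspace pattern that expects sched_yield() to hand the CPU to a cooperating thread such as the current lock holder; the O(1) scheduler makes no such promise, which is where the latency complaints come from:

```c
/* Hypothetical illustration, not code from the thread: a userspace
 * "lock" that waits by calling sched_yield(), hoping the scheduler
 * happens to run the lock holder next.  Nothing guarantees that; the
 * yielding thread may simply be picked again. */
#include <sched.h>
#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

static void spin_lock_yield(void)
{
        while (atomic_flag_test_and_set_explicit(&lock_flag,
                                                 memory_order_acquire))
                sched_yield();   /* "please run the holder" -- not guaranteed */
}

static void spin_unlock_yield(void)
{
        atomic_flag_clear_explicit(&lock_flag, memory_order_release);
}

int main(void)
{
        spin_lock_yield();
        spin_unlock_yield();
        return 0;
}
```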
On Thu, 2007-04-12 at 10:15 -0400, [EMAIL PROTECTED] wrote:
> > -Original Message-
> > > Agreed, but $ find . -name "*.[ch]" | xargs grep -E "yield[ ]*\(" | wc over
> > > the 2.6.16 kernel yields 105 hits, note including comments... An
> > > interesting spot is e.g. fs/buffer.c free_more_memory()
> -Original Message-
> > Agreed, but $ find . -name "*.[ch]" | xargs grep -E "yield[ ]*\(" | wc over
> > the 2.6.16 kernel yields 105 hits, note including comments... An
> > interesting spot is e.g. fs/buffer.c free_more_memory()
>
> A lot of those are probably broken in some way agreed.
On Thu, Apr 12, 2007 at 09:05:25AM -0400, [EMAIL PROTECTED] wrote:
> > -Original Message-
> > From: Andi Kleen
> > [ ... about use of sched_yield ...]
> > On the other hand when they fix their code to not rely on sched_yield
> > but use [...]
>
> Agreed, but $ find . -name "*.[ch]" | xargs grep -E "yield[ ]*\(" | wc over
> the 2.6.16 kernel yields 105 hits, note including comments...
> -Original Message-
> From: Andi Kleen
> [ ... about use of sched_yield ...]
> On the other hand when they fix their code to not rely on sched_yield
> but use [...]
Agreed, but $ find . -name "*.[ch]" | xargs grep -E "yield[ ]*\(" | wc over
the 2.6.16 kernel yields 105 hits, note including comments... An interesting
spot is e.g. fs/buffer.c free_more_memory()
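Andi's concrete suggestion is elided above ("but use [...]"). One common fix of this kind, shown here only as an illustrative sketch and not necessarily what he had in mind, is to block on a condition variable so the waiter sleeps until there is actually something to do, instead of spinning on sched_yield():

```c
/* Illustrative sketch: replace a "while (!ready) sched_yield();" loop
 * with a condition-variable wait, so the waiting thread sleeps until
 * it is woken rather than repeatedly asking the scheduler for help. */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
static bool ready = false;

static void wait_for_ready(void)
{
        pthread_mutex_lock(&m);
        while (!ready)
                pthread_cond_wait(&c, &m);   /* sleep until signalled */
        pthread_mutex_unlock(&m);
}

static void *producer(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&m);
        ready = true;
        pthread_cond_broadcast(&c);          /* wake the waiter */
        pthread_mutex_unlock(&m);
        return NULL;
}

int main(void)
{
        pthread_t t;
        pthread_create(&t, NULL, producer, NULL);
        wait_for_ready();
        pthread_join(t, NULL);
        return 0;
}
```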
[EMAIL PROTECTED] writes:
> Since the new 2.6.x O(1) scheduler I'm having latency problems. Probably due
> to excessive use of sched_yield in code in components I don't have control
> over. This 'problem'/behavioral change has been reported also by other
> applications (e.g. OpenLDAP, Gnome netmee
On Thu, 2007-04-12 at 04:31 -0400, [EMAIL PROTECTED] wrote:
> Since the new 2.6.x O(1) scheduler I'm having latency problems.
1. Have you elevated the process priority?
2. Have you tried running SCHED_FIFO, or SCHED_RR?
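For reference, a minimal sketch of what suggestion 2 looks like in practice (my illustration, assuming the process is allowed to change its policy, i.e. root or CAP_SYS_NICE; the priority value is an arbitrary choice):

```c
/* Minimal sketch: move the latency-sensitive thread to SCHED_FIFO.
 * Suggestion 1 (raising priority) would instead use nice/setpriority. */
#include <sched.h>
#include <stdio.h>

int main(void)
{
        struct sched_param sp = { .sched_priority = 10 };  /* arbitrary */

        if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
                perror("sched_setscheduler");
                return 1;
        }
        /* ... latency-sensitive work runs here under SCHED_FIFO ... */
        return 0;
}
```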