In article <[EMAIL PROTECTED]> you wrote:
> a) it may do so for a short and bounded time, typically less than the
> maximum acceptable latency for other tasks
if you have n threads in runq and each of them can have m
On Wed, Apr 25, 2007 at 04:58:40AM -0700, William Lee Irwin III wrote:
>>> Adjustments to the lag computation for arrivals and departures
>>> during execution are among the missing pieces. Some algorithmic devices
>>> are also needed to account for the varying growth rates of lags of tasks
>>>
On Thu, Apr 26, 2007 at 10:57:48AM -0700, Li, Tong N wrote:
> On Wed, 2007-04-25 at 22:13 +0200, Willy Tarreau wrote:
> > On Wed, Apr 25, 2007 at 04:58:40AM -0700, William Lee Irwin III wrote:
> >
> > > Adjustments to the lag computation for arrivals and departures
> > > during execution are a
On Wed, 2007-04-25 at 22:13 +0200, Willy Tarreau wrote:
> On Wed, Apr 25, 2007 at 04:58:40AM -0700, William Lee Irwin III wrote:
>
> > Adjustments to the lag computation for arrivals and departures
> > during execution are among the missing pieces. Some algorithmic devices
> > are also needed
On Tuesday 24 April 2007 16:36, Ingo Molnar wrote:
> So, my point is, the nice level of X for desktop users should not be set
> lower than a low limit suggested by that particular scheduler's author.
> That limit is scheduler-specific. Con i think recommends a nice level of
> -1 for X when using S
On Wed, Apr 25, 2007 at 04:58:40AM -0700, William Lee Irwin III wrote:
> Adjustments to the lag computation for arrivals and departures
> during execution are among the missing pieces. Some algorithmic devices
> are also needed to account for the varying growth rates of lags of tasks
> waiting
* Li, Tong N <[EMAIL PROTECTED]> wrote:
>> [...] A corollary of this is that if both threads i and j are
>> continuously runnable with fixed weights in the time interval, then
>> the ratio of their CPU time should be equal to the ratio of their
>> weights. This definition is pretty restrictive s
> > it into some xorg.conf field. (It also makes sure that X isn't preempted
> > by other userspace stuff while it does timing-sensitive operations like
> > setting the video modes up or switching video modes, etc.)
>
> X is privileged. It can just cli around the critical section.
Not really. X
* Li, Tong N <[EMAIL PROTECTED]> wrote:
> [...] A corollary of this is that if both threads i and j are
> continuously runnable with fixed weights in the time interval, then
> the ratio of their CPU time should be equal to the ratio of their
> weights. This definition is pretty restrictive sin
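A minimal formalization of the corollary quoted above (the notation is mine, not from the thread): writing S_i(t_1, t_2) for the CPU time thread i receives over [t_1, t_2] and w_i for its weight, the claim for continuously runnable threads with fixed weights is

    \frac{S_i(t_1, t_2)}{S_j(t_1, t_2)} = \frac{w_i}{w_j}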
* Ray Lee <[EMAIL PROTECTED]> wrote:
> It would seem like there should be a penalty associated with sending
> those points as well, so that two processes communicating quickly with
> each other won't get into a mutual love-fest that'll capture the
> scheduler's attention.
it's not really "poi
* Rogan Dawes <[EMAIL PROTECTED]> wrote:
> My concern was that since Ingo said that this is a closed economy,
> with a fixed sum/total, if we lose a nanosecond here and there,
> eventually we'll lose them all.
it's not a closed economy - the CPU constantly produces a resource: "CPU
cycles to
* Pavel Machek <[EMAIL PROTECTED]> wrote:
> > it into some xorg.conf field. (It also makes sure that X isn't
> > preempted by other userspace stuff while it does timing-sensitive
> > operations like setting the video modes up or switching video modes,
> > etc.)
>
> X is privileged. It can jus
Hi!
> it into some xorg.conf field. (It also makes sure that X isn't preempted
> by other userspace stuff while it does timing-sensitive operations like
> setting the video modes up or switching video modes, etc.)
X is privileged. It can just cli around the critical section.
On Tue, Apr 24, 2007 at 06:22:53PM -0700, Li, Tong N wrote:
> The goal of a proportional-share scheduling algorithm is to minimize the
> above metrics. If the lag function is bounded by a constant for any
> thread in any time interval, then the algorithm is considered to be
> fair. You may notice t
> Could you explain for the audience the technical definition of fairness
> and what sorts of error metrics are commonly used? There seems to be
> some disagreement, and you're neutral enough of an observer that your
> statement would help.
The definition for proportional fairness assumes that eac
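A sketch of the standard definition behind the lag bound mentioned above (notation mine; R is the set of runnable threads, assumed fixed over the interval): thread i's lag over [t_1, t_2] is its ideal weighted share minus the service S_i it actually received,

    \mathrm{lag}_i(t_1, t_2) = \frac{w_i}{\sum_{j \in R} w_j}\,(t_2 - t_1) - S_i(t_1, t_2)

and the algorithm counts as proportionally fair if there is a constant c with |\mathrm{lag}_i(t_1, t_2)| \le c for every thread and every interval.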
On Tuesday 24 April 2007, Willy Tarreau wrote:
>On Tue, Apr 24, 2007 at 10:38:32AM -0400, Gene Heskett wrote:
>> On Tuesday 24 April 2007, Ingo Molnar wrote:
>> >* David Lang <[EMAIL PROTECTED]> wrote:
>> >> > (Btw., to protect against such mishaps in the future i have changed
>> >> > the SysRq-N [
On Tuesday 24 April 2007, Willy Tarreau wrote:
>On Tue, Apr 24, 2007 at 10:38:32AM -0400, Gene Heskett wrote:
>> On Tuesday 24 April 2007, Ingo Molnar wrote:
>> >* David Lang <[EMAIL PROTECTED]> wrote:
>> >> > (Btw., to protect against such mishaps in the future i have changed
>> >> > the SysRq-N [
Rogan Dawes wrote:
Chris Friesen wrote:
Rogan Dawes wrote:
I guess my point was that if we somehow get to an odd number of
nanoseconds, we'd end up with rounding errors. I'm not sure if your
algorithm will ever allow that.
And Ingo's point was that when it takes thousands of nanoseconds for a
s
In article <[EMAIL PROTECTED]> you wrote:
> Could you explain for the audience the technical definition of fairness
> and what sorts of error metrics are commonly used? There seems to be
> some disagreement, and you're neutral enough of an observer that your
> statement would help.
And while we ar
On Mon, Apr 23, 2007 at 05:59:06PM -0700, Li, Tong N wrote:
> I don't know if we've discussed this or not. Since both CFS and SD claim
> to be fair, I'd like to hear more opinions on the fairness aspect of
> these designs. In areas such as OS, networking, and real-time, fairness,
> and its more gen
On Mon, 2007-04-23 at 18:57 -0700, Bill Huey wrote:
> On Mon, Apr 23, 2007 at 05:59:06PM -0700, Li, Tong N wrote:
> > I don't know if we've discussed this or not. Since both CFS and SD claim
> > to be fair, I'd like to hear more opinions on the fairness aspect of
> > these designs. In areas such a
On Tue, Apr 24, 2007 at 10:38:32AM -0400, Gene Heskett wrote:
> On Tuesday 24 April 2007, Ingo Molnar wrote:
> >* David Lang <[EMAIL PROTECTED]> wrote:
> >> > (Btw., to protect against such mishaps in the future i have changed
> >> > the SysRq-N [SysRq-Nice] implementation in my tree to not only
>
Rogan Dawes wrote:
My concern was that since Ingo said that this is a closed economy, with
a fixed sum/total, if we lose a nanosecond here and there, eventually
we'll lose them all.
I assume Ingo has set it up so that the system doesn't "lose" partial
nanoseconds, but rather they'd just be a
On 4/23/07, Linus Torvalds <[EMAIL PROTECTED]> wrote:
On Mon, 23 Apr 2007, Ingo Molnar wrote:
>
> The "give scheduler money" transaction can be both an "implicit
> transaction" (for example when writing to UNIX domain sockets or
> blocking on a pipe, etc.), or it could be an "explicit transaction
Chris Friesen wrote:
Rogan Dawes wrote:
I guess my point was that if we somehow get to an odd number of
nanoseconds, we'd end up with rounding errors. I'm not sure if your
algorithm will ever allow that.
And Ingo's point was that when it takes thousands of nanoseconds for a
single context switch
Rogan Dawes wrote:
I guess my point was that if we somehow get to an odd number of nanoseconds,
we'd end up with rounding errors. I'm not sure if your algorithm will
ever allow that.
And Ingo's point was that when it takes thousands of nanoseconds for a
single context switch, an error of half a n
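To put rough numbers on that argument (the figures are illustrative): if a context switch costs on the order of thousands of nanoseconds, a half-nanosecond rounding error per switch is a relative error of at most

    \frac{0.5\,\text{ns}}{1000\,\text{ns}} = 0.05\%

per switch, which is lost in the noise of the switch itself.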
On Tuesday 24 April 2007, Ingo Molnar wrote:
>* Ingo Molnar <[EMAIL PROTECTED]> wrote:
>> yeah, i guess this has little to do with X. I think in your scenario
>> it might have been smarter to either stop, or to renice the workloads
>> that took away CPU power from others to _positive_ nice levels.
On Tuesday 24 April 2007, Ingo Molnar wrote:
>* Ingo Molnar <[EMAIL PROTECTED]> wrote:
>> yeah, i guess this has little to do with X. I think in your scenario
>> it might have been smarter to either stop, or to renice the workloads
>> that took away CPU power from others to _positive_ nice levels.
On Tuesday 24 April 2007, Ingo Molnar wrote:
>* David Lang <[EMAIL PROTECTED]> wrote:
>> > (Btw., to protect against such mishaps in the future i have changed
>> > the SysRq-N [SysRq-Nice] implementation in my tree to not only
>> > change real-time tasks to SCHED_OTHER, but to also renice negative
On Tuesday 24 April 2007, Ingo Molnar wrote:
>* Gene Heskett <[EMAIL PROTECTED]> wrote:
>> > (Btw., to protect against such mishaps in the future i have changed
>> > the SysRq-N [SysRq-Nice] implementation in my tree to not only
>> > change real-time tasks to SCHED_OTHER, but to also renice negativ
Ingo Molnar wrote:
* Rogan Dawes <[EMAIL PROTECTED]> wrote:
if (p_to && p->wait_runtime > 0) {
        p->wait_runtime >>= 1;
        p_to->wait_runtime += p->wait_runtime;
}
the above is the basic expression of: "charge a positive bank balance".
[..]
[note, due
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> [...] That way you'd only have had to hit SysRq-N to get the system
> out of the wedge.)
small correction: Alt-SysRq-N.
Ingo
* Rogan Dawes <[EMAIL PROTECTED]> wrote:
> > if (p_to && p->wait_runtime > 0) {
> >         p->wait_runtime >>= 1;
> >         p_to->wait_runtime += p->wait_runtime;
> > }
> >
> >the above is the basic expression of: "charge a positive bank balance".
> >
>
> [..]
>
* David Lang <[EMAIL PROTECTED]> wrote:
> > (Btw., to protect against such mishaps in the future i have changed
> > the SysRq-N [SysRq-Nice] implementation in my tree to not only
> > change real-time tasks to SCHED_OTHER, but to also renice negative
> > nice levels back to 0 - this will show u
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> yeah, i guess this has little to do with X. I think in your scenario
> it might have been smarter to either stop, or to renice the workloads
> that took away CPU power from others to _positive_ nice levels.
> Negative nice levels can indeed be dangero
On Tue, 24 Apr 2007, Ingo Molnar wrote:
* Gene Heskett <[EMAIL PROTECTED]> wrote:
Gene has done some testing under CFS with X reniced to +10 and the
desktop still worked smoothly for him.
As a data point here, and probably nothing to do with X, but I did
manage to lock it up, solid, reset bu
* Gene Heskett <[EMAIL PROTECTED]> wrote:
> > (Btw., to protect against such mishaps in the future i have changed
> > the SysRq-N [SysRq-Nice] implementation in my tree to not only
> > change real-time tasks to SCHED_OTHER, but to also renice negative
> > nice levels back to 0 - this will show
On Tuesday 24 April 2007, Ingo Molnar wrote:
>* Gene Heskett <[EMAIL PROTECTED]> wrote:
>> > Gene has done some testing under CFS with X reniced to +10 and the
>> > desktop still worked smoothly for him.
>>
>> As a data point here, and probably nothing to do with X, but I did
>> manage to lock it u
* Gene Heskett <[EMAIL PROTECTED]> wrote:
> > Gene has done some testing under CFS with X reniced to +10 and the
> > desktop still worked smoothly for him.
>
> As a data point here, and probably nothing to do with X, but I did
> manage to lock it up, solid, reset button time tonight, by wantin
Ingo Molnar wrote:
static void
yield_task_fair(struct rq *rq, struct task_struct *p, struct task_struct *p_to)
{
        struct rb_node *curr, *next, *first;
        struct task_struct *p_next;

        /*
         * yield-to support: if we are on the same runqueue then
         * give half of o
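A userspace toy model of the transfer this function appears to perform, pieced together from the if-block quoted earlier in the thread (my own simplified sketch, not the actual kernel code; the names and numbers are invented):

#include <stdio.h>

/* Toy model of a schedulable entity; only the field the quoted
 * snippet manipulates is represented here. */
struct task {
        const char *name;
        long wait_runtime;      /* nanoseconds of "owed" CPU time */
};

/* Sketch of the yield-to transfer: give half of a positive
 * balance to the target task, as in the quoted if-block. */
static void yield_to(struct task *p, struct task *p_to)
{
        if (p_to && p->wait_runtime > 0) {
                p->wait_runtime >>= 1;
                p_to->wait_runtime += p->wait_runtime;
        }
}

int main(void)
{
        struct task x = { "X", 4000 }, client = { "client", 1000 };

        yield_to(&x, &client);
        printf("%s: %ld ns, %s: %ld ns\n",
               x.name, x.wait_runtime, client.name, client.wait_runtime);
        /* prints: X: 2000 ns, client: 3000 ns */
        return 0;
}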
On Tuesday 24 April 2007, Ingo Molnar wrote:
>* Peter Williams <[EMAIL PROTECTED]> wrote:
>> > The cases are fundamentally different in behavior, because in the
>> > first case, X hardly consumes the time it would get in any scheme,
>> > while in the second case X really is CPU bound and will happi
* Peter Williams <[EMAIL PROTECTED]> wrote:
> > The cases are fundamentally different in behavior, because in the
> > first case, X hardly consumes the time it would get in any scheme,
> > while in the second case X really is CPU bound and will happily
> > consume any CPU time it can get.
>
>
Arjan van de Ven wrote:
Within reason, it's not the number of clients that X has that causes its
CPU bandwidth use to skyrocket and cause problems. It's more to do
with what type of clients they are. Most GUIs (even ones that are
constantly updating visual data (e.g. gkrellm -- I can open qu
> Within reason, it's not the number of clients that X has that causes its
> CPU bandwidth use to skyrocket and cause problems. It's more to do
> with what type of clients they are. Most GUIs (even ones that are
> constantly updating visual data (e.g. gkrellm -- I can open quite a
> large n
Linus Torvalds wrote:
On Mon, 23 Apr 2007, Ingo Molnar wrote:
The "give scheduler money" transaction can be both an "implicit
transaction" (for example when writing to UNIX domain sockets or
blocking on a pipe, etc.), or it could be an "explicit transaction":
sched_yield_to(). This latter i'v
On Mon, Apr 23, 2007 at 05:59:06PM -0700, Li, Tong N wrote:
> I don't know if we've discussed this or not. Since both CFS and SD claim
> to be fair, I'd like to hear more opinions on the fairness aspect of
> these designs. In areas such as OS, networking, and real-time, fairness,
> and its more gen
I don't know if we've discussed this or not. Since both CFS and SD claim
to be fair, I'd like to hear more opinions on the fairness aspect of
these designs. In areas such as OS, networking, and real-time, fairness,
and its more general form, proportional fairness, are well-defined
terms. In fact, p
Linus Torvalds wrote:
> The "perfect" situation would be that when somebody goes to sleep, any
> extra points it had could be given to whoever it woke up last. Note that
> for something like X, it means that the points are 100% ephemeral: it gets
> points when a client sends it a request, but it
2007/4/23, Ingo Molnar <[EMAIL PROTECTED]>:
p->wait_runtime >>= 1;
p_to->wait_runtime += p->wait_runtime;
I have no problem with clients giving some credit to X,
I am more concerned with X giving half of its credit to
a single client, a quarter of its credit to a
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> sorry, i was a bit imprecise here. There is a case where CFS can give
> out a 'loan' to tasks. The scheduler tick has a low resolution, so it
> is fundamentally inevitable [*] that tasks will run a bit more than
> they should, and at a heavy context-s
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> (we obviously dont want to allow people to 'share' their loans with
> others ;), nor do we want to allow a net negative balance. CFS is
> really brutally cold-hearted, it has a strict 'no loans' policy - the
> easiest economic way to manage 'inflation
* Linus Torvalds <[EMAIL PROTECTED]> wrote:
> > The "give scheduler money" transaction can be both an "implicit
> > transaction" (for example when writing to UNIX domain sockets or
> > blocking on a pipe, etc.), or it could be an "explicit transaction":
> > sched_yield_to(). This latter i've a
On Mon, 23 Apr 2007, Ingo Molnar wrote:
>
> The "give scheduler money" transaction can be both an "implicit
> transaction" (for example when writing to UNIX domain sockets or
> blocking on a pipe, etc.), or it could be an "explicit transaction":
> sched_yield_to(). This latter i've already im
Hi !
On Mon, Apr 23, 2007 at 09:11:43PM +0200, Ingo Molnar wrote:
>
> * Linus Torvalds <[EMAIL PROTECTED]> wrote:
>
> > but the point I'm trying to make is that X shouldn't get more CPU-time
> > because it's "more important" (it's not: and as noted earlier,
> > thinking that it's more importan
* Linus Torvalds <[EMAIL PROTECTED]> wrote:
> but the point I'm trying to make is that X shouldn't get more CPU-time
> because it's "more important" (it's not: and as noted earlier,
> thinking that it's more important skews the problem and makes for too
> *much* scheduling). X should get more
On Mon, 23 Apr 2007, Nick Piggin wrote:
> > If you have a single client, the X server is *not* more important than the
> > client, and indeed, renicing the X server causes bad patterns: just
> > because the client sends a request does not mean that the X server should
> > immediately be given
On Sun, Apr 22, 2007 at 04:24:47PM -0700, Linus Torvalds wrote:
>
>
> On Sun, 22 Apr 2007, Juliusz Chroboczek wrote:
> >
> > Why not do it in the X server itself? This will avoid controversial
> > policy in the kernel, and have the added advantage of working with
> > X servers that don't direct
On Sun, 2007-04-22 at 09:16 -0700, Ulrich Drepper wrote:
> On 4/22/07, William Lee Irwin III <[EMAIL PROTECTED]> wrote:
> > On Sun, Apr 22, 2007 at 12:17:31AM -0700, Ulrich Drepper wrote:
> > > For futex(), the extension is needed for the FUTEX_WAIT operation. We
> > > need a new operation FUTEX_W
On Sun, 22 Apr 2007, Juliusz Chroboczek wrote:
>
> Why not do it in the X server itself? This will avoid controversial
> policy in the kernel, and have the added advantage of working with
> X servers that don't directly access hardware.
It's wrong *wherever* you do it.
The X server should not
> Oh I definitely was not advocating against renicing X,
Why not do it in the X server itself? This will avoid controversial
policy in the kernel, and have the added advantage of working with
X servers that don't directly access hardware.
Con, if you tell me ``if you're running under Linux and s
On 4/22/07, William Lee Irwin III <[EMAIL PROTECTED]> wrote:
On Sun, Apr 22, 2007 at 12:17:31AM -0700, Ulrich Drepper wrote:
> For futex(), the extension is needed for the FUTEX_WAIT operation. We
> need a new operation FUTEX_WAIT_FOR or so which takes another (the
> fourth) parameter which is t
* Mark Lord <[EMAIL PROTECTED]> wrote:
> > i've not experienced a 'runaway X' personally, at most it would
> > crash or lock up ;) The value is boot-time and sysctl configurable
> > as well back to 0.
>
> Mmmm.. I've had to kill off the odd X that was locking in 100% CPU
> usage. In the past,
Ingo Molnar wrote:
well, i just simulated a runaway X at nice -19 on CFS (on a UP box), and
while the box was a tad laggy, i was able to killall it without
problems, within 2 seconds that also included a 'su'. So it's not an
issue in CFS, it can be turned off, and because every distro has ano
Ingo Molnar wrote:
* Jan Engelhardt <[EMAIL PROTECTED]> wrote:
i've attached it below in a standalone form, feel free to put it
into SD! :)
Assume X went crazy (lacking any statistics, I make the unproven
statement that this happens more often than kthreads going berserk),
then having it nice
Con Kolivas wrote:
Oh I definitely was not advocating against renicing X, I just suspect that
virtually all the users who gave glowing reports to CFS comparing it to SD
had no idea it had reniced X to -19 behind their back and that they were
comparing it to SD running X at nice 0.
I really
On Sunday 22 April 2007, William Lee Irwin III wrote:
>On Sat, Apr 21, 2007 at 02:17:02PM -0400, Gene Heskett wrote:
>> CFS-v4 is quite smooth in terms of the users experience but after
>> prolonged observations approaching 24 hours, it appears to choke the cpu
>> hog off a bit even when the system
On 4/22/07, William Lee Irwin III <[EMAIL PROTECTED]> wrote:
>> I'm just looking for what people want the API to be here. With that in
>> hand we can just go out and do whatever needs to be done.
On Sun, Apr 22, 2007 at 12:17:31AM -0700, Ulrich Drepper wrote:
> I think a sched_yield_to is one inte
On Sat, Apr 21, 2007 at 02:17:02PM -0400, Gene Heskett wrote:
> CFS-v4 is quite smooth in terms of the users experience but after prolonged
> observations approaching 24 hours, it appears to choke the cpu hog off a bit
> even when the system has nothing else to do. My amanda runs went from 1 to
On 4/22/07, William Lee Irwin III <[EMAIL PROTECTED]> wrote:
I'm just looking for what people want the API to be here. With that in
hand we can just go out and do whatever needs to be done.
I think a sched_yield_to is one interface:
int sched_yield_to(pid_t);
For futex(), the extension is n
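A hypothetical call site for the proposed interface (a sketch only: no such syscall exists in mainline, so the wrapper below is a stub that merely documents the intended semantics):

#include <sys/types.h>
#include <errno.h>

/* Stub for the proposed syscall: donate the remainder of the
 * caller's timeslice/credit to the thread `target`. No mainline
 * kernel implements this, hence ENOSYS. */
static int sched_yield_to(pid_t target)
{
        (void)target;
        errno = ENOSYS;
        return -1;
}

/* Intended use, per the thread: after handing work to another
 * task (e.g. writing a request to an X server, or releasing a
 * lock), boost that task directly:
 *
 *         sched_yield_to(server_pid);
 */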
On 4/21/07, Linus Torvalds <[EMAIL PROTECTED]> wrote:
>> And how the hell do you imagine you'd even *know* what thread holds the
>> futex?
On Sat, Apr 21, 2007 at 06:46:58PM -0700, Ulrich Drepper wrote:
> We know this in most cases. This is information recorded, for
> instance, in the mutex data
On Sun, 2007-04-22 at 10:08 +1000, Con Kolivas wrote:
> On Sunday 22 April 2007 08:54, Denis Vlasenko wrote:
> > On Saturday 21 April 2007 18:00, Ingo Molnar wrote:
> > > correct. Note that Willy reniced X back to 0 so it had no relevance on
> > > his test. Also note that i pointed this change out
Con Kolivas wrote:
> On Sunday 22 April 2007 02:00, Ingo Molnar wrote:
> > * Con Kolivas <[EMAIL PROTECTED]> wrote:
> > > > Feels even better, mouse movements are very smooth even under high
> > > > load. I noticed that X gets reniced to -19 with this scheduler.
> > > > I've not looked at the
On Saturday 21 April 2007, Con Kolivas wrote:
>On Sunday 22 April 2007 04:17, Gene Heskett wrote:
>> More first impressions of sd-0.44 vs CFS-v4
>
>Thanks Gene.
>
>> CFS-v4 is quite smooth in terms of the users experience but after
>> prolonged observations approaching 24 hours, it appears to choke
On Saturday 21 April 2007 22:12, Willy Tarreau wrote:
> 2) SD-0.44
>
>Feels good, but becomes jerky at moderately high loads. I've started
>64 ocbench with a 250 ms busy loop and 750 ms sleep time. The system
>always responds correctly but under X, mouse jumps quite a bit and
>typin
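The offered load in that test, for reference (my arithmetic, from the figures quoted): each ocbench instance is busy 250 ms out of every 1000 ms, so

    64 \times \frac{250}{250 + 750} = 16

CPUs' worth of demand on a 2-CPU box, i.e. roughly 8x overcommit.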
On 4/21/07, Linus Torvalds <[EMAIL PROTECTED]> wrote:
And how the hell do you imagine you'd even *know* what thread holds the
futex?
We know this in most cases. This is information recorded, for
instance, in the mutex data structure. You might have missed my "the
interface must be extended" p
On Sunday 22 April 2007 04:17, Gene Heskett wrote:
> More first impressions of sd-0.44 vs CFS-v4
Thanks Gene.
>
> CFS-v4 is quite smooth in terms of the users experience but after prolonged
> observations approaching 24 hours, it appears to choke the cpu hog off a
> bit even when the system has no
On Sunday 22 April 2007 08:54, Denis Vlasenko wrote:
> On Saturday 21 April 2007 18:00, Ingo Molnar wrote:
> > correct. Note that Willy reniced X back to 0 so it had no relevance on
> > his test. Also note that i pointed this change out in the -v4 CFS
> >
> > announcement:
> > || Changes since -v3:
On Sunday 22 April 2007 02:00, Ingo Molnar wrote:
> * Con Kolivas <[EMAIL PROTECTED]> wrote:
> > > Feels even better, mouse movements are very smooth even under high
> > > load. I noticed that X gets reniced to -19 with this scheduler.
> > > I've not looked at the code yet but this looked sus
On Sat, 21 Apr 2007, Ulrich Drepper wrote:
>
> If you do this, and it has been requested many times, then please
> generalize it. We have the same issue with futexes. If a FUTEX_WAIT
> call is issued the remaining time in the slot should be given to the
> thread currently owning the futex.
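What the requested extension might look like at the call site (everything below is hypothetical: the opcode never existed in mainline, its value is invented, and placing the owner TID in the fourth argument is my reading of the truncated proposal above):

#include <stdint.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

/* Invented opcode value for this sketch only; not a real futex op. */
#define FUTEX_WAIT_FOR  1000

/* Like FUTEX_WAIT, but also names the TID that currently owns the
 * lock, so the kernel could donate the waiter's remaining slice to
 * it (the behavior requested in the thread). */
static long futex_wait_for(uint32_t *uaddr, uint32_t val, pid_t owner)
{
        return syscall(SYS_futex, uaddr, FUTEX_WAIT_FOR, val,
                       (unsigned long)owner, NULL, 0);
}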
On 4/21/07, Kyle Moffett <[EMAIL PROTECTED]> wrote:
>> It might be nice if it was possible to actively contribute your CPU
>> time to a child process. For example:
>> int sched_donate(pid_t pid, struct timeval *time, int percentage);
On Sat, Apr 21, 2007 at 12:49:52PM -0700, Ulrich Drepper wrote:
On Saturday 21 April 2007 18:00, Ingo Molnar wrote:
> correct. Note that Willy reniced X back to 0 so it had no relevance on
> his test. Also note that i pointed this change out in the -v4 CFS
> announcement:
>
> || Changes since -v3:
> ||
> || - usability fix: automatic renicing of kernel thre
On 4/21/07, Kyle Moffett <[EMAIL PROTECTED]> wrote:
It might be nice if it was possible to actively contribute your CPU
time to a child process. For example:
int sched_donate(pid_t pid, struct timeval *time, int percentage);
If you do this, and it has been requested many times, then please
g
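For reference, a stub spelling out the proposed signature and its apparent intent (illustrative only; no kernel implements it, and the semantics are inferred from the proposal above):

#include <errno.h>
#include <sys/time.h>
#include <sys/types.h>

/* Proposed, never-implemented interface: donate up to `time` of
 * the caller's CPU allocation, or `percentage` of it, to the
 * (child) process `pid`. Stubbed with ENOSYS. */
int sched_donate(pid_t pid, struct timeval *time, int percentage)
{
        (void)pid; (void)time; (void)percentage;
        errno = ENOSYS;
        return -1;
}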
* Jan Engelhardt <[EMAIL PROTECTED]> wrote:
> > i've attached it below in a standalone form, feel free to put it
> > into SD! :)
>
> Assume X went crazy (lacking any statistics, I make the unproven
> statement that this happens more often than kthreads going berserk),
> then having it niced w
On Apr 21, 2007, at 12:42:41, William Lee Irwin III wrote:
On Sat, 21 Apr 2007, Willy Tarreau wrote:
If you remember, with 50/50, I noticed some difficulties to fork
many processes. I think that during a fork(), the parent has a
higher probability of forking other processes than the child. So
On Saturday 21 April 2007, Willy Tarreau wrote:
>Hi Ingo, Hi Con,
>
>I promised to perform some tests on your code. I'm short in time right now,
>but I observed behaviours that should be commented on.
>
>1) machine : dual athlon 1533 MHz, 1G RAM, kernel 2.6.21-rc7 + either
> scheduler Test: ./ocbe
On 4/21/07, Ingo Molnar <[EMAIL PROTECTED]> wrote:
on a simple 'ls' command:
21310 clone(child_stack=0, ...) = 21399
...
21399 execve("/bin/ls",
...
21310 waitpid(-1,
the PID is -1 so we don't actually know which task we are waiting for.
That's a special case. Most programs don't
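The pattern the strace above shows, reduced to a self-contained example (plain POSIX; error handling omitted for brevity):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        pid_t child = fork();

        if (child == 0) {
                execlp("ls", "ls", (char *)NULL);
                _exit(127);     /* exec failed */
        }

        /* A shell typically waits for *any* child, hence the -1 in
         * the trace above: at this point the kernel has no way to
         * know which child the parent actually cares about. */
        int status;
        pid_t reaped = waitpid(-1, &status, 0);
        printf("forked %d, reaped %d\n", (int)child, (int)reaped);
        return 0;
}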
On Apr 21 2007 18:00, Ingo Molnar wrote:
>* Con Kolivas <[EMAIL PROTECTED]> wrote:
>
>> > Feels even better, mouse movements are very smooth even under high
>> > load. I noticed that X gets reniced to -19 with this scheduler.
>> > I've not looked at the code yet but this looked suspicious
On Apr 21, 2007, at 12:18, Willy Tarreau wrote:
Also, I believe that (in shells), most forked processes do not even consume a full timeslice (eg: $(uname -n) is very fast). This means that assigning them with a shorter one will not hurt them while preserving the shell's performance against
On Sat, Apr 21, 2007 at 06:53:47PM +0200, Ingo Molnar wrote:
>
> * Linus Torvalds <[EMAIL PROTECTED]> wrote:
>
> > It would be even better to simply have the rule:
> > - child gets almost no points at startup
> > - but when a parent does a "waitpid()" call and blocks, it will spread
> >out
On Sat, Apr 21, 2007 at 09:34:07AM -0700, Linus Torvalds wrote:
>
>
> On Sat, 21 Apr 2007, Willy Tarreau wrote:
> >
> > If you remember, with 50/50, I noticed some difficulties to fork many
> > processes. I think that during a fork(), the parent has a higher probability
> > of forking other proc
* Linus Torvalds <[EMAIL PROTECTED]> wrote:
> It would be even better to simply have the rule:
> - child gets almost no points at startup
> - but when a parent does a "waitpid()" call and blocks, it will spread
>out its points to the childred (the "vfork()" blocking is another case
>t
On Sat, 21 Apr 2007, Willy Tarreau wrote:
>> If you remember, with 50/50, I noticed some difficulties to fork many
>> processes. I think that during a fork(), the parent has a higher probability
>> of forking other processes than the child. So at least, we should use
>> something like 67/33 or 75/2
On Sat, Apr 21, 2007 at 06:00:08PM +0200, Ingo Molnar wrote:
> arch/i386/kernel/ioport.c   | 13 ++---
> arch/x86_64/kernel/ioport.c |  8 ++--
> drivers/block/loop.c        |  5 -
> include/linux/sched.h       |  7 +++
> kernel/sched.c              | 40 +++
On Sat, 21 Apr 2007, Willy Tarreau wrote:
>
> If you remember, with 50/50, I noticed some difficulties to fork many
> processes. I think that during a fork(), the parent has a higher probability
> of forking other processes than the child. So at least, we should use
> something like 67/33 or 75/
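The arithmetic behind preferring an uneven split (my own illustration, not from the thread): if the parent keeps fraction p of its timeslice at each fork, after k consecutive forks it retains p^k of the original slice. For k = 5:

    p = \tfrac{1}{2}: \left(\tfrac{1}{2}\right)^5 \approx 3\%, \qquad p = \tfrac{2}{3}: \left(\tfrac{2}{3}\right)^5 \approx 13\%

i.e. a forking shell starves itself far more slowly under a 67/33 split.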
On Sat, Apr 21, 2007 at 05:46:14PM +0200, Ingo Molnar wrote:
>
> * Willy Tarreau <[EMAIL PROTECTED]> wrote:
>
> > I promised to perform some tests on your code. I'm short in time right
> > now, but I observed behaviours that should be commented on.
>
> thanks for the feedback!
>
> > 3) CFS-v4
On Sat, Apr 21, 2007 at 06:00:08PM +0200, Ingo Molnar wrote:
>
> * Con Kolivas <[EMAIL PROTECTED]> wrote:
>
> > > Feels even better, mouse movements are very smooth even under high
> > > load. I noticed that X gets reniced to -19 with this scheduler.
> > > I've not looked at the code yet
* Con Kolivas <[EMAIL PROTECTED]> wrote:
> > Feels even better, mouse movements are very smooth even under high
> > load. I noticed that X gets reniced to -19 with this scheduler.
> > I've not looked at the code yet but this looked suspicious to me.
> > I've reniced it to 0 and it did
On Saturday 21 April 2007 22:12, Willy Tarreau wrote:
> I promised to perform some tests on your code. I'm short in time right now,
> but I observed behaviours that should be commented on.
> Feels even better, mouse movements are very smooth even under high load.
> I noticed that X gets renice
* Willy Tarreau <[EMAIL PROTECTED]> wrote:
> I promised to perform some tests on your code. I'm short in time right
> now, but I observed behaviours that should be commented on.
thanks for the feedback!
> 3) CFS-v4
>
> Feels even better, mouse movements are very smooth even under high
>
On Sat, Apr 21, 2007 at 10:40:18PM +1000, Con Kolivas wrote:
> On Saturday 21 April 2007 22:12, Willy Tarreau wrote:
> > Hi Ingo, Hi Con,
> >
> > I promised to perform some tests on your code. I'm short in time right now,
> > but I observed behaviours that should be commented on.
> >
> > 1) machine