On Friday, February 22, 2013 2:12:54 pm Ian Lepore wrote:
> I'm curious why the concept of scheduling niceness applies only to an
> entire process, and it's not possible to have nice threads within a
> process. Is there any fundamental reason why it couldn't be supported
> with some extra bookkeeping to track niceness per thread?
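FreeBSD does expose a per-thread knob through the standard pthreads interface, even though nice(2) itself acts on the whole process: pthread_setschedparam(3) adjusts one thread's priority within its scheduling policy. A minimal sketch follows; how much effect this has under a timesharing policy such as SCHED_OTHER is implementation-defined.

/* Minimal sketch: per-thread priority via POSIX, not nice(2).
 * The effect under a timesharing scheduler such as SCHED_ULE is
 * implementation-defined. */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    struct sched_param sp;
    int policy;

    /* Query this thread's current policy and priority. */
    if (pthread_getschedparam(pthread_self(), &policy, &sp) == 0) {
        /* Drop to the minimum priority of the current policy,
         * approximating a "nice thread" within the process. */
        sp.sched_priority = sched_get_priority_min(policy);
        pthread_setschedparam(pthread_self(), policy, &sp);
    }
    /* ... CPU-bound work here ... */
    return NULL;
}

int main(void)
{
    pthread_t tid;

    pthread_create(&tid, NULL, worker, NULL);
    pthread_join(tid, NULL);
    return 0;
}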
The problem, IMHO, is none of this is in any way:
* documented;
* modellable by a user;
* explorable by a user (e.g. by an easy version of schedgraph to explore
things in a useful way).
Arnaud raises a valid point - he's given a synthetic benchmark whose
numbers are unpredictable. He's asking why.
On Tue, 10 Apr 2012 12:58:00 -0400
Arnaud Lacombe wrote:
> Let me disagree on your conclusion. If OS A does a task in X seconds,
> and OS B does the same task in Y seconds, and Y > X, then OS B is just
> not performing well enough.
Others have pointed out one problem with this statement.
Hi,
2012/4/9 Alexander Motin:
> [...]
>
> I have a strong feeling that while this test may be interesting for
> profiling, its results depend in the first place not on how fast the
> scheduler is, but on pipe capacity and other such things. Can somebody
> hint me what, except pipe capacity, [...]
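The benchmark itself isn't shown in the thread; below is a minimal sketch of the kind of pipe ping-pong test being discussed, whose round-trip numbers are dominated by pipe buffering and context-switch cost at least as much as by scheduler quality. All parameters here are illustrative.

/* Illustrative pipe ping-pong microbenchmark: two processes bounce one
 * byte back and forth, and the measured time is mostly context-switch
 * and pipe overhead, not scheduler quality. */
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    int p2c[2], c2p[2];          /* parent->child and child->parent pipes */
    char b = 0;
    const int rounds = 100000;

    if (pipe(p2c) == -1 || pipe(c2p) == -1) {
        perror("pipe");
        return 1;
    }
    if (fork() == 0) {           /* child: echo every byte back */
        for (;;) {
            if (read(p2c[0], &b, 1) != 1)
                _exit(0);
            write(c2p[1], &b, 1);
        }
    }
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (int i = 0; i < rounds; i++) {   /* parent: timed round trips */
        write(p2c[1], &b, 1);
        read(c2p[0], &b, 1);
    }
    gettimeofday(&t1, NULL);
    double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
    printf("%.2f us per round trip\n", us / rounds);
    return 0;
}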
I haven't got a good idea yet about balancing priorities, but I've
rewritten the balancer itself. Since sched_lowest() / sched_highest() are
more intelligent now, they allowed me to remove topology traversal from
the balancer itself. That should fix the double-swapping problem, allow
keeping some affinity while moving threads, and make balancing more fair.
I did a number of tests running 4, 8, 9 and 16 CPU-bound threads on 8
CPUs. With 4, 8 and 16 threads everything is stationary as it should.
With 9 threads I see regular and random load moves between all 8 CPUs.
Measurements on a 5 minute run show a deviation of only about 5 seconds.
It is the same deviation as I see caused by only scheduling 16 threads on
8 cores without any balancing needed at all. So I believe this code works
as it should.
Here is the patch: http://people.freebsd.org/~mav/sched.htt40.patch
I plan this to be a final patch of this series (more to come :))
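A deviation measurement of this kind can be reproduced with a small harness. The sketch below is not mav's actual test; it assumes CLOCK_THREAD_CPUTIME_ID is available, and the thread count and run length are illustrative.

/* Sketch of a fairness measurement in the spirit of the test above:
 * run NTHREADS CPU-bound threads for a fixed interval, then compare
 * the CPU time each one received. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define NTHREADS 9               /* e.g. 9 CPU-bound threads on 8 cores */

static volatile int stop;
static double cputime[NTHREADS];

static void *spin(void *arg)
{
    long id = (intptr_t)arg;
    struct timespec ts;

    while (!stop)
        ;                        /* pure CPU-bound busy loop */
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);
    cputime[id] = ts.tv_sec + ts.tv_nsec / 1e9;
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    double min = 1e9, max = 0;

    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, spin, (void *)(intptr_t)i);
    sleep(300);                  /* 5 minute run, as in the test above */
    stop = 1;
    for (int i = 0; i < NTHREADS; i++) {
        pthread_join(t[i], NULL);
        if (cputime[i] < min) min = cputime[i];
        if (cputime[i] > max) max = cputime[i];
    }
    /* "Deviation" here is the spread between the luckiest and the
     * unluckiest thread over the whole run. */
    printf("min %.1fs max %.1fs deviation %.1fs\n", min, max, max - min);
    return 0;
}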
On 03.03.2012 17:26, Ivan Klymenko wrote:
I have FreeBSD 10.0-CURRENT #0 r232253M.
The patch in r232454 broke my DRM.
My system is patched with http://people.freebsd.org/~kib/drm/all.13.5.patch
After building the kernel with only the r232454 patch, the Xorg log contains:
...
[ 504.865] [drm] failed to load kernel module "i
On 03/03/12 10:59, Adrian Chadd wrote:
Right. Is this written up in a PR somewhere explaining the problem in
as much depth as you just have?
Have no idea. I am new to this area and haven't looked at PRs yet.
Right. Is this written up in a PR somewhere explaining the problem in
as much depth as you just have?
And thanks for this, it's great to see some further explanation of the
current issues the scheduler faces.
Adrian
On 2 March 2012 23:40, Alexander Motin wrote:
> Hi.
On Fri, 2 Mar 2012 19:24:42 -0800, Adrian Chadd wrote:
> He's reporting that your ULE work hasn't improved his (very)
> degenerate case.
That's not true!
>
> Thanks!
>
> Adrian
Hi.
On 03/03/12 05:24, Adrian Chadd wrote:
mav@, can you please take a look at George's traces and see if there's
anything obviously silly going on?
He's reporting that your ULE work hasn't improved his (very) degenerate case.
As I can see, my patch has nothing to do with the problem.
Hi,
CC'ing mav@, who started this thread.
mav@, can you please take a look at George's traces and see if there's
anything obviously silly going on?
He's reporting that your ULE work hasn't improved his (very) degenerate case.
Thanks!
Adrian
On 2 March 2012 16:14, George Mitchell wrote:
On 03/02/12 18:06, Adrian Chadd wrote:
Hi George,
Have you thought about providing schedgraph traces with your
particular workload?
I'm sure that'll help out the scheduler hackers quite a bit.
Thanks,
Adrian
I posted a couple back in December but I haven't created any more
recently:
http
on 27/02/2012 13:28 Olivier Smedts said the following:
> Can you try with hald, or directly with the mouse device, without
> using moused? Others reported they had better interactivity without
> sysmouse/moused. Really better (no mouse lag or freeze when under high
> load).
>
I wonder if re-nice
On 02/27/12 05:35, Olivier Smedts wrote:
2012/2/27 George Mitchell:
I finally got around to trying this on a 9.0-STABLE GENERIC kernel, in
the forlorn hope that it would fix SCHED_ULE's poor performance for
interactive processes with a full load on interactive processes. It
doesn't help.
On 02/26/12 19:32, George Mitchell wrote:
> [...] SCHED_ULE's poor performance for
> interactive processes with a full load on interactive processes. It
^^
Should be "of compute-bound".
> doesn't help. -- George Mitchell
On 02/16/12 10:48, Alexander Motin wrote:
On 02/15/12 21:54, Jeff Roberson wrote:
On Wed, 15 Feb 2012, Alexander Motin wrote:
As before, I've tested this on a Core i7-870 with 4 physical and 8 logical
cores and an Atom D525 with 2 physical and 4 logical cores. On Core i7
I've got a speedup of up to 10-15% in super-smack MySQL and PostgreSQL
[...] increase for 2-5 threads and no penalty in other cases.
Tests on Atom show mostly about the same performance as before in database
benchmarks: faster for 1 thread, slower for 2-3 and about the same for
other cases. Single stream network performance improved the same as for
the first patch. That CPU is quite difficult to handle as, with the mix of
effective SMT and lack of L3 cache, different scheduling approaches give
different results in different situations.
Specific performance numbers can be found here:
http://people.freebsd.org/~mav/bench.ods
On 02/15/12 21:54, Jeff Roberson wrote:
On Wed, 15 Feb 2012, Alexander Motin wrote:
On 02/14/12 00:38, Alexander Motin wrote:
I see not much point in committing them sequentially, as they are quite
orthogonal. I need to make one decision. I am going on a small vacation
next week.
On 13.02.2012 23:39, Jeff Roberson wrote:
On Mon, 13 Feb 2012, Alexander Motin wrote:
On 02/11/12 16:21, Alexander Motin wrote:
I've heavily rewritten the patch already, so at least some of the ideas
are already addressed. :) At this moment I am mostly satisfied with the
results, and after final tests today I'll probably publish the new version.
on 11/02/2012 15:35 Andriy Gapon said the following:
> It seems that on modern CPUs the caches are either inclusive or some smart "as
> if inclusive" caches. As a result, if two cores have a shared cache at any
> level, then it should be relatively cheap to move a thread from one core to
> the other.
On Sat, Feb 11, 2012 at 04:21:25PM +0200, Alexander Motin wrote:
> At this moment I am using different penalty coefficients for SMT and
> shared caches (for unrelated processes, sharing is not good). There is
> no problem with adding more types there. A separate flag for a shared
> FPU could be used.
[...] in lock contention with itself more than necessary. I.e. it's usable
only in very borderline cases.
[...] algorithm would be a scheduling infrastructure similar to GEOM. That
way it would be much easier to implement new algorithms (maybe in XML).
I don't think XML would be applicable beyond [...]
I am seeing a massive increase in responsiveness with your patch. With an
unpatched kernel, opening an xterm while unrar'ing some huge archive could
take up to 3 minutes!!!
I can suggest an explanation for this. The original code does only one
pass, looking for a CPU where the thread can run immediately. That pass
is limited to the first level of CPU topology (for HTT systems it is one
physical core). If it sees no good candidate, it just looks for the CPU
with minimal load, ignoring thread priority. I suppose that may lead to
priority violation: scheduling a thread to a CPU where a higher-priority
thread is running, where it may wait for a very long time, while there is
some other CPU running a minimal-priority thread. My patch does more
searches, which allows it to handle priorities better.
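The actual code lives in sys/kern/sched_ule.c; the following is only an illustrative model of the policy described above (one immediate-run pass, then a load pass that still respects priority), with hypothetical types and names.

/* Illustrative sketch only, not the actual sched_ule.c code.
 * Models the policy described above: first look for a CPU that can
 * run the thread immediately; otherwise prefer the least-loaded CPU
 * whose running thread is least important, instead of ignoring
 * priority entirely.  All types here are hypothetical. */
struct cpu_state {
    int load;      /* number of runnable threads queued */
    int curpri;    /* priority of the currently running thread
                    * (lower value = more important) */
};

int
pick_cpu(const struct cpu_state *cpu, int ncpu, int newpri)
{
    int best = -1;

    /* Pass 1: any CPU that would run the new thread right away. */
    for (int i = 0; i < ncpu; i++)
        if (cpu[i].load == 0 || cpu[i].curpri > newpri)
            return (i);

    /* Pass 2: least-loaded CPU, breaking ties toward the one whose
     * running thread is least important, so the new thread is not
     * parked behind a long-running high-priority thread. */
    for (int i = 0; i < ncpu; i++) {
        if (best < 0 || cpu[i].load < cpu[best].load ||
            (cpu[i].load == cpu[best].load &&
             cpu[i].curpri > cpu[best].curpri))
            best = i;
    }
    return (best);
}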
Hi.
I've analyzed the scheduler behavior and think I've found the problem with
HTT. SCHED_ULE knows about HTT, and when doing load balancing once a
second, it does the right things. Unluckily, if some other thread gets in
the way, a process can easily be pushed out to another CPU, where it will
stay for another second.
Bernard van Gastel wrote:
But the descheduling of threads when the mutex is not available is done by
the library. And especially the order of rescheduling of the threads
(that's what I'm interested in). Or am I missing something in the
sys/kern/sched files? (BTW, I don't have the umtx file.)
Regards
On Thu, 21 Jan 2010, Bernard van Gastel wrote:
In a real-world application such a proposed queue would work almost
always, but I'm trying to exclude all starvation situations primarily
(speed is less relevant). And although such a worker can execute its
work and be scheduled fairly,
Bernard van Gastel writes:
> What is the scheduling policy of the different thread libraries?
Threads are scheduled by the kernel, not by the library. Look at
sys/kern/sched_umtx.c and sys/kern/sched_{4bsd,ule}.c.
DES
--
Dag-Erling Smørgrav - d...@des
Hi everyone,
I'm curious about the exact scheduling policy of POSIX threads in relation
to mutexes and conditions. If there are two threads (a & b), both with the
following code:
while (1) {
pthread_mutex_lock(mutex);
...
pthread_mutex_unlock(mutex);
}
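The rest of the question is cut off, but the scenario can be made self-contained. A sketch follows, with hypothetical per-thread counters added for observation; note that POSIX does not promise FIFO handoff, so the thread that unlocks a mutex may immediately reacquire it.

/* Self-contained version of the two-thread scenario above.  A large
 * imbalance in the printed counts is legal: POSIX leaves the mutex
 * wakeup order to the implementation. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static volatile int stop;
static long count[2];

static void *loop(void *arg)
{
    long id = (intptr_t)arg;

    while (!stop) {
        pthread_mutex_lock(&mtx);
        count[id]++;               /* the "..." from the snippet above */
        pthread_mutex_unlock(&mtx);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, loop, (void *)(intptr_t)0);
    pthread_create(&b, NULL, loop, (void *)(intptr_t)1);
    sleep(2);                      /* let them contend for a while */
    stop = 1;
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("a acquired %ld times, b acquired %ld times\n",
        count[0], count[1]);
    return 0;
}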
Hi,
Thank you very much again, Ulf.
I found http://en.wikipedia.org/wiki/Native_POSIX_Thread_Library and it
describes the 1:1 correspondence of Linux threads. So you were right, and
thank you very much again.
Regards,
Mehmet
On Thu, Jan 8, 2009 at 4:59 PM, Ulf Lilleengen wrote:
On Thu, Jan 08, 2009 at 09:16:26AM -0500, Mehmet Ali Aksoy TÜYSÜZ wrote:
> [...]
> By the way, any information for the Linux case?
>
I think this applies to Linux as well, since it's NPTL (Native POSIX
Thread Library).
Hi,
Thank you very much for your response, Ulf. It is a very clear answer.
Thanks again.
By the way, any information for the Linux case?
Regards,
Mehmet
On Thu, Jan 8, 2009 at 10:08 AM, Ulf Lilleengen wrote:
Hi all,
After a bit of googling I got confused.
My questions are simple, and they are as follows:
1-) "Are pthreads (or threads in general) of one process scheduled to
different cores on multi-core systems running Linux or BSD?"
2-) What if there are multiple processes which have multiple threads?
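Question 1 is easy to check empirically. A sketch follows, assuming sched_getcpu() is available (glibc on Linux; FreeBSD 13 and later); distinct CPU numbers in the output show that one process's pthreads do run on different cores.

/* Quick experiment for question 1: spawn CPU-bound threads and report
 * which core each one ends up on. */
#define _GNU_SOURCE              /* for sched_getcpu() on glibc */
#include <pthread.h>
#include <sched.h>
#include <stdint.h>
#include <stdio.h>

#define NTHREADS 4

static void *burn(void *arg)
{
    long id = (intptr_t)arg;
    volatile unsigned long x = 0;

    for (unsigned long i = 0; i < 100000000UL; i++)
        x += i;                  /* keep this core busy for a while */
    printf("thread %ld is on CPU %d\n", id, sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];

    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, burn, (void *)(intptr_t)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    return 0;
}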
Hi all, I have a patch for FreeBSD 4.x, not developed by me, but it is
very good to evaluate and/or update for versions 5.x, 6.x or CURRENT. It
uses sysctl OIDs to limit RAM and CPU use.
Regards, and sorry for my bad English,
Roberto Lima.
jail_seperation.v7.patch
I personally prefer the notion of layering the normal scheduler on top
of a simple fair-share scheduler. This would not add any overhead for
the non-jailed case. Complicating the process scheduler poses
maintenance, scalability, and general performance problems.
-Kip
On 6/11/06, Peter Jeremy wrote:
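That layering can be sketched concretely. The following is purely hypothetical (none of these types or names exist in FreeBSD): a fair-share pass chooses a jail by CPU shares, then delegates to an unmodified per-jail scheduler.

/* Hypothetical two-level sketch of the layering described above: a
 * simple fair-share pass picks a jail by CPU shares, then the normal
 * scheduler logic picks a thread within that jail. */
struct thread;

struct jail_sched {
    int shares;            /* administrator-assigned CPU shares */
    long vtime;            /* share-weighted virtual CPU time used */
    struct thread *(*pick_next)(struct jail_sched *); /* normal scheduler */
};

struct thread *
fair_share_pick(struct jail_sched *js, int njails)
{
    struct jail_sched *best = NULL;

    /* Pick the jail that is furthest behind its share: virtual time
     * advances slowest for jails with large share counts. */
    for (int i = 0; i < njails; i++)
        if (best == NULL || js[i].vtime < best->vtime)
            best = &js[i];
    if (best == NULL)
        return (NULL);

    /* Charge one quantum of virtual time, inversely weighted by
     * shares, then delegate to the unmodified per-jail scheduler. */
    best->vtime += 1000 / (best->shares > 0 ? best->shares : 1);
    return (best->pick_next(best));
}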
On Sun, 2006-Jun-11 14:50:30 +0200, Pieter de Goeje wrote:
> I suppose by limiting the jail CPU usage you mean that jails contending over
> CPU each get their assigned share. But when the system is idle one jail can
> get all the CPU it wants.
IBM MVS had an interesting alternative approach.
On 11-Jun-06, at 6:50 AM, Pieter de Goeje wrote:
For my CS study I picked up "Operating System Concepts" by Silberschatz,
Galvin and Gagne. It has a fairly detailed description of the inner
workings of a scheduler and the various algorithms involved, but no
actual implementation.
On Sat, Jun 10, 2006 at 11:51:33PM -0600, Chris Jones wrote:
> - what're your thoughts on making the existing scheduler jail-aware,
> as opposed to writing a sort of 'meta-scheduler' that would schedule
> between jails and then delegate to a scheduler per jail?
[...] where I come in: I'm the guy doing the project, and I've been
spending the last two weeks coming up to speed on scheduling and the like.
What I'd like from freebsd-hackers is the following:
- are there any good references on scheduling that you know of which I
should read? I've already got Design & Implementation of FreeBSD.
Adam Migus wrote:
> So if you gimme webspace, can I promise you code and
> output shortly after? If you want input into the design I can
> give you the code now, with the understanding that it is
> WIP.
Sure. If you can wait a week, I'll be able to sort you out.
It's very WIP right now and will remain so for another couple of weeks.
I'd planned to show more people a 'working' version when a) I got a home
for the page and b) the numbers it's producing have reasonable variance.
I'd prefer deferring a public release until those goals are reached.
Mike,
I don't have the test, but I've built a generic performance testing
framework for FreeBSD over the past couple of months that would make
running such a test trivial. I'd post a link but the page has no
permanent home yet. When it gets one I can follow it up with a link.
On Fri, 28 Feb 2003, Paul Robinson wrote:
> Well, I'm just a hanger-on without a commit bit, so I'll work on making it
> production-ready in the next few weeks, post up a patch, and if somebody
> wants to commit it, great. At the moment it's all based on 4.3-RELEASE and
> isn't really production-ready.
--- Paul Robinson <[EMAIL PROTECTED]> wrote:
>
> The license is actually BSD. Or at least, the one I
> saw last night had a remarkable resemblance to it. :-)
I thought the same when I glimpsed over it until I saw the README file
:-). Read it again: it has 4 statements à la BSD.
[EMAIL PROTECTED] (Thu, Feb 27, 2003 at 06:12:29PM +0100) wrote:
> In message <[EMAIL PROTECTED]>, "" writes:
> >Hello gang.
> >
> >Does anyone know what kind of `Disk Scheduling' algorithm,
> >if any, is used in FreeBSD?
>
> One way elevator
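A one-way elevator is simple to model: keep pending requests sorted by block number, sweep upward, then wrap to the beginning instead of reversing. The sketch below is an illustrative in-memory model, not FreeBSD's actual disksort() code; the block numbers are made up.

/* Hypothetical model of a one-way elevator: service pending requests
 * in ascending block order from the current head position, then wrap
 * around and sweep upward again rather than reversing direction. */
#include <stdio.h>
#include <stdlib.h>

static int
cmp_blkno(const void *a, const void *b)
{
    long x = *(const long *)a, y = *(const long *)b;

    return (x > y) - (x < y);
}

int main(void)
{
    long pending[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
    const int n = sizeof(pending) / sizeof(pending[0]);
    long head = 53;              /* current head position */

    qsort(pending, n, sizeof(pending[0]), cmp_blkno);

    /* First sweep: everything at or above the head, ascending. */
    for (int i = 0; i < n; i++)
        if (pending[i] >= head)
            printf("service %ld\n", pending[i]);
    /* Wrap around: one-way means we jump back to the start and sweep
     * upward again instead of reversing. */
    for (int i = 0; i < n; i++)
        if (pending[i] < head)
            printf("service %ld\n", pending[i]);
    return 0;
}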