On Thu, 19 Apr 2001, Andrey A. Chernov wrote:
> On Thu, Apr 19, 2001 at 10:39:58 -0700, Matt Dillon wrote:
> > Let me explain a little more. If it's commented out, it's fine. But
> > if you are actually setting a value in there you will override whatever
> > is set in the kernel. W
On Thu, Apr 19, 2001 at 10:47:20 -0700, Matt Dillon wrote:
> :> set that default in stone and prevent us from being able to change
> :> it with a new kernel rev. This being a *kernel* specific feature,
> :> we need to have control over the default in the kernel itself.
> :
> :What about simple check in the kernel: if total memory is above 64Mb, then
:> set that default in stone and prevent us from being able to change
:> it with a new kernel rev. This being a *kernel* specific feature,
:> we need to have control over the default in the kernel itself.
:
:What about simple check in the kernel: if total memory is above 64Mb, then
:enable it by default?
On Thu, Apr 19, 2001 at 10:39:58 -0700, Matt Dillon wrote:
> :But we already have sysctl.conf and appropriate rc.sysctl, haven't we? What's
> :wrong with putting some useful payload into it?
> :
> :-Maxim
>
> Let me explain a little more. If it's commented out, it's fine. But
> if you are actually setting a value in there you will override whatever is set in the kernel.
:But we already have sysctl.conf and appropriate rc.sysctl, haven't we? What's
:wrong with putting some useful payload into it?
:
:-Maxim
Let me explain a little more. If it's commented out, it's fine. But
if you are actually setting a value in there you will override whatever
is set in the kernel.
:But we already have sysctl.conf and appropriate rc.sysctl, haven't we? What's
:wrong with putting some useful payload into it?
:
:-Maxim
If it's commented out, it's fine.
-Matt
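The commented-out style Matt is endorsing would look something like this in /etc/sysctl.conf (an illustrative entry; the comment text is ours, not from the thread):

```
# Enable VM-backed caching of directory data. The kernel chooses the
# default; uncomment only if you deliberately want to override it.
#vfs.vmiodirenable=1
```

Left commented out, the line documents the knob without pinning the kernel's default in stone.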
To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-current" in the body of the message
Matt Dillon wrote:
> :
> :What do you think about attached patch?
> :
> :-Maxim
>
> mmm.. I think it would just confuse the issue and prevent us from
> being able to change the kernel default trivially. 99.5% of the
> FreeBSD boxes out there are just going to want it to be on by default.
:
:What do you think about attached patch?
:
:-Maxim
mmm.. I think it would just confuse the issue and prevent us from
being able to change the kernel default trivially. 99.5% of the
FreeBSD boxes out there are just going to want it to be on by default.
We could provide a comment
Doug Barton wrote:
> Maxim Sobolev wrote:
>
> > > > What do you think about attached patch?
>
> Definitely the right idea, however I'm waiting on input from a couple
> people on some additional suggestions, so if you'd hold off I'd appreciate
> it.
Unfortunately I've already cvs ci it. :
Maxim Sobolev wrote:
> > > What do you think about attached patch?
Definitely the right idea, however I'm waiting on input from a couple
people on some additional suggestions, so if you'd hold off I'd appreciate
it.
--
"One thing they don't tell you about doing experimental physics is
On Thu, Apr 19, 2001 at 03:46:39PM +0300, Maxim Sobolev wrote:
>
> What do you think about attached patch?
>
> -Maxim
>
> Index: Makefile
> ===================================================================
> RCS file: /home/ncvs/src/etc/Makefile,v
> retrieving revision 1.248
> diff -d -u -r1.248 Makefile
* Maxim Sobolev <[EMAIL PROTECTED]> [010419 06:20] wrote:
>
> OOPS, I see. See updated patch.
Looks ok.
> Index: Makefile
> ===================================================================
> RCS file: /home/ncvs/src/etc/Makefile,v
> retrieving revision 1.248
> diff -d -u -r1.248 Makefile
> -
Alfred Perlstein wrote:
> * Maxim Sobolev <[EMAIL PROTECTED]> [010419 05:48] wrote:
> > Doug Barton wrote:
> >
> > > Alfred Perlstein wrote:
> > >
> > > > I'm figuring the only time when it may be a problem is on machines
> > > > with a small amount of memory. Since memory is cheap, I plan on
> > > > turning it on within the next couple of days unless a stability issue comes up.
* Maxim Sobolev <[EMAIL PROTECTED]> [010419 05:48] wrote:
> Doug Barton wrote:
>
> > Alfred Perlstein wrote:
> >
> > > I'm figuring the only time when it may be a problem is on machines
> > > with a small amount of memory. Since memory is cheap, I plan on
> > > turning it on within the next couple of days unless a stability issue comes up.
Doug Barton wrote:
> Alfred Perlstein wrote:
>
> > I'm figuring the only time when it may be a problem is on machines
> > with a small amount of memory. Since memory is cheap, I plan on
> > turning it on within the next couple of days unless a stability
> > issue comes up.
> >
> > I'll leave it
On Tue, 17 Apr 2001, Doug Barton wrote:
> OK... this brings up the question of what other cool optimizations are
> there that may have been disabled in the past for reasons that are no
> longer pertinent? It might be worthwhile to create an /etc/sysctl.conf file
> with commented out example
On Wed, Apr 18, 2001 at 10:33:32AM +0200, Jeroen Ruigrok/Asmodai wrote:
> -On [20010417 20:47], Matt Dillon ([EMAIL PROTECTED]) wrote:
> >Testing it 'on' in stable on production systems and observing the
> >relative change in performance is a worthy experiment. Testing it
> >'on' in current is just an experiment.
On 18 Apr 2001, at 22:16, Bruce Evans wrote:
> On Wed, 18 Apr 2001, Jeroen Ruigrok/Asmodai wrote:
>
> > -On [20010417 20:47], Matt Dillon ([EMAIL PROTECTED]) wrote:
> > >Testing it 'on' in stable on production systems and observing the
> > >relative change in performance is a worthy experiment. Testing it 'on' in current is just an experiment.
On 18 Apr 2001, at 10:33, Jeroen Ruigrok/Asmodai wrote:
> -On [20010417 20:47], Matt Dillon ([EMAIL PROTECTED]) wrote:
> >Testing it 'on' in stable on production systems and observing the
> >relative change in performance is a worthy experiment. Testing it
> >'on' in current is just an experiment.
-On [20010418 14:38], Bruce Evans ([EMAIL PROTECTED]) wrote:
[vfs.vmiodirenable]
>So, how much slower was it? ;-)
Not noticeable for me at least.
--
Jeroen Ruigrok van der Werven/Asmodai --=-- asmodai@[wxs.nl|freebsd.org]
Documentation nutter/C-rated Coder BSD: Technical excellence at its best
On Wed, 18 Apr 2001, Jeroen Ruigrok/Asmodai wrote:
> -On [20010417 20:47], Matt Dillon ([EMAIL PROTECTED]) wrote:
> >Testing it 'on' in stable on production systems and observing the
> >relative change in performance is a worthy experiment. Testing it
> >'on' in current is just an experiment.
-On [20010418 01:00], Alfred Perlstein ([EMAIL PROTECTED]) wrote:
> (although afaik we're basing it on both Solaris and BSD/os's
> implementation so... well I'm not going to bother defending it.)
You just scared the shit out of me by mentioning Solaris.
I've found Solaris to be a PITA with all
-On [20010417 20:47], Matt Dillon ([EMAIL PROTECTED]) wrote:
>Testing it 'on' in stable on production systems and observing the
>relative change in performance is a worthy experiment. Testing it
>'on' in current is just an experiment.
I have been running vfs.vmiodirenable=1 on two ST
On Tue, Apr 17, 2001 at 10:18:34PM +, E.B. Dreger wrote:
> > Once the mutexes are in place the underlying implementation can
> > change pretty easily from task switching always to only task
> > switching when the mutex is owned by the same CPU that I'm running
>
> I'm not sure that I follow.
(cross-posting to SMP and renaming in an effort to move the thread)
> Date: Tue, 17 Apr 2001 16:04:18 -0700
> From: Alfred Perlstein <[EMAIL PROTECTED]>
(Repeat disclaimer: I am not a kernel hacker.)
> seriously, it would be _trivial_ to:
>
> 1) make interrupts the only thing that could swit
> Date: Tue, 17 Apr 2001 22:18:34 + (GMT)
> From: E.B. Dreger <[EMAIL PROTECTED]>
>
> My instinct (whatever it's worth; remember my disclaimer) is that co-op
> switching using something like tsleep() and wakeup_one() or similar would
> be more efficient than trying to screw with mutexes.
Oop
* Matt Dillon <[EMAIL PROTECTED]> [010417 15:00] wrote:
>
> :Once the mutexes are in place the underlying implementation can
> :change pretty easily from task switching always to only task
> :switching when the mutex is owned by the same CPU that I'm running
> :on. (to avoid spinlock deadlock)
>
> Date: Tue, 17 Apr 2001 15:00:29 -0700 (PDT)
> From: Matt Dillon <[EMAIL PROTECTED]>
>
> WILL be a performance hit. WILL introduce major bugs. IS unnecessary,
> DOESN'T make any sense whatsoever, is at CROSS PURPOSES with goals
> already stated (not having any serious contention in the first place).
> Date: Tue, 17 Apr 2001 14:52:06 -0700
> From: Alfred Perlstein <[EMAIL PROTECTED]>
Disclaimer: I am not a kernel hacker.
> The goal is to have a kernel that's able to have more concurrency,
Right...
> things like pre-emption and task switching on mutex collisions can
> be examined and possib
:Once the mutexes are in place the underlying implementation can
:change pretty easily from task switching always to only task
:switching when the mutex is owned by the same CPU that I'm running
:on. (to avoid spinlock deadlock)
That makes *NO* *SENSE* Alfred! So the first step is to intro
* Matt Dillon <[EMAIL PROTECTED]> [010417 14:07] wrote:
>
> :
> :You need to settle dude, pre-emption isn't a goal, it's merely a
> :_possible_ side effect.
> :
> :We're not aiming for pre-emption, we're aiming for more concurrency.
>
> A goal of having more concurrency is laudable, but I think you are ignoring the costs of doing task switches
On Tue, 17 Apr 2001, Matt Dillon wrote:
> :You need to settle dude, pre-emption isn't a goal, it's merely a
> :_possible_ side effect.
> :
> :We're not aiming for pre-emption, we're aiming for more concurrency.
>
> A goal of having more concurrency is laudable, but I think you are
> ignoring the costs of doing task switches
:
:You need to settle dude, pre-emption isn't a goal, it's merely a
:_possible_ side effect.
:
:We're not aiming for pre-emption, we're aiming for more concurrency.
A goal of having more concurrency is laudable, but I think you are
ignoring the costs of doing task switches versus the l
You need to settle dude, pre-emption isn't a goal, it's merely a
_possible_ side effect.
We're not aiming for pre-emption, we're aiming for more concurrency.
* Matt Dillon <[EMAIL PROTECTED]> [010417 13:51] wrote:
> :
> :There's actually very little code that's non-preemptable once we get the
> :kernel mutexed. The least complex way to accomplish this is to only preempt kernel processes that hold no mutex (low level) locks.
:
:There's actually very little code that's non-preemptable once we get the
:kernel mutexed. The least complex way to accomplish this is to only
:preempt kernel processes that hold no mutex (low level) locks.
:
:--
:-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
I wish it were that
* Matt Dillon <[EMAIL PROTECTED]> [010417 10:22] wrote:
>
> :* Matt Dillon <[EMAIL PROTECTED]> [010415 23:16] wrote:
> :>
> :> For example, all this work on a preemptive
> :> kernel is just insane. Our entire kernel is built on the concept of
:> not being preemptable except by interrupts. We virtually guarantee
:
:Matt Dillon wrote:
:
:> It is not implying that at all. There is no black and white here.
:> This is a case where spending a huge amount of time and complexity
:> to get the efficiency down to the Nth degree is nothing but a waste
:> of time. What matters is what the user sees,
:* Matt Dillon <[EMAIL PROTECTED]> [010415 23:16] wrote:
:>
:> For example, all this work on a preemptive
:> kernel is just insane. Our entire kernel is built on the concept of
:> not being preemptable except by interrupts. We virtually guarantee
:> ye
Alfred Perlstein wrote:
> I'm figuring the only time when it may be a problem is on machines
> with a small amount of memory. Since memory is cheap, I plan on
> turning it on within the next couple of days unless a stability
> issue comes up.
>
> I'll leave it to those people with low memory t
* Matt Dillon <[EMAIL PROTECTED]> [010415 23:16] wrote:
>
> For example, all this work on a preemptive
> kernel is just insane. Our entire kernel is built on the concept of
> not being preemptable except by interrupts. We virtually guarantee
> years of
* Doug Barton <[EMAIL PROTECTED]> [010417 01:08] wrote:
> Matt Dillon wrote:
>
> > It is not implying that at all. There is no black and white here.
> > This is a case where spending a huge amount of time and complexity
> > to get the efficiency down to the Nth degree is nothing but a waste of time.
Matt Dillon wrote:
> It is not implying that at all. There is no black and white here.
> This is a case where spending a huge amount of time and complexity
> to get the efficiency down to the Nth degree is nothing but a waste
> of time. What matters is what the user sees, what p
:> It just seems inelegant to have a system that, on paper, is
:> so inefficient. Can't we do better?
:
:Sure. Don't discard buffer contents when recycling a B_MALLOC'ed buffer,
:but manage it using a secondary buffer cache that doesn't have as much
:overhead as the primary one (in particular,
:
:>I don't consider it inefficient. Sure, if you look at this one aspect
:>of the caching taken out of context it may appear to be inefficient,
:>but if you look at the whole enchilada the memory issue is nothing
:>more than a minor footnote - not worth the effort of worrying about.
>I don't consider it inefficient. Sure, if you look at this one aspect
>of the caching taken out of context it may appear to be inefficient,
>but if you look at the whole enchilada the memory issue is nothing
>more than a minor footnote - not worth the effort of worrying about.
On Sun, 15 Apr 2001, Justin T. Gibbs wrote:
> >There's no downside, really.
>
> It just seems inelegant to have a system that, on paper, is
> so inefficient. Can't we do better?
Sure. Don't discard buffer contents when recycling a B_MALLOC'ed buffer,
but manage it using a secondary buffer cache that doesn't have as much overhead as the primary one (in particular,
:
:>There's no downside, really.
:
:It just seems inelegant to have a system that, on paper, is
:so inefficient. Can't we do better?
:
:--
:Justin
I don't consider it inefficient. Sure, if you look at this one aspect
of the caching taken out of context it may appear to be inefficient, but if you look at the whole enchilada the memory issue is nothing more than a minor footnote - not worth the effort of worrying about.
>There's no downside, really.
It just seems inelegant to have a system that, on paper, is
so inefficient. Can't we do better?
--
Justin
:It is my understanding that with the new directory layout strategies, this
:will be improved somewhat. ie: a single page is much more likely to cache
:up to 8 directories.
:
:Cheers,
:-Peter
:--
:Peter Wemm - [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
:"All of this is for nothing if
"Justin T. Gibbs" wrote:
> > I notice that this option is off by default. Can you give a general
> >idea of when it should be enabled, when it should be disabled, and what bad
> >things might result with it on?
>
> It consumes a full page per-directory even though the majority of
> directories in a stock system are a small fraction
:
: I notice that this option is off by default. Can you give a general idea
:of when it should be enabled, when it should be disabled, and what bad
:things might result with it on?
:
:Thanks,
:
:Doug
There's no downside, really. The directory cache is so tiny without
it that you
> I notice that this option is off by default. Can you give a general
>idea of when it should be enabled, when it should be disabled, and what bad
>things might result with it on?
It consumes a full page per-directory even though the majority of
directories in a stock system are a small fraction
Matt Dillon wrote:
> If directories are spread all over the disk, caching
> is non-optimal. But if they are relatively close to each other then
> both our VM cache (if vfs.vmiodirenable is set to 1) and the hard
> drive's internal cache become extremely effective.
I notice that this option is off by default. Can you give a general idea of when it should be enabled, when it should be disabled, and what bad things might result with it on?
:
:Any plan to MFC? I am interested to see it in 4.3-RELEASE.
:
:--
:David Xu
It will definitely not go in until after the release. It's still
experimental (in our tree).
-Matt
If memory serves me right, David Xu wrote:
[dirpref stuff]
> Any plan to MFC? I am interested to see it in 4.3-RELEASE.
I'm pretty sure it won't be in 4.3-RELEASE. In case you didn't realize,
RELENG_4 has been in code freeze for some weeks now, preparing for a
release next week. "Code freeze
Hello Matt,
Wednesday, April 11, 2001, 2:24:35 AM, you wrote:
:>> I'm not 100% convinced about the algorithm to avoid clusters filling
:>> up with directory-only entries (it looks like a worst-case would fill
:>> a cluster with 50% directories and 50% files leaving a bad layout when
:>> the directories are populated further), but then the non-dirpref
> Yup, Kirk committed it. I really like the changes -- in the old days
> disk caches were tiny and directories were not well cached on top of that.
> It made sense to try to keep directories close to their files.
So I'm all excited now at the progress that ufs/ffs are making recently
:Why VMIO dir works better if directories are placed close to each other? I
:think it only makes the cache data of an individual directory stay in the
:memory longer. Is there a way to measure the effectiveness of the disk
:drive's cache?
:
:-Zhihui
I wasn't being clear enough. There are tw
> Why VMIO dir works better if directories are placed close to each other? I
> think it only makes the cache data of an individual directory stay in the
> memory longer. Is there a way to measure the effectiveness of the disk
> drive's cache?
The real performance gain is seen when doing stuff wi
On Tue, 10 Apr 2001, Matt Dillon wrote:
> :> I'm not 100% convinced about the algorithm to avoid clusters filling
> :> up with directory-only entries (it looks like a worst-case would fill
> :> a cluster with 50% directories and 50% files leaving a bad layout when
> :> the directories are populated further), but then the non-dirpref
:> I'm not 100% convinced about the algorithm to avoid clusters filling
:> up with directory-only entries (it looks like a worst-case would fill
:> a cluster with 50% directories and 50% files leaving a bad layout when
:> the directories are populated further), but then the non-dirpref
:> sche
[.]
> > The second improvement, contributed by
> > [EMAIL PROTECTED], is a new directory allocation policy (codenamed
> > "dirpref"). Coupled with soft updates, the new dirpref code offers up
> > to a 60x speed increase in gluk's tests, documented here:"
> >
> >
>ht
> Another important change is that it is no longer necessary to run
> tunefs in single user mode to activate soft updates. All that is
> needed is to add the "softdep" mount option to the partitions you
> want soft updates enabled on in /etc/fstab."
[.]
> I especially like not having to run tunefs in single user mode.
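The fstab change described above would look something like this (an illustrative OpenBSD-style line; the device and mount point are placeholders):

```
# /etc/fstab: adding "softdep" to the options column enables soft
# updates at mount time, with no single-user tunefs run required.
/dev/wd0g  /usr  ffs  rw,softdep  1  2
```

The option takes effect on the next mount of the partition.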
Taken from: http://www.deadly.org/article.php3?sid=20010408202512
Aaron Campbell writes : "Two aspects of the FFS filesystem in OpenBSD
have received significant improvements since 2.8, increasing
performance dramatically. Thanks to art, gluk, csapu