Marcin Kowalski <[EMAIL PROTECTED]>, on Thu Apr 12, 2001 [05:30:59 PM] said:
> Hi
>
> I have applied this (Tom's) patch as well as the small change to
> dcache.c (thanx Andreas, David, Alexander and All). I ran some tests and so
> far so good; both the dcache and inode cache entries in slabinfo a
On Friday 13 April 2001 00:45, Ed Tomlinson wrote:
> On Thursday 12 April 2001 22:03, Alexander Viro wrote:
> > If you are talking about "unused" from the slab POV - _ouch_. Looks like
> > extremely bad fragmentation ;-/ It's surprising, and if that's the case
> > I'd like to see more details.
On Thursday 12 April 2001 22:03, Alexander Viro wrote:
> On Thu, 12 Apr 2001, Ed Tomlinson wrote:
> > On Thursday 12 April 2001 11:12, Alexander Viro wrote:
> > What prompted my patch was observing situations where the icache (and
> > dcache too) got so big that they were applying artificial pressu
On Thu, 12 Apr 2001, Ed Tomlinson wrote:
> On Thursday 12 April 2001 11:12, Alexander Viro wrote:
> What prompted my patch was observing situations where the icache (and dcache
> too) got so big that they were applying artificial pressure to the page and
> buffer caches. I say artificial since
On Thursday 12 April 2001 11:12, Alexander Viro wrote:
> On Thu, 12 Apr 2001, Rik van Riel wrote:
> > On Thu, 12 Apr 2001, Ed Tomlinson wrote:
> > > I have been playing around with patches that fix this problem. What
> > > seems to happen is that the VM code is pretty efficient at avoiding the
> >
David writes:
> Alexander Viro writes:
> > OK, how about wider testing? Theory: prune_dcache() goes through the
> > list of immediately killable dentries and tries to free a given amount.
> > It has a "one warning" policy - it kills a dentry if it sees it twice without
> > lookup finding that dent
On Thu, 12 Apr 2001, Alexander Viro wrote:
> IOW. keeping dcache/icache size low is not a good thing, unless you
> have a memory pressure that requires it. More aggressive kupdate _is_
> a good thing, though - possibly kupdate sans flushing buffers, so that
> it would just keep the icache clean an
On Thu, 12 Apr 2001, Alexander Viro wrote:
> Bad idea. If you do loops over directory contents you will almost
> permanently have almost all dentries freeable. Doesn't make freeing
> them a good thing - think of the effects it would have.
>
> Simple question: how many of the dentries in /usr/src/l
Hi
I have applied this (Tom's) patch as well as the small change to
dcache.c (thanx Andreas, David, Alexander and All). I ran some tests and so
far so good; both the dcache and inode cache entries in slabinfo are keeping
nice and low even though I tested by creating thousands of files and then
On Thu, 12 Apr 2001, Rik van Riel wrote:
> Please take a look at Ed Tomlinson's patch. It also puts pressure
> on the dcache and icache independent of VM pressure, but it does
> so based on the (lack of) pressure inside the dcache and icache
> themselves.
>
> The patch looks simple, sane and it
On Thu, 12 Apr 2001, Rik van Riel wrote:
> On Thu, 12 Apr 2001, Ed Tomlinson wrote:
>
> > I have been playing around with patches that fix this problem. What
> > seems to happen is that the VM code is pretty efficient at avoiding the
> > calls to shrink the caches. When they do get called it's
On Thu, 12 Apr 2001, Alexander Viro wrote:
> On Thu, 12 Apr 2001, Jan Harkes wrote:
>
> > But the VM pressure on the dcache and icache only comes into play when
> > the system still has a free_shortage _after_ other attempts of freeing
> > up memory in do_try_to_free_pages.
>
> I don't think tha
On Thu, 12 Apr 2001, Ed Tomlinson wrote:
> I have been playing around with patches that fix this problem. What
> seems to happen is that the VM code is pretty efficient at avoiding the
> calls to shrink the caches. When they do get called it's a case of too
> little, too late. This is especially bad
On Thu, 12 Apr 2001, Jan Harkes wrote:
> But the VM pressure on the dcache and icache only comes into play when
> the system still has a free_shortage _after_ other attempts of freeing
> up memory in do_try_to_free_pages.
I don't think that it's necessarily bad.
> sync_all_inodes, which is call
On Thu, Apr 12, 2001 at 01:45:08AM -0400, Alexander Viro wrote:
> On Wed, 11 Apr 2001, Andreas Dilger wrote:
>
> > I just discovered a similar problem when testing Daniel Phillips' new ext2
> > directory indexing code with bonnie++. I was running bonnie under single
> > user mode (basically nothi
On Thu, 12 Apr 2001, Marcin Kowalski wrote:
> Hi
>
> Regarding the patch
>
> I don't have experience with the linux kernel internals but could this patch
> not lead to a run-loop condition, as the only thing that can break out of the
> for(;;) loop is the tmp==&dentry_unused statement.
Marcin Kowalski <[EMAIL PROTECTED]> writes:
> Hi
>
> Regarding the patch
>
> I don't have experience with the linux kernel internals but could this patch
> not lead to a run-loop condition, as the only thing that can break out of the
> for(;;) loop is the tmp==&dentry_unused statement. So
Hi
Regarding the patch
I don't have experience with the linux kernel internals but could this patch
not lead to a run-loop condition, as the only thing that can break out of the
for(;;) loop is the tmp==&dentry_unused statement. So if the required number
of dentries does not exist and thi
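A minimal sketch of the loop in question, reconstructed from memory of 2.4-era
fs/dcache.c (not a verbatim copy of any release, and omitting the reference-bit
branch discussed further down the thread). The for(;;) actually has two exits,
not one: the empty-list test Marcin mentions, and the count quota. Each pass
either breaks or unlinks one entry from a finite list, so if fewer freeable
dentries exist than requested, the list simply runs empty and the first break
fires.

    void prune_dcache(int count)
    {
        spin_lock(&dcache_lock);
        for (;;) {
            struct list_head *tmp = dentry_unused.prev;
            struct dentry *dentry;

            if (tmp == &dentry_unused)    /* exit 1: LRU list empty */
                break;
            list_del_init(tmp);
            dentry = list_entry(tmp, struct dentry, d_lru);
            dentry_stat.nr_unused--;
            prune_one_dentry(dentry);     /* unhash and free the dentry */
            if (!--count)                 /* exit 2: freed enough */
                break;
        }
        spin_unlock(&dcache_lock);
    }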
Hi,
I have been playing around with patches that fix this problem. What seems to
happen is that the VM code is pretty efficient at avoiding the calls to shrink
the caches. When they do get called it's a case of too little, too late. This
is especially bad in lightly loaded systems. The followin
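Ed's approach, as described, generates pressure from inside the caches
themselves rather than waiting for the VM. A hypothetical sketch of that idea
(this is not Ed's actual patch, which the archive truncates):
a kernel thread that periodically trims the unused pools once they grow past a
watermark. prune_dcache()/prune_icache() and the dentry_stat/inodes_stat
counters are 2.4-era names whose visibility should be checked against the
tree; the thread, its name, and the thresholds are invented for illustration.

    static int kcachepruned(void *unused)
    {
        daemonize();                      /* 2.4 idiom: detach from user context */
        strcpy(current->comm, "kcachep");

        for (;;) {
            set_current_state(TASK_INTERRUPTIBLE);
            schedule_timeout(5 * HZ);     /* wake up every five seconds */

            /* Invented watermarks: trim a tenth of each unused pool
             * whenever it passes 10000 entries, independent of any
             * VM free-shortage. */
            if (dentry_stat.nr_unused > 10000)
                prune_dcache(dentry_stat.nr_unused / 10);
            if (inodes_stat.nr_unused > 10000)
                prune_icache(inodes_stat.nr_unused / 10);
        }
        return 0;
    }

Started once at boot with kernel_thread(kcachepruned, NULL, 0), this would keep
both caches bounded even on lightly loaded systems where do_try_to_free_pages()
is rarely entered.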
Alexander Viro writes:
> OK, how about wider testing? Theory: prune_dcache() goes through the
> list of immediately killable dentries and tries to free a given amount.
> It has a "one warning" policy - it kills a dentry if it sees it twice without
> lookup finding that dentry in the interval. Unf
On Thu, 12 Apr 2001, Jeff Garzik wrote:
> Alexander Viro wrote:
> > We _have_ VM pressure there. However, such loads had never been used, so
> > there's no wonder that system gets unbalanced under them.
> >
> > I suspect that simple replacement of goto next; with continue; in the
> > fs/dcache
Al writes:
> We _have_ VM pressure there. However, such loads had never been used, so
> there's no wonder that system gets unbalanced under them.
>
> I suspect that simple replacement of goto next; with continue; in the
> fs/dcache.c::prune_dcache() may make situation seriously better.
Yes, it a
Alexander Viro wrote:
> We _have_ VM pressure there. However, such loads had never been used, so
> there's no wonder that system gets unbalanced under them.
>
> I suspect that simple replacement of goto next; with continue; in the
> fs/dcache.c::prune_dcache() may make situation seriously better.
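Putting Alexander's two messages together: the "one warning" policy is a
reference bit, and the goto in question sits between that branch and the quota
check. A sketch of the middle of the scan, again reconstructed from memory of
2.4-era fs/dcache.c (verify against your tree), extending the loop skeleton
sketched earlier in the thread:

        /* inside the for (;;) scan, after taking the dentry off the list: */
        if (dentry->d_flags & DCACHE_REFERENCED) {
            /* first sighting since the last lookup: clear the bit and
             * give the dentry a second chance at the head of the LRU */
            dentry->d_flags &= ~DCACHE_REFERENCED;
            list_add(&dentry->d_lru, &dentry_unused);
            goto next;                /* the line proposed to become continue; */
        }
        dentry_stat.nr_unused--;
        prune_one_dentry(dentry);
    next:
        if (!--count)
            break;

With goto next;, a dentry that was merely referenced still burns one unit of
the caller's quota, so a pass over a working set of recently touched dentries
can exhaust count without freeing anything. With continue;, only actual frees
are counted, and the scan keeps going until it has really pruned the requested
amount.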
On Wed, 11 Apr 2001, Andreas Dilger wrote:
> I just discovered a similar problem when testing Daniel Phillips' new ext2
> directory indexing code with bonnie++. I was running bonnie under single
> user mode (basically nothing else running) to create 100k files with 1 data
> block each (in a sin
Marcin Kowalski writes:
> if I do a cat on /proc/slabinfo I get on the machine with "MISSING" memory:
>
> slabinfo - version: 1.1 (SMP)
> --- cut out
> inode_cache 920558 930264 480 116267 116283 1 : 124 6
> --- cut out
> dentry_cache 557245 638430 128 21281 21281
Further fun...
Now after bouncing the swap and clearing out memory I decided to run the
test.pl script again to suck up some more memory...
Box dies for pretty much 15 seconds; when it comes back the load is at 8.0.
kernel syslog messages:
Apr 11 16:37:13 mkdexii kernel: sym53c896-1-<3,0>: o
On Wed, 11 Apr 2001, Marcin Kowalski wrote:
> I then do a swapoff /dev/sda3 (250mb used), this completely locks the machine
> for 50 seconds and pushes the load to 31 when I can log back in. Then
> miraculously I am using only 170mb of physical ram. I turn swap back on and
> all is well
> Ca
Hi
Here is my saga continued. I had, as mentioned in the preceding post, 500mb of
inode cache entries and about 80mb of dentry_cache entries, accounting for +-
600mb of "missing" memory.
These should be dynamically de-allocatable, so if a program needs the ram it
will be freed as necessary. So I
> To possibly answer my own question:
> if I do a cat on /proc/slabinfo I get on the machine with "MISSING" memory:
>
> slabinfo - version: 1.1 (SMP)
> --- cut out
> inode_cache 920558 930264 480 116267 116283 1 : 124 6
> --- cut out
> dentry_cache 557245 638430 128 21281 21281 1 : 252
To possibly answer my own question:
if I do a cat on /proc/slabinfo I get on the machine with "MISSING" memory:
slabinfo - version: 1.1 (SMP)
--- cut out
inode_cache 920558 930264 480 116267 116283 1 : 124 6
--- cut out
dentry_cache 557245 638430 128 21281 21281 1
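Multiplying out the reconstructed rows shows where the memory sits (slabinfo
1.1 columns: active objs, total objs, object size in bytes, active slabs,
total slabs, pages per slab; assuming 4 KB pages):

    inode_cache:  116283 slabs x 1 page x 4 KB ~ 476 MB  (930264 objs x 480 B ~ 447 MB live)
    dentry_cache:  21281 slabs x 1 page x 4 KB ~  87 MB  (638430 objs x 128 B ~  82 MB live)

That is roughly 560 MB pinned by the two slab caches, which accounts for the
"+- 600mb" of missing memory reported earlier.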
> I can use "ps" to see memory usage of daemons and user programs.
> I can't find any memory information of kernel with "top" and "ps".
> Do you know how to take memory usage information of kernel ?
> Thanks for your help.
Regarding this issue, I have a similar problem if I do a free on my sys
Thanks.
The output of cat /proc/slabinfo looks as follows.
Each row has three columns.
Could you tell me what the second and third columns mean?
kmem_cache 29 42
pio_request 0 0
My second question is:
We can find the memory usage of a daemon (apache) with ps or top.
e.g. apache use
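As far as I recall from 2.4-era mm/slab.c (worth verifying against your
kernel), the slabinfo 1.1 columns after the cache name are: active objects,
total objects, object size in bytes, active slabs, total slabs, and pages per
slab; SMP kernels append a ": limit batchcount" pair for the per-CPU object
caches. So the second and third columns above are objects currently in use
versus objects allocated in total - e.g. kmem_cache has 29 of its 42 objects
in use.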
On Wed, Apr 11, 2001 at 01:42:55AM +0800, gis88530 wrote:
> Hello,
>
> I can use "ps" to see memory usage of daemons and user programs.
> I can't find any memory information of kernel with "top" and "ps".
>
> Do you know how to take memory usage information of kernel ?
Try cat /proc/slabinfo
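If a single number is wanted rather than the raw table, a small user-space
program can total things up. A sketch (a hypothetical helper, not something
from this thread; it assumes the 1.1 field order described above, and
total-objs x object-size only approximates what a cache pins, since per-slab
overhead is ignored):

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/slabinfo", "r");
        char line[256], name[64];
        unsigned long active, total, objsize, sum = 0;

        if (!f) {
            perror("/proc/slabinfo");
            return 1;
        }
        fgets(line, sizeof(line), f);             /* skip the version header */
        while (fgets(line, sizeof(line), f)) {
            if (sscanf(line, "%63s %lu %lu %lu",
                       name, &active, &total, &objsize) != 4)
                continue;                         /* malformed or short row */
            printf("%-20s %10lu bytes\n", name, total * objsize);
            sum += total * objsize;
        }
        fclose(f);
        printf("%-20s %10lu bytes\n", "TOTAL", sum);
        return 0;
    }

On the numbers quoted above this would report about 447 MB for inode_cache
alone.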