On 8/6/07, Nick Piggin <[EMAIL PROTECTED]> wrote:
[...]
> > this completely ignores the use case where the
> > swapping was exactly the
> > right thing to do, but memory has been freed up from
> > a program exiting so
> > that you could now fill that empty RAM with data that
> > was swapped out.
>
>
--- [EMAIL PROTECTED] wrote:
> On Mon, 6 Aug 2007, Nick Piggin wrote:
>
> > [EMAIL PROTECTED] wrote:
> >> On Sun, 29 Jul 2007, Rene Herman wrote:
> >>
> >> > On 07/29/2007 01:41 PM, [EMAIL PROTECTED] wrote:
> >> >
> >> > > I agree that tinkering with the core VM code
> should not be done
>
On Mon, 6 Aug 2007, Nick Piggin wrote:
[EMAIL PROTECTED] wrote:
On Sun, 29 Jul 2007, Rene Herman wrote:
> On 07/29/2007 01:41 PM, [EMAIL PROTECTED] wrote:
>
> > I agree that tinkering with the core VM code should not be done
> > lightly,
> > but this has been put through the proper process and is stalled with no
> > hints on how to move forward.
[EMAIL PROTECTED] wrote:
On Sun, 29 Jul 2007, Rene Herman wrote:
On 07/29/2007 01:41 PM, [EMAIL PROTECTED] wrote:
I agree that tinkering with the core VM code should not be done
lightly,
but this has been put through the proper process and is stalled with no
hints on how to move forward.
Hi!
> > That would just save reading the directories. Not sure
> > it helps that much. Much better would be actually if it didn't stat the
> > individual files (and force their dentries/inodes in). I bet it does that
> > to
> > find out if they are directories or not. But in a modern system it
Matthew Hawkins wrote:
updatedb by itself doesn't really bug me, it's just that on occasion
it's still running at 7am
You should start it earlier then - assuming it doesn't
already start at the earliest opportunity?
Helge Hafting
On Sun, 29 Jul 2007, Rene Herman wrote:
On 07/29/2007 01:41 PM, [EMAIL PROTECTED] wrote:
I agree that tinkering with the core VM code should not be done lightly,
but this has been put through the proper process and is stalled with no
hints on how to move forward.
It has not. Concerns that
On Sunday 29 July 2007 16:00:22 Ray Lee wrote:
> On 7/29/07, Paul Jackson <[EMAIL PROTECTED]> wrote:
> > If the problem is reading stuff back in from swap at the *same time*
> > that the application is reading stuff from some user file system, and if
> > that user file system is on the same drive a
On 7/29/07, Paul Jackson <[EMAIL PROTECTED]> wrote:
> Ray wrote:
> > Ah, so in a normal scenario where a working-set is getting faulted
> > back in, we have the swap storage as well as the file-backed stuff
> > that needs to be read as well. So even if swap is organized perfectly,
> > we're still seeking. Damn.
Ray wrote:
> Ah, so in a normal scenario where a working-set is getting faulted
> back in, we have the swap storage as well as the file-backed stuff
> that needs to be read as well. So even if swap is organized perfectly,
> we're still seeking. Damn.
Perhaps this applies in some cases ... perhaps.
On 7/29/07, Paul Jackson <[EMAIL PROTECTED]> wrote:
> If the problem is reading stuff back in from swap at the *same time*
> that the application is reading stuff from some user file system, and if
> that user file system is on the same drive as the swap partition
> (typical on laptops), then inter
Ray wrote:
> a log structured scheme, where the writeout happens to sequential spaces
> on the drive instead of scattered about.
If the problem is reading stuff back in from swap quickly when
needed, then this likely helps, by reducing the seeks needed.
If the problem is reading stuff back in from swap at the *same time*
that the application is reading stuff from some user file system, and if
that user file system is on the same drive as the swap partition
(typical on laptops)
On 07/29/2007 07:52 PM, Ray Lee wrote:
Well, that doesn't match my systems. My laptop has 400MB in swap:
Which in your case is slightly more than 1/3 of available swap space. Quite
a lot for a desktop indeed. And if it's more than a few percent fragmented,
please fix current swapout instead
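The swap-usage figures being compared above can be read straight out of
/proc/meminfo (Linux-specific; a minimal check, not part of the original
thread):

```shell
# Swap in use = SwapTotal - SwapFree; the kernel reports both in kB.
awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2}
     END {printf "swap used: %d kB\n", t-f}' /proc/meminfo
```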
On 7/29/07, Rene Herman <[EMAIL PROTECTED]> wrote:
> On 07/29/2007 07:19 PM, Ray Lee wrote:
> For me, it is generally the case yes. We are still discussing this in the
> context of desktop machines and their problems with being slow as things
> have been swapped out and generally I expect a desktop
> > Is that generally the case on your systems? Every linux system I've
> > run, regardless of RAM, has always pushed things out to swap.
>
> For me, it is generally the case yes. We are still discussing this in the
> context of desktop machines and their problems with being slow as things
> have been swapped out and generally I expect a desktop
On 07/29/2007 07:19 PM, Ray Lee wrote:
The program is not a real-world issue and if you do not consider it a useful
boundary condition either (okay I guess), how would log structured swap help
if I just assume I have plenty of free swap to begin with?
Is that generally the case on your systems? Every linux system I've
run, regardless of RAM, has always pushed things out to swap.
On 7/29/07, Rene Herman <[EMAIL PROTECTED]> wrote:
> On 07/29/2007 06:04 PM, Ray Lee wrote:
> >> I am very aware of the costs of seeks (on current magnetic media).
> >
> > Then perhaps you can just take it on faith -- log structured layouts
> > are designed to help minimize seeks, read and write.
>
On 07/29/2007 06:04 PM, Ray Lee wrote:
I am very aware of the costs of seeks (on current magnetic media).
Then perhaps you can just take it on faith -- log structured layouts
are designed to help minimize seeks, read and write.
I am particularly bad at faith. Let's take that stupid program t
On 7/29/07, Rene Herman <[EMAIL PROTECTED]> wrote:
> On 07/29/2007 05:20 PM, Ray Lee wrote:
> This seems to be now fixing the different problem of swap-space filling up.
> I'm quite willing to for now assume I've got plenty free.
I was trying to point out that currently, as an example, memory that
On 07/29/2007 05:20 PM, Ray Lee wrote:
I understand what log structure is generally, but how does it help swapin?
Look at the swap out case first.
Right now, when swapping out the kernel places whatever it can
wherever it can inside the swap space. The closer you are to filling
your swap space
On 7/29/07, Rene Herman <[EMAIL PROTECTED]> wrote:
> On 07/29/2007 04:58 PM, Ray Lee wrote:
> > On 7/29/07, Rene Herman <[EMAIL PROTECTED]> wrote:
> >> Right over my head. Why does log-structure help anything?
> >
> > Log structured disk layouts allow for better placement of writeout, so
> > that y
On 07/29/2007 04:58 PM, Ray Lee wrote:
On 7/29/07, Rene Herman <[EMAIL PROTECTED]> wrote:
On 07/29/2007 03:12 PM, Alan Cox wrote:
More radically if anyone wants to do real researchy type work - how about
log structured swap with a cleaner ?
Right over my head. Why does log-structure help anything?
On 7/29/07, Rene Herman <[EMAIL PROTECTED]> wrote:
> On 07/29/2007 03:12 PM, Alan Cox wrote:
> > More radically if anyone wants to do real researchy type work - how about
> > log structured swap with a cleaner ?
>
> Right over my head. Why does log-structure help anything?
Log structured disk layouts allow for better placement of writeout
On 07/29/2007 03:12 PM, Alan Cox wrote:
What are the tradeoffs here? What wants small chunks? Also, as far as
I'm aware Linux does not do things like up the granularity when it
notices it's swapping in heavily? That sounds sort of promising...
Small chunks means you get better efficiency of memory
On 07/29/2007 01:41 PM, [EMAIL PROTECTED] wrote:
And now you do it again :-) There is no conclusion -- just the
inescapable observation that swap-prefetch was (or may have been)
masking the problem of GNU locate being a program that no one in their
right mind should be using.
isn't your concl
> Contrived thing and all, but what it does do is show exactly how bad seeking
> all over swap-space is. If you push it out before hitting enter, the time it
> takes easily grows past 10 minutes (with my 768M) versus sub-second (!) when
> it's all in to start with.
Think in "operations/second"
Andi wrote:
> GNU sort uses a merge sort with temporary files on disk. Not sure
> how much it keeps in memory during that, but it's probably less
> than 150MB.
If I'm reading the source code for GNU sort correctly, then the
following snippet of shell code displays how much memory it uses
for its
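Paul's actual source-derived snippet is cut off above. As a stand-in
(not his code), GNU sort's in-memory buffer can be capped explicitly,
which makes the spill-to-temp-files behaviour he and Andi discuss easy
to observe; assumes GNU coreutils:

```shell
# Cap sort's buffer at 1 MiB and give it a private temp dir (-T), so
# any external merge passes land as intermediate files there.
tmp=$(mktemp -d)
seq 200000 | shuf | sort -n --buffer-size=1M -T "$tmp" > /dev/null
rm -rf "$tmp"
```

With the buffer that small, a 200,000-line input cannot be sorted
entirely in memory, so the merge phase is forced to hit the disk.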
On Sun, 29 Jul 2007, Rene Herman wrote:
On 07/28/2007 11:00 PM, [EMAIL PROTECTED] wrote:
> many -mm users use it anyway? He himself said he's not convinced of
> usefulness having not seen it help for him (and notice that most
> developers are also users), turned it off due to it annoying h
On 07/28/2007 11:00 PM, [EMAIL PROTECTED] wrote:
many -mm users use it anyway? He himself said he's not convinced of
usefulness having not seen it help for him (and notice that most
developers are also users), turned it off due to it annoying him at
some point and hasn't seen a serious investi
On 07/28/2007 01:21 PM, Alan Cox wrote:
It is. Prefetched pages can be dropped on the floor without additional
I/O.
Which is essentially free for most cases. In addition your disk access
may well have been in idle time (and should be for this sort of stuff)
Yes. The swap-prefetch patch ensu
On Sat, 28 Jul 2007 21:33:59 -0400 Rik van Riel <[EMAIL PROTECTED]> wrote:
> Andrew Morton wrote:
>
> > What I think is killing us here is the blockdev pagecache: the pagecache
> > which backs those directory entries and inodes. These pages get read
> > multiple times because they hold multiple
Andrew Morton wrote:
What I think is killing us here is the blockdev pagecache: the pagecache
which backs those directory entries and inodes. These pages get read
multiple times because they hold multiple directory entries and multiple
inodes. These multiple touches will put those pages onto t
On Saturday 28 July 2007 17:06:50 [EMAIL PROTECTED] wrote:
> On Sat, 28 Jul 2007, Daniel Hazelton wrote:
> > On Saturday 28 July 2007 04:55:58 [EMAIL PROTECTED] wrote:
> >> On Sat, 28 Jul 2007, Rene Herman wrote:
> >>> On 07/27/2007 09:43 PM, [EMAIL PROTECTED] wrote:
> On Fri, 27 Jul 2007, Re
On Sat, 28 Jul 2007, Daniel Hazelton wrote:
On Saturday 28 July 2007 04:55:58 [EMAIL PROTECTED] wrote:
On Sat, 28 Jul 2007, Rene Herman wrote:
On 07/27/2007 09:43 PM, [EMAIL PROTECTED] wrote:
On Fri, 27 Jul 2007, Rene Herman wrote:
On 07/27/2007 07:45 PM, Daniel Hazelton wrote:
nobody i
On Sat, 28 Jul 2007, Alan Cox wrote:
It is. Prefetched pages can be dropped on the floor without additional I/O.
Which is essentially free for most cases. In addition your disk access
may well have been in idle time (and should be for this sort of stuff)
and if it was in the same chunk as something nearby was effectively free
On Sat, 28 Jul 2007, Rene Herman wrote:
On 07/28/2007 10:55 AM, [EMAIL PROTECTED] wrote:
it looks to me like unless the code was really bad (and after 23 months in
-mm it doesn't sound like it is)
Not to sound pretentious or anything but I assume that Andrew has a fairly
good overview of
On 7/28/07, Alan Cox <[EMAIL PROTECTED]> wrote:
> Actual physical disk ops are precious resource and anything that mostly
> reduces the number will be a win - not to stay swap prefetch is the right
> answer but accidentally or otherwise there are good reasons it may happen
> to help.
>
> Bigger mor
On Saturday 28 July 2007 04:55:58 [EMAIL PROTECTED] wrote:
> On Sat, 28 Jul 2007, Rene Herman wrote:
> > On 07/27/2007 09:43 PM, [EMAIL PROTECTED] wrote:
> >> On Fri, 27 Jul 2007, Rene Herman wrote:
> >> > On 07/27/2007 07:45 PM, Daniel Hazelton wrote:
> >> > > Questions about it:
> >> > > Q)
On Saturday 28 July 2007 03:48:13 Mike Galbraith wrote:
> On Fri, 2007-07-27 at 18:51 -0400, Daniel Hazelton wrote:
> > Now, once more, I'm going to ask: What is so terribly wrong with swap
> > prefetch? Why does it seem that everyone against it says "It's treating a
> > symptom, so it can't go in"?
> It is. Prefetched pages can be dropped on the floor without additional I/O.
Which is essentially free for most cases. In addition your disk access
may well have been in idle time (and should be for this sort of stuff)
and if it was in the same chunk as something nearby was effectively free
anyway
On 07/28/2007 10:55 AM, [EMAIL PROTECTED] wrote:
in at some situations swap prefetch can help because something that used
memory freed it so there is free memory that could be filled with data
(which is something that Linux does aggressively in most other situations)
in some other situations sw
On Sat, 28 Jul 2007, Rene Herman wrote:
On 07/27/2007 09:43 PM, [EMAIL PROTECTED] wrote:
On Fri, 27 Jul 2007, Rene Herman wrote:
> On 07/27/2007 07:45 PM, Daniel Hazelton wrote:
>
> > Questions about it:
> > Q) Does swap-prefetch help with this?
> > A) [From all reports I've seen (*
On 07/28/2007 09:35 AM, Rene Herman wrote:
By the way -- I'm unable to make my slocate grow substantial here but
I'll try what GNU locate does. If it's really as bad as I hear then
regardless of anything else it should really be either fixed or dumped...
Yes. GNU locate is broken and nobody s
On Fri, 2007-07-27 at 18:51 -0400, Daniel Hazelton wrote:
> Now, once more, I'm going to ask: What is so terribly wrong with swap
> prefetch? Why does it seem that everyone against it says "It's treating a
> symptom, so it can't go in"?
And once again, I personally have nothing against swap-pref
On 07/28/2007 01:15 AM, Björn Steinbrink wrote:
On 2007.07.27 20:16:32 +0200, Rene Herman wrote:
Here's swap-prefetch's author saying the same:
http://lkml.org/lkml/2007/2/9/112
| It can't help the updatedb scenario. Updatedb leaves the ram full and
| swap prefetch wants to cost as little as possible
On 07/27/2007 09:43 PM, [EMAIL PROTECTED] wrote:
On Fri, 27 Jul 2007, Rene Herman wrote:
On 07/27/2007 07:45 PM, Daniel Hazelton wrote:
Questions about it:
Q) Does swap-prefetch help with this?
A) [From all reports I've seen (*)]
Yes, it does.
No it does not. If updatedb filled memory
On 07/27/2007 10:28 PM, Daniel Hazelton wrote:
Check the attitude at the door then re-read what I actually said:
Attitude? You wanted attitude dear boy?
Updatedb or another process that uses the FS heavily runs on a users
256MB P3-800 (when it is idle) and the VFS caches grow, causing memory
On Friday 27 July 2007 19:29:19 Andi Kleen wrote:
> > Any faults in that reasoning?
>
> GNU sort uses a merge sort with temporary files on disk. Not sure
> how much it keeps in memory during that, but it's probably less
> than 150MB. At some point the dirty limit should kick in and write back the
>
On 2007.07.28 01:29:19 +0200, Andi Kleen wrote:
> > Any faults in that reasoning?
>
> GNU sort uses a merge sort with temporary files on disk. Not sure
> how much it keeps in memory during that, but it's probably less
> than 150MB. At some point the dirty limit should kick in and write back the
>
> Any faults in that reasoning?
GNU sort uses a merge sort with temporary files on disk. Not sure
how much it keeps in memory during that, but it's probably less
than 150MB. At some point the dirty limit should kick in and write back the
data of the temporary files; so it's not quite the same as
On 2007.07.27 20:16:32 +0200, Rene Herman wrote:
> On 07/27/2007 07:45 PM, Daniel Hazelton wrote:
>
>> Updatedb or another process that uses the FS heavily runs on a users
>> 256MB P3-800 (when it is idle) and the VFS caches grow, causing memory
>> pressure that causes other applications to be swap
On Friday 27 July 2007 18:08:44 Mike Galbraith wrote:
> On Fri, 2007-07-27 at 13:45 -0400, Daniel Hazelton wrote:
> > On Friday 27 July 2007 06:25:18 Mike Galbraith wrote:
> > > On Fri, 2007-07-27 at 03:00 -0700, Andrew Morton wrote:
> > > > So hrm. Are we sure that updatedb is the problem? There
On Fri, 2007-07-27 at 13:45 -0400, Daniel Hazelton wrote:
> On Friday 27 July 2007 06:25:18 Mike Galbraith wrote:
> > On Fri, 2007-07-27 at 03:00 -0700, Andrew Morton wrote:
> > > So hrm. Are we sure that updatedb is the problem? There are quite a few
> > > heavyweight things which happen in the
On Friday 27 July 2007 14:16:32 Rene Herman wrote:
> On 07/27/2007 07:45 PM, Daniel Hazelton wrote:
> > Updatedb or another process that uses the FS heavily runs on a users
> > 256MB P3-800 (when it is idle) and the VFS caches grow, causing memory
> > pressure that causes other applications to be s
On Fri, 27 Jul 2007, Rene Herman wrote:
On 07/27/2007 07:45 PM, Daniel Hazelton wrote:
Updatedb or another process that uses the FS heavily runs on a users
256MB P3-800 (when it is idle) and the VFS caches grow, causing memory
pressure that causes other applications to be swapped to disk. I
Al Viro wrote:
> BTW, I really wonder how much pain could be avoided if updatedb recorded
> mtime of directories and checked it.
Someone mentioned a variant of slocate above that they called mlocate,
and that Red Hat ships, that seems to do this (if I understand you and
what mlocate does correctly
On 07/27/2007 07:45 PM, Daniel Hazelton wrote:
Updatedb or another process that uses the FS heavily runs on a users
256MB P3-800 (when it is idle) and the VFS caches grow, causing memory
pressure that causes other applications to be swapped to disk. In the
morning the user has to wait for the system
On Friday 27 July 2007 06:25:18 Mike Galbraith wrote:
> On Fri, 2007-07-27 at 03:00 -0700, Andrew Morton wrote:
> > On Fri, 27 Jul 2007 01:47:49 -0700 Andrew Morton
<[EMAIL PROTECTED]> wrote:
> > > More sophisticated testing is needed - there's something in
> > > ext3-tools which will mmap, page i
On Fri, 2007-07-27 at 03:00 -0700, Andrew Morton wrote:
> On Fri, 27 Jul 2007 01:47:49 -0700 Andrew Morton <[EMAIL PROTECTED]> wrote:
>
> > More sophisticated testing is needed - there's something in
> > ext3-tools which will mmap, page in and hold a file for you.
>
> So much for that theory. af
On Fri, 27 Jul 2007 01:47:49 -0700 Andrew Morton <[EMAIL PROTECTED]> wrote:
> More sophisticated testing is needed - there's something in
> ext3-tools which will mmap, page in and hold a file for you.
So much for that theory. afaict mmapped, active pagecache is immune to
updatedb activity. It j
On Fri, 2007-07-27 at 01:47 -0700, Andrew Morton wrote:
> Anyway, blockdev pagecache is a problem, I expect. It's worth playing with
> that patch.
(may tinker a bit, but i'm way rusty. ain't had the urge to mutilate
anything down there in quite a while... works just fine for me these
days)
> A
On Fri, 27 Jul 2007 09:54:41 +0100 Al Viro <[EMAIL PROTECTED]> wrote:
> On Fri, Jul 27, 2007 at 01:47:49AM -0700, Andrew Morton wrote:
> > What I think is killing us here is the blockdev pagecache: the pagecache
> > which backs those directory entries and inodes. These pages get read
> > multiple
On Fri, Jul 27, 2007 at 01:47:49AM -0700, Andrew Morton wrote:
> What I think is killing us here is the blockdev pagecache: the pagecache
> which backs those directory entries and inodes. These pages get read
> multiple times because they hold multiple directory entries and multiple
> inodes. The
On Fri, 27 Jul 2007 09:23:41 +0200 Mike Galbraith <[EMAIL PROTECTED]> wrote:
> On Fri, 2007-07-27 at 07:13 +0200, Mike Galbraith wrote:
> > On Thu, 2007-07-26 at 11:05 -0700, Andrew Morton wrote:
> > > > drops caches prior to both updatedb runs.
> > >
> > > I think that was the wrong thing to do.
On Fri, 2007-07-27 at 07:13 +0200, Mike Galbraith wrote:
> On Thu, 2007-07-26 at 11:05 -0700, Andrew Morton wrote:
> > > drops caches prior to both updatedb runs.
> >
> > I think that was the wrong thing to do. That will leave gobs of free
> > memory for updatedb to populate with dentries and inodes
On Thu, 2007-07-26 at 11:05 -0700, Andrew Morton wrote:
> On Thu, 26 Jul 2007 14:46:58 +0200 Mike Galbraith <[EMAIL PROTECTED]> wrote:
>
> > On Thu, 2007-07-26 at 03:09 -0700, Andrew Morton wrote:
> >
> > > Setting it to zero will maximise the preservation of the vfs caches. You
> > > wanted 100
On 7/26/07, Ingo Molnar <[EMAIL PROTECTED]> wrote:
> wrong, it's active on three of my boxes already :) But then again, i
> never had these hangover problems. (not really expected with gigs of RAM
> anyway)
[...]
> --- /etc/cron.daily/mlocate.cron.orig
[...]
mlocate by design doesn't thrash the cache
On Thu, 26 Jul 2007 14:46:58 +0200 Mike Galbraith <[EMAIL PROTECTED]> wrote:
> On Thu, 2007-07-26 at 03:09 -0700, Andrew Morton wrote:
>
> > Setting it to zero will maximise the preservation of the vfs caches. You
> > wanted 1 there.
> >
> >
>
> drops caches prior to both updatedb runs.
On Thu, Jul 26, 2007 at 02:23:30PM +0200, Andi Kleen wrote:
> That would just save reading the directories. Not sure
> it helps that much. Much better would be actually if it didn't stat the
> individual files (and force their dentries/inodes in). I bet it does that to
> find out if they are directories or not.
On Thu, 2007-07-26 at 03:09 -0700, Andrew Morton wrote:
> Setting it to zero will maximise the preservation of the vfs caches. You
> wanted 1 there.
>
>
drops caches prior to both updatedb runs.
[EMAIL PROTECTED]: df -i
Filesystem      Inodes  IUsed   IFree IUse% Mounted on
/dev/hd
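The cache-dropping step Mike mentions before each updatedb run is the
standard drop_caches knob (root required; shown here for reference, it
discards only clean cached data, not dirty pages):

```shell
# Flush dirty data first, then drop the clean caches:
#   1 = pagecache, 2 = dentries and inodes, 3 = both
sync
echo 3 > /proc/sys/vm/drop_caches
```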
> BTW, I really wonder how much pain could be avoided if updatedb recorded
> mtime of directories and checked it. I.e. instead of just doing blind
> find(1), walk the stored directory tree comparing timestamps with those
> in filesystem. If directory mtime has not changed, don't bother rereading
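Al's suggestion can be approximated with find(1)'s -newer test against a
stamp file from the previous run, since a directory's mtime changes only
when entries are added or removed. A sketch only; the state directory
and file names are assumptions, not updatedb's real layout:

```shell
# List directories changed since the last run; a first run scans everything.
db=$(mktemp -d)          # stand-in for updatedb's state directory
stamp=$db/last-run
root=${1:-.}             # tree to index; current dir by default
if [ -f "$stamp" ]; then
    find "$root" -xdev -type d -newer "$stamp"
else
    find "$root" -xdev -type d
fi > "$db/changed-dirs"
touch "$stamp"           # mark this run for next time
```

Unchanged subtrees then never get their dentries/inodes faulted in at
all, which is the whole point of the proposal.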
On Thu, 26 Jul 2007 12:27:30 +0200 Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> * Andrew Morton <[EMAIL PROTECTED]> wrote:
>
> > > > > ( we _do_ want to balloon the dentry cache otherwise - for things like
> > > > > "find" - having a fast VFS is important. But known-use-once things
> > > > >
* Andrew Morton <[EMAIL PROTECTED]> wrote:
> > > > ( we _do_ want to balloon the dentry cache otherwise - for things like
> > > > "find" - having a fast VFS is important. But known-use-once things
> > > > like the daily updatedb job can clearly be annotated properly. )
> > >
> > > Mutter.
* Andrew Morton <[EMAIL PROTECTED]> wrote:
> Setting it to zero will maximise the preservation of the vfs caches.
> You wanted 1 there.
ok, updated patch below :-)
>
wrong, it's active on three of my boxes already :) But then again, i
never had these hangover problems. (not really expe
On Thu, Jul 26, 2007 at 11:40:24AM +0200, Ingo Molnar wrote:
> below is an updatedb hack that sets vfs_cache_pressure down to 0 during
> an updatedb run. Could someone who is affected by the 'morning after'
> problem give it a try? If this works then we can think about any other
> measures ...
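Ingo's hack (the patch body is cut off above) amounts to wrapping the
updatedb run in a sysctl save/restore. Roughly, as root; note that the
right value to write is itself debated later in the thread, so the 0
here is just Ingo's first version:

```shell
# Turn off vfs cache reclaim pressure for the duration of updatedb,
# then put the previous policy back.
knob=/proc/sys/vm/vfs_cache_pressure
saved=$(cat "$knob")
echo 0 > "$knob"
updatedb
echo "$saved" > "$knob"
```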
On Thu, 26 Jul 2007 11:40:24 +0200 Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> * Andrew Morton <[EMAIL PROTECTED]> wrote:
>
> > On Thu, 26 Jul 2007 11:20:25 +0200 Ingo Molnar <[EMAIL PROTECTED]> wrote:
> >
> > > Once we give the kernel the knowledge that the dentry won't be used again
> > > by t
* Andrew Morton <[EMAIL PROTECTED]> wrote:
> On Thu, 26 Jul 2007 11:20:25 +0200 Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> > Once we give the kernel the knowledge that the dentry won't be used again
> > by this app, the kernel can make a much more intelligent decision and not
> > balloon the dentry