On Tue, 22 Mar 2005 20:13:15 -0500,
Dave Jones <[EMAIL PROTECTED]> wrote:

> With something like this, and some additional bookkeeping to keep track of
> which files we open in the first few minutes of uptime, we could periodically
> reorganise the layout back to an optimal state.

That wouldn't be too hard (a libc wrapper could do it, right?). But how
would you keep track of "code page faults"? Perhaps it's not worth
"preloading" only the parts of the executable (and linked libraries) that
are actually used, and it's much simpler to read everything? (Andrew
Morton suggested using mincore(2) for this in the past, IIRC.)

Although even if you optimize the disk layout and pre-read everything you
need, a big problem is the init scripts. I don't think it's init's fault;
handling absolutely _everything_ through scripts is not going to help,
especially when all^Wmost Linux systems use a shell which claims to be
"too big and too slow" in its own man page ;)
There are some shells (like zsh) which can "compile" scripts and generate
"bytecode"; I wonder if that would help (zsh seems to handle bash scripts,
so it may be interesting to try). Although, as many people have suggested,
Microsoft's "magic wand" to speed everything up could have been "let's
save a suspend image of the system just before detecting new non-critical
hardware and use it to boot the system". I guess it's not possible to
save/load suspend images to/from a filesystem?


So, a list of the steps needed (which doesn't mean I'm volunteering to do
all of them 8) could be:

1- Be able to keep track of what a process does over its whole life, or in
        its first N seconds (optimizing the system's startup is nice, but
        being able to speed up how fast openoffice loads when the system is
        already up would be even better). Could running a program under
        LD_PRELOAD=/something do this?
        
2- Get the on-disk info, port Andrew Morton's "move block" patch to 2.6,
        and use it to reorganize the disk layout periodically (especially
        when package managers install something, i.e. if people run mozilla
        very often, mozilla's files should be kept in the same place on the
        disk as all its libraries), using the statistics from step 1.

3- Create a tool which looks at all the data gathered in step 1 and
        "preloads" all the necessary data from disk optimally (i.e. using
        the path of one program, or of several if you want to run two
        programs at the same time); with the reorganization done in step 2
        it'd be even faster. Boot scripts would be just another user, and
        GNOME and KDE could use it too for single programs. If the tool
        detects that a program has changed (looking at the "changed date"
        field, for example) it could launch the process with the tools from
        step 1, so the statistics get regenerated.

Is there something crazy in this idea?
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
