Say you've got a plain ASCII text file, roughly 250,000 lines long. Let's
say it's a logfile. Suppose you want to pull out an arbitrary range of
lines, say lines 10,000 through 13,000. One way of doing this is:

<snip>
sed -n 10000,13000p foobar.txt
</snip>

Trouble is, the target systems I need to exec this on are ancient and
don't take very kindly to the io hammering this delivers. Can you suggest
a better way of achieving this?
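
The best I've come up with on my own is telling sed to quit once it's past
the end of the range, so it at least stops scanning after line 13,000
instead of reading on to the end of the file. This is just a sketch using
the same foobar.txt example; I haven't benchmarked it on these boxes:

<snip>
# print lines 10,000-13,000, then quit instead of reading to EOF
sed -n -e '10000,13000p' -e '13000q' foobar.txt
</snip>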

As these *are* logs I'm dealing with, I have already implemented rotation.
That helps, but I'm still facing performance issues. I'm specifically
looking for input on whether my sed-based approach can be improved.
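
The only other POSIX-only variation I've sketched out is a head/tail
pipeline; head stops reading after line 13,000, so the rest of the file
should never be touched, but I'd guess it behaves much like the
early-quitting sed above:

<snip>
# first 13,000 lines only, then keep from line 10,000 onward
head -n 13000 foobar.txt | tail -n +10000
</snip>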

Because of extremely limited network bandwidth, pulling the files off the
wire and processing them on more studly hardware isn't an option. Also, I
cannot install any binaries on these remote systems. The standard POSIX
toolkit is all I've got to work with. :-/

Many thanks, y'all!
--Trey
++----------------------------------------------------------------------------++
Trey Darley - Brussels
mobile: +32/494.766.080
++----------------------------------------------------------------------------++
Quis custodiet ipsos custodes?
++----------------------------------------------------------------------------++


