On Mon, Apr 21, 2008 at 11:18 AM, Gunnar Hjalmarsson <[EMAIL PROTECTED]> wrote:
> Mr. Shawn H. Corey wrote:
>
> > The fastest way to do this is to read every line into Perl and
> > disregard everything not relevant.
> >
>
> Don't think so.
>
> I did a benchmark on a text file with 100,000 lines, where I'm actually
> only interested in the 5 last lines.
Richard Lee wrote:
Gunnar Hjalmarsson wrote:
Richard Lee wrote:
I don't have root access, and the compiler (gcc on a Sun machine)
messed up my installation. I tried to install Expect and it didn't
work out. Will gather more information.
Start here:
perldoc -q "own module"
thanks..
Gunnar Hjalmarsson wrote:
Richard Lee wrote:
Unfortunately however, the system I am on, I cannot install any
modules other than standard modules that already come with the perl.
Assuming you have at least FTP access, you are wrong. Which are the
restrictions?
I guess even that, I should lo
Mr. Shawn H. Corey wrote:
The fastest way to do this is to read every line into Perl and disregard
everything not relevant.
Don't think so.
I did a benchmark on a text file with 100,000 lines, where I'm actually
only interested in the 5 last lines. Except for Tie::File, which proved
to be aw
beyhan wrote:
The key is:
use Tie::File
"Tie::File" represents a regular text file as a Perl array. Each
element in the array corresponds to a record in the file. The first
line of the file is element 0 of the array; the second line is
element 1, and so on.
The file is not loaded into memory, so this will work even for
gigantic files.
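Since Richard can't install modules, it's worth noting that Tie::File has shipped with the Perl core since 5.8, so beyhan's suggestion needs no installation. A minimal sketch (the sub name is mine; the path would be the OP's log file):

```perl
use strict;
use warnings;
use Tie::File;
use Fcntl 'O_RDONLY';

# Return the last five records of a file without slurping it.
sub last_five {
    my ($path) = @_;
    tie my @lines, 'Tie::File', $path, mode => O_RDONLY
        or die "can't tie $path: $!";
    # Negative indices count from the end; records are fetched on demand.
    my @tail = @lines >= 5 ? @lines[-5 .. -1] : @lines;
    untie @lines;
    return @tail;
}

# e.g. my @tail = last_five('/server/server.log');
```

Gunnar's benchmark in this thread singles out Tie::File for poor performance on this task, so measure before relying on it.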
Richard Lee wrote:
Unfortunately however, the system I am on, I cannot install any modules
other than standard modules that already come with the perl.
Assuming you have at least FTP access, you are wrong. Which are the
restrictions?
--
Gunnar Hjalmarsson
Email: http://www.gunnar.cc/cgi-bin/
On Sun, 2008-04-20 at 20:22 -0400, Chas. Owens wrote:
> No, you obviously don't know how it is implemented. It seeks to the
> end of the file and reads it into a buffer where it searches for line
> endings. It does not read the entire file until you reach the first
> line.
>
That's not the poin
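What Chas describes can be done by hand with nothing but core seek and read; a sketch (the sub name and block size are my own, not from the thread):

```perl
use strict;
use warnings;
use Fcntl qw(SEEK_SET);

# Read the last $n lines of $path by walking backward in blocks,
# without ever reading the whole file.
sub last_lines {
    my ($path, $n, $blocksize) = @_;
    $blocksize ||= 8192;
    open my $fh, '<', $path or die "can't open $path: $!";
    my $pos = -s $fh;            # start at the end of the file
    my $buf = '';
    while ($pos > 0) {
        my $len = $pos >= $blocksize ? $blocksize : $pos;
        $pos -= $len;
        seek $fh, $pos, SEEK_SET or die "seek: $!";
        read $fh, my $chunk, $len;
        $buf = $chunk . $buf;
        # Stop once the buffer holds more newlines than lines we need.
        last if ($buf =~ tr/\n//) > $n;
    }
    close $fh;
    my @lines = split /\n/, $buf;          # newlines are stripped here
    splice @lines, 0, @lines - $n if @lines > $n;
    return @lines;
}
```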
On Sun, 2008-04-20 at 18:10 -0400, Richard Lee wrote:
> There is no way to read say last 10 MB of the file or something? It's
> very surprising why no such thing exists..
>
No, it's not.
It is a text file, not a file of fixed-size records. There is no way to
compute where the lines of text start.
On Sun, 2008-04-20 at 17:02 -0400, Richard Lee wrote:
> Chas. Owens wrote:
> > On Sun, Apr 20, 2008 at 1:49 PM, Richard Lee <[EMAIL PROTECTED]> wrote:
> > snip
> >
> >> can this be optimized in any way?
> >> open (my $source, '-|', "tail -10 /server/server.log")
> >>
> >> is this the best
From: Richard Lee <[EMAIL PROTECTED]>
> Mr. Shawn H. Corey wrote:
> > It still has to go through the entire file and mark the offsets to the
> > start of every line.
> >
> > The best way to do this is just to bite the bullet and do it.
>
> There is no way to read say last 10 MB of the file or some
On Sun, Apr 20, 2008 at 5:12 PM, David Moreno <[EMAIL PROTECTED]> wrote:
> Excerpts from Richard Lee's message of Sun Apr 20 17:02:58 -0400 2008:
>
> > This looks very useful.
> >
> > Unfortunately however, the system I am on, I cannot install any modules
> > other than standard modules that alr
On Sun, Apr 20, 2008 at 5:55 PM, Mr. Shawn H. Corey
<[EMAIL PROTECTED]> wrote:
snip
> Sadly, even ReadBackwards in no magic bullet. (And BTW, it should be
> ReadBackward.)
snip
No, it is File::ReadBackwards. If you are complaining about the
regionalism "backwards", well I bet I can find you us
Richard Lee wrote:
Mr. Shawn H. Corey wrote:
Sadly, even ReadBackwards in no magic bullet. (And BTW, it should be
ReadBackward.)
It still has to go through the entire file and mark the offsets to the
start of every line.
The best way to do this is just to bite the bullet and do it.
There i
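As Chas points out elsewhere in the thread, File::ReadBackwards does not scan the whole file: it seeks to the end and hands lines back last-to-first. A usage sketch (the sub name is mine; the module is on CPAN, not in the core distribution):

```perl
use strict;
use warnings;

# Collect the last $n lines of $path, oldest-first, using
# File::ReadBackwards (a CPAN module -- not in the core distribution).
sub last_n_backwards {
    my ($path, $n) = @_;
    require File::ReadBackwards;
    my $bw = File::ReadBackwards->new($path)
        or die "can't open $path: $!";
    my @tail;
    while (@tail < $n and defined(my $line = $bw->readline)) {
        unshift @tail, $line;    # readline returns the last line first
    }
    return @tail;
}
```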
Excerpts from Richard Lee's message of Sun Apr 20 17:02:58 -0400 2008:
> This looks very useful.
>
> Unfortunately however, the system I am on, I cannot install any modules
> other than standard modules that already come with the perl.
> But I will try this at my own system.
Well, take a deeper
On Sun, 2008-04-20 at 13:49 -0400, Richard Lee wrote:
> can this be optimized in any way?
> open (my $source, '-|', "tail -10 /server/server.log")
>
> is this the best way to get a large portion (well, the file itself is
> over 20 times that) of the file into a file handle?
>
This will not optimize process
On Sun, Apr 20, 2008 at 1:49 PM, Richard Lee <[EMAIL PROTECTED]> wrote:
snip
> can this be optimized in any way?
> open (my $source, '-|', "tail -10 /server/server.log")
>
> is this the best way to get a large portion (well, the file itself is
> over 20 times that) of the file into a file handle?
snip
Depe
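Chas's answer is cut off above, but independent of whether tail is the right tool, the OP's open can at least use the list form, which skips the shell entirely (the helper name is mine):

```perl
use strict;
use warnings;

# Run tail without involving a shell: each argument is passed as-is,
# so filenames with spaces or metacharacters are safe.
sub tail_lines {
    my ($path, $n) = @_;
    open my $fh, '-|', 'tail', "-$n", $path
        or die "can't run tail: $!";
    my @lines = <$fh>;
    close $fh;
    return @lines;
}

# e.g. my @last10 = tail_lines('/server/server.log', 10);
```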
Mr. Shawn H. Corey wrote:
On Sun, 2008-04-06 at 22:36 -0400, Richard Lee wrote:
I am trying to open a big file and go through line by line while
limiting the resource on the system.
What is the best way to do it?
Does below read the entire file and store them in memory(not good if
that's t
def a typo.. sorry about that
No problem at all. Just checking in case the PHP question was
missed or something. I know all about typos. I keep typing "funeral"
instead of "wedding" for June 28th.
LOL. That was a coffee on the monitor minute.
Congrats and good luck!
Thanks for the l
On Mon, Apr 7, 2008 at 11:44 AM, Richard Lee <[EMAIL PROTECTED]> wrote:
> Daniel Brown wrote:
> >
> >Was there a reason this was sent to the PHP list as well? Maybe
> > just a typo?
> >
> def a typo.. sorry about that
>
No problem at all. Just checking in case the PHP question was
miss
On Sun, Apr 6, 2008 at 10:36 PM, Richard Lee <[EMAIL PROTECTED]> wrote:
> I am trying to open a big file and go through line by line while limiting
> the resource on the system.
> What is the best way to do it?
>
> Does below read the entire file and store them in memory(not good if that's
> the
Daniel Brown wrote:
On Sun, Apr 6, 2008 at 10:36 PM, Richard Lee <[EMAIL PROTECTED]> wrote:
I am trying to open a big file and go through line by line while limiting
the resource on the system.
What is the best way to do it?
Does below read the entire file and store them in memory(not good
On Sun, 2008-04-06 at 22:36 -0400, Richard Lee wrote:
> I am trying to open a big file and go through line by line while
> limiting the resource on the system.
> What is the best way to do it?
>
> Does below read the entire file and store them in memory(not good if
> that's the case)..
>
> open
I am trying to open a big file and go through it line by line while
limiting the resource use on the system.
What is the best way to do it?
Does the below read the entire file and store it in memory (not good if
that's the case)?
open(SOURCE, "/tmp/file") || die "not there: $!\n";
while (<SOURCE>) {
    ## do something with $_
}
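To answer the question directly: no. The while (<SOURCE>) loop reads one line per iteration, so memory is bounded by the longest line, not the file size. The same loop in the modern spelling, with a lexical filehandle and three-arg open (the sub name and return value are mine):

```perl
use strict;
use warnings;

# Process a file line by line; only one line is held in memory at a time.
sub process_file {
    my ($path) = @_;
    open my $fh, '<', $path or die "not there: $!";
    my $count = 0;
    while (my $line = <$fh>) {
        # do something with $line here
        $count++;
    }
    close $fh;
    return $count;    # lines seen, just to show the loop ran
}
```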