Mr. Shawn H. Corey wrote:
On Sun, 2008-04-06 at 22:36 -0400, Richard Lee wrote:
I am trying to open a big file and go through it line by line while
limiting the resource usage on the system.
What is the best way to do it?
Does the code below read the entire file and store it in memory (not
good if that's the case)?
open(SOURCE, "/tmp/file") || die "not there: $!\n";
while (<SOURCE>) {
## do something
}
The above code reads the file one line at a time. It is recommended
that you use the three-argument form of open (see `perldoc -f open`).
open( SOURCE, '<', "/tmp/file" ) || die "cannot open /tmp/file: $!\n";
while (<SOURCE>) {
# do something
}
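A further refinement (not in the original reply) is a lexical filehandle, which closes itself when it goes out of scope and cannot collide with another bareword handle. Below is a minimal sketch; the sample file it creates and the line-counting stand in for "do something" and are only there so the example runs end to end.

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Make a small sample file just so the sketch is runnable end to end.
my ( $tmp, $path ) = tempfile();
print {$tmp} "alpha\nbeta\ngamma\n";
close $tmp;

# The pattern itself: lexical filehandle, three-argument open,
# and an error check on close as well as on open.
open my $fh, '<', $path or die "cannot open $path: $!\n";
my $count = 0;
while ( my $line = <$fh> ) {
    chomp $line;
    $count++;    # do something with $line here
}
close $fh or die "cannot close $path: $!\n";
print "$count lines\n";
```

Reading with `my $line = <$fh>` in the loop condition also avoids clobbering the global `$_`, which matters if the loop body calls other code.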
Some time ago I saw something like the code below, which looked like it
was reading the file line by line without storing it all in memory.
I just cannot remember the syntax exactly.
open(SOURCE, " /tmp/file |") || die "not there: $!\n";
while (<>) {
## do something
}
This syntax is used for reading the output of another command. It is
not recommended because your program would not be portable across
operating systems. But sometimes you have no choice.
open( SOURCE, '-|', "command" ) || die "cannot pipe from command: $!\n";
while (<SOURCE>) {
# do something
}
Also see `perldoc IPC::Open2` and `perldoc IPC::Open3`
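For the case where you need the command's stdout and stderr separately, here is a minimal IPC::Open3 sketch (my addition, not part of the original reply). Running perl itself via `$^X` keeps it portable; the one-liner is just a stand-in for a real command.

```perl
use strict;
use warnings;
use IPC::Open3;
use Symbol qw(gensym);

# open3 autovivifies the stdin/stdout handles, but stderr must be a
# pre-created glob, or it gets merged into stdout.
my $err = gensym;
my $pid = open3( my $in, my $out, $err,
                 $^X, '-e', 'print "out\n"; warn "err\n"' );
close $in;    # nothing to send to the child

my $stdout = do { local $/; <$out> };
my $stderr = do { local $/; <$err> };
waitpid $pid, 0;    # reap the child

print "stdout: $stdout";
print "stderr: $stderr";
```

Note that reading both streams to completion one after the other is fine for small outputs like this, but a chatty command can deadlock that way; `perldoc IPC::Open3` discusses using select or IO::Select for that case.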
Can this be optimized in any way?
open( my $source, '-|', "tail -100000 /server/server.log" )
    or die "cannot run tail: $!\n";
Is this the best way to get a large portion of the file (the file
itself is over 20 times that size) into a file handle?
--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
http://learn.perl.org/