Thanks. I'll take a look at perldoc perldebguts.
Just to provide some context here: my script takes an MSDOS/Windows (columnar) text file and converts it to xBASE .DBF format, for reading in dBASE IV for Unix.

I don't use any modules; basically I manipulate the text file and cat it onto the end of a .DBF file header, and then update the header's record count. (Well, actually, I update the record count first. That way, if I botch up the append, all xBASE file readers will gag on the table, and at least we'll all know something went wrong.)
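In case it helps anyone doing something similar, here's a rough sketch of the header-update step. The .DBF header keeps its record count as a 32-bit little-endian integer at bytes 4..7, which `pack 'V'` writes directly. The function name and file below are illustrative, not my actual script:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical helper: overwrite the 32-bit little-endian record count
# stored at bytes 4..7 of an xBASE .DBF header, before appending the
# converted records onto the end of the file.
sub set_dbf_record_count {
    my ($dbf, $count) = @_;
    open my $fh, '+<', $dbf or die "open $dbf: $!";
    binmode $fh;
    seek $fh, 4, 0 or die "seek $dbf: $!";   # record count lives at offset 4
    print {$fh} pack 'V', $count;            # 'V' = unsigned 32-bit, little-endian
    close $fh or die "close $dbf: $!";
}
```

Writing the new count first means a crashed append leaves a count/body mismatch that readers will complain about, rather than a silently short table.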
This project falls into the "we do this infrequently on a lightly used box, so let's show how fast and clever we are" bin.
For a file of slightly less than 400,000 records, dBASE IV took 91 secs to make the conversion. Perl 5.6.1 took 7 secs, using the above method. And the camel book is always demurring and saying that Perl isn't that fast...

"Faster is always better. It gives us more time to correct our mistakes!"
Also, thanks to Peter and smoot for steering me to the special variable $/, which in fact answered my question as to how I can restrain Perl from eating too much :)
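For the archives, the trick with $/ is that assigning it a *reference to an integer* makes readline return fixed-size chunks instead of lines, so a huge input never has to fit in memory at once. A small sketch (the 128-byte record size and demo file are made up for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Create a small demo input: 300 bytes of sample data.
open my $out, '>', 'demo.txt' or die "open: $!";
print {$out} 'A' x 300;
close $out;

local $/ = \128;                         # each <> now returns at most 128 bytes
open my $in, '<', 'demo.txt' or die "open: $!";
binmode $in;
my @chunks;
push @chunks, $_ while <$in>;            # reads 128, 128, then the final 44 bytes
close $in;
unlink 'demo.txt';
printf "%d chunks, last one %d bytes\n", scalar @chunks, length $chunks[-1];
```

Setting the chunk size to the fixed record width of the columnar input would let the loop process exactly one record per read.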