> OK, I had to try the two ways again to see how much difference it made. I
> created a random-contents fixed-field file, 14500 lines long x 80 columns
> wide, and tried processing the lines (using substr($_, …)) to
> break lines up into 4 sections, substitute based on a few patterns, and
> change a…
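The substr-based processing described above can be sketched in a small self-contained example. The field widths (10/30/20/20, totalling 80 columns) and the sample record are made up for illustration; the original poster's actual layout isn't shown in the thread:

```perl
use strict;
use warnings;

# A sample 80-column fixed-width record (field widths are hypothetical).
my $line = sprintf "%-10s%-30s%-20s%-20s",
    "AB123", "some description", "2001-06-13", "OK";

# Break the line into 4 sections with substr, as described above.
my $id     = substr($line,  0, 10);
my $desc   = substr($line, 10, 30);
my $date   = substr($line, 40, 20);
my $status = substr($line, 60, 20);

# Trim the space padding, then substitute based on a pattern.
s/\s+$// for ($id, $desc, $date, $status);
$status =~ s/^OK$/PASS/;

print "$id|$desc|$date|$status\n";   # AB123|some description|2001-06-13|PASS
```

Because the fields sit at fixed offsets, substr avoids any delimiter ambiguity, which is why it suits fixed-field files better than split.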
Steve Howard
-----Original Message-----
From: Jos Boumans [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, June 13, 2001 6:55 AM
To: Steve Howard
Cc: Stephen Henderson; [EMAIL PROTECTED]
Subject: Re: use of split command
I'd like to make a few adjustments to this code, and give you some things you
might want to consider using:
open (ACCS, "C:\\Perl\\BioPerl-0.7\\Seqdata\\Accession.txt")
    or die "can't open Accessions file ", $!;
# this will also produce the error returned by the system call... useful for
# figuring out what went wrong
If this is what you want:
while (<HANDLE>) { $string .= $_ }
you'll want to consider using this instead:
{ local $/; $in = <HANDLE> }
which will put the entire contents of HANDLE into $in (by undeffing $/, which
holds the line separator character - it will be restored once the block
exits)
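The slurp idiom above can be tried end to end; a minimal self-contained sketch, using an in-memory filehandle (a scalar reference) in place of a real file so it runs anywhere:

```perl
use strict;
use warnings;

# Sample "file" contents, held in a scalar for this sketch.
my $data = "line one\nline two\n";

my $in;
{
    open my $fh, '<', \$data or die "can't open: $!";
    local $/;        # undef the input record separator inside this block...
    $in = <$fh>;     # ...so a single read returns the whole file
    close $fh;
}                    # $/ is restored here, when the block exits

print length($in), "\n";   # 18 - both lines arrived in one string
```

The localized $/ never leaks out of the block, so line-by-line reads elsewhere in the program keep working normally.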
Cc: Stephen Henderson
Subject: Re: use of split command
I see someone has already told you about using foreach instead of the
for loop like you are doing, but I don't see an answer to your question
about using split. Here is an untested snippet to give an example (this
assumes a tab delimiter between the first and second column; use the
delimiter your file actually has):
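The snippet itself appears to have been lost from this message. A sketch of what a tab-delimited split inside the read loop might look like, with two made-up sample records standing in for Accession.txt (via an in-memory filehandle, so it runs without the real file):

```perl
use strict;
use warnings;

# Two sample tab-delimited records; the real data would come from Accession.txt.
my $data = "NM_000014\talpha-2-macroglobulin\n"
         . "NM_000015\tsample description two\n";
open my $accs, '<', \$data or die "can't open Accessions file: $!";

my %values;
while (my $line = <$accs>) {
    chomp $line;
    # Split on the first tab only: column 1 is the key, the rest is the value.
    my ($col1, $col2) = split /\t/, $line, 2;
    $values{$col1} = $col2;
}
close $accs;

print scalar(keys %values), "\n";   # 2
```

The LIMIT argument of 2 keeps any further tabs inside the second column instead of discarding everything after them.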
> I am trying to read a quite large file (ca 13000 lines) directly into an
> array (for speed)
Sorry, it's a bad idea.
One day your file will be 1 GB in size and @ets = <ACCS> will kill your PC
trying to load the whole gig into memory ...
while (<ACCS>) { .. }
is the best way for reading large files, I think
--- Stephen Henderson <[EMAIL PROTECTED]> wrote:
> I am trying to read a quite large file (ca 13000 lines) directly into
> an array (for speed)
> like this
>
> open (ACCS, "C:\\Perl\\BioPerl-0.7\\Seqdata\\Accession.txt") or die
> "can't open Accessions file";
> @ets=<ACCS>;
> $ets_count=@ets;
>
> the…
I don't imagine you pick up all that much extra speed by reading the whole
file into an array first (anyone?).
I would do something like
open(ACCS, "Accession.txt") || die "blah: $!";
my %values;
while (<ACCS>)
{
    chomp $_;
    my ($col1, $col2) = split(/\t/, $_); # you better be sure the delimiter is a tab
    $values{$col1} = $col2;
}
close(ACCS);
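If the array in the quoted question is only needed for a line count ($ets_count), the count can be kept while streaming, without ever holding 13000 lines in memory. A sketch, with sample lines standing in for Accession.txt via an in-memory filehandle:

```perl
use strict;
use warnings;

# Sample lines in place of the real Accession.txt, so the sketch is runnable.
my $data = "first accession\nsecond accession\nthird accession\n";
open my $accs, '<', \$data or die "can't open: $!";

# Count lines while streaming: only one line is in memory at a time,
# unlike @ets = <ACCS>, which loads every line at once.
my $ets_count = 0;
$ets_count++ while <$accs>;
close $accs;

print "$ets_count\n";   # 3
```

This keeps memory use flat no matter how large the file grows, which is the point of the "while over slurp" advice above.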