JBallinger wrote:
> On Mar 14, 3:26 pm, [EMAIL PROTECTED] (Manoj) wrote:
Using Data::Dumper takes more time on my 10,000-line CSV file. This
answered a few of my queries, and the benchmark was a valuable
addition for me. Thanks.
2) Is there an optimal method for reading a CSV file and putting it
into a hash?
There may be better approaches, but this reads the CSV into a hash:
use Data::Dumper;
open(INFILE, "<", "sample.csv") or die $!;
my %hsh;
%hsh = ( %hsh, (split(/,/, $_))[1,2] ) while ( <INFILE> );
That is a *very* inefficient way to populate a hash: each assignment
flattens the entire hash into a list and rebuilds it, so the work per
record grows with the size of the hash and the whole loop is quadratic
in the number of records. Better to add the keys and values
individually:
my %hsh;
while ( <INFILE> ) {
    chomp;
    my ( $key, $value ) = split /,/;
    $hsh{$key} = $value;    # store one pair per record
}
print Dumper \%hsh;
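Since the thread mentions benchmarking, here is a minimal sketch using
the core Benchmark module to compare the two approaches. The sample
data is invented and held in memory rather than read from a file, and
the line count is kept small because the quadratic version slows down
fast:

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    # Invented sample data: 2,000 "key,value" lines standing in for the CSV file.
    my @lines = map { "key$_,value$_\n" } 1 .. 2_000;

    cmpthese( -2, {
        # The original approach: rebuild the whole hash for every record.
        copy_hash => sub {
            my %h;
            %h = ( %h, ( split /,/, $_ )[0, 1] ) for @lines;
        },
        # The suggested fix: one assignment per record.
        per_key => sub {
            my %h;
            for my $line (@lines) {
                chomp( my $copy = $line );
                my ( $k, $v ) = split /,/, $copy;
                $h{$k} = $v;
            }
        },
    } );

On any recent perl this should show per_key beating copy_hash by a
wide margin, and the gap widens as the line count grows.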
Suppose my CSV file has 5 columns: f id fa mo ge
Will my ($key, $value) = split /,/; still work?
Is there any other method that is more efficient?
A hash element can have only one key and one value, but that value can
be a reference to an anonymous array holding the remaining columns.
Something like this may do what you want:
my %hash;
while (<INFILE>) {
    chomp;
    my ($key, @vals) = split /,/;
    $hash{$key} = \@vals;   # @vals is fresh each pass, so every entry gets its own array
}
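As a quick illustration (the sample row here is invented, not from
your data), each hash value then holds the remaining columns:

    # Given an input line "k1,a,b,c,d", the loop above stores:
    #   $hash{k1} = [ 'a', 'b', 'c', 'd' ];
    print $hash{k1}[1], "\n";      # prints "b"
    print "@{ $hash{k1} }\n";      # prints "a b c d"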
HTH,
Rob