Prabu Ayyappan wrote:

From: Manoj <[EMAIL PROTECTED]>

I have a CSV file whose first column is unique; I am taking that column
as the hash key and the rest of the line as the hash value. I am opening
the file and adding each line to the hash as it is read. I have two
questions:
1) How do I find the performance and time taken by a Perl script?
2) Is there an optimal method for reading a CSV file into a hash table?

Note: This is only part of what the script does, and I am supposed to do
this without using any default modules.

1) How do I find the performance and time taken by a Perl script?

For benchmarking Perl scripts you can use the standard Benchmark module:

http://search.cpan.org/~rgarcia/perl-5.10.0/lib/Benchmark.pm

Benchmark.pm *should* have been installed as a standard module when Perl
was installed. Check the "Standard Modules" section of perlmodlib.pod:

perldoc perlmodlib
perldoc Benchmark
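
For example, you can take two Benchmark snapshots around the code you
want to measure and print the difference (a minimal sketch; the work in
the middle stands in for whatever your script actually does):

use strict;
use warnings;
use Benchmark;    # exports timediff() and timestr() by default

my $t0 = Benchmark->new;

# ... read the CSV file, build the hash, etc. ...

my $t1 = Benchmark->new;
print 'Took: ', timestr( timediff( $t1, $t0 ) ), "\n";

For a quick one-off measurement you can also run the whole script under
your shell's time command: time perl script.pl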


2) Is there an optimal method for reading a CSV file into a hash table?

There may be better approaches, but this reads the CSV into a hash:

use Data::Dumper;
open(INFILE, "<", "sample.csv") or die $!;
my %hsh;
%hsh = ( %hsh, (split(/,/, $_))[1,2] ) while ( <INFILE> );

That is a *very* inefficient way to populate a hash, as you are copying
the entire hash for every record in the file. It is better to add each
key and value individually:

my %hsh;
while ( <INFILE> ) {
    chomp;
    # limit the split to 2 fields so the first column is the key and
    # the rest of the line is the value
    my ( $key, $value ) = split /,/, $_, 2;
    $hsh{ $key } = $value;
}


print Dumper \%hsh;
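
If you want to see the difference, the same Benchmark module can compare
the two approaches directly with cmpthese(). This is only a rough
sketch: it assumes a readable sample.csv and re-reads the file on every
iteration, so the OS disk cache will dominate for small files:

use Benchmark qw(cmpthese);

cmpthese( -2, {    # run each variant for at least 2 CPU seconds
    copy_whole_hash => sub {
        open my $fh, '<', 'sample.csv' or die $!;
        my %h;
        %h = ( %h, ( split /,/ )[ 0, 1 ] ) while <$fh>;
        close $fh;
    },
    assign_each_key => sub {
        open my $fh, '<', 'sample.csv' or die $!;
        my %h;
        while ( <$fh> ) {
            chomp;
            my ( $k, $v ) = split /,/, $_, 2;
            $h{ $k } = $v;
        }
        close $fh;
    },
} );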


John
--
Perl isn't a toolbox, but a small machine shop where you
can special-order certain sorts of tools at low cost and
in short order.                            -- Larry Wall
