On Dec 8, 2003, at 9:30 PM, Bryan Harris wrote: [..]
[..]Sometimes perl isn't quite the right tool for the job...
% man sort
% man uniq
If you code it correctly (unlike the program at the URL above) then a
perl version will be more efficient and faster than using sort and uniq.
Please explain...
That's the last conclusion I thought anyone would be able to reach.
For many simple things 'sort -u' will suffice, which I presume was your argument. The problem, of course, is getting to what John asserts as 'code it correctly'.
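a minimal sketch of what 'code it correctly' might look like - dedup with a hash while reading, then sort only the surviving unique lines, rather than sorting the whole stream and weeding afterwards. The sub name and sample data are mine, not from the thread:

```perl
use strict;
use warnings;

# sort_unique: the moral equivalent of 'sort -u' done sanely in perl.
# the hash lookup is O(1) per line, so duplicates never reach the
# O(n log n) sort at all.
sub sort_unique {
    my %seen;
    $seen{$_}++ for @_;
    return sort keys %seen;
}

my @lines = ( "pearl\n", "perl\n", "pearl\n", "camel\n" );
print sort_unique(@lines);    # camel, pearl, perl - each once
```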
Oye - I finally used Stuart Clemmon's suggestion { thank you Stuart! } to peek at the code. OYE! I will defer to John as to which piece has him going OYE! more: the funkadelic at that URL, namely
my @sorted = sort <INFILE>;
and then a bunch of file IO as well...
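the slurp-then-shuffle at that URL can usually be collapsed into a single pass: a filehandle read in list context feeds sort directly, and one print writes the result. A hedged sketch - the sub name and filenames are placeholders of mine, not the URL's code:

```perl
use strict;
use warnings;

# sort_file: sort the lines of $infile into $outfile in one pass,
# with no intermediate arrays hanging around between the read,
# the sort, and the write.
sub sort_file {
    my ($infile, $outfile) = @_;
    open my $in,  '<', $infile  or die "open $infile: $!";
    open my $out, '>', $outfile or die "open $outfile: $!";
    print {$out} sort <$in>;    # list-context read feeds sort directly
    close $out or die "close $outfile: $!";
}
```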
But back to your side of the question: as you may have noticed from using 'sort', even when running 'sort -u' a bunch of problems can show up as the volume of data grows - it will lead to the creation of a bunch of temporary cache files on disk.
Ironically, uh, duh, given Tassilo's recent thumping of me for whining about academia - there are some ugly 'sorting algorithms' that have to be 'ugly' to be 'general enough', and they are, well, ugly.
You might want to get your hands on Knuth's volume 3 (Sorting and Searching) and crawl through the searching and sorting sections if you are really interested in some serious analysis of good ways and bad ways to think about sorting problems.
May I recommend that you peek at
perldoc -q sort
and then look at
@sorted = map  { $_->[0] }
          sort { $a->[1] cmp $b->[1] }
          map  { [ $_, uc( (/\d+\s*(\S+)/)[0] ) ] }
          @data;
and think about what that MIGHT mean were it to have been plugged into the process.
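to make that concrete, here is the same Schwartzian Transform run over some made-up data of mine - lines holding a count and a name, sorted case-insensitively by the name, with the key computed once per element instead of once per comparison:

```perl
use strict;
use warnings;

my @data = ( "3 Zebras", "12 apples", "7 Mangos" );

my @sorted =
    map  { $_->[0] }                           # unwrap the original string
    sort { $a->[1] cmp $b->[1] }               # compare the precomputed keys
    map  { [ $_, uc( (/\d+\s*(\S+)/)[0] ) ] }  # [ original, UPPERCASED KEY ]
    @data;

print "$_\n" for @sorted;    # 12 apples / 7 Mangos / 3 Zebras
```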
ciao drieux
---
--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
<http://learn.perl.org/> <http://learn.perl.org/first-response>