> Bryan Harris wrote:
>>
>>>> Sometimes perl isn't quite the right tool for the job...
>>>>
>>>> % man sort
>>>> % man uniq
>>>
>>> If you code it correctly (unlike the program at the URL above) then a
>>> perl version will be more efficient and faster than using sort and uniq.
>>
>> Please explain...
>>
>> That's the last conclusion I thought anyone would be able to reach.
>
> How about a little demo. The times posted are the fastest from ten runs
> of the same programs.
[stuff cut out]

> The "sort | uniq" version has to run two processes and pass the whole
> file through the pipe from one process to the next. The "sort -u"
> version has to sort the whole file first and then output only the
> unique values. The perl version uses a hash to store the unique values
> first and then outputs those values sorted. Depending on the number of
> duplicate values, the perl version will usually be faster, as it has to
> sort a smaller list.

I see! I just don't understand... I thought perl's memory management,
code interpretation, and the overhead of creating hashes and just
running would have taken far longer than sort. Heck, why don't they just
rewrite sort in perl if it's that much faster?

- B