"Paddy" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> If the log has a lot of repeated lines in its original state then
> running uniq twice, once up front to reduce what needs to be sorted,
> might be quicker?
>
>  uniq log_file | sort | uniq | wc -l
>
> - Pad.
>

Why would the second run of uniq remove any lines that the first pass
didn't already remove?
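
(For what it's worth, uniq only collapses *adjacent* duplicate lines, which
is presumably why the pipeline sorts in between the two passes -- a quick
sanity check with a made-up three-line input:)

```shell
# uniq only removes *adjacent* duplicates, so without a sort the two
# non-adjacent 'a' lines both survive:
printf 'a\nb\na\n' | uniq | wc -l         # 3 lines remain
# After sort groups the duplicates together, uniq can collapse them:
printf 'a\nb\na\n' | sort | uniq | wc -l  # 2 unique lines
```

So the leading uniq can only help if the raw log already contains runs of
identical consecutive lines; it shrinks what sort has to handle but can't
deduplicate the whole file on its own.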

For that matter, if this is a log file, won't every line have a timestamp,
making duplicates extremely unlikely?

-- Paul


-- 
http://mail.python.org/mailman/listinfo/python-list