Hi,
Thanks for that.
I am collecting /etc/passwd files from a large number of Unix systems and
generating a list of UIDs. From this list, I want to pick the next
available UID.
When I generate the list, there will be a lot of duplicate UIDs, which I
want to get rid of.
I hope that explains.
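In case it helps, here is a minimal sketch of the whole thing in Perl.
The directory of collected files (passwd-files/) and the lowest UID to
hand out (1000) are assumptions; adjust both for your site.

#!/usr/bin/perl -w
use strict;

my %seen;   # hash keys collapse duplicate UIDs for us

# Pull the UID (third colon-separated field) out of every collected file.
foreach my $file (glob 'passwd-files/*') {
    open PW, "< $file" or die "can't open $file: $!";
    while (<PW>) {
        next if /^\s*$/;             # skip blank lines
        my $uid = (split /:/)[2];    # passwd format: name:passwd:UID:...
        $seen{$uid} = 1 if defined $uid and $uid =~ /^\d+$/;
    }
    close PW;
}

# Count up from the lowest UID we assign until we find a free one.
my $next = 1000;
$next++ while exists $seen{$next};
print "next available UID: $next\n";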
ta,
>From: Casey West <[EMAIL PROTECTED]>
>To: "M.W. Koskamp" <[EMAIL PROTECTED]>
>CC: [EMAIL PROTECTED], cherukuwada subrahmanyam <[EMAIL PROTECTED]>,
>[EMAIL PROTECTED]
>Subject: Re: eliminating duplicate lines in a file
>Date: Wed, 2 May 2001 12:45:41 -0400
>
>On Wed, May 02, 2001 at 07:39:03PM +0200, M.W. Koskamp wrote:
>:
>: ----- Original Message -----
>: From: Paul <[EMAIL PROTECTED]>
>: To: cherukuwada subrahmanyam <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
>: Sent: Wednesday, May 02, 2001 7:08 PM
>: Subject: Re: eliminating duplicate lines in a file
>:
>:
>: >
>: > --- cherukuwada subrahmanyam <[EMAIL PROTECTED]> wrote:
>: > > Hi,
>: > > I am reading a flat text file of 100000 lines. Each line has at
>: > > most 10 characters of data.
>: > > I want to eliminate duplicate lines and blank lines from that file,
>: > > i.e. something like sort -u in Unix.
>: >
>: > Got plenty of memory? =o)
>: >
>: > open IN, $file or die $!;
>: > my %uniq;
>: > while (<IN>) {
>: >     next unless /\S/;   # drop blank lines, per the question
>: >     $uniq{$_}++;        # duplicate lines collapse into one hash key
>: > }
>: > print sort keys %uniq;
>: >
>: Or how about this one?
>:
>: open FH, "lines.txt" or die $!;
>: my %uniq;
>: map { $uniq{$_} = 1 and print $_ unless $uniq{$_} } <FH>;
>:
>: :o))
>
>While this is fun and amusing, it's not very helpful. At least
>provide an easy-to-understand explanation of what your code is doing.
>
>Casey West
>
>--
>"Is forbitten to steal hotel towels please. If you are not person to
>do such thing is please not to read notis."
> --In a Tokyo Hotel