Hi,

In article <[EMAIL PROTECTED]>, John W. Krahn wrote:

> Dan Muey wrote:
[...]
>> I currently have to do this via the command line:
>> 
>>  my @unique = qx(sort -u file.txt);
>> 
>> To remove duplicate lines from file.txt.
>> 
>> What would be the best way to do the same thing with perl instead of
>> calling an external program?
>> 
>> I can always assign each element of the array to a hash, I suppose;
>> that way there can only be one key for any given element.
>> 
>> I was hoping for a one liner though.
> 
> Ok, but it may be a bit long.  :-)
> 
> my $file = 'file.txt';
> 
> my @unique = do {
>     open my $fh, '<', $file or die "Cannot open $file: $!";
>     my %seen;
>     grep !$seen{$_}++, <$fh>
>     };

How much different is this:
 
my $file = 'file.txt';
{
   open my $fh, '<', $file or die "Cannot open $file: $!";
   my %seen;
   my @unique = grep {!$seen{$_}++} <$fh>;
}

(Or, what does 'do' do here that I, too, should do 'do'?) 

Hmmmm, looking at my block, maybe the answer is "it places @unique outside 
of the block so that it is still accessible (but keeps our throwaway 
variables 'local')"?
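
Testing it a bit, 'do BLOCK' apparently returns the value of the last
expression evaluated in the block, so the whole thing can sit on the
right-hand side of the assignment while %seen (and the filehandle) stay
private. A quick sketch, using a made-up @lines array instead of the file
just to see it work:

my @lines = ( "foo\n", "bar\n", "foo\n" );   # sample data, just for illustration

my @unique = do {
    my %seen;
    grep !$seen{$_}++, @lines;   # last expression = what the do block returns
};

print @unique;                               # prints foo, bar

Whereas in my plain block above, @unique itself is declared inside the
braces, so it disappears as soon as the block ends.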

BTW, doesn't 'sort -u' also sort the list?
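
If so, the %seen grep only drops duplicates and keeps the original line
order, so adding a sort (plain string comparison, not locale-aware the way
sort(1) may be) should come closer, I think:

my $file = 'file.txt';

my @unique = do {
    open my $fh, '<', $file or die "Cannot open $file: $!";
    my %seen;
    sort grep { !$seen{$_}++ } <$fh>;   # sort to mimic sort -u's output order
};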

-K
-- 
Kevin Pfeiffer
International University Bremen
