> > Dan Muey wrote:
> [...]
> >> I currently have to do this via the command line:
> >>
> >> my @unique = qx(sort -u file.txt);
> >>
> >> To remove duplicate lines from file.txt.
> >>
> >> What would be the best way to do the same thing with Perl
> >> instead of calling an external program?
> >>
> >> I can always assign each element of the array to a hash I suppose,
> >> that way there could only be one key that is whatever element.
> >>
> >> I was hoping for a one liner though.
> >
> > Ok, but it may be a bit long. :-)
> >
> > my $file = 'file.txt';
> >
> > my @unique = do {
> > open my $fh, '<', $file or die "Cannot open $file: $!";
> > my %seen;
> > grep !$seen{$_}++, <$fh>
> > };
>
> How much different is:
>
> my $file = 'file.txt';
> {
> open my $fh, '<', $file or die "Cannot open $file: $!";
> my %seen;
> my @unique = grep {!$seen{$_}++} <$fh>;
> }
>
> (Or, what does 'do' do here that I, too, should do 'do'?)
>
> Hmmmm, looking at my block, maybe the answer is "it places
> @unique outside
> of the block so that it is still accessible (but keeps our throwaway
> variables 'local')"?
Pretty much. I'd say see `perldoc -f do` if you're really curious.
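
A quick illustration of what `do` buys you here (made-up data, not from the thread): a `do { ... }` block returns the value of its last expression, so the lexicals stay private to the block while the result is assigned outside it.

```perl
use strict;
use warnings;

# %seen is scoped to the block; only the grep's
# result (the last expression) escapes into @unique.
my @unique = do {
    my %seen;
    grep { !$seen{$_}++ } qw(a b a c b);
};

print "@unique\n";    # a b c
```

With a bare `{ ... }` block instead, a `my @unique` declared inside would simply vanish when the block ends.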
>
> BTW, doesn't 'sort -u' also sort the list?
sort -u opens a file and prints the unique lines to STDOUT or another file,
but it is an external command: not every server will necessarily have a sort
command, and even if it does, -u may have a different meaning there or may
not exist at all.
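For completeness, a pure-Perl stand-in for `sort -u` (unique lines, sorted) can combine the `%seen` grep with `sort`. A minimal sketch, using an in-memory filehandle here just so it runs anywhere; with a real file you'd `open my $fh, '<', 'file.txt'` as in the examples above:

```perl
use strict;
use warnings;

# Sample data standing in for the contents of file.txt
my $data = "pear\napple\npear\nbanana\napple\n";
open my $fh, '<', \$data or die "Cannot open: $!";

my %seen;
# grep drops duplicates, sort orders the survivors --
# together the same result as `sort -u`, no external command needed
my @unique = sort grep { !$seen{$_}++ } <$fh>;
close $fh;

print @unique;    # apple banana pear (one per line)
```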
HTH Dan
--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]