On Mon, Jul 21, 2014 at 04:01:05PM -0600, Sherman Willden wrote:
>
> I have several files and I attached one of them. I want to sort the file
> and remove duplicate lines. The file is a list of key phrases to search the
> internet for. These are long lines so I don't know if this will work.
Thank you all. I got off of Windows 7 and went to my Ubuntu. Great stuff.
Now I have to go back to Windows 7 to take the course.
Sherman
On Mon, Jul 21, 2014 at 4:14 PM, Paul Johnson wrote:
> On Mon, Jul 21, 2014 at 10:05:29PM +, Danny Wong (dannwong) wrote:
>
> > Do it the perl way, hash it.
On Mon, Jul 21, 2014 at 10:05:29PM +, Danny Wong (dannwong) wrote:
> Do it the perl way, hash it.
Or do it the unix way:
$ sort -u filename
The -u means unique.
You also have some lines that differ only in case, so you might prefer:
$ sort -uf filename
The -f means fold, which folds lower case to upper case when comparing, so
lines that differ only in case count as duplicates.
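If you do want it the Perl way Danny suggests, a hash does it in a few
lines. A quick untested sketch (lc gives you the same case folding as -f,
and unlike sort it keeps the lines in their original order):

  #!/usr/bin/env perl
  use strict;
  use warnings;

  # Print a line only the first time we see it; %seen tracks what has
  # already been printed, keyed on the lowercased line.
  my %seen;
  while (my $line = <>) {
      print $line unless $seen{ lc $line }++;
  }

Save it as, say, dedupe.pl (name's up to you) and run:
perl dedupe.pl filename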
On Mon, 21 Jul 2014 16:01:05 -0600
Sherman Willden wrote:
> I have several files and I attached one of them. I want to sort the
> file and remove duplicate lines.
If you're running bash:
sort -u file > output_file
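Or the classic Perl one-liner, if you'd rather stay in Perl. It keeps the
first occurrence of each line in its original order; key the hash on
lc $_ instead if you want to ignore case:

  perl -ne 'print unless $seen{$_}++' file > output_file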
--
Don't stop where the ink does.
Shawn
Date: Monday, July 21, 2014 at 3:01 PM
To: beginners@perl.org
Subject: Please check my logic
I checked CPAN for remove duplicate lines and only found Code::CutNPaste
which doesn't sound like what I want. So I will build what I want although
I'm sure it's out there somewhere.
I have several files and I attached one of them. I want to sort the file
and remove duplicate lines. The file is a list of key phrases to search the
internet for.