As I usually take the straightforward route, I would start with grep. I
routinely have to search multiple files and would do


grep "the search string" *filename

The above assumes that the file names share a common pattern, perhaps with a date.

To make things manageable I would do

grep "the search string" *filename > newfile

Then I have the results in a file to look at.
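A minimal sketch of that workflow (the log file name and the search string here are placeholders I made up, not from the original thread):

```shell
# Stand-in for a set of date-stamped log files.
printf 'ok\nERROR: disk full\nok\n' > app-2016-03-31.log

# -H prints the file name and -n the line number, which helps when the
# glob expands to many files; redirect to a file to review later.
grep -Hn "ERROR" app-2016-03-31.log > matches.txt

cat matches.txt   # app-2016-03-31.log:2:ERROR: disk full
```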

I can hear a fellow lopsa-nj member cringing at this; since he routinely
searches terabytes of data, he uses Splunk.

You could use Perl or awk to handle this.
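As a sketch of the awk route (the sample file and pattern are hypothetical), awk can print a matching line plus the line after it, similar to grep's -A option:

```shell
# Stand-in log file for the example.
printf 'start\nERROR: oops\ndetail line\nend\n' > sample.log

# For each line matching /ERROR/, print it, then read and print the
# next line as well (roughly equivalent to grep -A1 ERROR).
awk '/ERROR/{print; getline; print}' sample.log
```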

It comes down to how fast you want the data, but a quick and dirty way is
to use grep.

On Thu, Mar 31, 2016 at 11:26 AM, Yves Dorfsman <y...@zioup.com> wrote:

> On 2016-03-31 09:15, Guus Snijders wrote:
> > The first thing that comes to mind is grep, with -A and -B (after/before)
> > parameters. Not sure how it will perform with such big datasets, but it's
> > probably a lot quicker than vi ;).
>
> If you are going to use grep, I strongly suggest that you take a look at ag
> (you should look at it if you don't know of it anyway!):
> https://github.com/ggreer/the_silver_searcher
>
> As for more sophisticated options: the commercial solutions (Splunk, Sumo
> Logic, etc.) work well and are easy to maintain but quickly become very
> expensive, while the FLOSS solutions aren't quite as good and can require
> a lot of maintenance (ELK is based on Elasticsearch, which can be a beast
> to maintain).
>
> --
> http://yves.zioup.com
> gpg: 4096R/32B0F416
>
> _______________________________________________
> Tech mailing list
> Tech@lists.lopsa.org
> https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
> This list provided by the League of Professional System Administrators
>  http://lopsa.org/
>



-- 
John J. Boris, Sr.
