I'm frequently searching CSV files with 20-30 columns, and when there's a
hit it can be hard to know what the columns are. An option to also print
the first line of a file (either always, or only if that file had a match
to the pattern) in addition to any hits would be nice.
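For a single file I can fake it with something along the lines of
{ head -n 1 data.csv; grep 'pattern' data.csv; }
(filename and pattern as placeholders), but that gets clumsy across many files.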
Thanks,
Dan
Daniel Green wrote:
> I'm frequently searching CSV files with 20-30 columns, and when there's a
> hit it can be hard to know what the columns are. An option to also print
> the first line of a file (either always, or only if that file had a match
> to the pattern) in addition to any hits would be nice.
Gawk 4.0.2 is 11 years old. Try timing the current version;
I'll bet it's faster. And it solves your problem NOW,
instead of waiting for a feature that the grep developers
aren't likely to add.
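Untested, but a one-liner along these lines ought to do what you're after
(the pattern is just a placeholder):
gawk 'FNR == 1 || /pattern/' *.csv
FNR restarts at 1 for each input file, so every file's header line prints
along with its matches.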
My two cents of course.
Arnold
On 8/21/23 13:37, arn...@skeeve.com wrote:
> it solves your problem NOW,
> instead of waiting for a feature that the grep developers
> aren't likely to add.
Yes, Grep already has a lot of features that in hindsight would have been
better addressed by saying "Use Awk".
That works, as well as the Perl version I've been using:
perl -ne 'print if ($. == 1 || /pattern/)'
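(One caveat, assuming multiple files on one command line: under -n, $. doesn't
reset between files, so only the first file's header would print unless ARGV is
closed at each eof, e.g.
perl -ne 'print if ($. == 1 || /pattern/); close ARGV if eof' *.csv
with *.csv as a placeholder; awk's FNR is per-file already.)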
But timings for a real-life example (3GB file with ~16m lines, CentOS 7)
show the problem:
grep (v2.20):   ~1.15s
perl (v5.36.1): ~4.48s
awk  (v4.0.2):  ~10.81s
Admittedly grep