Any of python, perl, sh, bash, awk, sed, ... can do this. For example, with perl:
$ < /usr/share/dict/words shuf | head -n 2
smoothly
launderer
$ < /usr/share/dict/words perl -e '
    $a = 0; $b = 0;
    while (<>) {
        if (!$a && /smoothly/)  { $a = 1 }
        if (!$b && /launderer/) { $b = 1 }
        if ($a && $b) { print "Found all by line $..\n"; exit }
    }
    print "Did not find all.\n";'
Found all by line 85642.
$ wc -l < /usr/share/dict/words
104334
$ < /usr/share/dict/words perl -e '
    $a = 0; $b = 0;
    while (<>) {
        if (!$a && /smoXXXothly/) { $a = 1 }
        if (!$b && /launderer/)   { $b = 1 }
        if ($a && $b) { print "Found all by line $..\n"; exit }
    }
    print "Did not find all.\n";'
Did not find all.
$
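The same all-patterns check can also be done in plain shell with grep -q, which reports only an exit status. A minimal sketch; the temporary sample file below is a stand-in for /usr/share/dict/words:

```shell
# Sketch: succeed only if the file contains every pattern.
# The sample file is an assumption standing in for /usr/share/dict/words.
words=$(mktemp)
printf 'smoothly\nlaunderer\nother\n' > "$words"

# grep -q exits 0 on the first match, so && chains the conditions.
if grep -q 'smoothly' "$words" && grep -q 'launderer' "$words"; then
    echo "Found all."
else
    echo "Did not find all."
fi

rm -f "$words"
```

Each pattern may be a full regular expression, matching the original request.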

On Fri, Feb 27, 2026 at 1:50 AM Chris Green <[email protected]> wrote:
>
> I know and use grep extensively but this requirement doesn't quite fit
> grep.
>
> I want to search lots of diary/journal entries (which are just plain
> text files) for entries which have two or more specified strings in
> them.
>
> E.g. I'm looking for journal entries which have, say, the words
> 'green', 'water' and 'deep' in them. Ideally the strings searched for
> could be Regular Expressions (though simple command line type wild
> cards would suffice).
>
> Is there a tool out there that can do this?
>
> If not I can probably produce a bash script to do it using grep, i.e.
> use grep to get a list of files with the first word, grep that list of
> files for the second word, and so on.  However, if there's a ready-made
> tool for doing it I'd like to know about it.
