Strictly speaking, -F describes how the program should interpret the input pattern(s): it turns off the special meaning of the regex metacharacters (e.g. '^', '$', '+', '?', '.') in the pattern, so each pattern is matched as a literal byte string.
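As a tiny (and deliberately over-simplified) illustration of what that buys, fixed-string matching reduces to a plain substring search; the function literal_match below is invented for this sketch, and grep's real fixed-string path in kwset.c is far more elaborate:

#include <stdio.h>
#include <string.h>

/* Invented illustration, not grep's code: with -F the pattern is a
   plain byte string, so an ordinary substring search suffices.  */
static int
literal_match (const char *line, const char *pattern)
{
  return strstr (line, pattern) != NULL;
}

int
main (void)
{
  /* '$' and '.' are ordinary bytes here, not regex operators.  */
  printf ("%d\n", literal_match ("total: $5.00", "$5.00"));  /* 1 */
  printf ("%d\n", literal_match ("total: x5x00", "5.00"));   /* 0 */
  return 0;
}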
The word-boundary flag (-w) then adds back some of the non-trivial cases that could be ignored when -F alone was in effect.

Second-guessing the algorithm that's selected underneath is fraught with hazards; there are already all sorts of heuristics in place. For example, the simple Tuned Boyer-Moore search in kwset.c was modified a few versions ago by Norihiro to monitor the progress being made by the T-B-M search: if progress was poor, the loop tried memchr and/or memchr2 to see if it could skip bytes more quickly. If a candidate was found, the T-B-M search resumed, but always with the heuristic looking over its shoulder. (A sketch of that shape of loop is in the P.S. below.)

There's also a heuristic at the next higher level, in dfa.c: it scans the pattern for "must-have" strings ("musts"), and then simply chooses the longest such string. The longest string might have a speed advantage in the T-B-M skip search (more bytes skipped per test), but for some pathological data the skip size might always be small, whereas choosing one of the shorter strings might lead to larger skips per test, and so run faster.

cheers,

sur-behoffski (Brenton Hoff)
Programmer, Grouse Software
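P.S. To make the kwset.c fallback concrete, here is a rough sketch of that kind of progress-monitored skip loop, written as a plain Horspool-style search. This is not grep's actual code: the names skip_search, PROGRESS_WINDOW and PROGRESS_THRESHOLD are invented for illustration, and the real machinery in kwset.c is considerably more refined.

#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define PROGRESS_WINDOW    64  /* shifts to sum before reviewing progress */
#define PROGRESS_THRESHOLD 3   /* minimum acceptable average shift length */

/* Horspool-style skip search for pat[0..m-1] inside text[0..n-1],
   falling back to memchr when the skip loop makes poor progress.  */
static const char *
skip_search (const char *text, size_t n, const char *pat, size_t m)
{
  size_t delta[256], i, j, d, shifts = 0, tests = 0;

  if (m == 0 || m > n)
    return NULL;

  /* Shift table: distance from the last occurrence of each byte in
     pat[0..m-2] to the end of the pattern; m for absent bytes.  */
  for (j = 0; j < 256; j++)
    delta[j] = m;
  for (j = 0; j + 1 < m; j++)
    delta[(unsigned char) pat[j]] = m - 1 - j;

  i = 0;                      /* current alignment of the pattern */
  while (i + m <= n)
    {
      if (memcmp (text + i, pat, m) == 0)
        return text + i;

      d = delta[(unsigned char) text[i + m - 1]];
      shifts += d;
      i += d;

      if (++tests == PROGRESS_WINDOW)
        {
          if (shifts / tests < PROGRESS_THRESHOLD && i + m <= n)
            {
              /* Poor progress: let memchr race ahead to the next
                 occurrence of the pattern's final byte, then resume
                 the skip loop at that candidate alignment.  */
              const char *p = memchr (text + i + m - 1, pat[m - 1],
                                      n - (i + m - 1));
              if (p == NULL)
                return NULL;
              i = (size_t) (p - text) - (m - 1);
            }
          shifts = tests = 0;  /* keep looking over its shoulder */
        }
    }
  return NULL;
}

int
main (void)
{
  const char *hay = "abababababababab the needle is here";
  const char *hit = skip_search (hay, strlen (hay), "needle", 6);

  printf ("%s\n", hit ? hit : "(no match)");
  return 0;
}

(The real code also tries memchr2, which hunts for either of two candidate bytes, and it keeps re-checking progress after every resume; the counter reset above is a crude stand-in for that.)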