Robert Muir created LUCENE-8348:
-----------------------------------
Summary: Remove [Edge]NgramTokenizer min/max defaults, consistent
with Filter
Key: LUCENE-8348
URL: https://issues.apache.org/jira/browse/LUCENE-8348
Project: Lucene - Core
Issue Type: Task
Components: modules/analysis
Environment: LUCENE-7960 fixed a good deal of trappiness here for the
token filters: there are no longer ridiculous default min/max values such as
1,2. The javadocs were also enhanced to present a clear warning about using
large ranges: it seems to surprise people that min=small, max=huge eats up a
ton of resources, but it's really like creating (huge-small) separate n-gram
indexes, so of course it's expensive.
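The cost argument above can be sketched with plain arithmetic (this is an illustration, not Lucene code; the class and method names here are hypothetical): a term of length L yields L-n+1 n-grams of size n, so a [min, max] range emits roughly (max-min+1) overlapping n-gram streams.

```java
// Illustrative sketch of n-gram token counts (hypothetical helper, not part of Lucene).
public class NgramCost {
    /** Total n-grams of every size in [minGram, maxGram] for a term of length len. */
    static long gramCount(int len, int minGram, int maxGram) {
        long total = 0;
        for (int n = minGram; n <= maxGram; n++) {
            // A term of length len has len - n + 1 n-grams of size n (0 if n > len).
            total += Math.max(len - n + 1, 0);
        }
        return total;
    }

    public static void main(String[] args) {
        // A fixed size (min == max) stays linear in term length...
        System.out.println(gramCount(10, 3, 3));   // 8
        // ...while a wide range multiplies the work by roughly (max - min + 1).
        System.out.println(gramCount(10, 1, 10));  // 55
    }
}
```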
Finally, it keeps the typical, more efficient fixed n-gram case easy, rather
than forcing someone to write the unintuitive min=X, max=X range.
We should improve the tokenizers in the same way.
Reporter: Robert Muir
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]