[ https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15885144#comment-15885144 ]
Amrit Sarkar edited comment on LUCENE-7705 at 2/27/17 5:06 AM:
---------------------------------------------------------------
I made some adjustments, including removing maxTokenLen from the
LowerCaseFilterFactory init, and added thorough test cases for the tokenizers
at the Solr level: TestMaxTokenLenTokenizer.java
{noformat}
modified: lucene/analysis/common/src/java/org/apache/lucene/analysis/core/KeywordTokenizerFactory.java
modified: lucene/analysis/common/src/java/org/apache/lucene/analysis/core/LetterTokenizer.java
modified: lucene/analysis/common/src/java/org/apache/lucene/analysis/core/LetterTokenizerFactory.java
modified: lucene/analysis/common/src/java/org/apache/lucene/analysis/core/LowerCaseTokenizer.java
modified: lucene/analysis/common/src/java/org/apache/lucene/analysis/core/LowerCaseTokenizerFactory.java
modified: lucene/analysis/common/src/java/org/apache/lucene/analysis/core/UnicodeWhitespaceTokenizer.java
modified: lucene/analysis/common/src/java/org/apache/lucene/analysis/core/WhitespaceTokenizer.java
modified: lucene/analysis/common/src/java/org/apache/lucene/analysis/core/WhitespaceTokenizerFactory.java
modified: lucene/analysis/common/src/java/org/apache/lucene/analysis/util/CharTokenizer.java
new file: lucene/analysis/common/src/test/org/apache/lucene/analysis/core/TestKeywordTokenizer.java
modified: lucene/analysis/common/src/test/org/apache/lucene/analysis/core/TestRandomChains.java
modified: lucene/analysis/common/src/test/org/apache/lucene/analysis/core/TestUnicodeWhitespaceTokenizer.java
modified: lucene/analysis/common/src/test/org/apache/lucene/analysis/util/TestCharTokenizers.java
new file: solr/core/src/test-files/solr/collection1/conf/schema-tokenizer-test.xml
new file: solr/core/src/test/org/apache/solr/util/TestMaxTokenLenTokenizer.java
{noformat}
I think we have covered everything; all tests pass.
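To illustrate, a rough usage sketch at the Lucene level (this assumes the maxTokenLen constructor the patch introduces; treat the exact signature as provisional):
{code:java}
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class MaxTokenLenDemo {
  public static void main(String[] args) throws IOException {
    // Hypothetical: 512 instead of the old hard-coded ~256-char cap.
    Tokenizer tok = new WhitespaceTokenizer(512);
    CharTermAttribute term = tok.addAttribute(CharTermAttribute.class);
    // "shortToken" followed by a 300-char run of 'a's
    tok.setReader(new StringReader("shortToken " + new String(new char[300]).replace('\0', 'a')));
    tok.reset();
    while (tok.incrementToken()) {
      // The 300-char run now comes out as a single token instead of
      // being broken up at the old default limit.
      System.out.println(term.length() + ": " + term);
    }
    tok.end();
    tok.close();
  }
}
{code}
The factories presumably expose the same setting as a maxTokenLen attribute in the schema, which is what the new Solr-level test and schema-tokenizer-test.xml exercise.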
> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length
> ---------------------------------------------------------------------------------------------
>
> Key: LUCENE-7705
> URL: https://issues.apache.org/jira/browse/LUCENE-7705
> Project: Lucene - Core
> Issue Type: Improvement
> Reporter: Amrit Sarkar
> Assignee: Erick Erickson
> Priority: Minor
> Attachments: LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch
>
>
> SOLR-10186
> [~erickerickson]: Is there a good reason we hard-code a 256-character limit
> for CharTokenizer? Changing this limit currently requires copy/pasting
> incrementToken into a new class, since incrementToken is final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but
> doing so takes code rather than a schema setting. For KeywordTokenizer this is
> Solr-only. For the CharTokenizer classes (WhitespaceTokenizer,
> UnicodeWhitespaceTokenizer, and LetterTokenizer) and their factories, it would
> take adding a constructor to the base class in Lucene and using it in the
> factory.
> Any objections?
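For anyone following along, here is a rough sketch of what the factory side of that could look like (the class name is hypothetical and the committed code may differ; the args/getInt idiom is the standard analysis-factory pattern):
{code:java}
import java.util.Map;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.util.TokenizerFactory;
import org.apache.lucene.util.AttributeFactory;

public class ConfigurableWhitespaceTokenizerFactory extends TokenizerFactory {
  private final int maxTokenLen;

  public ConfigurableWhitespaceTokenizerFactory(Map<String, String> args) {
    super(args);
    // Read maxTokenLen from the schema attributes, falling back to
    // CharTokenizer's existing default cap.
    maxTokenLen = getInt(args, "maxTokenLen", 255);
    if (!args.isEmpty()) {
      throw new IllegalArgumentException("Unknown parameters: " + args);
    }
  }

  @Override
  public Tokenizer create(AttributeFactory factory) {
    // Relies on the proposed (AttributeFactory, int) constructor.
    return new WhitespaceTokenizer(factory, maxTokenLen);
  }
}
{code}
The tokenizer itself would grow a matching (AttributeFactory, int) constructor on the base class so the factory can pass the limit through, which is exactly the "add a constructor in Lucene and use it in the factory" approach the description proposes.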