make CharTokenizer.MAX_WORD_LEN parametrizable
----------------------------------------------

                 Key: LUCENE-2407
                 URL: https://issues.apache.org/jira/browse/LUCENE-2407
             Project: Lucene - Java
          Issue Type: Improvement
    Affects Versions: 3.0.1
            Reporter: javi
            Priority: Minor
             Fix For: 3.1


As discussed in http://n3.nabble.com/are-long-words-split-into-up-to-256-long-tokens-tp739914p739914.html, CharTokenizer splits long words into tokens of at most MAX_WORD_LEN characters, and MAX_WORD_LEN is hard-coded as a private constant. It would be nice to be able to parametrize that value; a sketch of the requested behavior follows.
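For illustration only, here is a minimal standalone sketch of the shape of change this issue asks for: the maximum token length becomes a constructor parameter instead of a hard-coded constant. This is not Lucene code; the class name and tokenize() signature are hypothetical, and it splits on non-letter characters the way LetterTokenizer does.

{code:java}
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class ParametrizedCharTokenizer {
    private final int maxWordLen; // configurable, not a static final constant

    public ParametrizedCharTokenizer(int maxWordLen) {
        if (maxWordLen < 1) {
            throw new IllegalArgumentException("maxWordLen must be >= 1");
        }
        this.maxWordLen = maxWordLen;
    }

    /**
     * Splits the input on non-letters. Letter runs longer than
     * maxWordLen are chopped into maxWordLen-sized tokens, mirroring
     * what CharTokenizer does with its hard-coded cap.
     */
    public List<String> tokenize(Reader input) throws IOException {
        List<String> tokens = new ArrayList<String>();
        StringBuilder current = new StringBuilder();
        int c;
        while ((c = input.read()) != -1) {
            if (Character.isLetter((char) c)) {
                current.append((char) c);
                if (current.length() == maxWordLen) { // flush when the cap is hit
                    tokens.add(current.toString());
                    current.setLength(0);
                }
            } else if (current.length() > 0) {
                tokens.add(current.toString());
                current.setLength(0);
            }
        }
        if (current.length() > 0) {
            tokens.add(current.toString());
        }
        return tokens;
    }

    public static void main(String[] args) throws IOException {
        ParametrizedCharTokenizer t = new ParametrizedCharTokenizer(4);
        // "abcdefghij" is split into "abcd", "efgh", "ij" with maxWordLen = 4
        System.out.println(t.tokenize(new StringReader("abcdefghij hi")));
    }
}
{code}

In CharTokenizer itself the equivalent change would be to turn the private MAX_WORD_LEN constant into an instance field settable via a constructor argument or setter, so subclasses and callers can pick a cap that suits their data.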


