rmuir commented on code in PR #14583:
URL: https://github.com/apache/lucene/pull/14583#discussion_r2083651585
##########
lucene/analysis/common/src/java/org/apache/lucene/analysis/classic/ClassicTokenizerImpl.java:
##########
@@ -438,6 +436,16 @@ public final void setBufferSize(int numChars) {
this.zzReader = in;
}
+ /** Returns the maximum size of the scanner buffer, which limits the size of tokens. */
+ private int zzMaxBufferLen() {
+ return Integer.MAX_VALUE;
+ }
+
+ /** Whether the scanner buffer can grow to accommodate a larger token. */
+ private boolean zzCanGrow() {
+ return true;
+ }
Review Comment:
I understand it now. These buffer limits/controls are new features that we
probably want to adopt (and remove our "skeletons"). First I want to get tests
passing before attempting that.
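For context, a minimal sketch (not Lucene's generated code, and not the actual JFlex skeleton) of how a scanner might use `zzMaxBufferLen()` and `zzCanGrow()` as hooks when refilling its buffer; the `growBuffer` helper and initial sizes here are hypothetical:

```java
/** Hypothetical sketch of buffer-growth control via zzMaxBufferLen()/zzCanGrow(). */
public class BufferGrowSketch {
  private char[] zzBuffer = new char[8];
  private int zzEndRead = 8; // pretend the buffer is currently full

  /** Maximum buffer size; Integer.MAX_VALUE means effectively unbounded tokens. */
  private int zzMaxBufferLen() {
    return Integer.MAX_VALUE;
  }

  /** Whether the buffer may still grow to hold a token larger than the buffer. */
  private boolean zzCanGrow() {
    return zzBuffer.length < zzMaxBufferLen();
  }

  /** Doubles the buffer, capped at zzMaxBufferLen(); returns the new capacity. */
  int growBuffer() {
    if (!zzCanGrow()) {
      throw new IllegalStateException("token exceeds maximum buffer size");
    }
    int newSize = (int) Math.min((long) zzBuffer.length * 2, zzMaxBufferLen());
    char[] newBuffer = new char[newSize];
    System.arraycopy(zzBuffer, 0, newBuffer, 0, zzEndRead);
    zzBuffer = newBuffer;
    return zzBuffer.length;
  }

  public static void main(String[] args) {
    BufferGrowSketch s = new BufferGrowSketch();
    System.out.println(s.growBuffer()); // 16
    System.out.println(s.growBuffer()); // 32
  }
}
```

A fixed-limit scanner would instead return a constant from `zzMaxBufferLen()` and have `zzCanGrow()` report `false` once that limit is reached, which is what makes these hooks a clean way to bound token size.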
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]