I figured out the problem:
I wrote a custom NGramFilter that takes the token's length as the default
maxGramSize,
and some documents are full of nonsense data like
'xakldjfklajsdfklajdslkf'.
When a token is too long for the NGramFilter, it crashes the IndexWriter.
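In case it helps anyone else, here is a minimal sketch of the kind of guard that avoids this: cap the token length (and use a fixed maxGramSize) before the n-gram expansion, so junk tokens can never explode into a huge number of grams. It assumes the Lucene 5.5 analysis API; the whitespace tokenizer, the 64-character cap, and the 2..8 gram range below are only illustrative, not what my actual filter does.

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.miscellaneous.LengthFilter;
import org.apache.lucene.analysis.ngram.NGramTokenFilter;

// Sketch: bound token length before n-gramming so garbage tokens
// like 'xakldjfklajsdfklajdslkf' cannot blow up the term count.
public class CappedNGramAnalyzer extends Analyzer {
    private static final int MAX_TOKEN_CHARS = 64; // illustrative cap
    private static final int MIN_GRAM = 2;
    private static final int MAX_GRAM = 8;         // fixed upper bound instead of the token's length

    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer source = new WhitespaceTokenizer();
        // Drop tokens longer than MAX_TOKEN_CHARS before they reach the n-gram filter.
        TokenStream capped = new LengthFilter(source, 1, MAX_TOKEN_CHARS);
        TokenStream grams = new NGramTokenFilter(capped, MIN_GRAM, MAX_GRAM);
        return new TokenStreamComponents(source, grams);
    }
}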
Is it possible that exception is thrown when trying to index an extremely
large document?
Mike McCandless
http://blog.mikemccandless.com
On Fri, Sep 1, 2017 at 12:07 AM, bebe1437 wrote:
> My solr version is 5.5.4,
> I set docValues="true" to some old fields,
> and I use dataimport to reindex,
Updated:
Some documents throw the same exception when updated through the API,
but others that update fine through the API still throw the same exception
when reindexed with dataimport.
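To narrow down which documents are affected, one document at a time can be re-sent through the update API with SolrJ. A rough sketch, assuming SolrJ 5.5; the core URL, field names, and values here are made up:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class UpdateOneDoc {
    public static void main(String[] args) throws Exception {
        // Hypothetical core URL and field names -- replace with the real ones.
        SolrClient client = new HttpSolrClient("http://localhost:8983/solr/mycore");
        try {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "suspect-doc-1");
            doc.addField("text", "xakldjfklajsdfklajdslkf"); // junk value suspected to trigger the exception
            client.add(doc);   // goes through the same update code path as a normal API update
            client.commit();
        } finally {
            client.close();
        }
    }
}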
--
Sent from: http://lucene.472066.n3.nabble.com/Lucene-Java-Users-f532864.html
---
My Solr version is 5.5.4.
I set docValues="true" on some old fields
and use dataimport to reindex,
but it keeps throwing this exception:

Caused by: java.lang.ArrayIndexOutOfBoundsException: -65536
        at org.apache.lucene.index.TermsHashPerField.writeByte(TermsHashPerField.java:197)
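For context on what that schema change means at the Lucene level: docValues="true" on a single-valued string field roughly corresponds to writing the value both as an indexed term and as a per-document docValues entry. A simplified sketch (the field name and single-valued string type are assumptions, not the actual schema):

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.SortedDocValuesField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.util.BytesRef;

public class DocValuesSketch {
    // Roughly how a docValues-enabled string field ends up at the Lucene level:
    // one entry for the inverted index, one for the docValues column.
    static Document buildDoc(String value) {
        Document doc = new Document();
        doc.add(new StringField("category", value, Field.Store.NO));        // inverted index term
        doc.add(new SortedDocValuesField("category", new BytesRef(value))); // docValues entry
        return doc;
    }
}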