Updated:
Some documents throw the same exception when updated through the API,
but the others, which updated fine through the API, still throw the same
exception when reindexed with dataimport.
---
Hi!
If you want to read faceted fields you need to search through a facets
collector. For example, like this:
FacetsCollector facetsCollector = new FacetsCollector();
FacetsCollector.search(indexSearcher, query, pageSize, facetsCollector);
// taxoReader and facetsConfig stand in for your own TaxonomyReader / FacetsConfig
FastTaxonomyFacetCounts customFastFacetCounts =
    new FastTaxonomyFacetCounts(taxoReader, facetsConfig, facetsCollector);
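From there the counts can be read back with getTopChildren(). A rough sketch,
continuing the snippet above and assuming a hypothetical "Author" dimension:

import org.apache.lucene.facet.FacetResult;
import org.apache.lucene.facet.LabelAndValue;

// top 10 values of the (hypothetical) "Author" dimension
FacetResult authors = customFastFacetCounts.getTopChildren(10, "Author");
if (authors != null) {                       // null if the dimension has no values
    for (LabelAndValue lv : authors.labelValues) {
        System.out.println(lv.label + " -> " + lv.value);  // label and its count
    }
}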
My Solr version is 5.5.4.
I set docValues="true" on some old fields,
and I use dataimport to reindex,
but it keeps throwing the exception: Caused by:
java.lang.ArrayIndexOutOfBoundsException: -65536
in org.apache.lucene.index.TermsHashPerField.writeByte(TermsHashPerField.java:197)
I zeroed in on the problem with my updated documents having facet
fields... What I need is a way to load a document with all the fields that
existed when I saved it, meaning together with the facet fields.
Anyway, here's the example.
When I add my document to the index, my document has 3 f
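As an aside on that setup: a FacetField by itself is not stored, so it will not
come back when the document is loaded; one common workaround is to add the same
value once more as a stored field. A minimal sketch, where "Author",
"author_stored", indexWriter, taxoWriter and facetsConfig are only placeholders
for whatever your code actually uses:

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.facet.FacetField;

Document doc = new Document();
// facet value: goes into the taxonomy / doc values, not into the stored document
doc.add(new FacetField("Author", "bob"));
// duplicate it as a stored field so it is returned when the document is loaded
doc.add(new StringField("author_stored", "bob", Field.Store.YES));
// FacetsConfig.build() rewrites the FacetField entries before indexing
indexWriter.addDocument(facetsConfig.build(taxoWriter, doc));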
I did not know that mmap is not considered direct memory, so thanks for
that. Now I can stop barking about why -XX:MaxDirectMemorySize isn't
having any effect :)
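To make that concrete: Lucene's default directory on 64-bit JVMs is mmap-based,
so the index bytes end up in the OS page cache rather than in JVM direct memory.
A small sketch (the index path is just a placeholder):

import java.nio.file.Paths;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

// On 64-bit JVMs FSDirectory.open() typically returns an MMapDirectory, which
// maps the index files with mmap; those pages live in the OS page cache and are
// not limited by -XX:MaxDirectMemorySize (that flag only caps direct ByteBuffers).
Directory dir = FSDirectory.open(Paths.get("/path/to/index"));
System.out.println(dir.getClass().getSimpleName());  // usually "MMapDirectory"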
--
Erik
On Wed, Aug 30, 2017 at 11:39 PM, Uwe Schindler wrote:
> Hi,
>
> As a suggestion from my side: As a first thing: disable the
Thanks, Robert. I found this bit from that link enlightening:
"Some parts of the cache can't be dropped, not even to accomodate new
applications. This includes mmap'd pages that have been mlocked by some
application, dirty pages that have not yet been written to storage, and
data stored in tmpfs