Strange. This happens after I added all documents using the
IndexWriter.addDocument() method; everything works well at that point.
I then call IndexWriter.forceMerge(1) and finally IndexWriter.close(true).
The "out of memory" problem happens after I called forceMerge(1) but before
close(true).
Merging BinaryDocValues doesn't use any RAM: it streams the values from the
segments it's merging directly to the newly written segment.
So if you have this problem, it's unrelated to merging: it means you don't
have enough RAM to hold all the stuff you are putting into these
BinaryDocValues fields.
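For reference, a minimal sketch of the indexing flow described in this thread, using the Lucene 4.x-era API (the index path, field name "blob", and document count are placeholders, not from the original messages):

```java
import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.BinaryDocValuesField;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.Version;

public class ForceMergeExample {
    public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.open(new File("/tmp/forcemerge-demo"));
        IndexWriterConfig cfg = new IndexWriterConfig(
            Version.LUCENE_42, new StandardAnalyzer(Version.LUCENE_42));
        IndexWriter writer = new IndexWriter(dir, cfg);
        for (int i = 0; i < 1000; i++) {
            Document doc = new Document();
            // BinaryDocValues are streamed segment-to-segment at merge time,
            // so forceMerge itself should not need RAM proportional to them.
            doc.add(new BinaryDocValuesField("blob", new BytesRef("value-" + i)));
            writer.addDocument(doc);
        }
        writer.forceMerge(1); // merge down to a single segment
        writer.close(true);   // wait for merges to finish before closing
        dir.close();
    }
}
```

If the OOM appears during this flow, the heap is being consumed by the values themselves (or other indexing state), not by the merge.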
Hi,
After finishing indexing, we tried to consolidate all segments using
forceMerge, but we keep getting an out-of-memory error even after
increasing the heap to 4GB.
Exception in thread "main" java.lang.IllegalStateException: this writer hit
an OutOfMemoryError; cannot complete forceMerge
Hi Igor,
About your performance problem with SpanQueries and Payloads:
Try filtering with a corresponding BooleanQuery, and use a profiler.
You have an I/O bottleneck because of reading position and payload
information per document.
Possibly it would help if you first filter out the "obviously"
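One hypothetical reading of that advice, sketched with the Lucene 4.x-era mutable BooleanQuery (the field and terms are made-up examples): AND a cheap term clause with the SpanQuery, so the conjunction leapfrogs over documents that fail the cheap clause and position/payload data is only read for candidates.

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

public class FilteredSpanExample {
    public static BooleanQuery build() {
        SpanQuery spans = new SpanNearQuery(
            new SpanQuery[] {
                new SpanTermQuery(new Term("body", "quick")),
                new SpanTermQuery(new Term("body", "fox"))
            },
            2,      // slop
            true);  // in order
        BooleanQuery bq = new BooleanQuery();
        // Cheap term clause first: documents that fail it are skipped
        // before the span clause ever touches positions or payloads.
        bq.add(new TermQuery(new Term("body", "quick")), Occur.MUST);
        bq.add(spans, Occur.MUST);
        return bq;
    }
}
```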
Hi,
I have the following scenario: I have a very large index (I'm testing
with around 200,000 documents, but it should scale to many millions) and
I want to perform a search on a certain field.
Based on that search, I would like to manipulate a different field
for all the matching
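The message is cut off above, but one common way to implement "modify a field on all matching documents" in this era of Lucene is the classic delete-and-readd pattern via IndexWriter.updateDocument, keyed on a unique stored field. A sketch under assumptions not in the original (field names "id" and "status", all preserved fields stored):

```java
import java.io.IOException;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;

public class BulkFieldUpdate {
    /**
     * For every document matching `query`, rewrite the "status" field.
     * A Lucene "update" is delete-and-readd: the document is rebuilt from
     * its stored fields, so every field you want to keep must be stored.
     */
    public static void updateMatching(IndexWriter writer, Query query,
                                      String newStatus) throws IOException {
        try (DirectoryReader reader = DirectoryReader.open(writer, true)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            int limit = Math.max(1, reader.maxDoc());
            for (ScoreDoc hit : searcher.search(query, limit).scoreDocs) {
                Document old = searcher.doc(hit.doc);
                Document fresh = new Document();
                fresh.add(new StringField("id", old.get("id"), Field.Store.YES));
                fresh.add(new StringField("status", newStatus, Field.Store.YES));
                // ... copy any other stored fields the schema has ...
                writer.updateDocument(new Term("id", old.get("id")), fresh);
            }
        }
        writer.commit();
    }
}
```

Rewriting documents this way is expensive at millions of documents; it is a sketch of the general-purpose approach, not a claim about the best one for this scenario.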
Hi,
I'm currently calling:
FacetsCollector.create(new StandardFacetsAccumulator(facetSearchParams,
indexReader, getTaxonomyReader()))
which is calling FacetRequest.createAggregator(...)
and is not working properly.
I'm extending CountingAggregator and then Aggregator; if I override
FacetAccu
Hi Nicola,
I didn't read the code examples, but I'll relate to your last question
regarding the Aggregator. Indeed, with Lucene 4.2,
FacetRequest.createAggregator is not called by the default
FacetsAccumulator. This method should go away from FacetRequest entirely,
but unfortunately we did not fin
Hi all,
in Lucene 4.1, after some advice from the mailing list I am merging
taxonomies (in memory, because the taxonomy indexes are small)
and collecting facet values from the merged taxonomy instead of the
individual ones; the scenario is:
- you have a MultiReader pointing to several indexes
-