On Mon, 2008-01-07 at 14:20 -0800, Otis Gospodnetic wrote:
> Please post your results, Lars!
Tried the patch, and it failed to compile (plain Lucene compiled fine).
In the process, I looked at TermQuery and found that it'd be easier to
copy that code and just hardcode 1.0f for all norms. Did that.
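
For illustration, the hardcoding idea amounts to something like the
following (a rough sketch against the 2.x Scorer API, not the actual
patch; the class name and structure are made up, and the real
TermScorer also buffers doc/freq pairs):

import java.io.IOException;

import org.apache.lucene.index.TermDocs;
import org.apache.lucene.search.Explanation;
import org.apache.lucene.search.Scorer;
import org.apache.lucene.search.Similarity;

// Sketch: a TermScorer variant that never touches the norms array.
class NoNormTermScorer extends Scorer {
  private final TermDocs termDocs;
  private final float weightValue;
  private int doc = -1;

  NoNormTermScorer(Similarity similarity, TermDocs termDocs, float weightValue) {
    super(similarity);
    this.termDocs = termDocs;
    this.weightValue = weightValue;
  }

  public boolean next() throws IOException {
    if (!termDocs.next()) return false;
    doc = termDocs.doc();
    return true;
  }

  public boolean skipTo(int target) throws IOException {
    if (!termDocs.skipTo(target)) return false;
    doc = termDocs.doc();
    return true;
  }

  public int doc() { return doc; }

  public float score() throws IOException {
    // The stock scorer multiplies the raw score by the decoded norm byte:
    //   raw * normDecoder[norms[doc] & 0xFF]
    // Hardcoding the norm to 1.0f means no byte[maxDoc] norms array
    // ever needs to be loaded (or faked) for the field.
    return getSimilarity().tf(termDocs.freq()) * weightValue * 1.0f;
  }

  public Explanation explain(int doc) {
    throw new UnsupportedOperationException("sketch only");
  }
}

Wrapping this in a copied TermQuery/TermWeight is mechanical; the
point is only that the norms lookup disappears from the scoring path.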
On Tue, 2008-01-01 at 23:38 -0800, Chris Hostetter wrote:
> : On Wed, 2007-12-12 at 11:37 +0100, Lars Clausen wrote:
>
> : Seems there's a reason we still use all this memory:
> : SegmentReader.fakeNorms() creates the full-size array for us anyway, so
> : the memory usage doesn't actually go down.
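
For anyone reading along, what SegmentReader.fakeNorms() does in this
era boils down to roughly the following (simplified from memory, not
the exact source):

// Even for fields that omit norms, the reader hands back a full-size
// array filled with the encoded default norm, so the byte[maxDoc]
// allocation happens regardless ("ones" is cached per reader).
private byte[] ones;

private byte[] fakeNorms() {
  if (ones == null) {
    byte[] bytes = new byte[maxDoc()];
    java.util.Arrays.fill(bytes, Similarity.encodeNorm(1.0f));
    ones = bytes;
  }
  return ones;
}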
On Wed, 2007-12-12 at 11:37 +0100, Lars Clausen wrote:
> I've now made trial runs with no norms on the two indexed fields, and
> also tried with varying TermIndexIntervals. Omitting the norms saves
> about 4MB on 50 million entries, much less than I expected.
Seems there's a reason we still use all this memory:
SegmentReader.fakeNorms() creates the full-size array for us anyway, so
the memory usage doesn't actually go down.
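
The expectation makes sense on paper: norms cost one byte per document
per field, so if the 50 million entries are documents, two fields
should account for roughly 100 MB, not 4 MB. For reference, disabling
norms at index time looks like this against the 2.x API (field name
and value are made up):

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

Document doc = new Document();
// Field.Index.NO_NORMS indexes the value unanalyzed and writes no norm
// byte; for an analyzed field, setOmitNorms(true) does the same.
Field url = new Field("url", "http://example.org/page",
                      Field.Store.YES, Field.Index.NO_NORMS);
doc.add(url);

But as noted above, fakeNorms() still allocates a full-size array on
the reading side, which would explain why the measured savings are so
small.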
On Wed, 2007-12-12 at 11:37 +0100, Lars Clausen wrote:
> Increasing
> the TermIndexInterval by a factor of 4 gave no measurable savings.
Following up on myself because I'm not 100% sure that the indexes have
the term index intervals I expect, and I'd like to check. Where can I
see what interval an existing index actually uses?
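
One low-level way to check, assuming a non-compound index (a sketch
only; per the 2.x file-format documentation the .tii header is
TIVersion, IndexTermCount, IndexInterval, SkipInterval, and the
segment file name below is a placeholder):

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.IndexInput;

Directory dir = FSDirectory.getDirectory("/path/to/index");
IndexInput in = dir.openInput("_0.tii");   // pick a real segment name
try {
  int format = in.readInt();       // negative since the 1.4 format change
  long termCount = in.readLong();
  int indexInterval = in.readInt();
  System.out.println("format=" + format + ", terms=" + termCount
      + ", indexInterval=" + indexInterval);
} finally {
  in.close();
}

IndexWriter.getTermIndexInterval() only tells you what new segments
will be written with, not what an existing segment uses.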
On Tue, 2007-11-13 at 07:26 -0800, Chris Hostetter wrote:
> : > Can it be right that memory usage depends on size of the index rather
> : > than size of the result?
> :
> : Yes, see IndexWriter.setTermIndexInterval(). How much RAM are you giving to
> : the JVM now?
>
> and in general: yes. Lucene's memory usage when searching scales with
> the size of the index (term index, norms), not with the size of the
> result set.
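
Concretely, the knob mentioned above is set on the writer before
indexing (path and value are placeholders; a larger interval keeps
roughly uniqueTerms/interval fewer entries in RAM, at the cost of
slower term lookups):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;

IndexWriter writer = new IndexWriter("/path/to/index",
                                     new StandardAnalyzer(), true);
writer.setTermIndexInterval(512);   // default is 128
// ... add documents ...
writer.close();

The heap itself is raised on the JVM command line, e.g. java -Xmx1024m.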
We've run into a blocking problem with our use of Lucene: we get
OutOfMemoryError when performing a one-term search in our index. The
search, if completed, should give only a few thousand hits, but from
inspecting a heap dump it appears that many more documents in the index
get stored in Lucene during the search.
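
Schematically, the failing case is just a single TermQuery (field name
and value here are made up):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;

IndexSearcher searcher = new IndexSearcher("/path/to/index");
Hits hits = searcher.search(new TermQuery(new Term("domain", "example.org")));
System.out.println("hits: " + hits.length());   // a few thousand expected

The point being that the OutOfMemoryError tracks structures sized by
the whole index (norms, term index), not by the result set.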