Hi, Mike
Thank you very much for your help. I will try making a
FilterAtomicReader subclass to solve this issue.
Best Regards!
-- Original --
From: "Michael McCandless"
Date: Sun, Sep 14, 2014 02:48 AM
To: "Lucene Users"
Subject: Re:
Norms are not stored sparsely by the default codec.
So they take 1 byte per doc per indexed field regardless of whether
that doc had that field.
There is no setting to turn this off in IndexReader, though you could
make a FilterAtomicReader subclass to do this.
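A minimal sketch of such a subclass, against the 4.x FilterAtomicReader API (the class name here is illustrative, not from the thread), might look like:

```java
import java.io.IOException;

import org.apache.lucene.index.AtomicReader;
import org.apache.lucene.index.FilterAtomicReader;
import org.apache.lucene.index.NumericDocValues;

// Hypothetical wrapper: reports "no norms" for every field, so the
// underlying codec is never asked to load norms onto the JVM heap
// through this reader.
public class NoNormsReader extends FilterAtomicReader {
  public NoNormsReader(AtomicReader in) {
    super(in);
  }

  @Override
  public NumericDocValues getNormValues(String field) throws IOException {
    return null; // null means "this field has no norms"
  }
}
```

Note that scoring through such a reader will behave as if norms were omitted, since length normalization values are no longer available.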
Or, you can disable norms for these fields.
Hi, Mike
Is there any config option that lets IndexReader omit all norms and avoid
loading them onto the JVM heap?
Thanks & Best Regards!
-- Original --
From: "lubin" <308181...@qq.com>
Date: Sat, Sep 13, 2014 08:20 PM
To: "java-user"
Subject: Re: OutOfMemoryError
Hi, Mike
In our use case, we have thousands of indexed fields, and different kinds of
documents have different fields. Do you mean that the norms will consume a
lot of memory? Why?
If we decide to disable norms, do we need to rebuild our index entirely? By
the way, we have 8 million documents.
The warmer just tries to load norms/docValues/etc. for all fields that
have them enabled ... so this is likely telling you an IndexReader
would also hit OOME.
You either need to reduce the number of fields you have indexed, or at
least disable norms (norms take 1 byte per doc per indexed field regardless
of whether the doc has the field).
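Disabling norms happens at index time, so affected fields have to be re-indexed. A sketch of what that could look like with the 4.x FieldType API (the "body" field name and helper class are illustrative):

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.document.TextField;

public class OmitNormsExample {
  public static Document makeDoc(String text) {
    // Start from the standard indexed, tokenized, not-stored text
    // configuration, then switch norms off for this field.
    FieldType noNorms = new FieldType(TextField.TYPE_NOT_STORED);
    noNorms.setOmitNorms(true);
    noNorms.freeze();

    Document doc = new Document();
    doc.add(new Field("body", text, noNorms)); // "body" is an example name
    return doc;
  }
}
```

With norms omitted, index-time length normalization is lost for that field, which changes relevance ranking; the memory saving is 1 byte per document per field that previously had norms.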
Hi, all
we got an OutOfMemoryError thrown by SimpleMergedSegmentWarmer. We use
Lucene 4.7, and access index files via NRTCachingDirectory/MMapDirectory. Could
anybody give me a hand? The stack trace is as follows:
org.apache.lucene.index.MergePolicy$MergeException: java.lang.OutOfMemoryError