Try upgrading Elasticsearch -- 6.0 was released just a few weeks ago --
since its (and Lucene's) memory usage has decreased over time.
The _uid field in particular will always be costly, unfortunately. Since
it's a primary key, every term is unique, and the terms index has to
work hard to index all of those unique terms.
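To see why unique keys are expensive, here is a toy sketch -- mine, not code from this thread, assuming Lucene 6.x-era FST APIs -- that builds an FST over sorted random UUIDs (a stand-in for high-cardinality _uid values) and prints its heap footprint. Note the real blocktree terms index only puts block prefixes into its FST, so actual sizes are smaller, but the grow-with-cardinality behavior is the same:

import java.util.Arrays;
import java.util.UUID;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.IntsRefBuilder;
import org.apache.lucene.util.fst.Builder;
import org.apache.lucene.util.fst.FST;
import org.apache.lucene.util.fst.PositiveIntOutputs;
import org.apache.lucene.util.fst.Util;

public class FstSizeDemo {
  public static void main(String[] args) throws Exception {
    int numTerms = 100_000;
    String[] ids = new String[numTerms];
    for (int i = 0; i < numTerms; i++) {
      ids[i] = UUID.randomUUID().toString();  // random, so few shared prefixes
    }
    Arrays.sort(ids);  // FST inputs must be added in sorted order

    Builder<Long> builder = new Builder<>(FST.INPUT_TYPE.BYTE1, PositiveIntOutputs.getSingleton());
    IntsRefBuilder scratch = new IntsRefBuilder();
    long ord = 0;
    for (String id : ids) {
      builder.add(Util.toIntsRef(new BytesRef(id), scratch), ord++);
    }
    FST<Long> fst = builder.finish();
    System.out.println(numTerms + " unique terms -> " + fst.ramBytesUsed() + " bytes of heap");
  }
}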
Comments below:
On Tue, Nov 28, 2017 at 4:47 PM, elirev wrote:
> Thanks Mike.
> I did not find any clear way to know if it's the FST or Norms, or something
> else (unless I missed something). The fact that the FST is an in-memory
> prefix index led me to think it is using most of the heap.
> Our
Hi elirev,
The field "index" of class "org.apache.lucene.codecs.blocktree.FieldReader"
is the FST of each field; its type is FST. I closed an index and
picked a shard, wrote some code to directly read the shard, and then used
reflection to get the actual FST object of the _uid field. Its ramBytesUsed()
method reports how much heap the FST occupies.
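For reference, here is a minimal sketch of that kind of measurement -- my reconstruction, not yin's actual code -- assuming Lucene 6.x-style APIs and the default blocktree postings format, whose FieldReader keeps the FST in a private field named "index":

import java.lang.reflect.Field;
import java.nio.file.Paths;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Terms;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Accountable;

public class UidFstSize {
  public static void main(String[] args) throws Exception {
    // args[0]: path to one shard's Lucene directory (shard should be closed).
    try (DirectoryReader reader = DirectoryReader.open(FSDirectory.open(Paths.get(args[0])))) {
      long total = 0;
      for (LeafReaderContext ctx : reader.leaves()) {
        Terms terms = ctx.reader().terms("_uid");  // per-segment blocktree FieldReader
        if (terms == null) {
          continue;
        }
        // Pull the private FST terms index out via reflection.
        Field indexField = terms.getClass().getDeclaredField("index");
        indexField.setAccessible(true);
        Object fst = indexField.get(terms);
        if (fst instanceof Accountable) {
          total += ((Accountable) fst).ramBytesUsed();
        }
      }
      System.out.println("_uid terms-index FST: " + total + " bytes across "
          + reader.leaves().size() + " segments");
    }
  }
}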
Hi yin,
How do you determine the size being allocated for your _uid?
Thanks, Mike. I'm facing a similar problem.
I'm running a 2.0 Elasticsearch cluster, and I find that the FST of the _uid
field takes a lot of memory. The _uid field is not analyzed; it is generated
by Elasticsearch and has high cardinality.
Are there any ways to reduce the memory cost of the _uid field? Thanks.
Thanks Mike.
I did not find any clear way to know if it's the FST or Norms, or something
else (unless I missed something). The fact that the FST is an in-memory
prefix index led me to think it is using most of the heap.
Our mapping is ordinary, with around 200 columns; one of the columns is a
nested object.
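Incidentally, one way to tell FST from Norms without reflection -- a sketch, assuming Lucene 5+ where segment readers implement Accountable -- is to dump each segment's accounting tree with org.apache.lucene.util.Accountables against an offline copy of the shard:

import java.nio.file.Paths;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Accountable;
import org.apache.lucene.util.Accountables;

public class SegmentHeapBreakdown {
  public static void main(String[] args) throws Exception {
    try (DirectoryReader reader = DirectoryReader.open(FSDirectory.open(Paths.get(args[0])))) {
      for (LeafReaderContext ctx : reader.leaves()) {
        // Prints a tree of child resources: terms index, norms, doc values, ...
        if (ctx.reader() instanceof Accountable) {
          System.out.println(Accountables.toString((Accountable) ctx.reader()));
        }
      }
    }
  }
}

If I remember right, Elasticsearch 2.x also surfaces this breakdown in its segment stats (terms_memory_in_bytes, norms_memory_in_bytes, and so on), which avoids touching the shard files at all.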
Are you sure it's FSTs using your heap?
Do you have many indexed fields with high cardinality? Or many
suggesters?
Mike McCandless
http://blog.mikemccandless.com
On Thu, Nov 16, 2017 at 5:03 PM, Eli Revach wrote:
> Hi
> I am using Elasticsearch 1.7.5; our segment memory allocation per node