Thanks for your support, just sharing what I have found so far.
I'm working with SolrCloud in a 2-node deployment. This deployment has
many indexes, but the main one, a 160GB index, has become very slow.
A select with q=*:* and rows=1 takes 2 seconds.
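For reference, that is literally all the query does; something like this (host and collection name are just placeholders):

  curl "http://localhost:8983/solr/mycollection/select?q=*:*&rows=1"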
The SolrCloud instances are running in Kubernetes and are deployed
I’ve found that each Solr instance will take as many cores as it needs per
request. Your 2-second response sounds like you just started the server and then
did that search. I never trust the first search, as nothing has been put into
memory yet. I like to give my JVMs 31 GB each and let Linux cache the index.
First, look at the system metrics. Is it CPU bound or IO bound? Each request is
single-threaded, so a CPU-bound system will have one core used at roughly 100%
for that time. An IO-bound system will not be using much CPU but will have
threads in iowait and lots of disk reads.
After you know that,
I did it just now in the prod environment:
{
  "responseHeader":{
    "zkConnected":true,
    "status":0,
    "QTime":1943,
    "params":{
      "q":"*:*",
      "rows":"1"}},
Then for a while the QTime is 0. I assume (obviously) that it is cached,
but after a while the cache expires.
On Fri, Ma
Hi all,
While trying to update a document in a collection that uses the implicit
router, I encountered a problem updating the field that I have specified as
the router.field.
For context, I have a field named Status which is the router.field for my
collection. When I pass an update command to Solr
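For illustration only (collection name, document id and field values are made up), the kind of update command I mean is roughly:

  curl -X POST -H 'Content-Type: application/json' \
    'http://localhost:8983/solr/mycollection/update?commit=true' \
    -d '[{"id":"doc-1","Status":{"set":"CLOSED"}}]'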
Walter, I agree, but with large indexes (850+GB before merge) I just found 31 to
be my happy spot. I also set Xms and Xmx to the same value; I have no
proof, but it seems to take less processing to keep them the same than to keep
allocating different memory footprints.
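For what it's worth, in solr.in.sh that ends up as something like this (the size is just my value, adjust to taste):

  SOLR_JAVA_MEM="-Xms31g -Xmx31g"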
> On Mar 18, 2022, at
I've also found that: CONTAINER.fs.coreRoot.spins: true
Can this be considered a problem big enough to affect the overall
performance?
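For reference, that flag shows up under the node-level metrics, something like this (host is a placeholder):

  curl "http://localhost:8983/solr/admin/metrics?group=node&prefix=CONTAINER.fs"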
On Fri, Mar 18, 2022 at 6:46 PM Dave wrote:
> Walter I agree, but with large indexes (850+Gb before merge) I just found
> 31 to be my happy spot. As well as set
We have modified the Kubernetes configuration and restarted the SolrCloud
cluster; now we have 16 cores per Solr instance.
The performance does not seem to have improved, though.
The load average is 0.43 0.83 1.00; to me it looks like an IO-bound problem.
Looking at the index I see 162M documents, 234M maxDocs.
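If it helps, the numDocs/maxDoc figures (and the deleted-doc count behind that gap) can also be read from the Luke handler, e.g. (host/collection are placeholders):

  curl "http://localhost:8983/solr/mycollection/admin/luke?numTerms=0&wt=json"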
You are getting this general advice but, sadly, it depends on your doc
sizes, query complexity, write frequency, and a bunch of other stuff I
don't know about.
I prefer to run with the *minimum* JVM memory to handle throughput (without
OOM) and let the OS do caching because I update/write to the index.
On 2022-03-18 1:35 PM, Vincenzo D'Amore wrote:
Finally, I'm looking at the Solr metrics but I'm really not sure how to tell
whether it is CPU bound or IO bound.
iostat, iotop, even the regular top has a 'wa'(it) number -- you'll
probably have to install them in your container first.
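Something along these lines (Debian/Ubuntu package names assumed) gets you started:

  apt-get update && apt-get install -y sysstat procps   # provides iostat and top
  iostat -x 2    # per-device utilisation and await
  top            # 'wa' in the %Cpu(s) line is iowait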
Dima
Ok, everything you said is right, but nevertheless even right now a stupid
*:* rows=1 query runs in almost 2 seconds.
The average document size is pretty small, roughly 100-200 bytes at most.
Does anyone know if the average doc size is available in the metrics?
{
"responseHeader":{
"zkConnected
Is it possible that the commits are too frequent? I mean, if each commit
invalidates the caches, even a stupid *:* rows=1 can be affected.
How can I see how frequent the commits are? Or when the latest commit was
done?
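The update handler metrics expose commit counters, and the Config API shows the configured autoCommit/autoSoftCommit intervals; something like this (host/collection are placeholders):

  curl "http://localhost:8983/solr/admin/metrics?group=core&prefix=UPDATE.updateHandler"
  curl "http://localhost:8983/solr/mycollection/config/updateHandler"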
On Fri, Mar 18, 2022 at 8:36 PM Vincenzo D'Amore wrote:
> Ok, ev
My guess is that it's thrashing on a "cold" open of the index files. I'm
sure the next query of *:*&rows=2 is pretty fast since the caches get populated.
I don't know what to say for next steps - lower the JVM memory and/or check
the stats in the admin console -> core select -> Plugins/Stats -> CACHE.
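The same stats are also available over HTTP if that's easier than the UI (host/core name are placeholders):

  curl "http://localhost:8983/solr/mycollection/admin/mbeans?stats=true&cat=CACHE&wt=json"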
You were right: *:* first query rows=1 QTime=1947, second query rows=2
QTime=0.
This is the CACHE section, not sure how to read this:
perSegFilter
class:org.apache.solr.search.LRUCache
description:LRU Cache(maxSize=10, initialSize=0, autowarmCount=10,
regenerator=org.apache.solr.search.NoOpRegenerator@642f416a)
stats:
CACHE.searcher.perSegFilter.cumulative_evictions:0
CACHE.searcher.perSegFilter.cumulative_evictionsIdleTime:0
CAC
Again, never ever trust the result speed of a cold search. Are you warming
your index?
https://solr.apache.org/guide/6_6/query-settings-in-solrconfig.html
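The proper place for warming queries is solrconfig.xml (see the link); as a rough stopgap you could also just fire a cheap query yourself after every restart or large commit, e.g. (host/collection are placeholders):

  curl "http://localhost:8983/solr/mycollection/select?q=*:*&rows=0"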
> On Mar 18, 2022, at 4:23 PM, Vincenzo D'Amore wrote:
>
> perSegFilter
> class:org.apache.solr.search.LRUCache
> description:LRU Cache(m
On 3/18/22 12:35, Vincenzo D'Amore wrote:
> The INDEX.size is 70GB, what do you think if I raise the size allocated
> from the JVM to 64GB in order to have the index in memory?
Solr and Java do not put the index into memory. The OS does. If you
raise the heap size, there will be LESS memory available for the OS to
cache the index.
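To see how much RAM is actually left over for the OS page cache on the node, standard tools are enough:

  free -h    # the 'buff/cache' column is what the OS can spend on caching index files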