On 3/15/23 06:35, Mark H. Wood wrote:
I'm always flummoxed by this question: what is the total index size?
It's easy to get 'numDocs' from the admin. interface, but there's
nothing I can find there that I would interpret as "index size".
Does this mean the sum of sizes of all files in $CORE/da
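Roughly, yes. As a sketch (paths and core name are examples, adjust to your install), you can either sum the index directory on disk or ask the CoreAdmin STATUS API, which reports a per-core index size:

    # on-disk size of a single core's index
    du -sh /var/solr/data/mycore/data/index

    # or ask Solr itself; each core's "index" section in the
    # response includes size / sizeInBytes
    curl 'http://localhost:8983/solr/admin/cores?action=STATUS&wt=json'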
In the Solr Admin web interface I click on "Core Selector" and
select one of the shards.
My maxDocs says 33,972,375 and the size reported below is 112.5GB.
Because my cloud has 10 shards, just multiply by 10, which gives
a rough estimate of ~340,000,000 docs and ~1.125TB total size.
To calculate growth just take
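For the totals without hand-multiplying, the Metrics API can report the index size of every core hosted on a node in one call (a sketch; host and port are examples):

    # INDEX.sizeInBytes for every core on this node
    curl 'http://localhost:8983/solr/admin/metrics?group=core&prefix=INDEX.sizeInBytes'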
On Tue, Mar 14, 2023 at 08:21:26AM -0600, Shawn Heisey wrote:
> On 3/14/23 08:01, HariBabu kuruva wrote:
> > Till now it was running with 45GB heap memory. I am trying to tune the
> > performance of solr by adjusting heap memory.
>
> What is the total index size and total doc count of the server?
It may sound counterintuitive, but allocate as little Java heap as possible to
Solr without causing OOM. Read up on the reference guide links provided, as
well as the excellent advice on profiling your heap usage.
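For a quick look at actual heap usage without a full profiler (a sketch; replace <pid> with the Solr process id), jstat from the JDK will do:

    # print GC stats every 5 seconds; the O column is old-gen % used,
    # and its floor after full GCs approximates what Solr really needs
    jstat -gcutil <pid> 5000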
Jan Høydahl
> On 14 Mar 2023, at 15:03, HariBabu kuruva wrote:
>
> Hi,
>
> Till now it was running with 45GB heap memory. I am trying to tune the
> performance of solr by adjusting heap memory.
Do what I suggested a few days ago. That is how you find out how much heap the
system really needs.
Use a heap analysis tool. You’ll see a sawtooth pattern in the heap size. The
bottom of that sawtooth is the actual amount of memory that Solr is using. Pick
the highest point of the bottom of the sawtooth, then add some headroom, maybe
a gigabyte. Test with that value.
On 3/14/23 08:01, HariBabu kuruva wrote:
Till now it was running with 45GB heap memory. I am trying to tune the
performance of solr by adjusting heap memory.
What is the total index size and total doc count of the server?
In the past I have run Solr servers with 80 million documents across 3
Hi,
Till now it was running with 45GB heap memory. I am trying to tune the
performance of solr by adjusting heap memory.
So, I am looking for your inputs.
On Tue, Mar 14, 2023 at 3:23 PM Jan Høydahl wrote:
> Why do you believe you need such a huge heap as 31g? Can you support such
> a choice
Why do you believe you need such a huge heap as 31g? Can you support such a
choice by some observations or measurements?
Jan
> On 14 Mar 2023, at 06:39, HariBabu kuruva wrote:
>
> Thank you all for your responses.
>
> There are no spaces between Xms and the values.
>
> I have updated similar arguments (-Xms30720m -Xmx30720m) in one of the
> non-prod environments (in MBs instead of GBs).
No. Don't breach 31GB unless you go all the way to 47-plus. Where you're at
sounds pretty good if your index is less than 30GB, so it can fit into memory.
> On Mar 14, 2023, at 1:39 AM, HariBabu kuruva wrote:
>
> Thank you all for your responses.
>
> There are no spaces between Xms and the values.
Thank you all for your responses.
There are no spaces between Xms and the values.
I have updated similar arguments (-Xms30720m -Xmx30720m) in one of the
non-prod environments (in MBs instead of GBs). It correctly shows the max
heap as 30GB in the Solr UI.
So, I would like to update 31.5 GB similarly.
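If you manage the heap through solr.in.sh rather than raw JVM args, SOLR_HEAP sets both -Xms and -Xmx to the same value (the value below is just the example from this thread):

    # solr.in.sh
    SOLR_HEAP="30720m"    # equivalent to -Xms30720m -Xmx30720m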
Use a heap analysis tool. You’ll see a sawtooth pattern in the heap size. The
bottom of that sawtooth is the actual amount of memory that Solr is using. Pick
the highest point of the bottom of the sawtooth, then add some headroom, maybe
a gigabyte. Test with that value.
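If GC logging is on, one rough way to read the sawtooth floor out of the log (a sketch; the log path and the "before->after(total)" line format are assumptions that depend on your JVM and GC):

    # heap occupancy after each collection looks like "1234M->567M(30720M)";
    # the second number is the bottom of the sawtooth
    grep -o '[0-9]*M->[0-9]*M' /var/solr/logs/solr_gc.log \
      | awk -F'->|M' '{print $3}' | sort -n | tail -5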
wunder
Walter Underwood
> Set -Xms to "I know it wants at least this much".
> Set -Xmx to significantly, but not wildly, more.
No, always set them to the same value no matter what. I like increments of
1024M, so I would start at 2048M and work up to 8GB and see how it performs.
Having a test script that forks to how many
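A minimal sketch of such a script, assuming a local default-port install and a hypothetical core name "mycore" (bin/solr's -m flag sets both Xms and Xmx):

    # step the heap up and eyeball QTime at each size
    for h in 2g 3g 4g 6g 8g; do
      bin/solr restart -m "$h"
      curl -s 'http://localhost:8983/solr/mycore/select?q=*:*&rows=0' \
        | grep -o '"QTime":[0-9]*'
    done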
On Thu, Mar 09, 2023 at 01:56:11PM +0100, Jan Høydahl wrote:
> It's a waste to set heap to 30g if your use of Solr only requires 6g to
> function. That is 24G of memory not being used for index caching, and it
> may, depending on the chosen GC, cause bigger/longer GC events as more
> garbage piles up before collection.
Agreed, but oftentimes as a developer you are subject to the requests of those
higher up, and you end up with 30 facets of strings that are the length of
names. But yes, test as low as you can, try to keep the qtimes low, and just
keep adjusting until you are happy with whatever time works for you.
It's a waste to set heap to 30g if your use of Solr only requires 6g to
function. That is 24G of memory not being used for index caching, and it may,
depending on the chosen GC, cause bigger/longer GC events as more garbage
piles up before collection.
You have to measure and experiment to find you
Again, set it to less than 32GB; I liked 30.
> On Mar 9, 2023, at 1:04 AM, Deepak Goel wrote:
>
> The max heap could be the max heap used by the process up till now, and not
> the max value you have set. I would suggest you increase the load by at
> least 20 times to see the max heap go to 32GB.
The max heap could be the max heap used by the process up till now, and not
the max value you have set. I would suggest you increase the load by at
least 20 times to see the max heap go to 32GB.
Deepak
"The greatness of a nation can be judged by the way its animals are treated
- Mahatma Gandhi
On 3/8/2023 9:24 AM, HariBabu kuruva wrote:
I have set the Heap memory as -Xms 1g -Xmx 40g in the Production
environment.
But when I check the Heap memory in the Solr UI, I can see the Max Heap below.
Max: 3.8Gb
Used: 2.2Gb
The other answers you've gotten are good. This is mostly just a little
Hi,
There should be no spaces; try -Xms1g.
If you add the space, Java will likely fall back to defaults, which is a
certain percentage of physical memory.
See
https://stackoverflow.com/questions/4667483/how-is-the-default-max-java-heap-size-determined
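You can also ask the JVM directly what default it fell back to on a given box (standard HotSpot flag):

    java -XX:+PrintFlagsFinal -version | grep -i maxheapsize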
You should follow our advice on memory tuning
-Xms3M
-Xmx3M
Keep them the same, no spaces. I preferred to use M, and never go above 32g
(the JVM gets weird past 32 because it loses compressed object pointers), and
make sure your machine still has the memory to hold your index.
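You can verify where that cutoff lands on your JVM; compressed ordinary object pointers are still on at 31g and off at 32g, at which point every object reference doubles in size:

    java -Xmx31g -XX:+PrintFlagsFinal -version | grep 'bool UseCompressedOops'
    java -Xmx32g -XX:+PrintFlagsFinal -version | grep 'bool UseCompressedOops'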
> On Mar 8, 2023, at 11:27 AM, HariBabu kuruva wrote:
>
> Hi All,
>
> I have set the Heap memory as -Xms 1g -Xmx 40g in the Production
> environment.
Hi,
As previously said, long GC pauses are likely the cause of the Solr/Zookeeper
communication issues. Analyse your GC logs with gceasy.io in order to
confirm this. After that, you need to investigate what is causing so much
heap memory consumption. Maybe you will discover misconceptions in
your schema.
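Recent Solr versions write a GC log by default under the logs directory (solr_gc.log); if yours does not, GC_LOG_OPTS in solr.in.sh controls it. A sketch for Java 9+ unified logging, with an example path:

    # solr.in.sh
    GC_LOG_OPTS="-Xlog:gc*:file=/var/solr/logs/solr_gc.log:time,uptime:filecount=9,filesize=20M"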
In logs I could see this WARN.
2021-08-30 13:15:52.301 WARN (zkCallback-12-thread-3) [c:quoteStore
s:shard1 r:core_node6 x:quoteStore_shard1_replica_n5]
o.a.s.c.RecoveryStrategy Stopping recovery for
core=[quoteStore_shard1_replica_n5] coreNodeName=[core_node6]
On Mon, Aug 30, 2021 at 6:43 PM Ha
Hi Zisis,
Thanks for your email.
We suspect the issue is with one particular Solr collection (or store).
Wherever the replicas of that store are present, those nodes are going down.
Also, that shard is now in recovery mode and a leader is not elected. Could
you please suggest something to bring up
My guess is that the Solr/Zookeeper communication issues are due to GC pauses.
You are saying that you end up with OOM problems. High memory usage puts
pressure on GC. Long GC pauses lead to timeouts in Solr/Zookeeper
communication. We've seen that happening.
First thing I'd do is to get a heap dump.
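For the heap dump itself, the standard JDK tooling is enough (replace <pid> with the Solr process id):

    # dump live objects only (this triggers a full GC first)
    jmap -dump:live,format=b,file=/tmp/solr-heap.hprof <pid>
    # then open the .hprof in Eclipse MAT or VisualVM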
I can't help beyond that; I don't like Solr Cloud or ZooKeeper, and I will
always, if I can help it, stick to a standalone Solr instance.
> On Aug 30, 2021, at 3:23 AM, HariBabu kuruva wrote:
>
> Hi Dave
>
> We tried setting the memory as per your suggestions.
>
> But still I see that Solr is going down in a couple of minutes with an
> OOM error.
Hi Dave
We tried setting the memory as per your suggestions.
But still I see that Solr is going down in a couple of minutes with an
OOM error. Also, the Solr logs show the connectivity issue below between
Solr and Zookeeper. Please advise.
Zookeeper is running fine.
2021-08-30 06:24:13.0
Yes. Don't set those memory restrictions, just Xms and Xmx, both to 31 gigs.
Java has problems past that line and will make the GC go into a bad loop.
I can send you a link as to why:
https://community.datastax.com/questions/3661/why-is-a-32-gb-heap-allocation-not-recommended.html
But this is a
On 8/29/2021 2:38 AM, HariBabu kuruva wrote:
Is it required to define both the parameters SOLR_HEAP and SOLR_JAVA_MEM,
or can I comment out SOLR_HEAP and only define SOLR_JAVA_MEM?
Also, what is the highest Xmx value I can go to if I receive OOM with 31GB?
I have only Solr running on that node.
If
Thanks for your reply Dave.
Is it required to define both the parameters SOLR_HEAP and SOLR_JAVA_MEM,
or can I comment out SOLR_HEAP and only define SOLR_JAVA_MEM?
Also, what is the highest Xmx value I can go to if I receive OOM with 31GB?
I have only Solr running on that node.
And could you please l
Try setting your Xms and Xmx to the same value. You have enough memory to go
pretty high, so I'd try 31GB (31g or 31000m). Do not go over 31. Also disable
swap on the server.
Let me know if it helps
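The usual Linux commands for disabling swap (add vm.swappiness to /etc/sysctl.conf to make it stick across reboots):

    sudo swapoff -a               # turn swap off immediately
    sudo sysctl vm.swappiness=1   # if you keep swap, discourage its use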
> On Aug 29, 2021, at 3:56 AM, HariBabu kuruva wrote:
>
> Hi All
>
> We are using solr cloud 8.8.1