On 3/15/23 06:35, Mark H. Wood wrote:
I'm always flummoxed by this question: what is the total index size?
It's easy to get 'numDocs' from the admin. interface, but there's
nothing I can find there that I would interpret as "index size".
Does this mean the sum of sizes of all files in $CORE/da
In the Solr Admin web interface I click on "Core Selector" and
select one of the shards.
My maxDocs says 33,972,375 and below that the size reports 112.5GB.
Because there are 10 shards in my cloud, I just multiply by 10, which gives
a rough estimate of ~340,000,000 docs and ~1.125TB of index size.
To calculate growth just take
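If you'd rather script that than click through the UI, here is a minimal
Python sketch. It assumes a node at localhost:8983 and uses the CoreAdmin
STATUS API, which reports numDocs and sizeInBytes per core; run it against
each node to get a SolrCloud-wide total.

    import json, urllib.request

    # CoreAdmin STATUS reports per-core index stats for one node
    # (hostname/port are assumptions; adjust for your deployment).
    url = "http://localhost:8983/solr/admin/cores?action=STATUS&wt=json"
    cores = json.load(urllib.request.urlopen(url))["status"]

    total_docs = sum(c["index"]["numDocs"] for c in cores.values())
    total_bytes = sum(c["index"]["sizeInBytes"] for c in cores.values())
    print(f"{total_docs:,} docs, {total_bytes / 2**30:.1f} GiB on this node")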
On Tue, Mar 14, 2023 at 08:21:26AM -0600, Shawn Heisey wrote:
> On 3/14/23 08:01, HariBabu kuruva wrote:
> > Till now it was running with a 45GB heap. I am trying to tune the
> > performance of Solr by adjusting the heap memory.
>
> What is the total index size and total doc count of the server?
It may sound counterintuitive, but allocate as little Java heap as possible to
Solr without causing OOM. Read up on the reference guide links provided as
well as the excellent advice on profiling your heap usage.
Jan Høydahl
> On 14 Mar 2023, at 15:03, HariBabu kuruva wrote:
>
> Hi ,
>
> Till n
Do what I suggested a few days ago. That is how you find out how much heap the
system really needs.
Use a heap analysis tool. You’ll see a sawtooth pattern in the heap size. The
bottom of that sawtooth is the actual amount of memory that Solr is using. Pick
the highest point of the bottom of th
On 3/14/23 08:01, HariBabu kuruva wrote:
Till now it was running with a 45GB heap. I am trying to tune the
performance of Solr by adjusting the heap memory.
What is the total index size and total doc count of the server?
In the past I have run Solr servers with 80 million documents across 3
Hi ,
Till now it was running with a 45GB heap. I am trying to tune the
performance of Solr by adjusting the heap memory.
So, I am looking for your inputs.
On Tue, Mar 14, 2023 at 3:23 PM Jan Høydahl wrote:
> Why do you believe you need such a huge heap as 31g? Can you support such
> a choice
Why do you believe you need such a huge heap as 31g? Can you support such a
choice by some observations or measurements?
Jan
> On 14 Mar 2023, at 06:39, HariBabu kuruva wrote:
>
> Thank you all for your responses.
>
> There are no spaces between Xms and the values.
>
> I have updated similar a
No. Don’t breach 31GB unless you go all the way to 47GB plus. Where you’re at
sounds pretty good if your index is less than 30GB, so it can fit into memory.
> On Mar 14, 2023, at 1:39 AM, HariBabu kuruva
> wrote:
>
> Thank you all for your responses.
>
> There are no spaces between Xms and t
Thank you all for your responses.
There are no spaces between Xms and the values.
I have updated similar arguments (-Xms30720m -Xmx30720m) in one of the
non-prod environments (in MB instead of GB). It correctly shows the max
heap as 30GB in the Solr UI.
So, I would like to update 31.5 GB similarl
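For what it's worth, a one-line sketch of that conversion, since -Xms/-Xmx
only accept integer values with a unit suffix:

    # 31.5 GB must be expressed in whole megabytes for the JVM flags.
    heap_mb = int(31.5 * 1024)               # 32256
    print(f"-Xms{heap_mb}m -Xmx{heap_mb}m")  # -Xms32256m -Xmx32256m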
Use a heap analysis tool. You’ll see a sawtooth pattern in the heap size. The
bottom of that sawtooth is the actual amount of memory that Solr is using. Pick
the highest point of the bottom of the sawtooth, then add some headroom, maybe
a gigabyte. Test with that value.
wunder
Walter Underwood
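One way to capture that sawtooth without a full profiler is to poll Solr's
metrics API. A rough Python sketch, assuming the node runs at localhost:8983
and exposes memory.heap.used under the solr.jvm group (paths may differ by
Solr version):

    import json, time, urllib.request

    URL = ("http://localhost:8983/solr/admin/metrics"
           "?group=jvm&prefix=memory.heap&wt=json")

    floor, prev = None, None
    for _ in range(600):  # sample once a second for ~10 minutes
        jvm = json.load(urllib.request.urlopen(URL))["metrics"]["solr.jvm"]
        used = jvm["memory.heap.used"]
        if prev is not None and used < prev:  # heap dropped: a GC just ran
            # track the highest post-GC low point, i.e. the sawtooth bottom
            floor = used if floor is None else max(floor, used)
        prev = used
        time.sleep(1)

    if floor is not None:
        print(f"sawtooth bottom ~{floor / 2**30:.2f} GiB; add ~1 GiB headroom")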
>Set -Xms to "I know it wants at least this much".
>Set -Xmx to significantly, but not wildly, more.
No, always set them to the same value no matter what. I like increments of
1024M, so I would start at 2048M and work up to 8GB and see how it performs.
Having a test script that forks to how man
On Thu, Mar 09, 2023 at 01:56:11PM +0100, Jan Høydahl wrote:
> It's a waste to set heap to 30g if your use of Solr only requires 6g to
> function. That is 24G of memory not being used for index caching, and it
> may, depending on the chosen GC, cause bigger/longer GC events as more
> garbage piles
Agreed, but oftentimes as a developer you are subject to the requests of those
higher up, and you end up with 30 facets of strings that are the length of
names. But yes, test as low as you can, try to keep the qtimes low, and just
keep adjusting until you are happy with whatever time works fo
It's a waste to set heap to 30g if your use of Solr only requires 6g to
function. That is 24G of memory not being used for index caching, and it may,
depending on the chosen GC, cause bigger/longer GC events as more garbage
piles up before collection.
You have to measure and experiment to find you
Again, set to less than 32, I liked 30
> On Mar 9, 2023, at 1:04 AM, Deepak Goel wrote:
>
> The max heap shown could be the max heap used by the process up till now,
> and not the max value you have set. I would suggest you increase the load
> by at least 20 times to see the max heap go to 32GB.
The max heap shown could be the max heap used by the process up till now, and
not the max value you have set. I would suggest you increase the load by at
least 20 times to see the max heap go to 32GB.
Deepak
"The greatness of a nation can be judged by the way its animals are treated
- Mahatma Gandhi
On 3/8/2023 9:24 AM, HariBabu kuruva wrote:
I have set the Heap memory as -Xms 1g -Xmx 40g in the Production
environment.
But when I see the heap memory in the Solr UI, I can see the Max Heap below.
Max: 3.8Gb
Used: 2.2Gb
The other answers you've gotten are good. This is mostly just a little
Hi,
There should be no spaces; try -Xms1g.
If you add the space, Java will likely fall back to defaults, which is a
certain percentage of physical mem.
See
https://stackoverflow.com/questions/4667483/how-is-the-default-max-java-heap-size-determined
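As a worked example of why the UI showed roughly 3.8GB: modern JVMs default
the max heap to about a quarter of physical RAM (the exact fraction varies by
version), so the numbers line up with a hypothetical 16GB machine:

    # Assumption: the common default of max heap = ~1/4 of physical RAM.
    phys_gib = 16              # hypothetical 16 GiB machine
    print(phys_gib / 4)        # 4.0 -> in the ballpark of "Max: 3.8Gb"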
You should follow our advice on memory tuning
-Xms3M
-Xmx3M
Keep them the same, no spaces. I preferred to use M. Never go above 32g,
because the JVM loses compressed object pointers past that point, and make
sure your machine still has enough memory left over to hold your index.
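If you want to confirm that 32GB boundary on your own JVM rather than take it
on faith, a small sketch (it assumes a java binary on the PATH; the flag
printout is standard HotSpot behaviour):

    import subprocess

    # Ask HotSpot whether compressed ordinary object pointers (oops)
    # are still enabled at a given heap size.
    for xmx in ("31g", "33g"):
        out = subprocess.run(
            ["java", f"-Xmx{xmx}", "-XX:+PrintFlagsFinal", "-version"],
            capture_output=True, text=True).stdout
        line = next(l for l in out.splitlines() if "UseCompressedOops" in l)
        print(xmx, "->", line.strip())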
> On Mar 8, 2023, at 11:27 AM, HariBabu kuruva
> wrote:
>
> Hi All,
>
> I have set the