time. Why?
Thanks.
________
From: Hongxu Ma
Sent: Monday, March 16, 2020 16:46
To: solr-user@lucene.apache.org
Subject: number of documents exceed 2147483519
Hi
I'm using solr-cloud (ver 6.6), got an error:
org.apache.solr.common.SolrException: Exception writing document id (null) to
the index; possible analysis error: number of documents in the index cannot
exceed 2147483519
After googling it, I learned that this number exceeds the per-shard document
limit (the maximum number of documents in a single Lucene index).
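For reference, that number is Lucene's hard cap on documents per index (and
therefore per Solr shard): `IndexWriter.MAX_DOCS`, defined as
`Integer.MAX_VALUE - 128`. A quick sketch of the arithmetic:

```python
# Lucene caps documents per index at IndexWriter.MAX_DOCS,
# a small safety margin below Java's Integer.MAX_VALUE (2^31 - 1).
INT_MAX = 2**31 - 1            # Java Integer.MAX_VALUE = 2147483647
LUCENE_MAX_DOCS = INT_MAX - 128

print(LUCENE_MAX_DOCS)         # 2147483519 -- the number in the error
```

Once a shard approaches this limit, the usual remedies are splitting the shard
or re-creating the collection with more shards.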
The col
erts:0
CACHE.searcher.filterCache.lookups:84
CACHE.searcher.filterCache.maxRamMB:-1
CACHE.searcher.filterCache.ramBytesUsed:70768
CACHE.searcher.filterCache.size:12
CACHE.searcher.filterCache.warmupTime:1
> -----Original Message-----
> From: Hongxu Ma [mailto:inte...@outlook.com]
> Sent: Tuesday, February 1
@Erick Erickson<mailto:erickerick...@gmail.com> and @Mikhail Khludnev
got it, the explanation is very clear.
Thank you for your help.
From: Hongxu Ma
Sent: Tuesday, February 18, 2020 10:22
To: Vadim Ivanov ;
solr-user@lucene.apache.org
Subject: Re: A qu
ter cache,
as well as some examples of current filterCaches in RAM.
A core, for example, with 10 million docs uses about 1.3 MB of RAM for every
filterCache entry.
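That ~1.3 MB figure is consistent with each filterCache entry being stored, in
the worst case, as a bitset with one bit per document in the core (sparse
filters can be stored more compactly, which is why the `ramBytesUsed` stats
above are so small). A rough sketch of the per-entry arithmetic:

```python
def filter_cache_entry_bytes(max_doc: int) -> float:
    # Worst case: a cached filter is a bitset, one bit per doc in the core.
    return max_doc / 8

# A core with 10 million documents:
mb = filter_cache_entry_bytes(10_000_000) / (1024 * 1024)
print(f"~{mb:.2f} MB per cached filter")   # ~1.19 MB, plus object overhead
```

With a small per-entry overhead on top, this lands near the observed ~1.3 MB;
multiply by the cache's `size` limit to budget worst-case filterCache RAM.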
> -----Original Message-----
> From: Hongxu Ma [mailto:inte...@outlook.com]
> Sent: Monday, February 17, 2020 12:13 PM
> To: solr-user@lucene.apach
Hi
I want to know the internal of solr filter cache, especially its memory usage.
I googled some pages:
https://teaspoon-consulting.com/articles/solr-cache-tuning.html
https://lucene.472066.n3.nabble.com/Solr-Filter-Cache-Size-td4120912.html
(Erick Erickson's answer)
All of them said its structu
Hi community
I plan to set up a 128 host cluster: 2 solr nodes on each host.
But I have a little concern about whether solr can support so many nodes.
I searched on wiki and found:
https://cwiki.apache.org/confluence/display/SOLR/2019-11+Meeting+on+SolrCloud+and+project+health
"If you create thous
/PULL replica.
Thanks.
From: Erick Erickson
Sent: Thursday, December 12, 2019 22:49
To: Hongxu Ma
Subject: Re: A question of solr recovery
If you’re using TLOG/PULL replica types, then only changed segments
are downloaded. That replication pattern has a very diff
a long time. Not certain
that’s what’s happening, but something to be aware of.
Best,
Erick
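For reference, replica types are chosen when the collection is created, via
the Collections API's `tlogReplicas` and `pullReplicas` parameters. A minimal
sketch of building such a request URL (the host, port, and collection name are
made-up examples):

```python
from urllib.parse import urlencode

def create_collection_url(base: str, name: str, shards: int,
                          tlog_replicas: int, pull_replicas: int) -> str:
    # Collections API CREATE with TLOG leaders and PULL followers;
    # PULL replicas recover by copying only changed segments from the leader.
    params = {
        "action": "CREATE",
        "name": name,
        "numShards": shards,
        "tlogReplicas": tlog_replicas,
        "pullReplicas": pull_replicas,
    }
    return f"{base}/admin/collections?{urlencode(params)}"

url = create_collection_url("http://localhost:8983/solr", "foo", 2, 1, 2)
print(url)
```

(Note that TLOG/PULL replica types were introduced in Solr 7, so this does not
apply to a 6.6 cluster.)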
> On Dec 10, 2019, at 10:39 PM, Hongxu Ma wrote:
>
> Hi all
> In my cluster, Solr node turned into long time recovery sometimes.
> So I want to know more about recovery and have read a
Hi all
In my cluster, a Solr node sometimes goes into a long recovery.
So I want to know more about recovery and have read a good blog:
https://lucidworks.com/post/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
It mentioned in the recovery section:
"Replays the documents fro
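Roughly speaking, recovery first attempts a "peer sync" that replays the
updates the replica missed from the transaction log, and falls back to a full
index replication when the replica is too far behind; the tlog only retains a
limited number of recent updates (100 by default, tunable via
`numRecordsToKeep` in the `updateLog` config). A simplified sketch of that
decision, not the actual Solr code:

```python
def choose_recovery(missed_updates: int, tlog_records_kept: int = 100) -> str:
    # If every missed update is still in the leader's transaction log,
    # the replica can replay just those updates (cheap "peer sync").
    if missed_updates <= tlog_records_kept:
        return "peer-sync: replay missed updates from the tlog"
    # Otherwise it must copy index files from the leader (expensive).
    return "replication: full index copy from the leader"

print(choose_recovery(40))     # a briefly-offline replica peer-syncs
print(choose_recovery(5000))   # a long-offline replica triggers replication
```

This is one reason a node that was down for a while can sit in recovery for a
long time: it has fallen past the tlog window and must copy the whole index.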
What's your thoughts?
thanks.
From: Shawn Heisey
Sent: Thursday, November 14, 2019 1:15
To: solr-user@lucene.apache.org
Subject: Re: Question about startup memory usage
On 11/13/2019 2:03 AM, Hongxu Ma wrote:
> I have a solr-cloud cluster with a big col
Hi
I have a solr-cloud cluster with a big collection; after startup (before any
search/index operations), its JVM memory usage is 9GB (via top: RES).
Cluster and collection info:
each host: 64G total mem, two solr nodes with -Xmx15G
collection: 9 billion docs in total (but each doc is very small: only
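As a side note, a collection of this size must be spread across several
shards, since each shard is capped at 2147483519 documents (the Lucene limit
discussed earlier in this thread). A quick sketch of the minimum shard count,
assuming an even document distribution:

```python
import math

LUCENE_MAX_DOCS_PER_SHARD = 2_147_483_519   # Lucene per-index document cap
total_docs = 9_000_000_000                  # collection size from the email

# Minimum shards needed so no shard can exceed the cap:
min_shards = math.ceil(total_docs / LUCENE_MAX_DOCS_PER_SHARD)
print(min_shards)   # 5
```

In practice you would provision well above this floor, both for headroom
before the cap and to keep per-shard heap and cache sizes manageable.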
hen this error happens.
Thanks again.
From: Shawn Heisey
Sent: Wednesday, September 18, 2019 20:21
To: solr-user@lucene.apache.org
Subject: Re: Question about "No registered leader" error
On 9/18/2019 6:11 AM, Shawn Heisey wrote:
> On 9/17/2019 9:3
Hi all
I got an error when I was doing index operation:
"2019-09-18 02:35:44.427244 ... No registered leader was found after waiting
for 4000ms, collection: foo slice: shard2"
Besides that, there is no other error in the solr log.
Collection foo has 2 shards, so I checked their JVM GC logs:
* 20
e customer define their problem so they properly model
their search problem. This may mean that the result will be a hybrid where Solr
is used for the free-text search and the RDBMS uses the results of the search
to do something. Or vice versa.
FWIW
Erick
> On Sep 2, 2019, at 5:55 AM, H
but
> maybe it is worth to consolidate some collections to avoid also
> administrative overhead.
>
>> Am 29.08.2019 um 05:27 schrieb Hongxu Ma :
>>
>> Hi
>> I have a solr-cloud cluster, but it's unstable when collection number is
>> big: 1000 replica
To: solr-user@lucene.apache.org
Subject: Re: Question: Solr perform well with thousands of replicas?
On 8/28/2019 9:27 PM, Hongxu Ma wrote:
> I have a solr-cloud cluster, but it's unstable when collection number is big:
> 1000 replica/core per solr node.
>
> To sol
Hi
I have a solr-cloud cluster, but it is unstable when the collection count is
big: 1000 replicas/cores per solr node.
To solve this issue, I have read the performance guide:
https://cwiki.apache.org/confluence/display/SOLR/SolrPerformanceProblems
I noticed there is a sentence in the SolrCloud section:
"R