RE: Re: [External Email] Re: Shard Split and composite id

2022-05-18 Thread Hasmik Sarkezians
Thank you for taking the time and explaining this. hasmik On 2022/05/18 16:56:29 Hasmik Sarkezians wrote: > Thanks for the reply. > > It doesn't matter to me which shard the document ends up in, just matters > how many shards the document ends up with: > > And seems like I wouldn't have control o

Re: How to select the class of fields

2022-05-18 Thread WU, Zhiqing
Hi Thomas, Many thanks for your suggestion. It seems many people have the same issue. OK, we will use two different fields to reach our target. Kind regards, Zhiqing On Tue, 17 May 2022 at 22:12, Thomas Corthals wrote: > Hi Zhiqing, > > It is very common with Solr to have the same value indexed
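
A minimal sketch of the "two different fields" approach via Solr's Schema API over HTTP (Python; the collection name, the field names and the use of the stock "string" and "text_general" types are illustrative assumptions, not taken from the thread):

    import requests

    SOLR = "http://localhost:8983/solr/mycollection"  # hypothetical collection

    # One exact-match (string) field, one tokenized text field, linked by a copyField
    # so the same source value is indexed both ways.
    schema_changes = {
        "add-field": [
            {"name": "title_exact", "type": "string", "stored": True},
            {"name": "title_text", "type": "text_general", "stored": False},
        ],
        "add-copy-field": {"source": "title_exact", "dest": "title_text"},
    }

    resp = requests.post(f"{SOLR}/schema", json=schema_changes)
    resp.raise_for_status()
    print(resp.json())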

Re: Backup failing and taking previous backup data with it

2022-05-18 Thread Michael B. Klein
I thought I’d follow up on this with some good news. It turns out nothing was going wrong. Back when we were using Solr 7 (and taking daily non-incremental snapshots), one of our admins created a cron job on a service account to clean out snapshots older than 2 weeks. Now that we’re doing daily
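
For collections on recent 8.x releases (incremental backups arrived around 8.9, if I remember correctly), retention can be handled by Solr itself with the Collections API DELETEBACKUP action instead of an external cron job. A rough sketch (Python; the backup name, collection name and location are made-up placeholders):

    import requests

    SOLR = "http://localhost:8983/solr"
    BACKUP_NAME = "nightly"          # hypothetical backup name
    LOCATION = "/mnt/solr-backups"   # hypothetical shared location

    # Take an incremental backup of the collection.
    requests.get(f"{SOLR}/admin/collections", params={
        "action": "BACKUP",
        "name": BACKUP_NAME,
        "collection": "mycollection",
        "location": LOCATION,
    }).raise_for_status()

    # Let Solr prune old backup points (keep the newest 14, roughly two
    # weeks of daily backups) rather than deleting files out from under it.
    requests.get(f"{SOLR}/admin/collections", params={
        "action": "DELETEBACKUP",
        "name": BACKUP_NAME,
        "location": LOCATION,
        "maxNumBackupPoints": 14,
    }).raise_for_status()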

Re: [External Email] Re: Shard Split and composite id

2022-05-18 Thread Shawn Heisey
On 5/18/22 10:56, Hasmik Sarkezians wrote: Thanks for the reply. It doesn't matter to me which shard the document ends up in, just matters how many shards the document ends up with: And seems like I wouldn't have control over that as the number of shards grows. I've been thinking about some d

Re: Growing cores after upgrade to 8.11.1

2022-05-18 Thread Gus Heck
Your link leads to a signup page with advertising for clothing. Please don't do that. On Wed, May 18, 2022 at 1:35 PM Jesús Roca wrote: > Hello, > > We are having a problem. > > We have a cluster with Solr 8 (15 nodes running RHEL) and ZooKeeper 3.6.2 > (5 nodes) and only one collection of around

Growing cores after upgrade to 8.11.1

2022-05-18 Thread Jesús Roca
Hello, We are having a problem. We have a cluster with Solr 8 (15 nodes running RHEL) and ZooKeeper 3.6.2 (5 nodes) and only one collection of around 48 million docs with 10 shards and a replication factor of 3, so every server holds 2 cores. A couple of weeks ago we performed an upgrade from S
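
One way to watch per-core growth across the nodes is the CoreAdmin STATUS API, which reports index size and segment counts for each core. A small sketch (Python; the node hostnames are placeholders):

    import requests

    # Hypothetical node list; adjust to the real hosts.
    NODES = ["http://solr-node-01:8983", "http://solr-node-02:8983"]

    for node in NODES:
        status = requests.get(f"{node}/solr/admin/cores",
                              params={"action": "STATUS", "wt": "json"}).json()
        for core, info in status["status"].items():
            idx = info["index"]
            print(f"{node} {core}: {idx['size']} "
                  f"({idx['numDocs']} docs, {idx['segmentCount']} segments)")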

Re: [External Email] Re: Shard Split and composite id

2022-05-18 Thread Hasmik Sarkezians
Thanks for the reply. It doesn't matter to me which shard the document ends up in, just matters how many shards the document ends up with: And seems like I wouldn't have control over that as the number of shards grows. thanks, hasmik On Wed, May 18, 2022 at 11:38 AM Shawn Heisey wrote: > On

Re: Shard Split and composite id

2022-05-18 Thread Shawn Heisey
On 5/18/22 08:42, Hasmik Sarkezians wrote: Have a question about shard splitting and compositeId usage. We are starting a solr collection with X number of shards for our multi-tenant application. We are assuming that the number of shards will increase over time as the number of customers grows as
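
For reference, a shard is split through the Collections API SPLITSHARD action; running it async and polling with REQUESTSTATUS is the usual pattern. A rough sketch (Python; the collection name and request id are placeholders):

    import requests

    SOLR = "http://localhost:8983/solr"

    # Split shard1 of a hypothetical collection in two; run it async.
    requests.get(f"{SOLR}/admin/collections", params={
        "action": "SPLITSHARD",
        "collection": "mycollection",
        "shard": "shard1",
        "async": "split-shard1",
    }).raise_for_status()

    # Check the async request status later.
    status = requests.get(f"{SOLR}/admin/collections", params={
        "action": "REQUESTSTATUS",
        "requestid": "split-shard1",
    }).json()
    print(status)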

Shard Split and composite id

2022-05-18 Thread Hasmik Sarkezians
Have a question about shard splitting and compositeId usage. We are starting a solr collection with X number of shards for our multi-tenant application. We are assuming that the number of shards will increase over time as the number of customers grows as well as the customer data. We are thinking
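
For background on the compositeId router discussed here: prefixing the document id with a tenant key (e.g. "customerA!doc1") hashes all of that tenant's documents to the same shard, and an optional "/bits" suffix lets a very large tenant spread over a fraction of the shards. A hedged sketch of indexing and routing a query (Python; the collection name, field names and tenant ids are invented for illustration):

    import requests

    SOLR = "http://localhost:8983/solr/mycollection"  # collection using the compositeId router

    docs = [
        # "customerA!" is the routing prefix: all of customerA's docs hash to one shard.
        {"id": "customerA!doc1", "tenant_s": "customerA", "title_t": "first doc"},
        {"id": "customerA!doc2", "tenant_s": "customerA", "title_t": "second doc"},
        # "/2" takes only 2 bits of the hash from the prefix, spreading this
        # large tenant over roughly 1/4 of the collection's shards.
        {"id": "bigCustomer/2!doc9", "tenant_s": "bigCustomer", "title_t": "big tenant doc"},
    ]

    requests.post(f"{SOLR}/update", json=docs,
                  params={"commit": "true"}).raise_for_status()

    # Restrict a query to the tenant's shard(s) with the _route_ parameter.
    hits = requests.get(f"{SOLR}/select", params={
        "q": "title_t:doc", "_route_": "customerA!",
    }).json()
    print(hits["response"]["numFound"])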

R: Query /admin/info/system very slow

2022-05-18 Thread Tealdi Paolo
Hi all. I'm partially answering myself. After a reboot of both servers, it seems that the problem has gone away. The response time dropped to 0.5 sec. I will monitor whether the problem returns. I'm supposing that some nasty thing happened to both servers in the last few days that corrupted the me

Query /admin/info/system very slow

2022-05-18 Thread Tealdi Paolo
Hi all. I'm reporting a very slow response from my Solr cluster (two identical nodes, FreeBSD 12.3) with the query /admin/info/system. This query is used by the Solr UI web interface for home page rendering: the browser receives the Solr home page but can't render the Solr cluster page (the one with
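
A quick way to see which node is slow is to time the endpoint directly on each node. A small sketch (Python; the hostnames are placeholders):

    import time
    import requests

    NODES = ["http://solr1:8983", "http://solr2:8983"]  # hypothetical node names

    for node in NODES:
        start = time.monotonic()
        resp = requests.get(f"{node}/solr/admin/info/system", params={"wt": "json"})
        elapsed = time.monotonic() - start
        qtime = resp.json().get("responseHeader", {}).get("QTime")
        print(f"{node}: HTTP {resp.status_code} in {elapsed:.2f}s (QTime {qtime} ms)")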