despite one node failing to reload.
We are planning to upgrade to 9.2.1; hopefully that will solve the issue.
---
Nick Vladiceanu
vladicean...@gmail.com
> On 9. Jun 2023, at 16:33, Shawn Heisey wrote:
>
> On 6/9/23 03:05, Nick Vladiceanu wrote:
>> autoCommit is enabled
updates to the collections. Does anyone have better ideas on how to monitor the
lag of the active index?
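A minimal sketch of one way to check this, assuming the implicit /replication
handler that every SolrCloud core registers; declaring it in solrconfig.xml is
optional and shown only for visibility:

<!-- /replication is implicit in SolrCloud. Querying
     <core_url>/replication?command=details on a TLOG/PULL follower returns
     the index generation and last-replication details, which can be compared
     against the leader to estimate lag. -->
<requestHandler name="/replication" class="solr.ReplicationHandler"/>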
Thanks,
---
Nick Vladiceanu
vladicean...@gmail.com
> On 9. Jun 2023, at 10:34, Shawn Heisey wrote:
>
> On 6/9/23 01:43, Nick Vladiceanu wrote:
>> We noticed that we get inc
ted, and why the aliases weren’t switched
to use the new index? Are there any metrics that could tell us the time since
the last replication from the leader of the active index?
Thanks in advance.
Best regards,
---
Nick Vladiceanu
vladicean...@gmail.com
ShardHandlerFactory, etc.)?
> >
> >
> > That is a great idea. (Obviously with the operator you need to keep some of
> > the values there that it relies on, but I think everything it uses is
> > vanilla starting with Solr 9)
> >
> > - Houston
> >
>
not behave stably? Do you
think it makes sense to go with a vanilla solrconfig.xml and introduce all the
custom options one by one (i.e., ShardHandlerFactory, etc.)?
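For illustration, a sketch of reintroducing a single custom option on top of an
otherwise vanilla solrconfig.xml; the timeout values here are placeholder
assumptions, not recommendations:

<!-- Global shard handler, added back one option at a time on top of the
     vanilla config. socketTimeout and connTimeout are in milliseconds. -->
<shardHandlerFactory name="shardHandlerFactory" class="HttpShardHandlerFactory">
  <int name="socketTimeout">600000</int>
  <int name="connTimeout">60000</int>
</shardHandlerFactory>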
---
Nick Vladiceanu
vladicean...@gmail.com
> On 18. Jan 2023, at 18:41, Kevin Risden wrote:
>
> So I am going to s
old GC activity; better response time, less pressure on GC;
circuitBreaker:
disabling the circuitBreaker;
Result: no impact.
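For context, a sketch of what disabling the circuit breaker can look like,
assuming the CircuitBreakerManager syntax used by Solr 9.0-era configs; the
threshold is an example value:

<!-- solrconfig.xml: enabled="false" switches the breakers off entirely. -->
<circuitBreaker class="solr.CircuitBreakerManager" enabled="false">
  <str name="memEnabled">true</str>
  <str name="memThreshold">75</str>
</circuitBreaker>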
---
Nick Vladiceanu
vladicean...@gmail.com
> On 20. Dec 2022, at 15:58, Shawn Heisey wrote:
>
> On 12/20/22 06:34, Nick Vladiceanu wrote:
>> Thank y
Thank you, Shawn, for sharing; that is indeed useful information.
However, I must say that we only used deleteById and never deleteByQuery. We
also rely only on automatic segment merging and never issue the optimize command.
Thanks,
---
Nick Vladiceanu
vladicean...@gmail.com
> On 20. Dec 2022, at 11
Unfortunately we couldn’t find the root cause of this behaviour in Solr 9 and
were thus forced to roll back to 8.11.
Has anyone else faced issues similar to those mentioned in this thread? Any
ideas on how we should proceed in such a case?
Thanks
---
Nick Vladiceanu
vladicean...@gmail.com
> O
.http1=true
>
>
>
> On Mon, Dec 5, 2022 at 5:08 AM Nick Vladiceanu <vladicean...@gmail.com>
> wrote:
>
>> Hello folks,
>>
>> We’re running our SolrCloud cluster in Kubernetes. Recently we’ve upgraded
>> from 8.11 to 9.0 (and eventuall
Did anyone face similar issues after upgrading to version 9 of Solr? Could you
please advise where we should focus our attention while debugging this
behavior? Any other advice or suggestions?
Thank you
Best regards,
Nick Vladiceanu
Sounds great, I’ve created a Jira ticket here:
https://issues.apache.org/jira/browse/SOLR-16485
Thank you
Best regards,
Nick Vladiceanu
> On 20. Oct 2022, at 7:02 PM, Houston Putman wrote:
>
> So it looks like this could
configuration as it used to be in previous versions of Solr?
Did anyone else face this issue? What would be the approach to solving it?
Perhaps there is a bug reported already? Thanks
Best regards,
Nick Vladiceanu
> On 8. Nov 2021, at 11:45 PM, Shawn Heisey wrote:
>
> On 11/8/21 2:05 PM, Nick Vladiceanu wrote:
>> Ok, makes sense. However, when the core is initially created, the data is
>> not yet there. Running the firstSearcher queries against an empty index won’t
>> have any effect.
, and therefore run the
warmup queries? What’s the point of opening the first searcher when the core is
initially created, if there is no data?
> On 8. Nov 2021, at 9:30 PM, Shawn Heisey wrote:
>
> On 11/8/21 12:44 PM, Nick Vladiceanu wrote:
>> When the “firstSearcher” queries
Hello,
I’m trying to warm up the caches of brand-new cores using “firstSearcher”
queries in my solrconfig.xml.
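A minimal sketch of such a listener, in the <query> section of solrconfig.xml;
the warming query and the sort field are placeholders:

<query>
  <!-- Runs once, when the first searcher of a core is opened, to pre-warm
       the caches before real traffic hits the core. -->
  <listener event="firstSearcher" class="solr.QuerySenderListener">
    <arr name="queries">
      <lst>
        <str name="q">*:*</str>
        <str name="sort">id asc</str>
      </lst>
    </arr>
  </listener>
</query>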
What I’m observing is that right after the new core is created
("CoreAdminOperation core create command ...” and "Opening new SolrCore at …”),
the “searcherExecutor” is fired an
Try looking into the "/solr/admin/autoscaling” settings; there might be an
autoscaling option to add new replicas automatically.
> On 1. Nov 2021, at 3:13 PM, Michael Conrad wrote:
>
> Update:
>
> It removed replica solr-0001 and now I have the leader and replica on the
> same node?
>
> -Mike/News
Is there a particular reason for using TLOG replica types? For such a small
cluster and the scenario you’ve described, it sounds more reasonable to use NRT,
which will (almost) guarantee that once you write your data, it will be
available on all the nodes (almost) immediately.
> On 3. Sep 2021,
regards,
Nick Vladiceanu
> On 11. Jun 2021, at 6:38 PM, Houston Putman wrote:
>
> So the issue seems to be with the autocommit time.
>
> The PULL and TLOG followers fetch the index every x seconds. This 'x' is
> 1/2 of the autocommit time, so when you increas
. maybe there is some difference in how the OS
> handles the page cache and memory mapping the index files if they come
> in cold over the network vs being actively written by the Solr
> process. What kind of storage are you using?
>
> On Fri, Jun 11, 2021 at 10:38 AM Nick Vladicea
Are they
> both slower, or just one?
>
> Mike
>
> On 2021/06/11 12:55:31, Nick Vladiceanu wrote:
>> hello,
>> I’m facing some performance issues when moving from NRT replica types to
>> TLOG + PULL. We’re constantly indexing new data and heavily querying (~2k
>
LMK
>
> Tim
>
> On Fri, Jun 11, 2021 at 6:55 AM Nick Vladiceanu
> wrote:
>>
>> hello,
>> I’m facing some performance issues when moving from NRT replica types to
>> TLOG + PULL. We’re constantly indexing new data and heavily querying (~2k
>> rps
>
> <useColdSearcher>false</useColdSearcher>
> <maxWarmingSearchers>8</maxWarmingSearchers>
>
One of my assumptions was to reduce maxWarmingSearchers and to increase the
autoCommit maxTime, since softCommit isn’t available anymore with TLOG
replicas. Is that valid?
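For concreteness, a sketch of the two knobs in question; the values are
placeholders, not recommendations:

<!-- solrconfig.xml: the leader's hard-commit cadence effectively drives
     visibility for TLOG/PULL followers, which reopen their searcher after
     each index fetch (per the quoted reply, fetches run at 1/2 the
     autoCommit interval). -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>true</openSearcher>
  </autoCommit>
</updateHandler>

<query>
  <maxWarmingSearchers>2</maxWarmingSearchers>
  <useColdSearcher>false</useColdSearcher>
</query>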
I couldn’t find any documentation on the differences and considerations to take
into account between NRT and TLOG; could you please help? Thanks a lot in
advance. Please let me know if anything else is required.
Best regards,
Nick Vladiceanu