> <maxTime>100}</maxTime>

There seems to be a typo here with the trailing "}"?
Also, 100 ms is an unusually short soft commit interval. You risk that commits
pile up during rapid indexing and cause inefficiencies. I'd increase it to at
least 1000 ms.
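For reference, a corrected autoSoftCommit block might look like the sketch below: the stray "}" removed and the interval raised. The 1000 ms value is only an illustrative starting point, not a recommendation specific to your workload; tune it to how quickly new documents need to become searchable.

```xml
<!-- Soft commit: makes new documents visible to searchers without a full
     flush to disk. The stray "}" after the value is removed, and the
     interval is raised from 100 ms to 1000 ms (illustrative value only;
     tune to your visibility requirements). -->
<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>
```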

Can you reproduce this on an idle system by simply adding ONE document?
What does your document look like? Number of fields, size, nested docs etc? 
Does it happen every time or just once in a while?
Do you have access to system metrics for the server and the JVM that can tell
us something about their general health and load?

Jan


> 28. sep. 2023 kl. 13:54 skrev John Jackson <john382...@gmail.com>:
> 
> How many docs have you added before the softCommit?
> 
>>> only one record EMP5487098118986160 added.
> 
> Do you use any cache warming or other commit hooks?
> 
>>> No, we are not using any cache warming, and our commit settings in solrconfig.xml are below:
> 
> <autoCommit>
> <maxTime>600000</maxTime>
> <maxDocs>20000</maxDocs>
> <openSearcher>false</openSearcher>
> </autoCommit>
> <autoSoftCommit>
> <maxTime>100}</maxTime>
> </autoSoftCommit>
> 
> We are indexing via ZooKeeper and do not commit explicitly after indexing
> because autocommit is configured in solrconfig.xml.
> 
> On Thu, Sep 28, 2023 at 5:15 PM Jan Høydahl <jan....@cominvent.com> wrote:
> 
>> How many docs have you added before the softCommit?
>> Do you use any cache warming or other commit hooks?
>> 
>> Jan
>> 
>>> 28. sep. 2023 kl. 13:28 skrev John Jackson <john382...@gmail.com>:
>>> 
>>> Hello
>>> 
>>> We are using Solr 8.9.0, configured as SolrCloud with 2 shards, and
>>> each shard has one replica. We use a 5-node ZooKeeper ensemble for SolrCloud.
>>> 
>>> We have used the below schema fields in the employee collection:
>>> 
>>> <field name="id" type="string" indexed="true" stored="true" required="true"
>>> multiValued="false" docValues="true"/>
>>> <dynamicField name="/*" type="text" indexed="true" stored="true"
>>> multiValued="true"/>
>>> 
>>> 
>>> 
>>> *Total no of records*: 8562099
>>> *Size of instances:*
>>> solrgnrls2r1 ---- 67 GB
>>> solrgnrls1 ---- 66 GB
>>> solrgnrls1r1 ---- 66 GB
>>> solrgnrls2 ---- 68 GB
>>> 
>>> *Solr logs:*
>>> 
>>> 2023-09-14 10:04:30.705 DEBUG (qtp1984975621-8805766) [c:forms s:shard1 r:core_node3 x:forms_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 updateDocuments(add{_version_=1777003156686766080,id=EMP5487098118986160})
>>> 2023-09-14 10:04:30.710 INFO  (qtp1984975621-8805766) [c:forms s:shard1 r:core_node3 x:forms_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [forms_shard1_replica_n1] webapp=/solr path=/update params={wt=javabin&version=2}{add=[FORM5487098118986160 (1777003156686766080)]} 0 5
>>> 2023-09-14 10:04:30.807 DEBUG (commitScheduler-930-thread-1) [c:employee s:shard1 r:core_node3 x:employee_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 start commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=true,prepareCommit=false}
>>> 2023-09-14 10:04:35.134 DEBUG (commitScheduler-930-thread-1) [c:employee s:shard1 r:core_node3 x:employee_shard1_replica_n1] o.a.s.s.SolrIndexSearcher Opening [Searcher@796ab9b9[employee_shard1_replica_n1] main]
>>> 2023-09-14 10:04:35.413 DEBUG (commitScheduler-930-thread-1) [c:employee s:shard1 r:core_node3 x:employee_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 end_commit_flush
>>> 
>>> 
>>> Why is the commitScheduler thread taking 5 seconds to complete? Due to
>>> this, we cannot see the latest update for id EMP5487098118986160. We also
>>> have another collection with an index size of 120 GB and 744620373
>>> documents, yet there is no slowness in its soft commits.
>>> 
>>> When we checked the Solr source code, we found that the time is spent in:
>>> 
>>> ExitableDirectoryReader.wrap(
>>>     UninvertingReader.wrap(reader, core.getLatestSchema().getUninversionMapper()),
>>>     SolrQueryTimeoutImpl.getInstance());
>>> this.leafReader = SlowCompositeReaderWrapper.wrap(this.reader);
>>> 
>>> How can we troubleshoot the issue?
>> 
>> 
