Thank you, Radu, for the quick response. I have updated the values as
you suggested.
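For reference, the updated commit settings in solrconfig.xml would look roughly like this. This is a sketch using the intervals mentioned in the thread and Radu's suggested 100MB cap; openSearcher=false is an assumption (a common companion to autoCommit), not something stated in the thread:

```xml
<!-- solrconfig.xml commit settings (sketch; intervals from the thread,
     openSearcher=false is an assumed, commonly used companion setting) -->
<autoCommit>
  <maxTime>60000</maxTime>   <!-- hard commit every 60 s -->
  <maxSize>100m</maxSize>    <!-- ...or once uncommitted updates reach ~100MB -->
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>15000</maxTime>   <!-- soft commit every 15 s for search visibility -->
</autoSoftCommit>
```

With maxSize in place, an indexing spike triggers a hard commit early, so the tlog that must be replayed on restart stays bounded.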


Nikhilesh Jannu      Principal Software Engineer      405.609.4259


On Wed, Jun 29, 2022 at 12:03 AM Radu Gheorghe <[email protected]>
wrote:

> Hi Nikhilesh,
>
> Try hard-committing more often. This way you'll have smaller tlog files and
> there will be less data to recover. My suggestion is to add a maxSize
> constraint to autoCommit. 100MB is a good rule of thumb; it ensures you
> don't replay more than 100MB worth of data, even if you have an indexing
> spike.
>
> Best regards,
> Radu
> --
> Elasticsearch/OpenSearch & Solr Consulting, Production Support & Training
> Sematext Cloud - Full Stack Observability
> http://sematext.com/
>
>
> On Wed, Jun 29, 2022 at 9:10 AM Nikhilesh Jannu <
> [email protected]>
> wrote:
>
> > Dear Users,
> >
> > We are using a Solr TRA (Time Routed Alias) collection for capturing
> > logs. We write the logs to Solr via the REST API in batches of 100, with
> > a soft commit interval of 15000 ms and a hard commit interval of 60000 ms.
> >
> > Solr Version : 8.11.1.
> >
> > When we restart a Solr node in the cloud, the current day's collection
> > goes into recovery mode and we see the logs below. The recovery process
> > takes a long time to complete. Not sure how to avoid it. Any
> > suggestions?
> >
> > Sample of the logs below.
> >
> > 2022-06-29 06:03:27.500 INFO (recoveryExecutor-67-thread-1-processing-n:10.0.42.157:8983_solr x:logs__TRA__2022-06-29_shard1_replica_n1 c:logs__TRA__2022-06-29 s:shard1 r:core_node2) [c:logs__TRA__2022-06-29 s:shard1 r:core_node2 x:logs__TRA__2022-06-29_shard1_replica_n1] o.a.s.u.UpdateLog log replay status tlog{file=/var/solr/data/logs__TRA__2022-06-29_shard1_replica_n1/data/tlog/tlog.0000000000000000243 refcount=3} active=false starting pos=0 current pos=1119002110 current size=3287152529 % read=34.0
> >
> > Regards,
> > Nikhilesh Jannu
> >
>
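The batched REST indexing described in the thread can be sketched roughly like this. The endpoint, collection name, and field names are illustrative assumptions, not taken from the thread; commits are deliberately left to autoCommit/autoSoftCommit rather than issued per request:

```python
# Sketch of batched log indexing against Solr's JSON update endpoint.
# SOLR_URL, "id", and "message_t" are hypothetical; adjust to your schema.
import json
import urllib.request

SOLR_URL = "http://localhost:8983/solr/logs/update"  # hypothetical endpoint

def build_batches(messages, batch_size=100):
    """Turn raw log messages into Solr documents, split into batches."""
    docs = [{"id": str(i), "message_t": m} for i, m in enumerate(messages)]
    return [docs[i:i + batch_size] for i in range(0, len(docs), batch_size)]

def post_batch(batch):
    """POST one batch as JSON; no explicit commit, autoCommit handles it."""
    req = urllib.request.Request(
        SOLR_URL,
        data=json.dumps(batch).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

# Usage (against a running Solr):
#   for batch in build_batches(log_lines):
#       post_batch(batch)
```

Sending fixed-size batches without per-request commits keeps update overhead low, which is why the commit cadence (and the maxSize cap suggested above in the thread) matters for bounding tlog growth.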
