Hi Ritvik,

It looks like your replica(s) are in recovery and you're indexing faster than they can replay their tlogs, so they never manage to catch up. I've had this happen in our production Solr clusters (also on 6.6) many times; when it does, we have to throttle our indexing workloads so that Solr can catch up with all the updates buffered in the tlogs.

Brian
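[For anyone hitting the same issue: the "throttle our indexing workloads" advice above can be done entirely client-side. Below is a minimal sketch of a rate limiter to call before each update batch; the class name, the rate value, and the `send_batch` callable it gates are all hypothetical, not something from this thread or from Solr itself.]

```python
import time

class IndexThrottle:
    """Caps the rate of update batches sent to Solr.

    Call wait() before each batch; it sleeps just long enough to keep
    the send rate at or below max_batches_per_sec.
    """

    def __init__(self, max_batches_per_sec: float):
        self.min_interval = 1.0 / max_batches_per_sec
        self.last_send = 0.0  # monotonic timestamp of the previous send

    def wait(self) -> None:
        # Sleep for whatever remains of the minimum interval since the
        # last send; a negative delay means we are already under the cap.
        now = time.monotonic()
        delay = self.min_interval - (now - self.last_send)
        if delay > 0:
            time.sleep(delay)
        self.last_send = time.monotonic()

# Usage sketch: throttle = IndexThrottle(5.0)
# for batch in batches:
#     throttle.wait()
#     send_batch(batch)   # hypothetical client call (pysolr, raw HTTP, ...)
```

The point is simply to give the recovering replica headroom: if the leader receives updates slower than the replica can replay its tlog, the backlog shrinks instead of growing.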
On Tue, Mar 23, 2021 at 7:19 AM Ritvik Sharma <ritvik.s...@gmail.com> wrote:

> Hi Bernd,
>
> Thanks for the reply.
>
> These are the errors I am getting:
>
> INFO - 2021-03-23 19:45:32.433; [c:solrcollection s:shard2 r:core_node4 x:solrcollection_shard2_replica2] org.apache.solr.update.processor.DistributedUpdateProcessor; Ignoring commit while not ACTIVE - state: APPLYING_BUFFERED replay: false
> INFO - 2021-03-23 19:45:32.444; [c:solrcollection s:shard2 r:core_node4 x:solrcollection_shard2_replica2] org.apache.solr.update.processor.DistributedUpdateProcessor; Ignoring commit while not ACTIVE - state: APPLYING_BUFFERED replay: false
> INFO - 2021-03-23 19:45:32.874; [c:solrcollection s:shard2 r:core_node4 x:solrcollection_shard2_replica2] org.apache.solr.update.processor.DistributedUpdateProcessor; Ignoring commit while not ACTIVE - state: APPLYING_BUFFERED replay: false
> INFO - 2021-03-23 19:45:33.031; [c:solrcollection s:shard2 r:core_node4 x:solrcollection_shard2_replica2] org.apache.solr.update.processor.DistributedUpdateProcessor; Ignoring commit while not ACTIVE - state: APPLYING_BUFFERED replay: false
> INFO - 2021-03-23 19:45:33.331; [c:solrcollection s:shard2 r:core_node4 x:solrcollection_shard2_replica2] org.apache.solr.update.processor.DistributedUpdateProcessor; Ignoring commit while not ACTIVE - state: APPLYING_BUFFERED replay: false
>
> On Tue, 23 Mar 2021 at 19:20, Bernd Fehling <bernd.fehl...@uni-bielefeld.de> wrote:
>
> > Hi,
> >
> > Without any more info about your system and configs it is impossible to guess what the problem could be. But generally I can say that SolrCloud 6.6 has no problem here: I run a cloud with 5 shards and 2 replicas each on 5 nodes, holding 260 million records in total, and see somewhere between 1 and 3 tlog files of up to 50 MB each.
> >
> > Bernd
> >
> > On 23.03.21 at 10:40, Ritvik Sharma wrote:
> >
> > > Hi,
> > >
> > > I am facing an issue where the tlog size in each shard's replica is growing to ~150 GB, whereas the actual index is only ~40 GB.
> > >
> > > I have also enabled hard commit and passed *commit=true* while indexing the data. Still no luck.
> > >
> > > Can you help in this regard?

--
*Brian Lininger*
Technical Architect, Infrastructure & Search
*Veeva Systems*
brian.linin...@veeva.com
*Zoom:* https://veeva.zoom.us/j/8113896271
www.veeva.com
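[A note on the hard-commit point in Ritvik's original message: Solr rolls over the transaction log on each hard commit, so in normal operation a bounded `<autoCommit>` in solrconfig.xml keeps individual tlogs small. However, as the "Ignoring commit while not ACTIVE - state: APPLYING_BUFFERED" log entries above show, a replica in recovery buffers updates and ignores commits, so tlogs can still grow until recovery finishes. A typical autoCommit sketch follows; the threshold values are illustrative assumptions, not recommendations for any particular cluster.]

```xml
<!-- solrconfig.xml excerpt (illustrative values) -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>25000</maxDocs>           <!-- hard-commit after this many docs... -->
    <maxTime>60000</maxTime>           <!-- ...or after 60s, whichever comes first -->
    <openSearcher>false</openSearcher> <!-- flush/roll tlogs without reopening a searcher -->
  </autoCommit>
  <autoSoftCommit>
    <maxTime>120000</maxTime>          <!-- soft commit controls search visibility -->
  </autoSoftCommit>
</updateHandler>
```

With `openSearcher=false`, the hard commit is purely a durability/tlog-rollover mechanism and is cheap enough to run frequently; visibility is handled separately by the soft commit.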