Hi Harsha,
Thanks for the reply.
The issue is resolved for now; the root cause was a runaway application 
spawning many instances of kafkacat and hammering the Kafka brokers. I am 
still wondering what the reason for the ISR shrink and expand could be when a 
client hammers a broker.
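For what it's worth, here is one plausible mechanism, though this is my 
assumption rather than something confirmed from the broker code: a flood of 
client connections and requests can saturate the brokers' network and 
request-handler threads, delaying the followers' fetch requests. Once a 
follower has not caught up within replica.lag.time.max.ms, the leader drops it 
from the ISR (shrink); when the load eases and the follower catches up, it is 
re-added (expand), and under sustained load this repeats as a cycle.

If it happens again, under-replicated partitions can be watched with something 
like the following (zk1:2181 is a placeholder for your ZooKeeper connect 
string):

  kafka-topics.sh --zookeeper zk1:2181 --describe --under-replicated-partitions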
--Ashish 
    On Thursday, January 24, 2019, 8:53:10 AM PST, Harsha Chintalapani 
<ka...@harsha.io> wrote:  
 
 Hi Ashish,
           What's your replica.lag.time.max.ms set to, and do you see any 
network issues between the brokers?
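For context, a sketch of the relevant server.properties settings; the values 
shown are, I believe, the 1.1 defaults, so treat them as assumptions and check 
your actual broker config:

  # A follower is dropped from the ISR if it has not caught up to the
  # leader within this many milliseconds (shrink); it is re-added once
  # it catches up again (expand).
  replica.lag.time.max.ms=10000

  # Fetcher threads per source broker; raising this can help followers
  # keep up while the brokers are under heavy client load.
  num.replica.fetchers=1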
-Harsha



On Jan 22, 2019, 10:09 PM -0800, Ashish Karalkar 
<ashish_karal...@yahoo.com.INVALID>, wrote:
> Hi All,
> We just upgraded from 0.10.x to 1.1 and enabled rack awareness on an existing 
> cluster which has about 20 nodes in 4 racks. After this we see a few brokers 
> go through a continuous cycle of expanding the ISR and shrinking it back to 
> just themselves, and it is also causing high latency when serving metadata 
> requests.
> What is the impact of enabling rack awareness on an existing cluster, assuming 
> the replication factor is 3 and the existing replicas may or may not be in 
> different racks? Rack awareness was enabled, after which a rolling bounce was 
> done.
> The symptoms we are seeing are replica lag and slow metadata requests. In the 
> broker logs we also continuously see disconnections from the broker to which 
> it is trying to expand the ISR.
> Thanks for helping
> --A  
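On the rack-awareness part of the quoted question: as far as I know, setting 
broker.rack only affects how newly created or newly reassigned replicas are 
spread across racks; replicas that already existed keep their placement until 
you explicitly move them. A minimal sketch (the rack name is a placeholder):

  # server.properties on each broker
  broker.rack=rack1

Existing topics can be migrated to a rack-aware assignment afterwards with 
kafka-reassign-partitions.sh in the usual generate/execute/verify flow.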
