Kafka Group coordinator discovery failing for subsequent restarts

2019-08-28 Thread Hrishikesh Mishra
Hi, We are facing the following issues with our Kafka cluster. - Kafka Version: 2.0.0 - We have the following cluster configuration: - Number of Brokers: 14 - Per Broker: 37GB memory and 14 cores - Topics: 40-50 - Partitions per topic: 32 - Replicas: 3 - Min In-Sync Replicas: 2 - __con
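
As an illustrative aside (not from the thread): since the cluster is on 2.0.0, the Java AdminClient can report which broker is currently acting as the group coordinator, which is useful to watch across restarts. A minimal sketch, assuming placeholder bootstrap servers and group name:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.ConsumerGroupDescription;
    import org.apache.kafka.common.Node;

    public class CoordinatorCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092"); // placeholder
            try (AdminClient admin = AdminClient.create(props)) {
                ConsumerGroupDescription desc = admin
                    .describeConsumerGroups(Collections.singletonList("my-consumer-group"))
                    .all().get()
                    .get("my-consumer-group");
                Node coordinator = desc.coordinator();
                // If this lookup fails, or points at a broker that is down or
                // restarting, consumers see coordinator-not-available errors until
                // the matching __consumer_offsets partition leader is re-elected.
                System.out.printf("Coordinator for group: %s:%d (id %s)%n",
                    coordinator.host(), coordinator.port(), coordinator.idString());
            }
        }
    }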

Re: Kafka Group coordinator discovery failing for subsequent restarts

2019-08-28 Thread Hrishikesh Mishra
-not-available > We have gone through this link, but in our case it is not always feasible to clean data from the offsets topic and restart (our cluster size is huge). Best, > Lisheng > > > Hrishikesh Mishra wrote on Thu, 29 Aug 2019 at 12:19 PM: > > > Hi, > > > > We are
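
For reference, a minimal sketch (not from the thread; the bootstrap address is a placeholder) of inspecting how the internal __consumer_offsets topic is configured, cleanup policy included, without touching its data:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;

    public class OffsetsTopicConfig {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092"); // placeholder
            try (AdminClient admin = AdminClient.create(props)) {
                ConfigResource topic =
                    new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
                Config config = admin.describeConfigs(Collections.singleton(topic))
                                     .all().get().get(topic);
                // __consumer_offsets is log-compacted; as noted above, deleting its
                // data by hand is impractical on a large cluster, so start by
                // reviewing its configuration instead.
                config.entries().forEach(e ->
                    System.out.println(e.name() + " = " + e.value()));
            }
        }
    }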

Re: Kafka consumer Fetcher several Ignoring fetched records logs

2019-09-06 Thread Hrishikesh Mishra
Can you check whether it's happening because logs are getting purged very fast? On Sat, 7 Sep 2019 at 12:18 AM, Aminouvic wrote: > Hello all, > > We're noticing several logs on our consumer apps similar to the following: > > 2019-09-06 17:56:36,933 DEBUG > org.apache.kafka.clients.consumer.intern
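
One rough way to test the purge hypothesis from the client side is to compare the group's committed offset with the partition's log start offset; if the committed offset falls below beginningOffsets(), retention has already deleted those records. A sketch, with placeholder topic, group and bootstrap values:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class PurgeCheck {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");      // placeholder
            props.put("group.id", "my-consumer-group");          // placeholder
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                TopicPartition tp = new TopicPartition("my-topic", 0); // placeholder
                long logStart = consumer.beginningOffsets(Collections.singleton(tp)).get(tp);
                OffsetAndMetadata committed = consumer.committed(tp);
                // If the committed offset is older than the log start offset, the
                // broker has already deleted those records, which would fit the
                // fast-purge explanation for the DEBUG messages above.
                System.out.println("log start offset = " + logStart
                    + ", committed offset = "
                    + (committed == null ? "none" : committed.offset()));
            }
        }
    }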

Re: Purging dead consumer ids from _consumer_offsets?

2019-09-23 Thread Hrishikesh Mishra
+ Following the post. On Mon, Sep 23, 2019 at 6:31 PM Marina Popova wrote: > I'm also very interested in this question - any update on this? > thanks! > Marina > > > > Sent with ProtonMail Secure Email. > > ‐‐‐ Original Message ‐‐‐ > On Thursday, September 5, 2019 6:30 PM, Ash G > wrote
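
As an aside (not an answer from the thread): since Kafka 2.0 the Java AdminClient can delete empty consumer groups, which tombstones their entries in __consumer_offsets so compaction eventually removes them. A minimal sketch with a placeholder group name:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;

    public class DeleteDeadGroup {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092"); // placeholder
            try (AdminClient admin = AdminClient.create(props)) {
                // Deletion only succeeds if the group has no active members;
                // its committed offsets are then tombstoned in __consumer_offsets
                // and disappear once log compaction runs.
                admin.deleteConsumerGroups(Collections.singletonList("dead-group-id"))
                     .all().get();
                System.out.println("Deleted group dead-group-id");
            }
        }
    }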

How auto.offset.reset = latest works

2019-10-03 Thread Hrishikesh Mishra
Hi, I want to understand how *auto.offset.reset = latest* works. When the consumer first calls the poll() method, will it assign the current offsets to the consumer for all partitions (when a single consumer is up in a consumer group)? How do I know all partitions are assigned to a consumer? Regards Hrishik
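
A hedged sketch of one way to observe this (not from the thread; topic, group and bootstrap values are placeholders): with subscribe(), the first poll() joins the group and triggers a rebalance, after which assignment() is non-empty; auto.offset.reset=latest only applies to partitions with no committed offset, and those start from the current end offset. A ConsumerRebalanceListener shows exactly when partitions are handed over:

    import java.time.Duration;
    import java.util.Collection;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class AssignmentDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");   // placeholder
            props.put("group.id", "my-consumer-group");       // placeholder
            props.put("auto.offset.reset", "latest");
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"),
                    new ConsumerRebalanceListener() {
                        @Override
                        public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }
                        @Override
                        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                            // Called once the rebalance completes; with a single
                            // consumer in the group this lists every partition.
                            System.out.println("Assigned: " + partitions);
                        }
                    });
                consumer.poll(Duration.ofSeconds(5)); // first poll joins the group
                System.out.println("Current assignment: " + consumer.assignment());
            }
        }
    }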

Continuously getting FETCH_SESSION_ID_NOT_FOUND

2019-10-18 Thread Hrishikesh Mishra
Hi, I am continuously getting *FETCH_SESSION_ID_NOT_FOUND*. I'm not sure why it's happening. Can anyone please help me understand what the problem is and what the impact will be on consumers and brokers? *Kafka Server Log:* INFO [2019-10-18 12:09:00,709] [ReplicaFetcherThread-1-8][] org.apache.kafka.client
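
As a hedged aside: FETCH_SESSION_ID_NOT_FOUND relates to the incremental fetch sessions introduced in Kafka 1.1. When a broker evicts a session from its cache (sized by the broker setting max.incremental.fetch.session.cache.slots, default 1000), the next fetch using that session id logs this message and falls back to a full fetch. A sketch for reading that setting via the AdminClient; the bootstrap address and the broker id "8" (taken loosely from ReplicaFetcherThread-1-8 above) are assumptions:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    public class FetchSessionCacheCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092"); // placeholder
            try (AdminClient admin = AdminClient.create(props)) {
                // "8" is an assumed broker id for illustration.
                ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "8");
                Config config = admin.describeConfigs(Collections.singleton(broker))
                                     .all().get().get(broker);
                ConfigEntry slots = config.get("max.incremental.fetch.session.cache.slots");
                // When more replica fetchers and consumers hold sessions than the
                // cache allows, older sessions are evicted and later fetches log
                // FETCH_SESSION_ID_NOT_FOUND before retrying with a full fetch.
                System.out.println(slots == null
                    ? "setting not reported by broker"
                    : slots.name() + " = " + slots.value());
            }
        }
    }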

Impact on having large number of consumers on producers / brokers

2019-10-18 Thread Hrishikesh Mishra
Hi all, I wanted to understand the impact of having a large number of consumers on producer latency and on the brokers. I have around 7K independent consumers. Each consumer is consuming all partitions of a topic. I have manually assigned partitions of a topic to a consumer, not using consumer groups. Each consumer is
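
For context, a minimal sketch of what "independent consumers" with manual assignment presumably looks like (not from the thread; the topic name and bootstrap address are placeholders, and the 32-partition count is taken from the earlier cluster description): assign() skips group coordination entirely, so each of the ~7K consumers maintains its own connections and fetch sessions against the brokers:

    import java.time.Duration;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class IndependentConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092"); // placeholder
            props.put("enable.auto.commit", "false");       // no group, manage offsets yourself
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Manually assign all 32 partitions of one topic; no group.id and
                // no rebalancing, so this consumer is fully independent.
                List<TopicPartition> partitions = new ArrayList<>();
                for (int p = 0; p < 32; p++) {
                    partitions.add(new TopicPartition("my-topic", p)); // placeholder topic
                }
                consumer.assign(partitions);
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.partition() + ":" + record.offset());
                }
            }
        }
    }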

Re: Impact on having large number of consumers on producers / brokers

2019-10-19 Thread Hrishikesh Mishra
Can anyone please help me with this? On Fri, 18 Oct 2019 at 2:58 PM, Hrishikesh Mishra wrote: > Hi all, > > I wanted to understand the impact of having a large number of consumers on > producer latency and on the brokers. I have around 7K independent consumers. Each > consumer is consuming all partit

Re: Continuously getting FETCH_SESSION_ID_NOT_FOUND

2019-10-19 Thread Hrishikesh Mishra
Can anyone please help me with this? On Fri, 18 Oct 2019 at 12:42 PM, Hrishikesh Mishra wrote: > Hi, > > I am continuously getting *FETCH_SESSION_ID_NOT_FOUND*. I'm not sure why > it's happening. Can anyone please help me understand what the problem is and what will > be the imp

Re: Impact on having large number of consumers on producers / brokers

2019-10-22 Thread Hrishikesh Mishra
rishikesh Mishra > wrote: > > > Can anyone please help me with this? > > > > On Fri, 18 Oct 2019 at 2:58 PM, Hrishikesh Mishra > > wrote: > > > > > Hi all, > > > > > > I wanted to understand the impact of having a large number of consumers on >

Re: Impact on having large number of consumers on producers / brokers

2019-10-23 Thread Hrishikesh Mishra
dingly. > > Thanks, > > On Tue, 22 Oct 2019 at 10:09, Hrishikesh Mishra > wrote: > > > I wanted to understand whether the broker will become unstable with a large number > of > > consumers, or whether consumers will face issues such as increasing lag? > > > > > >