How does one deploy to consumers without causing re-balancing for a real-time use case?

2017-02-05 Thread Praveen
I have a 16-broker Kafka cluster. There is a topic with 32 partitions containing real-time data, and on the other side I have 32 boxes with 1 consumer each reading from these partitions. Today our deployment strategy is to stop, deploy, and start the processes on all 32 consumers. This triggers re-balanc…
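One way to avoid rebalances on restart (added well after this 2017 thread, in Kafka 2.3 via KIP-345) is static group membership: each consumer gets a stable `group.instance.id`, so a restart that completes within `session.timeout.ms` does not trigger a rebalance. A minimal config sketch, with placeholder broker address, group name, and instance id (keys are plain strings so this compiles without the Kafka client jar):

```java
import java.util.Properties;

public class StaticMembershipConfig {
    // Consumer config sketch: group.instance.id (Kafka 2.3+, KIP-345) gives
    // each consumer a stable identity so a quick restart within
    // session.timeout.ms does not trigger a group rebalance.
    static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");     // placeholder address
        props.put("group.id", "realtime-consumers");        // hypothetical group name
        props.put("group.instance.id", "consumer-box-01");  // must be unique per box
        props.put("session.timeout.ms", "30000");           // restart must fit in this window
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("group.instance.id"));
    }
}
```

In a real deployment these properties would be passed to `new KafkaConsumer<>(props)`, and each of the 32 boxes would set a distinct `group.instance.id`.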

Re: At Least Once semantics for Kafka Streams

2017-02-05 Thread Eno Thereska
Hi Mahendra, That is a good question. Streams uses consumers, and that config applies to consumers. However, in Streams we always set enable.auto.commit to false and manage commits using the commit interval parameter instead. That way Streams has more control over when offsets are committed. Eno
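The behaviour Eno describes can be sketched as a Streams config fragment. The application id and broker address below are placeholders; the 30 s value is the documented default for `commit.interval.ms` (keys are plain strings so this compiles without the kafka-streams jar):

```java
import java.util.Properties;

public class StreamsCommitConfig {
    // Kafka Streams config sketch: Streams forces the underlying consumer's
    // enable.auto.commit to false and commits offsets itself on a timer
    // controlled by commit.interval.ms.
    static Properties build() {
        Properties props = new Properties();
        props.put("application.id", "my-streams-app");   // hypothetical app id
        props.put("bootstrap.servers", "broker1:9092");  // placeholder address
        props.put("commit.interval.ms", "30000");        // 30 s is the default
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("commit.interval.ms"));
    }
}
```

Setting `enable.auto.commit=true` in a Streams app would simply be overridden, which is why the commit interval is the knob to tune.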

Re: Fault tolerance not working in Kafka

2017-02-05 Thread Stevo Slavić
Check the __consumer_offsets topic: its number of partitions and their replica assignment. You may be affected by the problem solved by https://cwiki.apache.org/confluence/display/KAFKA/KIP-115%3A+Enforce+offsets.topic.replication.factor+upon+__consumer_offsets+auto+topic+creation
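Stevo's check can be run with the stock Kafka CLI tools. A sketch, assuming the standard distribution layout and a local ZooKeeper (adjust paths and addresses for your cluster):

```shell
# Show partition count and replica assignment of the internal offsets topic.
# If replicas=1 on the broker that died, consumer group coordination breaks.
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic __consumer_offsets

# KIP-115 enforces this broker setting when the topic is auto-created;
# it should match your desired fault tolerance (e.g. 3):
grep offsets.topic.replication.factor config/server.properties
```

If the topic was auto-created with replication factor 1 (the pre-KIP-115 pitfall), it must be reassigned to more replicas before losing a broker becomes survivable.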

trying to understand CommitFailedException

2017-02-05 Thread Sachin Mittal
Hi All, We sometimes get errors like this: org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the config…
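The two usual remedies the exception message hints at are to process fewer records per poll() or to allow more time between polls. On 0.10.1+ the latter is `max.poll.interval.ms` (KIP-62), which decouples processing time from the session timeout. A config sketch with hypothetical values (keys are plain strings so this compiles without the Kafka client jar):

```java
import java.util.Properties;

public class PollTuningConfig {
    // Consumer config sketch for avoiding CommitFailedException:
    // shrink the batch handed back by each poll(), and widen the
    // allowed gap between polls before the group kicks the member out.
    static Properties build() {
        Properties props = new Properties();
        props.put("group.id", "my-group");            // hypothetical group name
        props.put("max.poll.records", "100");         // fewer records per poll()
        props.put("max.poll.interval.ms", "600000");  // example: 10 min between polls
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("max.poll.records"));
    }
}
```

If processing a single batch can legitimately take minutes, raising `max.poll.interval.ms` is safer than raising the session timeout, since heartbeats still detect dead processes quickly.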

Need help in understanding bunch of rocksdb errors on kafka_2.10-0.10.1.1

2017-02-05 Thread Sachin Mittal
Hello All, We recently upgraded to kafka_2.10-0.10.1.1. We have a source topic with replication = 3 and partitions = 40. We run a streams application with NUM_STREAM_THREADS_CONFIG = 4 on three machines, so 12 threads in total. What we do is start the same streams application one by one…
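For a simple topology over a 40-partition source topic, Streams creates roughly 40 tasks, so 12 threads end up with 3–4 tasks each, and each task keeps its RocksDB state under `state.dir`. A config sketch for the setup described (app id, broker address, and state directory are placeholders; keys are plain strings so this compiles without the kafka-streams jar):

```java
import java.util.Properties;

public class StreamsThreadsConfig {
    // Streams config sketch matching the thread: 4 stream threads per
    // instance, run on 3 machines for 12 threads total. RocksDB-backed
    // state stores are created under state.dir, one directory per task.
    static Properties build() {
        Properties props = new Properties();
        props.put("application.id", "my-streams-app");    // hypothetical app id
        props.put("bootstrap.servers", "broker1:9092");   // placeholder address
        props.put("num.stream.threads", "4");             // NUM_STREAM_THREADS_CONFIG
        props.put("state.dir", "/tmp/kafka-streams");     // placeholder state path
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("num.stream.threads"));
    }
}
```

When instances are started one by one, each startup triggers a rebalance and task (and RocksDB state) migration between machines, which is a common source of transient RocksDB errors in this version.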

Re: At Least Once semantics for Kafka Streams

2017-02-05 Thread Mahendra Kariya
I have another follow-up question regarding configuration. There is a config enable.auto.commit for consumers. Does this apply to Kafka Streams? If yes, how is the behavior different when the value of this config is true vs false? More generally, which of the consumer configs…

Re: Fault tolerance not working in Kafka

2017-02-05 Thread R Krishna
What is the exact exception you see? With 4 partitions, consumers should not have a problem if one broker goes down. Do you see any broker missing from the ISR for your topic?

Fault tolerance not working in Kafka

2017-02-05 Thread Nitin Shende
Hi Team, I am using Apache Kafka with 6 brokers. I have a topic with 4 partitions and replication factor 3. My Java code is able to consume messages when all brokers are up, but when one broker is killed the Java consumer stops working. Could you please help? Best Regards, Nitin Shende, +91 9…
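One common cause of this symptom is listing only a single broker in `bootstrap.servers` (or, as the reply above suggests, an under-replicated `__consumer_offsets` topic). Listing several brokers lets the client bootstrap even when one is down. A consumer config sketch with placeholder addresses and group name (keys are plain strings so this compiles without the Kafka client jar):

```java
import java.util.Properties;

public class ResilientConsumerConfig {
    // Consumer config sketch: several bootstrap brokers so that losing any
    // single broker still leaves the client able to fetch cluster metadata.
    static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers",
                  "broker1:9092,broker2:9092,broker3:9092"); // placeholder addresses
        props.put("group.id", "my-group");                   // hypothetical group name
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("bootstrap.servers"));
    }
}
```

With the topic itself at replication factor 3, consumption should survive a single broker failure; if it still stalls, the internal offsets topic's replication factor is the next thing to check.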