Re: possible kafka bug, maybe in console producer/consumer utils

2017-04-19 Thread jan
@Robert Quinlivan: the producer is just the kafka-console-producer shell that comes in the kafka/bin directory (kafka/bin/windows in my case). Nothing special. I'll try messing with acks, because this problem is somewhat incidental to what I'm trying to do, which is to see how big the log directory grow
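
A minimal sketch of that kind of test, assuming a topic named test, a broker on localhost:9092, and the default log.dirs of /tmp/kafka-logs (all placeholder values; the .bat equivalents under bin/windows take the same options):

    # Produce a batch of messages with the console producer (it reads lines from stdin)
    seq 1 100000 | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

    # Watch how much disk the topic's partition directories consume
    du -sh /tmp/kafka-logs/test-*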

Re: possible kafka bug, maybe in console producer/consumer utils

2017-04-19 Thread Sachin Mittal
Just to point out to you all, I also get a similar exception in my streams application when the producer is trying to commit something to the changelog topic. Error sending record to topic test-stream-key-table-changelog org.apache.kafka.common.errors.TimeoutException: Batch containing 2 record(s) expired due t

Re: Kafka Producer - Multiple broker - Data sent to buffer but not in Queue

2017-04-19 Thread Kamal C
> bootstrap.servers = , Is your bootstrap.servers configuration correct? You have specified port `9091`, but are running the GetOffsetShell command on `9094`. On Wed, Apr 19, 2017 at 11:58 AM, Ranjith Anbazhakan < ranjith.anbazha...@aspiresys.com> wrote: > Unfortunately, there is no specifi
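
For reference, a rough sketch of the check being suggested here, assuming a topic named test and a broker listening on port 9094 (both hypothetical):

    # Query the latest offsets against the same host:port that the producer's
    # bootstrap.servers points at, so both tools are talking to the same broker
    bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
      --broker-list localhost:9094 --topic test --time -1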

RE: Kafka Producer - Multiple broker - Data sent to buffer but not in Queue

2017-04-19 Thread Ranjith Anbazhakan
Sorry about that. That was a typo. The exact configuration is as below: bootstrap.servers = , Thanks, Ranjith -Original Message- From: kamaltar...@gmail.com [mailto:kamaltar...@gmail.com] On Behalf Of Kamal C Sent: Wednesday, April 19, 2017 16:25 To: users@kafka.apache.org Subject: Re:

RE: Re: ZK and Kafka failover testing

2017-04-19 Thread Shrikant Patel
While we were testing, our producer had the following configuration: max.in.flight.requests.per.connection=1, acks=all and retries=3. The entire producer-side setup is below. The consumer has manual offset commit; it commits the offset after it has successfully processed the message. Producer setting boot
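
For readers following along, the producer-side settings described above would look roughly like this in a client properties file (the broker addresses are placeholders):

    # producer.properties -- values taken from the thread; broker list is hypothetical
    bootstrap.servers=broker1:9092,broker2:9092,broker3:9092
    acks=all
    retries=3
    max.in.flight.requests.per.connection=1

    # consumer side: manual offset commit implies enable.auto.commit=false,
    # with the commit issued only after the message has been processed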

Subscribe to mailing list

2017-04-19 Thread Arunkumar
Hi there, I would like to subscribe to this mailing list and know more about Kafka. Please add me to the list. Thanks in advance. Thanks Arunkumar Pichaimuthu, PMP

Re: Subscribe to mailing list

2017-04-19 Thread Prahalad kothwal
Arun, send an e-mail to users-subscr...@kafka.apache.org Thanks, Prahalad On Wed, Apr 19, 2017 at 8:24 PM, Arunkumar wrote: > > Hi There > I would like to subscribe to this mailing list and know more about kafka. > Please add me to the list. Thanks in advance > > Thanks > Arunkumar Pichaimuthu,

Re: Re: ZK and Kafka failover testing

2017-04-19 Thread Onur Karaman
If this is what I think it is, it has nothing to do with acks, max.in.flight.requests.per.connection, or anything client-side and is purely about the kafka cluster. Here's a simple example involving a single zookeeper instance, 3 brokers, a KafkaConsumer and KafkaProducer (neither of these clients

Re: Re: ZK and Kafka failover testing

2017-04-19 Thread Hans Jespersen
The OP was asking about duplicate messages, not lost messages, so I think we are discussing two different possible scenarios. Whenever someone says they see duplicate messages, it's always good practice to first double-check ack mode, in-flight messages, and retries. Also it's important to check if
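
One rough way to check for duplicates with the stock tools, assuming a topic named test, a local ZooKeeper on 2181, and a known produced count (all placeholders; messages are assumed to be single-line):

    # Count what a fresh consumer sees from the beginning of the topic and
    # compare it with the number of messages that were produced
    bin/kafka-console-consumer.sh --zookeeper localhost:2181 \
      --topic test --from-beginning --timeout-ms 10000 | wc -l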

Re: Re: ZK and Kafka failover testing

2017-04-19 Thread Jun Rao
Hi, Shri, As Onur explained, if ZK is down, Kafka can still work, but won't be able to react to actual broker failures until ZK is up again. So if a broker is down in that window, some of the partitions may not be ready for read or write. As for the duplicates in the consumer, Hans had a good poi

Re: Re: ZK and Kafka failover testing

2017-04-19 Thread Jeff Widman
*As Onur explained, if ZK is down, Kafka can still work, but won't be able to react to actual broker failures until ZK is up again. So if a broker is down in that window, some of the partitions may not be ready for read or write.* We had a production scenario where ZK had a long GC pause and Kafka

Re: Re: ZK and Kafka failover testing

2017-04-19 Thread Jeff Widman
Oops, I linked to the wrong ticket, this is the one we hit: https://issues.apache.org/jira/browse/KAFKA-3042 On Wed, Apr 19, 2017 at 1:45 PM, Jeff Widman wrote: > > > > > > *As Onur explained, if ZK is down, Kafka can still work, but won't be able > to react to actual broker failures until ZK is

RE: Re: Re: ZK and Kafka failover testing

2017-04-19 Thread Shrikant Patel
Thanks Jeff, Onur, Jun, Hans. I am learning a lot from your responses. Just to briefly summarize my steps: 5-node Kafka and ZK cluster. 1. ZK cluster has all nodes working. Consumer is down. 2. Bring down the majority of ZK nodes. 3. Things are functional, no issues (no duplicate or lost messages). 4. Now first ka

RE: Re: Re: ZK and Kafka failover testing

2017-04-19 Thread Shrikant Patel
Just to add, I see the below behavior repeat even with the command-line console producer and consumer that come with Kafka. Thanks, Shri __ Shrikant Patel | 817.367.4302 Enterprise Architecture Team PDX-NHIN -Original Message- From: Shrikant Pat

Re: Re: Re: ZK and Kafka failover testing

2017-04-19 Thread Hans Jespersen
The kafka-console-producer.sh defaults to acks=1, so just be careful when using those tools for too much debugging. Your output is helpful though. https://github.com/apache/kafka/blob/5a2fcdd6d480e9f003cc49a59d5952ba4c515a71/core/src/main/scala/kafka/tools/ConsoleProducer.scala#L185 -hans On Wed,
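
If the console producer is being used to reproduce the failover behavior, the default can be overridden on the command line; a sketch with placeholder broker and topic names:

    # -1 is equivalent to acks=all; without this flag the console producer uses acks=1
    bin/kafka-console-producer.sh --broker-list localhost:9092 \
      --topic test --request-required-acks -1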

how can I contribute to this project?

2017-04-19 Thread James Chain
Hi, Because I love this project, I want to take part in it. But I'm brand new to open-source projects. How can I get started making contributions? Can you give me some advice? By the way, I already have a JIRA account called "james.c" Sincerely, James.C

Re: how can I contribute to this project?

2017-04-19 Thread Mahendra Kariya
Hi James, This page has all the information you are looking for. https://kafka.apache.org/contributing On Thu, Apr 20, 2017 at 9:32 AM, James Chain wrote: > Hi > Because I love this project, so I want to take part of it. But I'm brand > new to opensource project. > > How can I get started to ma

Question about retention and log file times

2017-04-19 Thread Gwilym Evans
Hello, Yesterday, I had to replace a faulty Kafka broker node, and the method of replacement involved bringing up a blank replacement using the old broker's ID, thus triggering a replication of all its old partitions. Today I was dealing with disk usage alerts for only that broker: it turned out
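
A quick way to see what is going on is to look at the segment files on the rebuilt broker; segments recreated by replication carry fresh file modification times even though the data in them is old, and pre-0.10.1 time-based retention keyed on that mtime (the path below is a placeholder):

    # Replicated segment files get a new mtime, which resets the retention clock
    ls -lh /var/kafka-logs/my-topic-0/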

Re: Question about retention and log file times

2017-04-19 Thread Manikumar
AFAIK, this behavior changed in the 0.10.1.0 release. Now retention is based on the largest timestamp of the messages in a log segment. On Thu, Apr 20, 2017 at 11:19 AM, Gwilym Evans wrote: > Hello, > > Yesterday, I had to replace a faulty Kafka broker node, and the method of > replacement involv
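
To confirm which timestamps the broker actually sees inside a segment, the bundled dump tool can help; a sketch assuming a placeholder segment path:

    # Print per-message metadata (including timestamps, when the 0.10+ message
    # format is in use) for one log segment
    bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
      --files /var/kafka-logs/my-topic-0/00000000000000000000.log \
      --print-data-log | head -n 20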

Re: Question about retention and log file times

2017-04-19 Thread Gwilym Evans
I am running 0.10.1.0, so if that's true, it might not be a default. If you know of a config value to change, that would be very helpful. -Gwilym On 20 April 2017 at 06:07, Manikumar wrote: > AFAIK, this behavior is changed in 0.10.1.0 release. Now retention is based > on the largest > timestamp

Re: Question about retention and log file times

2017-04-19 Thread Manikumar
You may be producing in the old message format. Check the "log.message.format.version" config. What are the versions of the producer/consumer clients? On Thu, Apr 20, 2017 at 11:39 AM, Gwilym Evans wrote: > I am running 0.10.1.0 so, if that's true, it might not be a default. If you > know of a c
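
A sketch of the checks being suggested, with placeholder host and topic names:

    # Broker-level default message format
    grep log.message.format.version config/server.properties

    # Topic-level override (message.format.version), if any, shows up here
    bin/kafka-configs.sh --zookeeper localhost:2181 --describe \
      --entity-type topics --entity-name my-topic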

Re: Question about retention and log file times

2017-04-19 Thread Gwilym Evans
inter.broker.protocol.version = 0.10.1-IV2 log.message.format.version = 0.10.1-IV2 It will take me longer to check the producer/consumer versions, but I believe they're all *at least* 0.10 -Gwilym On 20 April 2017 at 06:42, Manikumar wrote: > You may be producing in the old message format. C