Re: Kafka-streams restoreActiveState

2016-02-04 Thread Tom Dearman
Guozhang, I will do that now, thanks. It looks like it is just due to StoreChangeLogger getting its topic value directly from the store supplier's name, which is just the [store-name]. Tom > On 3 Feb 2016, at 23:57, Guozhang Wang wrote: > > Thanks for letting me know. From what you said it seems

behavior of two live brokers with the same id?

2016-02-04 Thread Carlo Cabanilla
Hello, How does a kafka cluster behave if there are two live brokers with the same broker id, particularly if that broker is a leader? Is it deterministic that the older one wins? Magical master-master replication between the two? .Carlo Datadog

Re: behavior of two live brokers with the same id?

2016-02-04 Thread Harsha
You won't be able to start and register two brokers with the same id in ZooKeeper. On Thu, Feb 4, 2016, at 06:26 AM, Carlo Cabanilla wrote: > Hello, > > How does a kafka cluster behave if there are two live brokers with the > same > broker id, particularly if that broker is a leader? Is it determinis

Re: Kafka-streams restoreActiveState

2016-02-04 Thread Tom Dearman
Hi Guozhang, I have created a ticket for this issue: KAFKA-3207 Thanks, Tom > On 4 Feb 2016, at 08:54, Tom Dearman wrote: > > Guozhang, > > I will do that now, thanks. > It looks like it is just due to StoreChangeLogger getting its topic val

RE: Protocol Question

2016-02-04 Thread Heath Ivie
Dana, Again thanks for the feedback. I will try that today and see what happens. Is there anything special to start the loop? What I mean is: is there any buffer byte to signify the start of the arrays? -Original Message- From: Dana Powers [mailto:dana.pow...@gmail.com] Sent: Wednesday, Fe
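To Heath's question above: in the Kafka wire protocol there is no marker byte that signals the start of an array. An array is encoded as a signed INT32 element count followed immediately by the elements back to back. A minimal, stdlib-only Java sketch of that encoding (the helper name is invented for illustration):

```java
import java.nio.ByteBuffer;

public class KafkaArrayEncoding {
    // Encode an array of INT32 values the way the Kafka protocol encodes
    // arrays: an INT32 element count, then each element in order.
    // There are no sentinel or start-of-array bytes.
    static ByteBuffer encodeInt32Array(int[] values) {
        ByteBuffer buf = ByteBuffer.allocate(4 + 4 * values.length);
        buf.putInt(values.length);   // the count is what "starts the loop"
        for (int v : values) {
            buf.putInt(v);
        }
        buf.flip();
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer buf = encodeInt32Array(new int[] {10, 20, 30});
        System.out.println(buf.remaining()); // 16: 4-byte count + 3 * 4-byte elements
        System.out.println(buf.getInt());    // 3: the first field read back is the count
    }
}
```

The decoder side simply reads the INT32 count first and loops that many times.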

Re: Kafka-streams restoreActiveState

2016-02-04 Thread Guozhang Wang
Thanks Tom, looking at it now. Guozhang On Thu, Feb 4, 2016 at 7:29 AM, Tom Dearman wrote: > Hi Guozhang, > > I have created a ticket for this issue: > > KAFKA-3207 > Thanks, > > Tom > > > On 4 Feb 2016, at 08:54, Tom Dearman wrote: > > > > Gu

Kafka Connect - SinkRecord schema

2016-02-04 Thread Shiti Saxena
Hi, I was trying to define the following Kafka Connect pipeline : JDBC Source -> Console Sink using bulk mode I realized the schema resulting from SinkRecord.valueSchema was incorrect. I modified FileStreamSinkTask's put method to, public void put(Collection sinkRecords) { for (SinkRecord r

RE: Protocol Question

2016-02-04 Thread Heath Ivie
Dana, I was able to get it to work, thanks Heath -Original Message- From: Heath Ivie [mailto:hi...@autoanything.com] Sent: Thursday, February 04, 2016 8:19 AM To: users@kafka.apache.org Subject: RE: Protocol Question Dana, Again thanks for the feedback. I will try that today and see wh

Re: exception in log cleaner

2016-02-04 Thread Chen Song
I read a bit of the code (see below) and am confused. For each message in the message set generated from the read buffer, it iterates and writes into the buffer. In theory, the number of bytes being written should never exceed the number of bytes read. val messages = new ByteBufferMessageSet(sour
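The invariant Chen is describing — cleaning rewrites a filtered subset of the messages it just read, so the bytes written should never exceed the bytes read — can be illustrated with a stdlib-only Java sketch. This is an illustration of the invariant, not the actual LogCleaner code, and the fixed-width "messages" are a simplification:

```java
import java.nio.ByteBuffer;
import java.util.function.IntPredicate;

public class CleanerSketch {
    // Copy only the 4-byte "messages" from source that pass the retention
    // predicate. Filtering can only drop entries, so the bytes written to
    // dest can never exceed the bytes read from source.
    static ByteBuffer filterMessages(ByteBuffer source, IntPredicate retain) {
        ByteBuffer dest = ByteBuffer.allocate(source.remaining());
        while (source.hasRemaining()) {
            int message = source.getInt();
            if (retain.test(message)) {
                dest.putInt(message);
            }
        }
        dest.flip();
        return dest;
    }

    public static void main(String[] args) {
        ByteBuffer src = ByteBuffer.allocate(16);
        src.putInt(1).putInt(2).putInt(3).putInt(4);
        src.flip();
        int readable = src.remaining();
        ByteBuffer cleaned = filterMessages(src, m -> m % 2 == 0);
        System.out.println(cleaned.remaining() <= readable); // true
    }
}
```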

Detecting broker version programmatically

2016-02-04 Thread marko
Is there a way to detect the broker version (even at a high level 0.8 vs 0.9) using the kafka-clients Java library? -- Best regards, Marko www.kafkatool.com

Kafka protocol fetch request max wait.

2016-02-04 Thread Rajiv Kurian
I am writing a Kafka consumer client using the document at https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol One place where I am having problems is the fetch request itself. I am able to send fetch requests and can get fetch responses that I can parse properly, but i
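Per the protocol guide Rajiv links, a v0 FetchRequest body begins with a fixed-size prefix of replica_id (INT32, always -1 for a normal consumer), max_wait_time (INT32, milliseconds), and min_bytes (INT32), followed by the topic array. A hedged, stdlib-only sketch of just that prefix (the class and method names are invented):

```java
import java.nio.ByteBuffer;

public class FetchRequestPrefix {
    // Encode the fixed-size head of a v0 FetchRequest body.
    // Consumers always send replicaId = -1; the broker is expected to hold
    // the request up to maxWaitMs unless minBytes of data are available.
    static ByteBuffer encode(int maxWaitMs, int minBytes) {
        ByteBuffer buf = ByteBuffer.allocate(12);
        buf.putInt(-1);        // replica_id: -1 identifies a normal consumer
        buf.putInt(maxWaitMs); // max_wait_time in milliseconds
        buf.putInt(minBytes);  // minimum bytes before the broker responds early
        buf.flip();
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer buf = encode(500, 1);
        System.out.println(buf.getInt()); // -1
        System.out.println(buf.getInt()); // 500
    }
}
```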

Re: Kafka protocol fetch request max wait.

2016-02-04 Thread Rajiv Kurian
And just like that it stopped happening even though I didn't change any of my code. I had filed https://issues.apache.org/jira/browse/KAFKA-3159 where the stock 0.9 kafka consumer was using very high CPU and seeing a lot of EOFExceptions on the same topic and partition. I wonder if it was hitting t

Re: Kafka protocol fetch request max wait.

2016-02-04 Thread Rajiv Kurian
Indeed this seems to be the case. I am now running the client mentioned in https://issues.apache.org/jira/browse/KAFKA-3159 and it is no longer taking up high CPU. The high number of EOF exceptions are also gone. It is performing very well now. I can't understand if the improvement is because of m

Re: Kafka protocol fetch request max wait.

2016-02-04 Thread Jason Gustafson
Hey Rajiv, Just to be clear, when you received the empty fetch response, did you check the error codes? It would help to also include some more information (such as broker and topic settings). If you can come up with a way to reproduce it, that will help immensely. Also, would you mind updating K

Re: Detecting broker version programmatically

2016-02-04 Thread tao xiao
I think you can try with https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/utils/AppInfoParser.java On Fri, 5 Feb 2016 at 07:01 wrote: > Is there a way to detect the broker version (even at a high level 0.8 vs > 0.9) using the kafka-clients Java library?

Re: Detecting broker version programmatically

2016-02-04 Thread Manikumar Reddy
Currently it is available through a JMX MBean. It is not available on the wire protocol/requests. Pending JIRAs related to this: https://issues.apache.org/jira/browse/KAFKA-2061 On Fri, Feb 5, 2016 at 4:31 AM, wrote: > Is there a way to detect the broker version (even at a high level 0.8 vs > 0.9) us

Re: Detecting broker version programmatically

2016-02-04 Thread James Cheng
> On Feb 4, 2016, at 8:28 PM, Manikumar Reddy wrote: > > Currently it is available through JMX MBean. It is not available on wire > protocol/requests. > The name of the JMX MBean is kafka.server:type=app-info,id=4 Not sure what the id=4 means. -James > Pending JIRAs related to this: > https:/
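The MBean name James quotes can be built and picked apart with the JDK's own javax.management classes (no live broker connection needed; as Manikumar explains further down the thread, the `id` key is the broker.id):

```java
import javax.management.ObjectName;

public class AppInfoMBeanName {
    public static void main(String[] args) throws Exception {
        // The broker registers its version info under this MBean; the "id"
        // key property is the broker.id (4 in James's example).
        ObjectName name = new ObjectName("kafka.server:type=app-info,id=4");
        System.out.println(name.getDomain());            // kafka.server
        System.out.println(name.getKeyProperty("type")); // app-info
        System.out.println(name.getKeyProperty("id"));   // 4
    }
}
```

Reading the version attribute itself would require connecting to the broker JVM over JMX with JMXConnectorFactory, which is omitted here.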

Re: Detecting broker version programmatically

2016-02-04 Thread Dana Powers
I think it is possible to infer a version based on a remote broker's response to various newer api methods. I wrote some probes in python that check ListGroups for 0.9, GroupCoordinatorRequest for 0.8.2, OffsetFetchRequest for 0.8.1, and MetadataRequest for 0.8.0. It seems to work well enough. Als
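Dana's probe strategy amounts to "try the newest request first and fall back". The decision table can be sketched in plain Java; the network probes themselves are elided, and the booleans stand in for "did the broker answer this request type":

```java
public class BrokerVersionGuess {
    // Map probe results to a broker version, newest API first, mirroring
    // the approach described above: ListGroups => 0.9,
    // GroupCoordinatorRequest => 0.8.2, OffsetFetchRequest => 0.8.1,
    // MetadataRequest => 0.8.0.
    static String infer(boolean listGroups, boolean groupCoordinator,
                        boolean offsetFetch, boolean metadata) {
        if (listGroups) return "0.9";
        if (groupCoordinator) return "0.8.2";
        if (offsetFetch) return "0.8.1";
        if (metadata) return "0.8.0";
        return "unknown";
    }

    public static void main(String[] args) {
        // A broker that rejects ListGroups but answers GroupCoordinatorRequest:
        System.out.println(infer(false, true, true, true)); // 0.8.2
    }
}
```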

Re: Kafka protocol fetch request max wait.

2016-02-04 Thread Rajiv Kurian
Hey Jason, Yes I checked for error codes. There were none. The message was perfectly legal as parsed by my hand written parser. I also verified the size of the response which was exactly the size of a response with an empty message set per partition. The topic has 128 partitions and has a retenti

Re: Detecting broker version programmatically

2016-02-04 Thread Manikumar Reddy
@James It is broker-id for Kafka server and client-id for java producer/consumer apps @Dana Yes, we can infer using custom logic.

Re: Kafka protocol fetch request max wait.

2016-02-04 Thread Rajiv Kurian
I actually restarted my application with the consumer config I mentioned at https://issues.apache.org/jira/browse/KAFKA-3159 and I can't get it to use high CPU any more :( Not quite sure about how to proceed. I'll try to shut down all producers and let the logs age out to see if the problem happens

Maximum Offset

2016-02-04 Thread Joe San
What is the maximum Offset? I guess it is tied to the data type? Long.MAX_VALUE? What happens after that? Is the commit log reset automatically after it hits the maximum value?
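Offsets are signed 64-bit values per partition, so Long.MAX_VALUE is indeed the ceiling, and overflow is a purely theoretical concern. A back-of-the-envelope calculation in plain Java:

```java
public class OffsetHeadroom {
    // Years until a single partition exhausts the offset space at a given
    // sustained message rate. Offsets are signed 64-bit longs.
    static long yearsAtRate(long messagesPerSecond) {
        return Long.MAX_VALUE / messagesPerSecond / 60 / 60 / 24 / 365;
    }

    public static void main(String[] args) {
        // Even at a million messages per second on one partition,
        // the offset space lasts roughly 292,471 years.
        System.out.println(yearsAtRate(1_000_000L)); // 292471
    }
}
```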

Consumer with topicFilter

2016-02-04 Thread Neil Obeid
Hi everyone, I have a use-case where I need to use many topics (about 500), each with a different number of partitions, some as low as 10, and some with as many as 20K partitions. The reason this is a requirement is data sensitivity: we cannot put data from different sources in the same toppar. My Kaf
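With that many topics, the old high-level consumer's topicFilter (a regex whitelist/blacklist applied to topic names) is the usual tool. The matching itself is plain java.util.regex; the topic names below are invented for illustration:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class TopicWhitelist {
    // A whitelist TopicFilter is conceptually just a regex applied to each
    // topic name; only matching topics are subscribed.
    static List<String> matching(Pattern whitelist, List<String> topics) {
        return topics.stream()
                     .filter(t -> whitelist.matcher(t).matches())
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Pattern p = Pattern.compile("source-[0-9]+-events");
        List<String> topics = List.of("source-1-events", "source-2-events", "audit-log");
        System.out.println(matching(p, topics)); // [source-1-events, source-2-events]
    }
}
```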

Kafka 0.9 Offset Management

2016-02-04 Thread Joe San
Could anyone point me to some code samples or some documentation on where I could find more information about Kafka's Offset management? Currently, I'm using the props.put("enable.auto.commit", "true") props.put("auto.commit.interval.ms", "1000") which I guess commits to the Zookeeper and I u
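For reference, the two settings Joe quotes are plain consumer properties. One correction to the guess in the question: with the new 0.9 consumer these automatic commits go to the brokers' internal __consumer_offsets topic, not to ZooKeeper. A config-only sketch using java.util.Properties (the bootstrap address is a placeholder):

```java
import java.util.Properties;

public class AutoCommitConfig {
    // Build the auto-commit portion of a 0.9 consumer config.
    // In 0.9 these offsets are committed to the __consumer_offsets topic
    // on the brokers, not to ZooKeeper.
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("enable.auto.commit", "true");          // commit in the background
        props.put("auto.commit.interval.ms", "1000");     // at most once per second
        return props;
    }

    public static void main(String[] args) {
        Properties props = consumerProps();
        System.out.println(props.getProperty("enable.auto.commit"));      // true
        System.out.println(props.getProperty("auto.commit.interval.ms")); // 1000
    }
}
```

For manual control, disabling enable.auto.commit and calling the consumer's commitSync() is the usual alternative.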

Re: Kafka 0.9 Offset Management

2016-02-04 Thread Adam Kunicki
Hi Joe, I found: http://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0.9-consumer-client

Re: Kafka protocol fetch request max wait.

2016-02-04 Thread Rajiv Kurian
I think I found out when the problem happens. When a broker that is sent a fetch request has no messages for any of the partitions it is being asked about, it returns immediately instead of waiting out the poll period. Both the kafka 0.9 consumer and my own hand-written consumer suffer the s