Clean shutdown but corrupted data on reboot?

2016-12-22 Thread Stephane Maarek
Hi, I’m shutting down Kafka properly (see the bottom of this message). But on reboot, I sometimes get the following: [2016-12-23 07:45:26,544] INFO Loading logs. (kafka.log.LogManager) [2016-12-23 07:45:26,609] WARN Found a corrupted index file due to requirement failed: Corrupt index found, index file (/mn

Kafka 0.9.0.1 producer warning in log UNKNOWN_TOPIC_OR_PARTITION

2016-12-22 Thread em vee
Hi, I am using the Kafka 0.9.0.1 client. If a message is sent to a topic and the topic is then deleted, you see the following entries in the log once a subsequent message is sent to any other topic. === WARN org.apache.kafka.clients.NetworkClient {} - Error while fetching metad

Re: can kafka 10 stream API read the topic from a Kafka 9 cluster?

2016-12-22 Thread hans
Kafka clients (currently) do not work against older Kafka brokers/servers, so you have no option but to upgrade to a 0.10.1.0 or higher Kafka broker. -hans > On Dec 22, 2016, at 2:25 PM, Joanne Contact wrote: > > Hello I have a program which requires 0.10.1.0 streams API. The jar is > pac
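Hans's point can be reduced to a simple version rule. The helper below is a hypothetical sketch, not part of any Kafka API: before the broker-side compatibility work landed in later releases, a Java client only worked against brokers at least as new as itself.

```python
def client_compatible(client_version: str, broker_version: str) -> bool:
    """Rule of thumb for pre-0.10.2 Java clients: the broker must be
    at least as new as the client. Hypothetical helper for illustration."""
    def parse(v: str):
        return tuple(int(x) for x in v.split("."))
    return parse(broker_version) >= parse(client_version)

# A 0.10.1.0 streams client against a 0.9 broker: incompatible.
print(client_compatible("0.10.1.0", "0.9.0.1"))   # False
# Matching (or newer) broker: fine.
print(client_compatible("0.10.1.0", "0.10.1.0"))  # True
```

The reverse direction (old client, new broker) has always worked, which is why the usual advice is to upgrade brokers first.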

can kafka 10 stream API read the topic from a Kafka 9 cluster?

2016-12-22 Thread Joanne Contact
Hello, I have a program which requires the 0.10.1.0 streams API. The jar is packaged by maven with all dependencies. I tried to consume a topic from a Kafka 9 cluster and got this error: org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'topic_metadata': Error read

subscribe

2016-12-22 Thread em vee

Re: Kafka won't replicate from a specific broker

2016-12-22 Thread Valentin Golev
Hi, Ismael and Jan, Thanks a lot for your prompt responses! > Is inter.broker.protocol.version set correctly in brokers 1 and 2? It > should be 0.10.0 so that they can talk to the older broker without issue. I set it on broker #2, but it doesn't seem to work. > The only option I know of is

Re: Kafka won't replicate from a specific broker

2016-12-22 Thread Jan Omar
Unfortunately I think you hit this bug: https://issues.apache.org/jira/browse/KAFKA-4477 The only option I know of is to reboot the affected broker. And upgrade to 0.10.1.1 as quickly as possible. We haven't seen this issue on 0.10.1.1.RC0. Re

Re: Memory / resource leak in 0.10.1.1 release

2016-12-22 Thread Jon Yeargers
Yes - that's the one. It's 100% reproducible (for me). On Thu, Dec 22, 2016 at 8:03 AM, Damian Guy wrote: > Hi Jon, > > Is this for the topology where you are doing something like: > > topology: kStream -> groupByKey.aggregate(minute) -> foreach > \-> groupByKey.agg

Re: Kafka won't replicate from a specific broker

2016-12-22 Thread Ismael Juma
Hi Valentin, Is inter.broker.protocol.version set correctly in brokers 1 and 2? It should be 0.10.0 so that they can talk to the older broker without issue. Ismael On Thu, Dec 22, 2016 at 4:42 PM, Valentin Golev wrote: > Hello, > > I have a three broker Kafka setup (the ids are 1, 2 (kafka 0.1
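For reference, the setting Ismael describes would look like this in each newer broker's server.properties (the version value is taken from the thread; the comment is an assumption about the intent):

```properties
# server.properties on brokers 1 and 2 (0.10.1.0):
# speak the 0.10.0 inter-broker protocol so the remaining
# 0.10.0.0 broker (1001) can still replicate with them
inter.broker.protocol.version=0.10.0
```

The setting only takes effect after a broker restart, which is easy to miss during a rolling upgrade.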

Kafka won't replicate from a specific broker

2016-12-22 Thread Valentin Golev
Hello, I have a three broker Kafka setup (the ids are 1, 2 (kafka 0.10.1.0) and 1001 (kafka 0.10.0.0)). After a failure of two of them, a lot of the partitions have the third one (1001) as their leader. It's like this: Topic: userevents0.open Partition: 5 Leader: 1 Replicas: 1,2,1001

Re: NotLeaderForPartitionException while doing repartitioning

2016-12-22 Thread Dvorkin, Eugene (CORP)
I had this issue lately. On broker 9. Check what errors you got from the state change log: kafka-run-class kafka.tools.StateChangeLogMerger --logs /var/log/kafka/state-change.log --topic [__consumer_offsets If it complains about connection, it may be that this broker does not read data from zookeeper. Check z

Error in kafka-stream example

2016-12-22 Thread Amrit Jangid
Hi All, I want to try out a kafka streams example using this code: https://github.com/confluentinc/examples/blob/3.1.x/kafka-streams/src/main/scala/io/confluent/examples/streams/MapFunctionScalaExample.scala I am getting an exception while compiling the code: *[error] val uppercasedWithMapValues: KStr

Failure metrics

2016-12-22 Thread Gaurav Abbi
Hi, I am trying to visualize Kafka metrics that can help me gain better insight into failures at the Kafka server level while handling requests. While looking at the various metrics in JMX, I can see the following two metrics - FailedProduceRequestsPerSec - FailedFetchRequestsPerSec When I visu

Re: Memory / resource leak in 0.10.1.1 release

2016-12-22 Thread Damian Guy
Hi Jon, Is this for the topology where you are doing something like: topology: kStream -> groupByKey.aggregate(minute) -> foreach \-> groupByKey.aggregate(hour) -> foreach I'm trying to understand how I could reproduce your problem. I've not seen any such issues with

Memory / resource leak in 0.10.1.1 release

2016-12-22 Thread Jon Yeargers
I'm still hitting this leak with the released version of 0.10.1.1. Process mem % grows over the course of 10-20 minutes and eventually the OS kills it. Messages like this appear in /var/log/messages: Dec 22 13:31:22 ip-172-16-101-108 kernel: [2989844.793692] java invoked oom-killer: gfp_mask=0x24

what is the key used for change log topic backed by windowed store

2016-12-22 Thread Sachin Mittal
Hi All, Our stream is something like builder.stream() .groupByKey() .aggregate(Initializer, Aggregator, TimeWindows, valueSerde, "table-name") This creates a changelog topic. I was wondering what would be the key used for this topic. Would it be the key we use to group by or
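For a windowed aggregate, the changelog key is not the plain grouping key but a windowed key: the grouping key combined with the window's start timestamp, since the same key recurs once per window. The byte layout below is only an illustration (the real serialization is an internal detail of Kafka Streams and may differ):

```python
import struct

def windowed_changelog_key(key: bytes, window_start_ms: int) -> bytes:
    # Illustrative layout only: the serialized grouping key followed by
    # the window start timestamp as an 8-byte big-endian long. Treat the
    # exact format as an assumption, not Kafka Streams' actual wire format.
    return key + struct.pack(">q", window_start_ms)

k = windowed_changelog_key(b"user-42", 1482364800000)
# The grouping key alone cannot identify a record in the changelog:
# the window start is needed to distinguish aggregates of the same key.
```

This is why deserializing such a topic with the plain key serde fails: the trailing timestamp bytes are part of the key.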

Common client metrics - per request?

2016-12-22 Thread Stevo Slavić
Hello Apache Kafka community, Please correct me if wrong, I assume common Kafka client metrics (see https://kafka.apache.org/documentation.html#selector_monitoring ) are aggregated metrics of all different requests particular client instance makes. So e.g. producer common metrics like outgoing-byt
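Stevo's assumption, that the common client metrics aggregate over all request types a client instance sends, can be pictured with a toy example (the request names and byte counts below are made up purely for illustration):

```python
# Hypothetical per-request-type byte counts for one producer instance
per_request_bytes = {"Produce": 4096, "Metadata": 512, "ApiVersions": 128}

# If the common metrics are aggregates across request types, the
# client-level outgoing-byte total is simply the sum over all of them.
total_outgoing_bytes = sum(per_request_bytes.values())
print(total_outgoing_bytes)  # 4736
```

Under that reading, per-request breakdowns would have to come from the request-type-tagged metrics rather than the common selector metrics.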

Re: Kafka Backup Strategy

2016-12-22 Thread Stephane Maarek
Thanks Andrew for the detailed response. We’re having a replication factor of 3 so we’re safe there. What do you recommend for min.insync.replicas, acks and log flush interval? I was worried about region failures or someone that goes in and deletes our instances and associated volumes (that’s a ki