KafkaConsumer 0.10.0 poll returns a 0-size List

2017-05-30 Thread Zach Schoenberger
Hi All, I have come across a case where my client will return a 0-size list from poll after an extended period of working properly. There is definitely new data being added to the topic when poll starts returning nothing. I am hoping someone might have some insight as to why this happens. Abou

Re: 0.10.0.0 cluster : segments getting latest ts

2017-05-30 Thread Milind Vaidya
Upgraded the prod cluster to 0.10.0.1, but the issue did not go away. The moment the brokers were upgraded and restarted in a rolling fashion, the filesystem TS changed to the current one for all log files. Fortunately we knew what to expect

Re: util.parsing.json and Scala 2.12

2017-05-30 Thread Ismael Juma
It still exists in the parser combinators project (although deprecated): https://github.com/scala/scala-parser-combinators/blob/1.0.x/shared/src/main/scala/scala/util/parsing/json/JSON.scala#L32 That dependency is included by the Kafka build, so we'd need to understand more details of how you've
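
Since the NoClassDefFoundError suggests scala/util/parsing/json is simply missing from the runtime classpath, one possible fix (a guess, not confirmed by this thread — the exact version and build tool are assumptions) is to add the parser-combinators module explicitly when building against Scala 2.12:

```
// build.gradle — hypothetical fix; versions are illustrative,
// adjust to match the Scala version your project actually uses
dependencies {
    compile 'org.scala-lang:scala-library:2.12.2'
    compile 'org.scala-lang.modules:scala-parser-combinators_2.12:1.0.5'
}
```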

Re: Java APIs for ZooKeeper related operations

2017-05-30 Thread Hans Jespersen
Target is sometime in June. Apache Kafka releases are every 4 months so February, June, and October of each year -hans > On May 30, 2017, at 3:58 PM, Raghav wrote: > > Hans > > When will this version (0.11) be available ? > > On Tue, May 30, 2017 at 3:54 PM, Hans Jespersen

Re: Java APIs for ZooKeeper related operations

2017-05-30 Thread Raghav
Hans When will this version (0.11) be available ? On Tue, May 30, 2017 at 3:54 PM, Hans Jespersen wrote: > Probably important to read and understand these enhancements coming in 0.11 > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-117%3A+Add+a+public+AdminClient+API+for+Kafka+admin

Re: Java APIs for ZooKeeper related operations

2017-05-30 Thread Hans Jespersen
Probably important to read and understand these enhancements coming in 0.11 https://cwiki.apache.org/confluence/display/KAFKA/KIP-117%3A+Add+a+public+AdminClient+API+for+Kafka+admin+operations -hans (Hans Jespersen, Principal Systems Engineer, Confluent Inc.)

Re: Java APIs for ZooKeeper related operations

2017-05-30 Thread Mohammed Manna
1) For issue no. 1, I think you might find AdminUtils useful. This link should help you understand. I haven't got around to using ACLs for Kafka yet (as I am still doing a PoC myself) - so probably

util.parsing.json and Scala 2.12

2017-05-30 Thread Adam Lugowski
Is anyone else seeing this? java.lang.NoClassDefFoundError: scala/util/parsing/json/JSON$ at kafka.utils.Json$.<init>(Json.scala:28) at kafka.utils.Json$.<clinit>(Json.scala) at kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$registerConsumerInZK(ZookeeperConsumerConnector.s

Java APIs for ZooKeeper related operations

2017-05-30 Thread Raghav
Hi I want to know if there are Java APIs for the following. I want to be able to do these things programmatically in our Kafka cluster. Using command line tools seems a bit hacky. Please advise on the right way to do this, and any pointers to libraries in Java, Python, or Go. 1. Creating a topic with a given

How to debug partition reassignment failures?

2017-05-30 Thread Kerry Wei
Hi, I'm using kafka-reassign-partitions.sh to move partitions around; however, sometimes I get a partition reassignment failure. The cluster is healthy before the rebalance, and a retry after 10 mins resolved the problem. However, I wonder if there's a way I can check why the reassignment failed for

Re: Kafka: LogAppendTime + compressed messages

2017-05-30 Thread Dmitriy Vsekhvalnov
Hey, thanks for the update. Do you know if there is any mention of this in the "official" docs? It was a golang client for Kafka. On Tue, May 30, 2017 at 3:50 PM, Ismael Juma wrote: > Hi Dmitriy, > > Yes, the broker only updates the timestamp of the outer message (or record > batch in message format V2

Re: Trouble with querying offsets when using new consumer groups API

2017-05-30 Thread Hans Jespersen
I can confirm that in 0.10.2.1 I get offset information for disconnected consumers. The note in the output is a bit misleading because it also works with non-Java clients as long as they implement the new consumer. For example below is what I get when using the blizzard/node-rdkafka client which

Re: Trouble with querying offsets when using new consumer groups API

2017-05-30 Thread Jerry George
Thank you Hans and Vahid. That was definitely of great help. Much appreciated! Regards, Jerry On Tue, May 30, 2017 at 1:53 PM, Vahid S Hashemian < vahidhashem...@us.ibm.com> wrote: > Hi Jerry, > > The behavior you are expecting is implemented in 0.10.2 through KIP-88 ( > https://cwiki.apache.or

Re: Trouble with querying offsets when using new consumer groups API

2017-05-30 Thread Vahid S Hashemian
Hi Jerry, The behavior you are expecting is implemented in 0.10.2 through KIP-88 ( https://cwiki.apache.org/confluence/display/KAFKA/KIP-88%3A+OffsetFetch+Protocol+Update ) and KAFKA-3853 (https://issues.apache.org/jira/browse/KAFKA-3853). Starting from this release when you query a consumer group
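
The before/after behavior KIP-88 describes can be sketched as a toy model (the function and structures below are illustrative, not Kafka code): committed offsets live in __consumer_offsets whether or not the group is connected, but the pre-0.10.2 describe path only reported partitions that had an active member assigned.

```python
def describe_group(stored_offsets, active_assignments, group, kip88=False):
    """Toy model of consumer-group describe before/after KIP-88.

    stored_offsets: {(group, topic, partition): offset} -- what lives in
    __consumer_offsets whether or not the group is connected.
    active_assignments: {(group, topic, partition): member_id} for
    currently connected members only.
    Pre-KIP-88 the tool only reported partitions with an active member;
    with KIP-88 (0.10.2+) stored offsets are returned even for an
    inactive group.
    """
    rows = []
    for (g, topic, part), offset in sorted(stored_offsets.items()):
        if g != group:
            continue
        member = active_assignments.get((g, topic, part))
        if member is None and not kip88:
            continue  # old behavior: no active member, row dropped
        rows.append((topic, part, offset, member or "-"))
    return rows

offsets = {("g1", "t", 0): 42}
print(describe_group(offsets, {}, "g1"))              # [] -- group looks "empty"
print(describe_group(offsets, {}, "g1", kip88=True))  # [('t', 0, 42, '-')]
```

This is why a disconnected group appeared to have no offsets before 0.10.2: the offsets were stored all along, but the describe output was joined against the (empty) set of active members.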

Re: Trouble with querying offsets when using new consumer groups API

2017-05-30 Thread Hans Jespersen
It is definitely expected behavior that the new consumer version of kafka-consumer-groups.sh —describe only returns metadata for ‘active’ members. It will print an error message if the consumer group you provide has no active members. https://github.com/confluentinc/kafka/blob/trunk/core/src/ma

Kafka Time Based Index - Server Property ?

2017-05-30 Thread SenthilKumar K
Hi All, I've started exploring SearchMessagesByTimestamp https://cwiki.apache.org/confluence/display/KAFKA/KIP-33+-+Add+a+time+based+log+index#KIP-33-Addatimebasedlogindex-Searchmessagebytimestamp . The Kafka producer produces the record with a timestamp. When I try to search timestamps, in a few cases I
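
The timestamp search that KIP-33 adds can be sketched as a toy model of a segment's time index (illustrative only, not broker code): a list of (timestamp, offset) entries sorted by timestamp, where a lookup returns the earliest offset whose timestamp is at or after the target.

```python
import bisect

def offset_for_time(time_index, target_ts):
    """Toy model of KIP-33 timestamp search.

    time_index: list of (timestamp, offset) pairs sorted by timestamp,
    standing in for a segment's .timeindex file. Returns the offset of
    the earliest entry with timestamp >= target_ts, or None if every
    indexed timestamp is smaller (mirroring a "no such message" result).
    """
    timestamps = [ts for ts, _ in time_index]
    i = bisect.bisect_left(timestamps, target_ts)
    return time_index[i][1] if i < len(time_index) else None

index = [(1000, 0), (1500, 40), (2000, 95)]
print(offset_for_time(index, 1200))  # 40   (first entry at/after t=1200)
print(offset_for_time(index, 2500))  # None (nothing that late)
```

Note the sortedness assumption: with LogAppendTime broker timestamps are non-decreasing, but with producer-supplied CreateTime timestamps can arrive out of order, which is one source of surprising search results.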

RE: client recordmetadata meaning

2017-05-30 Thread Tauzell, Dave
I'm not sure if the flush would happen before the ack. Maybe somebody closer to the code can answer that? I haven't tested but I think your performance will go way down. -Dave -Original Message- From: JEVTIC, MARKO [mailto:marko.jev...@fisglobal.com] Sent: Tuesday, May 30, 2017 10:

Re: client recordmetadata meaning

2017-05-30 Thread JEVTIC, MARKO
Thank you Dave. But if we have a configuration of log.flush.interval.messages=1, could we assume that without replication, when we get the client producer reply, the message has been written to the disk? It's somewhat unclear to me from the documentation whether log.flush.interv

KIP-163: Lower the Minimum Required ACL Permission of OffsetFetch

2017-05-30 Thread Vahid S Hashemian
Hi, I started a new KIP to improve the minimum required ACL permissions of some of the APIs: https://cwiki.apache.org/confluence/display/KAFKA/KIP-163%3A+Lower+the+Minimum+Required+ACL+Permission+of+OffsetFetch The KIP is to address KAFKA-4585. Feedback and suggestions are welcome! Thanks. --V

Re: Trouble with querying offsets when using new consumer groups API

2017-05-30 Thread Jerry George
Hi Abhimanyu, No, actually I'm waiting for someone with operational experience to reply on the list. Thank you for bumping the question though :) If anyone on the list has experience increasing the retention, or if this is expected behaviour, could you kindly suggest an alternative? Regards, Jerry On S

RE: client recordmetadata meaning

2017-05-30 Thread Tauzell, Dave
>>If kafka client producer gets record meta data with a valid offset, do we >>consider that that message is indeed fsynced to the disk ? No, it doesn't. The meaning depends on your configuration (https://www.cloudera.com/documentation/kafka/latest/topics/kafka_ha.html). To increase the dura
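
Dave's distinction between an acknowledged write and an fsynced write can be sketched with a toy model (not broker code; the class below is purely illustrative): appends land in an in-memory buffer standing in for the OS page cache, and a flush to "disk" happens only every flush_interval messages, loosely mirroring log.flush.interval.messages.

```python
class ToyBrokerLog:
    """Toy model: acking a produce is independent of fsync.

    Appends go to an in-memory buffer (standing in for the page cache);
    a copy to "disk" happens only every flush_interval messages, the way
    log.flush.interval.messages schedules flushes in a real broker.
    """
    def __init__(self, flush_interval):
        self.flush_interval = flush_interval
        self.page_cache = []
        self.disk = []

    def append(self, record):
        self.page_cache.append(record)
        if len(self.page_cache) % self.flush_interval == 0:
            self.disk = list(self.page_cache)  # the "fsync"
        return len(self.page_cache) - 1        # "offset" in the ack

broker = ToyBrokerLog(flush_interval=2)
broker.append("m0")   # acked with an offset, but not yet on "disk"
print(broker.disk)    # []
broker.append("m1")   # second append triggers the flush
print(broker.disk)    # ['m0', 'm1']
```

In this toy, setting flush_interval=1 makes the flush happen inside append, before the ack returns; whether the real broker orders its flush strictly before the produce response is exactly the open question in this thread, so treat durability as coming from replication (acks=all plus min.insync.replicas), not from flush settings.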

Kafka old log files couldn't be deleted [JIRA 1194]

2017-05-30 Thread Mohammed Manna
Hi, I can see that this is an existing issue. The latest comment says "Everything works fine after manual clean-up", but this information is vague and doesn't really say what/when to delete. Has anyone got any idea whether this has been addressed already? I am using the latest release. FYI - t

log purge - only processed records after certain period

2017-05-30 Thread Jaroslav Libák
Hello, I'm thinking about using Kafka for a messaging use-case where records will be entity change events, e.g. "orderStateChange". There can be multiple consumers of events, and I do not want to lose any Kafka log records unless they have been processed by all consumers (e.g. due to some consumers temp
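
One thing worth noting for this use-case: Kafka's deletion is driven purely by time/size retention, not by whether consumers have processed the records. The usual approach is to set retention comfortably above worst-case consumer downtime, or use compaction for keyed changelog-style topics. A hedged sketch of the relevant broker settings (values are illustrative only):

```
# server.properties (or per-topic overrides) -- illustrative values
log.retention.hours=168        # keep segments for 7 days...
log.retention.bytes=-1         # ...with no size-based cap
log.cleanup.policy=delete      # or "compact" for keyed changelogs
```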

Re: log.cleanup.policy (Comma Separated list)

2017-05-30 Thread Damian Guy
Hi, No it just means it will run both policies. Old segments will be compacted and any segments that don't meet the retention settings will be deleted. Thanks, Damian On Tue, 30 May 2017 at 12:13 Mohammed Manna wrote: > Hi, > > Since the documentation says that the acceptable values are comma-
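
Damian's answer (both policies run, no left-to-right priority) can be sketched as a toy model of log.cleanup.policy=compact,delete — illustrative only, not broker code; real brokers operate on whole segments rather than individual records:

```python
def apply_compact_and_delete(records, retention_ms, now_ms):
    """Toy model of cleanup.policy=compact,delete.

    records: list of (timestamp_ms, key, value) in append order.
    Compaction keeps only the latest value per key; deletion then
    drops anything whose surviving timestamp is past the retention
    window. Both run; neither takes priority over the other.
    """
    # Compact: last write wins per key.
    latest = {}
    for ts, key, value in records:
        latest[key] = (ts, value)
    # Delete: drop entries older than the retention window.
    return {k: v for k, (ts, v) in latest.items()
            if now_ms - ts <= retention_ms}

now = 1_000_000
log = [
    (now - 500_000, "order-1", "created"),
    (now - 400_000, "order-1", "shipped"),  # supersedes "created"
    (now - 900_000, "order-2", "created"),  # older than retention
]
print(apply_compact_and_delete(log, retention_ms=600_000, now_ms=now))
# {'order-1': 'shipped'}
```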

client recordmetadata meaning

2017-05-30 Thread JEVTIC, MARKO
Hi all, I wasn't able to find in the documentation a firm statement about Kafka message replies. So, before going through the source code, I would like to ask a question: if the Kafka client producer gets record metadata with a valid offset, do we consider that the message is indeed fsynced to the disk

Re: Kafka: LogAppendTime + compressed messages

2017-05-30 Thread Ismael Juma
Hi Dmitriy, Yes, the broker only updates the timestamp of the outer message (or record batch in message format V2) so that it does not need to recompress if log append time is used. Consumers should ignore the timestamp in the inner message (or record-level timestamp in message format V2) if log a
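
Ismael's explanation — the broker stamps only the outer wrapper so it can avoid recompressing the batch — can be sketched as a toy model (illustrative only, not client code):

```python
def effective_timestamp(outer_ts, outer_ts_type, inner_ts):
    """Toy model of KIP-32 timestamp semantics for compressed batches.

    With LogAppendTime the broker rewrites only the outer (wrapper)
    message's timestamp, avoiding recompression; the inner per-message
    timestamps still hold the producer's CreateTime and should be
    ignored by the consumer in that case.
    """
    if outer_ts_type == "LogAppendTime":
        return outer_ts  # broker-assigned, authoritative
    return inner_ts      # CreateTime: producer-assigned, per message

# Producer stamped the message at t=100, broker appended at t=250.
print(effective_timestamp(250, "LogAppendTime", 100))  # 250
print(effective_timestamp(250, "CreateTime", 100))     # 100
```

A client that reads the inner timestamp under LogAppendTime (as the golang client in this thread apparently did) will report stale producer-side times instead of the broker append time.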

log.cleanup.policy (Comma Separated list)

2017-05-30 Thread Mohammed Manna
Hi, Since the documentation says that the acceptable values are a comma-separated list of policies (compact, delete), does it mean that it will prioritise left-to-right when applying the policies? For example, if I have set it to "compact,delete", does it mean the following: 1) Will perform log compac

Re: [DISCUSS]: KIP-161: streams record processing exception handlers

2017-05-30 Thread Michael Noll
Thanks for your work on this KIP, Eno -- much appreciated! - I think it would help to improve the KIP by adding an end-to-end code example that demonstrates, with the DSL and with the Processor API, how the user would write a simple application that would then be augmented with the proposed KIP ch

Re: [DISCUSS]: KIP-161: streams record processing exception handlers

2017-05-30 Thread Jan Filipiak
Hi Jay, Eno mentioned that he will narrow down the scope to only ConsumerRecord deserialisation. I am working with database changelogs only. I would really not like to see a dead-letter queue or something similar. How am I expected to get these back in order? Just grind to a halt and call me

Kafka: LogAppendTime + compressed messages

2017-05-30 Thread Dmitriy Vsekhvalnov
Hi all, we noticed that when the Kafka broker is configured with log.message.timestamp.type=LogAppendTime (to timestamp incoming messages on its own) and the producer is configured to use any kind of compression, what we end up with on the wire for the consumer is: - an outer compressed envelope with LogAppendTime, by