Log end offset

2015-05-10 Thread Achanta Vamsi Subhash
Hi, What is the best way to find out the log end offset for a topic? Currently I am using the SimpleConsumer getLastOffset logic described in: https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example But we are running into a ClosedChannelException for some of the topics.
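For reference, a minimal Java sketch of the getLastOffset logic from that wiki page (0.8 SimpleConsumer API); pass kafka.api.OffsetRequest.LatestTime() as whichTime to get the log end offset. The broker host/port and clientName below are placeholders, and the ClosedChannelException typically indicates the SimpleConsumer is connected to a broker that is unreachable or no longer the leader for the partition, so re-running leader discovery before retrying is the usual remedy.

    import java.util.HashMap;
    import java.util.Map;

    import kafka.api.PartitionOffsetRequestInfo;
    import kafka.common.TopicAndPartition;
    import kafka.javaapi.OffsetResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    public class LogEndOffsetExample {

        // Ask the partition leader for its latest offset (the log end offset).
        public static long getLastOffset(SimpleConsumer consumer, String topic, int partition,
                                         long whichTime, String clientName) {
            TopicAndPartition tp = new TopicAndPartition(topic, partition);
            Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<>();
            requestInfo.put(tp, new PartitionOffsetRequestInfo(whichTime, 1));
            kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
                    requestInfo, kafka.api.OffsetRequest.CurrentVersion(), clientName);
            OffsetResponse response = consumer.getOffsetsBefore(request);
            if (response.hasError()) {
                System.err.println("Error fetching offset: " + response.errorCode(topic, partition));
                return -1L;
            }
            return response.offsets(topic, partition)[0];
        }

        public static void main(String[] args) {
            // "broker1" must be the current leader for the partition; a ClosedChannelException
            // here usually means it is unreachable or no longer the leader.
            SimpleConsumer consumer = new SimpleConsumer("broker1", 9092, 100000, 64 * 1024, "leo-lookup");
            long leo = getLastOffset(consumer, "my-topic", 0, kafka.api.OffsetRequest.LatestTime(), "leo-lookup");
            System.out.println("log end offset = " + leo);
            consumer.close();
        }
    }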

Re: Kafka Rebalance on Watcher event Question

2015-05-10 Thread Manikumar Reddy
All the consumers in the same consumer group will share the load across the given topic/partitions. So on any consumer failure, there will be a re-balance to assign the failed consumer's topic/partitions to live consumers. Please check the consumer documentation here: https://kafka.apache.org/documentation.html#introduc

Re: Is there a way to know when I've reached the end of a partition (consumed all messages) when using the high-level consumer?

2015-05-10 Thread Bhavesh Mistry
I have used what Gwen suggested, but to avoid false positives: while consuming records, keep track of the *last* consumed offset and compare it with the latest offset on the broker for the consumed topic when you get a "TimeOut Exception" for that particular partition of the given topic (e.g. the JMX bean *LogEndOffset* for
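A rough sketch of that guard, assuming a high-level consumer stream created with consumer.timeout.ms set; the logEndOffset argument is hypothetical and would come from something like the getLastOffset helper sketched under "Log end offset" above or the broker's *LogEndOffset* JMX bean.

    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.ConsumerTimeoutException;
    import kafka.consumer.KafkaStream;
    import kafka.message.MessageAndMetadata;

    public class EndOfPartitionCheck {

        // Drain the stream and, when the timeout fires, double-check the last consumed offset
        // against the broker's log end offset before declaring the partition fully consumed.
        public static boolean drainedToEnd(KafkaStream<byte[], byte[]> stream, long logEndOffset) {
            long lastConsumed = -1L;
            ConsumerIterator<byte[], byte[]> it = stream.iterator();
            try {
                while (it.hasNext()) {
                    MessageAndMetadata<byte[], byte[]> msg = it.next();
                    lastConsumed = msg.offset();
                    // process(msg) ...
                }
            } catch (ConsumerTimeoutException e) {
                // quiet for consumer.timeout.ms; this could just be a slow producer
            }
            // the log end offset is the *next* offset to be written, so "caught up" means
            // lastConsumed == logEndOffset - 1
            return lastConsumed >= logEndOffset - 1;
        }
    }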

Re: Kafka Rebalance on Watcher event Question

2015-05-10 Thread dinesh kumar
But why? What is the reason for triggering a rebalance if none of a consumer's topics are affected? Is there some reason for triggering a rebalance irrespective of whether the consumer's topics are affected? On 11 May 2015 at 11:06, Manikumar Reddy wrote: > If both C1 and C2 belong to the same consumer

Re: Kafka Rebalance on Watcher event Question

2015-05-10 Thread Manikumar Reddy
If both C1 and C2 belong to the same consumer group, then a re-balance will be triggered. A consumer subscribes to event changes of the consumer id registry within its group. On Mon, May 11, 2015 at 10:55 AM, dinesh kumar wrote: > Hi, > I am looking at the code of kafka.consumer.ZookeeperConsumerConn
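An illustrative sketch (not Kafka's actual code) of why the watcher fires for every membership change: the connector subscribes to child changes on the group's consumer id registry, so any consumer joining or leaving the group triggers the listener and hence a rebalance, regardless of which topics that consumer was subscribed to. The ZooKeeper address and group name are placeholders.

    import java.util.List;
    import org.I0Itec.zkclient.IZkChildListener;
    import org.I0Itec.zkclient.ZkClient;

    public class ConsumerRegistryWatch {
        public static void main(String[] args) throws InterruptedException {
            String group = "my-group";                              // hypothetical group name
            ZkClient zkClient = new ZkClient("localhost:2181");
            String idsPath = "/consumers/" + group + "/ids";        // consumer id registry for the group

            zkClient.subscribeChildChanges(idsPath, new IZkChildListener() {
                @Override
                public void handleChildChange(String parentPath, List<String> currentChildren) {
                    // any change here (C1 or C2 starting/stopping) would trigger a rebalance
                    System.out.println("Consumer ids changed: " + currentChildren);
                }
            });

            Thread.sleep(Long.MAX_VALUE);   // keep the watcher alive
        }
    }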

Kafka Rebalance on Watcher event Question

2015-05-10 Thread dinesh kumar
Hi, I am looking at the code of kafka.consumer.ZookeeperConsumerConnector.scala (link here) and I see that all consumer ids registered for a particular group are registered under the path /consu

Re: Kafka Client in Rust

2015-05-10 Thread Ewen Cheslack-Postava
Added to the wiki, which required adding a new Rust section :) Thanks for the contribution, Yousuf! On Sun, May 10, 2015 at 6:57 PM, Yousuf Fauzan wrote: > Hi All, > > I have created a Kafka client for Rust. The client supports Metadata, Produce, > Fetch, and Offset requests. I plan to add support

Kafka Client in Rust

2015-05-10 Thread Yousuf Fauzan
Hi All, I have created a Kafka client for Rust. The client supports Metadata, Produce, Fetch, and Offset requests. I plan to add support for Consumers and Offset management soon. Would it be possible to get it added to https://cwiki.apache.org/confluence/display/KAFKA/Clients Info: Pure Rust implemen

Asynchronous producer-consumer

2015-05-10 Thread Knowledge gatherer
Hi, I have a requirement in which I have to configure the producer & consumer asynchronously, so that the queued messages are sent for every 1 MB of data. Please provide some help. Thanks
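One hedged way to approximate "send roughly every 1 MB" with the new (0.8.2+) Java producer: send() is already asynchronous, and the producer flushes a per-partition batch once it reaches batch.size bytes or linger.ms elapses. The broker address, topic name and the exact sizes below are assumptions taken from the question.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class OneMbBatchingProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
            props.put("batch.size", 1048576);        // flush a partition batch at ~1 MB
            props.put("linger.ms", 500);             // upper bound on how long a partial batch waits
            props.put("max.request.size", 2097152);  // keep the request limit above the batch size

            KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);
            try {
                for (int i = 0; i < 10000; i++) {
                    // send() only appends to the in-memory batch and returns immediately
                    producer.send(new ProducerRecord<byte[], byte[]>("my-topic", ("message-" + i).getBytes()));
                }
            } finally {
                producer.close();   // flushes any remaining batches
            }
        }
    }

On the consumer side the high-level consumer already fetches in batches (fetch.message.max.bytes controls the fetch size), so the 1 MB shaping would mostly be done on the producer side.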

Re: Is there a way to know when I've reached the end of a partition (consumed all messages) when using the high-level consumer?

2015-05-10 Thread Ewen Cheslack-Postava
@Gwen- But that only works for topics that have low enough traffic that you would ever actually hit that timeout. The Confluent schema registry needs to do something similar to make sure it has fully consumed the topic it stores data in so it doesn't serve stale data. We know in our case we'll onl

Re: Pulling Snapshots from Kafka, Log compaction last compact offset

2015-05-10 Thread Hisham Mardam-Bey
With mypipe (MySQL -> Kafka) we've had a similar discussion re: topic names and preserving transactions. At this point: - Kafka topic names are configurable, allowing for per-db or per-table topics - transactions maintain a transaction ID for each event when published into Kafka https://github.co

Re: Is there a way to know when I've reached the end of a partition (consumed all messages) when using the high-level consumer?

2015-05-10 Thread Gwen Shapira
For Flume, we use the timeout configuration and catch the exception, with the assumption that "no messages for a few seconds" == "the end". On Sat, May 9, 2015 at 2:04 AM, James Cheng wrote: > Hi, > > I want to use the high level consumer to read all partitions for a topic, > and know when I have
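A minimal sketch of that pattern with the old high-level consumer: set consumer.timeout.ms so the iterator throws ConsumerTimeoutException after a quiet period, and treat that as "probably at the end" (with the false-positive caveat discussed in the replies above). The ZooKeeper address, group id and topic are placeholders.

    import java.util.Collections;
    import java.util.List;
    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.ConsumerTimeoutException;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class ConsumeUntilQuiet {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181");
            props.put("group.id", "snapshot-reader");
            props.put("auto.offset.reset", "smallest");   // start from the beginning of the log
            props.put("consumer.timeout.ms", "5000");     // iterator throws after 5s of silence

            ConsumerConnector connector = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            List<KafkaStream<byte[], byte[]>> streams =
                    connector.createMessageStreams(Collections.singletonMap("my-topic", 1)).get("my-topic");

            ConsumerIterator<byte[], byte[]> it = streams.get(0).iterator();
            try {
                while (it.hasNext()) {
                    System.out.println(new String(it.next().message()));
                }
            } catch (ConsumerTimeoutException e) {
                // no messages for consumer.timeout.ms, so assume we've reached the end
                // (may be a false positive on low-traffic topics)
            } finally {
                connector.shutdown();
            }
        }
    }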