Re: Checking the consumer lag when using manual partition assignment with the KafkaConsumer

2016-11-06 Thread Robert Metzger
G: ConsumerOffsetChecker is deprecated and will be dropped in releases following 0.9.0. Use ConsumerGroupCommand instead. (kafka.tools.ConsumerOffsetChecker$) -Matthias > On 11/3/16 4:07 AM, Robert Metzger wrote: > Hi, some F

Checking the consumer lag when using manual partition assignment with the KafkaConsumer

2016-11-03 Thread Robert Metzger
Hi, some Flink users recently noticed that they cannot check the consumer lag when using Flink's Kafka consumer [1]. According to this discussion on the Kafka user list [2], the kafka-consumer-groups.sh utility doesn't work with KafkaConsumers that use manual partition assignment. Is there a way to g
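For illustration, here is a rough sketch of computing the lag from inside the application that owns the consumer: lag per partition is the log-end offset minus the consumer's current position. This assumes a 0.10.1+ client, where endOffsets() is available; the helper name and byte-array types are made up.

```scala
import scala.collection.JavaConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition

object LagCheck {
  // Lag per partition = log-end offset minus the consumer's current position.
  // Works for consumers that use assign() (manual partition assignment).
  def currentLag(consumer: KafkaConsumer[Array[Byte], Array[Byte]],
                 partitions: Seq[TopicPartition]): Map[TopicPartition, Long] = {
    val endOffsets = consumer.endOffsets(partitions.asJava).asScala
    partitions.map { tp =>
      tp -> (endOffsets(tp).longValue() - consumer.position(tp))
    }.toMap
  }
}
```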

Re: Blocked in KafkaConsumer.commitOffsets

2016-06-15 Thread Robert Metzger
Hi, I've already looked at this issue on the Flink list and recommended Hironori to post here. It seems that the consumer is not returning from the poll() call, which is why the commitOffsets() method cannot enter the synchronized block. The KafkaConsumer is logging the following statements: 2016-
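For context: the KafkaConsumer is not thread-safe, so code typically guards it with a lock shared by poll() and the commit path; if poll() never returns, the committing thread waits on that lock indefinitely. The one method that may be called from another thread is wakeup(), which aborts a blocking poll() with a WakeupException. A rough sketch of that pattern (not Flink's actual code; names are made up):

```scala
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.errors.WakeupException

class PollLoop(consumer: KafkaConsumer[Array[Byte], Array[Byte]]) {
  private val consumerLock = new Object

  def run(): Unit = {
    try {
      while (true) {
        consumerLock.synchronized {
          // May block well past the timeout if brokers are unreachable;
          // the lock is held for the whole call.
          val records = consumer.poll(100)
          // ... hand the records to the processing pipeline ...
        }
      }
    } catch {
      case _: WakeupException => // poll() was aborted from another thread
    }
  }

  // Blocks until the polling thread releases the lock, i.e. until poll() returns.
  def commitOffsets(): Unit = consumerLock.synchronized {
    consumer.commitSync()
  }

  // Safe to call from any thread; makes a blocked poll() throw WakeupException.
  def interruptPoll(): Unit = consumer.wakeup()
}
```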

Re: NotLeaderForPartitionException: This server is not the leader for that topic-partition.

2016-02-08 Thread Robert Metzger
Sorry for reviving this old mailing list discussion. I'm facing a similar issue while running a load test with many small topics (100 topics) with 4 partitions each. There is also a Flink user who's facing the issue: https://issues.apache.org/jira/browse/FLINK-3066 Are you also writing into many

Re: Broker Exception: Attempt to read with a maximum offset less than start offset

2016-01-27 Thread Robert Metzger
or the 0.9.0 branch. > Ismael > On 27 Jan 2016 13:05, "Robert Metzger" wrote: > Hi Manu, in the streaming-benchmark, are you seeing the issue only when reading with Gearpump, or is it triggered by a different processing framework as well?

Re: Broker Exception: Attempt to read with a maximum offset less than start offset

2016-01-27 Thread Robert Metzger
Hi Manu, in the streaming-benchmark, are you seeing the issue only when reading with Gearpump, or is it triggered by a different processing framework as well? I'm asking because there is a Flink user who is also using Kafka 0.8.2.1 and is reporting a very similar issue on SO: http://stackoverflow.c

Re: SimpleConsumer.getOffsetsBefore() in 0.9 KafkaConsumer

2016-01-25 Thread Robert Metzger
r seek() call to go to the offset you really want to get to. > I don't think there are current plans to add getOffsetsBefore, but maybe we need it for the use-case you specified. > I think the developer mailing list (or a JIRA) will be a better place

SimpleConsumer.getOffsetsBefore() in 0.9 KafkaConsumer

2016-01-20 Thread Robert Metzger
Hi, I'm currently looking into implementing a load-shedding strategy in Flink's Kafka consumer. For that, I would like to allow users to request the latest offset of the subscribed TopicPartitions, so that they can a) determine the lag and b) maybe set the next fetch offset to the latest offset (o
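A common workaround with the 0.9 consumer (which has no getOffsetsBefore()) is to seek to the end, read the position, and then seek back to where you actually want to continue, as the truncated reply excerpted above hints at. A sketch under those assumptions; note that the 0.9 seekToEnd() takes varargs, while later clients take a Collection and 0.10.1+ adds endOffsets():

```scala
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition

object OffsetProbe {
  // Returns the log-end offset of each assigned partition without losing the
  // consumer's current read position.
  def latestOffsets(consumer: KafkaConsumer[Array[Byte], Array[Byte]],
                    partitions: Seq[TopicPartition]): Map[TopicPartition, Long] = {
    // Remember where we currently are so we can jump back afterwards.
    val current = partitions.map(tp => tp -> consumer.position(tp)).toMap
    consumer.seekToEnd(partitions: _*)   // 0.9 signature is varargs
    val latest = partitions.map(tp => tp -> consumer.position(tp)).toMap
    // Return to the offsets we actually want to fetch from next.
    current.foreach { case (tp, offset) => consumer.seek(tp, offset) }
    latest
  }
}
```

The lag is then the returned log-end offset minus the current position, and the same trick with seekToBeginning() yields the earliest available offset.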

Re: Minimal KafkaConsumer in Scala fails compilation with `could not find implicit value for evidence parameter of type org.apache.flink.api.common.typeinfo.TypeInformation[String]`

2016-01-04 Thread Robert Metzger
Hi Peter, The problem is that you have both the DataSet and the DataStream package imports. Remove the DataSet API import (import org.apache.flink.api.scala._) to make the example compile. On Sun, Dec 20, 2015 at 3:20 PM, Peter Vandenabeele wrote: > Hi, > I am trying to write a minimal Kafka con
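A minimal sketch of the fix (the Kafka source itself is omitted; any DataStream source that needs an implicit TypeInformation shows the same behavior):

```scala
import org.apache.flink.streaming.api.scala._   // DataStream API implicits: keep this import
// import org.apache.flink.api.scala._          // DataSet API import: remove it to fix the error

object MinimalExample {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // With only the streaming import in scope, the implicit
    // TypeInformation[String] evidence is found and this compiles.
    val stream: DataStream[String] = env.fromElements("a", "b", "c")
    stream.print()
    env.execute("minimal example")
  }
}
```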

Re: Consumer that consumes only local partition?

2015-08-04 Thread Robert Metzger
roductive :) > Kafka brokers and clients both have metrics that may help you track where the performance issues are coming from. > Gwen > On Wed, Jul 15, 2015 at 9:24 AM, Robert Metzger wrote: > Hi Shef, did you resolve this issue? I

Re: Consumer that consumes only local partition?

2015-07-15 Thread Robert Metzger
Hi Shef, did you resolve this issue? I'm facing some performance issues and I was wondering whether reading locally would resolve them. On Mon, Jun 22, 2015 at 11:43 PM, Shef wrote: > Noob question here. I want to have a single consumer for each partition that consumes only the messages that

Manually controlling the start offset in the high level API

2015-04-22 Thread Robert Metzger
Hi, I'm a committer at the Apache Flink project. I'm working on adding support for exactly-once semantics to Flink's stream processing component. To do this, we want to keep track of the offsets read by the KafkaSource and restart consumption from the last known offset (tracked within Flink).
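With the new KafkaConsumer API that shipped with Kafka 0.9 this became possible through manual assignment plus seek(); the 0.8 high-level API discussed in this thread does not expose it. A rough sketch, where restoredOffsets stands in for the offsets tracked in a Flink checkpoint:

```scala
import scala.collection.JavaConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition

object ResumeFromCheckpoint {
  def resumeFrom(consumer: KafkaConsumer[Array[Byte], Array[Byte]],
                 restoredOffsets: Map[TopicPartition, Long]): Unit = {
    // Take over the partitions explicitly instead of using group management.
    consumer.assign(restoredOffsets.keys.toSeq.asJava)
    // Position each partition at the offset recorded in the last checkpoint,
    // so consumption continues exactly where it left off.
    restoredOffsets.foreach { case (tp, offset) => consumer.seek(tp, offset) }
  }
}
```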