G: ConsumerOffsetChecker is deprecated and will be dropped in
> > releases following 0.9.0. Use ConsumerGroupCommand instead.
> > (kafka.tools.ConsumerOffsetChecker$)
>
>
> - -Matthias
>
> On 11/3/16 4:07 AM, Robert Metzger wrote:
> > Hi,
> >
> > some F
Hi,
some Flink users recently noticed that they cannot check the consumer lag
when using Flink's Kafka consumer [1]. According to this discussion on the
Kafka user list [2], the kafka-consumer-groups.sh utility doesn't work with
KafkaConsumers that use manual partition assignment.
Is there a way to g
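Since kafka-consumer-groups.sh can't report lag for manually assigned consumers, one workaround is to compute the lag yourself: fetch the log-end offset per partition (seekToEnd() + position(), or endOffsets() on newer clients) and subtract the last committed offset. A minimal, self-contained sketch of that arithmetic (the partition names and offsets below are made up; in a real consumer the two maps would be filled from the API calls just mentioned):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: compute consumer lag per partition by hand, since the CLI tool
// does not cover manually assigned consumers. In a real setup,
// logEndOffsets would come from seekToEnd()+position() (or endOffsets()
// on newer clients) and committedOffsets from committed().
public class LagCheck {
    static Map<String, Long> computeLag(Map<String, Long> logEndOffsets,
                                        Map<String, Long> committedOffsets) {
        Map<String, Long> lag = new HashMap<>();
        for (Map.Entry<String, Long> e : logEndOffsets.entrySet()) {
            // a partition with no committed offset has consumed nothing yet
            long committed = committedOffsets.getOrDefault(e.getKey(), 0L);
            lag.put(e.getKey(), e.getValue() - committed);
        }
        return lag;
    }

    public static void main(String[] args) {
        Map<String, Long> end = new HashMap<>();
        end.put("events-0", 120L);
        end.put("events-1", 200L);
        Map<String, Long> committed = new HashMap<>();
        committed.put("events-0", 100L);
        committed.put("events-1", 200L);
        System.out.println(computeLag(end, committed)); // events-0 lags by 20
    }
}
```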
Hi,
I've looked at this issue already on the Flink list and recommended Hironori
to post here. It seems that the consumer is not returning from the poll()
call, which is why the commitOffsets() method cannot enter the synchronized
block.
The KafkaConsumer is logging the following statements:
2016-
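To illustrate the symptom: if the fetcher thread holds the consumer lock while stuck in poll(), the committing thread can never enter the synchronized section. A toy simulation (not Flink code; a Semaphore stands in for the monitor, and a sleep stands in for the non-returning poll()):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Toy simulation of the deadlock symptom described above (not Flink code).
public class PollBlocksCommit {
    // one permit = the consumer "monitor" both threads need to enter
    static final Semaphore consumerLock = new Semaphore(1);

    // stands in for commitOffsets() trying to enter the synchronized block,
    // but with a timeout so we can observe the failure instead of hanging
    static boolean tryCommit() throws InterruptedException {
        if (consumerLock.tryAcquire(200, TimeUnit.MILLISECONDS)) {
            consumerLock.release();
            return true;
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        Thread fetcher = new Thread(() -> {
            try {
                consumerLock.acquire();   // fetcher enters the critical section
                Thread.sleep(1000);       // simulated poll() that never returns
                consumerLock.release();
            } catch (InterruptedException ignored) { }
        });
        fetcher.start();
        Thread.sleep(100);                // let the fetcher grab the permit first
        System.out.println("commit succeeded: " + tryCommit()); // prints false
        fetcher.join();
    }
}
```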
Sorry for reviving this old mailing list discussion.
I'm facing a similar issue while running a load test with many small topics
(100 topics with 4 partitions each).
There is also a Flink user who's facing the issue:
https://issues.apache.org/jira/browse/FLINK-3066
Are you also writing into many
or the 0.9.0 branch.
>
> Ismael
> On 27 Jan 2016 13:05, "Robert Metzger" wrote:
>
> > Hi Manu,
> >
> > in the streaming-benchmark, are you seeing the issue only when reading with
> > Gearpump, or is it triggered by a different processing framework as well?
Hi Manu,
in the streaming-benchmark, are you seeing the issue only when reading with
Gearpump, or is it triggered by a different processing framework as well?
I'm asking because there is a Flink user, also on Kafka 0.8.2.1, who's
reporting a very similar issue on SO:
http://stackoverflow.c
r seek() call to go to the offset you really want to get to.
> >
> > I don't think there are current plans to add getOffsetsBefore, but maybe
> we
> > need it for the use-case you specified.
> > I think the developer mailing list (or a JIRA) will be a better place
Hi,
I'm currently looking into implementing a load shedding strategy in
Flink's Kafka consumer.
Therefore, I would like to allow users to request the latest offset of the
subscribed TopicPartitions, so that they can
a) determine the lag
b) maybe set the next fetch offset to the latest offset (o
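One possible shape for that load shedding decision, assuming the latest offset is available (e.g. via seekToEnd()/position(), or endOffsets() on newer clients). All names here are hypothetical, not an existing Flink or Kafka API:

```java
// Hypothetical load-shedding rule (made-up names, not a real API):
// if a partition's lag exceeds maxLag, jump ahead to the latest offset,
// otherwise keep consuming from the current position. In a real consumer
// the jump would be carried out with seek().
public class ShedPolicy {
    static long nextFetchOffset(long currentOffset, long latestOffset, long maxLag) {
        long lag = latestOffset - currentOffset;
        return (lag > maxLag) ? latestOffset : currentOffset;
    }

    public static void main(String[] args) {
        System.out.println(nextFetchOffset(100, 150, 100)); // lag 50, keep reading at 100
        System.out.println(nextFetchOffset(100, 900, 100)); // lag 800, shed: skip to 900
    }
}
```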
Hi Peter,
The problem is that you have both the DataSet and the DataStream package
imports. Remove the DataSet API import (import org.apache.flink.api.scala._)
to make the example work.
On Sun, Dec 20, 2015 at 3:20 PM, Peter Vandenabeele
wrote:
> Hi,
>
> I am trying to write a minimal Kafka con
roductive :)
> Kafka brokers and clients both have Metrics that may help you track
> where the performance issues are coming from.
>
> Gwen
>
> On Wed, Jul 15, 2015 at 9:24 AM, Robert Metzger
> wrote:
> > Hi Shef,
> >
> > did you resolve this issue?
> > I
Hi Shef,
did you resolve this issue?
I'm facing some performance issues and I was wondering whether reading
locally would resolve them.
On Mon, Jun 22, 2015 at 11:43 PM, Shef wrote:
> Noob question here. I want to have a single consumer for each partition
> that consumes only the messages that
Hi,
I'm a committer at the Apache Flink project.
I'm working on adding support for exactly-once semantics for Flink's stream
processing component.
To that end, we want to keep track of the read offset in the KafkaSource
and restart consumption from the last known offset (tracked within
Flink).
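The offset bookkeeping described above could be sketched like this (hypothetical names, not the actual Flink implementation): record the offset of each emitted record, snapshot the map on a checkpoint, and on recovery resume each partition at snapshot offset + 1:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of per-partition offset tracking for exactly-once restarts
// (hypothetical names, not the real Flink KafkaSource).
public class OffsetTracker {
    private final Map<Integer, Long> lastReadOffsets = new HashMap<>();

    // called for every record emitted downstream
    void recordRead(int partition, long offset) {
        lastReadOffsets.put(partition, offset);
    }

    // called on a checkpoint; the returned map would be persisted by Flink
    Map<Integer, Long> snapshot() {
        return new HashMap<>(lastReadOffsets);
    }

    // called on recovery: resume right after the last known-processed record
    static long restartOffset(Map<Integer, Long> snapshot, int partition) {
        Long last = snapshot.get(partition);
        return (last == null) ? 0L : last + 1;
    }
}
```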