Thanks Matthias!

1) I didn't realize kafka-consumer-groups.sh only queries the consumer
coordinator.
I was checking after terminating the streaming app. Got this via
console-consumer.

2) Understood.

3) Nope. Will check this out.

4) Yes, I can probably have a ProcessorSupplier for this.
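For reference, here is a minimal sketch of the kind of offset-tracking processor point 4 is about. The real interfaces live in org.apache.kafka.streams.processor; to keep the sketch self-contained, tiny stand-in types are defined locally, and names like OffsetTracker are made up for illustration.

```java
import java.util.HashMap;
import java.util.Map;

public class OffsetTrackerSketch {

    // Stand-in for org.apache.kafka.streams.processor.ProcessorContext,
    // which exposes the current record's topic/partition/offset.
    interface Context {
        String topic();
        int partition();
        long offset();
    }

    // Hypothetical processor body: remember the latest offset seen per
    // topic-partition, the way a Processor.process() implementation could
    // with the real context handed to it in init().
    static class OffsetTracker {
        private final Map<String, Long> latest = new HashMap<>();

        void process(Context ctx) {
            latest.put(ctx.topic() + "-" + ctx.partition(), ctx.offset());
        }

        Long latestOffset(String topic, int partition) {
            return latest.get(topic + "-" + partition);
        }
    }

    // Helper that builds a fixed Context value so the sketch can be run
    // without a Kafka cluster.
    static Context fixed(String topic, int partition, long offset) {
        return new Context() {
            public String topic() { return topic; }
            public int partition() { return partition; }
            public long offset() { return offset; }
        };
    }

    public static void main(String[] args) {
        OffsetTracker tracker = new OffsetTracker();
        // Simulate two records from the same partition.
        tracker.process(fixed("events", 0, 41L));
        tracker.process(fixed("events", 0, 42L));
        System.out.println(tracker.latestOffset("events", 0)); // prints 42
    }
}
```

In a real topology the same tracking logic would sit inside a Processor supplied via stream.process(...), using the ProcessorContext passed to init().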

Srikanth

On Fri, Jun 3, 2016 at 8:43 AM, Matthias J. Sax <matth...@confluent.io>
wrote:

> Srikanth,
>
> KafkaStreams uses the new consumers, thus you need to use
>
> > bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server
> > localhost:9092 --list
>
> instead of "--zookeeper XXX"
> AbstractTask.commit() is called every "commit.interval.ms". On this call,
> a task is requested to persist all buffered data; i.e., flush state
> data to persistent storage and flush pending writes to Kafka. After all
> tasks have executed .commit(), the KafkaStreams app commits the current
> source topic offsets to Kafka. This is a regular KafkaConsumer commit.
>
> If you use the latest trunk version, you can get consumer/producer
> metrics via KafkaStreams.metrics().
>
> For getting the Context object, you need to write your own custom
> processor and add it via stream.process(...);
>
>
> -Matthias
>
> On 06/02/2016 06:54 PM, Srikanth wrote:
> > Matthias,
> >
> > """bin/kafka-consumer-groups.sh --zookeeper localhost:2181/kafka10
> > --list""" output didn't show the group I used in streams app.
> > Also, AbstractTask.java had a commit() API. That made me wonder if offset
> > management was overridden too.
> >
> > I'm trying out KafkaStreams for one new streaming app we are working on.
> > We'll most likely stick to DSL for that.
> > Does the DSL expose any stat or debug info? Or any way to access the
> > underlying Context?
> >
> > Srikanth
> >
> > On Thu, Jun 2, 2016 at 9:30 AM, Matthias J. Sax <matth...@confluent.io>
> > wrote:
> >
> >> Hi Srikanth,
> >>
> >> I am not exactly sure if I understand your question correctly.
> >>
> >> One way to track the progress is to get the current record offset (you
> >> can obtain it in the low-level Processor API via the provided Context
> >> object).
> >>
> >> Otherwise, on commit, all writes to intermediate topics are flushed to
> >> Kafka and the source offsets get committed to Kafka, too.
> >>
> >> A KafkaStreams application internally uses the standard high-level Java
> >> KafkaConsumer (all instances of a single application belong to the same
> >> consumer group) and standard Java KafkaProducer.
> >>
> >> So you can use standard Kafka tools to access this information.
> >>
> >> Does this answer your question?
> >>
> >> -Matthias
> >>
> >> On 05/31/2016 09:10 PM, Srikanth wrote:
> >>> Hi,
> >>>
> >>> How can I track the progress of a kafka streaming job?
> >>> The only reference I see is "commit.interval.ms", which controls how
> >>> often offsets are committed.
> >>> By default, where is it committed, and is there a tool to read it
> >>> back? Maybe something similar to bin/kafka-consumer-groups.sh.
> >>>
> >>> I'd like to look at details for source & intermediate topics too.
> >>>
> >>> Srikanth
> >>>
> >>
> >>
> >
>
>
