Yep, that should also work!
-John

On Mon, Sep 17, 2018 at 8:36 AM David Espinosa <espi...@gmail.com> wrote:

> Thank you all for your responses!
> I also asked this on the Confluent Community Slack (
> https://confluentcommunity.slack.com) and this approach was suggested:
>
>    1. Query the partitions' high watermark (end) offsets
>    2. Set the consumer to consume from the beginning
>    3. Break out of the poll loop once the high watermark is reached
>
> I still have some doubts about the implementation, but it seems like a
> good approach (I'm using a single partition, so a single loop per topic
> would be enough).
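>
> Something like this is what I have in mind (just a sketch, assuming a
> single-partition topic called "my-topic", String keys/values, and that
> nothing is written to the topic while it runs; the println stands in for
> the real processing):
>
> import java.time.Duration;
> import java.util.Collections;
> import java.util.Properties;
> import org.apache.kafka.clients.consumer.ConsumerConfig;
> import org.apache.kafka.clients.consumer.ConsumerRecord;
> import org.apache.kafka.clients.consumer.KafkaConsumer;
> import org.apache.kafka.common.TopicPartition;
> import org.apache.kafka.common.serialization.StringDeserializer;
>
> public class ReadAllOnce {
>     public static void main(String[] args) {
>         Properties props = new Properties();
>         props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
>         props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
>         props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
>         props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
>
>         try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
>             TopicPartition tp = new TopicPartition("my-topic", 0);
>             consumer.assign(Collections.singletonList(tp));
>             // 1. query the high watermark (end offset) before reading anything
>             long endOffset = consumer.endOffsets(Collections.singletonList(tp)).get(tp);
>             // 2. start from the beginning of the partition
>             consumer.seekToBeginning(Collections.singletonList(tp));
>             // 3. poll until the position reaches the captured end offset
>             while (consumer.position(tp) < endOffset) {
>                 for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
>                     System.out.println(record.offset() + ": " + record.value());
>                 }
>             }
>         }
>     }
> }
>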
> What do you think?
>
> On Sat, Sep 15, 2018 at 0:30, John Roesler (<j...@confluent.io>)
> wrote:
>
> > Specifically, you can monitor the "records-lag-max" metric (
> > https://docs.confluent.io/current/kafka/monitoring.html#fetch-metrics),
> > or the more granular per-partition lag metrics.
> >
> > Once this metric goes to 0, you know that you've caught up with the
> > tail of the log.
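> >
> > As a very rough sketch of that (assuming "streams" is your already-started
> > KafkaStreams instance, and that blocking the calling thread is acceptable):
> >
> > import org.apache.kafka.streams.KafkaStreams;
> >
> > public class CaughtUpShutdown {
> >     // close the Streams app once its consumers report zero max lag
> >     public static void closeWhenCaughtUp(KafkaStreams streams) throws InterruptedException {
> >         while (true) {
> >             double maxLag = streams.metrics().values().stream()
> >                 .filter(m -> "records-lag-max".equals(m.metricName().name()))
> >                 .mapToDouble(m -> m.metricValue() instanceof Double
> >                         ? (Double) m.metricValue()
> >                         : Double.NaN)
> >                 .max()
> >                 .orElse(Double.NaN);
> >             // NaN ("no data yet") is not equal to 0.0, so we keep waiting in that case
> >             if (maxLag == 0.0) {
> >                 streams.close();
> >                 return;
> >             }
> >             Thread.sleep(1000);
> >         }
> >     }
> > }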
> >
> > Hope this helps,
> > -John
> >
> > On Fri, Sep 14, 2018 at 2:02 PM Matthias J. Sax <matth...@confluent.io>
> > wrote:
> >
> > > Using Kafka Streams, this is a little tricky.
> > >
> > > The API itself has no built-in mechanism to do this. You would need to
> > > monitor the lag of the application and, if the lag is zero (assuming
> > > you don't write new data into the topic in parallel), terminate the
> > > application.
> > >
> > >
> > > -Matthias
> > >
> > > On 9/14/18 4:19 AM, Henning Røigaard-Petersen wrote:
> > > > Spin up a consumer, subscribe to EOF events, assign all partitions
> > > > from the beginning, and keep polling until all partitions have
> > > > reached EOF.
> > > > Though, if you have concurrent writers, new messages may be appended
> > > > after you observe EOF on a partition, so you are never guaranteed to
> > > > have read all messages at the time you choose to close the consumer.
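> > > >
> > > > The plain Java consumer has no explicit partition-EOF event, so a rough
> > > > sketch of the same idea compares the consumer's position to the end
> > > > offsets captured up front (the println stands in for real processing,
> > > > and the consumer is assumed to already be configured with suitable
> > > > deserializers):
> > > >
> > > > import java.time.Duration;
> > > > import java.util.HashSet;
> > > > import java.util.List;
> > > > import java.util.Map;
> > > > import java.util.Set;
> > > > import java.util.stream.Collectors;
> > > > import org.apache.kafka.clients.consumer.ConsumerRecord;
> > > > import org.apache.kafka.clients.consumer.KafkaConsumer;
> > > > import org.apache.kafka.common.TopicPartition;
> > > >
> > > > public class DrainTopic {
> > > >     // read every partition of the topic up to the end offsets seen at startup
> > > >     public static void readAllAndClose(KafkaConsumer<String, String> consumer, String topic) {
> > > >         List<TopicPartition> partitions = consumer.partitionsFor(topic).stream()
> > > >                 .map(pi -> new TopicPartition(topic, pi.partition()))
> > > >                 .collect(Collectors.toList());
> > > >         consumer.assign(partitions);
> > > >         consumer.seekToBeginning(partitions);
> > > >         // capture the end offsets once; records appended later are deliberately ignored
> > > >         Map<TopicPartition, Long> endOffsets = consumer.endOffsets(partitions);
> > > >         Set<TopicPartition> remaining = new HashSet<>(partitions);
> > > >         while (!remaining.isEmpty()) {
> > > >             for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
> > > >                 System.out.println(record.partition() + "-" + record.offset() + ": " + record.value());
> > > >             }
> > > >             remaining.removeIf(tp -> consumer.position(tp) >= endOffsets.get(tp));
> > > >         }
> > > >         consumer.close();
> > > >     }
> > > > }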
> > > >
> > > > /Henning Røigaard-Petersen
> > > >
> > > > -----Original Message-----
> > > > From: David Espinosa <espi...@gmail.com>
> > > > Sent: 14. september 2018 09:46
> > > > To: users@kafka.apache.org
> > > > Subject: Best way for reading all messages and close
> > > >
> > > > Hi all,
> > > >
> > > > Although the usage of Kafka is stream oriented, for a concrete use
> > > > case I need to read all the messages existing in a topic and, once
> > > > all of them have been read, close the consumer.
> > > >
> > > > What's the best way or framework for doing this?
> > > >
> > > > Thanks in advance,
> > > > David,
> > > >
> > >
> > >
> >
>
