What is the process for deleting the consumer group from ZooKeeper? Should
I export offsets, delete, and then import?
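
For example, would it be enough to just remove the group's node once the
consumers are stopped, something like this (just a guess on my side; the
group name and ZooKeeper address are placeholders):

bin/zookeeper-shell.sh localhost:2181 rmr /consumers/my-group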

Thanks,
Akhilesh

On Fri, Dec 18, 2015 at 11:32 PM, Todd Palino <tpal...@gmail.com> wrote:

> Yes, that’s right. It’s just work for no real gain :)
>
> -Todd
>
> On Fri, Dec 18, 2015 at 9:38 AM, Marko Bonaći <marko.bon...@sematext.com>
> wrote:
>
> > Hmm, I guess you're right, Todd :)
> > Just to confirm, you meant that, while you're changing the exported file,
> > it might happen that one of the segment files becomes eligible for cleanup
> > by retention, which would then make the imported offsets out of range?
> >
> > Marko Bonaći
> > Monitoring | Alerting | Anomaly Detection | Centralized Log Management
> > Solr & Elasticsearch Support
> > Sematext <http://sematext.com/> | Contact
> > <http://sematext.com/about/contact.html>
> >
> > On Fri, Dec 18, 2015 at 6:29 PM, Todd Palino <tpal...@gmail.com> wrote:
> >
> > > That works if you want to set to an arbitrary offset, Marko. However, in
> > > the case the OP described, wanting to reset to smallest, it is better to
> > > just delete the consumer group and start the consumer with
> > > auto.offset.reset set to smallest. The reason is that while you can pull
> > > the current smallest offsets from the brokers and set them in ZooKeeper
> > > for the consumer, by the time you do that the smallest offset is likely
> > > no longer valid. This means you’re going to resort to the offset reset
> > > logic anyway.
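> > >
> > > For the case above that would look something like this (a rough sketch,
> > > not tested here; the property names are for the old ZooKeeper-based
> > > consumer, so double-check them for your client version). With the
> > > consumers stopped, remove the group node from ZooKeeper, then restart
> > > them with:
> > >
> > > # consumer.properties for the old high-level consumer
> > > zookeeper.connect=localhost:2181
> > > group.id=my-group
> > > auto.offset.reset=smallest
> > >
> > > With no committed offsets left in ZooKeeper, the consumer falls back to
> > > auto.offset.reset and starts from the smallest available offset.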
> > >
> > > -Todd
> > >
> > >
> > > On Fri, Dec 18, 2015 at 7:10 AM, Marko Bonaći <marko.bon...@sematext.com>
> > > wrote:
> > >
> > > > You can also do this:
> > > > 1. stop consumers
> > > > 2. export offsets from ZK
> > > > 3. make changes to the exported file
> > > > 4. import offsets to ZK
> > > > 5. start consumers
> > > >
> > > > e.g.
> > > > bin/kafka-run-class.sh kafka.tools.ExportZkOffsets --group group-name
> > > > --output-file /tmp/zk-offsets --zkconnect localhost:2181
> > > > bin/kafka-run-class.sh kafka.tools.ImportZkOffsets --input-file
> > > > /tmp/zk-offsets --zkconnect localhost:2181
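> > > >
> > > > The exported file has one line per partition, along the lines of (from
> > > > memory, so verify against your own export):
> > > >
> > > > /consumers/group-name/offsets/topic-name/0:123456
> > > > /consumers/group-name/offsets/topic-name/1:123789
> > > >
> > > > i.e. the full ZooKeeper path, a colon and the committed offset, so step
> > > > 3 is just editing the number after the colon on each line.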
> > > >
> > > > Marko Bonaći
> > > > Monitoring | Alerting | Anomaly Detection | Centralized Log Management
> > > > Solr & Elasticsearch Support
> > > > Sematext <http://sematext.com/> | Contact
> > > > <http://sematext.com/about/contact.html>
> > > >
> > > > On Fri, Dec 18, 2015 at 4:06 PM, Jens Rantil <jens.ran...@tink.se>
> > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I noticed that a consumer in the new consumer API supports setting
> > > > > the offset for a partition to the beginning. I assume doing so would
> > > > > also update the offset in ZooKeeper eventually.
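> > > > >
> > > > > Something along these lines, if I remember the 0.9 Java API correctly
> > > > > (untested sketch; broker address, topic, group and partition are
> > > > > placeholders):
> > > > >
> > > > > import java.util.Arrays;
> > > > > import java.util.Properties;
> > > > > import org.apache.kafka.clients.consumer.KafkaConsumer;
> > > > > import org.apache.kafka.common.TopicPartition;
> > > > >
> > > > > public class RewindToBeginning {
> > > > >     public static void main(String[] args) {
> > > > >         Properties props = new Properties();
> > > > >         props.put("bootstrap.servers", "localhost:9092");
> > > > >         props.put("group.id", "my-group");
> > > > >         props.put("key.deserializer",
> > > > >             "org.apache.kafka.common.serialization.StringDeserializer");
> > > > >         props.put("value.deserializer",
> > > > >             "org.apache.kafka.common.serialization.StringDeserializer");
> > > > >
> > > > >         KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
> > > > >         TopicPartition tp = new TopicPartition("my-topic", 0);
> > > > >         consumer.assign(Arrays.asList(tp));
> > > > >         // rewind to the earliest offset; takes effect on the next poll
> > > > >         consumer.seekToBeginning(tp);
> > > > >         consumer.poll(100);
> > > > >         consumer.close();
> > > > >     }
> > > > > }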
> > > > >
> > > > > Cheers,
> > > > > Jens
> > > > >
> > > > > On Friday, December 18, 2015, Akhilesh Pathodia
> > > > > <pathodia.akhil...@gmail.com> wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I want to reset the Kafka offset in ZooKeeper so that the consumer
> > > > > > will start reading messages from the first offset. I am using Flume
> > > > > > as a consumer to Kafka. I have set the Kafka property
> > > > > > kafka.auto.offset.reset to "smallest", but it does not reset the
> > > > > > offset in ZooKeeper, and that's why Flume will not read messages
> > > > > > from the first offset.
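> > > > > >
> > > > > > For reference, the Kafka source in my Flume agent is configured
> > > > > > roughly like this, showing only the Kafka-related properties (agent
> > > > > > and source names are placeholders, and the exact property names may
> > > > > > differ by Flume version):
> > > > > >
> > > > > > agent.sources.kafka-source.type = org.apache.flume.source.kafka.KafkaSource
> > > > > > agent.sources.kafka-source.zookeeperConnect = localhost:2181
> > > > > > agent.sources.kafka-source.topic = my-topic
> > > > > > agent.sources.kafka-source.groupId = flume
> > > > > > agent.sources.kafka-source.kafka.auto.offset.reset = smallest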
> > > > > >
> > > > > > Is there any way to reset the Kafka offset in ZooKeeper?
> > > > > >
> > > > > > Thanks,
> > > > > > Akhilesh
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Jens Rantil
> > > > > Backend engineer
> > > > > Tink AB
> > > > >
> > > > > Email: jens.ran...@tink.se
> > > > > Phone: +46 708 84 18 32
> > > > > Web: www.tink.se
> > > > >
> > > > > Facebook <https://www.facebook.com/#!/tink.se> Linkedin
> > > > > <http://www.linkedin.com/company/2735919?trk=vsrp_companies_res_photo&trkInfo=VSRPsearchId%3A1057023381369207406670%2CVSRPtargetId%3A2735919%2CVSRPcmpt%3Aprimary>
> > > > > Twitter <https://twitter.com/tink>
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > *—-*
> > > *Todd Palino*
> > > Staff Site Reliability Engineer
> > > Data Infrastructure Streaming
> > >
> > >
> > >
> > > linkedin.com/in/toddpalino
> > >
> >
>
>
>
> --
> *—-*
> *Todd Palino*
> Staff Site Reliability Engineer
> Data Infrastructure Streaming
>
>
>
> linkedin.com/in/toddpalino
>
