Sorry, in fact the test code in the gist does not exactly reproduce the problem
we're facing. I'm working on that.
2016-02-02 10:46 GMT+01:00 Han JU :
> Thanks Guozhang for the reply!
>
> So in fact, if it's the case you said, and if I understand correctly, then the
> messa
lose messages 50 to 100.
> >
> > Hence as a user of the consumer, one should only call "commit" if she is
> > certain that all messages returned from "poll()" have been processed.
> >
> > Guozhang
> >
> >
> > On Mon, Feb 1, 2016 at
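Guozhang's point about committing only after processing can be illustrated with a small, self-contained simulation. This is not Kafka code, just arithmetic over offsets; the numbers mirror the "lose messages 50 to 100" scenario quoted above:

```java
public class CommitSemanticsDemo {
    /**
     * The committed offset is the position a restarted consumer resumes from.
     * Messages with offsets in [processedBeforeCrash, committedOffset) were
     * committed but never processed, so they are lost after a crash.
     */
    static int lostMessages(int processedBeforeCrash, int committedOffset) {
        return Math.max(0, committedOffset - processedBeforeCrash);
    }

    public static void main(String[] args) {
        // poll() returned offsets 0..99; we committed offset 100 right away,
        // then crashed after processing only offsets 0..49: 50 messages lost.
        System.out.println(lostMessages(50, 100)); // prints 50
        // Committing only what was actually processed loses nothing.
        System.out.println(lostMessages(50, 50));  // prints 0
    }
}
```

The converse holds too: processing past the committed offset and then crashing produces duplicates rather than losses, which is why commit placement decides the delivery semantics.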
Hi,
One of our uses of Kafka requires tolerating arbitrary consumer crashes without
losing or duplicating messages. So in our code we manually commit offsets
after the consumer state has been successfully persisted.
While prototyping with Kafka 0.9's new consumer API, I found that in some
cases Kafka failed to sen
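The "commit only after the state is persisted" approach described above might look roughly like the following sketch against the 0.9 consumer API. The topic name, group id, and `persistState` helper are hypothetical stand-ins, and a real broker on localhost:9092 is assumed:

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommitAfterPersist {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    persistState(record); // hypothetical: durably save consumer state
                    // Commit only what has been persisted. The committed offset
                    // is the *next* offset to read, hence offset() + 1.
                    consumer.commitSync(Collections.singletonMap(
                            new TopicPartition(record.topic(), record.partition()),
                            new OffsetAndMetadata(record.offset() + 1)));
                }
            }
        }
    }

    static void persistState(ConsumerRecord<String, String> record) { /* ... */ }
}
```

Committing after every single record is expensive; in practice one would batch (persist a chunk of records, then commit once), but the per-record form makes the ordering requirement explicit.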
You need to commit offsets regularly. In the gist, offsets are only
> > > > committed on shutdown or when a rebalance occurs. When the group is
> > > stable,
> > > > no progress will be seen because there are no commits to update the
> > > > position.
> > > >
>
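Committing once per poll() iteration, as suggested above, instead of only on shutdown or rebalance might look like this fragment (a sketch; `running`, `consumer`, and `process` come from the surrounding code and are assumptions here):

```java
// Fragment (sketch): commit on every poll() iteration so the group's
// committed position advances steadily and progress is visible even
// while the group is stable.
while (running) {
    ConsumerRecords<String, String> records = consumer.poll(1000);
    for (ConsumerRecord<String, String> record : records) {
        process(record); // hypothetical per-record handler
    }
    consumer.commitAsync(); // non-blocking commit of this poll's positions
}
```

A common pattern is commitAsync() inside the loop for throughput, plus one final commitSync() in the shutdown path so the last positions are committed reliably.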
Issue created: https://issues.apache.org/jira/browse/KAFKA-3146
Hi Bruno,
Can you tell me a little bit more about that? A seek() in
`onPartitionsAssigned`?
Thanks.
2016-01-25 10:51 GMT+01:00 Han JU :
> Ok I'll create a JIRA issue on this.
>
> Thanks!
>
> 2016-01-23 21:47 GMT+01:00 Bruno Rassaerts :
>
>> +1 here
>>
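The seek() in `onPartitionsAssigned` mentioned above could look roughly like this sketch. The external offset store is hypothetical; the point is only that each newly assigned partition is rewound to a position we control rather than the Kafka-committed one:

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

// Sketch: on assignment, seek each partition to an offset taken from our
// own (hypothetical) offset store instead of the Kafka-committed one.
public class SeekOnAssign implements ConsumerRebalanceListener {
    private final KafkaConsumer<String, String> consumer;
    private final Map<TopicPartition, Long> offsetStore = new HashMap<>(); // hypothetical

    public SeekOnAssign(KafkaConsumer<String, String> consumer) {
        this.consumer = consumer;
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        for (TopicPartition tp : partitions) {
            Long offset = offsetStore.get(tp);
            if (offset != null) {
                consumer.seek(tp, offset); // resume exactly where we left off
            }
        }
    }

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // nothing to do in this sketch
    }
}
```

It would be registered at subscription time, e.g. `consumer.subscribe(Collections.singletonList("my-topic"), new SeekOnAssign(consumer));`.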
> > Could you please file an issue in JIRA so that we make sure this is
> > investigated?
> >
> > Ismael
> >
> >> On Fri, Jan 22, 2016 at 3:13 PM, Han JU wrote:
Hi,
I'm prototyping with the new consumer API of kafka 0.9 and I'm particularly
interested in the `ConsumerRebalanceListener`.
My test setup is as follows:
- 5M messages pre-loaded into a single-node Kafka 0.9
- 12 partitions, auto offset commit set to false
- in `onPartitionsRevoked`, com
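Committing current offsets from `onPartitionsRevoked`, as the setup above describes, might look like this fragment (a sketch; `consumer` is the surrounding KafkaConsumer and the topic name is a placeholder):

```java
// Fragment (sketch): commit synchronously before the partitions are
// handed to another consumer in the group.
consumer.subscribe(Collections.singletonList("test-topic"),
        new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                // The next owner of these partitions resumes from whatever
                // we commit here, so commit before the rebalance completes.
                consumer.commitSync();
            }

            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // nothing to do in this sketch
            }
        });
```

commitSync() rather than commitAsync() matters here: the commit must finish before the rebalance proceeds.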
kafka-consumer-groups.sh
> (kafka.admin.ConsumerGroupCommand).
>
> Guozhang
>
>
>
> On Wed, Dec 30, 2015 at 9:10 AM, Han JU wrote:
>
> > Hi Marko,
> >
> > Yes, we're currently using this on our production Kafka 0.8. But it does
> > not seem to work wit
Hi,
I'm trying to check the offset of a consumer group with the new consumer
API. But it seems that kafka-run-class cannot launch `ConsumerGroupCommand`.
bin/kafka-run-class.sh kafka.tools.ConsumerGroupCommand --zookeeper
localhost:2181 --group my-group
>> Error: Could not find or load main class kafka.tools.ConsumerGroupCommand
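As Guozhang's reply above indicates, in 0.9 the class lives in the kafka.admin package (not kafka.tools), and there is a wrapper script for it. Something along these lines should work (sketch; group name and ZooKeeper address are placeholders):

```shell
# Invoke the class directly from the admin package...
bin/kafka-run-class.sh kafka.admin.ConsumerGroupCommand \
    --zookeeper localhost:2181 --describe --group my-group

# ...or use the wrapper script shipped with 0.9:
bin/kafka-consumer-groups.sh \
    --zookeeper localhost:2181 --describe --group my-group
```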
> On Wed, Dec 30, 2015 at 12:54 PM, Han JU wrote:
>
> > Thanks guys. The `seek` approach seems like a solution. But it
> >
> > There doesn't seem to be a tool for committing offsets, only for
> > checking/fetching the current offset (see
> > http://kafka.apache.org/documentation.html#operations )
> >
> > On Tue, Dec 29, 2015 at 4:35 PM, Han JU wrote:
> >
> > > Hi Stevo,
umers all the messages.
2015-12-29 16:19 GMT+01:00 Stevo Slavić :
> Have you considered deleting and recreating topic used in test?
> Once the topic is clean, read/poll once - any committed offset should be
> outside of the range, and the consumer should reset its offset.
>
> On Tue, Dec 29,
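Stevo's delete-and-recreate suggestion would look something like this for a local test setup (sketch; the topic name and partition count are placeholders, and topic deletion must be enabled on the broker):

```shell
# Requires delete.topic.enable=true in the broker config.
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic test-topic
bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic test-topic \
    --partitions 12 --replication-factor 1
```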
Hello,
For local testing purposes I need to frequently reset the offset for a consumer
group. In 0.8 I just delete the consumer group's ZK node under
`/consumers`. But with the redesign in 0.9, how can I achieve the
same thing?
Thanks!
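Building on the `seek` idea raised later in the thread, one way to reset a group programmatically with the 0.9 consumer might be (a sketch; `consumer` is an already-configured KafkaConsumer for the group, and the topic name is a placeholder):

```java
// Sketch: rewind the group to the beginning and commit the rewound
// positions so the reset survives a restart (0.9 consumer API).
consumer.subscribe(Collections.singletonList("test-topic"));
consumer.poll(0); // join the group so partitions get assigned
for (TopicPartition tp : consumer.assignment()) {
    consumer.seekToBeginning(tp);
    consumer.position(tp); // force the lazy seek to resolve to a real offset
}
consumer.commitSync(); // persist the rewound positions for the group
```

Note that seekToBeginning is evaluated lazily, hence the position() call before committing; behavior details like this are worth verifying against the exact client version.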
--
*JU Han*
Software Engineer @ Teads.tv
+33 061960
> Thanks,
> Grant
>
> On Thu, Nov 12, 2015 at 7:52 AM, Han JU wrote:
Hello,
Just wanted to know: will the new consumer API coming with 0.9 be
compatible with 0.8 broker servers? We're looking at the new consumer
because the new rebalancing listener is very interesting for one of our use
cases.
Another question: if we have to upgrade our brokers to 0.9, will