The next time the auto commit succeeds it should be fine.
Michael
> On 6 Feb 2017, at 12:38, Jon Yeargers wrote:
>
> This message seems to come and go for various consumers:
>
> WARN o.a.k.c.c.i.ConsumerCoordinator - Auto offset commit failed for
> group : Commit offsets failed with retriabl
If the topic has not seen traffic for a while, Kafka will remove the stored
offset. When your consumer reconnects, Kafka no longer has the offset, so it
will reprocess from earliest.
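If it helps, the broker setting that controls how long committed offsets are kept is offsets.retention.minutes, and the consumer side looks roughly like the untested sketch below (the broker address, group id and topic name are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ResetExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "my-group");                // placeholder group id
        // What the consumer does when Kafka has no stored offset for the group:
        // "earliest" replays from the start of the log, "latest" skips to the end.
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
        consumer.poll(1000);
        consumer.close();
    }
}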
Michael
> On 12 Jan 2017, at 11:13, Mahendra Kariya wrote:
>
> Hey All,
>
> We have a Kafka cluster hosted
Thanks for sharing Radek, great article.
Michael
> On 17 Sep 2016, at 21:13, Radoslaw Gruchalski wrote:
>
> Please read this article:
> https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying
>
> –
> Best regards,
> Radek
Did you try props.put("group.id", "test");
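Something like the untested sketch below (broker address and topic name are placeholders), run as several separate processes with the same group.id, should split the topic's partitions between the instances. Note the topic needs more than one partition for more than one consumer to receive messages.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "test");                    // same value in every instance
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic

        // Run several copies of this process with the same group.id:
        // each copy is assigned a subset of the topic's partitions.
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition=%d offset=%d value=%s%n",
                        record.partition(), record.offset(), record.value());
            }
        }
    }
}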
On Thu, Sep 15, 2016 at 12:55 AM, Joyce Chen wrote:
> Hi,
>
> I created a few consumers that belong to the same group_id, but I noticed
> that each consumer get all messages instead of only some of the messages.
>
> As for the topic, I did create the t
You can use seek.
https://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#seek(org.apache.kafka.common.TopicPartition,%20long)
Or, if you want to re-read the entire log, use a different consumer group
name and set auto.offset.reset=earliest
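A rough, untested sketch of the seek approach (broker address, topic and partition are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SeekExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "replay-group");            // placeholder group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

        // Take manual control of one partition and jump back to offset 0.
        TopicPartition tp = new TopicPartition("my-topic", 0); // placeholder topic/partition
        consumer.assign(Collections.singletonList(tp));
        consumer.seek(tp, 0L);

        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
        }
        consumer.close();
    }
}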
On Tue, Aug 30, 2016
It might be easier to handle duplicate messages than to handle long
periods of time without messages.
Michael
> On 22 Aug 2016, at 15:55, Misra, Rahul wrote:
>
> Hi,
>
> Can anybody provide any guidance on the following:
>
> 1. Given a limited set of groups and consumers, will increasin
For future reference, the following server setting is needed:
offsets.topic.replication.factor=3
Michael
> On 14 Jul 2016, at 10:56, Michael Freeman wrote:
>
> Anyone have any ideas? Looks like the group coordinator is not failing over.
> Or at least not detected by the Java consumer.
I'm running a three broker cluster.
Do I need to have offsets.topic.replication.factor=3 set in order for
coordinator failover to occur?
Michael
Anyone have any ideas? Looks like the group coordinator is not failing over. Or
at least not detected by the Java consumer.
A new leader is elected so I'm at a loss.
Michael
> On 13 Jul 2016, at 20:58, Michael Freeman wrote:
>
> Hi,
> I'm running a Kafka cluster wi
>
>
> -Original Message-
> From: Michael Freeman [mailto:mikfree...@gmail.com]
> Sent: Wednesday, July 13, 2016 3:36 PM
> To: users@kafka.apache.org
> Subject: Re: Role of Producer
>
> Could you write them a client that uses the Kafka producer?
> You could als
Could you write them a client that uses the Kafka producer?
You could also write some RESTful services that send the data to Kafka.
If they use MQ, you could listen to MQ and send to Kafka.
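A minimal, untested producer sketch (broker address and topic name are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker list
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);
        // Send one record; the key (here null) determines the partition.
        producer.send(new ProducerRecord<String, String>("my-topic", null, "hello from the producer"));
        producer.flush();
        producer.close();
    }
}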
On Wed, Jul 13, 2016 at 9:31 PM, Luo, Chao wrote:
> Dear Kafka guys,
>
> I just started to build up a Kaf
Hi,
I'm running a Kafka cluster with 3 nodes.
I have a topic with a replication factor of 3.
When I stop node 1, running kafka-topics.sh shows me that nodes 2 and 3 have
successfully failed over the partitions for the topic.
The message producers are still sending messages and I can still consum
> Tom Crayford
> Heroku Kafka
>
> On Wednesday, 4 May 2016, Michael Freeman wrote:
>
> > Hey Tom,
> > Are there any details on the negative side effects of
> > increasing the offset retention period? I'd like to increase it but want
> to
Hey Tom,
Are there any details on the negative side effects of
increasing the offset retention period? I'd like to increase it but want to be
aware of the risks.
Thanks
Michael
> On 4 May 2016, at 05:06, Tom Crayford wrote:
>
> Jun,
>
> Yep, you got it. If there are no offs
Hi,
I'm using the 0.9.0.1 consumer with 'earliest' offset reset.
After cleanly shutting down the consumers and restarting, I see reconsumption of
some old messages.
The offset of the reconsumed messages is 0.
If I'm committing cleanly and shutting down cleanly, why is the committed offset
lo
Was wondering the same. From what I can tell, it shows "unknown" when no
committed offset has been recorded for that partition by the consumer.
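You can check this by committing once from the group: after something like the untested sketch below (broker, group and topic names are placeholders, and a real application would poll in a loop), the describe output should show a committed offset instead of unknown.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CommitOnce {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "my-group");                // the group you are describing
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic

        ConsumerRecords<String, String> records = consumer.poll(5000);
        consumer.commitSync(); // records a committed offset for the assigned partitions
        consumer.close();
    }
}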
On Mon, Mar 28, 2016 at 12:25 PM, craig w wrote:
> When using the ConsumerGroupCommand to describe a group (using
> new-consumer, 0.9.0.1) why does "unknown" sho
committed offset is still what you expect)
> b) otherwise, abort the background processing thread.
>
> Would that work for your case? It's also worth mentioning that there's a
> proposal to add a sticky partition assignor to Kafka, which would make 5.b
> less li
necessary.
>
> On Thu, Mar 10, 2016 at 1:40 AM, Michael Freeman
> wrote:
>
>> Thanks Christian,
>> We would want to retry indefinitely. Or at
>> least for say x minutes. If we don't poll how do we keep the heart beat
>> aliv
Perfect, thanks for your help.
Michael
> On 10 Mar 2016, at 16:29, tao xiao wrote:
>
> You need to change group.max.session.timeout.ms in broker to be larger than
> what you have in consumer.
>
>> On Fri, 11 Mar 2016 at 00:24 Michael Freeman wrote:
>>
>>
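For anyone hitting the same range check, the pairing looks roughly like this (example values only, not recommendations; the broker ceiling just has to be at least the consumer's session.timeout.ms):

# broker (server.properties)
group.max.session.timeout.ms=150000

# consumer
session.timeout.ms=120000
request.timeout.ms=144000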
Hi,
I'm trying to set the following on a 0.9.0.1 consumer.
session.timeout.ms=12
request.timeout.ms=144000
I get the below error but I can't find any documentation on acceptable ranges.
"The session timeout is not within an acceptable range." Logged by
AbstractCoordinator
Any ideas on
do you? Can you just retry
> and/or backoff-retry with the message you have? And just do the "commit" of
> the offset if successfully?
>
>
>
> On Wed, Mar 9, 2016 at 2:00 PM, Michael Freeman
> wrote:
>
>> Hey,
>> My team is new to Kafka and
Hey,
My team is new to Kafka and we are using the examples found at.
http://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0.9-consumer-client
We process messages from Kafka and persist them to Mongo.
If Mongo is unavailable we are wondering how we can re-consume
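A rough, untested sketch of one way to do this: turn off auto commit, commit only after the whole polled batch has been written, and rewind on failure so the next poll re-delivers it. persistToMongo and the broker, group and topic names are placeholders. This is at-least-once, so duplicates are possible after a retry, and the backoff should stay well under the session timeout because the 0.9 client only sends heartbeats from poll().

import java.util.Collections;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class MongoSink {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "mongo-sink");              // placeholder group id
        props.put("enable.auto.commit", "false");         // commit only after a successful write
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            try {
                for (ConsumerRecord<String, String> record : records) {
                    persistToMongo(record.value()); // placeholder for the real Mongo write
                }
                consumer.commitSync(); // offsets are committed only once the batch is stored
            } catch (Exception e) {
                // Mongo is unavailable: rewind each partition to the start of this batch
                // so the next poll re-delivers it, then back off briefly. Keep the backoff
                // well under session.timeout.ms so the group does not rebalance.
                for (TopicPartition tp : records.partitions()) {
                    List<ConsumerRecord<String, String>> batch = records.records(tp);
                    consumer.seek(tp, batch.get(0).offset());
                }
                try {
                    Thread.sleep(2000);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }

    private static void persistToMongo(String value) {
        // placeholder: a real implementation would write the record to MongoDB here
    }
}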