Greetings!
I have a problem with Kafka. I had a cluster of 3 brokers running version 0.8.1. I
have a very important topic with raw events that had the config
retention.ms={365 days in ms}.
It all worked fine; data was not being deleted.
But now I have upgraded all brokers to 0.8.2 and suddenly the brokers delete
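
(For reference, the value abbreviated above as {365 days in ms} works out to
31,536,000,000. A minimal sketch of the computation; the class name is purely
illustrative:

import java.util.concurrent.TimeUnit;

public class RetentionMs {
    public static void main(String[] args) {
        // 365 days expressed in milliseconds, the unit retention.ms expects
        long retentionMs = TimeUnit.DAYS.toMillis(365);
        System.out.println("retention.ms=" + retentionMs); // prints retention.ms=31536000000
    }
}
)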
Hi,
I use the following snippet to try to fetch the offset I have committed from a
SimpleConsumer (the commit has created the topic __consumer_offsets):
fetchRequest = new OffsetFetchRequest(
    clientGroup, // What's the client group for simple consumer?
    partitions,
    (short) 1 /* version
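
For what it's worth, here is a minimal sketch of the full call as I understand
the 0.8.2 javaapi (SimpleConsumer.fetchOffsets with kafka.javaapi.OffsetFetchRequest).
The broker host, topic, group id, and client id are placeholders; the "client group"
is simply the consumer group id the offsets were committed under:

import java.util.Arrays;
import java.util.List;

import kafka.common.OffsetMetadataAndError;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetFetchRequest;
import kafka.javaapi.OffsetFetchResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class OffsetFetchExample {
    public static void main(String[] args) {
        // Placeholder broker address and client id.
        SimpleConsumer consumer =
                new SimpleConsumer("broker-host", 9092, 100000, 64 * 1024, "offset-fetch-client");

        // The "client group" is the consumer group id under which the offsets were
        // committed; it must match the group used in the OffsetCommitRequest.
        String clientGroup = "my-consumer-group";
        TopicAndPartition partition = new TopicAndPartition("my-topic", 0);
        List<TopicAndPartition> partitions = Arrays.asList(partition);

        OffsetFetchRequest fetchRequest = new OffsetFetchRequest(
                clientGroup,
                partitions,
                (short) 1,  // version 1 fetches from Kafka (__consumer_offsets), 0 from ZooKeeper
                0,          // correlation id
                "offset-fetch-client");

        OffsetFetchResponse fetchResponse = consumer.fetchOffsets(fetchRequest);
        OffsetMetadataAndError result = fetchResponse.offsets().get(partition);
        System.out.println("committed offset = " + result.offset()
                + ", error code = " + result.error());
        consumer.close();
    }
}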
Sent :)
From: Gwen Shapira [gshap...@cloudera.com]
Sent: Friday, June 26, 2015 11:53 AM
To: users@kafka.apache.org
Cc: d...@kafka.apache.org
Subject: Re: Help Us Nominate Apache Kafka for a 2015 Bossie (Best of OSS)
Award - Due June 30th
Sent! Thanks for l
The logic you're requesting is basically what the new producer implements.
The first condition is the batch size limit and the second is linger.ms.
The actual logic is a bit more complicated and has some caveats dealing
with, for example, backing off after failures, but you can see in this code
ht
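
To make that concrete, here is a minimal sketch of configuring those two knobs on
the new producer (org.apache.kafka.clients.producer); the broker address and topic
name are placeholders:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BatchingProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker-host:9092"); // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // A per-partition batch is sent once it reaches batch.size bytes, or once
        // linger.ms elapses with the batch still open, whichever comes first.
        props.put("batch.size", "16384"); // bytes
        props.put("linger.ms", "50");     // milliseconds

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        producer.close();
    }
}

With linger.ms left at its default of 0, batches go out as soon as the sender is
ready, so raising it trades a little latency for larger batches.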
*bump*
On Tue, Jun 23, 2015 at 1:03 PM, Achanta Vamsi Subhash <
achanta.va...@flipkart.com> wrote:
> Hi,
>
> We are using the 0.8.2.1 batch producer and are seeing very bad
> latencies for the topics. We now have ~40K partitions in a 20-node cluster.
>
> - We have many topics and each with
That is so cool. Thank you
On Sun, 28 Jun 2015 at 04:29 Guozhang Wang wrote:
> Tao, I have added you to the contributor list of Kafka so you can assign
> tickets to yourself now.
>
> I will review the patch soon.
>
> Guozhang
>
> On Thu, Jun 25, 2015 at 2:54 AM, tao xiao wrote:
>
> > Patch upda