Hi,
Just wondering - is there any reason why rebalance.max.retries is 4 by
default? Is there any good reason why I shouldn't expect my consumers to
keep trying to rebalance for minutes (e.g. 30 retries every 6 seconds),
rather than seconds (4 retries every 2 seconds by default)?
Also, if my consu
Hi Alex, as Jun said it depends on your use case.
Folks are doing everything under the sun with Kafka nowadays (which
is awesome).
You can find Kafka with Accumulo, Akka, Camel, Camus, Cassandra, Druid,
LogStash, Metrics, Samza, Spark, Storm and a whole lot more.
There is also *a lot
Dear Kafka Users,
We are using Kafka 0.8.0 in our application development. To keep message
delivery reliable we want to detect any failure while sending messages. To
get high throughput we are using the async producer.
As of the Kafka 0.8.0 async producer implementation, failure to send a message
is logg
I am confused as to exactly what you are trying to accomplish. Are you
trying to evenly distribute partitions across your newly added nodes?
-Clark
Clark Elliott Haskins III
LinkedIn DDS Site Reliability Engineer
Kafka, Zookeeper, Samza SRE
Mobile: 505.385.1484
BlueJeans: https://www.bluejeans.co
Thank you for pointing out the JIRA.
On 7/9/14, 6:38 PM, "Jun Rao" wrote:
>This is being worked on in
>https://issues.apache.org/jira/browse/KAFKA-1325
>
>Thanks,
>
>Jun
>
>
>On Wed, Jul 9, 2014 at 11:42 AM, Virendra Pratap Singh <
>vpsi...@yahoo-inc.com.invalid> wrote:
>
>> Well currently the lo
Yes, exactly. I am just trying to rebalance partitions to the new node.
I just want to balance them evenly across the whole cluster.
Am I doing something wrong?
On Thu, Jul 10, 2014 at 9:58 AM, Clark Haskins <
chask...@linkedin.com.invalid> wrote:
> I am confused as to exactly what you are trying to accomplish
Hi Guozhang,
Just a follow-up question: can one SimpleConsumer instance be shared by
multiple threads to fetch data from multiple topics and commit offsets?
It seems all the implementations are synchronized, which suggests it can be
shared between multiple threads. Is my understanding correct?
Hi Jun,
That was the problem. It was actually the Ubuntu upstart job overwriting
the limit. Thank you very much for your help.
Paul Lung
On 7/9/14, 1:58 PM, "Jun Rao" wrote:
>Is it possible your container wrapper somehow overrides the file handler
>limit?
>
>Thanks,
>
>Jun
>
>
>On Wed, Jul 9,
Has anyone actually used log.retention.minutes in Kafka 0.8.1.1?
I have my full cluster running on 0.8.1.1 and the log data is just not
getting cleaned up.
And I see this message in kafka server.log
...
[2014-07-10 20:49:43,786] WARN Property log.retention.minutes is not valid
(kafka.utils.Verif
Hi Michal,
The rebalance will only be triggered on consumer membership or
topic/partition changes. Once triggered, it will try to finish the rebalance
at most rebalance.max.retries times, i.e. if it fails it will wait for
rebalance.backoff.ms and then try again until the number of retries is
exhausted.
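The retry behavior described above is controlled by two settings in the high-level consumer's properties. As a sketch only, a fragment like the following would stretch the rebalance window from seconds to minutes, as the original question asks about; the values here are illustrative, not recommendations:

```properties
# Hypothetical consumer.properties fragment (Kafka 0.8.x high-level consumer).
# With these illustrative values the consumer keeps retrying for roughly
# three minutes (30 attempts, backing off 6 seconds between attempts)
# instead of the default 4 attempts with a short backoff.
rebalance.max.retries=30
rebalance.backoff.ms=6000
```

Whether a longer window is appropriate depends, as noted in the thread, on the use case: a longer window tolerates slow membership churn, at the cost of delaying the point where a genuinely stuck consumer gives up.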
Hello Prashant,
You can use the producer failure sensors to do the monitoring.
http://kafka.apache.org/documentation.html#monitoring
Guozhang
On Thu, Jul 10, 2014 at 6:11 AM, Prashant Prakash
wrote:
> Dear Kafka Users,
>
> We are using Kafka 0.8.0 in our application development. To keep m
Yes it can be shared.
Guozhang
On Thu, Jul 10, 2014 at 11:12 AM, Weide Zhang wrote:
> Hi Guozhang,
>
> Just a follow-up question: can one SimpleConsumer instance be shared by
> multiple threads to fetch data from multiple topics and commit offsets?
>
> It seems all the implementations are
What you are doing should work, although you might find better results by
creating the partition assignment JSON file on your own rather than using
the move-topics-to-brokers JSON file to generate it. Doing this yourself
will allow you to optimize the number of partition moves you are making
and ho
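For reference, a hand-written assignment file of the kind suggested above is just JSON in the format accepted by the reassignment tool; the topic name, partition numbers, and broker ids below are made up for illustration:

```json
{
  "version": 1,
  "partitions": [
    {"topic": "my-topic", "partition": 0, "replicas": [1, 4]},
    {"topic": "my-topic", "partition": 1, "replicas": [2, 5]}
  ]
}
```

Writing this file by hand lets you place each partition exactly once, rather than accepting whatever shuffle the generated plan produces, which is how the number of moves gets minimized.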
Thanks for your answer.
Indeed, I have already worked on this kind of script. I ended up with 800
lines of Groovy script that rebalances partitions across the cluster while
minimizing the number of partition moves. I also worked on the partition
leadership balancing.
I still have to work on my scri
No worries, it's working. My deployment was flawed and was running
0.8.0, while I was under the impression that it was running 0.8.1.1.
After going to proper 0.8.1.1, the parameter started working properly.
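Given the resolution above (the property is only recognized from 0.8.1 onward, and a 0.8.0 broker logs the "not valid" warning instead), the working setup would be a broker config fragment along these lines; the retention value is made up:

```properties
# Hypothetical server.properties fragment (Kafka 0.8.1.1 or later).
# On 0.8.0 this property is rejected with
# "WARN Property log.retention.minutes is not valid" and the broker
# falls back to the hour-granularity setting (log.retention.hours).
log.retention.minutes=30
```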
On 7/10/14, 2:18 PM, "Virendra Pratap Singh" wrote:
>Has someone really used : log.retenti
Hi Guozhang,
Monitoring through the JMX mbean would be an indirect way to detect producer
failure.
In our requirement we want to send messages in a pre-defined sequence. At
no point do we want any message out of order at the consumer.
In case of failure we replay the entire sequence. We dedupe messages at