Thanks Bill, the StateRestoreListener is exactly the tool needed for my use
case.
Patrik, thanks for the heads-up on that issue. I guess until it's fixed
that makes it even easier to wait until the cache is warmed :-).
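For anyone else with the same use case, here is a minimal sketch of such a listener (class and variable names are made up; only the StateRestoreListener callbacks and setGlobalStateRestoreListener are the real Kafka Streams API):

import java.util.concurrent.CountDownLatch;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.processor.StateRestoreListener;

// Lets the application block until state restoration has finished,
// i.e. until the local state store ("cache") is warm.
public class WarmupListener implements StateRestoreListener {
    private final CountDownLatch warmedUp = new CountDownLatch(1);

    @Override
    public void onRestoreStart(TopicPartition partition, String storeName,
                               long startingOffset, long endingOffset) {
        // restoration of one store/partition begins
    }

    @Override
    public void onBatchRestored(TopicPartition partition, String storeName,
                                long batchEndOffset, long numRestored) {
        // called after each restored batch; handy for progress logging
    }

    @Override
    public void onRestoreEnd(TopicPartition partition, String storeName,
                             long totalRestored) {
        // NOTE: fires once per store/partition; with several stores you would
        // track them individually instead of using a single latch
        warmedUp.countDown();
    }

    public void awaitWarmup() throws InterruptedException {
        warmedUp.await();
    }
}

// Wiring it up (topology and props construction elided):
//   KafkaStreams streams = new KafkaStreams(topology, props);
//   WarmupListener listener = new WarmupListener();
//   streams.setGlobalStateRestoreListener(listener);  // must be set before start()
//   streams.start();
//   listener.awaitWarmup();  // don't serve queries until restoration completes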
Chris
On Tue, Nov 13, 2018 at 10:40 PM Patrik Kleindl wrote:
> Hi Chris
>
>
EBS is one of the options. But we use instance-level storage, where we lose
all data as soon as a broker fails in AWS.
In that scenario, does anyone have a better launch script or configuration that
can be executed on a new broker to retain the old id without conflicting with
existing broker ids?
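Not an authoritative answer, but one rough sketch of such a launch script is to
derive a deterministic broker.id from the instance's private IP and write it
into server.properties before starting the broker. All paths and the id scheme
below are made up for illustration; this only retains the old id if the
replacement instance keeps the same IP, otherwise it just guarantees a
deterministic, non-conflicting id:

#!/bin/bash
# Illustrative only: derive a broker.id that is stable for a given private IP.
IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
# e.g. 10.0.12.34 -> 12*256 + 34 = 3106; raise reserved.broker.max.id in
# server.properties if the result exceeds its default of 1000
BROKER_ID=$(( $(echo "$IP" | cut -d. -f3) * 256 + $(echo "$IP" | cut -d. -f4) ))
# assumes server.properties already contains a broker.id= line
sed -i "s/^broker.id=.*/broker.id=${BROKER_ID}/" /etc/kafka/server.properties
exec /opt/kafka/bin/kafka-server-start.sh /etc/kafka/server.properties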
On Wed, Nov
Wow, that seems like an anti-pattern.
Replication itself should be enough to resurrect the cluster in case of
node failures. Technically, you shouldn't have to maintain the same broker
id. There must be something else going on with replication.
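(For completeness: if the replacement broker does come up with a new id, the
replicas that lived on the dead broker can be moved onto it with the partition
reassignment tool rather than by reusing the old id. Topic name and broker ids
below are placeholders.)

# topics.json lists the topics whose replicas should be redistributed, e.g.
# {"topics": [{"topic": "my-topic"}], "version": 1}

# 1. Generate a candidate assignment that includes the new broker id (1001)
bin/kafka-reassign-partitions.sh --zookeeper zk:2181 --generate \
  --topics-to-move-json-file topics.json --broker-list "1,2,1001"

# 2. Save the proposed assignment to reassignment.json, then execute and verify
bin/kafka-reassign-partitions.sh --zookeeper zk:2181 --execute \
  --reassignment-json-file reassignment.json
bin/kafka-reassign-partitions.sh --zookeeper zk:2181 --verify \
  --reassignment-json-file reassignment.json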
On Thu, Nov 15, 2018 at 12:28 AM Andrey Dyachkov wrote:
You can use an EBS volume, which will store the data and metadata (e.g. broker
id); when an instance fails, attach the volume to the new AWS instance and
start Kafka. It will pick up the broker id, plus you won’t need to rebalance
the cluster.
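To add a bit of detail on why that works (paths below are illustrative): the
broker persists its id, among other metadata, in a meta.properties file inside
each configured log directory, so as long as log.dirs points at the EBS mount,
the id travels with the volume:

# server.properties
log.dirs=/mnt/kafka-ebs/data          # log directory on the EBS volume
broker.id.generation.enable=true      # id is auto-generated on first start only

# /mnt/kafka-ebs/data/meta.properties, written by the broker on first start
# (auto-generated ids start above reserved.broker.max.id, 1000 by default)
version=0
broker.id=1001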
On Wed 14. Nov 2018 at 19:48, naresh Goud
wrote:
> Static IP. Buying a static IP may help. I am not an AWS expert.
Thanks Naresh for the quick response. But we don't want to make use of any
elastic IP in this case.
I found that we can manually get the broker-id using a script like the one
mentioned @ http://tech.gc.com/scaling-with-kafka/ while the instance is
getting launched.
Trying to find out if there is any other option.
Static IP. Buying a static IP may help. I am not an AWS expert.
On Wed, Nov 14, 2018 at 12:47 PM Srinivas Rapolu wrote:
> Hello Kafka experts,
>
> We are running Kafka on AWS; the main question is: what is the best way to
> retain broker.id on a new instance spun up in place of a failed
> instance/broker?
>
Hello Kafka experts,
We are running Kafka on AWS; the main question is: what is the best way to
retain broker.id on a new instance spun up in place of a failed instance/broker?
We are currently running Kafka in AWS with broker.id auto-generated.
But we are having issues when a broker fails and the new broker comes up with a
different broker.id.
Hi there!
We are running Kafka 0.11.0 with the 0.10.0 message format configured for a
topic.
The topic has 1 partition + 3 replicas, and unclean.leader.election.enable is
set to false.
We have reasons to believe that an old partition leader did not truncate
its dirty log tail before syncing with the new leader.
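In case it helps with the investigation, the usual way to compare replicas is
to check leadership/ISR and then dump the log segment tails on each broker;
topic name and paths below are placeholders:

# who is the leader and which replicas are in the ISR?
bin/kafka-topics.sh --zookeeper zk:2181 --describe --topic my-topic

# on each replica, inspect the tail of the partition's log segments on disk
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /var/kafka-logs/my-topic-0/00000000000000000000.log \
  --deep-iteration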
Hi,
I am looking to upgrade Kafka from 0.10.2 to version 2. The documentation on
the Kafka website says to edit the server.properties file and upgrade the code.
I understood the editing part of the server.properties file. But what is meant
by "upgrade the code" or "upgrade Kafka"? Is it just replacing the Kafka
binaries?
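For what it's worth, my understanding is that it is essentially replacing the
binaries plus a staged rolling restart. The documented pattern looks roughly
like this (version strings below assume the 0.10.2 starting point and a 2.0
target):

# server.properties, set before replacing the binaries
inter.broker.protocol.version=0.10.2
log.message.format.version=0.10.2

# 1. Install the new Kafka binaries and restart the brokers one at a time.
# 2. Once every broker runs the new code, set inter.broker.protocol.version=2.0
#    and do another rolling restart.
# 3. After clients are upgraded, raise log.message.format.version as well and
#    roll the brokers once more.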