Hi,

I have a question: when you start Kafka on a node and there is a stray
replica log on disk, should Kafka delete it on startup?  Here is an
example: assume you have a 4-node cluster.  Topic X has 3 replicas and is
replicated on nodes 1, 2, and 3.  Now you shut down node 3 and reassign
the replica that was on node 3 to node 4.  Once everything is back in
sync, you start node 3 again.  What should happen to the replica of X on
node 3?  Should Kafka delete it, or will it stick around forever?
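
For context, the move described above was done with a partition
reassignment along these lines (a minimal sketch of the standard
reassignment JSON; the topic name and broker IDs match the example, and
partition 0 is assumed for illustration):

```json
{
  "version": 1,
  "partitions": [
    { "topic": "X", "partition": 0, "replicas": [1, 2, 4] }
  ]
}
```

This file would be passed to the kafka-reassign-partitions tool with
--execute, which is what moved the replica off node 3 and onto node 4.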

Given the above scenario, we are seeing the replica stick around forever.
Is this working as designed, or is it a bug?

Thanks!
ttyl
Dima

-- 
dbrod...@salesforce.com

"The price of reliability is the pursuit of the utmost simplicity.
It is the price which the very rich find most hard to pay." (Sir Antony
Hoare, 1980)
