Hi Dima,
They'll stick around. I think the point is that when node 3 is back, you
should run a partition reassignment so that the pre-existing
partition/replica data can be reused.
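
For example, here's a rough, untested sketch of moving the replica back
onto brokers 1, 2 and 3 with the AdminClient (the bootstrap server, topic
name "X" and partition 0 are just placeholders for your setup), so the
data already on node 3 gets picked up again instead of being re-copied:

    import java.util.*;
    import org.apache.kafka.clients.admin.*;
    import org.apache.kafka.common.TopicPartition;

    public class ReassignBack {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // placeholder: point this at your cluster
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            try (Admin admin = Admin.create(props)) {
                // move partition 0 of topic "X" back onto brokers 1, 2, 3
                Map<TopicPartition, Optional<NewPartitionReassignment>> reassignment =
                    Collections.singletonMap(
                        new TopicPartition("X", 0),
                        Optional.of(new NewPartitionReassignment(Arrays.asList(1, 2, 3))));

                admin.alterPartitionReassignments(reassignment).all().get();
            }
        }
    }

You can do the same thing with the kafka-reassign-partitions.sh tool and a
JSON file if you prefer the CLI.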

As for your question of whether the unused log not being deleted is a bug
or working as designed... I think it is working as designed, and the
orphaned log should be removed manually. But that's just my thought.
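
If it helps, here is a small sketch (the log dir path and topic name are
placeholders; it only lists candidates, it doesn't delete anything) to
find the partition directories for a topic on a broker, so you can
double-check which ones are orphaned before cleaning them up by hand. I'd
only remove them while the broker is stopped.

    import java.io.IOException;
    import java.nio.file.*;

    public class FindOrphanedLogs {
        public static void main(String[] args) throws IOException {
            // placeholder: whatever log.dirs points to on broker 3
            Path logDir = Paths.get("/var/kafka-logs");
            String topic = "X";  // the topic whose replica was moved away

            // partition directories are named <topic>-<partition>;
            // note the glob also matches other topics starting with "X-",
            // so eyeball the output before deleting anything
            try (DirectoryStream<Path> dirs =
                     Files.newDirectoryStream(logDir, topic + "-*")) {
                for (Path dir : dirs) {
                    System.out.println("candidate for manual cleanup: " + dir);
                }
            }
        }
    }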

Thanks.
Luke

On Wed, Sep 16, 2020 at 10:48 AM Dima Brodsky
<dbrod...@salesforce.com.invalid> wrote:

> Hi,
>
> I have a question: when you start Kafka on a node, if there is a random
> replica log, should it delete it on startup?  Here is an example: Assume
> you have a 4 node cluster.  Topic X has 3 replicas and it is replicated on
> nodes 1, 2, and 3.  Now you shut down node 3 and place the replica that
> was on node 3 on node 4.  Then, once everything is in sync, you start up node
> 3 again.  What should happen to the replica X on node 3?  Should Kafka
> delete it, or will it stick around forever?
>
> Given the above scenario, we are seeing the replica stick around forever.
> Is this working as designed, or is this a bug?
>
> Thanks!
> ttyl
> Dima
>
> --
> dbrod...@salesforce.com
>
> "The price of reliability is the pursuit of the utmost simplicity.
> It is the price which the very rich find most hard to pay." (Sir Antony
> Hoare, 1980)
>
