I also tried changing the broker id to a new id (5), but I still got many
similar warnings and no data was generated in log.dirs.
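
In case it matters, the broker id change was just the broker.id line in
server.properties, roughly like this (the log.dirs path is only a
placeholder for ours):

# config/server.properties on the replaced broker
broker.id=5                  # was 1 before the disk failure
log.dirs=/data/kafka-logs    # placeholder path; this directory stays empty
# zookeeper.connect is unchanged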

[2014-01-21 18:00:00,923] WARN [KafkaApi-5] Fetch request with correlation
id 1528423082 from client ReplicaFetcherThread-0-1 on partition [co,9]
failed due to Partition [co,9] doesn't exist on 5 (kafka.server.KafkaApis)

[2014-01-21 18:00:00,924] WARN [Replica Manager on Broker 5]: While
recording the follower position, the partition [co,4] hasn't been created,
skip updating leader HW (kafka.server.ReplicaManager)


2014/1/22 Xiao Bo <xiaob...@gmail.com>

> Thanks for the reply. Yes, the broker id is the same.
> We have tried just restarting the broker, but after one day there were
> still no partition files generated in log.dirs, and in the list-topic
> output I could not see broker 1 back. Is there any other way to add it
> back?
> As I said in my last mail, the topic seems to have only 2 replicas for
> all partitions on the alive brokers. What is the reason for that?
>
>
> 2014/1/22 Jun Rao <jun...@gmail.com>
>
>> If you keep the broker id the same, you don't need to use the
>> reassign-partitions tool to add the failed broker back to the cluster. Just
>> restart the broker and it should catch up with the leader. Once it's fully
>> caught up, you can run the rebalance leader tool to move some leaders back
>> to the failed broker.
>>
>> Thanks,
>>
>> Jun
>>
>>
>> On Tue, Jan 21, 2014 at 8:24 PM, Xiao Bo <xiaob...@gmail.com> wrote:
>>
>>> Hi guys and Jun,
>>>
>>> We have a problem adding a broken-down broker back to the cluster.
>>> I hope you have a solution for it.
>>>
>>> A cluster of 5 brokers (id=0~4) running Kafka 0.8.0 was being used for
>>> log aggregation. Because of disk issues, one broker (id=1) went down.
>>> It took us one week to replace the disk, so we no longer have any of
>>> the old data. When we add the broker back and use the
>>> kafka-reassign-partitions.sh tool to copy partition data from the
>>> up-to-date leaders, we get many warnings in server.log, as shown below,
>>> and no partition data files are generated in log.dirs.
>>>
>>>
>>> [2014-01-15 20:00:00,519] WARN [Replica Manager on Broker 1]: While
>>> recording the follower position, the partition [co,4] hasn't been created,
>>> skip updating leader HW (kafka.server.ReplicaManager)
>>>
>>> [2014-01-15 20:00:00,519] WARN [Replica Manager on Broker 1]: While
>>> recording the follower position, the partition [co,9] hasn't been created,
>>> skip updating leader HW (kafka.server.ReplicaManager)
>>>
>>> [2014-01-15 20:00:00,519] WARN [KafkaApi-1] Fetch request with
>>> correlation id 359929003 from client ReplicaFetcherThread-0-1 on partition
>>> [co,4] failed due to Partition [co,4] doesn't exist on 1
>>> (kafka.server.KafkaApis)
>>>
>>> [2014-01-15 20:00:00,519] WARN [KafkaApi-1] Fetch request with
>>> correlation id 359929003 from client ReplicaFetcherThread-0-1 on partition
>>> [co,9] failed due to Partition [co,9] doesn't exist on 1
>>> (kafka.server.KafkaApis)
>>>
>>> And there seems to be another issue in the list-topic output. We had
>>> previously configured 3 replicas for the topic, but now every partition
>>> has only 2 replicas, all on the alive brokers.
>>>
>>> topic: countinfo partition: 0 leader: 4 replicas: 3,4 isr: 4,3
>>>
>>> topic: countinfo partition: 1 leader: 0 replicas: 4,0 isr: 0,4
>>>
>>> topic: countinfo partition: 2 leader: 0 replicas: 0,2 isr: 0,2
>>>
>>> topic: countinfo partition: 3 leader: 2 replicas: 2,3 isr: 2,3
>>>
>>> topic: countinfo partition: 4 leader: 3 replicas: 3,0 isr: 3,0
>>>
>>> topic: countinfo partition: 5 leader: 4 replicas: 4,2 isr: 4,2
>>>
>>> topic: countinfo partition: 6 leader: 0 replicas: 0,3 isr: 0,3
>>>
>>> topic: countinfo partition: 7 leader: 4 replicas: 2,4 isr: 4,2
>>>
>>> topic: countinfo partition: 8 leader: 2 replicas: 3,2 isr: 2,3
>>>
>>> topic: countinfo partition: 9 leader: 3 replicas: 4,3 isr: 3,4
>>>
>>> So my question is: how can I add the broken-down broker back and bring
>>> the replica count back up to 3?
>>>
>>> Thanks in advance.
>>>
>>>
>>> --
>>> Best Wishes,
>>>
>>> Bo
>>>
>>
>>
>
>
> --
> Best Wishes,
>
> Bo
>
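
For reference, this is roughly how I understand the recovery steps Jun
suggests above (keep the original broker.id=1, restart, let it catch up,
then move the leaders back). The ZooKeeper address is a placeholder and the
exact option names should be checked against the 0.8.0 --help output:

# 1. restart the failed broker with its original id (broker.id=1)
bin/kafka-server-start.sh config/server.properties

# 2. once broker 1 shows up in the ISR again, move the leaders back to it
bin/kafka-preferred-replica-election.sh --zookeeper zk1:2181

# 3. verify the assignment for the topic
bin/kafka-list-topic.sh --zookeeper zk1:2181 --topic countinfo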

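And this is the kind of input I had in mind for kafka-reassign-partitions.sh
to get the replication factor back to 3 (only the first two partitions of
countinfo shown, with broker 1 added as the third replica). This is only a
sketch: the JSON layout and the option name are from memory and should be
double-checked against the 0.8.0 documentation:

# reassign.json -- illustrative, first two partitions only
cat > reassign.json <<'EOF'
{"partitions":
 [{"topic": "countinfo", "partition": 0, "replicas": [3,4,1]},
  {"topic": "countinfo", "partition": 1, "replicas": [4,0,1]}]}
EOF

bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 --path-to-json-file reassign.json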


-- 
Best Wishes,

Bo
