I believe this is the ticket https://issues.apache.org/jira/browse/KAFKA-972

/*******************************************
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
********************************************/

On Tue, Sep 30, 2014 at 1:00 PM, Christofer Hedbrandh <christo...@knewton.com> wrote:

> Hi Kafka users,
>
> Was there ever a JIRA ticket filed for this?
>
> "Re: Stale TopicMetadata"
>
>
> http://mail-archives.apache.org/mod_mbox/kafka-users/201307.mbox/%3ce238b018f88c39429066fc8c4bfd0c2e019be...@esv4-mbx01.linkedin.biz%3E
>
> As far as I can tell this is still an issue in 0.8.1.1
>
> Using the python client (VERSION 0.2-alpha):
> client = KafkaClient(host, port)
> request_id = client._next_id()
> request = KafkaProtocol.encode_metadata_request(client.client_id, request_id,
> topic_names)
> response = client._send_broker_unaware_request(request_id, request)
> brokers, topics = KafkaProtocol.decode_metadata_response(response)
>
> the metadata returned tells me that only a subset of the replicas is in sync
>
> E.g.
> {'test-topic-1': {0: PartitionMetadata(topic='test-topic-1', partition=0,
> leader=2018497752, replicas=(2018497752, 915105820, 1417963519),
> isr=(2018497752,))}}
>
> but when I fetch metadata with the kafka-topics.sh --describe tool, it
> looks like all replicas are in sync.
>
> Topic:test-topic-1 PartitionCount:1 ReplicationFactor:3 Configs:retention.ms=604800000
> Topic: test-topic-1 Partition: 0 Leader: 2018497752 Replicas: 2018497752,915105820,1417963519 Isr: 2018497752,915105820,1417963519
>
> I looked around for a JIRA ticket for this but couldn't find one. Please
> let me know where this bug is tracked.
>
> Thanks,
> Christofer
>
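The ISR discrepancy described in the quoted message can be checked mechanically. Below is a minimal sketch that diffs the two ISR lists from the thread; the helper name is hypothetical and not part of any Kafka client API, and the broker ids are the ones reported above.

```python
def stale_isr(metadata_isr, describe_isr):
    """Return broker ids present in the --describe ISR but missing from
    the client metadata ISR -- a hint that the metadata response is stale."""
    return sorted(set(describe_isr) - set(metadata_isr))

# ISR as decoded from the metadata response in the thread above
metadata_isr = (2018497752,)
# ISR as reported by kafka-topics.sh --describe
describe_isr = (2018497752, 915105820, 1417963519)

print(stale_isr(metadata_isr, describe_isr))
# -> [915105820, 1417963519]
```

An empty result would mean both sources agree; here the two extra brokers are exactly the replicas the metadata response failed to report as in sync.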
