Hi Jun,

Thanks for your prompt answer. The producer yields those errors right from the
start, so I don't think the topic metadata refresh interval has anything to do with it.
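
For reference, if the refresh interval does turn out to matter, I assume
topic.metadata.refresh.interval.ms is the property to lower in the producer
config, e.g. something like this in producer.properties (going by the 0.8
configuration page you linked):

topic.metadata.refresh.interval.ms=60000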

The problem is that one of the brokers isn't the leader of any partition
assigned to it, and because the topics were created with a replication factor
of 1, the producer never connects to that broker at all. What I don't
understand is why that broker doesn't take over leadership of those partitions.
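
Is the right move here to run the preferred replica election tool to push
leadership back onto that broker? I was thinking of something along these
lines, based on the scripts in my 0.8 distribution, so the exact name/flags
may be slightly off:

bin/kafka-preferred-replica-election.sh --zookeeper localhost:2181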

I deleted all the topics and recreated them with a replication factor of two
(rough creation commands below):

topic: A    partition: 0    leader: 1    replicas: 1,0    isr: 1
topic: A    partition: 1    leader: 0    replicas: 0,1    isr: 0,1
topic: B    partition: 0    leader: 0    replicas: 0,1    isr: 0,1
topic: B    partition: 1    leader: 1    replicas: 1,0    isr: 1
topic: C    partition: 0    leader: 1    replicas: 1,0    isr: 1
topic: C    partition: 1    leader: 0    replicas: 0,1    isr: 0,1
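
For completeness, I recreated them with roughly the following (quoting from
memory of the 0.8 admin scripts, so the exact flags may be slightly off):

bin/kafka-create-topic.sh --zookeeper localhost:2181 --topic A --partition 2 --replica 2
bin/kafka-create-topic.sh --zookeeper localhost:2181 --topic B --partition 2 --replica 2
bin/kafka-create-topic.sh --zookeeper localhost:2181 --topic C --partition 2 --replica 2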


Now the producer doesn't yield errors. However, one of the brokers (broker 0)
generates lots of lines like this:

[2013-06-12 16:19:41,805] WARN [KafkaApi-0] Produce request with
correlation id 404999 from client  on partition [B,0] failed due to
Partition [B,0] doesn't exist on 0 (kafka.server.KafkaApis)

There should be a replica of [B,0] on broker 0, so I don't know why it
complains about those requests.
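
If it helps, I can also pull the metadata for that topic on its own, something
like

bin/kafka-list-topic.sh --zookeeper localhost:2181 --topic B

assuming the --topic option works the way I think it does in 0.8.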

Have you ever seen anything like this?



On 12 June 2013 16:27, Jun Rao <jun...@gmail.com> wrote:

> If the leaders exist on both brokers, the producer should be able to
> connect to both of them, assuming you don't provide any key when sending
> the data. Could you try restarting the producer? If there have been broker
> failures, it may take topic.metadata.refresh.interval.ms for the producer
> to pick up the newly available partitions (see
> http://kafka.apache.org/08/configuration.html for details).
>
> Thanks,
>
> Jun
>
>
> On Wed, Jun 12, 2013 at 8:01 AM, Alexandre Rodrigues <
> alexan...@blismedia.com> wrote:
>
> > Hi,
> >
> > I have a Kafka 0.8 cluster with two nodes connected to three ZKs, with the
> > same configuration except for the brokerId (one is 0 and the other 1). I
> > created three topics A, B and C with 4 partitions each and a replication
> > factor of 1. My idea was to have 2 partitions per topic on each broker.
> > However, when I connect a producer, I can't get both brokers to take
> > writes at the same time, and I don't know what's going on.
> >
> > My server.config has the following entries:
> >
> > auto.create.topics.enable=true
> > num.partitions=2
> >
> >
> > When I run bin/kafka-list-topic.sh --zookeeper localhost:2181, I get the
> > following partition leader assignments:
> >
> > topic: A    partition: 0    leader: 1    replicas: 1    isr: 1
> > topic: A    partition: 1    leader: 0    replicas: 0    isr: 0
> > topic: A    partition: 2    leader: 1    replicas: 1    isr: 1
> > topic: A    partition: 3    leader: 0    replicas: 0    isr: 0
> > topic: B    partition: 0    leader: 0    replicas: 0    isr: 0
> > topic: B    partition: 1    leader: 1    replicas: 1    isr: 1
> > topic: B    partition: 2    leader: 0    replicas: 0    isr: 0
> > topic: B    partition: 3    leader: 1    replicas: 1    isr: 1
> > topic: C    partition: 0    leader: 0    replicas: 0    isr: 0
> > topic: C    partition: 1    leader: 1    replicas: 1    isr: 1
> > topic: C    partition: 2    leader: 0    replicas: 0    isr: 0
> > topic: C    partition: 3    leader: 1    replicas: 1    isr: 1
> >
> >
> > I've forced reassignment using the kafka-reassign-partitions tool with the
> > following JSON:
> >
> > {"partitions":  [
> >    {"topic": "A", "partition": 1, "replicas": [0] },
> >    {"topic": "A", "partition": 3, "replicas": [0] },
> >    {"topic": "A", "partition": 0, "replicas": [1] },
> >    {"topic": "A", "partition": 2, "replicas": [1] },
> >    {"topic": "B", "partition": 1, "replicas": [0] },
> >    {"topic": "B", "partition": 3, "replicas": [0] },
> >    {"topic": "B", "partition": 0, "replicas": [1] },
> >    {"topic": "B", "partition": 2, "replicas": [1] },
> >    {"topic": "C", "partition": 0, "replicas": [0] },
> >    {"topic": "C", "partition": 1, "replicas": [1] },
> >    {"topic": "C", "partition": 2, "replicas": [0] },
> >    {"topic": "C", "partition": 3, "replicas": [1] }
> > ]}
> >
> > After the reassignment, I restarted the producer and nothing changed. I also
> > tried restarting both brokers and the producer, and still nothing.
> >
> > The producer log contains lines like these:
> >
> > [2013-06-12 14:48:46,467] WARN Error while fetching metadata    partition 0
> >     leader: none    replicas:       isr:    isUnderReplicated: false for
> > topic partition [C,0]: [class kafka.common.LeaderNotAvailableException]
> > (kafka.producer.BrokerPartitionInfo)
> > [2013-06-12 14:48:46,467] WARN Error while fetching metadata    partition 0
> >     leader: none    replicas:       isr:    isUnderReplicated: false for
> > topic partition [C,0]: [class kafka.common.LeaderNotAvailableException]
> > (kafka.producer.BrokerPartitionInfo)
> > [2013-06-12 14:48:46,468] WARN Error while fetching metadata    partition 2
> >     leader: none    replicas:       isr:    isUnderReplicated: false for
> > topic partition [C,2]: [class kafka.common.LeaderNotAvailableException]
> > (kafka.producer.BrokerPartitionInfo)
> > [2013-06-12 14:48:46,468] WARN Error while fetching metadata    partition 2
> >     leader: none    replicas:       isr:    isUnderReplicated: false for
> > topic partition [C,2]: [class kafka.common.LeaderNotAvailableException]
> > (kafka.producer.BrokerPartitionInfo)
> >
> >
> > And sometimes lines like this:
> >
> > [2013-06-12 14:55:37,339] WARN Error while fetching metadata
> > [{TopicMetadata for topic B ->
> > No partition metadata for topic B due to
> > kafka.common.LeaderNotAvailableException}] for topic [B]: class
> > kafka.common.LeaderNotAvailableException
> >  (kafka.producer.BrokerPartitionInfo)
> >
> >
> > Do you guys have any idea what's going on?
> >
> > Thanks in advance,
> > Alex
> >
>
