By the way, this is what I get when I describe the topic:
Topic:lead.indexer PartitionCount:53 ReplicationFactor:1 Configs:
Topic: lead.indexer Partition: 0 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 1 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 2 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 3 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 4 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 5 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 6 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 7 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 8 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 9 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 10 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 11 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 12 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 13 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 14 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 15 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 16 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 17 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 18 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 19 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 20 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 21 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 22 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 23 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 24 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 25 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 26 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 27 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 28 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 29 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 30 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 31 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 32 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 33 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 34 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 35 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 36 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 37 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 38 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 39 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 40 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 41 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 42 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 43 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 44 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 45 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 46 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 47 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 48 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 49 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 50 Leader: 2 Replicas: 2 Isr: 2
Topic: lead.indexer Partition: 51 Leader: 1 Replicas: 1 Isr: 1
Topic: lead.indexer Partition: 52 Leader: 2 Replicas: 2 Isr: 2
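
For reference, the listing above is the kind of output kafka-topics.sh --describe produces; the same per-partition leader/ISR view can also be fetched programmatically with the 0.8 SimpleConsumer metadata API. A minimal sketch, assuming a reachable broker at the hypothetical address broker1:9092 and an arbitrary client id:

    import java.util.Collections;

    import kafka.javaapi.PartitionMetadata;
    import kafka.javaapi.TopicMetadata;
    import kafka.javaapi.TopicMetadataRequest;
    import kafka.javaapi.TopicMetadataResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    public class DescribeLeadIndexer {
        public static void main(String[] args) {
            // Hypothetical broker host/port; timeout and buffer size are arbitrary.
            SimpleConsumer consumer =
                    new SimpleConsumer("broker1", 9092, 100000, 64 * 1024, "describe-lead.indexer");
            try {
                TopicMetadataRequest request =
                        new TopicMetadataRequest(Collections.singletonList("lead.indexer"));
                TopicMetadataResponse response = consumer.send(request);
                for (TopicMetadata topic : response.topicsMetadata()) {
                    for (PartitionMetadata p : topic.partitionsMetadata()) {
                        // leader() is null when no leader is currently available.
                        System.out.printf("partition %d leader %s errorCode %d%n",
                                p.partitionId(),
                                p.leader() == null ? "none" : p.leader().id(),
                                p.errorCode());
                    }
                }
            } finally {
                consumer.close();
            }
        }
    }

A partition whose leader is unavailable comes back with a null leader and a non-zero error code, which is the same condition the producer surfaces as LeaderNotAvailableException.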
-----Original Message-----
From: England, Michael
Sent: Wednesday, June 25, 2014 4:58 PM
To: [email protected]
Subject: RE: Failed to send messages after 3 tries
Ok, at WARN level I see the following:
2014-06-25 16:46:16 WARN kafka-consumer-sp_lead.index.processor1
kafka.producer.BrokerPartitionInfo - Error while fetching metadata
[{TopicMetadata for topic lead.indexer ->
No partition metadata for topic lead.indexer due to
kafka.common.LeaderNotAvailableException}] for topic [lead.indexer]: class
kafka.common.LeaderNotAvailableException
Any suggestions about how to address this? I see that there are some threads
about this in the mailing list archive. I'll start to look through them.
Thanks,
Mike
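
One common way to ride out a transient LeaderNotAvailableException with the 0.8 producer is to allow more retries and a longer backoff, so metadata can be re-fetched after a leader election settles. A minimal sketch of the relevant ProducerConfig properties, with hypothetical broker addresses and illustrative (not prescriptive) values:

    import java.util.Properties;

    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class LeadIndexerProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Hypothetical broker list; point this at the brokers hosting lead.indexer.
            props.put("metadata.broker.list", "broker1:9092,broker2:9092");
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            // Defaults: 3 retries with a 100 ms backoff. Wider values give a slow
            // leader election time to finish before the send is abandoned.
            props.put("message.send.max.retries", "10");
            props.put("retry.backoff.ms", "500");

            Producer<String, String> producer =
                    new Producer<String, String>(new ProducerConfig(props));
            try {
                producer.send(new KeyedMessage<String, String>("lead.indexer", "key", "value"));
            } finally {
                producer.close();
            }
        }
    }

The defaults are 3 retries (hence "Failed to send messages after 3 tries") with a 100 ms backoff; if no leader ever appears, raising these only delays the same failure, so the broker side still needs checking.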
-----Original Message-----
From: Neha Narkhede [mailto:[email protected]]
Sent: Wednesday, June 25, 2014 4:47 PM
To: [email protected]
Subject: Re: Failed to send messages after 3 tries
It should be at WARN.
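
For anyone following along: the 0.8 clients log through log4j, so the producer's verbosity is controlled by the application's log4j configuration rather than anything on the broker. A minimal log4j.properties sketch, assuming a root logger and appender are already defined; the logger names below match the classes that appear in the WARN line and stack trace elsewhere in this thread:

    # Keep Kafka client logging at WARN overall.
    log4j.logger.kafka=WARN
    # Turn up the metadata/retry path while chasing this particular failure.
    log4j.logger.kafka.producer.BrokerPartitionInfo=DEBUG
    log4j.logger.kafka.producer.async.DefaultEventHandler=DEBUG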
On Wed, Jun 25, 2014 at 3:42 PM, England, Michael <[email protected]>
wrote:
> Neha,
>
> I don’t see that error message in the logs. The error that I included in
> my original email is the only error that I see from Kafka.
>
> Do I need to change log levels to get the info that you need?
>
> Mike
>
> -----Original Message-----
> From: Neha Narkhede [mailto:[email protected]]
> Sent: Wednesday, June 25, 2014 4:31 PM
> To: [email protected]
> Subject: Re: Failed to send messages after 3 tries
>
> Could you provide information on why each retry failed? Look for an error
> message that says "Failed to send producer request".
>
>
> On Wed, Jun 25, 2014 at 2:18 PM, England, Michael <
> [email protected]>
> wrote:
>
> > Hi,
> >
> > I get the following error from my producer when sending a message:
> > Caused by: kafka.common.FailedToSendMessageException: Failed to send
> > messages after 3 tries.
> >     at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
> >     at kafka.producer.Producer.send(Producer.scala:76)
> >     at kafka.javaapi.producer.Producer.send(Producer.scala:42)
> >     at com.servicemagic.kafka.producer.KafkaProducerTemplate.send(KafkaProducerTemplate.java:37)
> >     ... 31 more
> >
> > The producer is running locally and the broker is on a different machine. I
> > can telnet to the broker, so it isn't a network issue. Also, I have other
> > producers that work fine using the same broker (but a different topic).
> >
> > I've checked the various logs on the broker, but I don't see anything
> > obvious in them. I'm not sure how to turn up the logging level, though, so
> > perhaps there would be useful info if I could do that.
> >
> > Can you give me some suggestions on how to troubleshoot this issue?
> >
> > Thanks,
> >
> > Mike
> >
>
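
A final note on the exception in the quoted trace: in 0.8.x the producer throws FailedToSendMessageException only after exhausting message.send.max.retries, and (as far as I can tell from the client source) it does not attach the per-attempt cause, which is why the underlying LeaderNotAvailableException only shows up in the producer's own log output. A minimal sketch of catching it around a send, with a hypothetical wrapper class name:

    import kafka.common.FailedToSendMessageException;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;

    public final class SendWithDiagnostics {
        private SendWithDiagnostics() {}

        // Sends one message; on failure, points at the client log for the per-try cause.
        public static void send(Producer<String, String> producer, String payload) {
            try {
                producer.send(new KeyedMessage<String, String>("lead.indexer", payload));
            } catch (FailedToSendMessageException e) {
                // Thrown after all retries; the individual failures (e.g. the
                // LeaderNotAvailableException seen while fetching metadata) are only
                // visible in the producer's WARN/ERROR log output, not on this exception.
                System.err.println("Send to lead.indexer failed after all retries: " + e.getMessage());
                throw e;
            }
        }
    }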