You can use the DumpLogSegments tool to see if a log segment is indeed
corrupted.
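
A minimal sketch of an invocation (the log directory and segment file name below are placeholders; adjust them to your broker's log.dirs setting):

```shell
# Dump a log segment and recompute each message's checksum;
# corrupt messages are flagged in the output.
# The path below is an example only.
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /var/kafka-logs/test-0/00000000000000000000.log \
  --deep-iteration --print-data-log
```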

Thanks,

Jun

On Mon, Jan 12, 2015 at 2:04 PM, Bhavesh Mistry <mistry.p.bhav...@gmail.com>
wrote:

> Hi Kafka Team,
>
> I am trying to understand Kafka internals and how a message can be corrupted
> or lost on the broker side.
>
> I have referred to the following documentation for monitoring:
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Internals
> http://kafka.apache.org/documentation.html#monitoring
>
> I am looking at the following beans:
>
>
> "kafka.server":type="BrokerTopicMetrics",name="test-FailedProduceRequestsPerSec"
> "kafka.server":type="BrokerTopicMetrics",name="test-BytesRejectedPerSec"
>
>
> I see the following exception on the broker side, rejecting a request
> because it is too large. This is great, but it does not show the source IP
> of the producer that caused the issue. Is there any way to log and capture
> this?
>
> *[2014-10-14 22:09:53,262] ERROR [KafkaApi-2] Error processing
> ProducerRequest with correlation id 28795280 from client XXXXXon partition
> [XXX,17]
> (kafka.server.KafkaApis)kafka.common.MessageSizeTooLargeException: Message
> size is 2924038 bytes which exceeds the maximum configured message size of
> 2097152.  *
>
> Could this be reported as a separate metric, e.g.
> "MessageSizeTooLargeException per topic"?
>
> Also, what is the best way to find CRC check errors on the consumer side?
> How do you debug these?
>
> e.g. log line:
>
> 11 Dec 2014 07:22:33,387 ERROR [pool-15-thread-4]
> kafka.message.InvalidMessageException: Message is corrupt (stored crc =
> 1834644195, computed crc = 2374999037)
>
> Also, is there a JIRA open to update
> http://kafka.apache.org/documentation.html#monitoring with a list of all
> the latest metrics, their formats, and what they mean? Please see the
> attached image for a list of all metrics.
>
> The broker version is 0.8.1.1.
>
> Thanks,
>
> Bhavesh
>
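
On the CRC question quoted above: Kafka stores a CRC32 over each message's contents, and the consumer recomputes it on fetch; a mismatch raises InvalidMessageException like the one in the log line. A minimal illustration of that check (this is not Kafka's exact wire layout, and `message_crc` is a hypothetical helper, not a Kafka API):

```python
import zlib

def message_crc(payload: bytes) -> int:
    # CRC32 of the message bytes, masked to an unsigned 32-bit value,
    # mirroring the "stored crc" vs "computed crc" comparison the
    # consumer performs on each fetched message.
    return zlib.crc32(payload) & 0xFFFFFFFF

stored = message_crc(b"hello")
assert stored == message_crc(b"hello")   # intact message: CRCs match
assert stored != message_crc(b"hellx")   # one corrupted byte: mismatch
```

If the recomputed CRC differs from the stored one, the message was corrupted somewhere between the producer serializing it and the consumer reading it, which is why DumpLogSegments on the broker's segment files helps narrow down where the corruption happened.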
