[ https://issues.apache.org/jira/browse/KAFKA-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955689#comment-13955689 ]

Jun Rao commented on KAFKA-1352:
--------------------------------

Some examples of excessive logging.

2014/03/26 00:56:23.605 ERROR [KafkaApis] [kafka-request-handler-12] [kafka-server] [] [KafkaApi-512] Error while fetching metadata for partition [xxx,4]
kafka.common.ReplicaNotAvailableException
    at kafka.server.KafkaApis$$anonfun$17$$anonfun$20.apply(KafkaApis.scala:552)
    at kafka.server.KafkaApis$$anonfun$17$$anonfun$20.apply(KafkaApis.scala:537)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
    at scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:61)
    at scala.collection.immutable.List.foreach(List.scala:45)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
    at scala.collection.immutable.List.map(List.scala:45)
    at kafka.server.KafkaApis$$anonfun$17.apply(KafkaApis.scala:537)
    at kafka.server.KafkaApis$$anonfun$17.apply(KafkaApis.scala:533)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
    at scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:123)
    at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:322)
    at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:322)
    at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:322)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
    at scala.collection.immutable.HashSet.map(HashSet.scala:32)
    at kafka.server.KafkaApis.handleTopicMetadataRequest(KafkaApis.scala:533)

2014/03/26 07:42:01.845 ERROR [ReplicaFetcherThread] [ReplicaFetcherThread-4-516] [kafka-server] [] [ReplicaFetcherThread-4-516], Error in fetch Name: FetchRequest; Version: 0; CorrelationId: 842728; ClientId: ReplicaFetcherThread-4-516; ReplicaId: 512; MaxWait: 0 ms; MinBytes: 1 bytes; RequestInfo: [ xxx ]
java.net.SocketTimeoutException
        at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:229)
        at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
        at java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
        at kafka.utils.Utils$.read(Utils.scala:375)
        at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
        at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
        at kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
        at kafka.network.BlockingChannel.receive(BlockingChannel.scala:100)
        at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:81)
        at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:71)
        at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:109)
        at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:109)
        at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:109)
        at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
        at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:108)
        at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:108)
        at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:108)
        at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
        at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:107)

In both cases, at the very least, the stack trace adds no useful information.
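
One possible direction, sketched below only under assumptions (the enclosing object, the fetchPartitionMetadata helper, and the message wording are hypothetical, not the actual KafkaApis code; ReplicaNotAvailableException and the warn/error helpers from kafka.utils.Logging do exist in the code base), is to log expected, transient conditions as a single line without the stack trace and keep the full trace only for genuinely unexpected errors:

    import kafka.common.ReplicaNotAvailableException
    import kafka.utils.Logging

    // Sketch only: QuietMetadataLookup and fetchPartitionMetadata are
    // hypothetical; warn/error come from the broker's Logging trait.
    object QuietMetadataLookup extends Logging {
      def partitionMetadataOrLog(topic: String, partition: Int): Unit = {
        try {
          fetchPartitionMetadata(topic, partition) // hypothetical metadata lookup
        } catch {
          case _: ReplicaNotAvailableException =>
            // Routine while a replica is catching up; the stack trace adds nothing.
            warn("Replica not available for partition [%s,%d]".format(topic, partition))
          case e: Throwable =>
            // Genuinely unexpected: keep the full stack trace at ERROR.
            error("Error while fetching metadata for partition [%s,%d]".format(topic, partition), e)
        }
      }

      // Hypothetical stand-in for the real lookup in KafkaApis.
      private def fetchPartitionMetadata(topic: String, partition: Int): Unit = ()
    }

The SocketTimeoutException in the replica fetcher could get the same treatment: a one-line WARN naming the source broker, with no trace, since the fetcher retries on its own.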

> Reduce logging on the server
> ----------------------------
>
>                 Key: KAFKA-1352
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1352
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 0.8.0, 0.8.1
>            Reporter: Neha Narkhede
>              Labels: newbie, usability
>
> We have excessive logging in the server, making the logs unreadable and also 
> affecting the server's performance in practice. We need to clean up the 
> logging to address these issues.


