Hello Edward,

You can try increasing maxWait and minBytes (currently 100 ms and 1 byte
according to your logs) so that the consumer does not re-issue fetch requests
so frequently when there is little new data to return.
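
For example, in the consumer configuration that would look something like the
lines below (fetch.wait.max.ms and fetch.min.bytes are the 0.8.x high-level
consumer property names as I recall; the values are only illustrative, so tune
them to your latency requirements):

# Let the broker hold a fetch request for up to 500 ms ...
fetch.wait.max.ms=500
# ... or until at least 64 KB of data is available, whichever comes first.
fetch.min.bytes=65536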

Guozhang


On Thu, Aug 28, 2014 at 12:49 PM, Edward Capriolo <edlinuxg...@gmail.com>
wrote:

> At a certain hour I have seen a huge uptick in requests.
>
> log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
> log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
> log4j.appender.requestAppender.File=logs/kafka-request.log
> log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
> log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
>
>
> [2014-08-28 18:00:00,031] TRACE Completed request:Name: FetchRequest;
> Version: 0; CorrelationId: 10672153; ClientId: xxxx; ReplicaId: -1;
> MaxWait: 100 ms; MinBytes: 1 bytes; RequestInfo: [events,0] ->
> PartitionFetchInfo(41358968,1048576) from client
> /10.9.61.6:55168
> ;totalTime:100,queueTime:0,localTime:0,remoteTime:100,sendTime:0
> (kafka.request.logger)
>
> Everything seems normal: we have a topic with 100 partitions, a good flow of
> data, and 10 active producers. I am going to lower the log level, but does
> anyone think this chatter is indicative of a problem?
>
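
As for the log volume itself: the per-request TRACE lines come from the
kafka.request.logger shown in your output, so lowering its level in
log4j.properties should quiet them. A minimal sketch, assuming the default
logger/appender names from the shipped log4j.properties:

# Drop the per-request TRACE chatter but keep the dedicated appender for warnings.
log4j.logger.kafka.request.logger=WARN, requestAppender
log4j.additivity.kafka.request.logger=false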



-- 
-- Guozhang
