Thank you Sachin, this solution is very helpful.

~Srinivas




On Tue, 4 Aug, 2020, 21:31 Sachin Mittal, <sjmit...@gmail.com> wrote:

> I think if you just log to the console, Kubernetes will manage the log
> rotation for you.
> https://kubernetes.io/docs/concepts/cluster-administration/logging/
> You can use the "kubectl logs" command to fetch the logs, or use a logging
> agent to ship the logs to a central location.
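>
> For example, a minimal log4j.properties along these lines (just a sketch,
> reusing the pattern layout from your config below and dropping the file
> appenders) sends everything to the console:
>
> # Sketch: route all broker logging to stdout only, so node-level container
> # log rotation applies (kubelet settings containerLogMaxSize and
> # containerLogMaxFiles, where you control the kubelet config).
> log4j.rootLogger=INFO, stdout
>
> log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
>
> # Keep the logger levels you already use; with additivity left at its
> # default (true) they all flow to stdout.
> log4j.logger.org.apache.zookeeper=INFO
> log4j.logger.kafka=INFO
> log4j.logger.org.apache.kafka=INFO
>
> You can then read the logs with something like
> "kubectl logs <kafka-pod> --tail=1000" (or stream them with -f), and an
> agent such as Fluent Bit or Filebeat can pick them up from the node.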
>
> On Tue, Aug 4, 2020 at 8:15 PM Srinivas Seema <srinivassee...@gmail.com>
> wrote:
>
> > Hi All,
> >
> > We have a Kafka cluster deployed in Kubernetes, running with the Docker
> > image solsson/kafka:2.4.0.
> >
> > I have the logging configuration below in config/log4j.properties:
> >
> > # Unspecified loggers and loggers with additivity=true output to server.log and stdout
> > # Note that INFO only applies to unspecified loggers, the log level of the child logger is used otherwise
> > log4j.rootLogger=INFO, stdout, kafkaAppender
> >
> > log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> > log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> > log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
> >
> > log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
> > log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
> > log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
> > log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
> > log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
> >
> > log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
> > log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
> > log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
> > log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
> > log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
> >
> > log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
> > log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
> > log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
> > log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
> > log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
> >
> > log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
> > log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
> > log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
> > log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
> > log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
> >
> > log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
> > log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
> > log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
> > log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
> > log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
> >
> > log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
> > log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
> > log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
> > log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
> > log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
> >
> > # Change the line below to adjust ZK client logging
> > log4j.logger.org.apache.zookeeper=INFO
> >
> > # Change the two lines below to adjust the general broker logging level (output to server.log and stdout)
> > log4j.logger.kafka=INFO
> > log4j.logger.org.apache.kafka=INFO
> >
> > # Change to DEBUG or TRACE to enable request logging
> > log4j.logger.kafka.request.logger=WARN, requestAppender
> > log4j.additivity.kafka.request.logger=false
> >
> > # Uncomment the lines below and change log4j.logger.kafka.network.RequestChannel$ to TRACE for additional output
> > # related to the handling of requests
> > #log4j.logger.kafka.network.Processor=TRACE, requestAppender
> > #log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender
> > #log4j.additivity.kafka.server.KafkaApis=false
> > log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
> > log4j.additivity.kafka.network.RequestChannel$=false
> >
> > log4j.logger.kafka.controller=TRACE, controllerAppender
> > log4j.additivity.kafka.controller=false
> >
> > log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
> > log4j.additivity.kafka.log.LogCleaner=false
> >
> > log4j.logger.state.change.logger=TRACE, stateChangeAppender
> > log4j.additivity.state.change.logger=false
> >
> > # Access denials are logged at INFO level, change to DEBUG to also log allowed accesses
> > log4j.logger.kafka.authorizer.logger=INFO, authorizerAppender
> > log4j.additivity.kafka.authorizer.logger=false
> >
> > Problem Statement:
> > Logs are piling up and consuming more and more disk, and the pods are
> > going into the Evicted state.
> >
> > I would like to know the best approach to rotate and archive the logs
> > (per Kafka component, with a configured size limit, e.g. 10 MB), so that
> > I can analyze the logs and save disk space.
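> >
> > For example, something along these lines per appender would match what I
> > have in mind (shown only for the server log appender; the 10 MB limit and
> > backup count are just examples, assuming log4j 1.x RollingFileAppender is
> > an acceptable replacement for DailyRollingFileAppender):
> >
> > # Example only: size-based rotation, keeping 10 files of at most 10 MB.
> > log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
> > log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
> > log4j.appender.kafkaAppender.MaxFileSize=10MB
> > log4j.appender.kafkaAppender.MaxBackupIndex=10
> > log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
> > log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
> >
> > (As far as I know, plain RollingFileAppender does not compress the rotated
> > files, so archiving would presumably still need an external tool.)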
> >
> > ~Srinivas
> >
>
