Most of these are available via JMX, and others can be read from ZooKeeper. I'm not sure why/how you would monitor "messages being deleted by the broker". In general, monitoring via JMX is preferable to scraping logs.
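For example, here is a minimal sketch of polling a broker metric over JMX with the standard javax.management client. It assumes the broker was started with JMX enabled (e.g. JMX_PORT=9999); the MBean name below is only an illustration, since the exact object and attribute names differ between Kafka versions, so verify them with jconsole against your broker first.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BrokerMetricsPoller {
    public static void main(String[] args) throws Exception {
        // Assumes the broker was started with JMX enabled, e.g.
        //   JMX_PORT=9999 bin/kafka-server-start.sh config/server.properties
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbean = connector.getMBeanServerConnection();

            // Example MBean; the exact object/attribute names depend on the
            // Kafka version, so check them with jconsole first.
            ObjectName messagesIn = new ObjectName(
                    "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec");
            Object oneMinRate = mbean.getAttribute(messagesIn, "OneMinuteRate");

            // Forward the value to your centralized logging framework here.
            System.out.println("MessagesInPerSec (1-min rate): " + oneMinRate);
        } finally {
            connector.close();
        }
    }
}

Live brokers and topic registrations are also visible in ZooKeeper under /brokers/ids and /brokers/topics, if you prefer to poll those directly.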
Thanks,
Neha

On Mon, Jun 23, 2014 at 11:51 PM, ravi singh <rrs120...@gmail.com> wrote:
> Primarily we want to log the data below (although this is not an exhaustive
> list):
>
> + any error/exception during Kafka start/stop
> + any error/exception while the broker is running
> + broker state changes like leader re-election, broker going down
> + current live brokers
> + new topic creation
> + when messages are deleted by the broker after the specified limit
> + broker health: memory usage
>
> Regards,
> Ravi
>
> On Tue, Jun 24, 2014 at 11:11 AM, Neha Narkhede <neha.narkh...@gmail.com> wrote:
> >
> > What kind of broker metrics are you trying to push to this centralized
> > logging framework?
> >
> > Thanks,
> > Neha
> >
> > On Jun 23, 2014 8:51 PM, "ravi singh" <rrs120...@gmail.com> wrote:
> > >
> > > Thanks Guozhang/Neha for the replies.
> > > Here's my use case:
> > >
> > > We use proprietary application logging in our apps. We are planning to
> > > use Kafka brokers in production, but apart from the logs that are already
> > > logged using log4j in Kafka, we want to log the broker stats using our
> > > centralized application logging framework.
> > >
> > > Simply put, I want to write an application that starts when the Kafka
> > > broker starts, reads the broker state and metrics, and pushes them to the
> > > centralized logging servers.
> > >
> > > In ActiveMQ we have a plugin for our proprietary logging. We intercept
> > > broker operations and install the plugin into the interceptor chain of
> > > the broker.
> > >
> > > Regards,
> > > Ravi
> > >
> > > On Mon, Jun 23, 2014 at 9:29 PM, Neha Narkhede <neha.narkh...@gmail.com> wrote:
> > > >
> > > > Ravi,
> > > >
> > > > Our goal is to provide the best implementation of a set of useful
> > > > abstractions and features in Kafka. The motivation behind this
> > > > philosophy is performance and simplicity at the cost of flexibility.
> > > > In most cases, we can argue that the loss in flexibility is minimal,
> > > > since you can always get that functionality by modeling your
> > > > application differently, especially if the system supports high
> > > > performance. ActiveMQ has to support the JMS protocol and hence
> > > > provides all sorts of hooks and plugins on the brokers at the cost
> > > > of performance.
> > > >
> > > > Could you elaborate more on your use case? There is probably another
> > > > way to model your application using Kafka.
> > > >
> > > > Thanks,
> > > > Neha
> > > >
> > > > On Sat, Jun 21, 2014 at 9:24 AM, ravi singh <rrs120...@gmail.com> wrote:
> > > > >
> > > > > How do I intercept Kafka broker operations so that features such as
> > > > > security, logging, etc. can be implemented as a pluggable filter?
> > > > > For example, we have a "BrokerFilter" class in ActiveMQ; is there
> > > > > anything similar in Kafka?
> > > > >
> > > > > --
> > > > > *Regards,*
> > > > > *Ravi*
> > >
> > > --
> > > *Regards,*
> > > *Ravi*
>
> --
> *Regards,*
> *Ravi*