I have a system that gathers logs from several clients.
The client logs are produced with a key specifying which client they come from.
That way, processing the logs is guaranteed to be ordered by time per user.
Getting metrics per topic is good and all, but getting metrics per key
allows you to analyze how each individual client is behaving.
Hi Iirop,
Regarding your question:
> If not, what is the best way to know how many messages with a specific key
> got inside a topic?
Why do you require this information? Is this a business metric, or are you
looking at it from an operational point of view?
I'm not aware of any built-in metric that will give you the per-key message count.
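Since the brokers don't track this, one option is to compute the count yourself
by consuming the topic and tallying messages per key. A minimal sketch, assuming
a hypothetical topic "client-logs" and a broker at localhost:9092 (both placeholders):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PerKeyCounter {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder broker address; adjust for your cluster.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "per-key-counter");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            Map<String, Long> countsPerKey = new HashMap<>();
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("client-logs"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        // One message tallied per key, i.e. per client.
                        countsPerKey.merge(record.key(), 1L, Long::sum);
                    }
                    countsPerKey.forEach((k, v) -> System.out.println(k + " -> " + v));
                }
            }
        }
    }

A Kafka Streams application doing groupByKey().count() would give you the same
tally as a continuously updated state store, which is easier to run as a
long-lived service.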
Great! Thanks a lot, Daniel!
I will try it.
Best Regards
Adrien
From: Daniel Hanley
Sent: Monday, 9 July 2018 16:23:36
To: users@kafka.apache.org
Subject: Re: Monitoring Kafka
Hi Adrien
You could take some ideas from: https://github.com/framiere/monitoring-demo
Alternatively, Confluent provides a very powerful Control Center for
monitoring and managing Kafka
(disclaimer: I work for Confluent!)
Best Regards
Dan
On Mon, Jul 9, 2018 at 2:12 AM, Adrien Ruffie wrote:
Thanks, guys, for the pointers!
On Sat, Apr 21, 2018 at 9:53 PM, Steve Jang wrote:
The following tool is really good:
https://github.com/yahoo/kafka-manager
On Sat, Apr 21, 2018 at 5:42 AM, Joris Meijer wrote:
You can do this without exposing the JMX port, e.g. by using a Prometheus
exporter as a javaagent (https://github.com/prometheus/jmx_exporter).
Metrics reporters, such as the one from Confluent, also don't require you to
open ports, because metrics will be pushed out of the broker (
https://docs.conf
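In case it helps, a minimal sketch of the javaagent setup; the jar path, config
path, and port 7071 below are placeholders, not values from this thread:

    # Catch-all jmx_exporter config: expose every MBean with default naming.
    cat > /opt/kafka-jmx.yaml <<'EOF'
    rules:
      - pattern: ".*"
    EOF

    # Attach the exporter agent to the broker JVM before starting it.
    export KAFKA_OPTS="-javaagent:/opt/jmx_prometheus_javaagent.jar=7071:/opt/kafka-jmx.yaml"
    bin/kafka-server-start.sh config/server.properties

The agent then serves the broker's JMX metrics over HTTP (port 7071 here) for
Prometheus to scrape, so no remote JMX port needs to be opened.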
Without JMX it may be difficult. Why not install an agent and report to an
external service like ELK or New Relic?
That's a long-standing industry pattern.
Some reading, and some tools in the readings: these articles are opinionated
towards the vendors that published them, but it's a starting point.
Hi Marius,
I'm curious what you've found.
One way to go about it is to have alerts (those that look for health status
- in SPM we call them Heartbeat alerts) hooked up to a webhook, where a
webhook is basically your custom HTTP endpoint (or something like Nagios).
This should let you integrate health alerting into your own tooling.
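To make that concrete, here is a minimal sketch of such a custom endpoint; the
port 8088 and the /kafka-alert path are arbitrary placeholders you would register
as the webhook URL in the alerting tool:

    import com.sun.net.httpserver.HttpServer;
    import java.io.InputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class AlertWebhook {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8088), 0);
            server.createContext("/kafka-alert", exchange -> {
                try (InputStream body = exchange.getRequestBody()) {
                    String payload = new String(body.readAllBytes(), StandardCharsets.UTF_8);
                    // Parse the alert payload here and page someone, restart the
                    // consumer, open a ticket, etc. The sketch just logs it.
                    System.out.println("Received alert: " + payload);
                }
                exchange.sendResponseHeaders(204, -1); // acknowledge with no body
                exchange.close();
            });
            server.start();
            System.out.println("Webhook listening on http://localhost:8088/kafka-alert");
        }
    }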
Hello Otis,
Thank you for your reply, and sorry for not being very explicit. For this
particular case, the failed application was on the consumer side; however,
monitoring the producer in the same way would be desired as well. I had a
look into SPM. It looks good; however, I'm still trying to find a way to check
Hi,
By "Kafka client" I assume you mean your Kafka producers and/or consumers?
If so, any decent Kafka monitoring solution should let you monitor that.
See https://sematext.com/spm/integrations/kafka-monitoring/ for an example.
Otis
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr
On Wed, Jun 29, 2016 at 9:44 AM, Sumit Arora wrote:
> Hello,
>
> We are currently building our data pipeline using Confluent, and as part of
> this implementation we have written a couple of Kafka Connect sink
> connectors for Azure and MS SQL Server. To provide some more context, we
> are planning
The problem was that the metric names had all changed in the latest
version. Fixing the names seems to have done it.
On Thu, Aug 13, 2015 at 3:13 PM, Rajiv Kurian wrote:
Aah, that seems like a red herring; it seems the underlying cause is that
the MBeans I was trying to poll (through our metrics system) are no longer
present. We use collectd's JMX plugin to get metrics from Kafka, and here is
what I see:
GenericJMXConfMBean: No MBean matched the ObjectName
"kafka.server"
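If you want to see exactly which MBeans the broker actually registers after an
upgrade (and therefore which names collectd should be configured with), you can
query JMX directly. A minimal sketch, assuming remote JMX is enabled on port
9999 (a placeholder, e.g. by setting JMX_PORT when starting the broker):

    import java.util.Set;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class ListKafkaMBeans {
        public static void main(String[] args) throws Exception {
            // Placeholder host and port for the broker's remote JMX endpoint.
            JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                // List every MBean in the kafka.server domain so the names can be
                // compared against the collectd GenericJMX configuration.
                Set<ObjectName> names = mbsc.queryNames(new ObjectName("kafka.server:*"), null);
                names.forEach(System.out::println);
            }
        }
    }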