I have a system which gathers logs from several clients.
The client logs are produced with a key specifying which client they come from.
That way, processing the logs is guaranteed to be ordered by time per user.
Getting metrics per topic is good and all, but getting metrics per key
lets you analyze the per-key/partition distribution,
especially from the broker side.
-- Pere
On Fri, Feb 24, 2023 at 4:46 PM lirop kaykov wrote:
Hi, I have a question about monitoring kafka.
When using jmx exporter and kafka exporter, you get key metrics like topic
metrics (bytes in per second, messages in per second etc).
Is there a way to get bytes/messages per second per message key?
Meaning, getting the distribution of messages by key
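The broker only exposes metrics at the topic/partition level, so a per-key breakdown has to be computed from the messages themselves. Below is a rough, untested sketch of sampling one topic from the consumer side; the bootstrap address, group id, topic name, and the one-minute window are all placeholders, not anything from this thread.

import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PerKeyRates {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");   // placeholder
        props.put("group.id", "per-key-metrics-sampler"); // placeholder
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        Map<String, Long> messagesPerKey = new HashMap<>();
        Map<String, Long> bytesPerKey = new HashMap<>();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("client-logs")); // placeholder topic
            long start = System.currentTimeMillis();
            while (System.currentTimeMillis() - start < 60_000) { // sample for one minute
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Assumes every record carries a key, as described above.
                    messagesPerKey.merge(record.key(), 1L, Long::sum);
                    bytesPerKey.merge(record.key(),
                            (long) record.serializedValueSize(), Long::sum);
                }
            }
        }
        // Divide by the sampling window to get per-key messages/bytes per second.
        messagesPerKey.forEach((key, count) ->
                System.out.printf("%s: %.2f msg/s, %.2f bytes/s%n",
                        key, count / 60.0, bytesPerKey.get(key) / 60.0));
    }
}

The counters could then be pushed into whatever backend the JMX/Kafka exporters already feed, or the same aggregation could be done continuously with a Kafka Streams count-by-key.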
Hi Adrien,
Take a look at this post that I wrote. Maybe it can guide you.
Enjoy,
https://medium.com/@danielmrosa/monitoring-kafka-b97d2d5a5434
2018-07-09 12:09 GMT-03:00 adrien ruffie :
Great! Thanks a lot, Daniel!
I will try it.
Best Regards
Adrien
From: Daniel Hanley
Sent: Monday, 9 July 2018 16:23:36
To: users@kafka.apache.org
Subject: Re: Monitoring Kafka
Hi Adrien
You could take some ideas from: https://github.com/framiere/monitoring-demo
Alternatively, Confluent provide a very powerful Control Center for
monitoring and managing Kafka
(disclaimer, I work for Confluent!)
Best Regards
Dan
On Mon, Jul 9, 2018 at 2:12 AM, Adrien Ruffie wrote:
Hello Kafka Users,
I want to monitor our Kafka cluster correctly. I have read several articles
on "how to monitor Kafka" but I have the impression that every company is
doing a bit of a thing (rearranging them in his own way).
What the really thing I need to monitor, verify and set notifications
s
an external service like ELK or New Relic?
That's a long-standing industry pattern.
Some reading, and some tools in the readings; these articles are
opinionated towards the vendors that published them, but it's a starting
point.
https://blog.serverdensity.com/how-to-monitor-kafka/
https://www.datadoghq.com/blog/monitoring-kafka-performance-metrics/
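Whichever stack you pick, most setups end up alerting on a handful of broker-level MBeans. Here is a rough, untested sketch of reading a few of them over JMX; broker1:9999 is a placeholder and assumes JMX is enabled on the broker.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BrokerHealthCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder address; assumes the broker was started with JMX enabled.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://broker1:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();

            // A few MBeans that are commonly alerted on, from the standard
            // Kafka broker metrics.
            Object underReplicated = mbsc.getAttribute(new ObjectName(
                    "kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions"),
                    "Value");
            Object activeController = mbsc.getAttribute(new ObjectName(
                    "kafka.controller:type=KafkaController,name=ActiveControllerCount"),
                    "Value");
            Object bytesInRate = mbsc.getAttribute(new ObjectName(
                    "kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec"),
                    "OneMinuteRate");

            System.out.println("UnderReplicatedPartitions = " + underReplicated);
            System.out.println("ActiveControllerCount     = " + activeController);
            System.out.println("BytesInPerSec (1m rate)   = " + bytesInRate);
        }
    }
}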
On Apr 21, 2018, 6:54 AM -0400, Raghu Arur wrote:
Hi,
Is there a way to pull broker stats (like the partitions it is managing, JVM
info, state of the partitions, etc.) without using JMX? We are shipping
Kafka in an appliance and there are restrictions on the ports that are open
for security reasons. Are there any known ways of monitoring the health of
the brokers in that situation?
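If JMX really can't be exposed, the ordinary client port still offers a useful subset through the Java AdminClient API. It won't cover JVM internals, but it does cover broker liveness and partition/leader/ISR state. A rough, untested sketch; the bootstrap address is a placeholder.

import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterResult;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.Node;

public class NoJmxBrokerCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder; this is the normal client port, no JMX needed.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Cluster-level view: which brokers are alive, and who is controller.
            DescribeClusterResult cluster = admin.describeCluster();
            for (Node node : cluster.nodes().get()) {
                System.out.println("Broker up: " + node.idString() + " @ " + node.host());
            }
            System.out.println("Controller: " + cluster.controller().get().idString());

            // Per-partition view: leader, ISR size vs. replica count for every topic.
            Set<String> topics = admin.listTopics().names().get();
            for (TopicDescription td : admin.describeTopics(topics).all().get().values()) {
                td.partitions().forEach(p -> System.out.printf(
                        "%s-%d leader=%s isr=%d/%d%n",
                        td.name(), p.partition(),
                        p.leader() == null ? "none" : p.leader().idString(),
                        p.isr().size(), p.replicas().size()));
            }
        }
    }
}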
Hi Marius,
I'm curious what you've found?
One way to go about it is to have alerts (those that look for health status
- in SPM we call them Heartbeat alerts) hooked up to a Webhook, where a
Webhook is basically your custom HTTP endpoint (or something like Nagios).
This should let you integrate he
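For what it's worth, the webhook side of that setup is just a small HTTP endpoint that accepts the alert payload and forwards it wherever you like. A minimal, untested sketch using the JDK's built-in HttpServer; the port and path are arbitrary placeholders.

import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class AlertWebhook {
    public static void main(String[] args) throws Exception {
        // Hypothetical port; the monitoring tool's alert action would be
        // pointed at http://your-host:8080/kafka-alert.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/kafka-alert", exchange -> {
            try (InputStream body = exchange.getRequestBody()) {
                String payload = new String(body.readAllBytes(), StandardCharsets.UTF_8);
                // Forward to paging/Nagios/etc. here; the sketch just logs it.
                System.out.println("Alert received: " + payload);
            }
            exchange.sendResponseHeaders(204, -1);
            exchange.close();
        });
        server.start();
    }
}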
Hello Otis,
Thank you for your reply. Sorry for not being very explicit. For this
particular case, the failed application was on the consumer side, however,
monitoring the producer in the same way would be desired as well. I had a
look into SPM. It looks good, however I'm still trying to find a way to chec
Hi,
By "kafka client" I assume you mean you Kafka producer and/or consumers?
If so, any decent Kafka monitoring solution should let you monitor that.
See https://sematext.com/spm/integrations/kafka-monitoring/ for an example.
Otis
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr
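If you'd rather check the client side yourself than rely on a monitoring product, the modern Java producer/consumer (not the old 0.8.x Scala client mentioned below) exposes its internal metrics programmatically, the same values it publishes over JMX. A rough, untested sketch; the connection settings are placeholders and the two metric names are just examples.

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ClientMetricsPeek {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        props.put("group.id", "metrics-peek");          // placeholder
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Every producer/consumer carries its own metrics map; the same
            // values appear over JMX under the kafka.consumer/kafka.producer domains.
            for (Map.Entry<MetricName, ? extends Metric> e : consumer.metrics().entrySet()) {
                MetricName name = e.getKey();
                if (name.name().equals("connection-count")
                        || name.name().equals("request-rate")) {
                    System.out.printf("%s.%s = %s%n",
                            name.group(), name.name(), e.getValue().metricValue());
                }
            }
        }
    }
}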
Hi,
My application recently experienced a network connectivity issue which led
to the client (0.8.2.2) getting disconnected. After the network was
restored, the client failed to reconnect because, while trying to do this,
resolving the Zookeeper server hostname to an IP failed as well (DNS
failure).
killed?
Connect is agnostic to the deployment strategy, so it doesn't get involved
in process management at all. You should use whatever mechanism you
normally use to monitor processes (supervisord, a cluster manager like
Mesos or Kubernetes, etc).
killed? How can we identify the failing
tasks on different workers? Are there any best practices on monitoring Kafka
Connect and managing workers/connectors/tasks?
We are close to finishing our development and deploying this to test and
production environments, and it is very important that we figure this out.
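One concrete way to spot failed tasks, whatever process manager runs the workers, is to poll the Connect REST API's status endpoint from any worker in the cluster. A rough, untested sketch; the worker URL and connector name are placeholders.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ConnectTaskStatus {
    public static void main(String[] args) throws Exception {
        // Placeholder worker address; any worker answers for the whole cluster.
        String worker = "http://connect-worker1:8083";

        for (String connector : new String[] {"my-sink-connector"}) { // placeholder name
            URL url = new URL(worker + "/connectors/" + connector + "/status");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
                // The JSON response lists the connector state plus each task's
                // state ("RUNNING"/"FAILED"), its worker_id, and a stack trace
                // for failed tasks; a crude check is to just look for FAILED.
                if (body.toString().contains("FAILED")) {
                    System.out.println(connector + " has a failed task: " + body);
                }
            }
        }
    }
}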
The problem was that the metric names had all changed in the latest
version. Fixing the names seems to have done it.
On Thu, Aug 13, 2015 at 3:13 PM, Rajiv Kurian wrote:
Aah that seems like a red herring - seems like the underlying cause is that
the MBeans I was trying to poll (through our metrics system) are no longer
present. We use collectd JMX to get metrics from Kafka and here is what I
see:
GenericJMXConfMBean: No MBean matched the ObjectName
"kafka.server"
Until recently we were on 0.8.1 and updated to 0.8.2.1.
Everything seems to work but I am no longer seeing metrics reported from
the broker that was updated to the new version.
My config file has the following lines:
kafka.metrics.polling.interval.secs=5
kafka.metrics.reporters=kafka.metrics.Kafka
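When metric names move between versions like this, one way to rebuild the collectd GenericJMX configuration is to list everything the upgraded broker actually registers under the kafka.server domain and copy the new ObjectNames from there. A rough, untested sketch over JMX; the host and port are placeholders.

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ListKafkaMBeans {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port; point this at the upgraded broker's JMX port.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://broker1:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // Dump every MBean in the kafka.server domain so the collectd
            // GenericJMX config can be updated with the new ObjectNames.
            Set<ObjectName> names = mbsc.queryNames(new ObjectName("kafka.server:*"), null);
            names.forEach(System.out::println);
        }
    }
}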