Hi,
I have run a consumer performance test on my Kafka cluster.
Can you please help me understand the parameters below? Basically, I don't
know the units of these parameters, and I can't just assume them blindly.
The output has only two columns, "Metric Name" and "Value":
Metric Name
Hi all,
I am writing to describe how far I got with the issue.
The next thing I wanted to do was to check whether the topics contain
records with mixed timestamps in a single partition, which could cause the
warning "Skipping record for expired segment" - meaning the timestamp of
an incoming record is behind obs
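One way to run such a check is a small stand-alone consumer that scans a
partition and flags every offset whose timestamp is behind the maximum seen
so far. A minimal sketch, assuming a hypothetical topic
"my-repartition-topic", partition 0, and a broker on localhost:9092:

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class TimestampOrderCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);

        // Hypothetical topic/partition; point this at the suspect repartition topic.
        TopicPartition tp = new TopicPartition("my-repartition-topic", 0);
        long maxTimestampSeen = Long.MIN_VALUE;

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(List.of(tp));
            consumer.seekToBeginning(List.of(tp));
            long endOffset = consumer.endOffsets(List.of(tp)).get(tp);

            while (consumer.position(tp) < endOffset) {
                for (ConsumerRecord<byte[], byte[]> rec :
                        consumer.poll(Duration.ofSeconds(1)).records(tp)) {
                    // A record whose timestamp is behind the max seen so far is
                    // exactly the out-of-order case that can trigger the
                    // "Skipping record for expired segment" warning.
                    if (rec.timestamp() < maxTimestampSeen) {
                        System.out.printf("out-of-order at offset %d: ts=%d < max=%d%n",
                                rec.offset(), rec.timestamp(), maxTimestampSeen);
                    }
                    maxTimestampSeen = Math.max(maxTimestampSeen, rec.timestamp());
                }
            }
        }
    }
}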
Hi Michael,
You can find the ASF trademark policies here:
https://www.apache.org/foundation/marks/#books
--
Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
On Sun, 23 Feb 2020 at 19:47, Michi Bertschi wrote:
> Hello
> I'm Michael
>
>
> I am writing a thesis for the
Hi Jiri,
Thank you for the follow up.
I guess it can happen that during start-up and the respective
rebalances some partitions are read more often than others, and that
consequently the timestamps in the repartition topic are mixed up more
than during normal operation. Unfortunately, I do not kno
Hi List,
I'm trying to find a way to keep track of topics' (or partitions')
size across brokers, but I haven't found any Kafka metric for that. I'm
starting to wonder if that is possible.
I have ideas for a workaround (querying log files), but I wonder what
the right way would be, or how you keep monit
Hi Richard,
If you are running Kafka > 1.0.0, the information you are looking for
is exposed by the Admin API (the describeLogDirs method), or via the CLI
with kafka-log-dirs.sh.
The metrics are also exposed by the Dropwizard metrics reporter, e.g.
for Graphite:
kafkalog001.kafka.log.Log.partition.49.topic.
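For doing the same programmatically, here is a minimal sketch against the
Admin API. The allDescriptions() call assumes Kafka clients 2.7+, and
broker id 1 plus localhost:9092 are placeholders:

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.LogDirDescription;

import java.util.List;
import java.util.Map;
import java.util.Properties;

public class PartitionSizes {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // Ask broker 1 for its log dirs; list every broker id you care about.
            Map<Integer, Map<String, LogDirDescription>> logDirs =
                    admin.describeLogDirs(List.of(1)).allDescriptions().get();

            // Print the on-disk size of every replica in every log dir.
            logDirs.forEach((broker, dirs) ->
                    dirs.forEach((path, desc) ->
                            desc.replicaInfos().forEach((tp, replica) ->
                                    System.out.printf("broker %d %s %s size=%d bytes%n",
                                            broker, path, tp, replica.size()))));
        }
    }
}

The CLI equivalent is kafka-log-dirs.sh --bootstrap-server localhost:9092 --describe.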
Hi Gabriele,
I'm using Confluent Platform 5.3.1, which is Apache Kafka 2.3, and I'm
using JMX to retrieve metrics from brokers.
The only metric I saw under "kafka.log" is LogFlushStats, but nothing
about Log.partition.
This is the pattern I have deployed for the kafka.log metrics; maybe I
need a different one for partiti
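For reference, brokers register a per-partition size gauge under the MBean
kafka.log:type=Log,name=Size,topic=<topic>,partition=<n>, so a pattern that
only matches LogFlushStats would miss it. A minimal sketch that queries that
pattern over plain JMX, assuming the broker was started with JMX_PORT=9999
on localhost:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PartitionSizeViaJmx {
    public static void main(String[] args) throws Exception {
        // Assumes the broker exposes JMX on port 9999 (set via JMX_PORT).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // One Size gauge per topic-partition hosted on this broker.
            for (ObjectName name : mbs.queryNames(
                    new ObjectName("kafka.log:type=Log,name=Size,*"), null)) {
                System.out.printf("%s = %s bytes%n",
                        name, mbs.getAttribute(name, "Value"));
            }
        }
    }
}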
So I am trying to process incoming events that may or may not actually
update the state of my output object. Originally I was doing this with a
KStream/KTable join, until I saw the discussion about "KTable in Compact
Topic takes too long to be updated", at which point I switched to
groupByKey().aggregate().
Hello Adam,
It seems your intention is not to "avoid emitting if the new aggregation
result is the same as the old aggregation" but to "avoid processing the
aggregation at all if its state is already some certain value", right?
In this case I think you can try something like this:
stream.transformValues(...
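The message is cut off above; a hedged sketch of what such a transformValues
approach could look like: a ValueTransformerWithKey backed by a key-value
store that remembers the last value per key and drops values that would not
change it, so only real updates reach the downstream aggregation. The store
name "last-seen-store", the String serdes, and the topic name are
illustrative assumptions:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.ValueTransformerWithKey;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

public class SkipUnchangedUpdates {
    public static void build(StreamsBuilder builder) {
        // Store that remembers the last value seen per key.
        builder.addStateStore(Stores.keyValueStoreBuilder(
                Stores.persistentKeyValueStore("last-seen-store"),
                Serdes.String(), Serdes.String()));

        // Assumes default String serdes are configured for the app.
        KStream<String, String> stream = builder.stream("input-topic");

        KStream<String, String> realUpdates = stream
            .transformValues(() -> new ValueTransformerWithKey<String, String, String>() {
                private KeyValueStore<String, String> store;

                @Override
                @SuppressWarnings("unchecked")
                public void init(ProcessorContext context) {
                    store = (KeyValueStore<String, String>) context.getStateStore("last-seen-store");
                }

                @Override
                public String transform(String key, String value) {
                    // State already holds this value: emit null to mark "no update".
                    if (value != null && value.equals(store.get(key))) {
                        return null;
                    }
                    store.put(key, value);
                    return value;
                }

                @Override
                public void close() { }
            }, "last-seen-store")
            // transformValues still forwards nulls, so drop them here before
            // the existing groupByKey().aggregate() ever sees them.
            .filter((key, value) -> value != null);
    }
}

Note that returning null from transformValues still forwards the null
downstream, hence the trailing filter().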