Dear all,
We have a Kafka cluster with 5 nodes, and from the Datadog metrics we
found that the latency for sending to Kafka regularly exceeded 200 ms,
coinciding with a peak in system.io.await.
Please advise what the problem might be; any hints would be appreciated.
Kind regards
Sounds like you're reaching the limits of what your disks will do, either on
reads or writes. Debug it as you would any other disk-based app;
https://haydenjames.io/linux-server-performance-disk-io-slowing-application/
might help.
On Tue, 22 Jan 2019 at 09:19, wenxing zheng wrote:
> Dear all,
Hi Kafka Devs & Users,
We recently had an issue where we processed a lot of old data and
crashed our brokers due to too many memory-mapped files.
It seems to me that the nature of Kafka / Kafka Streams is a bit suboptimal
in terms of resource management. (Keeping all files open all the time,
ma
Hi Johan,
Your observation is correct; the root cause is that your two instances are
being upgraded in sequential order: say your old topology is tp1, and your
new topology with the new stream / topic is tp2. When you are upgrading, say,
instance1, instance1 already knows about tp2 while the other in
Hello Niklas,
If you can monitor your repartition topic's consumer lag, and it is
increasing consistently, it means your downstream processor simply cannot
keep up with the throughput of the upstream processor. Usually it means
your downstream operators are heavier (e.g. aggregations, joins that a
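If it helps, below is a minimal sketch of one way to check that lag programmatically, assuming the Java clients: it reads the application's committed offsets via AdminClient and compares them against the log end offsets. The bootstrap address, the group id (the Streams application.id) and the "-repartition" name filter are placeholders for your own setup; the kafka-consumer-groups.sh tool reports the same numbers from the command line.

import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class RepartitionLagCheck {
    public static void main(String[] args) throws Exception {
        String bootstrap = "localhost:9092";   // placeholder: your brokers
        String groupId = "my-streams-app";     // placeholder: the Streams application.id

        Properties adminProps = new Properties();
        adminProps.put("bootstrap.servers", bootstrap);

        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", bootstrap);
        consumerProps.put("key.deserializer", ByteArrayDeserializer.class.getName());
        consumerProps.put("value.deserializer", ByteArrayDeserializer.class.getName());

        try (AdminClient admin = AdminClient.create(adminProps);
             KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(consumerProps)) {

            // Offsets the application's consumer group has committed so far
            Map<TopicPartition, OffsetAndMetadata> committed =
                admin.listConsumerGroupOffsets(groupId)
                     .partitionsToOffsetAndMetadata().get();

            // Current log end offsets for the same partitions
            Map<TopicPartition, Long> endOffsets = consumer.endOffsets(committed.keySet());

            // Lag per partition = log end offset - committed offset
            committed.forEach((tp, om) -> {
                if (tp.topic().endsWith("-repartition")) {   // internal repartition topics
                    long lag = endOffsets.get(tp) - om.offset();
                    System.out.printf("%s lag=%d%n", tp, lag);
                }
            });
        }
    }
}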
Hi Peter,
Just to follow up on the actual bug, can you confirm whether:
* when you say "restart", do you mean orderly shutdown and restart, or
crash and restart?
* have you tried this with EOS enabled? I can imagine some ways that there
could be duplicates, but they should be impossible with EOS e
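(For reference, enabling EOS in Streams is a single config change; a minimal sketch below, assuming the Java Streams API, with the application id and bootstrap servers as placeholders.)

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class EosStreamsConfig {
    // Builds Streams properties with exactly-once processing enabled.
    public static Properties build() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");     // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
        // processing.guarantee=exactly_once wraps processing, state store updates
        // and output production in broker transactions, so duplicates from a crash
        // and restart are fenced out for read_committed consumers downstream.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        return props;
    }
}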
Hello,
I have subscribed to a Kafka topic as below. I need to run some logic
only after the consumer has been assigned a partition. However,
consumer.assignment() comes back as an empty set no matter how long I wait.
If I do not have the while loop and then do a consumer.poll() I do get
the r
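One thing that often explains this: assignment() is only populated after poll() has driven a group rebalance to completion, so spinning on assignment() without polling will wait forever. Below is a minimal sketch of one common pattern with the Java consumer, registering a ConsumerRebalanceListener and polling until partitions arrive (topic, group id and bootstrap servers are placeholders):

import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class WaitForAssignment {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("group.id", "my-group");                  // placeholder
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"),   // placeholder topic
                new ConsumerRebalanceListener() {
                    @Override
                    public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }

                    @Override
                    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                        // Runs inside poll() once the rebalance has completed,
                        // i.e. exactly when the consumer actually owns partitions.
                        System.out.println("Assigned: " + partitions);
                    }
                });

            // assignment() stays empty until poll() has completed a rebalance,
            // so keep polling instead of spinning on assignment() alone.
            while (consumer.assignment().isEmpty()) {
                consumer.poll(Duration.ofMillis(100));
            }

            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            System.out.println("First batch: " + records.count() + " records");
        }
    }
}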
Hi All,
We just upgraded from 0.10.x to 1.1 and enabled rack awareness on an existing
cluster which has about 20 nodes in 4 racks. After this we see that a few
brokers go through a continuous cycle of expanding and shrinking the ISR to
themselves, and it is also causing high latency for serving metadata requests.
What i
Hi,
I have one doubt: what is the maximum number of partitions allowed in one
topic of a Kafka cluster? Please help me.
There is no hard limit on the number of partitions per topic in Kafka.
Ideally the number of partitions would equal the number of consumers. The
consumer fetches a batch of messages per partition, so the more partitions
a consumer consumes, the more memory it needs.
On Wed, Jan 23, 2019 at 12:25 PM marimuthu eee
wr
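As a rough illustration of that memory point: each assigned partition can buffer up to max.partition.fetch.bytes of fetched data in the consumer (1 MiB by default), and each fetch response is additionally capped by fetch.max.bytes (50 MiB by default). A back-of-the-envelope sketch in Java, with the partition count as a made-up example value:

public class FetchMemoryEstimate {
    public static void main(String[] args) {
        // Rough upper bound on consumer fetch-buffer memory: up to
        // max.partition.fetch.bytes of buffered data per assigned partition,
        // with each fetch response capped by fetch.max.bytes.
        int assignedPartitions = 200;               // example value, not from the thread
        long maxPartitionFetchBytes = 1024 * 1024;  // default max.partition.fetch.bytes (1 MiB)
        long fetchMaxBytes = 50L * 1024 * 1024;     // default fetch.max.bytes (50 MiB)

        long perPartitionBound = assignedPartitions * maxPartitionFetchBytes;
        System.out.printf("Per-partition bound: %d MiB, per-response cap: %d MiB%n",
            perPartitionBound / (1024 * 1024), fetchMaxBytes / (1024 * 1024));
    }
}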