Hello there,
I managed to fix this, but I would love to understand the why of the
failure... hopefully some of you can explain :-)
The production configuration has 4 Kafka Streams threads per instance, and
there are about 4 instances, so roughly 16 Kafka Streams threads are working.
The production topic has 4
Hey Aravind,
If your client/broker just upgraded to 2.3, Jason has filed a blocker for
2.3: https://issues.apache.org/jira/browse/KAFKA-8653
and a fix is on its way: https://github.com/apache/kafka/pull/7072/files
Let me know if you are actually on a different version.
Boyang
On Wed, Jul 10, 2
What about monitoring consumer lag?
-Matthias
On 7/10/19 5:19 PM, Brian Putt wrote:
> Hello,
>
> We have multiple stream services that we're looking to monitor when they've
> been disconnected from the broker so that we can restart the services.
>
> I've looked at https://issues.apache.org/jira
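Matthias's suggestion of monitoring consumer lag can be sketched in plain Python. This is an illustrative sketch only: the offsets are hard-coded here, whereas in practice they would come from the admin client or the kafka-consumer-groups tool; lag per partition is simply the log-end offset minus the committed offset.

```python
# Illustrative sketch: consumer lag per partition.
# Offsets are hard-coded; in a real monitor they would be fetched
# from the broker (e.g. via an admin client).

def consumer_lag(log_end_offsets, committed_offsets):
    """Return {partition: lag} for every partition in log_end_offsets."""
    return {
        tp: end - committed_offsets.get(tp, 0)
        for tp, end in log_end_offsets.items()
    }

ends = {("events", 0): 1200, ("events", 1): 800}
committed = {("events", 0): 1150, ("events", 1): 800}
print(consumer_lag(ends, committed))  # lag 50 on partition 0, 0 on partition 1
```

A lag that keeps growing while the group stays "stable" is a common sign that the consumer is disconnected or stuck.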
Our Kafka Streams application is stuck and continuously emits a "(Re-)joining
group" log message every 5 minutes without making any progress.
The kafka-consumer-groups command-line tool with the --members option shows lots
of stale members, in addition to the expected member ids shown in log msgs on kafka-stream
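The stale-member check described above can be sketched as a simple set difference: members that `kafka-consumer-groups.sh --describe --group <group> --members` still reports but that no longer appear in the application's "(Re-)joining group" logs. The member ids below are made up for illustration.

```python
# Illustrative sketch: find members the broker still lists but the
# application no longer logs. Member ids are hypothetical examples.

def stale_members(reported_members, logged_member_ids):
    """Members reported by the group coordinator but absent from app logs."""
    return sorted(set(reported_members) - set(logged_member_ids))

reported = ["app-1-uuid-a", "app-1-uuid-b", "app-2-uuid-c"]
logged = ["app-1-uuid-b"]
print(stale_members(reported, logged))  # ['app-1-uuid-a', 'app-2-uuid-c']
```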
We have a situation on one of our clusters where, when we run a partition
reassignment using /usr/local/kafka/bin/kafka-reassign-partitions.sh, we see
the reassignment data published to ZooKeeper under
/admin/reassign_partitions, and all partition requests show as in progress, but
the brokers do not app
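While a reassignment is pending, the /admin/reassign_partitions znode holds JSON in the shape sketched below; parsing it lists the partitions the controller still considers in progress. The topic name and replica lists are illustrative.

```python
import json

# Illustrative sketch: the /admin/reassign_partitions znode contains JSON
# like this sample while a reassignment is pending. Values are made up.

sample_znode = """
{"version": 1,
 "partitions": [
   {"topic": "events", "partition": 0, "replicas": [2, 3]},
   {"topic": "events", "partition": 3, "replicas": [1, 2]}
 ]}
"""

def pending_reassignments(znode_json):
    """Return (topic, partition) pairs still listed as being reassigned."""
    data = json.loads(znode_json)
    return [(p["topic"], p["partition"]) for p in data["partitions"]]

print(pending_reassignments(sample_znode))  # [('events', 0), ('events', 3)]
```

If the znode never empties, the reassignment is stuck rather than merely slow.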
Hello,
We have multiple stream services that we're looking to monitor when they've
been disconnected from the broker so that we can restart the services.
I've looked at https://issues.apache.org/jira/browse/KAFKA-6520 and am
wondering if anyone has suggestions on what we can do today to help ensu
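One approach available today is the state listener that the Kafka Streams Java API exposes via KafkaStreams.setStateListener: when the client transitions into ERROR (or NOT_RUNNING), trigger a restart. The sketch below models that idea in plain Python; the state names match the Java client's, but the watcher class and restart hook are hypothetical.

```python
# Illustrative sketch of restart-on-failure via a state listener.
# The Java API offers KafkaStreams.setStateListener; this Python model
# (RestartWatcher, restart_hook) is hypothetical.

RESTART_STATES = {"ERROR", "NOT_RUNNING"}

class RestartWatcher:
    def __init__(self, restart_hook):
        self.restart_hook = restart_hook
        self.restarts = 0

    def on_state_change(self, old_state, new_state):
        # Fire the hook only on transitions into a terminal state.
        if new_state in RESTART_STATES:
            self.restarts += 1
            self.restart_hook()

calls = []
watcher = RestartWatcher(lambda: calls.append("restart"))
watcher.on_state_change("RUNNING", "REBALANCING")   # normal, no restart
watcher.on_state_change("REBALANCING", "ERROR")     # triggers the hook
print(watcher.restarts)  # 1
```

This catches hard failures; detecting a silently hung-but-RUNNING client still needs an external signal such as consumer lag.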
CVE-2018-17196: Potential to bypass transaction/idempotent ACL checks in
Apache Kafka
Severity: Moderate
Vendor: The Apache Software Foundation
Versions Affected: Apache Kafka 0.11.0.0 - 2.1.0
Description: It is possible to manually craft a Produce request which
bypasses transaction/idempotent
Thanks Bruno and Patrik,
my fault was that, since auto-creation is set to false on Confluent Cloud,
I had manually created the topics without taking care to change the
cleanup policy.
As a result the store changelogs kept the whole history of changes; in fact,
after enabling the compaction policy the storage us
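For reference, when auto-creation is disabled and changelog topics are created by hand, the compaction policy can be set explicitly at creation time. A configuration example (topic name, partition count, and replication factor are illustrative):

```shell
kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic my-app-store-changelog --partitions 4 --replication-factor 3 \
  --config cleanup.policy=compact
```

Streams-managed changelog topics get cleanup.policy=compact automatically; manually created ones inherit the broker default, typically delete.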
Just to close the loop on this for the mailing list: after discussion
with Matthias Sax on Slack, I created this issue:
https://issues.apache.org/jira/browse/KAFKA-8650.
Regards,
Raman
On Tue, Jul 9, 2019 at 12:43 PM Raman Gupta wrote:
>
> I have a stream that is configured for exactly-once proc
Hi
Regarding the I/O: RocksDB has something called write amplification, whereby
data is rewritten across multiple internal levels to enable better optimization,
at the cost of storage and I/O.
This is also the reason the stores can get larger than the topics themselves.
This can be modified by RocksDB se
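The effect described above is simple arithmetic: each byte of user data may be rewritten once per level during compaction, so total device writes can be a multiple of the bytes ingested. A sketch with made-up numbers:

```python
# Illustrative arithmetic only: write amplification in a leveled LSM store.
# The per-level byte counts below are hypothetical.

def write_amplification(user_bytes, bytes_written_per_level):
    """Total device writes divided by logical user writes."""
    return sum(bytes_written_per_level) / user_bytes

# Say 1 GiB ingested is rewritten at each of four stages (WAL/L0 plus
# three compaction levels), each stage writing roughly the same volume:
user = 1 * 1024**3
per_level = [user, user, user, user]
print(write_amplification(user, per_level))  # 4.0
```

The same mechanism explains why on-disk stores can exceed the size of their source topics: old versions of a key linger across levels until compaction discards them.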
Hi Alessandro,
> - how do I specify the retention period of the data? Just by setting the
> max retention time for the changelog topic?
For window and session stores, you can set retention time on a local
state store by using Materialized.withRetention(...). Consult the
javadocs for details. If
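The retention behavior of Materialized.withRetention(...) in the Java API can be modeled in plain Python: windows whose start time falls behind stream-time minus the retention period are evicted. The store class below is a hypothetical simplification, not the actual RocksDB-backed implementation.

```python
# Illustrative sketch of retention on a windowed store. The class and
# eviction logic are simplified models, not Kafka Streams internals.

class WindowedStore:
    def __init__(self, retention_ms):
        self.retention_ms = retention_ms
        self.entries = {}     # (key, window_start_ms) -> value
        self.stream_time = 0  # highest timestamp observed so far

    def put(self, key, window_start_ms, value):
        self.stream_time = max(self.stream_time, window_start_ms)
        self.entries[(key, window_start_ms)] = value
        self._evict()

    def _evict(self):
        # Drop windows that start before stream-time minus retention.
        cutoff = self.stream_time - self.retention_ms
        self.entries = {
            kw: v for kw, v in self.entries.items() if kw[1] >= cutoff
        }

store = WindowedStore(retention_ms=1000)
store.put("a", 0, 1)
store.put("a", 2000, 2)       # advances stream-time, evicting the old window
print(sorted(store.entries))  # [('a', 2000)]
```

Note that eviction is driven by stream-time, not wall-clock time, which matches how Streams expires windows.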