Hello,
I am reading this doc:
https://github.com/zendesk/ruby-kafka which says:
This behavior is controlled by the required_acks option to #producer and
#async_producer:
# This is the default: all replicas must acknowledge.
producer = kafka.producer(required_acks: :all)
My question
The producer only sends the required_acks value (all/1/0) in the produce request to the
partition leader.
The leader is responsible for making sure the required_acks requirement is fulfilled; only
then does it acknowledge to the client that the produce succeeded.
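ruby-kafka's required_acks maps to the same wire-level setting that the Java client calls acks. As a minimal sketch of the semantics described above (the broker address and topic name are placeholders, not from the original question):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AcksAllExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // acks=all travels inside the produce request to the partition leader;
        // the leader waits for all in-sync replicas before acknowledging, and
        // the client never talks to the followers directly.
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value")); // placeholder topic
            producer.flush();
        }
    }
}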
Hello,
Being new to Kafka, I'd like to deploy a Kafka cluster on K8s with 3 brokers,
with listenerSecurityProtocolMap: "INTERNAL:SSL,CLIENT:PLAINTEXT,EXTERNAL:SSL".
To enable TLS authentication, I use self-signed TLS certificates. To enable
external access, Kafka needs to use 3 LoadBalanc
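For context, a rough sketch of the broker-side listener settings that a Helm value like listenerSecurityProtocolMap typically translates into (the listener names match your map; the ports and hostnames below are placeholders, not taken from your setup):

listener.security.protocol.map=INTERNAL:SSL,CLIENT:PLAINTEXT,EXTERNAL:SSL
listeners=INTERNAL://:9093,CLIENT://:9092,EXTERNAL://:9094
advertised.listeners=INTERNAL://kafka-0.kafka-headless:9093,CLIENT://kafka-0.kafka-headless:9092,EXTERNAL://<per-broker-loadbalancer-address>:9094
inter.broker.listener.name=INTERNAL

The EXTERNAL listener is the one each broker must advertise via its own externally reachable address, which is why a LoadBalancer Service per broker is the usual pattern.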
Hi
I have a KafkaStreams application with a reasonably complex, stateful
topology.
From monitoring it, we can say for sure that it is bound by write I/O.
This has become much worse after we upgraded KafkaStreams from 2.4 to 2.8
(even though we disabled warm-up replicas by setting
"acceptable.reco
Hi Murilo,
Have you checked out the following blog post on tuning the performance of
RocksDB state stores [1], especially the section on high disk I/O and
write stalls [2]?
Do you manage the off-heap memory used by RocksDB as described in the
Streams docs [3]?
I do not know what may have caused
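For anyone following along, the approach in the Streams memory-management docs is a custom RocksDBConfigSetter that shares one block cache and write-buffer manager across all state stores. A minimal sketch (the byte limits are placeholders to tune per host):

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Cache;
import org.rocksdb.LRUCache;
import org.rocksdb.Options;
import org.rocksdb.WriteBufferManager;

public class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {
    // Shared by every store instance in this Streams process.
    private static final long TOTAL_OFF_HEAP_BYTES = 512L * 1024 * 1024; // placeholder
    private static final long TOTAL_MEMTABLE_BYTES = 128L * 1024 * 1024; // placeholder
    private static final Cache CACHE = new LRUCache(TOTAL_OFF_HEAP_BYTES);
    private static final WriteBufferManager WRITE_BUFFER_MANAGER =
            new WriteBufferManager(TOTAL_MEMTABLE_BYTES, CACHE);

    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = (BlockBasedTableConfig) options.tableFormatConfig();
        tableConfig.setBlockCache(CACHE);                    // block cache counts against the shared limit
        tableConfig.setCacheIndexAndFilterBlocks(true);      // index/filter blocks count as well
        options.setWriteBufferManager(WRITE_BUFFER_MANAGER); // memtables draw from the same cache
        options.setTableFormatConfig(tableConfig);
    }

    @Override
    public void close(final String storeName, final Options options) {
        // The cache and write-buffer manager are shared, so they must not be closed per store.
    }
}

The class is registered with the rocksdb.config.setter config (StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG).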
Can anyone please provide some feedback? I don't have the logs right now, but I just
wanted to confirm how this case can arise and how to ensure that it does not
happen again.
On Fri, Nov 26, 2021 at 5:44 PM Lehar Jain wrote:
> Hello community,
>
> Recently my team faced an issue with our Kafka Connect Mirrorma