There are two reasons (some would be easier to address than others):
1) a client can only connect to one cluster; to allow data
repartitioning, the producer must write the repartition data into the
input cluster, otherwise the consumer cannot read the repartitioned data
(see the sketch below).
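To make reason 1 concrete: a Kafka Streams-style client takes a single
bootstrap.servers setting, so its input topics, output topics and internal
repartition topics all have to live on that one cluster. A minimal sketch,
assuming a Kafka Streams application (the application id and host below are
made up):

    # illustrative Streams properties -- there is only one cluster to point at
    application.id=my-pipeline
    bootstrap.servers=input-cluster:9092
    # internal repartition topics (<application.id>-...-repartition) are
    # created on this same cluster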
Hi Kafka community,
We're running Kafka 2.4 and facing a pretty strange situation.
Let's say there were three brokers in the cluster: 0, 1, and 2. Then:
1. Broker 3 was added.
2. Partitions were reassigned from broker 0 to broker 3 (the usual tooling
for this is sketched below).
3. Broker 0 was shut down (not gracefully) and removed from the cluster.
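For reference, a reassignment like the one in step 2 is normally driven with
the stock tooling; a rough sketch, assuming the ZooKeeper-based
kafka-reassign-partitions.sh that ships with 2.4 (topic name, hosts and the
target broker list are illustrative):

    # topics.json: {"version":1,"topics":[{"topic":"my-topic"}]}
    bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
      --topics-to-move-json-file topics.json --broker-list "1,2,3" --generate
    # save the proposed assignment as reassign.json, then:
    bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
      --reassignment-json-file reassign.json --execute
    bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
      --reassignment-json-file reassign.json --verify

The --verify step matters here: it confirms every partition has actually moved
off broker 0 before that broker is shut down and removed.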
Cyrille, I don't see why using MM1/2 would break your isolation
requirement. But if you can't mirror topics for some reason, consider Flink
instead of Kafka Streams.
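For what it's worth, a minimal MirrorMaker 2 setup for that kind of one-way
mirroring could look like the sketch below (cluster aliases, hosts and the
topic pattern are made up, not taken from Cyrille's setup):

    # mm2.properties (illustrative)
    clusters = input, output
    input.bootstrap.servers = input-cluster:9092
    output.bootstrap.servers = output-cluster:9092
    input->output.enabled = true
    input->output.topics = results.*
    # note: by default MM2 prefixes mirrored topics with the source alias,
    # e.g. results.foo shows up on the target as input.results.foo

    # run with: bin/connect-mirror-maker.sh mm2.properties

That way the Streams (or Flink) job only ever talks to one cluster, and MM2
carries the results across.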
Ryanne
On Thu, Feb 13, 2020 at 10:52 AM Cyrille Karmann wrote:
> Hello,
>
> We are trying to create a streaming pipeline of data between different
> Kafka clusters.
Hello,
We are trying to create a streaming pipeline of data between different
Kafka clusters. Our users send data to the input Kafka cluster, and we want
to process this data and send the result to topics on another Kafka cluster.
We have different reasons for this setup, but mainly it's for isolation.
Hi,
Please find my replies inline.
From: M. Manna
Sent: 13 February 2020 19:18
To: Chikulal C
Cc: Kafka Users
Subject: Re: Kafka clustering issue
Hi,
On Thu, 13 Feb 2020 at 12:43, Chikulal C <chikula...@rcggs.com> wrote:
1. Turned off node1 and node 2
Hi,
On Thu, 13 Feb 2020 at 12:43, Chikulal C wrote:
>
> 1. Turned off node1 and node 2 (expected vs. actual)
>    1. expected: Message publish failure with following warnings (in producer)
>       1. Connection to node 0 could not be established. Broker may not be
>          available.
1. Turned off node1 and node 2 (expected vs. actual)
   * expected: Message publish failure with following warnings (in producer)
     * Connection to node 0 could not be established. Broker may not be
       available.
     * Connection to node 1 could not be established. Broker may not be
       available.
My apologies as I misread one of the steps you mentioned in your original
email.
Could you kindly mention what you are seeing as per your order of failover
tests?
1. Turned off node1 and node 2 (expected vs. actual)
2. Turned on node 1 (expected vs actual)
3. Turned off node 1 (expected vs. actual)
Hi,
I tried setting transaction.state.log.min.isr=1. But the issue still exists.
I am also getting one warning after doing step 3 (with
transaction.state.log.min.isr=1) and producing some data on the topic, as given
below.
[Producer clientId=producer-1] 2 partitions have leader brokers without a
matching listener
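A quick way to see where the partition leaders actually are after step 3 is
to describe the topics involved (a sketch; topic and host names are
illustrative):

    bin/kafka-topics.sh --bootstrap-server broker2:9092 --describe --topic my-topic
    bin/kafka-topics.sh --bootstrap-server broker2:9092 --describe --topic __transaction_state

If any partition still lists the stopped broker as its leader, or shows no
leader at all, that is the likely source of the warning above.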
This could be because you have set your transaction.state.log.min.isr=2. Have
you tried setting this to 1?
Also, please note that if your min.insync.replicas=1, and you only have 2
nodes, you would only have a guarantee from 1 broker to have the messages
- but if that same broker fails, you lose that guarantee.
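For a two-broker cluster the internal-topic settings generally need to be
lowered from their defaults; a sketch of the relevant server.properties
entries, assuming two brokers (values are illustrative, not a recommendation):

    # server.properties on both brokers
    offsets.topic.replication.factor=2
    transaction.state.log.replication.factor=2
    transaction.state.log.min.isr=1
    min.insync.replicas=1

With min.insync.replicas=1 and acks=all, a write is acknowledged as soon as a
single broker has it, which is exactly the weakened guarantee described above.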
Hi,
I am facing an issue with the Kafka clustering setup that I have. I have a
Kafka cluster with two brokers that are connected to two ZooKeeper nodes. I am
posting data to a topic that has a replication factor of two and two
partitions, with a Spring Boot Kafka producer, and consuming the same with
another Spring Boot consumer.
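For reference, a topic with that layout (two partitions, replication factor
two) would be created and inspected roughly like this (a sketch; the topic
name and hosts are illustrative):

    bin/kafka-topics.sh --create --bootstrap-server broker1:9092,broker2:9092 \
      --topic my-topic --partitions 2 --replication-factor 2
    bin/kafka-topics.sh --describe --bootstrap-server broker1:9092,broker2:9092 \
      --topic my-topic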
Hi Khoi,
Short answer: No.
Broker logs are append only, so your options are either to delete all
records before a certain offset (for a non-compacted topic) or to have a
compacted topic and send a message with a null value. For some applications
(such as GDPR compliance, where there's a legal requirement to delete user
data), a compacted topic with tombstones is the usual approach.
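To make the two options concrete, a sketch using the stock tooling (topic
name, partition and offset are illustrative):

    # option 1: truncate partition 0 up to offset 1000
    # delete.json: {"version":1,"partitions":[{"topic":"my-topic","partition":0,"offset":1000}]}
    bin/kafka-delete-records.sh --bootstrap-server broker1:9092 \
      --offset-json-file delete.json

    # option 2: on a compacted topic, produce a record for the key you want
    # removed with a null value (a tombstone); this is normally done from
    # client code, since the console producer can't easily send null values

Compaction removes the older records for that key only after the log cleaner
has run, so the delete is eventual rather than immediate.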