, Paul
From: Sachin Mittal
Date: Monday, 3 February 2025 at 3:41 pm
To: users@kafka.apache.org
Subject: Re: How to scale a Kafka Cluster, what all should we consider
as message rate, message size, fan-out, number of consumers and partitions etc.,
all of which potentially consume CPU – good luck!
Regards, Paul Brebner
From: Sachin Mittal
Date: Friday, 31 January 2025 at 7:41 pm
To: users@kafka.apache.org
Subject: How to scale a Kafka Cluster, what all should we consider
Hi,
I just wanted to have some general discussion around the topic of how to
scale up a Kafka cluster.
Currently we are running a 5 node Kafka Cluster.
Each node has *4 vCPU* and *8 GiB* memory.
I have a topic which is partitioned *35* ways.
I have *5* producers publishing messages to that
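Since this thread is about sizing, one way to ground the discussion is to first measure what the current cluster sustains with the perf-test tools that ship with Kafka; a rough sketch (topic name, record size and broker address below are placeholders):

# Rough write/read throughput baseline against the existing cluster.
bin/kafka-producer-perf-test.sh \
  --topic scale-test \
  --num-records 1000000 \
  --record-size 1024 \
  --throughput -1 \
  --producer-props bootstrap.servers=broker1:9092 acks=all

bin/kafka-consumer-perf-test.sh \
  --bootstrap-server broker1:9092 \
  --topic scale-test \
  --messages 1000000

This only gives raw broker throughput; the fan-out and consumer/partition counts Paul mentions still have to be factored in on top.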
We increased those settings to a very high number when we deploy Kafka in
production, so no, there is no problem with limits.
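For anyone checking the same thing, a quick way to confirm the limit the running broker actually got (this assumes the broker runs under the standard kafka.Kafka main class; adjust the pattern if you use a wrapper, and run it as the broker user or root):

BROKER_PID=$(pgrep -f kafka.Kafka | head -n1)
# Effective open-file limit of the broker process...
grep "open files" /proc/"$BROKER_PID"/limits
# ...versus how many file descriptors it currently holds.
ls /proc/"$BROKER_PID"/fd | wc -l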
-Original Message-
From: Oleksandr Shulgin
Sent: Monday, December 2, 2024 1:49 PM
To: users@kafka.apache.org
Subject: COMMERCIAL:Re: Kafka cluster collapsed for no
On Sat, Nov 30, 2024 at 10:02 AM Rybalka, Grigoriy (Fortebank)
wrote:
>
> And these logs appear on the remaining 2 broker nodes; it seems that the Kafka
> brokers lost connection between the nodes, but SSH and other traffic to/from the
> nodes worked! And on the network side there are no problems. What el
Hello! We have a 3-broker Kafka cluster (KRaft); the brokers and KRaft controllers
run on the same nodes.
CPU: 16
RAM: 32 GB
We have 2241 topics and 107262 online partitions with 23652 client connections.
The Kafka version is 3.6.1.
And yesterday we had trouble from 12:08 to 12:11.
We have so many logs on
echanism+to+cordon+brokers+and+log+directories
is trying to address.
It is under discussion, and should be included in the upcoming releases.
Thanks.
Luke
On Tue, Oct 29, 2024 at 12:45 AM hayoung lee wrote:
Hi Team,
I am currently operating a Kafka cluster in KRaft mode and would like to
raise a few questions and suggestions regarding the cluster scale-down
process.
Recently, I scaled down the cluster by removing one broker; however, I
encountered a situation where the removed broker still appeared
Hi,
As Artem mentioned, I did some tests with setting replication factor 1 and
3 for two different topics
One of the Kafka brokers is down:
The command works if the replication factor is 3. (*testtopicreplica3 is
created with rf 3)*
*[root@node-223 kafka_2.12-2.8.2]# ./bin/kafka-consumer-gr
Hi,
Just a long shot, but I might be wrong. You have
offsets.topic.replication.factor=1 in your config; when one broker is down,
some partitions of the __consumer_offsets topic will be down as well. So
kafka-consumer-groups can't get offsets from it. Maybe it's just a slightly
misleading error message.
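If that is the cause, it is easy to confirm from the CLI; a small sketch (broker address taken from this thread):

# With offsets.topic.replication.factor=1, each __consumer_offsets partition
# has a single replica, so stopping that broker makes those partitions
# unavailable and group/offset lookups can fail.
bin/kafka-topics.sh --bootstrap-server 192.168.20.223:9092 \
  --describe --topic __consumer_offsets | head
# Note: raising the factor later requires a partition reassignment
# (kafka-reassign-partitions.sh); changing the broker setting alone does not
# alter the already-created topic.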
Hi, sorry for the confusion, here are the details:
I have 3 broker nodes: 192.168.20.223 / 224 / 225
When all kafka services are UP:
[image: image.png]
I stopped the kafka service on *node 225*:
[image: image.png]
Then I tried the command on node 223 with --bootstrap-server
192.168.20.223:9092,192.16
Hi.
Which server did you shut down in testing?
If it was 192.168.20.223, it is natural that the kafka-consumer-groups script
fails, because you passed only 192.168.20.223 to the bootstrap-server arg.
In an HA setup, you have to pass multiple brokers (as a comma-separated
string) to bootstrap-server so that
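For example, with the addresses used earlier in this thread:

# Any broker in the list that is still up can answer the initial metadata
# request, so the tool keeps working when a single node is down.
bin/kafka-consumer-groups.sh \
  --bootstrap-server 192.168.20.223:9092,192.168.20.224:9092,192.168.20.225:9092 \
  --list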
Hi all,
I'm trying to do some tests about high availability on kafka v2.8.2
I have 3 kafka brokers and 3 zookeeper instances.
when I shut down the Kafka service on only one of the servers, I got this
error:
[root@node-223 ~]# /root/kafka_2.12-2.8.2/bin/kafka-consumer-groups.sh
--bootstrap-server 192
aring to a
graceful shutdown.
On Thu, Nov 23, 2023 at 12:40 PM Denis Santangelo <
denis.santang...@scorechain.com> wrote:
Hello Denis,
I'm encountering a peculiar issue with my Kafka cluster.
I've been running 8 brokers on version 3.4.0 for several months, and
everything seems to be functioning well.
All my topics have at least two replicas for each partition.
However, I face a problem when I shut dow
Hi all,
I created a Kafka rebalancer (in bash) which minimizes the number of partitions
to be moved around when a broker (or more than one) is added or removed from
the cluster.
Unlike the already available 'kafka-reassign-partitions' (which comes
with the Kafka package), this new tool work
Thanks Divij, I will check further.
---
Thanks & Regards,
Kunal Jadhav
On Fri, Apr 14, 2023 at 4:25 PM Divij Vaidya
wrote:
Hey Kunal
We would need more information to debug your scenario since there are no
known bugs (AFAIK) in 3.3.2 associated with leader election.
At a very high level, the ideal sequence of events should be as follows:
1. When the existing leader shuts down, it will stop sending requests for
heartb
Hello All,
We have implemented a 3-broker cluster on a single-node server in the
Kubernetes environment, which is a ZooKeeper-less cluster running Kafka
version 3.3.2. And we are facing an issue where, when the existing leader broker
goes down, a new leader is not elected. We have faced this issue
seve
Subject: Re: Kafka Cluster WITHOUT Zookeeper
Paul, thanks for the articles. It's great to see someone di
Hello Kafka experts,
Is there a way we can have a Kafka cluster be functional, serving
producers and consumers, without having a ZooKeeper cluster manage the
instance?
Any particular version of Kafka for this, or how can we achieve this, please?
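For reference, ZooKeeper-less operation is what KRaft mode provides (available as a preview since Kafka 2.8 and marked production-ready for new clusters around 3.3). A minimal single-node sketch using the sample config shipped in the Kafka tarball:

# Generate a cluster id, format the log dirs, then start a combined
# broker+controller from the stock KRaft sample config.
KAFKA_CLUSTER_ID=$(bin/kafka-storage.sh random-uuid)
bin/kafka-storage.sh format -t "$KAFKA_CLUSTER_ID" -c config/kraft/server.properties
bin/kafka-server-start.sh config/kraft/server.properties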
isk restarting the next broker before all partitions have returned to healthy,
and then you’ll have offline partitions because your minISR is 2.
--
Peter Bukowinski
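A small sketch of the check described above, run between restarts (broker address is a placeholder):

# Wait until no partition is under-replicated before restarting the next broker.
while bin/kafka-topics.sh --bootstrap-server broker1:9092 \
        --describe --under-replicated-partitions | grep -q .; do
  echo "replicas still catching up..."
  sleep 10
done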
> On Mar 6, 2023, at 7:04 AM, Luis Alves wrote:
>
> Hello,
>
> I'm doing some tests with rolling restarts in a Kafka cluster and
Hello,
I'm doing some tests with rolling restarts in a Kafka cluster and I have a
couple of questions related to the impact of rolling restarts on Kafka
consumers/producers and on the overall process.
First, some context on my setup:
- Kafka cluster with 3 nodes.
- Topic replic
Hi
I have a Kafka cluster and a Kafka Connect cluster that connects to it.
This Kafka Connect cluster has:
* group.id = KCG1
* config.storage.topic = connect-config
* offset.storage.topic = connect-storage
* status.storage.topic = connect-status
I want to build a 2nd Kafka Connect cluster with
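The message is cut off above, but the usual requirement for a second Connect cluster on the same Kafka cluster is simply a distinct group.id and three distinct internal topics; a hedged worker-config sketch (all names and the broker address below are placeholders):

cat > connect-distributed-2.properties <<'EOF'
bootstrap.servers=broker1:9092
# Must not collide with the first Connect cluster's values:
group.id=KCG2
config.storage.topic=connect2-config
offset.storage.topic=connect2-offsets
status.storage.topic=connect2-status
config.storage.replication.factor=3
offset.storage.replication.factor=3
status.storage.replication.factor=3
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
EOF
bin/connect-distributed.sh connect-distributed-2.properties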
entation/#monitoring
Thank you.
Luke
On Thu, Feb 3, 2022 at 1:54 PM Dhirendra Singh wrote:
Hi All,
does Kafka have any metrics for the count of the number of producers and consumers
connected to a Kafka cluster at any given time?
Thanks,
Dhirendra.
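As far as I know there is no single "connected producers/consumers" metric; two rough approximations are listing consumer groups from the CLI and watching per-listener connection counts over JMX (kafka.server socket-server-metrics). A sketch of the CLI side (broker address and group name are placeholders):

# Number of consumer groups known to the cluster...
bin/kafka-consumer-groups.sh --bootstrap-server broker1:9092 --list | wc -l
# ...and the members of one group (repeat per group; newer tool versions
# also accept --describe --all-groups --members).
bin/kafka-consumer-groups.sh --bootstrap-server broker1:9092 \
  --describe --group my-group --members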
:
https://www.confluent.io/blog/confluent-rest-proxy-putting-kafka-to-rest/
When I tried /Kafka/v3/clusters with a KRaft-mode Kafka cluster, I got a 404. The
Kafka cluster was set up using this doc and works with Kafdrop.
Thx.
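In case it helps: the v3 API in that post is served by Confluent REST Proxy, not by the brokers themselves, so a 404 usually means the request is not reaching a REST Proxy instance. A hedged sanity check against a default standalone REST Proxy listener (host and port are assumptions):

# Default standalone REST Proxy listens on 8082 and serves /v3/clusters.
curl -s http://localhost:8082/v3/clusters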
..@die-schneider.net> wrote:
We have a single tenant application that we deploy to a kubernetes cluster
in many instances.
Every customer has several environments of the application. Each application
lives in a separate namespace and should be isolated from other applications.
We plan to use kafka to communicate inside an environment (between the
different pods).
As setting up one kafka cluster per such environment is a lot of overhead
and cost we would like to just use a single multi tenant kafka cluster.
Let's assume we just have one topic with 10 partitions for simplicity.
We can now use the environment id as
Hi,
I'm currently facing an integration issue between two different kafka versions
and I was wondering if someone has already faced a similar problem.
I have a kafka client (version 2.4.1) connecting to a Kafka cluster (version
2.6.1). After a successful SSL handshake I get the following
error (sorry but I had to attach a screenshot because I do not have the
logs in text version anymore, my apologies for this):
Hi,
I have an environment with a Kafka cluster of 3 brokers and Kafka Streams
processing data from a Kafka topic.
Here the Kafka and Kafka Streams versions are 2.7.0.
It works fine for some time, but later we have issues in
Kafka Streams; the logs show the errors below:
- Ex
ey and value, idempotency =true. Is there a chance the JMX
> metrics can go wrong ?.
>
> On Thu, Apr 29, 2021 at 12:09 AM fighter wrote:
>
> > We have did the kafka cluster migration from source kafka cluster to
> > target kafka cluster using MirrorMaker 2.5.1 in distributed mode
In one of our Kafka clusters we noticed that fetch sessions are being
evicted and lots of clients print `FETCH_SESSION_ID_NOT_FOUND` log
messages. We tried to increase max.incremental.fetch.session.cache.slots
from 1k to 10k in the brokers but the slots were immediately used up
again and slots
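For reference, the broker setting being discussed; as far as I know it is a static config, so the change below has to be rolled out with broker restarts (value taken from the message above):

# On each broker's server.properties: size of the incremental fetch session
# cache. Every consumer/follower fetch session occupies a slot, so large
# client counts can exhaust it and cause FETCH_SESSION_ID_NOT_FOUND evictions.
echo "max.incremental.fetch.session.cache.slots=10000" >> config/server.properties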
We did the Kafka cluster migration from the source Kafka cluster to the target
Kafka cluster using MirrorMaker 2.5.1 in distributed mode using a Kafka
Connect cluster. We see a noticeable difference in the incoming message rate per
sec on source and target. We also see that the Kafka Connect producer
has
> For example, you could use a Kafka Connect s3 sink. You'd have to write
> some disaster-recovery code to restore lost data from s3 into Kafka.
-> again here the same question, does S3 also store the offset for each topic
as it is modified in Kafka? If not, then when the backup is restored back into
the Kafka cluster, how will it know where to process each topic from?
On Sat, Mar 6, 2021 at 4:44 PM Himanshu Shukla
wrote:
> Hi Pushkar,
>
> you could also
t preserve the offsets of the records. That's
probably okay for your use-case, but you should be aware of it.
Since what you want is a backup, there are many ways to do that which might
be cheaper than another Kafka cluster.
For example, you could use a Kafka Connect s3 sink. You'd have to
Hi Pushkar,
MirrorMaker is what you're looking for.
ref: https://kafka.apache.org/documentation/#georeplication-mirrormaker
Thanks.
Luke
On Fri, Mar 5, 2021 at 1:50 PM Pushkar Deole wrote:
> Hi All,
>
> I was looking for some options to backup a running kafka cluster, for
>
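A minimal MirrorMaker 2 sketch of what that documentation section describes, run as a dedicated process (cluster aliases, addresses and the topic pattern below are placeholders):

cat > mm2.properties <<'EOF'
clusters = primary, backup
primary.bootstrap.servers = primary-broker1:9092
backup.bootstrap.servers = backup-broker1:9092

primary->backup.enabled = true
primary->backup.topics = .*

replication.factor = 3
EOF
# connect-mirror-maker.sh ships with Kafka 2.4+.
bin/connect-mirror-maker.sh mm2.properties

Offsets are not carried over as-is on the target cluster; MirrorMaker 2 writes checkpoint records for offset translation instead.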
Hi All,
I was looking for some options to backup a running kafka cluster, for
disaster recovery requirements. Can someone tell me the available
options to back up and restore a running cluster in case the entire cluster
goes down?
Thanks..
Thank you for the update steps! I've successfully expanded my ZooKeeper. But
what should I do with Kafka clusters that can't connect to ZooKeeper? Now the
Kafka cluster can work normally, but it cannot be operated.
Thanks for your help!!
From: m
___
From: manoj.agraw...@cognizant.com
Sent: 29 August 2020, 0:01
To: users@kafka.apache.org
Subject: Re: Kafka cluster cannot connect to zookeeper
You haven't described how you are adding ZooKeeper.
The right way to add ZooKeeper:
One host at a time
1. update the existing zookeeper node conf/zoo.cfg by adding the new host
2. restart the zk process on the existing host
3. st
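The steps are cut off above; for context, a sketch of what the ensemble section of conf/zoo.cfg ends up looking like on every ZooKeeper node once the new hosts are added (hostnames are placeholders; 2888/3888 are the conventional quorum and election ports):

cat >> conf/zoo.cfg <<'EOF'
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
EOF
# Each broker's zookeeper.connect should then list all three hosts.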
un" wrote:
We have one zookeeper node and two Kafka nodes. After that, we expand the
capacity of zookeeper: change the configuration of zookeeper node, restart it,
and add two zookeeper nodes. After that, my Kafka cluster could not connect to
the zookeeper cluster, and there was no information available
Hi, I want to know if consume from Kafka cluster A -> process -> produce to
Kafka cluster B is supported by kafka-2.2.1.
Both Kafka clusters support transactions.
Thanks first.
Is this possible...
On Thu, Apr 30, 2020, 00:19 Blake Miller wrote:
Hi Vishnu,
Check out MirrorMaker
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=27846330
This can do what you want. Note that the offsets are not copied, nor are
the message timestamps.
HTH
On Wed, Apr 29, 2020 at 6:47 PM vishnu murali wrote:
Hi Guys,
I have two separate Kafka clusters running with two independent ZooKeepers.
I need to send a set of data from one topic in cluster A to cluster B
with the same topic name, with all the data as well.
How can I achieve this?
Does anyone have any idea?
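At the time of this thread the usual answer was the original MirrorMaker tool; a hedged sketch of wiring cluster A to cluster B with it (file names and addresses below are placeholders):

# Consumer config points at source cluster A, producer config at target cluster B.
cat > mm-consumer.properties <<'EOF'
bootstrap.servers=clusterA-broker1:9092
group.id=mirror-maker-group
EOF
cat > mm-producer.properties <<'EOF'
bootstrap.servers=clusterB-broker1:9092
EOF
# --whitelist is a regex of topics to mirror; topic names are kept on the target.
bin/kafka-mirror-maker.sh \
  --consumer.config mm-consumer.properties \
  --producer.config mm-producer.properties \
  --whitelist 'my-topic'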
Hi,
I am having trouble accessing the secured SASL Kafka cluster from my Windows
laptop. I do have Confluent Kafka installed but am not able to access the Kafka
cluster to create topics and change the retention policy on topics residing in
the cluster. Any suggestions on how to achieve this?
Thanks
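One hedged way to do this with the Kafka CLI tools (on Windows, the .bat equivalents live under bin\windows) is a client properties file passed via --command-config; the mechanism, credentials and addresses below are placeholders that depend on how the cluster is actually secured:

cat > client.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="myuser" password="mypassword";
EOF
# Topic creation and retention changes both accept --command-config.
bin/kafka-topics.sh --bootstrap-server broker1:9093 --command-config client.properties \
  --create --topic my-topic --partitions 6 --replication-factor 3
bin/kafka-configs.sh --bootstrap-server broker1:9093 --command-config client.properties \
  --alter --entity-type topics --entity-name my-topic \
  --add-config retention.ms=604800000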
Hi,
Can Kafka Connect be configured to store Statuses, Configs and Offsets in a
separate Kafka cluster, different from the one which acts as a source/sink?
Thanks
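I believe the worker's bootstrap.servers controls where the three internal topics live, while producer.- and consumer.-prefixed worker settings can point the connectors' own clients at a different cluster; I am not certain every Connect version keeps the internal clients pinned to the worker's bootstrap.servers when these prefixes are set, so treat this as a sketch to verify against the Connect documentation for your version (addresses are placeholders):

cat >> connect-distributed.properties <<'EOF'
# Cluster that holds the config/offset/status internal topics
bootstrap.servers=internal-cluster:9092
# Cluster the source/sink connectors actually read from and write to
producer.bootstrap.servers=data-cluster:9092
consumer.bootstrap.servers=data-cluster:9092
EOF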
On 2020-02-07 16:16, Robin Moffatt wrote:
The error you get puts you on the right lines:
Is your advertised.listeners (called advertised.host.name before Kafka
9) correct and resolvable?
This article explains why and how to fix it:
https://rmoff.net/2018/08/02/kafka-listeners-explained/
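In short: clients bootstrap against whatever address you give them and are then redirected to the hostnames the broker advertises, so the advertised name must be resolvable and reachable from the client. A minimal server.properties sketch (hostname is a placeholder):

cat >> config/server.properties <<'EOF'
# What the broker binds to vs. what it tells clients to connect back to.
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://broker1.example.com:9092
EOF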
per Advocate | ro...@confluent.io | @rmoff
On Fri, 7 Feb 2020 at 13:21, Marcus Engene wrote:
Hi,
I tried to use kafka-python 1.4.7 to connect to a bitnami Kafka cluster
using the brokers' private IPs.
This works great from another Compute Instance.
When I try the same code from Django on App Engine (that is set up to be
able to use stuff on Compute, e.g. some locally installed Redis
ught could be related to our problems but it seems like it's projected to be
included in 2.5.0.
Thanks,
Brandon
From: Ismael Juma
Sent: Monday, February 3, 2020 7:31 AM
To: Kafka Users
Subject: Re: High CPU in 2.2.0 kafka cluster
Hi Brandon,
Are you still seeing this behavior with Apache Kafka 2.4.0?
Ismael
> were running.
>
> Brandon
: Re: High CPU in 2.2.0 kafka cluster
Hi Brandon,
Which version of Kafka are the consumers running? My understanding is that if
they're running a version lower than the brokers then they could be using a
different format for the messages which means the brokers have to convert each
record b
hanks,
Jamie
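A couple of hedged ways to check whether that conversion is actually happening (paths below assume a default install):

# Broker-side format pin; if this is older than what the clients use (or the
# clients are much older than the broker), records get converted on the broker.
grep "log.message.format.version" config/server.properties   # no output = default
# Runtime evidence over JMX: kafka.server:type=BrokerTopicMetrics,
# name=FetchMessageConversionsPerSec (and ProduceMessageConversionsPerSec)
# stay near zero when no conversion is taking place.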
-Original Message-
From: Brandon Barron
To: users@kafka.apache.org
Sent: Thu, 30 Jan 2020 16:11
Subject: High CPU in 2.2.0 kafka cluster
Hi,
We had a small cluster (4 brokers) dealing with very low throughput - a couple
hundred messages per minute at the very most. In that cluster we had a little
under 3300 total consumers (all were kafka streams instances). All broker CPUs
were maxed out almost consistently for a few weeks.
We
Hello all,
[sorry if this is a duplicate, not sure why my 1st attempt didn't come
through]
I do have two Kafka clusters in action, test and prod. The two are
formed by 3 nodes each, are independent and run their own zookeeper
setups. My prod cluster is running fine. My test cluster is half-broken
Hello,
We are planning to use MirrorMaker 1 to replicate a Kafka cluster. We are
trying to find a way to verify the consistency of the remote /
destination cluster.
Is there any way that we can compare the topics over a certain period and the
consistency of the cluster? Can you propose a certain way
Hello,
I am trying to understand how to clean up offsets stored outside Kafka. What
I understand is that using ConsumerRebalanceListener, we can utilise
storing/retrieving consumer offsets from external storage.
https://kafka.apache.org/23/javadoc/org/apache/kafka/clients/consumer/ConsumerRebalanc
id) and restart the Kafka broker, then what would be the impact? How will it handle
the requests, because the other brokers do not have these settings yet?
Also, we have 1000+ topics in our cluster; do we need to manually reassign
partitions for all topics?
I have searched everywhere but couldn't find any place where it says whether I can
do it while the Kafka cluster is in operation.
I would really appreciate it if someone can help on this.
--
Regards,
Ashu
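For the reassignment part of the question, the stock kafka-reassign-partitions.sh can be run while the cluster is serving traffic; a hedged sketch (topic names and broker ids below are placeholders, and older tool versions take --zookeeper instead of --bootstrap-server):

cat > topics.json <<'EOF'
{"version": 1, "topics": [{"topic": "my-topic"}]}
EOF
# Generate a candidate plan for the target brokers; save the proposed
# reassignment it prints as plan.json, then apply and verify it.
bin/kafka-reassign-partitions.sh --bootstrap-server broker1:9092 \
  --topics-to-move-json-file topics.json --broker-list "1,2,3,4" --generate
bin/kafka-reassign-partitions.sh --bootstrap-server broker1:9092 \
  --reassignment-json-file plan.json --execute
bin/kafka-reassign-partitions.sh --bootstrap-server broker1:9092 \
  --reassignment-json-file plan.json --verify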