Just want to confirm that you are talking about the below scenario:
mirroring data between A <-> B and B <-> C, correct?
From: Doug Whitfield
Sent: Monday, January 31, 2022 2:18 PM
To: users@kafka.apache.org
Subject: Re: Is it possible to run MirrorMaker in a
Are you getting the error in the kafka log or server.log?
From: Nicolas Carlot
Sent: Tuesday, January 25, 2022 6:20 AM
To: users@kafka.apache.org
Subject: Upgrade from 2.0 to 2.8.1 failed
[External]
Hello everyone,
I just had a major failure while upgrading a kafka clu
Hi All,
Wanted to understand a bit more about the schema registry:
1. With Apache Kafka, can we use a schema registry?
2. With Amazon MSK, can we use a schema registry?
Thanks
Manoj A
This e-mail and any files transmitted with it are for the sole use of the
intended recipient(s) and may contain confiden
In order to produce the data, the topic should have min ISR = 2, but it looks like
the ISR is out of sync. The kafka cluster's health is not good.
Topic: FooBar Partition: 0 Leader: 3 Replicas: 2,3,1 Isr: 3
[root@LoremIpsum kafka]# /usr/lib/kafka/kafka/bin/kafka-topics.sh
--bootstrap-server localh
Looks like this is a bug.
You can clean the data log on node 3 and start the kafka process on node 3.
This should resolve the issue.
On 9/15/20, 8:09 PM, "Dima Brodsky" wrote:
We are using version 2.3.1.
Two more pieces of information wrt Luke's answer. Assume
It should delete the old data log based on the topic's retention.
What kafka version are you using?
On 9/15/20, 7:48 PM, "Dima Brodsky" wrote:
Hi,
I have a question: when you start kafka on a node, if there is a random
replica log, should it delete it on startup? Her
Hi Ryanne/Josh,
I'm working on active-active mirror maker, specifically translating consumer
offsets from source cluster A to dest cluster B. Any pointers would be helpful.
Cluster A
Cluster Name--A
Topic name: testA
Consumer group name: mm-testA-consumer
Cluster -B
Cluster Name--B
Topic name: sou
Hi Ananya
Were you able to resolve this issue? I'm also facing the same issue.
What parameters should be passed here if I'm doing failover from cluster A ---> B?
Map newOffsets =
RemoteClusterUtils.translateOffsets(properties, "A",
"TestTopic-123", Duration.ofMillis(5500));
Properties= Bo
We also upgraded kafka 2.2.1 to kafka 2.5.0 and kept the same zookeeper; no
issues reported.
Later we also upgraded zookeeper to 3.5.8. All good.
On 9/3/20, 8:42 PM, "Andrey Klochkov" wrote:
Hello all,
FWIW we upgraded to Kafka 2.4.1 and kept ZK at 3.4.6, no issues not
The issue has been fixed by copying an empty snapshot file to the data dir.
Thanks .
On 9/2/20, 10:51 PM, "Enrico Olivelli" wrote:
The official way to fix it is here
https://issues.apache.org/jira/browse/ZOOKEEPER-
Hi all,
I'm planning to upgrade Kafka 2.2.1 to kafka 2.5.0, and I'm getting the below
error while upgrading the zookeeper version. Any idea?
java.io.IOException: No snapshot found, but there are log entries. Something is
broken!
at
org.apache.zookeeper.server.persistence.FileTx
Try the below:
1. Update conf/zoo.cfg with the configuration of the existing and new server
nodes
2. Add myid under dataDir
3. Restart the existing zookeeper nodes
4. Start the new zookeeper nodes
5. Update conf/zoo.cfg with the configuration of the existing
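As a sketch of steps 1 and 2 (hostnames, ports, and ids below are illustrative, not from the thread), the zoo.cfg entries and myid file might look like:

```
# conf/zoo.cfg -- one server.N entry per ensemble member
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888

# on each node, dataDir/myid holds just that node's id, e.g. on server 3:
# echo 3 > /var/lib/zookeeper/myid
```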
Hi ,
We are using kafka 2.2.1 and we have a requirement to provide read-only access
for user a to all topics existing in the kafka cluster. Is there any way we can
add a KAFKA ACL rule giving the user read access at the cluster level, or on
all topics via a wildcard?
Thanks
Manoj A
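For what it's worth, a sketch of such a rule with kafka-acls.sh, assuming a zookeeper-backed authorizer on localhost (the principal name and host are placeholders, not from the thread):

```shell
./bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:a \
  --operation Read --operation Describe \
  --topic '*'
```

The literal '*' resource covers all topics; a consumer would additionally need Read on its consumer group (--group).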
You haven't described how you are adding zookeeper.
The right way to add zookeeper is one host at a time:
1. update the existing zookeeper nodes' conf/zoo.cfg by adding the new host
2. restart the zk process on the existing hosts
3. start the zk process on the new node
On 8/28/20, 8:20 AM, "Li,Dingqun" wrote:
Hi All ,
Can we use a VIP IP rather than the Kafka broker host names in the bootstrap
string on the producer side?
Any concerns or a recommended way?
What error are you getting? Can you share the exact error?
What version of the kafka lib is on the client side?
On 8/25/20, 7:50 AM, "Prateek Rajput"
wrote:
Hi, please if anyone can help, will be a huge favor.
*Regards,*
*Prateek Rajput*
On Tue, Aug 25, 2020 at
Great.
Share your findings with this group once you have done the Confluent Kafka 4.1x
to 5.3x upgrade successfully.
I see many people having the same question here.
On 8/19/20, 10:38 AM, "Rijo Roy" wrote:
Thanks Manoj!
Yeah, the plan is to start with non-prod and validate fir
I advise doing it in non-prod for validation.
You can back up the data log folder if you want, but I haven't seen any issue;
still, better to back up the data if it is small.
Don't change the below value to the latest until you have done full validation;
once you have changed it to latest, you can't roll back.
inter.broker.pro
You can follow the below steps:
1. set inter.broker.protocol.version=2.1.x and rolling-restart kafka
2. rolling-upgrade the Kafka cluster to 2.5
3. rolling-upgrade the ZK cluster, then validate kafka
4. set inter.broker.protocol.version to the new version and rolling-restart Kafka
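A sketch of the server.properties change in steps 1 and 4 (version strings are illustrative):

```
# step 1, before upgrading the binaries:
inter.broker.protocol.version=2.1
# step 4, only after the upgraded cluster is fully validated:
# inter.broker.protocol.version=2.5
```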
On 8/18/20, 12:54 P
Can you please share what action you are performing and how ?
On 8/11/20, 10:19 PM, "Indu V" wrote:
Hi Team,
I am facing an issue in a clustered Kafka environment,
org.apache.kafka.common.KafkaException: Cannot perform send because at
least one previous transact
I'm also working on mirror maker 2.0. Do you have any documentation for the
mirror maker 2.0 config setup, or can you share your mirror maker 2.0 config?
Have you encountered any issue
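Not official documentation, but a minimal connect-mirror-maker.properties sketch for active-active replication (cluster aliases and bootstrap hosts are placeholders):

```
clusters = A, B
A.bootstrap.servers = hostA:9092
B.bootstrap.servers = hostB:9092
A->B.enabled = true
A->B.topics = .*
B->A.enabled = true
B->A.topics = .*
```

It can then be started with ./bin/connect-mirror-maker.sh connect-mirror-maker.properties.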
On 8/9/20, 5:02 AM, "Liam Clarke-Hutchinson" wrote:
Hi Dor,
Yep, we're using Mirrormaker
Or you can move the data dirs manually. I'm assuming you have replicas > 1:
Stop the kafka process on broker 1.
Move 1 or 2 log dirs from disk 1 to disk 2.
Start the kafka process.
Wait for the ISR to sync.
Then you can repeat these steps again.
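The manual move above might look like this sketch (paths, service name, and the partition dir name are assumptions about your layout):

```shell
# stop broker 1, move a partition directory between the disks in log.dirs,
# then restart and let the ISR catch up
systemctl stop kafka
mv /data/disk1/kafka-logs/my-topic-0 /data/disk2/kafka-logs/
systemctl start kafka
```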
On 8/7/20, 6:45 AM, "William Reynolds"
wrote:
Are you getting any error at the kafka broker or while producing/consuming
messages? Can you please provide more detail on how you upgraded, or what error
you are getting? It all depends on how you upgraded.
On 8/6/20, 4:13 PM, "Satish Kumar" wrote:
Hello,
I upgraded kafka f
What do you mean by older disks?
On 8/6/20, 12:05 PM, "Péter Nagykátai" wrote:
Yeah, but it doesn't do that. My "older" disks have ~70 partitions, the
newer ones ~5 partitions. That's why I'm asking what went wrong.
On Thu, Aug 6, 2020 at 8:35 PM wrote:
> Kafka
Kafka evenly distributes partitions across disks, so in your case every disk
should have 2 or 3 topic partitions.
It is the producer's job to evenly produce data to the topic partitions by
partition key. How is the partition key handled: is it auto-generated, or is the
producer sending a key along with the message?
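One way to see keyed production in action is the console producer; a hedged example (topic name and broker host are placeholders) that sends an explicit key with each record:

```shell
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test \
  --property "parse.key=true" --property "key.separator=:"
# then type e.g.  user1:hello  -- records sharing a key hash to the same partition
```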
On
Hi,
You also need to make changes on the producer and consumer side as well:
server.properties:
message.max.bytes=15728640
replica.fetch.max.bytes=15728640
max.request.size=15728640
fetch.message.max.bytes=15728640
and producer.properties:
max.request.size=15728640
and consumer.properties:
max.partition.fetch.
What version of kafka are you using?
On 7/25/20, 2:22 AM, "Dumitru-Nicolae Marasoui"
wrote:
Hello kafka community,
Doing the following cli command to copy messages from one cluster to
another, without any transformation on the binary keys/values of the
messag
You should use active-active mirror maker
On 7/25/20, 9:03 AM, "Rajib Deb" wrote:
Hi,
I came across the below question and wanted to seek an answer on the same.
If a producer needs to write to a certain broker only, is this possible.
For example, if the producer i
What error are you getting? Just make sure the user has the appropriate
permissions. Please share the error you are getting.
On 7/8/20, 3:56 AM, "Ann Pricks" wrote:
Hi Team,
Any update on this.
Regards,
Pricks
From: Ann Pricks
Date: Friday, 3 July 2
Or, if you don't want to automate it, use an excel sheet to generate the below
command for all topics.
Put all 350 statements in a script and run it.
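The generate-and-run step can also be done with a small shell loop instead of excel; this sketch assumes a topics.txt listing one topic per line (the file name and topic names are hypothetical):

```shell
# sample topic list; in practice this could come from kafka-topics.sh --list
printf 'orders\npayments\naudit\n' > topics.txt

# emit one --alter statement per topic into a runnable script
while read -r t; do
  echo "./bin/kafka-topics.sh --alter --zookeeper localhost:2181 --topic $t --partitions 6"
done < topics.txt > alter-partitions.sh
```

Review alter-partitions.sh before running it; remember partition counts can only be increased, never decreased.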
On 6/21/20, 9:28 PM, "Peter Bukowinski" wrote:
[External]
You can’t use a wildcard and must address each topic individually. You can
You can use the below command to alter the partitions:
./bin/kafka-topics.sh --alter --zookeeper localhost:2181 --topic my-topic
--partitions 6
Thanks
Manoj
On 6/21/20, 7:38 PM, "sunil chaudhari" wrote:
Hi,
I already have 350 topics created. Please guide me how can I do
KAFKA ACL support is there for all versions.
On 5/13/20, 8:02 AM, "Jadhawar, Ganesh" wrote:
Hi Team,
Please let us know from which kafka release the ACL authorizer is supported.
Thanks,
Ganesh
Please share the consumer.sh.
Are you using Apache kafka, and what version?
From: "wangl...@geekplus.com.cn"
Date: Sunday, May 10, 2020 at 9:38 PM
To: users
Cc: "Agrawal, Manoj (Cognizant)"
Subject: Re: Re: kafka-console-consumer.sh: Port already in use Exception after
enable JMX
You can change the jmx port to any available port, e.g. 9992.
On 5/10/20, 7:49 PM, "wangl...@geekplus.com.cn"
wrote:
Add JMX_PORT=9988 to kafka-run-class.sh to enable JMX
After execute bin/kafka-console-consumer.sh there‘s exception:
Error: Exception thrown by the agent
You can use the below commands.
To generate the json file:
./bin/kafka-reassign-partitions.sh --zookeeper zookeeper_host:2181
--generate --topics-to-move-json-file test.json --broker-list 10,20,30 <--
list of broker ids
To execute the reassign partition:
./bin/kafka-reassign-partitions.sh --zooke
How many brokers do you have in this cluster, and what is the content of
increase-replication-factor.json?
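For reference, a reassignment file that raises the replication factor lists the full desired replica set per partition; a sketch (topic name and broker ids are placeholders):

```
{
  "version": 1,
  "partitions": [
    { "topic": "my-topic", "partition": 0, "replicas": [1, 2] }
  ]
}
```

It is then applied with kafka-reassign-partitions.sh --execute --reassignment-json-file <file>.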
On 5/8/20, 12:16 PM, "Rajib Deb" wrote:
[External]
Hi I have by mistake created a topic with replication factor of 1. I am
trying to increase the replication, but I get the below erro
Glad it worked for you.
The kafka admin commands run against zookeeper, and sometimes you don't have
access to the zookeeper host/port. I don't know how you are managing the
kafka/ZK cluster in your scenario, but for security purposes, zookeeper access
is usually limited to the kafka cluster.
From: SenthilKumar K
Date:
I think you can filter the list of partitions returned by
KafkaConsumer.partitionsFor() by checking PartitionInfo.leader(); include
those partitions in the list.
On 5/5/20, 11:44 AM, "SenthilKumar K" wrote:
Hi Team, We are using KafkaConsumer.partit
Is there documentation or example for mirror maker 2.0 ?
On 4/29/20, 9:04 PM, "Liam Clarke-Hutchinson"
wrote:
Hi Blake,
Replicator is, AFAIK, not FOSS - however, Mirror Maker 2.0, which is built
along very similar lines (i.e., on top of Kafka Connect) is, as is
Use mirror maker .
On 4/29/20, 11:52 AM, "vishnu murali" wrote:
Hi Guys,
I am having two separate Kafka cluster running in two independent zookeeper
I need to send a set of data from one topic from cluster A to cluster B
with the same topic name with all dat
A follower takes some time to become the leader when the leader is down. You
can build retry logic around this to handle the situation.
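On the producer side, much of that retry logic is configurable; a sketch of settings that ride out a leader election (the values are illustrative, not a recommendation from the thread):

```
# producer.properties
retries=5
retry.backoff.ms=1000
request.timeout.ms=30000
```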
On 4/28/20, 1:08 AM, "M.Gopala Krishnan" wrote:
Hi,
I have a 3 node kafka cluster (replication-factor : 3), suddenly one of the