Congrats Kamal!
-
Gaurav
> On 30 Sep 2024, at 18:10, Mickael Maison wrote:
>
> Congratulations Kamal!
>
> On Mon, Sep 30, 2024 at 2:37 PM Luke Chen wrote:
>>
>> Hi all,
>>
>> The PMC of Apache Kafka is pleased to announce a new Kafka committer, Ka
Hi all,
I'd like to kick off discussion for KIP-1061 which proposes a way to export
SCRAM credentials:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-1061%3A+Allow+exporting+SCRAM+credentials
Please have a look. Looking forward to hearing your thoughts!
Regards,
Gaurav
Dear Kafka experts, could anyone having this data share the details, please?
On Wed, Apr 3, 2024 at 3:42 PM Kafka Life wrote:
> Hi Kafka users
>
> Does any one have a document or ppt that showcases the capabilities of
> Kafka along with any cost management capability?
> i have a
Hi Kafka users
Does anyone have a document or presentation that showcases the capabilities of
Kafka along with any cost-management capability?
I have a customer who is still using IBM MQ and RabbitMQ. I want the
client to consider Kafka for messaging and data streaming. I wanted to seek
your expert
g/8.5/userguide/more_about_tasks.html#sec:task_timeouts
[1] https://issues.apache.org/jira/browse/KAFKA-16219
Best,
Gaurav
On 2024/01/25 21:49:00 Justine Olshan wrote:
> It looks like there was some server maintenance that shut down Jenkins.
> Upon coming back up, the builds were expired bu
Apologies, I mistakenly mentioned KAFKA-16157 twice in my previous message. I
intended to reference KAFKA-16195,
with the PR at https://github.com/apache/kafka/pull/15262, as the second JIRA.
Thanks,
Gaurav
> On 26 Jan 2024, at 15:34, ka...@gnarula.com wrote:
>
> Hi Stan,
>
> I wanted to sha
Hi Stan,
I wanted to share some updates about the bugs you shared earlier.
- KAFKA-14616: I've reviewed and tested the PR from Colin and have observed
the fix works as intended.
- KAFKA-16162: I reviewed Proven's PR and found some gaps in the proposed fix.
I've
therefo
Hi Stanislav,
Thanks for bringing these JIRAs/PRs up.
I'll be testing the open PRs for KAFKA-14616 and KAFKA-16162 this week and I
hope to have some feedback
by Friday. I gather the latter JIRA is marked as a WIP by Proven and he's away.
I'll try to build on his work in the me
Hi,
I'd like to request permissions to contribute to Apache Kafka. My account
details are as follows:
# Wiki
Email: gaurav_naru...@apple.com
Username: gnarula
# JIRA
Email: gaurav_naru...@apple.com
Username:
Dear Kafka Experts
How can we check for a particular offset number in Apache Kafka 3.2.3?
Could you please shed some light?
The kafka-console-consumer tool is throwing a class-not-found error.
./kafka-run-class.sh kafka.tools.ConsumerOffsetChecker
--topic your-topic
--group
Many thanks, Samuel. Will go through this.
On Tue, Apr 25, 2023 at 9:03 PM Samuel Delepiere <
samuel.delepi...@celer-tech.com> wrote:
> Hi,
>
> I use a combination of the Prometheus JMX exporter (
> https://github.com/prometheus/jmx_exporter) and the Prometheus Kafka
> exporte
Dear Kafka Experts
Could you please suggest a good metrics exporter for consumer lag and
topic-level metrics, apart from LinkedIn's Kafka Burrow, for the Kafka broker cluster?
Hi Kafka and ZooKeeper experts
Is it possible to upgrade a 3.4.14 ZooKeeper cluster in a rolling fashion
(one node at a time) to ZooKeeper 3.5.7? Would the cluster work with a
mixed set of 3.4.14 and 3.5.7 nodes? Please advise.
Any help from the experts?
On Sat, Apr 8, 2023 at 2:23 PM Kafka Life wrote:
> Hello Kafka Experts
>
> Need a help . Currently the grafana agent
> triggering kafka/3.2.3/config/kafka_metrics.yml is sending over 5 thousand
> metrics. Is there a way to limit these many metrics
Hello Kafka Experts
Need some help. Currently the Grafana agent
triggering kafka/3.2.3/config/kafka_metrics.yml is sending over 5 thousand
metrics. Is there a way to limit the metrics that are sent to only what is
required? Any pointers or such a customized script would be much appreciated.
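One way to cut the metric volume is to allowlist only the MBeans you need in the Prometheus JMX exporter configuration instead of exporting everything. A sketch (option names as in jmx_exporter 0.x releases; newer versions spell them includeObjectNames/excludeObjectNames, so verify against your exporter's version):

```yaml
# Collect only a few broker MBeans rather than every metric the broker exposes.
lowercaseOutputName: true
whitelistObjectNames:
  - "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,*"
  - "kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions"
rules:
  - pattern: ".*"
```

Anything not matched by the allowlist is never queried over JMX, which also reduces collection overhead on the broker.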
Hi experts, any pointers or guidance on this?
On Wed, Apr 5, 2023 at 8:35 PM Kafka Life wrote:
> Respected Kafka experts/managers
>
> Do anyone have Subject of work -Activities related to Kafka cluster
> management for Apache or Confluent kafka . Something to assess and pro
Respected Kafka experts/managers
Does anyone have a statement of work covering activities related to Kafka
cluster management, for Apache or Confluent Kafka? Something to assess and
propose to an enterprise for Kafka cluster management. Request you to kindly
share any such documentation, please.
This is really great information, Paul. Thank you.
On Tue, Mar 28, 2023 at 4:01 AM Brebner, Paul
wrote:
> I have a recent 3 part blog series on Kraft (expanded version of ApacheCon
> 2022 talk):
>
>
>
>
> https://www.instaclustr.com/blog/apache-kafka-kraft-abandons
Many thanks, Josep, for your response.
On Mon, Mar 27, 2023 at 4:50 PM Josep Prat
wrote:
> Hello there,
>
> You can find the general policy here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/Time+Based+Release+Plan#TimeBasedReleasePlan-WhatIsOurEOLPolicy
>
>
>
>
> Hello Kafka experts
>
> Where can I see the end of support for Apache Kafka versions? I would like
> to know when the 0.11 version of Kafka was deprecated.
>
Hello Kafka experts
Is there a way to have a Kafka cluster functional, serving producers and
consumers, without a ZooKeeper cluster managing the instance?
Which version of Kafka supports this, and how can we achieve it, please?
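For reference, running Kafka without ZooKeeper is what KRaft mode provides (shipped as early access in 2.8 and production-ready from 3.3). A sketch of a combined-mode server.properties, with illustrative values:

```properties
# KRaft mode: the broker and the Raft-based controller replace ZooKeeper.
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
controller.listener.names=CONTROLLER
```

Before first start, the storage directory also has to be formatted with a cluster id (the kafka-storage.sh tool), which a ZooKeeper-based deployment does not need.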
Hello experts, any info or pointers on my query, please?
On Mon, Aug 15, 2022 at 11:36 PM Kafka Life wrote:
> Dear Kafka Experts
> we need to monitor the consumer lag in kafka clusters 2.5.1 and 2.8.0
> versions of kafka in Grafana.
>
> 1/ What is the correct path for JMX metr
Dear Kafka Experts
We need to monitor consumer lag in Kafka clusters running versions 2.5.1
and 2.8.0, in Grafana.
1/ What is the correct JMX metrics path to evaluate consumer lag in a
Kafka cluster?
2/ I had thought it was FetcherLag, but it looks like it is not, as per the
link below
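For reference, consumer lag is conventionally defined per partition as the log-end offset minus the group's last committed offset; the broker FetcherLag metric measures replica fetching, not consumer groups. A minimal sketch of the arithmetic (hypothetical helper and data, not tied to any particular JMX path):

```python
def consumer_lag(log_end_offsets, committed_offsets):
    """Lag per partition = log-end offset minus the group's committed offset;
    a partition with no committed offset counts its whole log as lag."""
    return {
        tp: log_end_offsets[tp] - committed_offsets.get(tp, 0)
        for tp in log_end_offsets
    }

leo = {("orders", 0): 1500, ("orders", 1): 1200}
committed = {("orders", 0): 1480, ("orders", 1): 1200}
print(consumer_lag(leo, committed))  # {('orders', 0): 20, ('orders', 1): 0}
```

This is exactly the number that tools such as kafka-consumer-groups.sh report in their LAG column.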
Dear Luke, thank you for your kind and prompt response.
On Mon, Apr 4, 2022 at 1:23 PM Luke Chen wrote:
> Hi,
>
> The impact for the CVE-2022-22965? Since this is a RCE vulnerability, which
> means the whole system (including Kafka and ZK) is under the attackers'
>
Hi Kafka Experts
Regarding the recent vulnerability threat in the Spring Framework:
CVE-2022-22965 is a Spring (Java) RCE. Could one of you suggest how Apache
Kafka and ZooKeeper are impacted, and what the ideal fix for this should be?
Vulnerability in the
Thank you Malcolm. Will go through this.
On Sat, Feb 26, 2022 at 2:22 AM Malcolm McFarland
wrote:
> Maybe this could help?
> https://github.com/dimas/kafka-reassign-tool
>
> Cheers,
> Malcolm McFarland
> Cavulus
>
>
> On Fri, Feb 25, 2022 at 9:00 AM Kafka Life
Dear Experts
Do you have any solution for this, please?
On Tue, Feb 22, 2022 at 8:31 PM Kafka Life wrote:
> Dear Kafka Experts
>
> Does anyone have a dynamically generated Json file based on the Under
> replicated partition in the kafka cluster.
> Everytime when the URP is increa
Dear Kafka Experts
Does anyone have a dynamically generated JSON file based on the
under-replicated partitions in the Kafka cluster?
Every time the URP count increases to over 500, it is a tedious job to
manually create a JSON file.
I request you to share any such dynamically generating script.
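As a sketch of how such a file could be generated: the kafka-reassign-partitions tool takes a JSON document with a version field and a partitions list, which is easy to build programmatically. A minimal example (the helper function and the target-replica map are hypothetical; deriving the under-replicated list and choosing target brokers is left to the caller):

```python
import json

def build_reassignment(under_replicated, target_replicas):
    """Builds the input document for kafka-reassign-partitions.sh.

    under_replicated: list of (topic, partition) tuples;
    target_replicas:  maps each (topic, partition) to its desired broker ids.
    """
    return {
        "version": 1,
        "partitions": [
            {"topic": t, "partition": p, "replicas": target_replicas[(t, p)]}
            for (t, p) in under_replicated
        ],
    }

plan = build_reassignment([("events", 3)], {("events", 3): [1, 2, 4]})
print(json.dumps(plan, indent=2))
```

Writing the output to a file gives you the --reassignment-json-file input without hand-editing hundreds of entries.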
Dear Kafka Experts, need your advice please.
I am running MirrorMaker on Kafka 2.8 to replicate a topic from a Kafka
0.11 instance.
The size of each partition for the topic on 0.11 is always 5 to 6 GB, but
the replicated topic on the 2.8 instance is 40 GB for the same partition.
The topic
Dear Kafka experts
I have a 10-broker Kafka cluster with all topics having a replication
factor of 3 and 50 partitions;
min.insync.replicas is 2.
One broker went down due to a hardware failure, but many applications
complained they were not able to produce/consume messages.
I request you to please
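With replication factor 3 and min.insync.replicas=2, a single broker failure should normally leave every partition writable under acks=all, so the complaints likely have another cause. A minimal sketch of that availability check (hypothetical helper; simplification: every live replica is assumed to be in sync):

```python
def partition_writable(replicas, down_brokers, min_isr):
    """With acks=all, a partition keeps accepting writes only while its live
    in-sync replica count stays >= min.insync.replicas."""
    live = [b for b in replicas if b not in down_brokers]
    return len(live) >= min_isr

# RF=3, min.insync.replicas=2: one failed broker is tolerated.
print(partition_writable([1, 2, 3], {2}, 2))     # True
print(partition_writable([1, 2, 3], {2, 3}, 2))  # False: ISR below minimum
```

If producers failed with one broker down, it is worth checking whether some partitions' ISR had already shrunk before the failure, or whether clients were pinned to the dead broker's address.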
Thank you Men and Ran
On Sat, Nov 6, 2021 at 7:23 PM Men Lim wrote:
> I'm currently using Kafka-gitops.
>
> On Sat, Nov 6, 2021 at 3:35 AM Kafka Life wrote:
>
> > Dear Kafka experts
> >
> > does anyone have ready /automated script to create /dele
Dear Kafka experts
Does anyone have a ready/automated script to create/delete/alter topics in
different environments,
taking configuration parameters as input?
If yes, I request you to kindly share it with me, please.
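A small sketch of the kind of wrapper being asked for: building kafka-topics.sh invocations from configuration parameters. The wrapper itself is hypothetical; the flag names follow the modern --bootstrap-server style of the CLI:

```python
def topic_command(action, topic, bootstrap,
                  partitions=None, replication=None, configs=None):
    """Builds a kafka-topics.sh command line from configuration parameters.
    action is one of "create", "delete", "alter", "describe"."""
    cmd = ["kafka-topics.sh", "--bootstrap-server", bootstrap,
           f"--{action}", "--topic", topic]
    if partitions is not None:
        cmd += ["--partitions", str(partitions)]
    if replication is not None:
        cmd += ["--replication-factor", str(replication)]
    for key, value in (configs or {}).items():
        cmd += ["--config", f"{key}={value}"]
    return cmd

print(" ".join(topic_command("create", "orders", "localhost:9092",
                             partitions=50, replication=3,
                             configs={"cleanup.policy": "delete"})))
```

Per-environment differences then reduce to a different bootstrap address and parameter file fed into the same function.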
Hello Luke
I have built a new Kafka environment with Kafka 2.8.0.
A new consumer set up against this environment is throwing the error
below. The old consumers for the same applications on the same 2.8.0
environment are working fine.
Could you please advise?
2021-11-02 12:25:24
Dear Kafka Experts
We have set up a consumer group.id = YYY.
But when trying to connect to the Kafka instance, I get the error message
below. I am sure this consumer group id does not exist in Kafka. We use the
PLAINTEXT protocol to connect to Kafka 2.8.0. Please suggest how to resolve
this issue.
Dear Kafka experts
When a broker is started using the start script, could any of you please
let me know the sequence of steps that happens in the background as the node
comes up?
For example, when the script is initiated to start:
1/ Does it check indexes?
2/ Does it check the ISR?
3/ Is the URP being made
Thank you very much, Mr. Israel Ekpo. Really appreciate it.
We are using the 0.10 version of Kafka and are in the process of upgrading
to 2.6.1. Planning is in progress, and yes, these connections to ZooKeeper
are for Kafka functionality.
Frequently there are incidents where the ZooKeepers get bombarded
Dear Kafka and ZooKeeper experts,
1/ What is ZooKeeper throttling? Is it done at ZooKeeper? How is it
configured?
2/ Is it helpful?
Dear Kafka Experts
Could one of you please help explain what the broker log line below means,
and in what scenarios it would occur when no change has been made?
INFO [GroupCoordinator 9610]: Member
webhooks-retry-app-840d3107-833f-4908-90bc-ea8c394c07c3-StreamThread-2-consumer-f87c3b85
Hello Kafka experts
The consumer team is reporting an issue while consuming data from the
topic, described as a "Singularity Header" issue.
Can someone please explain how to resolve this issue?
The error looks like:
Starting offset: 1226716
offset: 1226716 position: 0 CreateTime: 1583780622665 isvalid: true
Dear Kafka Experts
1- Can anyone share an upgrade plan with steps/tracker or any useful
documentation, please?
2- We are upgrading Kafka from the old 0.11 version to 2.5. Any
suggestions/directions are highly appreciated.
Thanks
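For what it's worth, the documented rolling-upgrade procedure keeps two broker settings pinned to the old version until all brokers run the new binaries, then bumps them in later rolling restarts. A sketch of the first-phase server.properties fragment (values assume the 0.11 starting point described above):

```properties
# Phase 1: roll out the 2.5 binaries while still speaking the old protocol.
inter.broker.protocol.version=0.11.0
log.message.format.version=0.11.0

# Phase 2 (after every broker runs 2.5): set
# inter.broker.protocol.version=2.5 and do another rolling restart.
# Bump log.message.format.version last, once all clients are upgraded.
```

Skipping the pinned-version phases is what makes an old-to-new jump unsafe, not the version distance itself; the official upgrade notes for the target release spell out the exact order.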
I am using "confluent-kafka==1.0.1". It works fine when I am using py3 and
ubuntu18, but fails with py3 and ubuntu14. I get the following error.
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/metrics_agent/kafka_writer.py",
line 147,
Hi
My JIRA id is sshil. I want to contribute to Kafka. Can you please add me
to the contributors list?
Hi all,
I'm implementing a custom client.
I was wondering whether anyone could explain the OFFSET_OUT_OF_RANGE
error in this scenario.
My test suite tears down and spins up a fresh zookeeper and kafka every
time inside a pristine docker container.
The test suite runs as:
1. Producer
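For context on OFFSET_OUT_OF_RANGE in a setup like this: a fetch offset is only valid between the log start and log end offsets, and a freshly recreated cluster starts both at zero, so any offset remembered from a previous run is immediately out of range. A minimal model of the check (hypothetical helper, not the actual broker code):

```python
def fetch_offset_valid(fetch_offset, log_start_offset, log_end_offset):
    """A fetch offset is valid only within [log_start_offset, log_end_offset];
    anything outside is answered with OFFSET_OUT_OF_RANGE."""
    return log_start_offset <= fetch_offset <= log_end_offset

print(fetch_offset_valid(42, 0, 100))  # True
print(fetch_offset_valid(42, 0, 0))    # False: fresh log, stale offset
```

A custom client typically handles the error by resetting to the earliest or latest offset, which is what auto.offset.reset does in the official consumer.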
On Wed, Dec 25, 2019 at 5:51 PM Kafka Shil wrote:
> Hi,
> I am using docker compose file to start schema-registry. I need to change
> default port to 8084 instead if 8081. I made the following changes in
> docker compose file.
>
> schema-registry:
> image: confluenti
Hi,
I am using a Docker Compose file to start schema-registry. I need to change
the default port to 8084 instead of 8081. I made the following changes in
the Docker Compose file.
schema-registry:
image: confluentinc/cp-schema-registry:5.3.1
hostname: schema-registry
container_name: schema-regist
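A sketch of the two places that typically need to change for this; the environment variable name follows Confluent's cp-schema-registry image conventions, so treat it as an assumption to verify against the image documentation:

```yaml
schema-registry:
  image: confluentinc/cp-schema-registry:5.3.1
  ports:
    - "8084:8084"          # host:container
  environment:
    # Tells the registry process itself to bind on 8084 instead of 8081;
    # remapping only the host port would leave the container listening on 8081.
    SCHEMA_REGISTRY_LISTENERS: "http://0.0.0.0:8084"
```

Clients (and SCHEMA_REGISTRY_URL settings elsewhere in the compose file) then need to point at port 8084 as well.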
Mr Kafka created KAFKA-7710:
---
Summary: Poor Zookeeper ACL management with Kerberos
Key: KAFKA-7710
URL: https://issues.apache.org/jira/browse/KAFKA-7710
Project: Kafka
Issue Type: Bug
Mr Kafka created KAFKA-7510:
---
Summary: KStreams RecordCollectorImpl leaks data to logs on error
Key: KAFKA-7510
URL: https://issues.apache.org/jira/browse/KAFKA-7510
Project: Kafka
Issue Type: Bug
Adam Kafka created KAFKA-7349:
-
Summary: Long Disk Writes cause Zookeeper Disconnects
Key: KAFKA-7349
URL: https://issues.apache.org/jira/browse/KAFKA-7349
Project: Kafka
Issue Type: Bug
fset.positionDiff(fetchOffset), fetchStatus.fetchInfo.fetchSize)
}
}
so we can ensure that our fetchOffset's segmentBaseOffset is not the same as
endOffset's segmentBaseOffset. Then we checked our topic-partition's segments
and found the data in the segment was all cleaned by the ka
Oh, please ignore my last reply.
I find that if leaderReplica.highWatermark.messageOffset >= requiredOffset, this
ensures that the LEO of every replica in curInSyncReplicas is >= the requiredOffset.
> On 23 Sep 2016, at 15:39, Kafka wrote:
>
> OK, the previous example is not enough to expose the probl
ot the code as follows,
if (minIsr <= curInSyncReplicas.size && minIsr <= numAcks) {
  (true, ErrorMapping.NoError)
} else {
  (true, ErrorMapping.NotEnoughReplicasAfterAppendCode)
}
It seems that only one condition in the Kafka broker's code is not en
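To restate the quoted check in executable form, here is a small Python model of the condition (the names mirror the Scala snippet; the returned error strings are illustrative, not the broker's actual codes):

```python
def check_enough_replicas_after_append(min_isr, isr_size, num_acks):
    """Model of the quoted broker check: the append has already happened, so
    the broker always completes the request; it only decides which error code
    to attach when fewer than min.insync.replicas replicas acknowledged."""
    if min_isr <= isr_size and min_isr <= num_acks:
        return "NoError"
    return "NotEnoughReplicasAfterAppend"

print(check_enough_replicas_after_append(2, 3, 3))  # NoError
print(check_enough_replicas_after_append(2, 3, 1))  # NotEnoughReplicasAfterAppend
```

The leading `(true, ...)` in the Scala pair expresses exactly that: the request is complete either way, and only the attached error differs.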
@wangguozhang, could you give me some advice?
> On 22 Sep 2016, at 18:56, Kafka wrote:
>
> Hi all,
> in terms of topic, we create a topic with 6 partition,and each with 3
> replicas.
>in terms of producer,when we send message with ack -1 using sync
> interface
Hi all,
In terms of the topic: we created a topic with 6 partitions, each with 3
replicas.
In terms of the producer: we send messages with acks=-1 using the sync
interface.
In terms of the brokers: we set min.insync.replicas to 2.
After reviewing the Kafka broker's code, we know that
Thanks for your answer. I know the necessity of keys for compacted topics, and as
you know, __consumer_offsets is an internal compacted topic in Kafka whose key
is a triple of (group, topic, partition). These errors occur when the
consumer client wants to commit group offsets.
So why does this happen?
> On 2016-07
Hi,
The server log shows the error below on broker 0.9.0.
ERROR [Replica Manager on Broker 0]: Error processing append operation
on partition [__consumer_offsets,5] (kafka.server.ReplicaManager)
kafka.message.InvalidMessageException: Compacted topic cannot accept message
without
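The exception itself points at the rule being violated: log compaction keeps only the newest value per key, so a compacted topic must reject any record whose key is null. A tiny model of that broker-side rule (hypothetical helper, not the actual broker code):

```python
def validate_for_compacted_topic(records):
    """records: iterable of (key, value) pairs. Compaction retains the latest
    value per key, so a null (None) key makes the record uncompactable."""
    for key, _value in records:
        if key is None:
            raise ValueError("Compacted topic cannot accept message without key")

validate_for_compacted_topic([(b"group,topic,partition", b"offset-metadata")])  # accepted
```

Seeing this on __consumer_offsets is unusual, since the coordinator always writes keyed records; it suggests the record was corrupted or written by something other than the group coordinator.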
Hi, the leader of __consumer_offsets partitions 7 and 27 is -1 and their ISR
is null. Can anyone tell me how to recover them? Thank you.
Topic: __consumer_offsets  Partition: 0  Leader: 3  Replicas: 3,4,5
Isr: 4,5,3
Topic: __consumer_offsets  Partition: 1  Leader: 4
Can someone please explain why the latency is so big for me? Thanks.
> On 25 Jun 2016, at 23:16, Jay Kreps wrote:
>
> Can you sanity check this with the end-to-end latency test that ships with
> Kafka in the tools package?
>
> https://apache.googlesource.com/kafka/+/1769642bb779921267bd57
Hi all,
My Kafka cluster is composed of three brokers, each with an 8-core CPU,
8 GB of memory, and a 1 Gb network card.
With the Java async client, I sent 100 messages of 1024 bytes
each; the send gap between each send is 20 us. The consumer's config is
like this
so it will not be the bottleneck.
> On 18 Jun 2016, at 22:29, Kafka wrote:
>
> I send every message with timestamp, and when I receive a message,I do a
> subtraction between current timestamp and message’s timestamp. then I get
> the consumer’s delay.
>
>> On 18 Jun 2016, at 11:28, Kafka wrote:
>>
>>
>
I send every message with a timestamp, and when I receive a message I subtract
the message's timestamp from the current timestamp. That gives me the
consumer's delay.
> On 18 Jun 2016, at 11:28, Kafka wrote:
>
>
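The measurement described above amounts to a one-line subtraction; note it is only meaningful if the producer and consumer clocks are synchronized (e.g. via NTP), otherwise clock skew is silently included in the "latency":

```python
def e2e_latency_ms(producer_ts_ms, consumer_recv_ts_ms):
    """Consumer-observed delay: receive time minus the timestamp the producer
    embedded in the message. Assumes both hosts share a synchronized clock."""
    return consumer_recv_ts_ms - producer_ts_ms

print(e2e_latency_ms(1_000_000, 1_000_042))  # 42
```

This clock-skew caveat is one reason the end-to-end latency tool that ships with Kafka runs producer and consumer in the same process.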
Hello, I have done a series of tests on Kafka 0.9.0, and one of the results
confused me.
Test environment:
Kafka cluster: 3 brokers, 8-core CPU / 8 GB mem / 1 Gb network card
Client: 4-core CPU / 4 GB mem
Topic: 6 partitions, 2 replicas
Total messages: 1
Single message size: 1024 bytes
Using the top utility we can see 280% CPU utilization; then I used jstack and
found there are 4 threads that use the CPU most, shown below:
"kafka-network-thread-9092-0" prio=10 tid=0x7f46c8709000 nid=0x35dd
runnable [0x7f46b73f2000]
java.lang.Thread.State: RUNNABLE
"kafka-network-thre