> compatibility matrix.
>
> You can find more information in that PR.
>
> Best,
> Chia-Ping
>
> Kiran Satpute wrote on Tue, Dec 10, 2024 at 6:26 PM:
>
> > Hi Team,
> >
> > Can I get the Apache Kafka and ZooKeeper compatibility matrix?
> > I have to upgrade Apache Kafka from 3.3 to the latest stable version.
> > Please suggest the latest stable versions of both Apache Kafka and ZooKeeper.
> > --
> > Thanks & Regards
> > Kiran Satpute
> > (9921424521)
"...or without TLS encryption. Were you getting an error?"

> > > "Fatal error during KafkaServer startup. Prepare to shutdown"
> > > "java.lang.SecurityException: zookeeper.set.acl is true, but
> > > ...nfig=./../config/kafka_server_jaas.conf,
> > > zookeeper.sasl.client=false,
> > > zookeeper.sasl.clientconfig=default:Client]
> > >
> > > at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:445)
...the doc you suggested,
> > > I cannot configure SSL, as I already mentioned. If I skip the SSL config
> > > part from your suggested doc and try Digest-MD5, I get the "saslToken
> > > missing" exception which I mentioned above.
> > > I don't really understand what saslToken is or how to make it get
> > > generated for Digest auth.
> > > Please assist!
> >
> > On Thu, Nov 9, 2023 at 7:15 PM Alex Craig wrote:
"...keeper.set.acl=true, I'm forced to configure TLS."
> > Hmm, that config shouldn't have anything to do with TLS. You can set
> > ACLs with or without TLS encryption. Were you getting an error?
> >
> > On Wed, Nov 8, 2023 at 11:35 PM arjun s v wrote:
Team,

Please consider this as high priority, we need to enable authentication
ASAP. Please assist.

On Tue, Nov 7, 2023 at 4:38 PM arjun s v wrote:
Hi team,

I'm trying to configure *Digest-MD5* authentication between Kafka and
ZooKeeper.
Also I need to set ACLs with the digest scheme and credentials.
I don't want to enable SASL.
I tried to follow this
<https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authen
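On the ACL side: ZooKeeper's digest scheme expects an id of the form
user:base64(sha1("user:password")). A small Python sketch of that hashing,
mirroring what org.apache.zookeeper.server.auth.DigestAuthenticationProvider's
generateDigest does (the user/password here are made-up placeholders):

```python
import base64
import hashlib

def zk_digest(user: str, password: str) -> str:
    """Build the id used by ZooKeeper's digest ACL scheme:
    user:base64(sha1("user:password"))."""
    raw = hashlib.sha1(f"{user}:{password}".encode("utf-8")).digest()
    return f"{user}:{base64.b64encode(raw).decode('ascii')}"

# An ACL string in the shape zkCli's setAcl accepts (cdrwa = all permissions):
acl = f"digest:{zk_digest('kafka', 'kafka-secret')}:cdrwa"
```

In zkCli.sh you would then run `addauth digest kafka:kafka-secret` before
touching a node protected with that ACL.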
Hi,

In my environment I have configured FIPS Java, ZooKeeper to work with
SASL, and a Kafka broker to connect to ZooKeeper with SASL.
I noticed that if FIPS is enabled, Kafka cannot connect and there is an
error:
"ERROR SASL authentication with Zookeeper Quorum member failed.
(org.apache.zooke

To: users@kafka.apache.org
Subject: Re: SASL authentication between Kafka and Zookeeper
Hi,

Which version of Kafka/Zookeeper are you using?

On 06/06/2022 12:11, Ivanov, Evgeny wrote:
> Hi François,
>
> yes, I did (for both Client and Server), but no luck:
Telephone: +7 (495) 960 (ext. 264423)
Mobile: +7 (916) 091-8939

-----Original Message-----
From: fpapon
Sent: June 6, 2022 13:08
To: users@kafka.apache.org
Subject: Re: SASL authentication between Kafka and Zookeeper
Hi,

Did you try using
org.apache.zookeeper.server.auth.DigestLoginModule instead of
org.apache.kafka.common.security.plain.PlainLoginModule?

Regards,
François

On 06/06/2022 11:27, Ivanov, Evgeny wrote:
org.apache.kafka.common.security.plain.PlainLoginModule required
--
François
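François's suggestion amounts to using ZooKeeper's digest login module in the
broker's `Client` JAAS section rather than Kafka's PLAIN module. A minimal
sketch of the two JAAS files, assuming hypothetical usernames and passwords:

```
// kafka_server_jaas.conf (broker side): the Client section is what the
// broker uses when it logs in to ZooKeeper
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="zkclient"
    password="zkclient-secret";
};

// zookeeper_jaas.conf (ZooKeeper side): user_<name>="<password>" entries
Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_zkclient="zkclient-secret";
};
```

Each process is then pointed at its file with
-Djava.security.auth.login.config=<path>.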
Hi all,

Could you please advise the correct configuration settings (JAAS and config files)
to enable SASL authentication between Kafka and Zookeeper?

Here is the error I get:

[2022-06-06 10:54:30,348] ERROR SASL authentication failed using login context
'C
That's all right. Thanks.

On 2021/12/20 11:26, Luke Chen wrote:
There's no specific timeline for this feature to be completed.

Hi Yonghua,

Thanks for asking.
Currently, the community is still working on the feature to
remove ZooKeeper (a.k.a. KRaft mode).
You can run KRaft mode in a testing environment now, but not in a
production env.
There's no specific timeline for this feature to be completed.
For more d
Hello Kafka developers,

From which version of Kafka will ZooKeeper be removed?
What's the alternative to ZooKeeper in that version?

Thank you.
Hi all,

I have a question about enabling Kafka and ZooKeeper with TLS and SASL together,
so that it uses TLS for encryption and SASL for auth. Below are my config
files (non-related info removed):

zookeeper.conf:
secureClientPort=2182
serverCnxnFactory
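For reference, a TLS + SASL ZooKeeper server config in this style typically
looks something like the sketch below (ZooKeeper 3.5+ property names; all
paths and passwords are hypothetical placeholders):

```properties
# zookeeper.conf - TLS + SASL sketch (paths/passwords are placeholders)
secureClientPort=2182
# TLS requires the Netty connection factory
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
ssl.keyStore.location=/path/to/zookeeper.keystore.jks
ssl.keyStore.password=changeit
ssl.trustStore.location=/path/to/zookeeper.truststore.jks
ssl.trustStore.password=changeit
# enable SASL authentication alongside TLS
authProvider.sasl=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
```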
Hi, I’m upgrading cloud infrastructure Kafka (2.3.0) and ZooKeeper (3.4.9) to a
newer version. May I ask for a recommendation of the best latest stable versions
for both that I could use?
Kafka 2.6.0 or 2.5.1?
What would be a compatible ZooKeeper version?

Thanks,
Nancy
Kafka version: 2.3.0
Zookeeper version: 3.5.5
Hi!
I'm trying to keep all communication secure in my test cluster, but somehow
I'm unable to get Kafka->Zookeeper connection using SSL. If I don't open
the "clientPort" next to "secureClientPort" I get:
Zookeeper:
WARN
> [epollEventLoopGroup-7-4:
Hi,

It seems that your keytab doesn't have the principal you configured your
"client" section to use. Post your JAAS here if you want further help, but
basically you should be able to do:

kinit -V -k -t <keytab-file> <principal>

On 18 Feb. 2017 3:56 am, "Raghav" wrote:
Hi,

I am trying to set up a simple setup with one Kafka broker and ZooKeeper on
the same VM, with one producer and one consumer on each VM. I have set up a
KDC on a CentOS VM.
I am trying to follow this guide:
http://docs.confluent.io/2.0.0/kafka/sasl.html#kerberos
When I start Kafka, it errors out wit
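For a Kerberos setup like the guide above describes, the broker's JAAS file
usually carries both a KafkaServer and a Client (ZooKeeper) section; a sketch
with hypothetical keytab paths and principals, not taken from the original
thread:

```
// kafka_server_jaas.conf - Kerberos sketch (keytab path and principal
// are placeholders)
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka.keytab"
    principal="kafka/broker1.example.com@EXAMPLE.COM";
};

// used when the broker connects to ZooKeeper
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka.keytab"
    principal="kafka/broker1.example.com@EXAMPLE.COM";
};
```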
.@qq.com> wrote:
> Any solution?
>
> ------ Original message ------
> From: "Xiaoyuan Chen" <253441...@qq.com>
> Sent: Friday, December 9, 2016, 10:15 AM
> To: "users"
> Subject: The connection between kafka and zookeeper is often closed by
> zookeeper, leading to NotLeaderF
If not, it will run doTransport, but doTransport costs about 10s, so on the
next loop it will find the timeout.
Continuing: I thought there could be a deadlock at that time, so I kept
printing the jstack of Kafka and ZooKeeper, using a shell loop like the one
below:

while true; do echo -e "\n\
Understood, and I am looking at that bit, but I would still like to know the
answer.

On Thu, Dec 8, 2016 at 8:22 AM, Asaf Mesika wrote:

> Off-question a bit - using the Kafka Mesos framework should save you from
> handling those questions: https://github.com/mesos/kafka
>
> On Thu, Dec 8, 2016
Off-question a bit - using the Kafka Mesos framework should save you from
handling those questions: https://github.com/mesos/kafka

On Thu, Dec 8, 2016 at 2:33 PM Mike Marzo wrote:
If I'm running a 5-node ZK cluster and a 3-node Kafka cluster in Docker on a
Mesos/Marathon environment, where my ZK and broker nodes are all leveraging
local disk on the hosts they are running on, is there any value to the local
data being preserved across restarts?
In other words, when a broker
Useful resource Nico, thanks.
B

On Tuesday, 11 October 2016, Nicolas Motte wrote:
Hi everyone,
I created a training for Application Management and OPS teams in my company.
Some sections are specific to our deployment, but most of them are generic
and explain how Kafka and ZooKeeper work.
I uploaded it on SlideShare, I thought it might be useful to other people:
http
Data is always provided by the leader of a topic-partition (i.e. a broker).
Here is a summary of how zookeeper is used:
https://www.quora.com/What-is-the-actual-role-of-ZooKeeper-in-Kafka
-David
On 9/10/16, 3:47 PM, "Eric Ho" wrote:
I notice that some Spark programs would contact something like 'zoo1:2181'
when trying to suck data out of Kafka.
Does the kafka data actually get routed out of zookeeper before delivering
the payload onto Spark ?
--
-eric ho
...and consumers
using basic String serialization;
* use of Netflix's Curator API to instantiate an in-process ZooKeeper
server, together with an in-memory instance of the
kafka.server.KafkaServer class;
* ensuring that all threads launched by Kafka and ZooKeeper are cleanly
shut down.
Yeah, so it would seem a workaround could be to defer full replica
assignment until adequate brokers are available, but in the meantime allow
topic creation to proceed.
With respect to Joel's point around the possibility of imbalanced
partition assignment if not all replicas are available, this
When creating a new topic, we require the number of live brokers to be equal
to or larger than the number of replicas. Without enough brokers, we can't
complete the replica assignment, since we can't assign more than one replica
to the same broker.

Thanks,

Jun

On Tue, Oct 15, 2013 at 1:47 PM, Jason Rosenberg wrote:
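Jun's constraint can be sketched as a toy round-robin assignment (this is an
illustration, not Kafka's actual AdminUtils code; the broker ids are made up):

```python
def assign_replicas(brokers, num_partitions, replication_factor):
    """Toy round-robin replica assignment. Each partition's replicas must
    sit on distinct brokers, which is why len(brokers) must be at least
    replication_factor before a topic can be created."""
    if replication_factor > len(brokers):
        raise ValueError("not enough live brokers for the replication factor")
    return {
        p: [brokers[(p + r) % len(brokers)] for r in range(replication_factor)]
        for p in range(num_partitions)
    }

# With 3 brokers, replication factor 2 succeeds and every partition gets
# two distinct brokers; replication factor 4 would raise.
assignment = assign_replicas([101, 102, 103], num_partitions=3, replication_factor=2)
```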
That's a good question. Off the top of my head I don't remember any
fundamentally good reason why we don't allow it - apart from:
- broker registration paths are ephemeral so topic creation cannot
succeed when there are insufficient brokers available
- it may be confusing to some users to successfu
Is there a fundamental reason for not allowing creation of new topics while
in an under-replicated state? For systems that use automatic topic
creation, it seems like losing a node in this case is akin to the cluster
being unavailable, if one of the nodes goes down, etc.
On Tue, Oct 15, 2013 at
Steve - that's right. I think Monika wanted clarification on what
would happen if replication factor is two and only one broker is
available. In that case, you won't be able to create new topics with
replication factor two (you should see an AdministrationException
saying the replication factor is
If you have a double broker failure with a replication factor of 2 and only
have 2 brokers in the cluster, wouldn't every partition be unavailable?
On Tue, Oct 15, 2013 at 8:48 AM, Jun Rao wrote:
> If you have double broker failures with a replication factor of 2, some
> partitions will not be
Thanks for replying. :)
What if the second broker never comes back?

On Oct 15, 2013 3:48 PM, "Jun Rao" wrote:
> If you have double broker failures with a replication factor of 2, some
> partitions will not be available. When one of the brokers comes back, the
> partition is made available again (there
If you have double broker failures with a replication factor of 2, some
partitions will not be available. When one of the brokers comes back, the
partition is made available again (there is potential data loss), but in an
under replicated mode. After the second broker comes back, it will catch up
f
I have a 2-node Kafka cluster with default.replication.factor=2 set in the
server.properties file.
I removed one node; in removing that node, I killed the Kafka process and
removed all the kafka-logs and the bundle from that node.
Then I stopped my remaining running node in the cluster and started it
again (default.
47 matches