Hi,
So we specifically kept the /consumers znode world-writable in secure
mode. This is to allow ZooKeeper-based consumers to create their own child
nodes under /consumers and add their own SASL-based ACLs on top of
them. From the looks of it, in case of a ZooKeeper digest-based connection it
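For illustration only, here is a minimal Java sketch of a ZooKeeper-based
consumer creating its own child node under /consumers and attaching a SASL
ACL to it (the connection string, group name, and principal are hypothetical):

import java.util.Collections;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;

public class ConsumerGroupZnode {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("zk1:2181", 30000, event -> { });
        // /consumers itself is world-writable, so the group can create its child node;
        // the child carries a SASL ACL so only this consumer's principal can modify it later.
        ACL saslAcl = new ACL(ZooDefs.Perms.ALL, new Id("sasl", "consumer-user@EXAMPLE.COM"));
        zk.create("/consumers/my-group", new byte[0],
                Collections.singletonList(saslAcl), CreateMode.PERSISTENT);
        zk.close();
    }
}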
One way to delete is to remove the topic's partition directories from the
disks and delete its znode under /brokers/topics.
If you just shut down those brokers, the controller might try to replicate
the topic onto other brokers, and since you don't have any leaders you might
see replica fetcher errors in the logs.
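For reference, a minimal Java sketch of the ZooKeeper side of that cleanup
(the connection string and topic name are hypothetical; the partition
directories under each broker's log.dirs still have to be removed from disk
separately):

import org.apache.zookeeper.ZKUtil;
import org.apache.zookeeper.ZooKeeper;

public class DeleteTopicZnode {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("zk1:2181", 30000, event -> { });
        // Recursively remove the topic's metadata znode so the controller forgets the topic.
        ZKUtil.deleteRecursive(zk, "/brokers/topics/my-topic");
        zk.close();
    }
}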
Thanks,
Harsha
On Thu,
Did you make sure both those CAs are imported into the broker's truststore?
-Harsha
On Fri, Jul 15, 2016 at 5:12 PM Raghavan, Gopal
wrote:
> Hi,
>
> Can Kafka support multiple CA certs on the broker?
> If yes, can you please point me to an example?
>
> A producer signed with the second CA (CA2) is failing.
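For reference, a broker truststore can hold any number of CA certificates;
a minimal Java sketch of building one with both CAs (file names, aliases,
and the password are hypothetical):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class BuildBrokerTruststore {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        KeyStore truststore = KeyStore.getInstance("JKS");
        truststore.load(null, null); // start from an empty truststore

        // Both root CAs the broker should trust, e.g. CA1 for existing clients, CA2 for the new producer.
        String[] caFiles = {"ca1-cert.pem", "ca2-cert.pem"};
        for (String caFile : caFiles) {
            try (FileInputStream in = new FileInputStream(caFile)) {
                X509Certificate ca = (X509Certificate) cf.generateCertificate(in);
                truststore.setCertificateEntry(caFile, ca); // alias = file name, just for illustration
            }
        }
        try (FileOutputStream out = new FileOutputStream("kafka.broker.truststore.jks")) {
            truststore.store(out, "changeit".toCharArray());
        }
    }
}

The broker then points ssl.truststore.location at the resulting file, and
certificates signed by either CA are accepted.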
+1 (binding)
1. Ran a 3-node cluster.
2. Ran a few tests creating, producing, and consuming from secure and
non-secure clients.
Thanks,
Harsha
On Fri, Aug 5, 2016 at 8:50 PM Manikumar Reddy
wrote:
> +1 (non-binding).
> verified quick start and artifacts.
>
> On Sat, Aug 6, 2016 at 5:45 AM, Joel Koshy
how many brokers do you have in this cluster? Have you tried using a stable
ZooKeeper release like 3.4.8?
-Harsha
On Mon, Aug 29, 2016 at 5:21 AM Nomar Morado wrote:
> we are using kafka 0.9.0.1 and zk 3.5.0-alpha
>
> On Mon, Aug 29, 2016 at 8:12 AM, Nomar Morado
> wrote:
>
> > we would get this occasio
Shri,
SSL in 0.9.0.1 is not beta and can be used in production. If you want
to put an authorizer on top of SSL to enable ACLs for clients and topics,
that's possible too.
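A hedged sketch of the broker-side settings this implies, written as Java
Properties for illustration (host names, paths, and passwords are hypothetical):

import java.util.Properties;

public class BrokerSslAclConfig {
    public static Properties sketch() {
        Properties p = new Properties();
        // SSL listener with client authentication, so each client is identified by its certificate DN.
        p.put("listeners", "SSL://broker1:9093");
        p.put("ssl.keystore.location", "/etc/kafka/broker.keystore.jks");
        p.put("ssl.keystore.password", "keystore-secret");
        p.put("ssl.key.password", "key-secret");
        p.put("ssl.truststore.location", "/etc/kafka/broker.truststore.jks");
        p.put("ssl.truststore.password", "truststore-secret");
        p.put("ssl.client.auth", "required");
        // Authorizer shipped with 0.9.x; ACLs are then managed with the kafka-acls.sh tool.
        p.put("authorizer.class.name", "kafka.security.auth.SimpleAclAuthorizer");
        return p;
    }
}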
Thanks,
Harsha
On Mon, Oct 3, 2016 at 8:30 AM Shrikant Patel wrote:
> We are on 0.9.0.1 and want to use SSL for ACLs and
Hi All,
We are proposing to have a REST server as part of Apache Kafka
to provide producer/consumer/admin APIs. We strongly believe having
REST server functionality in Apache Kafka will help a lot of users.
Here is the KIP that Mani Kumar wrote
https://cwiki.apache.org/confluence/disp
fore trying to add fairly major new pieces to the project.
> >
> > Plus I think it's very beneficial for the Kafka community for Confluent
> to
> > have a strong business model--they provide many contributions to core
> that
> > most of us benefit from for free. Kee
>> sur...@hortonworks.com
> >> > > >
> >> > > wrote:
> >> > >
> >> > > > +1.
> >> > > >
> >> > > > This is an http access to core Kafka. This is very much needed as
> >> part
> >
This vote is now closed. The majority is not in favor of
including the REST server.
Thanks everyone for participating.
Vote tally:
+1 Votes:
Harsha Chintalapani
Parth Brahbhatt
Ken Jackson
Suresh Srinivas
Jungtaek Lim
Ali Akthar
Haohui Mai
Shekhar
Congrats Becket!
-Harsha
On Mon, Oct 31, 2016 at 2:13 PM Rajini Sivaram
wrote:
> Congratulations, Becket!
>
> On Mon, Oct 31, 2016 at 8:38 PM, Matthias J. Sax
> wrote:
>
> >
> > Congrats!
> >
> > On 10/31/16 11:01 AM, Renu Tewari wrote:
>
+1.
Ran a 3-node cluster with a few simple tests.
Thanks,
Harsha
On Nov 1, 2018, 9:50 AM -0700, Eno Thereska wrote:
> Anything else holding this up?
>
> Thanks
> Eno
>
> On Thu, Nov 1, 2018 at 10:27 AM Jakub Scholz wrote:
>
> > +1 (non-binding) ... I used the staged binaries and ran tests with
>
Chris,
You are upgrading from 0.10.2.2 to 2.0.0. There will be quite a few
changes, and it looks like you might be using classes other than KafkaConsumer
that are not public API. Which classes specifically are not available?
-Harsha
On Nov 9, 2018, 7:47 AM -0800, Chris Barlock wrote:
> I
Hi Ashish,
What is your replica.lag.time.max.ms set to, and do you see any network
issues between brokers?
-Harsha
On Jan 22, 2019, 10:09 PM -0800, Ashish Karalkar wrote:
> Hi All,
> We just upgraded from 0.10.x to 1.1 and enabled rack awareness on an existing
> cluster which has a
Hi,
When you kerberize Kafka and set zookeeper.set.acl to true, all the
ZooKeeper nodes created under the ZooKeeper root will have ACLs that allow only
the Kafka broker's principal. Since all topic creation goes directly to
ZooKeeper, i.e., the kafka-topics.sh script creates a ZooKeeper node under /
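A hedged sketch of the broker settings being described, expressed as Java
Properties for illustration (the ZooKeeper connect string is hypothetical):

import java.util.Properties;

public class KerberizedBrokerZkSettings {
    public static Properties sketch() {
        Properties p = new Properties();
        p.put("zookeeper.connect", "zk1:2181,zk2:2181,zk3:2181");
        // Znodes the broker creates get ACLs restricted to the broker's Kerberos principal.
        p.put("zookeeper.set.acl", "true");
        // Brokers and clients authenticate over SASL/GSSAPI (Kerberos).
        p.put("security.inter.broker.protocol", "SASL_PLAINTEXT");
        p.put("sasl.kerberos.service.name", "kafka");
        return p;
    }
}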
Hi Marcos,
I think what you need is static membership, which reduces the number of
rebalances required. There is active discussion and work going on for this KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances
-Harsha
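As eventually implemented by KIP-345, static membership is switched on with
the consumer config group.instance.id; a minimal sketch (broker address,
group id, and instance id are hypothetical):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class StaticMemberConsumer {
    public static KafkaConsumer<String, String> create() {
        Properties p = new Properties();
        p.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        p.put(ConsumerConfig.GROUP_ID_CONFIG, "orders-processor");
        // A stable per-instance id: a restart within the session timeout does not trigger a rebalance.
        p.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, "orders-processor-1");
        p.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");
        p.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        p.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        return new KafkaConsumer<>(p);
    }
}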
pawning many instances of kafkacat and hammering the Kafka brokers. I am still
> wondering whether the reason for the shrink and expand could be a client
> hammering a broker.
> --Ashish
> On Thursday, January 24, 2019, 8:53:10 AM PST, Harsha Chintalapani
> wrote:
>
> H
Congratulations, Boyang
-Harsha
On Mon, Jun 22, 2020 at 4:32 PM AJ Chen wrote:
> Congrats, Boyang!
>
> -aj
>
>
>
>
> On Mon, Jun 22, 2020 at 4:26 PM Guozhang Wang wrote:
>
> > The PMC for Apache Kafka has invited Boyang Chen as a committer and we
> > are pleased to announce that he has accepted
Hi Christian,
Kafka client connections are long-lived connections,
hence the authentication part happens during connection establishment, and
once we authenticate, regular Kafka protocol requests can be exchanged.
Doing a heartbeat to keep the token alive in an Authorizer is not a good idea.
For inter-broker communication over SSL, all you need is to set
security.inter.broker.protocol to SSL.
"How do I make zookeeper talk to each other and brokers?"
Not sure I understand the question. You need to make sure the ZooKeeper hosts
and ports are reachable from your broker nodes.
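For illustration, the inter-broker SSL piece amounts to something like this,
shown as Java Properties (listener addresses, paths, and passwords are
hypothetical):

import java.util.Properties;

public class InterBrokerSslConfig {
    public static Properties sketch() {
        Properties p = new Properties();
        p.put("listeners", "SSL://0.0.0.0:9093");
        p.put("advertised.listeners", "SSL://broker1:9093");
        // Replication traffic between brokers also goes over the SSL endpoint.
        p.put("security.inter.broker.protocol", "SSL");
        p.put("ssl.keystore.location", "/etc/kafka/broker.keystore.jks");
        p.put("ssl.keystore.password", "keystore-secret");
        p.put("ssl.truststore.location", "/etc/kafka/broker.truststore.jks");
        p.put("ssl.truststore.password", "truststore-secret");
        return p;
    }
}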
-Harsha
On Wed, M
Hi Aditya,
Thanks for your interest. We are tentatively planning one in the
first week of June. If you haven't already, please register here:
https://www.meetup.com/Apache-Storm-Apache-Kafka/. I'll keep the Storm
lists updated once we finalize the date and location.
Thanks,
Harsha
On Mon, Apr 24, 201