Hi
Use it with --command-config client_security.properties and pass the
following configuration in the properties file:
sasl.mechanism=PLAIN
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="*" \
  password="*";
Congratulations
On Mon, 19 Oct, 2020, 11:02 pm Bill Bejeck, wrote:
> Congratulations Chia-Ping!
>
> -Bill
>
> On Mon, Oct 19, 2020 at 1:26 PM Matthias J. Sax wrote:
>
> > Congrats Chia-Ping!
> >
> > On 10/19/20 10:24 AM, Guozhang Wang wrote:
> > > Hello all,
> > >
> > > I'm happy to announce th
Hi John
Please find my inline response below
Regards and Thanks
Deepak Raghav
On Tue, Sep 1, 2020 at 8:22 PM John Roesler wrote:
> Hi Deepak,
>
> It sounds like you're saying that the exception handler is
> correctly indicating that Streams should "Continue", and
Hi Team
Just a reminder.
Can you please help me with this?
Regards and Thanks
Deepak Raghav
On Tue, Sep 1, 2020 at 1:44 PM Deepak Raghav
wrote:
> Hi Team
>
> I have created a CustomExceptionHandler class by
> implementing DeserializationExceptionHandler interface to handle the
Please let me know if I missed anything.
Regards and Thanks
Deepak Raghav
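For reference, a hedged sketch of such a handler (Kafka Streams 2.x API; the class name and log text are illustrative):

  import java.util.Map;
  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
  import org.apache.kafka.streams.processor.ProcessorContext;

  public class CustomExceptionHandler implements DeserializationExceptionHandler {
      @Override
      public DeserializationHandlerResponse handle(ProcessorContext context,
                                                   ConsumerRecord<byte[], byte[]> record,
                                                   Exception exception) {
          // Log the poison record and tell Streams to skip it
          System.err.printf("Skipping bad record at %s-%d offset %d: %s%n",
                  record.topic(), record.partition(), record.offset(),
                  exception.getMessage());
          return DeserializationHandlerResponse.CONTINUE;
      }

      @Override
      public void configure(Map<String, ?> configs) {
          // No configuration needed for this sketch
      }
  }

It is wired in through StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG; returning CONTINUE drops the record, FAIL stops the application.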
Hi Tom
Can you please reply to this.
Regards and Thanks
Deepak Raghav
On Mon, Jul 27, 2020 at 10:05 PM Deepak Raghav
wrote:
> Hi Tom
>
> I have to change the log level at runtime, i.e. without restarting the
> worker process.
>
> Can you please share any suggestion
Hi Tom
I have to change the log level at runtime, i.e. without restarting the worker
process.
Can you please share any suggestions on this with log4j ?
Regards and Thanks
Deepak Raghav
On Mon, Jul 27, 2020 at 7:09 PM Tom Bentley wrote:
> Hi Deepak,
>
> https://issues.apache.org/ji
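Since Kafka 2.4 (KIP-495) the Connect REST API can do this without a restart; a hedged example assuming a worker listening on localhost:8083 and an illustrative logger name:

  curl -s -X PUT -H "Content-Type: application/json" \
    -d '{"level": "DEBUG"}' \
    http://localhost:8083/admin/loggers/org.apache.kafka.connect.runtime.WorkerSourceTask

GET /admin/loggers lists what is currently set. The change applies only to the worker that receives the request and does not survive a restart.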
Hi Team
Request you to please have a look.
Regards and Thanks
Deepak Raghav
On Thu, Jul 23, 2020 at 6:42 PM Deepak Raghav
wrote:
> Hi Team
>
> I have a source connector which uses the logging provided by
> the kafka-connect framework.
>
> Now I need to change the log
log4j2, could you please help me with this.
Regards and Thanks
Deepak Raghav
Hi Robin
Request you to please reply.
Regards and Thanks
Deepak Raghav
On Wed, Jun 10, 2020 at 11:57 AM Deepak Raghav
wrote:
> Hi Robin
>
> Can you please reply.
>
> I just want to add one more thing: yesterday I tried with
> connect.protocol=eager. Task distrib
Hi Robin
Can you please reply.
I just want to add one more thing: yesterday I tried with
connect.protocol=eager. Task distribution was balanced after that.
Regards and Thanks
Deepak Raghav
On Tue, Jun 9, 2020 at 2:37 PM Deepak Raghav
wrote:
> Hi Robin
>
> Thanks for your
understanding is correct or not.
Regards and Thanks
Deepak Raghav
On Tue, May 26, 2020 at 8:20 PM Robin Moffatt wrote:
> The KIP for the current rebalancing protocol is probably a good reference:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-415:+Incremental+Cooperative+Re
Hi Team
Just a Gentle Reminder.
Regards and Thanks
Deepak Raghav
On Fri, May 29, 2020 at 1:15 PM Deepak Raghav
wrote:
> Hi Team
>
> Recently, I saw some strange behavior in kafka-connect. We have a source
> connector with a single task only, which reads from an S3 bucket and cop
connector, a task can be
left assigned on both worker nodes.
Note: I have seen this only once; after that, it was never reproduced.
Regards and Thanks
Deepak Raghav
I cannot show this mail as a
reference.
It would be great if you could share any link or discussion reference
about this.
Regards and Thanks
Deepak Raghav
On Thu, May 21, 2020 at 2:12 PM Robin Moffatt wrote:
> I don't think you're right to assert that this is "
chargeableevent",
"connector": {
    "state": "RUNNING",
    "worker_id": "10.0.0.5:8080"
},
"tasks": [
    {
        "id": 0,
        "state": "RUNNING",
        "worker_id": "10.0.0.5:8080
ss like below :
W1      W2
C1T1    C1T2
C2T2    C2T2
I hope that answers your question.
Regards and Thanks
Deepak Raghav
On Wed, May 20, 2020 at 4:42 PM Robin Moffatt wrote:
> OK, I understand better now.
>
> You can read more abou
ected.
Regards and Thanks
Deepak Raghav
On Wed, May 20, 2020 at 1:41 PM Robin Moffatt wrote:
> So you're running two workers on the same machine (10.0.0.4), is
> that correct? Normally you'd run one worker per machine unless there was a
> particular reason otherwise.
>
Hi
Please, can anybody help me with this?
Regards and Thanks
Deepak Raghav
On Tue, May 19, 2020 at 1:37 PM Deepak Raghav
wrote:
> Hi Team
>
> We have two worker nodes in a cluster and 2 connectors with 10 tasks
> each.
>
> Now, suppose if we have two kafka connect pr
Hi
Is there any study that shows why smaller messages are optimal for
Kafka, and why throughput decreases as message size grows to 1 MB and beyond ?
What design choices in Kafka lead to this behavior ?
Can any developer or committer share any insight and point to the relevant
piece o
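For anyone tuning for larger messages, a hedged sketch of the settings usually involved; the values are illustrative, not recommendations:

  # broker (server.properties)
  message.max.bytes=2097152
  replica.fetch.max.bytes=2097152
  # producer
  max.request.size=2097152
  compression.type=lz4
  # consumer
  max.partition.fetch.bytes=2097152

The usual explanation for the throughput drop is that Kafka's performance leans on batching many small records per request and on page-cache-friendly sequential I/O; multi-megabyte records batch poorly and put more pressure on broker memory and GC.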
overcome our broken
graphs.
Thanks. Regards.
--
Raghav
;
}
}
Thanks.
R
On Fri, Mar 8, 2019 at 10:41 PM Manikumar wrote:
> Hi Raghav,
>
> As you know, KIP-372 added "version" tag to RequestsPerSec metric to
> monitor requests for each version.
> As mentioned in the KIP, to get total count per request (across all
>
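For reference, a hedged Java sketch of reading the per-version RequestsPerSec beans over remote JMX (host and port are assumptions; on 2.1 the bean carries a version tag, hence the wildcard):

  import java.util.Set;
  import javax.management.MBeanServerConnection;
  import javax.management.ObjectName;
  import javax.management.remote.JMXConnector;
  import javax.management.remote.JMXConnectorFactory;
  import javax.management.remote.JMXServiceURL;

  public class RequestsPerSecProbe {
      public static void main(String[] args) throws Exception {
          JMXServiceURL url = new JMXServiceURL(
                  "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
          try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
              MBeanServerConnection mbs = connector.getMBeanServerConnection();
              // One bean per api version; sum the rates for a total per request type
              Set<ObjectName> names = mbs.queryNames(new ObjectName(
                      "kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce,*"),
                      null);
              double total = 0.0;
              for (ObjectName name : names) {
                  total += (Double) mbs.getAttribute(name, "OneMinuteRate");
              }
              System.out.println("Produce OneMinuteRate (all versions): " + total);
          }
      }
  }

This is also why the pre-2.0 ObjectName without the version property now fails: the registered names gained an extra key, so an exact-match lookup no longer resolves.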
please help us figure out the answer for the email below ? It would
be greatly appreciated. We just want to know how to find the version
number ?
Many thanks.
R
On Fri, Dec 14, 2018 at 5:16 PM Raghav wrote:
> I got it to work. I fired up a console and then saw what beans are
> registered the
wildcards don't work. See the screenshot below;
apiVersion is 7. Where did this come from ? Can someone please help me
understand.
[image: jmx.png]
On Fri, Dec 14, 2018 at 4:29 PM Raghav wrote:
> Is this a test case for this commit:
> https://github.com/apache/kafka/pull/4506 ? I
:34 AM Raghav wrote:
> Thanks Ismael. How to query it in 2.1 ? I tried all possible ways
> including using version, but I am still getting the same exception message.
>
> Thanks for your help.
>
> On Thu, Dec 13, 2018 at 7:19 PM Ismael Juma wrote:
>
>> The metric was
in
> the upgrade notes for 2.0.0.
>
> Ismael
>
> On Thu, Dec 13, 2018, 3:35 PM Raghav
> > Hi
> >
> > We are trying to move from Kafka 1.1.0 to Kafka 2.1.0. We used to monitor
> > our 3 node Kafka using JMX. Upon moving to 2.1.0, we have observed that
> the
Hi
We are trying to move from Kafka 1.1.0 to Kafka 2.1.0. We used to monitor
our 3 node Kafka using JMX. Upon moving to 2.1.0, we have observed that the
below mentioned metric can't be retrie
and we get the below exception:
"kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce
Hi
We have a 3 node Kafka Brokers setup.
Our current value of default.replication.factor is 2.
What should be the recommended value of offsets.topic.replication.factor ?
Please advise, as we are not completely sure
about offsets.topic.replication.factor.
Thanks for your help.
R
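A hedged sketch for a 3-broker cluster (3 is also the shipped default for this setting):

  # server.properties
  offsets.topic.replication.factor=3

One caveat from the docs: the internal __consumer_offsets topic is created once, on first use, so the configured factor must be satisfiable by the brokers that are up at that moment.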
Anyone ? We have really hit a wall deciphering this error log, and we don't
know how to fix it.
On Wed, Oct 10, 2018 at 12:52 PM Raghav wrote:
> Hi
>
> We are on Kafka 1.1 and have 3 Kafka brokers, and need your help to
> understand the error message, and what it would take t
s here
controlled.shutdown.enable=true
auto.leader.rebalance.enable=true
unclean.leader.election.enable=true
--
Raghav
Hi
Our 3 node Zookeeper ensemble got powered down, and upon powering up,
zookeeper could not get quorum and kept throwing these errors. As a result our
Kafka cluster was unusable. What is the best way to revive a ZK cluster in
such situations ? Please suggest.
2018-08-17_00:59:18.87009 2018-08-17 0
Wed, Aug 8, 2018 at 6:46 PM, Raghav wrote:
> Hi
>
> Is there any Java API available so that I can enable our Kafka cluster's
> JMX port, and consume metrics via JMX api, and dump to a time series
> database.
>
> I checked out jmxtrans, but currently it does not dump to TS
and dumping to InfluxDB
>
>
>
> Boris Lublinsky
> FDP Architect
> boris.lublin...@lightbend.com
> https://www.lightbend.com/
>
> On Aug 8, 2018, at 8:46 PM, Raghav wrote:
>
> Hi
>
> Is there any Java API available so that I can enable our Kafka cluster's
Hi
Is there any Java API available so that I can enable our Kafka cluster's
JMX port, consume metrics via the JMX API, and dump them to a time-series
database?
I checked out jmxtrans, but currently it does not dump to a TSDB (time-series
database).
Thanks.
R
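For what it's worth, enabling the port is done through the environment rather than a Java API; a hedged sketch (the port number is an assumption):

  JMX_PORT=9999 bin/kafka-server-start.sh config/server.properties

kafka-run-class.sh sees JMX_PORT and adds the com.sun.management.jmxremote flags for you; a small JMX poller (like the RequestsPerSec sketch earlier in this archive) can then read the beans on that port and write them to a time-series database.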
Hi
We want to have two copies of our Kafka cluster (in two different
data centers). In case one DC is unavailable, the Kafka cluster in the other
data center should be able to serve.
1. What are the recommended ways to achieve this ? I am assuming that using
mirrormaker, we can achieve this. Any D
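A hedged sketch of the MirrorMaker invocation from that era (0.10/1.x; the property file names are assumptions):

  bin/kafka-mirror-maker.sh --consumer.config source-dc.properties \
    --producer.config target-dc.properties --whitelist ".*"

where source-dc.properties points bootstrap.servers at the source cluster and target-dc.properties at the target. Mirroring is asynchronous, so a DC failover can lose the last unreplicated messages.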
o do a rolling restart
> > manually, you should shut down one broker at a time.
> >
> > In this way, you give the controller time to move the active replicas'
> > leadership over to the healthy nodes.
> >
> > The same procedure when you start up your n
Hi
We have a 3-broker Kafka setup on 0.10.2.1. We have a requirement in our
company environment that we first stop our 3 Kafka brokers,
then do some operations work that takes about 1 hour, and then bring the
Kafka (version 1.1) brokers up again.
In order to achieve this, we issue:
1
ercome this issue ? We can repro it every single time.
Many thanks.
> On Fri, May 11, 2018 at 3:16 PM, Raghav wrote:
>
> > Hi
> >
> > We have a 3 node zk ensemble as well as 3 node Kafka Cluster. They both
> are
> > hosted on the same 3 VMs.
> >
>
Hi
We have a 3 node zk ensemble as well as 3 node Kafka Cluster. They both are
hosted on the same 3 VMs.
Before Restart
1. We were on Kafka 0.10.2.1
After Restart
1. We moved to Kafka 1.1
We observe that Kafka reports leadership issues, and for a lot of partitions
the leader is -1. I see some logs in
Hi
Is there anything that needs to be taken care of if we want to move from
0.10.2.x to the latest 1.1 release ?
Is it a stable release and is it recommended for production use ?
Thanks
Raghav
Anyone ?
On Thu, Mar 29, 2018 at 6:11 PM, Raghav wrote:
> Hi
>
> We have a 3 node Kafka cluster running. From time to time, we have changes
> in the trust store and we restart Kafka to take the new changes into account. We
> are on Kafka 0.10.x.
>
> If we move to 1.1, would we
Hi
We have a 3 node Kafka cluster running. From time to time, we have changes
in the trust store and we restart Kafka to take the new changes into account. We
are on Kafka 0.10.x.
If we move to 1.1, would we still need to restart Kafka upon trust store
changes ?
Thanks.
--
Raghav
Is it recommended to move to the 1.0 release if we want to overcome this issue
? Please advise, Ted.
Thanks.
R
On Thu, Mar 15, 2018 at 7:43 PM, Ted Yu wrote:
> Looking at KAFKA-3702, it is still Open.
>
> FYI
>
> On Thu, Mar 15, 2018 at 5:51 PM, Raghav wrote:
>
> >
org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:148)
> > at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:45)
> > at org.apache.kafka.common.network.Selector.close(Selector.java:442)
> > at org.apache.kafka.common.network.Selector.poll(Selector.java:310)
> > at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
> > at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
> > at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
> > at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Thanks a lot.
> >
>
--
Raghav
Hi
We have a 3 node secure Kafka Cluster (
https://kafka.apache.org/documentation/#security_ssl)
Recently, my producer client has been receiving the below message. Can someone
please help me understand the message, and share a few pointers to
debug and maybe fix this issue.
18/03/15 14:37:23 INF
By the way, is there a bug here that was fixed in a later release?
https://issues.apache.org/jira/browse/KAFKA-6030
Can you please confirm ?
On Tue, Feb 6, 2018 at 1:38 PM, Ted Yu wrote:
> The log cleaner abort in the log file preceded the log deletion.
>
> On Tue, Feb 6, 2018 at 1:36 P
tually deleting it.
>
> if (cleaner != null && !isFuture) {
>
> cleaner.abortCleaning(topicPartition)
>
> FYI
>
> On Tue, Feb 6, 2018 at 12:56 PM, Raghav wrote:
>
> > From the log-cleaner.log, I see the following. It seems like it resumes
> but
>
From the log-cleaner.log, I see the following. It seems like it resumes but
is aborted. Not sure how to read this:
[2018-02-06 18:06:22,178] INFO Compaction for partition topic043-27 is
resumed (kafka.log.LogCleaner)
[2018-02-06 18:06:22,178] INFO The cleaning for partition topic043-27 is
aborted
Linux. CentOS.
On Tue, Feb 6, 2018 at 12:26 PM, M. Manna wrote:
> Is this Windows or Linux?
>
> On 6 Feb 2018 8:24 pm, "Raghav" wrote:
>
> > Hi
> >
> > While configuring a topic, we are specifying the retention bytes per
> topic
> > as follows
> > > log.segment.bytes=536870912
> > > > > >
> > > > > > *topic configuration (30GB):*
> > > > > >
> > > > > > [tstumpges@kafka-02 kafka]$ bin/kafka-topics.sh --zookeeper
> > > > > > zk-01:2181/kafka --describe --topic stg_logtopic
> > > > > > Topic: stg_logtopic  PartitionCount: 12  ReplicationFactor: 3
> > > > > > Configs: retention.bytes=300
> > > > > > Topic: stg_logtopic  Partition: 0  Leader: 4
> > > > > > Replicas: 4,5,6  Isr: 4,5,6
> > > > > > Topic: stg_logtopic  Partition: 1  Leader: 5
> > > > > > Replicas: 5,6,1  Isr: 5,1,6
> > > > > > ...
> > > > > >
> > > > > > And, disk usage showing 910GB usage for one partition!
> > > > > >
> > > > > > [tstumpges@kafka-02 kafka]$ sudo du -s -h /data1/kafka-data/*
> > > > > > 82G   /data1/kafka-data/stg_logother3-2
> > > > > > 155G  /data1/kafka-data/stg_logother2-9
> > > > > > 169G  /data1/kafka-data/stg_logother1-6
> > > > > > 910G  /data1/kafka-data/stg_logtopic-4
> > > > > >
> > > > > > I can see there are plenty of segment log files (512MB each) in
> the
> > > > > > partition directory... what is going on?!
> > > > > >
> > > > > > Thanks in advance, Thunder
> > > > > >
> > > > >
> > > >
> > >
> >
>
--
Raghav
Hi
While configuring a topic, we are specifying the retention bytes per topic
as follows. Our retention time in hours is 48.
bin/kafka-topics.sh --zookeeper zk-1:2181,zk-2:2181,zk-3:2181 --create
--topic AmazingTopic --replication-factor 2 --partitions 64 --config
retention.bytes=16106127360 --
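One point worth noting here: retention.bytes is enforced per partition, not per topic, so a hedged worked example with the numbers above:

  retention.bytes per partition = 16106127360 bytes = 15 GiB
  64 partitions x 15 GiB        = 960 GiB per replica set
  x replication factor 2        = ~1.9 TiB worst-case across the cluster

and deletion only happens at segment boundaries, so actual usage can overshoot by up to one segment (log.segment.bytes) per partition.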
Can someone please help here ?
On Thu, Nov 23, 2017 at 10:42 AM, Raghav wrote:
> Anyone here ?
>
> On Wed, Nov 22, 2017 at 4:04 PM, Raghav wrote:
>
>> Hi
>>
>> If I give several locations with smaller capacity for log.dirs vs one
>> large drive for log.dirs
Anyone here ?
On Wed, Nov 22, 2017 at 4:04 PM, Raghav wrote:
> Hi
>
> If I give several locations with smaller capacity for log.dirs vs. one
> large drive for log.dirs, are there any PROS or CONS between the two
> (assuming total storage is the same in both cases).
>
> I don
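For reference, a hedged sketch of the multi-directory form (the paths are assumptions):

  log.dirs=/data1/kafka-logs,/data2/kafka-logs,/data3/kafka-logs

Kafka places each new partition in the directory holding the fewest partitions, so spread is by partition count rather than by bytes; and on releases before 1.0, the failure of any one listed disk takes the whole broker down (KIP-112 in 1.0 relaxed this).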
that there are no issues.
Thanks.
--
Raghav
:
> There's a video where Jay Kreps talks about how Kafka works - YouTube has
> it as the top 5 under "How Kafka Works".
>
>
> On 20 Sep 2017 5:49 pm, "Raghav" wrote:
>
> > Hi
> >
> > Just wondering if there is any video/blog that
.
--
Raghav
Thanks, Guozhang.
On Mon, Sep 18, 2017 at 5:23 PM, Guozhang Wang wrote:
> It is available online now:
> https://www.confluent.io/kafka-summit-sf17/resource/
>
>
> Guozhang
>
> On Tue, Sep 19, 2017 at 8:13 AM, Raghav wrote:
>
> > Hi
> >
> > Just wond
Hi
Just wondering if the videos are available anywhere from Kafka Summit 2017
to watch ?
--
Raghav
Thanks, Kamal.
On Fri, Sep 8, 2017 at 4:10 AM, Kamal Chandraprakash <
kamal.chandraprak...@gmail.com> wrote:
> add these lines at the end of your log4j.properties:
>
> log4j.logger.org.apache.kafka.clients.producer=WARN
>
> On Thu, Sep 7, 2017 at 5:27 PM, Raghav w
ayout
log4j.appender.file.layout.ConversionPattern=%d{dd-MM-yyyy HH:mm:ss} %-5p
%c{1}:%L - %m%n
On Thu, Sep 7, 2017 at 2:34 AM, Viktor Somogyi
wrote:
> Hi Raghav,
>
> I think it is enough to raise the logging level
> of org.apache.kafka.clients.producer.ProducerConfig to WARN in log4j.
> Also I'd like to
Can you post the exact log messages that you are seeing?
>
> -Jaikiran
>
>
>
> On 07/09/17 7:55 AM, Raghav wrote:
>
>> Hi
>>
>> My Java code prints the Kafka config every time it does a send, which makes
>> the log very, very verbose.
>>
>> How can
Hi
My Java code prints the Kafka config every time it does a send, which makes
the log very verbose.
How can I reduce the Kafka client (producer) logging in my Java code ?
Thanks for your help.
--
Raghav
Kafka brokers only. The clients were Java clients that used the same client
version as the broker.
On Thu, Aug 31, 2017 at 5:43 AM, Saravanan Tirugnanum
wrote:
> Thank you Raghav. Was it like you upgraded Kafka Broker or Clients or both.
>
> Regards
> Saravanan
>
> On Wednesday, Au
Saravanan
>
>
> On Wednesday, August 9, 2017 at 11:51:19 PM UTC-5, Raghav wrote:
>>
>> Hi
>>
>> I am sending very small 32 byte message to Kafka broker in a tight loop
>> with 250ms sleep. I have one broker, 1 partition, and replication factor =
>> 1.
>&g
[myid:] - INFO [main:NIOServerCnxnFactory@89] -
binding to port 0.0.0.0/0.0.0.0:2181
Killed
--
Raghav
Broker is 100% running. The ZK path shows /brokers/ids/1
On Fri, Aug 18, 2017 at 1:02 AM, Yang Cui wrote:
> please use zk client to check the path:/brokers/ids in ZK
>
> Sent from my iPhone
>
> > On Aug 18, 2017, at 3:14 PM, Raghav wrote:
> >
> > Hi
> >
> > I have a 1 broker
07:05:47,813] ERROR
org.apache.kafka.common.errors.InvalidReplicationFactorException:
replication factor: 1 larger than available brokers: 0
(kafka.admin.TopicCommand$)
Thanks.
--
Raghav
Hey Martin
I am using the default setting for queue.enqueue.timeout.ms, since I have not
set it in my Java client. The network doesn't seem to time out either.
Could I be missing something else ?
On Sat, Aug 12, 2017 at 5:18 AM, Martin Gainty wrote:
>
>
> ____
=1502479584961,
sendTimeMs=1502479584961),
responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
17/08/11 19:26:24 DEBUG internals.AbstractCoordinator:563 Group coordinator
lookup for group ConsumerGroup05 failed: The group coordinator is not
available.
Thanks.
--
Raghav
org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:52)
at
org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:25)
--
Raghav
Thanks.
On Thu, Jul 13, 2017 at 2:41 AM, Rajini Sivaram
wrote:
> Hi Raghav,
>
> You could take a look at https://github.com/apache/
> kafka/blob/trunk/clients/src/test/java/org/apache/kafka/
> test/TestSslUtils.java
>
> Regards,
>
> Rajini
>
> On Wed, Jul 12
Guys, would anyone know about this ?
On Tue, Jul 11, 2017 at 6:20 AM, Raghav wrote:
> Hi
>
> I followed https://kafka.apache.org/documentation/#security to create
> keystore and trust store using Java Keytool. Now, I am looking to do the
> same stuff programmatically using Java.
.
--
Raghav
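Rajini's TestSslUtils pointer is the pure-Java route; a simpler hedged sketch is shelling out to keytool from Java (paths, passwords, and the CN are illustrative):

  import java.util.Arrays;

  public class KeystoreGen {
      static void run(String... cmd) throws Exception {
          // inheritIO surfaces keytool's own output/errors on our console
          Process p = new ProcessBuilder(cmd).inheritIO().start();
          if (p.waitFor() != 0) {
              throw new IllegalStateException("failed: " + Arrays.toString(cmd));
          }
      }

      public static void main(String[] args) throws Exception {
          // Same first step as https://kafka.apache.org/documentation/#security_ssl,
          // made non-interactive with -storepass/-keypass/-dname/-noprompt
          run("keytool", "-genkey", "-keyalg", "RSA",
              "-keystore", "server.keystore.jks", "-alias", "localhost",
              "-validity", "365", "-storepass", "changeit",
              "-keypass", "changeit", "-dname", "CN=localhost", "-noprompt");
      }
  }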
Hi Rajini
Now that 0.11.0 is out, can we use the Admin client ? Is there any
example code for this ?
Thanks.
On Wed, May 24, 2017 at 9:06 PM, Rajini Sivaram
wrote:
> Hi Raghav,
>
> Yes, you can create ACLs programmatically. Take a look at the use of
> AclCommand.main in https:
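For reference, a hedged sketch with the AdminClient ACL calls (KIP-140; shown with the ResourcePattern API from 2.0+, where 0.11 used the older Resource class; the broker address, topic, and principal are illustrative):

  import java.util.Collections;
  import java.util.Properties;
  import org.apache.kafka.clients.admin.AdminClient;
  import org.apache.kafka.clients.admin.AdminClientConfig;
  import org.apache.kafka.common.acl.AccessControlEntry;
  import org.apache.kafka.common.acl.AclBinding;
  import org.apache.kafka.common.acl.AclOperation;
  import org.apache.kafka.common.acl.AclPermissionType;
  import org.apache.kafka.common.resource.PatternType;
  import org.apache.kafka.common.resource.ResourcePattern;
  import org.apache.kafka.common.resource.ResourceType;

  public class CreateAclExample {
      public static void main(String[] args) throws Exception {
          Properties props = new Properties();
          props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9093");
          try (AdminClient admin = AdminClient.create(props)) {
              // Allow User:Bob to read topic "payments" from any host
              AclBinding binding = new AclBinding(
                      new ResourcePattern(ResourceType.TOPIC, "payments", PatternType.LITERAL),
                      new AccessControlEntry("User:Bob", "*",
                              AclOperation.READ, AclPermissionType.ALLOW));
              admin.createAcls(Collections.singleton(binding)).all().get();
          }
      }
  }

describeAcls and deleteAcls round out the API for reading back and removing entries.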
Thanks for the update, Guozhang.
On Thu, Jun 22, 2017 at 9:52 PM, Guozhang Wang wrote:
> Raghav,
>
> We are going through the voting process now, expecting to have another RC
> and release in a few more days.
>
>
> Guozhang
>
> On Thu, Jun 22, 2017 at 3:59
Hi
Would anyone know when is the Kafka 0.11.0 scheduled to be released ?
Thanks.
--
Raghav
er cannot receive the message.
>
> What value should we use for advertised.listeners so that Producer can
> write and Consumers can read ?
>
> Thanks.
>
--
Raghav
ng PoC
> > myself) - so probably some other power user can chime in?
> >
> > KR,
> >
> > On 30 May 2017 at 23:35, Raghav wrote:
> >
> > > Hi
> > >
> > > I want to know if there are Java APIs for the following. I want to be
> &
replication and partition.
2. Push ACLs into Kafka Cluster
3. Get existing ACL info from Kafka Cluster
Thanks.
Raghav
Hi Alex
In fact I copied the same configuration that Rajini pasted above and it
worked for me. You can try the same. Let me know if it doesn't work.
Thanks.
On Fri, May 26, 2017 at 4:19 AM, Kamalov, Alex
wrote:
> Hey Raghav,
>
>
>
> Yes, I would very much love to get yo
908 209-4484
>
> On May 24, 2017 9:29 PM, "Raghav" wrote:
>
>> Mike
>>
>> I am not using jaas file. I literally took the config Rajini gave in the
>> previous email and it worked for me. I am using ssl Kafka with ACLs. I am
>>
configure. Any assistance would be greatly appreciated. Thanks in advance
>
> kafka: { version: 0.10.1.1 }
>
> zkper: { version: 3.4.9 }
>
> Conrad Bennett Jr.
>
>
--
Raghav
I initially tried kerberos, but it felt too complicated, so gave up and
only tried SSL.
On Wed, May 24, 2017 at 7:47 PM, Mike Marzo
wrote:
> Thanks. We will try it. Struggling with krb5 and acls
>
> mike marzo
> 908 209-4484
>
> On May 24, 2017
Out of interest, are you
> starting your brokers with a jaas file? If so, do you mind sharing the client
> and server side jaas entries so I can validate what I'm doing.
>
> mike marzo
> 908 209-4484
>
> On May 24, 2017 10:54 AM, "Raghav" wrote:
>
> > Hi R
with a CA to sign certificates. Hopefully that would
work too.
Thanks a lot again.
Raghav
On Wed, May 24, 2017 at 7:04 AM, Rajini Sivaram
wrote:
> Raghav/Darshan,
>
> Can you try these steps on a clean installation of Kafka? It works for me,
> so hopefully it will work for you.
Rajini
I will try and report to you shortly. Many thanks.
Raghav
On Wed, May 24, 2017 at 7:04 AM, Rajini Sivaram
wrote:
> Raghav/Darshan,
>
> Can you try these steps on a clean installation of Kafka? It works for me,
> so hopefully it will work for you. And then you can adapt to y
wrote:
> Raghav
>
> I saw few posts of yours around Kafka ACLs and the problems. I have seen
> similar issues where Writer has not been able to write to any topic. I have
> seen "leader not available" and sometimes "unknown topic or partition", and
> "topi
Hello Kafka Users
I am a new Kafka user and am trying to make Kafka SSL work with authorization
and ACLs. I followed posts from the Kafka and Confluent docs exactly to the
point, but my producer cannot write to the kafka broker. I get
"LEADER_NOT_FOUND" errors. And even the consumer throws the same errors.
Can s
: Create from
hosts: *
[root@kafka1 KAFKA]#
Thanks.
On Mon, May 22, 2017 at 8:02 AM, Rajini Sivaram
wrote:
> If you are using auto-create of topics, you also need to grant Create
> access to kafka-cluster.
>
> On Mon, May 22, 2017 at 9:51 AM, Raghav wrote:
>
> > Hi Rajini
>
}
(org.apache.kafka.clients.NetworkClient)
On Mon, May 22, 2017 at 8:02 AM, Rajini Sivaram
wrote:
> If you are using auto-create of topics, you also need to grant Create
> access to kafka-cluster.
>
> On Mon, May 22, 2017 at 9:51 AM, Raghav wrote:
>
> > Hi Rajini
> >
> > I t
= 10.10.0.23 on resource =
Cluster:kafka-cluster (kafka.authorizer.logger)
On Mon, May 22, 2017 at 6:34 AM, Rajini Sivaram
wrote:
> Raghav,
>
> I don't believe we do reverse DNS lookup for matching ACL hosts. Have you
> tried defining ACLs with host IP address?
>
> On Mo
(kafka.authorizer.logger)
[2017-05-22 06:10:16,942] DEBUG Principal = User:CN=kafka2 is Denied
Operation = Describe from host = 10.10.0.23 on resource =
Topic:kafka-testtopic (kafka.authorizer.logger)
Thanks.
On Sun, May 21, 2017 at 10:52 PM, Raghav wrote:
> I tried all possible ways (including the way
er:2181
> --add --allow-principal User:CN=Bob,O=FB,OU=MA,L=MP,ST=CA,C=US
> --allow-host "*" --operation Read --operation Write --topic TOPICNAME
>
>
> On 19.05.17 at 20:02, "Raghav" wrote:
>
> If it helps, this is how I generated the keystone for my clien
2017 at 10:32 AM, Raghav wrote:
> Hi
>
> I have a SSL setup with Kafka Broker, Producer and Consumer, and it works
> fine. I tried to setup ACLs as given on the website. When I start my
> producer, I am getting this error:
>
>
> [root@kafka-dev2 KAFKA]# bin/kafka-console-p
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:Bob
When the certificate was being generated for the Producer, Bob was used as
the CN.
Am I missing something here ? Please help
Thanks.
Raghav
>> > > (org.apache.kafka.clients.NetworkClient)
> > >> > >
> > >> > > XXXWMXXX-7:kafka_2.11-0.10.1.0 rbaddam$ cat client-ssl.properties
> > >> > >
> > >> > > #group.id=sslgroup
> > >> > > security.protocol=SSL
> > >> > > ssl.truststore.location=/Users/rbaddam/Desktop/Dev/kafka_2.11-0.10.1.0/ssl/client.truststore.jks
> > >> > > ssl.truststore.password=123456
> > >> > >
> > >> > > #Configure below if you use client auth
> > >> > > ssl.keystore.location=/Users/rbaddam/Desktop/Dev/kafka_2.11-0.10.1.0/ssl/client.keystore.jks
> > >> > > ssl.keystore.password=123456
> > >> > > ssl.key.password=123456
> > >> > >
> > >> > > XXXWMXXX-7:kafka_2.11-0.10.1.0 rbaddam$ bin/kafka-console-consumer.sh --bootstrap-server 10.247.195.122:9093 --new-consumer --consumer.config client-ssl.properties --topic ssltopic --from-beginning
> > >> > >
> > >> > > [2016-12-13 14:53:28,817] WARN Error while fetching metadata with correlation id 1 : {ssltopic=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
> > >> > >
> > >> > > [2016-12-13 14:53:28,819] ERROR Unknown error when running consumer: (kafka.tools.ConsoleConsumer$)
> > >> > >
> > >> > > org.apache.kafka.common.errors.GroupAuthorizationException: Not authorized to access group: console-consumer-52826
> > >> > >
> > >> > >
> > >> > > Thanks in advance,
> > >> > >
> > >> > > Raghu - raghu98...@gmail.com
> > >> >
> > >>
> > >
> > >
> > >
> > > --
> > > G.Kiran Kumar
> > >
> >
> >
> >
> > --
> > G.Kiran Kumar
> >
>
--
Raghav
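For the GroupAuthorizationException quoted above, the usual fix is granting the consumer's group in the ACLs; a hedged sketch (the principal and zookeeper address are illustrative; the topic and group names are taken from the quoted log):

  bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
    --add --allow-principal User:CN=client \
    --operation Read --topic ssltopic --group console-consumer-52826

Since the console consumer picks a fresh group id on each run, it is common to grant --group '*' instead, or pin group.id in client-ssl.properties.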
store and key store. In this
test, I did not add the CA cert to either the keystore or the trust store.
Thanks for all your help.
On Thu, May 18, 2017 at 8:26 AM, Rajini Sivaram
wrote:
> Raghav,
>
> Perhaps what you want to do is:
>
> *You do (for the brokers):*
>
> Ge
y 18, 2017 at 6:26 AM, Rajini Sivaram
wrote:
> Raghav,
>
> Yes, you can create a truststore with your customers' certificates and
> vice-versa. It will be best to give your CA certificate to your customers
> and get the CA certificate from each of your customers and add them
Another quick question:
Say we chose to add our customers' certificates directly to our brokers'
trust store and vice versa; could that work ? There is no documentation on
the Kafka or Confluent site for this.
Thanks.
On Wed, May 17, 2017 at 1:56 PM, Rajini Sivaram
wrote:
> Raghav,
&g
As mentioned in
https://kafka.apache.org/documentation/#security I have to enter the
password manually. It would be great if we could automate this process,
either through a script or Java code. Any suggestions ...
On Tue, May 16, 2017 at 10:58 AM, Raghav wrote:
> Many thanks, Rajini.
>
> On Tue, Ma
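A hedged sketch of driving keytool non-interactively (the flags are standard keytool; the file names, CN, and env var are assumptions):

  keytool -genkey -keyalg RSA -keystore server.keystore.jks -alias localhost \
    -validity 365 -storepass "$STOREPASS" -keypass "$STOREPASS" \
    -dname "CN=broker1.example.com" -noprompt

The same -storepass/-keypass/-noprompt flags work for the -import and -export steps in the docs, which removes all interactive prompts.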