Can you add your JAAS file details? Your JAAS file might have
useTicketCache=true and storeKey=true as well. Example of a
KafkaServer JAAS file:

KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    serviceName="kafka"
    keyTab="/vagrant/keytabs/kafka1.keytab"
    principal="kafka/kafka1.witzend....@witzend.com";
};

and the KafkaClient section:

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useTicketCache=true
    serviceName="kafka";
};
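For the kafka-topics.sh LoginException further down the thread: the CLI tools only pick up a JAAS file if it is passed to the JVM via java.security.auth.login.config. A rough sketch, where the JAAS file path and the ZooKeeper FQDN are placeholders to substitute with your own values:

```shell
# Placeholder path -- point this at the JAAS file containing your KafkaClient section.
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf"

# Use the fully qualified hostname, not localhost, when Kerberos is in play.
./kafka-topics.sh --list --zookeeper your.zookeeper.fqdn:2181
```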

On Wed, Dec 30, 2015, at 03:10 AM, prabhu v wrote:
> Hi Harsha,
>
> I have used the fully qualified domain name. Just for security
> concerns, before sending this mail I replaced our FQDN hostname
> with localhost.
>
> Yes, I have tried kinit and I am able to view the tickets using the
> klist command as well.
>
> Thanks, Prabhu
>
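As a sanity check of the keytab itself, outside of Kafka, something along these lines works (the principal and realm below are placeholders, not the actual values from this thread):

```shell
# Authenticate directly from the keytab, then list the acquired tickets.
kinit -kt /vagrant/keytabs/kafka1.keytab kafka/your.broker.fqdn@YOUR.REALM
klist
```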
> On Wed, Dec 30, 2015 at 11:27 AM, Harsha <ka...@harsha.io> wrote:
>> Prabhu,
>>
>> When using SASL/Kerberos, always make sure you give the FQDN of
>> the hostname. In your command you are using --zookeeper
>> localhost:2181; make sure you change that hostname.
>>
>> "javax.security.auth.login.LoginException: No key to store Will continue
>> connection to Zookeeper server without SASL authentication, if
>> Zookeeper"
>>
>> Did you try kinit with that keytab at the command line?
>>
>> -Harsha
>> On Mon, Dec 28, 2015, at 04:07 AM, prabhu v wrote:
>> > Thanks for the input Ismael.
>> >
>> > I will try and let you know.
>> >
>> > Also need your valuable inputs for the below issue :)
>> >
>> > I am not able to run kafka-topics.sh (0.9.0.0 version):
>> >
>> > [root@localhost bin]# ./kafka-topics.sh --list --zookeeper localhost:2181
>> > [2015-12-28 12:41:32,589] WARN SASL configuration failed:
>> > javax.security.auth.login.LoginException: No key to store Will continue
>> > connection to Zookeeper server without SASL authentication, if Zookeeper
>> > server allows it. (org.apache.zookeeper.ClientCnxn)
>> > ^Z
>> >
>> > I am sure the key is present in its keytab file (I have cross-verified
>> > using the kinit command as well).
>> >
>> > Am I missing anything while calling kafka-topics.sh?
>> >
>> > On Mon, Dec 28, 2015 at 3:53 PM, Ismael Juma <isma...@gmail.com> wrote:
>> > > Hi Prabhu,
>> > >
>> > > kafka-console-consumer.sh uses the old consumer by default, but only the
>> > > new consumer supports security. Use --new-consumer to change this.
>> > >
>> > > Hope this helps.
>> > >
>> > > Ismael
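If useful, here is a sketch of the invocation Ismael describes, assuming the 0.9.x console-consumer flags (--new-consumer, --bootstrap-server, --consumer.config) and the broker port and topic name reported later in this thread:

```shell
# --new-consumer selects the new (security-capable) consumer;
# it takes a broker list via --bootstrap-server instead of --zookeeper.
./kafka-console-consumer.sh --new-consumer \
  --bootstrap-server localhost:9094 \
  --topic test \
  --consumer.config ../config/consumer.properties
```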
>> > > On 28 Dec 2015 05:48, "prabhu v" <prabhuvrajp...@gmail.com> wrote:
>> > > > Hi Experts,
>> > > >
>> > > > I am getting the below error when running the consumer
>> > > > "kafka-console-consumer.sh".
>> > > >
>> > > > I am using the new version 0.9.0.1.
>> > > > Topic name: test
>> > > >
>> > > > [2015-12-28 06:13:34,409] WARN
>> > > > [console-consumer-61657_localhost-1451283204993-5512891d-leader-finder-thread],
>> > > > Failed to find leader for Set([test,0])
>> > > > (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
>> > > > kafka.common.BrokerEndPointNotAvailableException: End point PLAINTEXT not
>> > > > found for broker 0
>> > > >     at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:136)
>> > > >
>> > > > Please find the current configuration below.
>> > > >
>> > > > Configuration:
>> > > >
>> > > > [root@localhost config]# grep -v "^#" consumer.properties
>> > > > zookeeper.connect=localhost:2181
>> > > > zookeeper.connection.timeout.ms=60000
>> > > > group.id=test-consumer-group
>> > > > security.protocol=SASL_PLAINTEXT
>> > > > sasl.kerberos.service.name="kafka"
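One thing worth noting on the consumer.properties above: Java properties values are taken literally, so the quotes around "kafka" likely become part of the service name. The unquoted form is probably what is intended:

```
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
```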
>> > > >
>> > > > [root@localhost config]# grep -v "^#" producer.properties
>> > > > metadata.broker.list=localhost:9094,localhost:9095
>> > > > producer.type=sync
>> > > > compression.codec=none
>> > > > serializer.class=kafka.serializer.DefaultEncoder
>> > > > security.protocol=SASL_PLAINTEXT
>> > > > sasl.kerberos.service.name="kafka"
>> > > >
>> > > > [root@localhost config]# grep -v "^#" server1.properties
>> > > > broker.id=0
>> > > > listeners=SASL_PLAINTEXT://localhost:9094
>> > > > delete.topic.enable=true
>> > > > num.network.threads=3
>> > > > num.io.threads=8
>> > > > socket.send.buffer.bytes=102400
>> > > > socket.receive.buffer.bytes=102400
>> > > > socket.request.max.bytes=104857600
>> > > > log.dirs=/data/kafka_2.11-0.9.0.0/kafka-logs
>> > > > num.partitions=1
>> > > > num.recovery.threads.per.data.dir=1
>> > > > log.retention.hours=168
>> > > > log.segment.bytes=1073741824
>> > > > log.retention.check.interval.ms=300000
>> > > > log.cleaner.enable=false
>> > > > zookeeper.connect=localhost:2181
>> > > > zookeeper.connection.timeout.ms=60000
>> > > > inter.broker.protocol.version=0.9.0.0
>> > > > security.inter.broker.protocol=SASL_PLAINTEXT
>> > > > allow.everyone.if.no.acl.found=true
>> > > >
>> > > > [root@localhost config]# grep -v "^#" server4.properties
>> > > > broker.id=1
>> > > > listeners=SASL_PLAINTEXT://localhost:9095
>> > > > delete.topic.enable=true
>> > > > num.network.threads=3
>> > > > num.io.threads=8
>> > > > socket.send.buffer.bytes=102400
>> > > > socket.receive.buffer.bytes=102400
>> > > > socket.request.max.bytes=104857600
>> > > > log.dirs=/data/kafka_2.11-0.9.0.0/kafka-logs-1
>> > > > num.partitions=1
>> > > > num.recovery.threads.per.data.dir=1
>> > > > log.retention.hours=168
>> > > > log.segment.bytes=1073741824
>> > > > log.retention.check.interval.ms=300000
>> > > > log.cleaner.enable=false
>> > > > zookeeper.connect=localhost:2181
>> > > > zookeeper.connection.timeout.ms=60000
>> > > > inter.broker.protocol.version=0.9.0.0
>> > > > security.inter.broker.protocol=SASL_PLAINTEXT
>> > > > zookeeper.sasl.client=zkclient
>> > > >
>> > > > [root@localhost config]# grep -v "^#" zookeeper.properties
>> > > > dataDir=/data/kafka_2.11-0.9.0.0/zookeeper
>> > > > clientPort=2181
>> > > > maxClientCnxns=0
>> > > > requireClientAuthScheme=sasl
>> > > > authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
>> > > > jaasLoginRenew=3600000
>> > > >
>> > > > Need your valuable inputs on this issue.
>> > > >
>> > > > --
>> > > > Regards,
>> > > >
>> > > > Prabhu.V
>> >
>> > --
>> > Regards,
>> >
>> > Prabhu.V
>
>
> --
> Regards,
>
> Prabhu.V
>
