Kris,
This is a bit surprising, but handling the bootstrap servers, broker
failures/retirement, and cluster metadata properly is genuinely hard to
get right!
https://issues.apache.org/jira/browse/KAFKA-1843 explains some of the
challenges. https://issues.apache.org/jira/browse/KAFKA-3068 shows
Consuming plain JSON is a bit tricky for something like HDFS because all
the output formats expect the data to have a schema. You can read the JSON
data with the provided JsonConverter, but it'll be returned without a
schema. The HDFS connector will currently fail on this because it expects a
fixed
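For anyone following along, a minimal sketch of the worker converter settings involved (property names are from the stock Kafka Connect JsonConverter; the values are an illustration, not a recommendation):

```properties
# Worker config sketch: JsonConverter with the schema envelope enabled.
# With schemas.enable=true the converter expects each record to be wrapped
# as {"schema": {...}, "payload": {...}}; plain JSON lacks that envelope,
# which is why a schema-requiring sink such as the HDFS connector rejects it.
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true
```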
I got a consumer using high-level API based on the example from here:
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
Works fine, but after an hour or so of inactivity it stops responding to new messages.
All I see in the log is:
INFO [2016-02-23 17:06:10,070] org.apache.zo
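In case it helps anyone debugging the same symptom: with the old (ZooKeeper-based) high-level consumer, the usual suspects are the ZooKeeper session/connection timeouts and the consumer timeout. A sketch of the relevant consumer properties (values are illustrative only, not a fix):

```properties
# Old high-level consumer settings worth checking when a consumer
# goes silent after a period of inactivity
zookeeper.session.timeout.ms=6000
zookeeper.connection.timeout.ms=6000
# How long the consumer blocks waiting for a message before raising
# a timeout; -1 means block indefinitely
consumer.timeout.ms=-1
```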
Hey all,
I was reviewing the Kafka connect JDBC driver, and I had a question. Is it
possible to use the JDBC driver with a look-back configured? The reason
that I ask is that there are some known issues with using a modified
timestamp:
Slide 14 here explains one with Oracle:
https://qconsf.com/s
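For context, newer versions of the JDBC source connector do expose a look-back of sorts: a timestamp delay, which holds rows back until their timestamp is at least that old, so late-committing transactions with earlier timestamps are not skipped. A hypothetical source-connector config sketch (the column name is made up; check whether your connector version supports the delay setting):

```properties
mode=timestamp
timestamp.column.name=modified_ts
# Wait this long after a row's timestamp before picking it up, giving
# in-flight transactions with earlier timestamps time to commit
timestamp.delay.interval.ms=60000
```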
Hi Franco,
The default.replication.factor shouldn't be a problem but the error message
does look like you don't have any available brokers?
kafka.admin.AdminOperationException: replication factor: 1 larger than
available brokers: 0
When you are deleting the topic using the kafka-topics tool it do
Hi John,
Glad to help :) I ran into similar issues recently, being confused by what
the offsets mean, so I understand your pain haha.
Best of luck,
Leo
On Tue, Feb 23, 2016 at 1:53 PM, John Bickerstaff
wrote:
> Thanks Leo!
>
> =
> TL;DR summary:
>
> You're correct - I didn't
Well, I am running on the same machine, so I'd say yes
Sent from my iPhone
> On Feb 23, 2016, at 18:05, Martin Gainty wrote:
>
> one more thing to check:
>
> specifically are the /etc/krb5.conf credentials the same you use to
> authenticate to ubuntu.oleg.com
>
> ?
> Martin
> ___
one more thing to check:
specifically are the /etc/krb5.conf credentials the same you use to
authenticate to ubuntu.oleg.com
?
Martin
__
> Subject: Re: Kerberized Kafka setup
See below
On Tue, Feb 23, 2016 at 11:45 AM, vivek shankar
wrote:
> Hello All,
>
> Can you please help with the below:
>
> I was reading up on Kafka 0.9 API version and came across the below :
>
> The following is a draft design that uses a high-available consumer
> coordinator at the broker sid
Harsha,
I followed this blog
(http://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption)
and got an environment via the vagrant setup with no issues. I’ll poke around to see what the
differences are, and if I find the issue I will post.
Thanks for your help anyway.
Cheers
Oleg
On Fe
Thanks Leo!
=
TL;DR summary:
You're correct - I didn't absolutely need the offset.
I had to provide Disaster Recovery advice and couldn't explain the offset
numbers, which wouldn't fly
Explanation for how I got myself confused in the text below -- in case it
helps someone else later.
Than
Hello All,
Can you please help with the below:
I was reading up on Kafka 0.9 API version and came across the below :
The following is a draft design that uses a high-available consumer
coordinator at the broker side to handle consumer rebalance. By migrating
the rebalance logic from the consume
Yeah, I noticed the localhost as well, but I’ve changed it since to FQDN and it
is still the same including 'sname is zookeeper/localh...@oleg.com’
Oleg
> On Feb 23, 2016, at 4:00 PM, Harsha wrote:
>
> What does your zookeeper.connect in server.properties look like? Did you
> use the hostname or
What does your zookeeper.connect in server.properties look like? Did you
use the hostname or localhost?
-Harsha
On Tue, Feb 23, 2016, at 12:01 PM, Oleg Zhurakousky wrote:
> Still digging, but here is more info that may help
>
> [2016-02-23 14:59:24,240] INFO zookeeper state changed (SyncConnected)
>
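For readers following along, the setting in question, sketched with the FQDN from this thread (2181 is the ZooKeeper default port; treat the value as an illustration):

```properties
# server.properties: point Kafka at ZooKeeper by FQDN rather than localhost,
# so the Kerberos service principal (zookeeper/<fqdn>@REALM) that the client
# requests matches the one the ZooKeeper server is actually running with
zookeeper.connect=ubuntu.oleg.com:2181
```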
Still digging, but here is more info that may help
[2016-02-23 14:59:24,240] INFO zookeeper state changed (SyncConnected)
(org.I0Itec.zkclient.ZkClient)
Found ticket for kafka/ubuntu.oleg@oleg.com to go to
krbtgt/oleg@oleg.com expiring on Wed Feb 24 00:59:24 EST 2016
Entered Krb5Context.i
No joy. the same error
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
debug=true
useKeyTab=true
storeKey=true
keyTab="/home/oleg/kafka_2.10-0.9.0.1/config/security/kafka.keytab"
principal="kafka/ubuntu.oleg@oleg.com";
};
Clie
My bad, it should be under the Client section:
Client {
com.sun.security.auth.module.Krb5LoginModule required
debug=true
useKeyTab=true
storeKey=true
serviceName=zookeeper
keyTab="/home/oleg/kafka_2.10-0.9.0.1/config/security/kafka.keytab"
principal="kafk
can you try adding "serviceName=zookeeper" to KafkaServer section like
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
debug=true
useKeyTab=true
storeKey=true
serviceName=zookeeper
keyTab="/home/oleg/kafka_2.10-0.9.0.1/config/secur
More info
I am starting both services as myself, ‘oleg’. Validated that both keytab files
are readable. So I am assuming Zookeeper is started as ‘zookeeper’ and Kafka as
‘kafka’
Oleg
> On Feb 23, 2016, at 2:22 PM, Oleg Zhurakousky
> wrote:
>
> Harsha
>
> Thanks for following up. Here it is
Harsha
Thanks for following up. Here it is:
oleg@ubuntu:~/kafka_2.10-0.9.0.1/config$ cat kafka_server_jaas.conf
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
debug=true
useKeyTab=true
storeKey=true
keyTab="/home/oleg/kafka_2.10-0.9.0.
Oleg,
Can you post your jaas configs. Its important that serviceName
must match the principal name with which zookeeper is running.
Whats the principal name zookeeper service is running with.
-Harsha
On Tue, Feb 23, 2016, at 11:01 AM, Oleg Zhurakousky wrote:
> Hey guys, fir
Hey guys, first post here so bear with me
Trying to set up Kerberized Kafka 0.9.0. Followed the instructions here
http://kafka.apache.org/documentation.html#security_sasl and I seem to be very
close, but not quite there yet.
ZOOKEEPER
Starting Zookeeper seems to be OK (below is the relevant par
Ok, thanks tao.
btw: I think this is a small bug.
2016-02-23 14:24 GMT+01:00 tao xiao :
> The default value is false.
>
>
> https://github.com/apache/kafka/blob/d5b43b19bb06e9cdc606312c8bcf87ed267daf44/clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java#L232
>
> On Tue, 2
The default value is false.
https://github.com/apache/kafka/blob/d5b43b19bb06e9cdc606312c8bcf87ed267daf44/clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java#L232
On Tue, 23 Feb 2016 at 21:14 Franco Giacosa wrote:
> Hi Guys,
>
> I was going over the producer kafka config
Hi Guys,
I was going over the Kafka producer configuration, and the
property block.on.buffer.full in the documentation says:
"When our memory buffer is exhausted we must either stop accepting new
records (block) or throw errors. *By default this setting is true* and we
block, however in some scen
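To make the discrepancy concrete: the documentation text quoted above says blocking is the default, while the ProducerConfig source linked in the reply defines the default as false. A producer config sketch that sidesteps the ambiguity by setting the value explicitly (illustrative only):

```properties
# Choose the buffer-full behavior explicitly rather than relying on the
# (mis)documented default:
#   true  -> block the sender when the memory buffer is exhausted
#   false -> throw an exception instead of blocking
block.on.buffer.full=true
```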
Hello,
I am having the following problem trying to delete a topic.
The topic was auto-created with a default.replication.factor of 1, but my
test cluster has only 1 machine, so now when I start Kafka I get this error:
ERROR [KafkaApi-0] error when handling request Name: TopicMetadataRequest;
Versi
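Two broker settings usually matter for this scenario, sketched below (assumption: a single-broker test cluster; note the error says "available brokers: 0", so the broker itself failing to register in ZooKeeper would need fixing first):

```properties
# server.properties on the test broker
# Topic deletion requests are ignored unless this is enabled
delete.topic.enable=true
# Keep auto-created topics within the number of available brokers
default.replication.factor=1
```

With that in place, running the kafka-topics tool with --delete and the topic name marks the topic for deletion.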