What do you have configured, do you have the brokers set as super users,
with the right certificate?
On Wed, Jun 1, 2016 at 6:43 AM 换个头像 wrote:
> Hi Kafka Experts,
>
>
> I set up a secured Kafka cluster (SASL/PLAIN authentication). But when I try
> to add ACLs for some existing topics, all three b
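As a broker-side sketch of the super-user setup being asked about (the principal names are hypothetical; adjust to your certificates or SASL usernames):

```properties
# server.properties sketch -- principals are placeholders
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:CN=broker1.example.com;User:CN=broker2.example.com
```

Without the brokers listed as super users, inter-broker replication requests can themselves fail authorization once the authorizer is enabled.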
Hi Kafka Experts,
I set up a secured Kafka cluster (SASL/PLAIN authentication). But when I try to
add ACLs for some existing topics, all three brokers output errors like "Not
authorized to access topics: [Topic authorization failed.]".
I checked my configuration several times according to of
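For reference, a hedged sketch of adding an ACL to an existing topic with the stock tool (ZooKeeper address, principal, and topic name are placeholders):

```shell
# Hypothetical: grant a principal read/write access to an existing topic
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:alice \
  --operation Read --operation Write \
  --topic my-topic
```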
Hi,
There are a number of posts floating around on the internet suggesting that
Kafka cannot persist messages indefinitely?
Primarily that Kafka partitions are pinned to a node and they can't
outgrow the storage capacity of a node.
Can someone help me understand this limitation and how it can be overco
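On the question above: a single partition does live entirely on its replicas' disks, but total cluster capacity can be grown by adding brokers and moving partitions onto them. A hedged sketch with the stock tooling (the JSON file and broker ids are placeholders):

```shell
# Hypothetical: after adding brokers 4 and 5, generate a reassignment plan
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --topics-to-move-json-file topics-to-move.json \
  --broker-list "1,2,3,4,5" --generate
# Review the proposed plan, then run again with
# --reassignment-json-file <plan.json> --execute
```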
Hello Everyone,
I have a load of ~10k messages/sec. As the load increases, I see a burst of
the following error in Kafka (before everything starts working fine again):
*Error*: ERROR kafka.server.ReplicaManager: [Replica Manager on Broker 22]:
Error processing append operation on partition _topic_name
Hi Umesh,
It's the latter - ephemeral nodes under /brokers/ids
Best,
Shikhar
On Tue, May 31, 2016 at 8:55 PM Unmesh Joshi wrote:
> Hi,
>
> In a Kafka cluster, how do brokers find other brokers? Is there a gossip
> style protocol used? Or does it use ZooKeeper ephemeral nodes to figure out
> live
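As a sketch, those ephemeral registrations can be inspected with the bundled ZooKeeper shell (host and port are assumptions; exact invocation varies slightly by version):

```shell
# List the live broker ids registered as ephemeral znodes
bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids
```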
We use 0.8.2.2.
2016-05-31 20:14 GMT+08:00 Tom Crayford :
> Is this under 0.8? There are a few known bugs in 0.8 that can lead to this
> situation. I'd recommend upgrading to 0.9 as soon as is viable to prevent
> this and many other kinds of issues that were fixed in 0.9.
>
> Thanks
>
> Tom Crayf
We use 0.8.2.2. Is this version OK?
2016-05-31 20:12 GMT+08:00 Tom Crayford :
> Hi,
>
> Which version of Kafka are you running? We run thousands of clusters, and
> typically use this mechanism for replacing damaged hardware, and we've only
> seen this issue under Kafka 0.8, where the controller c
Hi All,
Has anyone used HDP to run Kafka? I used it and faced a problem. The following
is the error info:
[image: Inline image 2]
The following is my HDP configuration:
[image: Inline image 1]
Should I set some configuration in HDP?
Thanks in advance.
Thanks,
Nicole
Hello.
I've implemented a Kafka consumer application which consumes a large volume of
monitoring data from the Kafka broker and analyzes that data accordingly.
I referred to a guide,
http://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0.9-consumer-client,
since I thought the
Hi,
In a Kafka cluster, how do brokers find other brokers? Is there a gossip
style protocol used? Or does it use ZooKeeper ephemeral nodes to figure out
live brokers?
Thanks,
Unmesh
Hello,
I'm working on a multi-data-center Kafka installation in which all clusters have
the same topics, and the producers will be able to connect to any of the clusters.
I would like the ability to dynamically control the set of clusters a producer
will be able to connect to, which will allow graceful
Hi Igor, a change in the number of brokers generally doesn't require
configuration or code changes in producers and consumers. You will need to
change bootstrap.servers if its original value no longer contains an active
broker.
Alex
On Tue, May 31, 2016 at 12:44 PM, Igor Kravzov
wrote:
> What i
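A client-side sketch of the advice above (host names are placeholders): listing several brokers means the bootstrap list still works if any single broker goes away, without code changes.

```properties
# Producer/consumer client config sketch -- hosts are illustrative
bootstrap.servers=broker1:9092,broker2:9092,broker3:9092
```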
What if the number of brokers changes? Does it mean I need to change
configuration or potentially recompile my producer and consumer?
On Tue, May 31, 2016 at 3:27 PM, Alex Loddengaard wrote:
> The "old" consumer used ZooKeeper. The "new" consumer, introduced in 0.9,
> doesn't use ZooKeeper. The produ
The "old" consumer used ZooKeeper. The "new" consumer, introduced in 0.9,
doesn't use ZooKeeper. The producer doesn't use ZooKeeper, either. However,
brokers still use ZooKeeper.
Alex
On Tue, May 31, 2016 at 12:03 PM, Igor Kravzov
wrote:
> When I look at code samples producers mostly write to b
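A config-level sketch of the old-vs-new distinction described above (host names are placeholders):

```properties
# Old (0.8) consumer -- talks to ZooKeeper
zookeeper.connect=zk1:2181

# New (0.9+) consumer, and all producers -- talk to brokers directly
bootstrap.servers=broker1:9092,broker2:9092
```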
Hi,
How can I track the progress of a Kafka Streams job?
The only reference I see is "commit.interval.ms" which controls how often
offset is committed.
Where is it committed by default, and is there a tool to read it back? Maybe
something similar to bin/kafka-consumer-groups.sh.
I'd like to loo
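On the question above: a Kafka Streams application commits offsets like a regular consumer group named after its application.id, so the consumer-groups tool should be able to describe it (group name and host are placeholders; assumes 0.9+ tooling):

```shell
# Hypothetical: show committed offsets and lag for a Streams app
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --new-consumer --describe --group my-streams-app
```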
Thanks Alex.
On Tue, May 31, 2016 at 2:37 PM, Alex Loddengaard wrote:
> Hi Igor, see inline:
>
> On Sat, May 28, 2016 at 8:14 AM, Igor Kravzov
> wrote:
>
> > I need some clarification on subject.
> > In Kafka documentations I found the following:
> >
> > Kafka only provides a total order over m
When I look at code samples, producers mostly write to brokers and consumers
use ZooKeeper to consume from topics.
Using the Microsoft .NET client (
https://github.com/Microsoft/CSharpClient-for-Kafka) I wrote a producer
which uses ZooKeeper and was able to write data successfully.
Am I missing somethi
Hi Igor, see inline:
On Sat, May 28, 2016 at 8:14 AM, Igor Kravzov
wrote:
> I need some clarification on subject.
> In Kafka documentations I found the following:
>
> Kafka only provides a total order over messages *within* a partition, not
> between different partitions in a topic. Per-partitio
Hi Kiran, can you enable SASL logging? Do it with
"-Dsun.security.krb5.debug=true".
Alex
On Fri, May 27, 2016 at 8:15 PM, kiran kumar wrote:
> Hi Alex,
>
> Thanks for the response.
>
> Here is the latest log. looks like it is failing at session establishment
> after connection establishment suc
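For example, that flag can be passed to the broker or client JVM via KAFKA_OPTS, which the stock start scripts honor (the exact launch command is a placeholder):

```shell
# Sketch: enable Kerberos debug logging for the JVM started by the Kafka scripts
export KAFKA_OPTS="-Dsun.security.krb5.debug=true"
# then start as usual, e.g. bin/kafka-server-start.sh config/server.properties
```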
FYI: I fixed the docs of schema registry (vProps -> props).
Best, Michael
On Tue, May 31, 2016 at 2:05 AM, Rick Mangi wrote:
> That was exactly the problem, I found the example here to be very helpful
> -
> https://github.com/confluentinc/examples/blob/master/kafka-clients/specific-avro-consum
Hafsa, Florin
First things first: it is possible to scale a Kafka cluster up or down (i.e.
add/remove servers).
And as has been noted in this thread, after you add a server to a cluster, you
need to rebalance the topic partitions in order to put the newly added server
into use.
And similarly, be
Yes, the one that the SinkConnector uses is the WorkerSinkTaskContext;
unfortunately, it creates and uses it internally, but doesn't expose any
accessors for it, nor does the constructor allow me to pass one in for it
to use.
-Jack
On Tue, May 31, 2016 at 11:34 AM Dean Arnold wrote:
> H
Have you tried either of the SinkTaskContext.offset() methods?
https://kafka.apache.org/0100/javadoc/org/apache/kafka/connect/sink/SinkTaskContext.html
On Tue, May 31, 2016 at 8:43 AM, Jack Lund
wrote:
> I'm trying to use the Connector API to write data to a backing store (HDFS
> for now, but
I'm trying to use the Connector API to write data to a backing store (HDFS
for now, but probably something like S3 later) for potential replay back
into Kafka later. However, I can't seem to find how to reset the offsets
for the SinkConnector.
I've found the rewind() function on the WorkerSinkTask
In our system some data can be as big as 10MB.
Is it OK to send 10 MB message through Kafka? What configuration
parameters should I check/set?
It is going to be one topic with one consumer - Apache NiFi GetKafka
processor.
Is one partition enough?
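A hedged sketch of the size-related settings usually involved (values are illustrative for ~10 MB messages; the broker, producer, and consumer each have their own limit that must be raised together):

```properties
# Broker (server.properties)
message.max.bytes=10485760
replica.fetch.max.bytes=10485760

# Producer
max.request.size=10485760

# New (0.9+) consumer; the old consumer uses fetch.message.max.bytes instead
max.partition.fetch.bytes=10485760
```

One partition is sufficient for a single consumer, though it caps throughput to what one broker and one consumer thread can handle.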
Hey Florin,
you could put the JMX env config into the start script
(kafka-server-start.sh), before the line "exec $base_dir...".
Not the best way... but it works.
best wishes
johannes
2016-05-31 16:02 GMT+02:00 Spico Florin :
> Hello!
>I'm using Kafka 0.9.1 as a service in Horton Works Ambar
Hi!
What version of Kafka are you using? What do you mean by "Kafka needs
rebalancing"? Rebalancing of what? Can you please be more specific?
Regards,
Florin
On Tue, May 31, 2016 at 4:58 PM, Hafsa Asif
wrote:
> Hello Folks,
>
> Today , my team members shows concern that whenever we increase
Hi folks.
I used Kafka 0.8 to create a proof-of-concept, using the SimpleConsumer.
In this code,
I locally kept track of the offset of the last message read from a topic.
I'm now refactoring the code for production, and I've also started using
version 0.9 of Kafka.
There is no more SimpleConsume
Hello!
I'm using Kafka 0.9.1 as a service in Hortonworks Ambari.
I have installed on one machine M1 Kafka manager that needs the JMX_PORT
for getting the consumers for a specific topic.
If I'm running the kafka scripts such as kafka-consumer-groups or
kafka-topics from the same machine where I
Hello Folks,
Today, my team members raised a concern that whenever we add a node to the
Kafka cluster, Kafka needs rebalancing. The rebalancing is a sort of manual
and undesirable step whenever scaling happens. Second, if Kafka scales up, then
it cannot be scaled down. Please provide us proper guidance over
Hi,
Currently we are developing -SPiDR WebRTC GW- for enabling real-time
communications between the web and SIP worlds.
In a few months, we're planning to use Kafka as a distributed event-management
framework in our solution.
One of our deployment model will be openstack (https://www.openstack.org/) .
Hi
Our scenario:
One of the brokers in the Kafka cluster is down.
The cluster has been rebalanced, but the metadata refresh does not bring back
the same details every time.
The metadata refresh sometimes waits indefinitely.
We are using the kafka-clients jar, version 0.8.2.1, to send data to the Kafka cluster
w
Is this under 0.8? There are a few known bugs in 0.8 that can lead to this
situation. I'd recommend upgrading to 0.9 as soon as is viable to prevent
this and many other kinds of issues that were fixed in 0.9.
Thanks
Tom Crayford
Heroku Kafka
On Tue, May 31, 2016 at 6:19 AM, Fredo Lee wrote:
>
Hi,
Which version of Kafka are you running? We run thousands of clusters, and
typically use this mechanism for replacing damaged hardware, and we've only
seen this issue under Kafka 0.8, where the controller can get stuck (due to
a few bugs in Kafka) and not be functioning. If you are on 0.8, I'd
If you want system administrators to be unable to see the data, the only
option is encryption, with only the clients sharing the key (or whatever is
used to encrypt/decrypt the data), like the example from Eugene. I don't know the
kind of messages you have, but you could always wrap something around an
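Following up on the wrap-around idea: a minimal, runnable sketch of encrypt-before-produce / decrypt-after-consume. The XOR keystream below is a toy stand-in for a real cipher (e.g. AES-GCM) so the example stays dependency-free; key distribution is out of scope and all names are illustrative.

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream derived from SHA-256 -- a placeholder for a real cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def wrap(key: bytes, payload: bytes) -> bytes:
    # Encrypt-then-MAC: call this on the payload before handing it to the producer.
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(payload, _keystream(key, nonce, len(payload))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + tag + ct

def unwrap(key: bytes, blob: bytes) -> bytes:
    # Verify-then-decrypt: call this on the consumer side after fetching.
    nonce, tag, ct = blob[:16], blob[16:48], blob[48:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("message authentication failed")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

key = os.urandom(32)  # shared only among the clients, never the brokers
blob = wrap(key, b"sensitive payload")
assert unwrap(key, blob) == b"sensitive payload"
```

The brokers and anyone reading their disks only ever see the wrapped bytes.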
I've asked the same question in the past, and disk encryption was suggested as
a solution as well.
However, as far as I know, disk encryption will not prevent your data from being
stolen when the machine is compromised.
What we are looking for is an additional barrier, so that even system
admini
Great to hear Stefano, thanks for the update.
Ismael
On Tue, May 31, 2016 at 10:13 AM, Stefano Baghino <
stefano.bagh...@radicalbit.io> wrote:
> Hi Ismael,
>
> thank you so much for helping out: both tips (using the proper configs for
> my Kafka version and appending the --new-consumer option) p
Hi Ismael,
thank you so much for helping out: both tips (using the proper configs for
my Kafka version and appending the --new-consumer option) proved right and
I've been able to run my simple Flink job reading from and writing to Kafka
in a secure environment.
Best,
Stefano
On Mon, May 30, 2016
I find that the new broker with the old broker id always fetches messages from
itself because it believes it's the leader of some partitions.
2016-05-31 15:56 GMT+08:00 Fredo Lee :
> We have a Kafka cluster and one of the brokers is down because its disk was
> damaged, so we use the same broker id in a
We have a Kafka cluster and one of the brokers is down because its disk was
damaged, so we use the same broker id in a new server machine.
When starting Kafka on the new machine, there are lots of error messages: "[2016-05-31
10:30:49,792] ERROR [ReplicaFetcherThread-0-1013], Error for partition
[consup-25,20] t
Yes, if you read the upgrade documentation, you'll see "it is important to
upgrade your Kafka clusters before upgrading your clients" mentioned:
http://kafka.apache.org/documentation.html#upgrade
It is also a common question in the mailing lists. It should probably be in
the FAQ.
Ismael
On Tue,
Is this documented somewhere?
On Mon, May 30, 2016 at 8:16 PM, Ismael Juma wrote:
> Hi Mikael,
>
> This is expected. Older clients work with newer brokers, but newer clients
> don't work with older brokers.
>
> Ismael
> On 30 May 2016 17:29, "Mikael Ståldal" wrote:
>
> > I am experiencing compa