Hi,
We have a use case where we need to start a Kafka consumer with a fixed
list of topics and add more topics on the fly. Since there is no pattern
in the topic names, using a pattern for dynamic topic subscription is
not feasible.
Is it good practice to subscribe to topics on given kaf
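A minimal sketch of the add-topics-on-the-fly idea, assuming the kafka-python client, where `subscribe()` replaces (rather than extends) the current subscription, so the complete list must be passed each time. The topic names and the helper function are hypothetical illustrations:

```python
# Sketch: keep a growing set of topics and re-subscribe with the full list.
# Topic names and the helper below are hypothetical.

def updated_subscription(current_topics, new_topics):
    """Return the complete, sorted topic list to hand to consumer.subscribe()."""
    return sorted(set(current_topics) | set(new_topics))

# Against a real cluster (not run here), assuming kafka-python:
# from kafka import KafkaConsumer
# consumer = KafkaConsumer(bootstrap_servers="localhost:9092", group_id="g1")
# topics = {"orders", "payments"}            # the fixed initial list
# consumer.subscribe(topics=sorted(topics))
# ...later, when a new topic must be added on the fly:
# topics = set(updated_subscription(topics, ["refunds"]))
# consumer.subscribe(topics=sorted(topics))  # subscribe() replaces the old set
```

Note that each re-subscribe triggers a consumer-group rebalance, so batching newly discovered topics before re-subscribing may be worthwhile.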
For solution A, there are actually two parts:
- zookeeper: a manual operation to migrate ZooKeeper; so far I think it
needs at least 6 steps. This part will be tricky: we will have 3 ZooKeeper
nodes with Exhibitor, and 5 new ZooKeeper nodes without e
- Kafka: this should be easier; try launching new on
Did you intend to attach pictures following the two solutions?
It seems the pictures didn't come through.
FYI
On Wed, Jan 3, 2018 at 8:39 PM, Tony Liu wrote:
Hi All,
This post is aimed at gathering experience: how did you do a
`Kafka/zookeeper` migration? :)
All of our Kafka/ZooKeeper nodes are running on AWS. For certain reasons, we
have to replace all the existing servers (you can simply think of it as: we
will terminate the old servers and create new servers to re
Good point Jun Rao. We've been trying to get things to scale with
normal mode first and haven't tried failure scenarios yet.
Thanks for the pointer to KIP-227! It looks promising indeed. I'm
working on sprucing up my reproduction environment and tests and
hopefully will have more info to share soon.
Hi, Andrey,
If the test is in the normal mode, it would be useful to figure out why ZK
is the bottleneck since the normal mode typically doesn't require ZK
accesses.
Thanks,
Jun
On Wed, Jan 3, 2018 at 3:00 PM, Andrey Falko wrote:
Ben Wood:
1. We have 5 ZK nodes.
2. I only tracked outstanding requests thus far from ZK-side of
things. At 9.5k topics, I recorded about 5k outstanding requests. I'll
start tracking this better for my next run. Anything else worth
tracking?
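For tracking outstanding requests from the ZK side, one option is ZooKeeper's `mntr` four-letter command, which reports `zk_outstanding_requests` among other stats. A rough sketch (host and port are assumptions; on newer ZooKeeper releases `mntr` must be whitelisted via `4lw.commands.whitelist`):

```python
# Sketch: read ZooKeeper's `mntr` four-letter command over a plain socket
# and parse its tab-separated "key\tvalue" lines into a dict.
import socket

def fetch_mntr(host="localhost", port=2181, timeout=5.0):
    """Send `mntr` to a ZooKeeper server and return the raw response text."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(b"mntr")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode()

def parse_mntr(text):
    """Turn tab-separated mntr output lines into a dict of strings."""
    stats = {}
    for line in text.splitlines():
        if "\t" in line:
            key, _, value = line.partition("\t")
            stats[key] = value
    return stats

# e.g. parse_mntr(fetch_mntr())["zk_outstanding_requests"]
```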
Jun Rao:
I'm testing the latest 1.0.0. I'm testing norma
Hi, Andrey,
Thanks for reporting the results. Which version of Kafka are you testing?
Also, it would be useful to know if you are testing the normal mode when
all replicas are up and in sync, or the failure mode when some of the
replicas are being restarted. Typically, ZK is only accessed in the f
1. How many ZK nodes in your ensemble?
2. Do you have metrics on how many requests ZK is handling?
On Wed, Jan 3, 2018 at 1:48 PM, Andrey Falko wrote:
Hi everyone,
We are seeing more and more push from our Kafka users to support well
over 10k replicated partitions. We'd ideally like to avoid running multiple
clusters, to keep our cluster management and monitoring simple. We started
testing Kafka to see how many replicated partitions it could
The logs say,
log4j:ERROR Failed to rename [C:\kafka_2.10-0.10.2.1/logs/log-cleaner.log] to
[C:\kafka_2.10-0.10.2.1/logs/log-cleaner.log.2018-01-02-09].
Does that second filename already exist?
Does the user starting Kafka have permission to create files in that directory?
Does the user starting
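The two checks suggested above can be scripted; a rough sketch using the standard library (the paths here are placeholders, and note that on Windows the rename can also fail simply because another process still holds the file open, which these checks cannot detect):

```python
# Sketch: report likely reasons a log-file rename would fail.
import os

def diagnose_rename(src, dst):
    """Return a list of human-readable problems for renaming src -> dst."""
    problems = []
    if os.path.exists(dst):
        problems.append("target already exists")
    directory = os.path.dirname(dst) or "."
    if not os.access(directory, os.W_OK):
        problems.append("no write permission on target directory")
    if os.path.exists(src) and not os.access(src, os.W_OK):
        problems.append("no write permission on source file")
    return problems
```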
I'm running kafka_2.11-1.0.0 on my (test) server and version 1.3.5 of
the kafka-python client library (available from PyPI). Subscribing
using patterns seems not to work. I always have to enumerate a set of
topics, and given that the topics are created dynamically as different
kinds of content arri
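If `subscribe(pattern=...)` misbehaves, one workaround (assuming kafka-python; the pattern and topic names are made up) is to list all topics, filter them with the same regex yourself, and subscribe to the explicit list:

```python
# Sketch of a pattern-subscription workaround: filter topic names with a
# regex and subscribe to the resulting explicit list.
import re

def matching_topics(all_topics, pattern):
    """Return the sorted subset of topic names matching the regex pattern."""
    rx = re.compile(pattern)
    return sorted(t for t in all_topics if rx.match(t))

# Against a live cluster (not run here):
# from kafka import KafkaConsumer
# consumer = KafkaConsumer(bootstrap_servers="localhost:9092")
# consumer.subscribe(topics=matching_topics(consumer.topics(), r"^events-"))
```

This has to be re-run periodically to pick up newly created topics, since it snapshots the topic list at call time.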
:facepalm:
They are in the producer and consumer JVMs, as would be totally expected.
I'm only collecting JMX metrics from the brokers. Duh.
Sorry for the noise.
On Wed, Jan 3, 2018 at 12:01 PM, Tim Visher wrote:
Hello Everyone,
I'm trying to get my datadog kafka integration working as expected and I
noticed that the documented `kafka.(producer|consumer)` JMX MBeans do not
appear to be available.
I verified this by using JConsole to browse the MBeans. Sure enough,
kafka.server, kafka.log, kafka.network et
You should check the Kafka docs. You need to increase your open file
descriptors.
Here's a link that describes how to do
it. (Ignore that it's for HDP; the ulimit part is what you need to pay
attention to.)
You can also check the Kafka docs; they say to change open file descriptors
to 100,000 I
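For what it's worth, the current process limit can be inspected from Python via the POSIX `resource` module (Linux/macOS only); the 100,000 figure is the one quoted above and should be tuned for your deployment:

```python
# Sketch: inspect (and optionally raise) this process's open-file limit.
# POSIX-only; `resource` is unavailable on Windows.
import resource

def fd_limits():
    """Return the (soft, hard) RLIMIT_NOFILE limits for this process."""
    return resource.getrlimit(resource.RLIMIT_NOFILE)

# To raise the soft limit toward 100,000 (capped at the hard limit; raising
# the hard limit itself requires root or an entry in limits.conf):
# soft, hard = fd_limits()
# resource.setrlimit(resource.RLIMIT_NOFILE, (min(100_000, hard), hard))
```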
Hello,
Currently ZooKeeper and Kafka are not working properly, and the logs are
not streaming to Logstash. The /var/log partitions across the ZooKeeper
cluster are filled to 100 percent. We are getting the following
messages when we investigate the logs and the service status:
1. Zooke