Gotcha. Thanks again. Will post back once I've tried this with an update!
Chris
On Fri, Jul 21, 2017 at 1:12 PM, Carl Haferd
wrote:
> I would recommend allowing each broker enough time to catch up before
> starting the next, but this may be less of a concern if the entire cluster
> is being brought down and then started from scratch.
I would recommend allowing each broker enough time to catch up before
starting the next, but this may be less of a concern if the entire cluster
is being brought down and then started from scratch. To automate, we poll
until the Kafka process binds to its configured port (9092), and then once all
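(Not part of the original messages: a minimal Python sketch of the kind of
port poll described above. The broker host names, timeout values, and the
restart step itself are placeholders, not details from the thread.)

import socket
import time

def wait_for_port(host, port=9092, timeout_s=300, interval_s=5):
    """Poll until a TCP connection to the broker's listener succeeds,
    or give up after timeout_s seconds."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True          # broker is accepting connections
        except OSError:
            time.sleep(interval_s)   # not up yet; wait and retry
    return False

# Example: restart brokers one at a time, waiting for each to come back
# before moving on (the actual restart command is left out here).
for broker in ["kafka1", "kafka2"]:
    # ... restart the Kafka service on `broker` here ...
    if not wait_for_port(broker, 9092):
        raise RuntimeError(broker + " did not come back on port 9092")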
Thanks Carl.
Always fun to do this stuff in production... ;)
Appreciate the input. I'll try a full cycle and see how that works.
In your opinion, if I stop all brokers and all Zookeeper nodes, then
restart all Zookeepers...at that point can I start both brokers at the same
time, or should I let
I have encountered similar difficulties in a test environment and it may be
necessary to stop the Kafka process on each broker and take Zookeeper
offline before removing the files and zookeeper paths. Otherwise there may
be a race condition between brokers which could cause the cluster to retain
i
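(Not part of the original messages: one way to guard against the race Carl
describes, sketched in Python with the third-party kazoo client. The
ZooKeeper address and topic name are placeholders; the znode paths are the
standard Kafka layout for broker registrations, topic configs, and pending
deletions.)

from kazoo.client import KazooClient

topic = "example-topic"              # placeholder topic name
zk = KazooClient(hosts="zk1:2181")   # placeholder ZooKeeper address
zk.start()
try:
    live = zk.get_children("/brokers/ids")
    if live:
        # Brokers are still registered; removing topic state now risks the
        # race condition described above.
        raise RuntimeError("brokers still registered: %s" % live)
    for path in ("/brokers/topics/" + topic,
                 "/config/topics/" + topic,
                 "/admin/delete_topics/" + topic):
        if zk.exists(path):
            zk.delete(path, recursive=True)   # remove the topic's znodes
finally:
    zk.stop()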
Welp. Surprisingly, that did not fix the problem. :(
I cleaned out all the entries for these topics from /config/topics, and
removed the logs from the file system for those topics, and the messages
are still flying by in the server.log file.
Also, more concerning, when I was looking through the
Just to add (in case the platform is Windows):
For Windows-based cluster implementations, log/topic cleanup doesn't work
out of the box. Users are generally aware of this and do their own
maintenance as a workaround.
If you have issues with topic deletion not working properly on Windows (i.e.
wit
@Carl,
There is nothing under /admin/delete_topics other than
[]
And nothing under /admin other than delete_topics :)
The topics DO exist, however, under /config/topics! We may be on to
something. I will remove them here and see if that clears it up.
Thanks so much for all the help!
Chris
Thanks again for the replies. VERY much appreciated. I'll check both
/admin/delete_topics and /config/topics.
Chris
On Thu, Jul 20, 2017 at 9:22 PM, Carl Haferd
wrote:
> If delete normally works, there would hopefully be some log entries when it
> fails. Are there any unusual zookeeper entries
If delete normally works, there would hopefully be some log entries when it
fails. Are there any unusual zookeeper entries in the /admin/delete_topics
path or in the other /admin folders?
Does the topic name still exist in zookeeper under /config/topics? If so,
that should probably be deleted as we
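(Not part of the original messages: a small Python/kazoo sketch of the
zookeeper checks Carl suggests; the ZooKeeper address is a placeholder.)

from kazoo.client import KazooClient

zk = KazooClient(hosts="zk1:2181")   # placeholder ZooKeeper address
zk.start()
try:
    # Pending deletions, if any, appear as children of /admin/delete_topics.
    print("pending deletes:", zk.get_children("/admin/delete_topics"))
    # Topics with leftover config metadata appear under /config/topics.
    print("configured topics:", zk.get_children("/config/topics"))
finally:
    zk.stop()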
Delete is definitely there. The delete worked fine, based on the fact that
there is nothing in Zookeeper and that the controller reported that the
delete was successful; it's just that something seems to have gotten out of
sync.
delete.topic.enabled is true. I've successfully deleted topics in the
p
I could be totally wrong, but I seem to recall that delete wasn't fully
implemented in 0.8.x?
On Fri, Jul 21, 2017 at 10:10 AM, Carl Haferd
wrote:
> Chris,
>
> You could first check to make sure that delete.topic.enable is true and try
> deleting again if not. If that doesn't work with 0.8.1.1
Chris,
You could first check to make sure that delete.topic.enable is true and, if
it isn't, enable it and try deleting again. If that doesn't work with 0.8.1.1 you might need to
manually remove the topic's log files from the configured log.dirs folder
on each broker in addition to removing the topic's zookeeper path
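(Not part of the original messages: a Python sketch of the manual log
cleanup Carl describes. The log.dirs path and topic name are placeholders,
and it should only be run with the broker stopped. Kafka names each
partition's directory <topic>-<partition>, which is what the pattern
matches.)

import os
import re
import shutil

log_dirs = ["/var/kafka/logs"]       # placeholder: whatever log.dirs is set to
topic = "example-topic"              # placeholder topic name
pattern = re.compile(r"^%s-\d+$" % re.escape(topic))   # e.g. example-topic-0

for log_dir in log_dirs:
    for entry in os.listdir(log_dir):
        path = os.path.join(log_dir, entry)
        if os.path.isdir(path) and pattern.match(entry):
            shutil.rmtree(path)      # delete the partition's segment files
            print("removed", path)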
Hi all,
I have a weird situation here. I have deleted a few topics on my 0.8.1.1
cluster (old, I know...). The deletes succeeded according to the
controller.log:
[2017-07-20 16:40:31,175] INFO [TopicChangeListener on Controller 1]: New
topics: [Set()], deleted topics:
[Set(perf_doorway-supplier