We have Kafka set up on a staging environment, and when debugging the consumer we
want to listen to Kafka on staging directly. I set up a tunnel, but it
seems I can't produce or consume from my local machine. I can create topics,
though. I have no problem producing/consuming on other machines in staging.
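One cause worth checking for exactly this symptom (an assumption on my part, not confirmed from your report) is the broker's advertised address: clients fetch metadata first and then produce/consume against the host:port each broker advertises, so if that address is not reachable from your machine through the tunnel, produce/consume fails even though the tunnel itself works. Topic creation via the older ZooKeeper path bypasses the brokers entirely, which would explain why creation still succeeds. The relevant broker setting (hostname is a placeholder):

```
# server.properties -- the address the broker tells clients to use;
# it must be resolvable/reachable from the client side of the tunnel
advertised.listeners=PLAINTEXT://broker-host.staging.example:9092
```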
We noticed we have more than 2 active controllers. How can we fix the
issue? It has been like this for a few days.
Thanks,
Wei
among
different data centers running different versions of Kafka?
thanks,
Wei
We have made a simple web console to monitor some Kafka information, like
consumer offsets and log size.
https://github.com/shunfei/DCMonitor
Hope you like it, and your help to make it better is welcome :)
Regards
Flow
ject, but
> it gives me "Permission denied, the remote end hung up unexpectedly". Can
> you provide any suggestions to this issue?
>
> Thanks.
>
> best,
> Yuheng
>
> On Mon, Mar 23, 2015 at 8:54 AM, Wan Wei wrote:
>
> > We have made a simple web console
Hi all,
A bit confused about rebalance and failures
(if I understand the rebalance procedure correctly):
Suppose that in the middle of a rebalance some consumer, C1, hits an
unclean shutdown (i.e. crashes, or kill -9), and the coordinator won't be
aware that C1 is dead until {zookeeper.session.timeou
You can check this
http://kafka.apache.org/documentation.html#basic_ops_add_topic
But from our experience it is best to delete topics one by one, i.e., make sure
Kafka is in good shape before and after deleting each topic before working on the
next one.
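A minimal sketch of that one-topic-at-a-time approach, assuming the stock kafka-topics.sh tool; the broker address and topic names are placeholders, and older releases take --zookeeper instead of --bootstrap-server:

```shell
BROKER="localhost:9092"              # placeholder broker address
for topic in topic-a topic-b; do     # placeholder topic names
  # delete one topic
  kafka-topics.sh --bootstrap-server "$BROKER" --delete --topic "$topic"
  # sanity-check the cluster before moving on: the topic should be gone
  # from the listing and no partitions should be under-replicated
  kafka-topics.sh --bootstrap-server "$BROKER" --list
  kafka-topics.sh --bootstrap-server "$BROKER" --describe \
    --under-replicated-partitions
done
```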
Regards,
-- Jianbin
> On Oct 11, 2016, at 9:2
In our environment we notice that sometimes Kafka closes the connection
after one message is sent over. The client does not detect that and tries to
send another message, which triggers a RST packet.
Any idea why the Kafka broker would close the connection?
Attached you can find the
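One broker setting worth ruling out (an assumption, not a confirmed diagnosis for this case) is the idle-connection reaper: the broker closes connections that have been idle longer than connections.max.idle.ms (10 minutes by default), and a client that keeps writing to the stale socket afterwards would see exactly this RST pattern:

```
# server.properties -- default shown; connections idle longer than
# this are closed by the broker
connections.max.idle.ms=600000
```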
lancer IPs + port, or any 1 LoadBalancer IP + port?
* On external client side, does it need all 3 broker’s certificates?
* How does the client know which certificate to use when creating a
request to the Kafka cluster?
Thanks and regards,
Wei Yang
Cloud Infrastructure Engineer
[/var/
Hi Luke,
Thanks a lot for the clarifications. Very helpful to me for getting started.
As we can import the root CA of all certificates to trust them all, I'd like to
understand:
* Why does Kafka need one LoadBalancer per broker?
Thank you very much!
Regards,
Wei
From: Luke Chen
Date
or zookeeper logs. Can someone suggest how I
can debug this issue further? BTW, I am using Logstash as the Kafka client to
read data from the Kafka topic.
Thanks,
Wei
java.lang.OutOfMemoryError is not necessarily directly related to
memory usage. In your config, it requests only 1G. If your system is not
memory-stressed, I would suggest you check the ulimits for the Kafka runtime
user, particularly the max number of open file descriptors and the max number
of processes.
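On Linux those two limits can be checked with ulimit; a quick sketch, to be run as the Kafka runtime user:

```shell
# Limits for the current shell session; run these as the user
# that starts the Kafka broker.
ulimit -n   # max open file descriptors
ulimit -u   # max user processes

# For a broker that is already running, its effective limits can be
# read from /proc (replace <pid> with the broker's process id):
# cat /proc/<pid>/limits
```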
Hi there,
Can't find a Java API to do partition reassignment, so I took a look at the
source code (not a deep look). It seems that the
kafka-reassign-partitions.sh script creates the znode
/admin/reassign_partitions with the new plan (in JSON) as the data.
I wonder if I could post data to zookeeper i
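For reference, the JSON that ends up in that znode has the same shape as the file passed to kafka-reassign-partitions.sh via --reassignment-json-file; a minimal example, with the topic name and broker ids as placeholders:

```json
{
  "version": 1,
  "partitions": [
    { "topic": "my-topic", "partition": 0, "replicas": [3, 4] }
  ]
}
```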
Hi,
I'm using kafka-reassign-partitions.sh to move partitions around; however,
sometimes I get partition reassignment failures. The cluster is healthy
before the rebalance, and a retry after 10 mins resolved the problem.
However, I wonder if there's a way I can check why the reassignment failed
for
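One place to start (assuming the same plan file used for the move; plan.json and the host are placeholders) is the tool's own --verify mode, which reports per partition whether the reassignment completed, is still in progress, or failed:

```shell
# --verify checks the status of the reassignment described in the plan
# file. Newer Kafka versions take --bootstrap-server instead of
# --zookeeper.
kafka-reassign-partitions.sh \
  --zookeeper localhost:2181 \
  --reassignment-json-file plan.json \
  --verify
```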