semantics on production work.
Thanks & Have a good day!
--
Yang Cui
FreeWheel | Beijing
+86 1381-1441-685
Please use a ZooKeeper client to check the path /brokers/ids in ZK.
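A minimal sketch of that check using the zookeeper-shell.sh script that ships with Kafka (the connect string localhost:2181 is an assumed example; it needs a running ZooKeeper to succeed):

```shell
# List the broker ids registered in ZooKeeper; an empty list means
# no broker has registered itself, i.e. no broker is running.
echo "ls /brokers/ids" | bin/zookeeper-shell.sh localhost:2181
```

A healthy single-broker setup shows one id (e.g. [0]) under that path.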
Sent from my iPhone
> On Aug 18, 2017, at 3:14 PM, Raghav wrote:
>
> Hi
>
> I have 1 broker and 1 ZooKeeper on the same VM. I am using Kafka 0.10.2.1.
> I am trying to create a topic using the command below:
>
> "bin/kafka-topics.sh --create --zookeeper local
Your broker is not running.
Sent from my iPhone
> On Aug 18, 2017, at 3:14 PM, Raghav wrote:
>
> Hi
>
> I have 1 broker and 1 ZooKeeper on the same VM. I am using Kafka 0.10.2.1.
> I am trying to create a topic using the command below:
>
> "bin/kafka-topics.sh --create --zookeeper localhost:2181
> --replication-facto
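For reference, the full form of that create-topic command on 0.10.x looks like the following (the replication factor, partition count, and topic name are assumed example values, and the broker must be up for it to succeed):

```shell
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic test
```

With a single broker, the replication factor cannot exceed 1.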
We wish to enlarge the segment file size from 1GB to 2GB, but we found that the broker throws the exception: “Invalid value 2147483648 for configuration log.segment.bytes: Not a number of type INT”.
This is because setting “log.segment.bytes” to 2147483648 overflows the INT type, whose maximum is 2147483647.
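The limit is easy to verify: log.segment.bytes is parsed as a 32-bit signed integer, so the largest accepted value is one byte short of 2GB.

```shell
# Maximum of a signed 32-bit int, the type used for log.segment.bytes
INT_MAX=$(( (1 << 31) - 1 ))
echo "$INT_MAX"   # prints 2147483647
# 2GB = 2147483648 is one past this maximum, so the broker rejects it;
# the largest accepted segment size is 2147483647 bytes (2GB - 1 byte).
```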
Now we are s
# /etc/security/limits.conf
* - nofile 65536
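A sketch for verifying that the raised limit actually reached the broker; note that limits.conf only applies to sessions started after the change, so the broker must be restarted from a fresh login. The "kafka.Kafka" process pattern is an assumption about how the broker JVM appears in the process list, and /proc is Linux-specific.

```shell
# Soft limit on open files for the current session
ulimit -n

# If a broker process is running, inspect its effective limit and fd count
# (the "kafka.Kafka" pattern is an assumption about the broker JVM)
BROKER_PID=$(pgrep -f kafka.Kafka | head -n 1)
if [ -n "$BROKER_PID" ]; then
  grep "open files" "/proc/$BROKER_PID/limits"   # limit the JVM actually got
  ls "/proc/$BROKER_PID/fd" | wc -l              # descriptors currently open
fi
```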
On Fri, May 12, 2017 at 6:34 PM, Yang Cui wrote:
> Our Kafka cluster has been brought down by the error “java.io.IOException: Too
> many open files” three times in 3 weeks.
>
> We encountered this problem on both the 0.9.0.1 and 0.10.2.1 versions.
Our Kafka cluster has been brought down by the error “java.io.IOException: Too many open files” three times in 3 weeks.
We encountered this problem on both the 0.9.0.1 and 0.10.2.1 versions.
The error is like:
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0
Hi guys,
I am confused about whether data will be lost when I do a rolling restart of the
Kafka cluster.
Suppose that the Kafka cluster has 4 brokers: A, B, C, D, and the 4 partitions of a
topic named “test-topic” are allocated across these brokers in
balance: p1 for A, p2 for B, p3 for C, p4 for
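One common way to keep a rolling restart safe is to restart one broker at a time and wait for the in-sync replica set (ISR) to recover before moving on; a hedged sketch (the topic name and connect string are example values):

```shell
# After restarting each broker, wait until every partition's Isr column
# again lists all replicas before touching the next broker.
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test-topic
```

With producers using acks=all and unclean leader election disabled on the brokers, messages that were acknowledged survive each leader hand-off during the restart; with replication factor 1, however, each restart makes that broker's partition unavailable.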
Hi All,
Can anyone help answer this question? Thanks a lot!
On 26/04/2017, 8:00 PM, "Yang Cui" wrote:
Dear All,
I am using Kafka cluster 2.11_0.9.0.1, and the new consumer of
2.11_0.9.0.1.
When I set the quota configuration to:
quota.producer.default=100
Dear All,
I am using Kafka cluster 2.11_0.9.0.1, and the new consumer of 2.11_0.9.0.1.
When I set the quota configuration to:
quota.producer.default=100
quota.consumer.default=100
And when I used the new consumer to consume data, this error sometimes occurred:
org
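For context, these defaults are byte-rate quotas in bytes per second, so 100 throttles every client to 100 B/s, which is slow enough that fetches can back up and surface as client-side errors. A sketch with more workable values (illustrative only, not a recommendation):

```properties
# server.properties -- default per-client quotas, in bytes/second (0.9.x)
# The values below are illustrative examples.
quota.producer.default=1048576
quota.consumer.default=2097152
```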
I am thinking about the following:
1. If a producer recursively compresses a message set whose records are nested more
than 2 levels deep and sends it to the broker, how does the broker know which offsets
should be allocated to this message set without uncompressing all the levels and
reading all the records?
2. Assumi