What is your consumer properties file? How have you adjusted the properties?
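For context, a minimal sketch of what a failover-capable consumer configuration could look like, assuming the brokers listen on the default plaintext port 9092 and using an illustrative group id:

# list all three brokers so the client can bootstrap from any surviving node
bootstrap.servers=10.1.221.13:9092,10.2.172.13:9092,10.1.221.16:9092
# illustrative group id -- adjust to your setup
group.id=logstash
auto.offset.reset=earliest

If only a single broker is listed there, the client has nowhere to bootstrap from once that broker becomes unavailable.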
On Wed, 13 Feb 2019 at 09:41, Jorit Hagedorn <[email protected]> wrote:
> Hello,
>
> I've set up a Kafka/ZooKeeper cluster with 3 VMs:
>
> 101
> 201
> 102
>
> These 3 servers run as a cluster. Configs below:
>
> Kafka:
>
> broker.id=101 # 102 and 201
> num.network.threads=3
> num.io.threads=8
> socket.send.buffer.bytes=102400
> socket.receive.buffer.bytes=102400
> socket.request.max.bytes=104857600
> log.dirs=/opt/kafka/kafka-data
> num.partitions=1
> num.recovery.threads.per.data.dir=1
> offsets.topic.replication.factor=1
> transaction.state.log.replication.factor=1
> transaction.state.log.min.isr=1
> log.retention.hours=168
> log.segment.bytes=1073741824
> log.retention.check.interval.ms=300000
> zookeeper.connect=10.1.221.13:2181,10.2.172.13:2181,10.1.221.16:2181
> zookeeper.connection.timeout.ms=6000
> group.initial.rebalance.delay.ms=0
>
> Zookeeper:
>
> dataDir=/opt/kafka/zookeeper-data
> clientPort=2181
> initLimit=10
> syncLimit=5
> maxClientCnxns=0
>
> server.101=10.1.221.13:2888:3888
> server.201=10.2.172.13:2888:3888
> server.102=10.1.221.16:2888:3888
>
> The full cluster is available:
>
> /opt/kafka/bin/zookeeper-shell.sh localhost:2181 <<< "ls /brokers/ids"
> Connecting to localhost:2181
> Welcome to ZooKeeper!
> JLine support is disabled
>
> WATCHER::
>
> WatchedEvent state:SyncConnected type:None path:null
> [101, 102, 201]
>
> After that, I created a replicated topic:
>
> /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic myreplicatedtopic
>
> Verify:
>
> /opt/kafka/bin/kafka-topics.sh --describe --topic myreplicatedtopic --zookeeper localhost:2181
> Topic:myreplicatedtopic PartitionCount:1 ReplicationFactor:3 Configs:
>     Topic: myreplicatedtopic Partition: 0 Leader: 102 Replicas: 102,101,201 Isr: 102,101,201
>
> The issue is that a consumer which connects to node 102, for example, will stop working and not use either of the other 2 servers for failover.
> Our consumer is currently Logstash. The consumer seems to fail once the first server that started in the cluster becomes unavailable, for some reason.
>
> Producing messages (with Filebeat) always works as long as the majority of the servers are up, which is the expected behaviour.
>
> What am I missing here?
>
> Kind Regards
>
> Jorit
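To rule Logstash out, you could also consume with the console consumer while stopping brokers one at a time; a rough sketch, again assuming the default port 9092:

/opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server 10.1.221.13:9092,10.2.172.13:9092,10.1.221.16:9092 \
  --topic myreplicatedtopic --from-beginning

If that keeps consuming with any single broker down, the problem is most likely on the Logstash side (for example, only one host configured in the kafka input's bootstrap_servers option). If it also stops, the broker-side settings would be worth a look: with offsets.topic.replication.factor=1 in the config above, the internal __consumer_offsets topic is not replicated, so consumer groups can stall when the broker hosting it becomes unavailable.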
