Hi,

I tried setting transaction.state.log.min.isr=1, but the issue still exists.


I am also getting a warning after doing step 3 (with 
transaction.state.log.min.isr=1) and producing some data to the topic, as given 
below.


[Producer clientId=producer-1] 2 partitions have leader brokers without a 
matching listener, including [topic2-0, topic2-1]


I was not seeing this warning when transaction.state.log.min.isr was 2. The 
warning is also accompanied by a failure on the producer side to write data to 
the topic.
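For reference, the internal-topic settings on the brokers would then look roughly like this (a sketch only; transaction.state.log.min.isr=1 is the value tried above, not a verified fix for the warning):

```
# Internal-topic settings for a two-broker cluster (illustrative sketch)
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=2
# Let the transaction state log accept writes with a single in-sync replica,
# so it does not stall while one of the two brokers is down
transaction.state.log.min.isr=1
```

Note that these settings apply to the internal topics (__consumer_offsets, __transaction_state) when they are first created; if those topics already exist, changing the broker config alone may not alter them.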


Are there any other things I have to check?


Thanks



________________________________
From: M. Manna <manme...@gmail.com>
Sent: 13 February 2020 16:35
To: Kafka Users <users@kafka.apache.org>
Subject: Re: Kafka clustering issue

This could be because you have set transaction.state.log.min.isr=2. Have
you tried setting it to 1?

Also, please note that if min.insync.replicas=1 and you only have 2
nodes, you only have a guarantee that 1 broker has the messages;
if that same broker fails, you may see issues.
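The guarantee described above can be sketched with a toy model (illustrative only, not Kafka's actual code; the function name is made up): with acks=all, the broker accepts a write only when at least min.insync.replicas replicas of the partition are in sync.

```python
def produce_succeeds(in_sync_replicas: int, min_insync_replicas: int) -> bool:
    """Toy model of a produce with acks=all: the broker rejects the write
    (NotEnoughReplicasException) unless the current in-sync replica count
    meets the configured minimum."""
    return in_sync_replicas >= min_insync_replicas

# Two-broker cluster, replication factor 2:
# with min.insync.replicas=2, losing one broker blocks producing...
assert produce_succeeds(in_sync_replicas=2, min_insync_replicas=2)
assert not produce_succeeds(in_sync_replicas=1, min_insync_replicas=2)
# ...while min.insync.replicas=1 keeps accepting writes, but then only
# one broker is guaranteed to hold each message.
assert produce_succeeds(in_sync_replicas=1, min_insync_replicas=1)
```

So on a two-node cluster, min.insync.replicas=1 trades durability for availability: producing continues with one broker down, at the cost of a single copy of the data.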

On Thu, 13 Feb 2020 at 10:40, Chikulal C <chikula...@rcggs.com.invalid>
wrote:

> Hi,
>
> I am facing an issue with my Kafka cluster setup. I have a Kafka cluster
> with two brokers connected to two ZooKeeper nodes. I am producing data to a
> topic that has a replication factor of two and two partitions using a
> Spring Boot Kafka producer, and consuming it with another Spring Boot app.
>
> I found some strange behavior when testing the cluster as follows:
>
>   1. Turned off node 1 and node 2
>   2. Turned on node 1
>   3. Turned off node 1
>   4. Turned on node 2
>
> After turning on node 2, the Kafka cluster failed and I was not able to
> produce data to Kafka. My consumer started logging the following message
> continuously.
>
>  [Producer clientId=producer-1] Connection to node 1 (/server1-ip:9092)
> could not be established. Broker may not be available.
>
> The issue is visible on both nodes. But if I keep both systems up for a
> while, the issue resolves itself and I can turn off either node without
> breaking the cluster.
> My broker configuration is as below.
>
> broker.id=0
> listeners=PLAINTEXT://server1-ip:9092
> advertised.listeners=PLAINTEXT://server1-ip:9092
> num.network.threads=3
> num.io.threads=8
> socket.send.buffer.bytes=102400
> socket.receive.buffer.bytes=102400
> socket.request.max.bytes=104857600
> log.dirs=/home/user/kafka/data/kafka-logs
> num.partitions=1
> num.recovery.threads.per.data.dir=2
> offsets.topic.replication.factor=2
> transaction.state.log.replication.factor=2
> transaction.state.log.min.isr=2
> log.retention.hours=168
> log.segment.bytes=1073741824
> log.retention.check.interval.ms=300000
> zookeeper.connect=server1-ip:2181,server2-ip:2181
> zookeeper.connection.timeout.ms=6000
> group.initial.rebalance.delay.ms=3000
> auto.leader.rebalance.enable=true
> leader.imbalance.check.interval.seconds=5
>
> Zookeeper configuration
>
> dataDir=/home/user/kafka/data
> clientPort=2181
> maxClientCnxns=0
> initLimit=10
> syncLimit=5
> tickTime=2000
> server.1=server1-ip:2888:3888
> server.2=server2-ip:2888:3888
>
> Is this expected behavior of Kafka, or am I doing something wrong with
> this configuration?
>
> Can somebody help me with this issue?
>
> Thanks in advance.
>
>
>
