Hi,
I have followed the instructions you detailed and I was able to create topics,
which were getting a leader and were properly replicated.
I think the problem I experienced was due to some earlier, temporary
communication problems between Kafka and ZooKeeper. But that's only a guess.
Thanks a lot Mohammed f
Hi,
I set up a fresh cluster (3 brokers, 3 ZooKeeper nodes) and created a topic
according to your settings - obviously the log directories are kept
separate (e.g. /var/lib/zookeeper2 and /var/lib/zookeeper3), not to
mention the myid files for every ZooKeeper node to identify itself in the
ensemble. Canno
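For anyone following along: the myid file is a one-line file inside each
dataDir whose number matches the server.N entry in zoo.cfg. Assuming data
directories along the lines of the ones above (the first path is only an
example), they can be created with:
echo 1 > /var/lib/zookeeper1/myid
echo 2 > /var/lib/zookeeper2/myid
echo 3 > /var/lib/zookeeper3/myid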
So, I fixed the problem by doing a rolling restart, and after some checks it
seems there was no data loss.
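In case it helps anyone else, a rolling restart here is nothing fancy: restart
one broker at a time and check that no partitions are under-replicated before
moving to the next one. A rough sketch (the service name is just an example for
a systemd-managed broker, and the ZK address comes from the configs below):
sudo systemctl restart kafka   # hypothetical unit name, one broker at a time
kafka-topics.sh --zookeeper 10.0.0.1:2181 --describe --under-replicated-partitions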
On 1 June 2017 at 17:57, Del Barrio, Alberto <
alberto.delbar...@360dialog.com> wrote:
I might give it a try tomorrow. The reason for having such large init and
sync limit times is that in the past our ZK cluster was storing large
amounts of data, and lower values were not enough for the server syncs when
restarting ZK processes.
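(For context: initLimit and syncLimit are counted in ticks, so with
tickTime=1000 ms our initLimit=2000 works out to 2000 x 1000 ms, roughly 33
minutes for a follower to connect and sync, and syncLimit=1000 to roughly 16
minutes of allowed lag - hence the large values.)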
On 1 June 2017 at 17:52, Mohammed Manna wrote:
Cool - I will try and take a look into this. Meanwhile, would you mind awfully
changing the following and seeing if things improve?
tickTime=1000
initLimit=3
syncLimit=5
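Those values are in ticks as well, i.e. initLimit = 3 x 1000 ms = 3 s and
syncLimit = 5 x 1000 ms = 5 s with the suggested tickTime - for comparison,
the stock zoo_sample.cfg ships with tickTime=2000, initLimit=10, syncLimit=5.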
On 1 June 2017 at 16:49, Del Barrio, Alberto <
alberto.delbar...@360dialog.com> wrote:
Here are the configs you were asking for:
Zookeeper:
tickTime=1000
initLimit=2000
syncLimit=1000
dataDir=/var/lib/zookeeper
clientPort=2181
server.3=10.0.0.3:2888:3888
server.2=10.0.0.2:2888:3888
server.1=10.0.0.1:2888:3888
Kafka broker (for one of them):
broker.id=10
listeners=PLAINTEXT://10.0.
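For completeness, the ZooKeeper-related part of a broker config along these
lines would typically look something like this (a sketch only - the exact
values in our files may differ, and the timeout is just an example):
zookeeper.connect=10.0.0.1:2181,10.0.0.2:2181,10.0.0.3:2181
zookeeper.connection.timeout.ms=6000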
Could you please share your broker/ZooKeeper/topic configs?
On 1 June 2017 at 16:18, Del Barrio, Alberto <
alberto.delbar...@360dialog.com> wrote:
I tried creating the topic and the results are very similar to the current
situation: there is no ISR and no leader for any of the partitions, but
now kafka-topics shows *Leader: none* whereas for all the other topics it
shows *Leader: -1*.
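For reference, I am looking at this with the kafka-topics tool, along these
lines (the topic name is a placeholder):
kafka-topics.sh --zookeeper 10.0.0.1:2181 --describe --topic <topic>
kafka-topics.sh --zookeeper 10.0.0.1:2181 --describe --unavailable-partitions
The second form only lists partitions whose leader is not available.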
On 1 June 2017 at 17:05, Mohammed Manna wrote:
I had a similar situation, but only 1 of my ZKs was struggling - but since
the ISR syncing time is configurable I was confident enough to bounce 1 ZK at
a time, and it worked out.
Does it happen even when you create a new topic with a
replication:partition ratio of 1?
I meant 3 replicas, 3 partitions:
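Something like this, using the ZK address from the configs above (the topic
name is only an example):
kafka-topics.sh --zookeeper 10.0.0.1:2181 --create --topic leader-test --replication-factor 3 --partitions 3
kafka-topics.sh --zookeeper 10.0.0.1:2181 --describe --topic leader-test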
Hi Mohammed,
thanks for your answer.
The ZK cluster is not located on the servers where Kafka runs but on 3
other machines. This ZK cluster is used by several other services
which are not reporting any problems.
As you suggested, I haven't tried restarting the kafka-server processes
because
Hi Alberto,
Usually this means that the leader election/replica syncing couldn't
complete successfully, and the ZooKeeper logs should show this information
too. The Leader: -1 is what worries me. For your case (3-broker cluster), I
am assuming you have done the cluster configuration to have 1
bro
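A quick way to confirm the ensemble itself is healthy is to ask each ZK node
directly, for example:
echo stat | nc 10.0.0.1 2181   # check the Mode: line (leader/follower)
or to run zkServer.sh status on each node (the path depends on how ZooKeeper
was installed).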
Hi all,
I'm experiencing an issue which I don't know how to solve, so I'm trying to
find some guidance on the topic.
I have a cluster composed of 3 servers, one broker per server, running Kafka
0.10.0.1-1 in production with around 100 topics, most of them
divided into several partitions a