I should mention that our Kafka version is 2.7.
I also tried the kafka-topics.sh tool with both the --bootstrap-server and
--zookeeper options. Same result.
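
For reference, the two invocations looked roughly like this (host names and
the topic name are placeholders):

  kafka-topics.sh --create --topic test --partitions 5 \
    --replication-factor 3 --bootstrap-server broker1:9092

  kafka-topics.sh --create --topic test --partitions 5 \
    --replication-factor 3 --zookeeper zk1:2181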

On Tue, Nov 9, 2021 at 4:13 PM David Ballano Fernandez <
dfernan...@demonware.net> wrote:

> We are using Kafka with zookeeper
>
> On Tue, Nov 9, 2021 at 4:12 PM Liam Clarke-Hutchinson <lclar...@redhat.com>
> wrote:
>
>> Yeah, it's broker side, just wanted to eliminate the obscure edge case.
>>
>> Oh, and are you using Zookeeper or KRaft?
>>
>> Cheers,
>>
>> Liam
>>
>> On Wed, Nov 10, 2021 at 1:00 PM David Ballano Fernandez <
>> dfernan...@demonware.net> wrote:
>>
>> > I don't seem to have that config in any of our clusters. Is that a
>> > broker config?
>> >
>> >
>> > On Tue, Nov 9, 2021 at 3:50 PM Liam Clarke-Hutchinson <
>> lclar...@redhat.com
>> > >
>> > wrote:
>> >
>> > > Thanks David,
>> > >
>> > > Hmm, is the property create.topic.policy.class.name set in
>> > > server.properties at all?
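>> > >
>> > > (For reference, it would be a line like the one below; the class name
>> > > here is just a hypothetical example of a class implementing the
>> > > CreateTopicPolicy interface, which can reject topic creation requests:
>> > >
>> > >   create.topic.policy.class.name=com.example.MyCreateTopicPolicy
>> > > )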
>> > >
>> > > Cheers,
>> > >
>> > > Liam
>> > >
>> > > On Wed, Nov 10, 2021 at 12:21 PM David Ballano Fernandez <
>> > > dfernan...@demonware.net> wrote:
>> > >
>> > > > Hi Liam,
>> > > >
>> > > > I did a test creating topics with both kafka-topics.sh and the admin
>> > > > API from confluent-kafka-python.
>> > > > The same thing happened with both.
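>> > > >
>> > > > Roughly, the Python side was along these lines (broker address and
>> > > > topic name are placeholders):
>> > > >
>> > > >   from confluent_kafka.admin import AdminClient, NewTopic
>> > > >
>> > > >   admin = AdminClient({"bootstrap.servers": "broker1:9092"})
>> > > >   futures = admin.create_topics(
>> > > >       [NewTopic("test", num_partitions=5, replication_factor=3)])
>> > > >   for topic, fut in futures.items():
>> > > >       fut.result()  # raises if the broker rejected the creation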
>> > > >
>> > > > thanks!
>> > > >
>> > > > On Tue, Nov 9, 2021 at 2:58 PM Liam Clarke-Hutchinson <
>> > > lclar...@redhat.com
>> > > > >
>> > > > wrote:
>> > > >
>> > > > > Hi David,
>> > > > >
>> > > > > What tool(s) are you using to create new topics? Is it the
>> > > > kafka-topics.sh
>> > > > > that ships with Apache Kafka?
>> > > > >
>> > > > > Cheers,
>> > > > >
>> > > > > Liam Clarke-Hutchinson
>> > > > >
>> > > > > On Wed, Nov 10, 2021 at 11:41 AM David Ballano Fernandez <
>> > > > > dfernan...@demonware.net> wrote:
>> > > > >
>> > > > > > Hi All,
>> > > > > > While trying to figure out why my brokers have some disk
>> > > > > > imbalance, I have found that Kafka (maybe this is the way it is
>> > > > > > supposed to work?) is not spreading all replicas across all
>> > > > > > available brokers.
>> > > > > >
>> > > > > > I have been trying to figure out how a topic with 5 partitions and
>> > > > > > replication_factor=3 (15 replicas) could end up having all
>> > > > > > replicas spread over 9 brokers instead of 15, especially when
>> > > > > > there are more brokers than the total number of replicas for that
>> > > > > > specific topic.
>> > > > > >
>> > > > > > The cluster has 48 brokers.
>> > > > > >
>> > > > > > # topics.py describe -topic topic1
>> > > > > > {145: 1, 148: 2, *101: 3*, 146: 1, 102: 2, 147: 1, 103: 2, 104: 2, 105: 1}
>> > > > > > The keys are the broker IDs and the values are how many replicas
>> > > > > > each one holds.
>> > > > > >
>> > > > > > As you can see, broker 101 has 3 replicas, which makes its disk
>> > > > > > unbalanced compared to the other brokers.
>> > > > > >
>> > > > > > I created a brand new topic in a test cluster with 24 brokers.
>> > > > > > The topic has 5 partitions with replication factor 3.
>> > > > > > # topics.py describe -topic test
>> > > > > > {119: 1, 103: 1, 106: 2, 109: 1, 101: 2, 114: 1, 116: 2, 118: 1, 111: 2, 104: 1, 121: 1}
>> > > > > >
>> > > > > > This time Kafka decided to spread the replicas over 11 brokers
>> > > > > > instead of 15.
>> > > > > > Just for fun, I ran a partition reassignment for topic test,
>> > > > > > spreading all replicas across all brokers. The result:
>> > > > > >
>> > > > > > # topics.py describe -topic test
>> > > > > > {110: 1, 111: 1, 109: 1, 108: 1, 112: 1, 103: 1, 107: 1, 105: 1, 104: 1, 106: 1, 102: 1, 118: 1, 116: 1, 113: 1, 117: 1}
>> > > > > >
>> > > > > > Now all replicas are spread across 15 brokers.
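>> > > > > >
>> > > > > > (For reference, with the stock kafka-reassign-partitions.sh tool
>> > > > > > such a reassignment would look roughly like this, with the broker
>> > > > > > ids picked by hand:
>> > > > > >
>> > > > > >   {"version": 1,
>> > > > > >    "partitions": [
>> > > > > >      {"topic": "test", "partition": 0, "replicas": [101, 102, 103]},
>> > > > > >      {"topic": "test", "partition": 1, "replicas": [104, 105, 106]},
>> > > > > >      ...
>> > > > > >    ]}
>> > > > > >
>> > > > > >   kafka-reassign-partitions.sh --bootstrap-server broker1:9092 \
>> > > > > >     --reassignment-json-file reassign.json --execute
>> > > > > > )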
>> > > > > >
>> > > > > > Is there something I am missing? Maybe the reason is to keep
>> > > > > > network chatter down? By the way, I don't have any rack awareness
>> > > > > > configured.
>> > > > > > Thanks!
>> > > > > >
>> > > > >
>> > > >
>> > >
>> >
>>
>
