Ummm, it does not work for downgrade as the old coordinator has no idea
about the new format :(

Best,
David

On Fri, Dec 2…
… adding additional information ("magic") to the hash does not help the
upgraded coordinator determine the "version". This means that the upgraded
coordinator … a new field to ConsumerGroupMetadataValue to indicate the
version of the "hash". This would allow the coordinator … there would be
extensive recomputing of old hashes.

I believe this idea should also work for downgrades …
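[Editor's note: to make the versioned-hash idea concrete, here is a minimal
sketch in Java with hypothetical names, not the KIP's actual schema: the
coordinator dispatches on a version stored next to the hash, so hashes
written under an older format remain verifiable after an upgrade.]

    import java.util.Collection;

    // Hypothetical sketch: a version field stored next to the hash in
    // ConsumerGroupMetadataValue records which algorithm produced it, so a
    // coordinator can still validate hashes written under an older format.
    public final class GroupHashVersions {
        public static final short HASH_V1 = 1;

        public static long computeGroupHash(short version, Collection<Long> topicHashes) {
            if (version == HASH_V1) {
                // V1: XOR the per-topic hashes (placeholder combiner; see DJ06).
                long result = 0L;
                for (long hash : topicHashes) {
                    result ^= hash;
                }
                return result;
            }
            throw new IllegalArgumentException("Unknown hash version: " + version);
        }
    }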
t;>>> it seems that it will trigger refreshes which are not
> > necessary.
> > >>>>>>>> However, a
> > >>>>>>>>>>> rebalance won't be triggered because the hash won't change.
> > >>>>>>
>>>>>>>
On 2024/12/19 14:39:41 David Jacot wrote:

Hi PoAn and Chia-Ping,

… replicas are updated when a broker shuts down. What you said applies to
the ISR. I suppose that we can … hashes. Is my understanding correct?
DJ05: Fair enough.

DJ06: You menti…oning them. I wonder if this is a good practice.
Intuitively, I would have used XOR or hashed the hashes. Guava has a
method for combining hashes. It may be worth looking into the alg…
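[Editor's note: the Guava methods alluded to here are presumably
Hashing.combineOrdered and Hashing.combineUnordered; a small self-contained
sketch comparing the two, with made-up topic names.]

    import com.google.common.hash.HashCode;
    import com.google.common.hash.Hashing;

    import java.nio.charset.StandardCharsets;
    import java.util.List;

    public final class CombineHashesExample {
        public static void main(String[] args) {
            // Per-topic hashes computed with Murmur3 (topic names are made up).
            List<HashCode> topicHashes = List.of(
                Hashing.murmur3_128().hashString("topic-a", StandardCharsets.UTF_8),
                Hashing.murmur3_128().hashString("topic-b", StandardCharsets.UTF_8));

            // Order-sensitive: callers must feed hashes in a deterministic order.
            HashCode ordered = Hashing.combineOrdered(topicHashes);

            // Order-insensitive: same result regardless of topic iteration order.
            HashCode unordered = Hashing.combineUnordered(topicHashes);

            System.out.println(ordered + " / " + unordered);
        }
    }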
DJ08: Regarding the per-topic hash, I wonder whether we should be precise
in the KIP about how we will compute it. I had the following in mind:
hash(topicName; numPartitions; [partitionId; sorted racks]). We could …
how we would handle changing the format in the future.
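[Editor's note: a sketch of how that formula might look with Guava's
Murmur3 (the algorithm agreed on elsewhere in this thread), using a leading
format byte as one possible way to handle future format changes; the field
order and the version byte are assumptions, not the KIP's final definition.]

    import com.google.common.hash.Hasher;
    import com.google.common.hash.Hashing;

    import java.nio.charset.StandardCharsets;
    import java.util.List;
    import java.util.Map;
    import java.util.TreeMap;

    public final class TopicHashSketch {
        // hash(topicName; numPartitions; [partitionId; sorted racks])
        public static long topicHash(String topicName, Map<Integer, List<String>> racksByPartition) {
            Hasher hasher = Hashing.murmur3_128().newHasher()
                .putByte((byte) 0)                            // format version, reserved for future changes
                .putString(topicName, StandardCharsets.UTF_8) // topicName
                .putInt(racksByPartition.size());             // numPartitions
            // TreeMap iterates partition ids in ascending order.
            for (Map.Entry<Integer, List<String>> entry : new TreeMap<>(racksByPartition).entrySet()) {
                hasher.putInt(entry.getKey());                // partitionId
                entry.getValue().stream().sorted()            // sorted racks
                    .forEach(rack -> hasher.putString(rack, StandardCharsets.UTF_8));
            }
            return hasher.hash().asLong();
        }
    }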
DJ09: It would be great if we could provide more details about backward
compatibility … KIP-1071. It may be worth pinging them in the discussion
thread of KIP-1071.

Best,
David

On Tue, Dec 1…
… implementation details.

DJ02: Does the "incrementally" mean that we only calculate the difference
parts? For example, if the number of partitions … the hash of the number
of partitions and reconstruct it into the topic hash. IMO, we only
calculate the topic hash one time. With the cache mechanism, the value can
be reused by different groups on the same broker. The CPU usage for this
part is not very high.
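[Editor's note: a minimal sketch of such a cache, with hypothetical names;
it is keyed by topic name for simplicity, though a topic ID would work
equally well. The hash is computed at most once per broker and reused by
every group hosted there.]

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.LongSupplier;

    // Hypothetical per-broker cache: a topic hash is computed at most once
    // and shared by all groups on the same broker; the entry is invalidated
    // when the topic's metadata (partitions, racks) changes.
    public final class TopicHashCache {
        private final Map<String, Long> hashes = new ConcurrentHashMap<>();

        public long getOrCompute(String topicName, LongSupplier compute) {
            return hashes.computeIfAbsent(topicName, ignored -> compute.getAsLong());
        }

        // Called when a topic's partitions or replica racks change.
        public void invalidate(String topicName) {
            hashes.remove(topicName);
        }
    }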
DJ03: Added the update path to the KIP for both cases.

DJ04: Yes, it's a good idea. With the cache mechanism and single …
How about we move the hash to the metadata image when we find more use
cases?

AS1, AS2: Thanks for the reminder. I will simply delete
ShareGroupPartitionMetadataKey/Value and add a new field to
ShareGroupMetadataValue.

Best,
PoAn

On Dec 17, 2024, at 5:50 AM, Andrew Schofield
<andrew_schofield_j...@outlook.com> wrote:
Hi PoAn,
Thanks for the KIP.

AS1: From the point of view of share groups, the API and record schema
definitions are unstable in AK 4.0. In AK 4.1, we will start supporting
proper versioning. As a result, I think you do not need to deprecate the
fields in the ShareGroupPartitionMetadataValue. Just include the schema
for the fields which are actually needed, and I'll update the schema in
the code when the KIP is implemented.

AS2: In the event that DJ04 actually removes the need for
ConsumerGroupPartitionMetadataKey/Value entirely, I would simply delete
ShareGroupPartitionMetadataKey/Value, assuming that it is accepted in
time for AK 4.1.
Thanks,
Andrew

From: Chia-Ping Tsai
Sent: 16 December 2024 16:27
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-1101: Trigger rebalance on rack topology changes
Hi David,

> DJ05

One of the benefits of having a single hash per group (DJ04) is the
reduction in the size of stored data. Additionally, the cost of
re-computing can be minimized thanks to caching. So I'm +1 to DJ04.
However, the advantage of storing the topic cache in the metadata image
is so…
Hi PoAn,

Thanks for the KIP. I have some comments about it.

DJ01: Please remove all the code from the KIP. We only care about public
interface changes, not about implementation details.

DJ02: Regarding the hash computation, I agree that we should use Murmur3.
However, I don't quite like the impl…
Hi Chia-Ping,

Thanks for the review and suggestions.

Q0: Added details about how a rack change happens and how it affects a
topic partition.

Q1: Added why we need a balance algorithm to the Motivation section.

Q2: After checking again, we don't need to update the cache when we
replay records. We only need to renew it in the consumer heartbeat…
Hi PoAn,

Thanks for this KIP!

Q0: Could you add more details about `A topic partition has rack change`?
IIRC, the "rack change" includes both follower and leader, right?

Q1: Could you please add the 'concerns' we discussed to the Motivation
section? This should include topics like 'computation…
Hi all,

I would like to start a discussion thread on KIP-1101: Trigger rebalance
on rack topology changes. In this KIP, we aim to use less memory / disk
resources to detect rack changes in the new coordinator.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-1101%3A+Trigger+rebalance+on+ra