FrankYang0529 commented on code in PR #19523:
URL: https://github.com/apache/kafka/pull/19523#discussion_r2087027573


##########
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/Utils.java:
##########
@@ -324,4 +329,84 @@ static void throwIfRegularExpressionIsInvalid(
                     regex, ex.getDescription()));
         }
     }
+
+    /**
+     * The magic byte used to identify the version of the topic hash function.
+     */
+    static final byte TOPIC_HASH_MAGIC_BYTE = 0x00;
+
+    /**
+     * Computes the hash of the topics in a group.
+     * <p>
+     * The computed hash value is stored as part of the metadata hash in the *GroupMetadataValue.
+     * <p>
+     * The hashing process involves the following steps:
+     * 1. Sort the topic hashes by topic name.
+     * 2. Write each topic hash in order.
+     *
+     * @param topicHashes The map of topic hashes. Key is topic name and value is the topic hash.
+     * @return The hash of the group.
+     */
+    static long computeGroupHash(Map<String, Long> topicHashes) {
+        // Sort entries by topic name
+        List<Map.Entry<String, Long>> sortedEntries = new ArrayList<>(topicHashes.entrySet());
+        sortedEntries.sort(Map.Entry.comparingByKey());
+
+        HashStream64 hasher = Hashing.xxh3_64().hashStream();
+        for (Map.Entry<String, Long> entry : sortedEntries) {
+            hasher.putLong(entry.getValue());
+        }
+
+        return hasher.getAsLong();
+    }
+
+    /**
+     * Computes the hash of the topic id, name, number of partitions, and partition racks by XXHash64.
+     * <p>
+     * The computed hash value for the topic is utilized in conjunction with the {@link #computeGroupHash(Map)}
+     * method and is stored as part of the metadata hash in the *GroupMetadataValue.
+     * It is important to note that if the hash algorithm is changed, the magic byte must be updated to reflect the
+     * new hash version.
+     * <p>
+     * The hashing process involves the following steps:
+     * 1. Write a magic byte to denote the version of the hash function.
+     * 2. Write the hash code of the topic ID.
+     * 3. Write the topic name.
+     * 4. Write the number of partitions associated with the topic.
+     * 5. For each partition, write the partition ID and a sorted list of rack identifiers.
+     *    - Rack identifiers are formatted as "<length1><value1><length2><value2>" to prevent issues with simple separators.
+     *
+     * @param topicImage   The topic image.
+     * @param clusterImage The cluster image.
+     * @return The hash of the topic.
+     */
+    static long computeTopicHash(TopicImage topicImage, ClusterImage clusterImage) throws IOException {
+        HashStream64 hasher = Hashing.xxh3_64().hashStream();
+        hasher = hasher.putByte(TOPIC_HASH_MAGIC_BYTE) // magic byte
+            .putLong(topicImage.id().hashCode()) // topic ID
+            .putString(topicImage.name()) // topic name
+            .putInt(topicImage.partitions().size()); // number of partitions
+
+        for (int i = 0; i < topicImage.partitions().size(); i++) {
+            hasher = hasher.putInt(i); // partition id
+            // The rack string combination cannot use a simple separator like ",", because there is no restriction on rack characters.
+            // A simple separator like "," can hit ambiguous edge cases: for example, ",," + ",,," and ",,," + ",," join to the same string.
+            // Writing the length before each rack string avoids this ambiguity.
+            List<String> racks = new ArrayList<>();

Review Comment:
   Yes, I initialize `racks` outside the for-loop and call `clear` before 
adding data.
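   For reference, a minimal standalone sketch of that reuse pattern. The per-partition rack data here is made up to stand in for the `topicImage`/`clusterImage` lookups; the point is only allocating the buffer once and calling `clear` each iteration:

   ```java
   import java.util.ArrayList;
   import java.util.List;

   public class RackBufferSketch {
       public static void main(String[] args) {
           // Hypothetical per-partition rack lists standing in for the image lookups.
           List<List<String>> partitionRacks = List.of(
               List.of("rack-b", "rack-a"),
               List.of("rack-c")
           );

           // Allocate once outside the loop, clear per iteration,
           // instead of constructing a new ArrayList for every partition.
           List<String> racks = new ArrayList<>();
           for (List<String> replicaRacks : partitionRacks) {
               racks.clear();
               racks.addAll(replicaRacks);
               racks.sort(String::compareTo); // sorted rack identifiers, as in the patch
               System.out.println(racks);     // prints [rack-a, rack-b] then [rack-c]
           }
       }
   }
   ```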



##########
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/Utils.java:
##########
@@ -324,4 +329,84 @@ static void throwIfRegularExpressionIsInvalid(
                     regex, ex.getDescription()));
         }
     }
+
+    /**
+     * The magic byte used to identify the version of the topic hash function.
+     */
+    static final byte TOPIC_HASH_MAGIC_BYTE = 0x00;
+
+    /**
+     * Computes the hash of the topics in a group.
+     * <p>
+     * The computed hash value is stored as part of the metadata hash in the *GroupMetadataValue.
+     * <p>
+     * The hashing process involves the following steps:
+     * 1. Sort the topic hashes by topic name.
+     * 2. Write each topic hash in order.
+     *
+     * @param topicHashes The map of topic hashes. Key is topic name and value is the topic hash.
+     * @return The hash of the group.
+     */
+    static long computeGroupHash(Map<String, Long> topicHashes) {
+        // Sort entries by topic name
+        List<Map.Entry<String, Long>> sortedEntries = new ArrayList<>(topicHashes.entrySet());
+        sortedEntries.sort(Map.Entry.comparingByKey());
+
+        HashStream64 hasher = Hashing.xxh3_64().hashStream();
+        for (Map.Entry<String, Long> entry : sortedEntries) {
+            hasher.putLong(entry.getValue());
+        }
+
+        return hasher.getAsLong();
+    }
+
+    /**
+     * Computes the hash of the topic id, name, number of partitions, and partition racks by XXHash64.
+     * <p>
+     * The computed hash value for the topic is utilized in conjunction with the {@link #computeGroupHash(Map)}
+     * method and is stored as part of the metadata hash in the *GroupMetadataValue.
+     * It is important to note that if the hash algorithm is changed, the magic byte must be updated to reflect the
+     * new hash version.
+     * <p>
+     * The hashing process involves the following steps:
+     * 1. Write a magic byte to denote the version of the hash function.
+     * 2. Write the hash code of the topic ID.
+     * 3. Write the topic name.
+     * 4. Write the number of partitions associated with the topic.
+     * 5. For each partition, write the partition ID and a sorted list of rack identifiers.
+     *    - Rack identifiers are formatted as "<length1><value1><length2><value2>" to prevent issues with simple separators.
+     *
+     * @param topicImage   The topic image.
+     * @param clusterImage The cluster image.
+     * @return The hash of the topic.
+     */
+    static long computeTopicHash(TopicImage topicImage, ClusterImage clusterImage) throws IOException {
+        HashStream64 hasher = Hashing.xxh3_64().hashStream();
+        hasher = hasher.putByte(TOPIC_HASH_MAGIC_BYTE) // magic byte
+            .putLong(topicImage.id().hashCode()) // topic ID
+            .putString(topicImage.name()) // topic name
+            .putInt(topicImage.partitions().size()); // number of partitions
+
+        for (int i = 0; i < topicImage.partitions().size(); i++) {
+            hasher = hasher.putInt(i); // partition id
+            // The rack string combination cannot use a simple separator like ",", because there is no restriction on rack characters.
+            // A simple separator like "," can hit ambiguous edge cases: for example, ",," + ",,," and ",,," + ",," join to the same string.
+            // Writing the length before each rack string avoids this ambiguity.
+            List<String> racks = new ArrayList<>();
+            for (int replicaId : topicImage.partitions().get(i).replicas) {
+                BrokerRegistration broker = clusterImage.broker(replicaId);
+                if (broker != null) {
+                    Optional<String> rackOptional = broker.rack();
+                    rackOptional.ifPresent(racks::add);

Review Comment:
   Updated it.
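   As a side note on the separator discussion in the hunk above, here is a small standalone sketch (with hypothetical rack names, not the real image types) showing why the length prefix is needed: two different rack lists collide under a naive "," join but stay distinct once each value is length-prefixed.

   ```java
   import java.util.List;

   public class RackEncodingSketch {
       // Naive join with "," — ambiguous, since racks may themselves contain ",".
       static String naive(List<String> racks) {
           return String.join(",", racks);
       }

       // Length-prefixed encoding, as described in the javadoc:
       // <length1><value1><length2><value2>...
       static String lengthPrefixed(List<String> racks) {
           StringBuilder sb = new StringBuilder();
           for (String rack : racks) {
               sb.append(rack.length()).append(rack);
           }
           return sb.toString();
       }

       public static void main(String[] args) {
           List<String> a = List.of("r,", "x");
           List<String> b = List.of("r", ",x");
           // Both naive joins produce "r,,x", so the two lists collide...
           System.out.println(naive(a).equals(naive(b)));                   // true
           // ...but the length-prefixed forms "2r,1x" and "1r2,x" differ.
           System.out.println(lengthPrefixed(a).equals(lengthPrefixed(b))); // false
       }
   }
   ```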



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
