glaksh100 commented on a change in pull request #7679: [FLINK-11501][Kafka Connector] Add ratelimiting to Kafka consumer
URL: https://github.com/apache/flink/pull/7679#discussion_r256270580
 
 

 ##########
 File path: flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/KafkaConsumerThread.java
 ##########
 @@ -482,6 +502,49 @@ void reassignPartitions(List<KafkaTopicPartitionState<TopicPartition>> newPartit
                return new KafkaConsumer<>(kafkaProperties);
        }
 
+       @VisibleForTesting
+       RateLimiter getRateLimiter() {
+               return rateLimiter;
+       }
+
+       // 
-----------------------------------------------------------------------
+       // Rate limiting methods
+       // 
-----------------------------------------------------------------------
+       /**
+        *
+        * @param records List of ConsumerRecords.
+        * @return Total batch size in bytes, including key and value.
+        */
+       private int getRecordBatchSize(ConsumerRecords<byte[], byte[]> records) 
{
 
 Review comment:
   The reason for doing the iteration here was that we needed the record sizes prior to deserialization, and I believe the records are already deserialized at the `AbstractFetcher` level?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
