becketqin commented on a change in pull request #7679: [FLINK-11501][Kafka Connector] Add ratelimiting to Kafka consumer URL: https://github.com/apache/flink/pull/7679#discussion_r256687206
########## File path: flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/KafkaConsumerThread.java ##########

@@ -482,6 +502,49 @@ void reassignPartitions(List<KafkaTopicPartitionState<TopicPartition>> newPartit
 		return new KafkaConsumer<>(kafkaProperties);
 	}

+	@VisibleForTesting
+	RateLimiter getRateLimiter() {
+		return rateLimiter;
+	}
+
+	// -----------------------------------------------------------------------
+	//  Rate limiting methods
+	// -----------------------------------------------------------------------
+
+	/**
+	 * @param records List of ConsumerRecords.
+	 * @return Total batch size in bytes, including key and value.
+	 */
+	private int getRecordBatchSize(ConsumerRecords<byte[], byte[]> records) {

Review comment:
   The `AbstractFetcher` has two subclasses, `Kafka09Fetcher` and `Kafka08Fetcher`, and the serialization is done inside them. `Kafka09Fetcher` is essentially an abstract fetcher for Kafka 0.9+, while `Kafka08Fetcher` is more of a legacy implementation. Maybe we can just do that in `Kafka09Fetcher`?
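For context, here is a minimal sketch of how byte-based throttling around a polled batch could look, assuming Guava's `RateLimiter` with one permit per byte. The class name `RateLimitedPollSketch`, the `throttle` method, and the permits-per-second value are hypothetical illustrations; only `getRecordBatchSize` and the `rateLimiter` field mirror names visible in the diff, and this is not the PR's actual implementation.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

import com.google.common.util.concurrent.RateLimiter;

/**
 * Sketch of byte-based rate limiting around a polled Kafka batch.
 * Assumes one Guava RateLimiter permit corresponds to one byte.
 */
class RateLimitedPollSketch {

	// Hypothetical cap of ~1 MB/s for the sake of the example.
	private final RateLimiter rateLimiter = RateLimiter.create(1024 * 1024);

	/**
	 * Sums serialized key and value sizes over all records in the batch,
	 * mirroring the helper shown in the diff above.
	 */
	private int getRecordBatchSize(ConsumerRecords<byte[], byte[]> records) {
		int batchSize = 0;
		for (ConsumerRecord<byte[], byte[]> record : records) {
			batchSize += (record.key() == null ? 0 : record.key().length)
					+ (record.value() == null ? 0 : record.value().length);
		}
		return batchSize;
	}

	/** Blocks until enough permits are available for the just-polled batch. */
	void throttle(ConsumerRecords<byte[], byte[]> records) {
		int batchSize = getRecordBatchSize(records);
		if (batchSize > 0) {
			rateLimiter.acquire(batchSize);
		}
	}
}
```

Whether this accounting lives in `KafkaConsumerThread` (as in the diff) or in `Kafka09Fetcher` (as suggested in the comment) only changes where the byte counting happens, not the throttling mechanism itself.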