apoorvmittal10 opened a new pull request, #19145:
URL: https://github.com/apache/kafka/pull/19145

   The PR fixes the behaviour when the fetched records are larger than the 
`fetch.max.bytes` config. Prior to this PR, running the commands below results 
in no records being fetched:
   
   ```
   bin/kafka-producer-perf-test.sh --topic T1 --producer-props 
bootstrap.servers=localhost:9092 --record-size 20000 --num-records 1 
--throughput -1
   bin/kafka-console-share-consumer.sh --bootstrap-server localhost:9092 
--topic T1 --consumer-property fetch.max.bytes=10000
   ```
   
   The change corrects the `hardMaxBytesLimit` condition. The condition was 
originally written for the regular consumer and keyed off `requestVersion`. 
Since `shareFetch` is a new RPC, its `requestVersion` starts from 0, which 
meant `hardMaxBytesLimit` returned `true` for share fetch prior to this change.
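   
   As a rough sketch of the intent (a simplified illustration, not the exact 
code in `FetchParams`; the method names and the `isShareFetch` flag below are 
hypothetical):
   
   ```java
   // Hypothetical, simplified illustration of the condition, not the exact Kafka source.
   public class HardMaxBytesLimitSketch {
       // Before: purely version based, written for the classic Fetch RPC where
       // versions <= 2 predate the "return at least one record" semantics.
       static boolean hardMaxBytesLimitBefore(short requestVersion) {
           return requestVersion <= 2; // ShareFetch is a new RPC starting at version 0, so this was always true
       }
   
       // After (sketch): a share fetch never treats the byte limit as hard,
       // so a record larger than fetch.max.bytes can still be returned.
       static boolean hardMaxBytesLimitAfter(short requestVersion, boolean isShareFetch) {
           return !isShareFetch && requestVersion <= 2;
       }
   
       public static void main(String[] args) {
           short shareFetchVersion = 0;
           System.out.println(hardMaxBytesLimitBefore(shareFetchVersion));      // true  -> no records fetched
           System.out.println(hardMaxBytesLimitAfter(shareFetchVersion, true)); // false -> at least one record
       }
   }
   ```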
   
   `hardMaxBytesLimit` is used in `ReplicaManager`, where it decides whether at 
least a single record should be fetched. The file records are sliced based on 
the bytes requested; however, if `hardMaxBytesLimit` is false then at least one 
record is fetched and the bytes are adjusted accordingly in `localLog`.
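   
   A minimal sketch of that slicing behaviour, assuming a simplified model of 
the log read (the `slice` helper below is hypothetical and only illustrates the 
effect of `minOneMessage = !hardMaxBytesLimit`):
   
   ```java
   import java.util.ArrayList;
   import java.util.List;
   
   // Hypothetical sketch: record sizes are accumulated up to maxBytes, but when
   // minOneMessage (= !hardMaxBytesLimit) is true, the first record is returned
   // even if it alone exceeds maxBytes.
   public class MinOneMessageSketch {
       static List<Integer> slice(List<Integer> recordSizes, int maxBytes, boolean minOneMessage) {
           List<Integer> result = new ArrayList<>();
           int bytes = 0;
           for (int size : recordSizes) {
               boolean guaranteedFirst = minOneMessage && result.isEmpty();
               if (bytes + size > maxBytes && !guaranteedFirst) {
                   break; // respect the byte limit for everything beyond the guaranteed record
               }
               result.add(size);
               bytes += size;
           }
           return result;
       }
   
       public static void main(String[] args) {
           List<Integer> oneLargeRecord = List.of(20_000); // 20 KB record, fetch.max.bytes = 10 KB
           System.out.println(slice(oneLargeRecord, 10_000, false)); // []      -> share fetch before the PR
           System.out.println(slice(oneLargeRecord, 10_000, true));  // [20000] -> share fetch after the PR
       }
   }
   ```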

