showuon opened a new pull request #9877:
URL: https://github.com/apache/kafka/pull/9877


   Originally, we make sure the consumer `awaitAssignment`, and then produce records. We send 30 records in total to 3 topics, each with 30 partitions, so processing takes some time. If it exceeds 6 seconds, the consumer leaves the group because `max.poll.interval.ms` is set to 6 secs. The logs then get deleted and the start offset is incremented, so when the consumer comes back, it first does a listOffsets and gets a new, unexpected start offset. That's why we sometimes can't receive the expected number of records.
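   For reference, a minimal sketch of the kind of consumer configuration involved (the property names are real Kafka consumer configs, but the values and object here are only illustrative of the setup described above, not the actual test code):

```scala
import java.util.Properties
import org.apache.kafka.clients.consumer.ConsumerConfig

// Illustrative consumer config only.
// With max.poll.interval.ms at 6000 ms, any gap longer than 6 seconds
// between poll() calls causes the consumer to be removed from the group.
object ConsumerConfigSketch {
  def props(bootstrapServers: String): Properties = {
    val p = new Properties()
    p.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers)
    p.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group")        // placeholder group id
    p.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "6000")  // the 6-second limit mentioned above
    p.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
      "org.apache.kafka.common.serialization.ByteArrayDeserializer")
    p.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
      "org.apache.kafka.common.serialization.ByteArrayDeserializer")
    p
  }
}
```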
   
   To fix it, I produce the records before `awaitAssignment`, and then make sure we consume them right after `awaitAssignment`. This makes the test much more reliable.
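   A minimal sketch of the reordered flow (the helper structure and names below are assumptions for illustration, not the actual test code):

```scala
import java.time.Duration
import scala.jdk.CollectionConverters._

import org.apache.kafka.clients.consumer.Consumer
import org.apache.kafka.clients.producer.{Producer, ProducerRecord}

// Illustrative sketch only; not the actual test code.
object ReorderedFlowSketch {

  def run(producer: Producer[Array[Byte], Array[Byte]],
          consumer: Consumer[Array[Byte], Array[Byte]],
          topics: Seq[String],
          recordsPerTopic: Int): Unit = {
    // 1. Produce everything up front, while the consumer is not yet in the
    //    group, so a slow produce step can no longer exceed max.poll.interval.ms.
    for (topic <- topics; i <- 0 until recordsPerTopic)
      producer.send(new ProducerRecord(topic, Array.emptyByteArray, s"value-$i".getBytes))
    producer.flush()

    // 2. Join the group and wait for a non-empty assignment
    //    (stand-in for the test's awaitAssignment helper).
    consumer.subscribe(topics.asJava)
    while (consumer.assignment().isEmpty)
      consumer.poll(Duration.ofMillis(100))

    // 3. Consume immediately after assignment, so polls keep arriving well
    //    inside the 6-second max.poll.interval.ms window.
    var received = 0
    while (received < recordsPerTopic * topics.size)
      received += consumer.poll(Duration.ofMillis(100)).count()
  }
}
```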
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   

