ppkarwasz commented on PR #748:
URL: https://github.com/apache/commons-io/pull/748#issuecomment-2889075382

   > The remaining major issue is why the new `read(...)` method ignores the configured timeout some of the time but not at other times?
   
   From my perspective, the intended contract of the `read(...)` method is to 
attempt to read **at least one byte** from the queue, waiting **up to the 
configured timeout** for data to become available. If no data arrives within 
that period, it should return `-1`.
   As far as I can tell, the current implementation adheres to this contract: it waits up to the timeout at most **once**, not once per byte, which ensures consistent and predictable behavior.
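   
   To illustrate, here is a minimal sketch of that single-wait behavior, assuming a `BlockingQueue<Integer>`-backed stream along the lines of `QueueInputStream`; the class, field names, and timeout value below are illustrative, not the PR's actual code:
   
   ```java
   import java.util.concurrent.BlockingQueue;
   import java.util.concurrent.LinkedBlockingQueue;
   import java.util.concurrent.TimeUnit;
   
   // Illustrative sketch of the intended contract: block at most once, for up
   // to the configured timeout, waiting for the first byte; then drain only
   // what is already queued, without waiting again.
   class SingleWaitRead {
       private final BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
       private final long timeoutNanos = TimeUnit.SECONDS.toNanos(1); // assumed timeout
   
       int read(final byte[] b, final int off, final int len) throws InterruptedException {
           if (len == 0) {
               return 0;
           }
           // Wait at most once, up to the configured timeout, for the first byte.
           final Integer first = queue.poll(timeoutNanos, TimeUnit.NANOSECONDS);
           if (first == null) {
               return -1; // no data became available within the timeout
           }
           b[off] = (byte) (first & 0xFF);
           int n = 1;
           // Further bytes are taken only if already queued; this poll() never
           // blocks, so the total wait cannot exceed one timeout.
           Integer next;
           while (n < len && (next = queue.poll()) != null) {
               b[off + n++] = (byte) (next & 0xFF);
           }
           return n;
       }
   }
   ```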
   
   In contrast, the previous implementation, inherited from the superclass, effectively applied the timeout **per byte requested**: if, for example, the caller attempted to read into a buffer of 1000 bytes, the method could block for up to 1000 times the configured timeout. That behavior could easily lead to unexpectedly long delays and violate the "at most the configured timeout" expectation.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
