AHeise commented on pull request #13885: URL: https://github.com/apache/flink/pull/13885#issuecomment-732215815
Hi @1996fanrui , that's an interesting discovery and investigation you did there! I think the approach on the file system level is also much better than the previous one. Let's try not to change any public API (`FileSystem`), as that would slow down progress. I'd probably focus entirely on Hadoop file systems (for now). What I'd propose is the following:

- Use `HadoopFsFactory#configure` to extract the buffer size and pass it to the ctor of all file systems created by the factory.
- Use that default buffer size in `HadoopFileSystem#open(Path)` to call `#open(Path, int)`.
- `HadoopFileSystem#open(Path, int)` should use the buffer size both in the call to Hadoop and to wrap the stream in the `BufferedFSInputStream`, as you did (see the sketch below).

I dug a bit into the Hadoop code and noticed that its cache is just 4 KB by default. So even if we have a cache on top of it with 64 KB, we would still need to ask Hadoop several times.

So that means you are not adding any new methods, just modifying existing ones.
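To make the proposal more concrete, here is a rough, simplified sketch of how the buffer size could flow from the factory into the file system. The class and field names (`HadoopFileSystemSketch`, `defaultBufferSize`) are placeholders, and I'm using plain `java.io.BufferedInputStream` instead of the wrapper from this PR just to keep the snippet self-contained; the real change would of course go into `HadoopFileSystem` itself.

```java
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;

// Placeholder for the real HadoopFileSystem: the factory would extract the
// buffer size in HadoopFsFactory#configure and pass it into the ctor here.
class HadoopFileSystemSketch {

    private final org.apache.hadoop.fs.FileSystem hadoopFs; // wrapped Hadoop file system
    private final int defaultBufferSize;                    // configured by the factory (assumed field)

    HadoopFileSystemSketch(org.apache.hadoop.fs.FileSystem hadoopFs, int defaultBufferSize) {
        this.hadoopFs = hadoopFs;
        this.defaultBufferSize = defaultBufferSize;
    }

    /** open(Path) just delegates to open(Path, int) with the configured default. */
    InputStream open(org.apache.hadoop.fs.Path path) throws IOException {
        return open(path, defaultBufferSize);
    }

    /** Uses the buffer size both for the Hadoop call and for the wrapping buffer. */
    InputStream open(org.apache.hadoop.fs.Path path, int bufferSize) throws IOException {
        InputStream hadoopStream = hadoopFs.open(path, bufferSize);
        return new BufferedInputStream(hadoopStream, bufferSize);
    }
}
```

That way the existing method signatures stay as they are and only their bodies (plus the ctor) change.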