AHeise commented on a change in pull request #13499:
URL: https://github.com/apache/flink/pull/13499#discussion_r496999542
##########
File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/LocalBufferPool.java
##########
@@ -322,23 +350,71 @@ private MemorySegment requestMemorySegment() {
 		return requestMemorySegment(UNKNOWN_CHANNEL);
 	}
 
-	@Nullable
-	private MemorySegment requestMemorySegmentFromGlobal() {
-		assert Thread.holdsLock(availableMemorySegments);
+	private boolean requestMemorySegmentFromGlobal() {
+		if (numberOfRequestedMemorySegments >= currentPoolSize) {
+			return false;
+		}
+
+		MemorySegment segment = networkBufferPool.requestMemorySegment();
+		if (segment != null) {
+			availableMemorySegments.add(segment);
+			numberOfRequestedMemorySegments++;
+			return true;
+		}
+		return false;
+	}

Review comment:
   Is your intent to reach equilibrium much more quickly? If so, I like the idea. (If not, then I haven't understood it.)

   My main concern is that the first buffer pool could take all available segments while the last buffer pool gets nothing, even though each of them could take some buffers. However, I must admit that I have not fully understood when excess buffers actually occur in reality. I'd assume that during the start of an application all pools are created, and exclusive segments are acquired more or less simultaneously and handed out a bit later to the writer/input channels, so that excess buffers are close to non-existent.
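   To make the starvation concern concrete, here is a minimal, self-contained sketch. `GlobalPool` and `LocalPool` are hypothetical stand-ins, not Flink's `NetworkBufferPool`/`LocalBufferPool`; the sketch only models the "request eagerly up to `currentPoolSize`" behaviour of the new `requestMemorySegmentFromGlobal()` under those assumptions.

   ```java
   /**
    * Simplified model (not Flink's actual classes) of the concern above: several
    * local pools draw from one shared global pool, and the first pool to request
    * eagerly up to its currentPoolSize can leave nothing for pools created later.
    */
   public class BufferDistributionSketch {

       /** Stand-in for the shared global pool: a fixed number of segments shared by all local pools. */
       static class GlobalPool {
           private int availableSegments;

           GlobalPool(int total) {
               this.availableSegments = total;
           }

           /** Returns true if a segment could be handed out, mirroring requestMemorySegment() != null. */
           boolean request() {
               if (availableSegments > 0) {
                   availableSegments--;
                   return true;
               }
               return false;
           }
       }

       /** Stand-in for a local pool with the patch's "request up to currentPoolSize" behaviour. */
       static class LocalPool {
           private final GlobalPool global;
           private final int currentPoolSize;
           private int numberOfRequestedMemorySegments;

           LocalPool(GlobalPool global, int currentPoolSize) {
               this.global = global;
               this.currentPoolSize = currentPoolSize;
           }

           /** Mirrors requestMemorySegmentFromGlobal(): bounded by currentPoolSize, backed by the global pool. */
           boolean requestFromGlobal() {
               if (numberOfRequestedMemorySegments >= currentPoolSize) {
                   return false;
               }
               if (global.request()) {
                   numberOfRequestedMemorySegments++;
                   return true;
               }
               return false;
           }

           int segments() {
               return numberOfRequestedMemorySegments;
           }
       }

       public static void main(String[] args) {
           // 8 shared segments, two local pools that would each like up to 8.
           GlobalPool global = new GlobalPool(8);
           LocalPool first = new LocalPool(global, 8);
           LocalPool last = new LocalPool(global, 8);

           // The first pool requests eagerly before the last pool gets a turn.
           while (first.requestFromGlobal()) {
               // keep requesting until currentPoolSize is reached or the global pool is exhausted
           }
           while (last.requestFromGlobal()) {
               // the last pool now requests, but nothing may be left
           }

           // Prints "first=8, last=0": the first pool took everything.
           System.out.println("first=" + first.segments() + ", last=" + last.segments());
       }
   }
   ```

   In this toy run the output is `first=8, last=0`, which is exactly the "first pool takes all, last pool gets nothing" scenario described above; whether it matters in practice depends on how often excess buffers actually occur.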