[ https://issues.apache.org/jira/browse/FLINK-7517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16245330#comment-16245330 ]
ASF GitHub Bot commented on FLINK-7517:
---------------------------------------

Github user zhijiangW commented on a diff in the pull request:

    https://github.com/apache/flink/pull/4594#discussion_r149886365

    --- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NettyBufferPool.java ---
    @@ -52,51 +48,61 @@
     	/** Configured chunk size for the arenas. */
     	private final int chunkSize;
     
    +	/** We strictly prefer direct buffers and disallow heap allocations. */
    +	private static final boolean PREFER_DIRECT = true;
    +
    +	/**
    +	 * Arenas allocate chunks of pageSize << maxOrder bytes. With these defaults, this results in
    +	 * chunks of 16 MB.
    +	 *
    +	 * @see #MAX_ORDER
    +	 */
    +	private static final int PAGE_SIZE = 8192;
    +
    +	/**
    +	 * Arenas allocate chunks of pageSize << maxOrder bytes. With these defaults, this results in
    +	 * chunks of 16 MB.
    +	 *
    +	 * @see #PAGE_SIZE
    +	 */
    +	private static final int MAX_ORDER = 11;
    +
     	/**
     	 * Creates Netty's buffer pool with the specified number of direct arenas.
     	 *
     	 * @param numberOfArenas Number of arenas (recommended: 2 * number of task
     	 *                       slots)
     	 */
     	public NettyBufferPool(int numberOfArenas) {
    +		super(
    +			PREFER_DIRECT,
    +			// No heap arenas, please.
    +			0,
    +			// Number of direct arenas. Each arena allocates a chunk of 16 MB, i.e.
    +			// we allocate numDirectArenas * 16 MB of direct memory. This can grow
    +			// to multiple chunks per arena during runtime, but this should only
    +			// happen with a large amount of connections per task manager. We
    +			// control the memory allocations with low/high watermarks when writing
    +			// to the TCP channels. Chunks are allocated lazily.
    +			numberOfArenas,
    +			PAGE_SIZE,
    +			MAX_ORDER);
    +
    		checkArgument(numberOfArenas >= 1, "Number of arenas");
    --- End diff --
    
    Would it be better to call {{checkArgument}} before invoking the super constructor?
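The reviewer's question touches a Java language constraint: statements cannot appear before `super(...)`, but validation can still run first by routing a constructor argument through a static helper method. A minimal sketch of that pattern, using a hypothetical simplified `BaseAllocator` as a stand-in for Netty's `PooledByteBufAllocator` (not Flink's actual code), which also spells out the 8192 << 11 = 16 MiB chunk-size arithmetic from the diff:

```java
// Sketch: validating a constructor argument before the super(...) call runs.
// BaseAllocator is a hypothetical stand-in for PooledByteBufAllocator.
class BaseAllocator {
    final int numDirectArenas;

    BaseAllocator(int numDirectArenas) {
        this.numDirectArenas = numDirectArenas;
    }
}

class SketchBufferPool extends BaseAllocator {
    static final int PAGE_SIZE = 8192;
    static final int MAX_ORDER = 11;
    // Each arena allocates chunks of PAGE_SIZE << MAX_ORDER bytes, i.e. 16 MiB.
    static final int CHUNK_SIZE = PAGE_SIZE << MAX_ORDER;

    SketchBufferPool(int numberOfArenas) {
        // Java forbids statements before super(...), but an argument
        // expression may call a static method, so the check still runs first.
        super(checkArenas(numberOfArenas));
    }

    private static int checkArenas(int n) {
        if (n < 1) {
            throw new IllegalArgumentException("Number of arenas must be >= 1, got: " + n);
        }
        return n;
    }
}
```

With this shape, an invalid arena count is rejected before the base-class constructor ever observes it, which is one way to address the ordering concern raised in the review.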
> let NettyBufferPool extend PooledByteBufAllocator
> -------------------------------------------------
>
>                 Key: FLINK-7517
>                 URL: https://issues.apache.org/jira/browse/FLINK-7517
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Network
>    Affects Versions: 1.4.0
>            Reporter: Nico Kruber
>            Assignee: Nico Kruber
>
> {{NettyBufferPool}} wraps {{PooledByteBufAllocator}}, but because of this, any
> allocated buffer's {{alloc()}} method returns the wrapped
> {{PooledByteBufAllocator}}, which allows heap buffers again. By extending
> {{PooledByteBufAllocator}} instead, we close this loophole and also restore the
> invariant that a copy of a buffer has the same allocator.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
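The wrapping-vs-extending distinction the issue describes can be sketched with hypothetical simplified types; `Allocator` and `Buf` below stand in for Netty's `ByteBufAllocator` and `ByteBuf` (this is not the real Netty API, just an illustration of the loophole):

```java
// Sketch of the alloc() loophole: with wrapping, buffers point back at the
// inner allocator; with extending, they point back at the pool itself.
class Allocator {
    boolean heapAllowed() {
        return true;
    }

    Buf allocate() {
        // Each buffer remembers the allocator that created it,
        // mirroring ByteBuf#alloc() in Netty.
        return new Buf(this);
    }
}

class Buf {
    private final Allocator allocator;

    Buf(Allocator allocator) {
        this.allocator = allocator;
    }

    Allocator alloc() {
        return allocator;
    }
}

// Wrapping: allocated buffers reference the inner allocator, which still
// permits heap buffers -- the pool's restriction is bypassed via alloc().
class WrappingPool {
    private final Allocator inner = new Allocator();

    Buf allocate() {
        return inner.allocate();
    }
}

// Extending: allocated buffers reference the pool itself, so the
// heap restriction (and the "copies share the allocator" invariant) holds.
class ExtendingPool extends Allocator {
    @Override
    boolean heapAllowed() {
        return false;
    }
}
```

Calling `allocate().alloc()` on the wrapping pool yields the unrestricted inner allocator, while on the extending pool it yields the restricted pool itself, which is the invariant the fix establishes.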