mdedetrich commented on PR #2409:
URL: https://github.com/apache/pekko/pull/2409#issuecomment-3461351929
> > on the `compress` method in `ZstdCompressor`, as this will trigger a cleanup on every `onPush` of the stream. I don't know how expensive this cleanup is; there is an argument that for this specific case we should defer it to GC, since the GC cleaning up many `ByteBuffer`s as a batch would be faster. @jrudolph can you comment?
>
> There used to be a reason to use buffer pools, especially for direct buffers, not least because you might fragment the native heap. It would probably make sense to not churn through them in quick succession. Why not use the buffer pool from the zstd-jni jar?

For `ZstdDecompressor` this is easy, as all of the `DirectBuffer`s are the same size. With `ZstdCompressor` it's a bit more complicated: one `ByteBuffer` is statically sized (this is the `targetBuffer`), whereas the temporary direct input buffers that we create whenever we get an element from the stream have a dynamic size, so I am not sure what the best way to handle this is. Ideally I would have just done `compressingStream.compress(input.asByteBuffer)`, where `input` is the `ByteString` from the stream, but `ZstdDirectBufferCompressingStreamNoFinalizer.compress` only supports direct byte buffers.

I guess in this case using a pool of `ByteBuffer`s would be better anyway, even with a static size? In which case I would have to redo the logic a bit, but it is what it is.
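For illustration, a minimal sketch of what the static-size variant could look like: a fixed-capacity pool of direct buffers implementing zstd-jni's `BufferPool` interface, plus a helper that copies the incoming `ByteString` into a pooled direct buffer in fixed-size chunks before handing it to the compressing stream. `DirectBufferPool`, `compressChunked`, and the fallback behaviour for oversized requests are all assumptions of mine, not anything in the PR:

```scala
import java.nio.ByteBuffer
import java.util.concurrent.ConcurrentLinkedQueue
import com.github.luben.zstd.{ BufferPool, ZstdDirectBufferCompressingStreamNoFinalizer }
import org.apache.pekko.util.ByteString

// Hypothetical pool of equally-sized direct buffers. Requests larger than
// `bufferSize` fall back to a one-off allocation that is left for the GC.
final class DirectBufferPool(bufferSize: Int) extends BufferPool {
  private val buffers = new ConcurrentLinkedQueue[ByteBuffer]()

  override def get(capacity: Int): ByteBuffer =
    if (capacity > bufferSize) ByteBuffer.allocateDirect(capacity)
    else {
      val pooled = buffers.poll()
      if (pooled ne null) { pooled.clear(); pooled }
      else ByteBuffer.allocateDirect(bufferSize)
    }

  override def release(buffer: ByteBuffer): Unit =
    // Oversized one-off buffers are simply dropped and reclaimed by the GC.
    if (buffer.capacity() == bufferSize) buffers.offer(buffer): Unit
}

// Copy a (possibly heap-backed) ByteString into one pooled direct buffer in
// fixed-size chunks, compressing each chunk as it is filled. This sidesteps
// the dynamic-size problem: the pool only ever holds one buffer capacity.
def compressChunked(
    stream: ZstdDirectBufferCompressingStreamNoFinalizer,
    pool: BufferPool,
    input: ByteString,
    chunkSize: Int): Unit = {
  val direct = pool.get(chunkSize)
  try {
    var remaining = input
    while (remaining.nonEmpty) {
      val (chunk, rest) = remaining.splitAt(chunkSize)
      direct.clear()
      chunk.copyToBuffer(direct) // chunk.size <= chunkSize, so this copies fully
      direct.flip()
      stream.compress(direct)
      remaining = rest
    }
  } finally pool.release(direct)
}
```

The trade-off this sketch makes is an extra copy loop for elements larger than the chunk size, in exchange for never allocating a direct buffer per `onPush`.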
