mdedetrich commented on code in PR #2409:
URL: https://github.com/apache/pekko/pull/2409#discussion_r2472877219


##########
stream/src/main/scala/org/apache/pekko/stream/scaladsl/Compression.scala:
##########
@@ -85,4 +87,28 @@ object Compression {
    */
   def inflate(maxBytesPerChunk: Int, nowrap: Boolean): Flow[ByteString, ByteString, NotUsed] =
     Flow[ByteString].via(new DeflateDecompressor(maxBytesPerChunk, nowrap)).named("inflate")
+
+  /**
+   * @since 2.0.0
+   */
+  def zstd: Flow[ByteString, ByteString, NotUsed] = zstd(Zstd.defaultCompressionLevel())
+
+  /**
+   * Same as [[zstd]] with a custom level and an optional dictionary.
+   * @param level The compression level, must be greater than or equal to [[Zstd.minCompressionLevel]] and less than or equal
+   *              to [[Zstd.maxCompressionLevel]]
+   * @param dictionary An optional dictionary that can be used for compression
+   * @since 2.0.0
+   */
+  def zstd(level: Int, dictionary: Option[ZstdDictCompress] = None): Flow[ByteString, ByteString, NotUsed] = {
+    require(level <= Zstd.maxCompressionLevel() && level >= Zstd.minCompressionLevel())
+    CompressionUtils.compressorFlow(() => new ZstdCompressor(level, dictionary))
+  }
+
+  /**
+   * @since 2.0.0
+   */
+  def zstdDecompress(maxBytesPerChunk: Int = MaxBytesPerChunkDefault): Flow[ByteString, ByteString, NotUsed] =
+    Flow[ByteString].via(new ZstdDecompressor(maxBytesPerChunk)).named("zstdDecompress")

Review Comment:
   > One difference is that zlib is distributed differently and is already part of the JDK,
   > while zstd-jni might bring extra weight that not everyone is interested in. The full jar
   > including binaries for all platforms is >7MB. Whether that is significant or not is a
   > different question, but it goes somewhat against the usual idea of keeping pekko-stream
   > reasonably minimal and adding stuff like this to the connectors (which probably does not
   > work if we want to depend on it from http). So, I'd also slightly favor having an extra
   > module, at least in pekko core.
   
   What would you recommend here? There are many ways to go about this. For example, it would
   be possible to have the `zstd`/`zstdDecompress` methods in the `Compression` object so that
   the API is clean, but then the user would have to provide their own artifact (for which we
   can provide a default that uses `zstd-jni`).
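
   For illustration, if these flows stay on `Compression`, a round trip at the call site could
   look roughly like the sketch below (sketch only; it assumes `zstd-jni` is on the classpath
   and uses the method names from this diff, with the compression level picked arbitrarily):

   ```scala
   import org.apache.pekko.actor.ActorSystem
   import org.apache.pekko.stream.scaladsl.{ Compression, Sink, Source }
   import org.apache.pekko.util.ByteString

   implicit val system: ActorSystem = ActorSystem("zstd-example")

   // Compress a single chunk and feed it straight back through the decompressor.
   // Sketch based on the zstd/zstdDecompress flows proposed in this PR.
   val roundTrip =
     Source.single(ByteString("hello zstd"))
       .via(Compression.zstd(level = 3)) // level 3 is just an example value
       .via(Compression.zstdDecompress())
       .runWith(Sink.fold(ByteString.empty)(_ ++ _))
   ```

   The call-site API would stay the same regardless of which artifact ends up providing the
   implementation.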
   
   Or should the artifact just contain everything related to zstd, and should it be named
   `pekko-streams-compress-extra` (which would initially only contain zstd) or
   `pekko-streams-zstd`?
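
   For example, in sbt the dependency on a hypothetical separate module could look like the
   snippet below (the coordinates are placeholders corresponding to the two naming options
   above, not decided names):

   ```scala
   // Placeholder version; "2.0.0" is just the @since value used in this PR.
   val PekkoVersion = "2.0.0"

   // Option A: a dedicated zstd module
   libraryDependencies += "org.apache.pekko" %% "pekko-streams-zstd" % PekkoVersion

   // Option B: a broader compression module that initially only ships zstd
   // libraryDependencies += "org.apache.pekko" %% "pekko-streams-compress-extra" % PekkoVersion
   ```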
   
   One thing to note is that when doing this, I also have
   https://github.com/apache/pekko-http/issues/860 in mind, and I wanted to add zstd to the
   standard spot where all of our content encoders are, i.e.
   https://github.com/apache/pekko-http/blob/4833a8e42f946682a72a72a0f3bee4c4d662b9a6/http-core/src/main/scala/org/apache/pekko/http/scaladsl/model/headers/HttpEncoding.scala#L63-L76



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

