wsry commented on a change in pull request #10375: [FLINK-14845][runtime] Introduce data compression to reduce disk and network IO of shuffle.
URL: https://github.com/apache/flink/pull/10375#discussion_r354167152
 
 

 ##########
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/partition/BoundedBlockingSubpartitionWriteReadTest.java
 ##########
 @@ -151,16 +162,24 @@ public void testRead10ConsumersConcurrent() throws Exception {
        //  common test passes
        // 
------------------------------------------------------------------------
 
 -       private static void readLongs(ResultSubpartitionView reader, long numLongs, int numBuffers) throws Exception {
+       private static void readLongs(
+                       ResultSubpartitionView reader,
+                       long numLongs,
+                       int numBuffers,
+                       boolean compressedEnabled,
+                       BufferDecompressor decompressor) throws Exception {
 
 Review comment:
   For the multi-threaded test, each thread needs its own decompressor, because BufferDecompressor is not thread safe.
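   A minimal sketch of how the concurrent variant of the test could give each reader thread its own instance (the NUM_READERS/BUFFER_SIZE/NUM_LONGS/NUM_BUFFERS constants, the createSubpartitionView helper, and the BufferDecompressor constructor arguments are illustrative assumptions, not the PR's actual test code):

       import java.util.concurrent.ExecutorService;
       import java.util.concurrent.Executors;
       import java.util.concurrent.TimeUnit;

       import org.apache.flink.runtime.io.network.buffer.BufferDecompressor;
       import org.apache.flink.runtime.io.network.partition.ResultSubpartitionView;

       // Each reader thread constructs its own BufferDecompressor before calling
       // readLongs(), so no decompressor instance is shared across threads.
       ExecutorService executor = Executors.newFixedThreadPool(NUM_READERS);
       for (int i = 0; i < NUM_READERS; i++) {
           // hypothetical helper that opens one subpartition view per consumer
           final ResultSubpartitionView reader = createSubpartitionView(i);
           executor.submit(() -> {
               // per-thread instance; constructor arguments (buffer size, codec) are assumed here
               BufferDecompressor decompressor = new BufferDecompressor(BUFFER_SIZE, "LZ4");
               try {
                   readLongs(reader, NUM_LONGS, NUM_BUFFERS, true, decompressor);
               } catch (Exception e) {
                   throw new RuntimeException(e);
               }
           });
       }
       executor.shutdown();
       executor.awaitTermination(1, TimeUnit.MINUTES);

   Creating the decompressor inside the submitted task keeps each instance confined to a single thread, which sidesteps the thread-safety issue without any locking.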
