[ 
https://issues.apache.org/jira/browse/KAFKA-4514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645621#comment-16645621
 ] 

ASF GitHub Bot commented on KAFKA-4514:
---------------------------------------

hachikuji closed pull request #5777: MINOR: Adjust test params pursuant to 
KAFKA-4514.
URL: https://github.com/apache/kafka/pull/5777
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

diff --git a/tests/kafkatest/tests/client/compression_test.py b/tests/kafkatest/tests/client/compression_test.py
index 2085d9b6259..23b30eac24c 100644
--- a/tests/kafkatest/tests/client/compression_test.py
+++ b/tests/kafkatest/tests/client/compression_test.py
@@ -29,6 +29,7 @@ class CompressionTest(ProduceConsumeValidateTest):
     """
     These tests validate produce / consume for compressed topics.
     """
+    COMPRESSION_TYPES = ["snappy", "gzip", "lz4", "zstd", "none"]
 
     def __init__(self, test_context):
         """:type test_context: ducktape.tests.test.TestContext"""
@@ -42,7 +43,7 @@ def __init__(self, test_context):
         self.num_partitions = 10
         self.timeout_sec = 60
         self.producer_throughput = 1000
-        self.num_producers = 4
+        self.num_producers = len(self.COMPRESSION_TYPES)
         self.messages_per_producer = 1000
         self.num_consumers = 1
 
@@ -53,15 +54,15 @@ def min_cluster_size(self):
         # Override this since we're adding services outside of the constructor
         return super(CompressionTest, self).min_cluster_size() + self.num_producers + self.num_consumers
 
-    @cluster(num_nodes=7)
-    @parametrize(compression_types=["snappy","gzip","lz4","zstd","none"])
+    @cluster(num_nodes=8)
+    @parametrize(compression_types=COMPRESSION_TYPES)
     def test_compressed_topic(self, compression_types):
         """Test produce => consume => validate for compressed topics
         Setup: 1 zk, 1 kafka node, 1 topic with partitions=10, replication-factor=1
 
         compression_types parameter gives a list of compression types (or no compression if
-        "none"). Each producer in a VerifiableProducer group (num_producers = 4) will use a
-        compression type from the list based on producer's index in the group.
+        "none"). Each producer in a VerifiableProducer group (num_producers = number of compression
+        types) will use a compression type from the list based on producer's index in the group.
 
             - Produce messages in the background
             - Consume messages in the background
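
The change above ties num_producers to the length of the codec list, so each compression type is exercised by exactly one producer. A minimal sketch of the index-based assignment the docstring describes (the helper name here is illustrative, not the actual kafkatest API):

```python
# Each producer in the group picks its codec by its position in the group.
# COMPRESSION_TYPES matches the class constant added in the diff.
COMPRESSION_TYPES = ["snappy", "gzip", "lz4", "zstd", "none"]

def codec_for_producer(idx, compression_types=COMPRESSION_TYPES):
    """Return the compression type used by the producer at position idx."""
    return compression_types[idx % len(compression_types)]

# With num_producers = len(COMPRESSION_TYPES), every codec is covered once:
assignments = [codec_for_producer(i) for i in range(len(COMPRESSION_TYPES))]
```

With the old hard-coded num_producers = 4, a five-element list (as in the "zstd" parametrization) would have left one codec untested; deriving the count from the list keeps the two in sync.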


 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


> Add Codec for ZStandard Compression
> -----------------------------------
>
>                 Key: KAFKA-4514
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4514
>             Project: Kafka
>          Issue Type: Improvement
>          Components: compression
>            Reporter: Thomas Graves
>            Assignee: Lee Dongjin
>            Priority: Major
>             Fix For: 2.1.0
>
>
> ZStandard: https://github.com/facebook/zstd and 
> http://facebook.github.io/zstd/ has been in use for a while now. v1.0 was 
> recently released. Hadoop 
> (https://issues.apache.org/jira/browse/HADOOP-13578)  and others are adopting 
> it. 
>  We have done some initial trials and seen good results. Zstd seems to give 
> great results => Gzip level Compression with Lz4 level CPU.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
