[ https://issues.apache.org/jira/browse/KAFKA-2043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jun Rao updated KAFKA-2043:
---------------------------
    Resolution: Fixed
    Fix Version/s: 0.8.3
    Status: Resolved  (was: Patch Available)

Thanks for the patch. +1 and committed to trunk.

> CompressionType is passed in each RecordAccumulator append
> ----------------------------------------------------------
>
>                 Key: KAFKA-2043
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2043
>             Project: Kafka
>          Issue Type: Bug
>          Components: clients
>    Affects Versions: 0.8.2.0
>            Reporter: Grant Henke
>            Assignee: Grant Henke
>            Priority: Minor
>             Fix For: 0.8.3
>
>         Attachments: KAFKA-2043.patch, KAFKA-2043_2015-03-25_13:28:52.patch
>
>
> Currently, the org.apache.kafka.clients.producer.internals.RecordAccumulator append method accepts the compressionType on a per-record basis. It looks like the code would only work on a per-batch basis, because the CompressionType is used only when creating a new RecordBatch. My understanding is that this should support setting compression per batch at most.
>
> public RecordAppendResult append(TopicPartition tp, byte[] key, byte[] value, CompressionType compression, Callback callback) throws InterruptedException;
>
> The compression type is a producer-level config. Instead of passing it in for each append, we probably should just pass it in once during the creation of the RecordAccumulator.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
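The refactoring the issue proposes can be sketched roughly as follows. This is a minimal illustration using simplified stand-in types, not the real Kafka client classes; the enum values and the compressionForNewBatch method are hypothetical, chosen only to show that the CompressionType moves from the append signature to the constructor.

```java
// Simplified stand-in for org.apache.kafka.common.record.CompressionType.
enum CompressionType { NONE, GZIP, SNAPPY }

// Before the patch, compression was passed on every append call, even though
// it only took effect when a new RecordBatch was created:
//
//   RecordAppendResult append(TopicPartition tp, byte[] key, byte[] value,
//                             CompressionType compression, Callback callback);
//
// Since compression is a producer-level config, it can instead be supplied
// once, when the RecordAccumulator is constructed.
class RecordAccumulator {
    private final CompressionType compression;

    RecordAccumulator(CompressionType compression) {
        this.compression = compression;
    }

    // Hypothetical accessor: the stored type is consulted only at the point
    // where a new RecordBatch would be created for a partition.
    CompressionType compressionForNewBatch() {
        return compression;
    }
}

public class Main {
    public static void main(String[] args) {
        // The compression type is fixed once for the accumulator's lifetime.
        RecordAccumulator acc = new RecordAccumulator(CompressionType.GZIP);
        System.out.println(acc.compressionForNewBatch()); // prints GZIP
    }
}
```

Dropping the parameter from append also removes the misleading impression that callers could vary compression per record.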