[ https://issues.apache.org/jira/browse/KAFKA-1253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13906422#comment-13906422 ]

Jay Kreps commented on KAFKA-1253:
----------------------------------

Also the high-level idea I had for implementation was to instantiate a 
compressor instance with each empty InMemoryRecords we create. A compressor 
would be something like
public interface Compressor {
    public void write(byte[] bytes, int offset, byte[] key, byte[] value);
}
For the non-compressed case we can either have a no-op compressor or just have 
special-case logic.

This compressor would be initialized lazily and stored with the InMemoryRecords 
instance and would encapsulate the GZIP or Snappy compression dictionary (e.g. 
the Deflater instance or whatever).
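As a rough illustration of that idea (a sketch only, not Kafka's actual API: the class names, the no-op variant, and the close() method are all hypothetical), the lazily created Deflater state could be held inside the compressor like this:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;

// Hypothetical sketch of the Compressor idea; names and signatures are illustrative.
interface Compressor {
    void write(byte[] key, byte[] value) throws IOException;
    // Finish compression and return the accumulated bytes.
    byte[] close() throws IOException;
}

// No-op variant for the uncompressed case: just concatenates the records.
class IdentityCompressor implements Compressor {
    private final ByteArrayOutputStream out = new ByteArrayOutputStream();

    public void write(byte[] key, byte[] value) {
        out.writeBytes(key);
        out.writeBytes(value);
    }

    public byte[] close() {
        return out.toByteArray();
    }
}

// DEFLATE-backed variant: the Deflater (and its dictionary/state) lives with
// the compressor instance and is only created on the first write.
class DeflateCompressor implements Compressor {
    private final ByteArrayOutputStream buf = new ByteArrayOutputStream();
    private DeflaterOutputStream stream; // created lazily

    public void write(byte[] key, byte[] value) throws IOException {
        if (stream == null)
            stream = new DeflaterOutputStream(buf, new Deflater());
        stream.write(key);
        stream.write(value);
    }

    public byte[] close() throws IOException {
        if (stream == null)
            return new byte[0]; // nothing was ever written
        stream.finish();
        return buf.toByteArray();
    }
}
```

The point is just that all compression state is owned by the object stored on the records instance, so the non-compressed path and the compressed path look identical to the caller.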

The previous compression code was quite complex, so really simplifying that 
logic will be one of the core challenges. One thing we can do is avoid 
supporting arbitrarily nested messages. The scala code currently allows any 
amount of recursive message nesting. This is a bit complex and not really 
needed.

Also feel free to change any of this stuff as it definitely isn't fully thought 
out.


> Implement compression in new producer
> -------------------------------------
>
>                 Key: KAFKA-1253
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1253
>             Project: Kafka
>          Issue Type: Sub-task
>          Components: producer 
>            Reporter: Jay Kreps
>




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
