[ https://issues.apache.org/jira/browse/KAFKA-1260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13906580#comment-13906580 ]

Jay Kreps commented on KAFKA-1260:
----------------------------------

I think this approach may work but we have to work out a few issues to know.

I like that the compressed message set uses a different impl than the 
non-compressed one, but I wonder whether this will work. On the producer side, 
data is either compressed or not, so there it works great. But on the consumer 
side you may get a mixture of compressed and non-compressed records, and you 
won't know ahead of time, so I'm not sure you can choose impls up front.
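To illustrate the consumer-side problem, here is a minimal hypothetical sketch (not the real Kafka classes; the codec codes, wire format, and class names are all assumptions for illustration). Because the codec is only discovered per message when iterating, the reader has to branch per record rather than selecting a single implementation for the whole fetch:

```java
import java.nio.ByteBuffer;

// Hypothetical sketch: on the consumer side the codec is only known once
// each message's attributes byte is read, so the iterator must dispatch
// per record instead of choosing one impl up front.
public class MixedMessageSetSketch {
    static final byte NONE = 0;  // hypothetical codec codes; real Kafka packs
    static final byte GZIP = 1;  // the codec into the message attributes byte

    // Wire format assumed here: [attributes(1)][length(4)][payload...]
    // Returns {plainCount, compressedCount}.
    static int[] iterate(ByteBuffer buf) {
        int plain = 0, compressed = 0;
        while (buf.hasRemaining()) {
            byte attrs = buf.get();
            int len = buf.getInt();
            buf.position(buf.position() + len); // skip payload in this sketch
            if ((attrs & 0x03) == NONE) plain++; // shallow path, no decompression
            else compressed++;                   // would wrap a decompressor here
        }
        return new int[]{plain, compressed};
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(32);
        buf.put(NONE).putInt(3).put(new byte[3]);
        buf.put(GZIP).putInt(2).put(new byte[2]);
        buf.flip();
        int[] counts = iterate(buf);
        System.out.println(counts[0] + " plain, " + counts[1] + " compressed");
    }
}
```

The branch inside the loop is exactly the spot where a fixed, up-front choice of implementation breaks down.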

We also need to avoid double copying of data. Ideally we should also find a 
way to refactor so we don't duplicate code. But regardless, double-allocating 
all our memory and double-writing it is a non-starter.
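A hedged sketch of the single-pass idea (class and method names are hypothetical, and GZIP stands in for whatever codec is configured): stream each record through the compressor directly into the destination buffer, instead of first materializing a full uncompressed copy and then compressing that copy into a second buffer, which is the double-allocate, double-write pattern.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Hypothetical sketch: records are written once, straight through the
// compressor into the destination, so no intermediate uncompressed
// buffer of the full message set is ever allocated.
public class SinglePassCompressSketch {
    static byte[] writeOnce(byte[][] records) throws IOException {
        ByteArrayOutputStream dest = new ByteArrayOutputStream();
        try (OutputStream gz = new GZIPOutputStream(dest)) {
            for (byte[] r : records) {
                gz.write(r); // single pass: compressor output lands in dest
            }
        }
        return dest.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] out = writeOnce(new byte[][]{"hello".getBytes(), "world".getBytes()});
        // round-trip to show the data survived the single-pass write
        ByteArrayOutputStream back = new ByteArrayOutputStream();
        try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(out))) {
            in.transferTo(back);
        }
        System.out.println(back.toString());
    }
}
```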

> Integration Test for New Producer Part II: Broker Failure Handling
> ------------------------------------------------------------------
>
>                 Key: KAFKA-1260
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1260
>             Project: Kafka
>          Issue Type: Sub-task
>          Components: producer 
>            Reporter: Guozhang Wang
>            Assignee: Guozhang Wang
>         Attachments: KAFKA-1260.patch, KAFKA-1260_2014-02-13_15:14:21.patch, 
> KAFKA-1260_2014-02-14_15:00:16.patch, KAFKA-1260_2014-02-19_13:49:19.patch, 
> KAFKA-1260_2014-02-19_15:55:06.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
