[ https://issues.apache.org/jira/browse/KAFKA-3961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380940#comment-15380940 ]

Dieter Plaetinck commented on KAFKA-3961:
-----------------------------------------

So I noticed I don't even have to switch from no compression to compression:
I can just start fresh and send gzip/snappy data straight away to trigger the
issue. If I use no compression, it works fine.

The output of the script looks good to me. Here's the output for all three
cases (none, snappy, and gzip):

docker exec -t -i $(docker ps | grep raintank_kafka_1 | cut -d' ' -f1) /opt/kafka_2.11-0.10.0.0/bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files /tmp/kafka-logs/mdm-0/00000000000000000000.log | head
Dumping /tmp/kafka-logs/mdm-0/00000000000000000000.log
Starting offset: 0
offset: 0 position: 0 isvalid: true payloadsize: 179 magic: 1 compresscodec: NoCompressionCodec crc: 3280747185 keysize: 8
offset: 1 position: 221 isvalid: true payloadsize: 179 magic: 1 compresscodec: NoCompressionCodec crc: 3546883992 keysize: 8
offset: 2 position: 442 isvalid: true payloadsize: 179 magic: 1 compresscodec: NoCompressionCodec crc: 3572906062 keysize: 8
offset: 3 position: 663 isvalid: true payloadsize: 179 magic: 1 compresscodec: NoCompressionCodec crc: 397468137 keysize: 8
offset: 4 position: 884 isvalid: true payloadsize: 179 magic: 1 compresscodec: NoCompressionCodec crc: 2327265879 keysize: 8
offset: 5 position: 1105 isvalid: true payloadsize: 179 magic: 1 compresscodec: NoCompressionCodec crc: 693965344 keysize: 8
offset: 6 position: 1326 isvalid: true payloadsize: 179 magic: 1 compresscodec: NoCompressionCodec crc: 1844153547 keysize: 8
offset: 7 position: 1547 isvalid: true payloadsize: 179 magic: 1 compresscodec: NoCompressionCodec crc: 1888993996 keysize: 8


docker exec -t -i $(docker ps | grep raintank_kafka_1 | cut -d' ' -f1) /opt/kafka_2.11-0.10.0.0/bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files /tmp/kafka-logs/mdm-0/00000000000000000000.log | head
Dumping /tmp/kafka-logs/mdm-0/00000000000000000000.log
Starting offset: 0
offset: 1 position: 0 isvalid: true payloadsize: 292 magic: 1 compresscodec: SnappyCompressionCodec crc: 7281723
offset: 9 position: 326 isvalid: true payloadsize: 689 magic: 1 compresscodec: SnappyCompressionCodec crc: 3347985156
offset: 10 position: 1049 isvalid: true payloadsize: 214 magic: 1 compresscodec: SnappyCompressionCodec crc: 4000746891
offset: 19 position: 1297 isvalid: true payloadsize: 726 magic: 1 compresscodec: SnappyCompressionCodec crc: 1610492591
offset: 20 position: 2057 isvalid: true payloadsize: 214 magic: 1 compresscodec: SnappyCompressionCodec crc: 3804305430
offset: 29 position: 2305 isvalid: true payloadsize: 731 magic: 1 compresscodec: SnappyCompressionCodec crc: 595144250
offset: 30 position: 3070 isvalid: true payloadsize: 214 magic: 1 compresscodec: SnappyCompressionCodec crc: 4231320622
offset: 39 position: 3318 isvalid: true payloadsize: 731 magic: 1 compresscodec: SnappyCompressionCodec crc: 2800209123

docker exec -t -i $(docker ps | grep raintank_kafka_1 | cut -d' ' -f1) /opt/kafka_2.11-0.10.0.0/bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files /tmp/kafka-logs/mdm-0/00000000000000000000.log | head
Dumping /tmp/kafka-logs/mdm-0/00000000000000000000.log
Starting offset: 0
offset: 0 position: 0 isvalid: true payloadsize: 200 magic: 1 compresscodec: GZIPCompressionCodec crc: 314894844
offset: 9 position: 234 isvalid: true payloadsize: 612 magic: 1 compresscodec: GZIPCompressionCodec crc: 2990244273
offset: 11 position: 880 isvalid: true payloadsize: 260 magic: 1 compresscodec: GZIPCompressionCodec crc: 3785043074
offset: 19 position: 1174 isvalid: true payloadsize: 551 magic: 1 compresscodec: GZIPCompressionCodec crc: 2341216064
offset: 21 position: 1759 isvalid: true payloadsize: 261 magic: 1 compresscodec: GZIPCompressionCodec crc: 3850584842
offset: 29 position: 2054 isvalid: true payloadsize: 556 magic: 1 compresscodec: GZIPCompressionCodec crc: 2303391279
offset: 31 position: 2644 isvalid: true payloadsize: 260 magic: 1 compresscodec: GZIPCompressionCodec crc: 2729038381
offset: 39 position: 2938 isvalid: true payloadsize: 557 magic: 1 compresscodec: GZIPCompressionCodec crc: 1498839171


> broker sends malformed response when switching from no compression to 
> snappy/gzip
> ---------------------------------------------------------------------------------
>
>                 Key: KAFKA-3961
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3961
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.10.0.0
>         Environment: docker container java:openjdk-8-jre on arch linux 
> 4.5.4-1-ARCH
>            Reporter: Dieter Plaetinck
>
> Hi, this is my first time using this tracker, so please bear with me (priority 
> seems to be Major by default?).
> I should be allowed to switch back and forth between none/gzip/snappy 
> compression on the same topic/partition, right?
> (I couldn't find this stated explicitly anywhere, but it seems implied by the 
> docs and also by https://issues.apache.org/jira/browse/KAFKA-1499.)
> When I try this (first I use no compression, then kill my producer, restart 
> it with snappy or gzip compression, and send data to the same topic/partition 
> again), the broker seems to send a malformed response to my consumer.
> At least that's what was suggested when I reported this problem in the 
> tracker for the client library I use 
> (https://github.com/Shopify/sarama/issues/698). Also noteworthy: the 
> broker doesn't log anything when this happens.
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)