[ https://issues.apache.org/jira/browse/KAFKA-374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530654#comment-13530654 ]
Scott Carey commented on KAFKA-374:
-----------------------------------

Awesome. I was just profiling some Kafka 0.7.1 code and noticed the CRC eating up time, and since I co-authored the pure Java Hadoop implementation, I was just about to file a JIRA here myself.

On a related note, it would be nice to offload the CRC and decompression to a thread other than the user's thread. Does 0.8's client do all of this work on the user thread, like 0.7.1? Our 0.7.1 consumers are often throughput-bound by CPU, including Kafka code. If the client is currently single-threaded, I can file a JIRA with some ideas.

> Move to java CRC32 implementation
> ---------------------------------
>
>                 Key: KAFKA-374
>                 URL: https://issues.apache.org/jira/browse/KAFKA-374
>             Project: Kafka
>          Issue Type: New Feature
>          Components: core
>    Affects Versions: 0.8
>            Reporter: Jay Kreps
>            Priority: Minor
>              Labels: newbie
>         Attachments: KAFKA-374-draft.patch, KAFKA-374.patch
>
>
> We keep a per-record crc32. This is a fairly cheap algorithm, but the Java implementation uses JNI, and it seems to be a bit expensive for small records. I have seen this before in Kafka profiles, and I noticed it in another application I was working on. Basically, with small records the native implementation can only checksum < 100 MB/sec. Hadoop has done some analysis of this and replaced it with a Java implementation that is 2x faster for large values and 5-10x faster for small values. Details are in HADOOP-6148.
> We should do a quick read/write benchmark on log and message set iteration and see if this improves things.
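The description ends by asking for a quick benchmark. Below is a minimal sketch of one (not the attached patch): it compares the JNI-backed java.util.zip.CRC32 against Hadoop's PureJavaCrc32 from HADOOP-6148. It assumes hadoop-common is on the classpath; the class name CrcBench, the 256 MB workload, and the record sizes are arbitrary choices for illustration.

{code:java}
import java.util.zip.CRC32;
import java.util.zip.Checksum;

import org.apache.hadoop.util.PureJavaCrc32; // hadoop-common, the HADOOP-6148 implementation

public class CrcBench {

    // Checksum 256 MB of zeros in recordSize-byte chunks, resetting per
    // record to mimic Kafka's per-record CRC, and report MB/sec.
    static double mbPerSec(Checksum crc, int recordSize) {
        byte[] record = new byte[recordSize];
        int rounds = (256 * 1024 * 1024) / recordSize;
        long start = System.nanoTime();
        for (int i = 0; i < rounds; i++) {
            crc.reset();
            crc.update(record, 0, record.length);
        }
        return 256.0 / ((System.nanoTime() - start) / 1e9);
    }

    public static void main(String[] args) {
        // Small records are where per-call JNI overhead dominates; large
        // records amortize it, so measure both ends and the middle.
        for (int size : new int[] {64, 1024, 65536}) {
            System.out.printf("%6d-byte records: java.util.zip.CRC32 %8.1f MB/s, PureJavaCrc32 %8.1f MB/s%n",
                    size, mbPerSec(new CRC32(), size), mbPerSec(new PureJavaCrc32(), size));
        }
    }
}
{code}

The per-record reset() is the important part: it models Kafka's per-message checksum, and the small-record case is exactly where the JNI call overhead shows up as the < 100 MB/sec figure quoted above. For numbers you intend to trust, run each measurement a few times and discard the first pass so the JIT has warmed up.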
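On the comment's suggestion of moving CRC validation and decompression off the user's thread, here is a rough sketch of that pipeline shape. The Chunk type, queue sizes, and single worker thread are hypothetical stand-ins, not Kafka's consumer API; the point is only that the fetcher thread enqueues raw bytes, a worker validates (and would decompress) them, and the user's thread dequeues finished records.

{code:java}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.zip.CRC32;

public class DecodePipeline {

    // Hypothetical stand-in for a fetched message chunk; not a Kafka class.
    static final class Chunk {
        final byte[] payload;
        final long expectedCrc;
        Chunk(byte[] payload, long expectedCrc) {
            this.payload = payload;
            this.expectedCrc = expectedCrc;
        }
    }

    // Bounded queues so neither side can run away from the other.
    private final BlockingQueue<Chunk> raw = new ArrayBlockingQueue<>(1024);
    private final BlockingQueue<byte[]> decoded = new ArrayBlockingQueue<>(1024);

    // Worker thread validates the CRC (decompression would go here too)
    // so the user's thread pays for neither.
    private final Thread worker = new Thread(() -> {
        CRC32 crc = new CRC32();
        try {
            while (true) {
                Chunk c = raw.take();
                crc.reset();
                crc.update(c.payload, 0, c.payload.length);
                if (crc.getValue() != c.expectedCrc)
                    throw new IllegalStateException("bad checksum");
                decoded.put(c.payload);
            }
        } catch (InterruptedException e) {
            // interrupted on shutdown; let the thread exit
        }
    }, "crc-decode-worker");

    public void start() { worker.setDaemon(true); worker.start(); }

    public void shutdown() { worker.interrupt(); }

    // Called by the network/fetcher thread.
    public void offer(Chunk c) throws InterruptedException { raw.put(c); }

    // Called by the user's thread; blocks until a validated record is ready.
    public byte[] next() throws InterruptedException { return decoded.take(); }
}
{code}

The bounded queues give back-pressure in both directions: the fetcher blocks if the worker falls behind, and the worker blocks if the user's thread stops consuming.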