[ https://issues.apache.org/jira/browse/KAFKA-374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526136#comment-13526136 ]
David Arthur commented on KAFKA-374:
------------------------------------

I pulled in the pure-Java implementation from Hadoop for comparison: https://docs.google.com/spreadsheet/pub?key=0AksaPvYfWJQFdG5fZFNyWnpOUzZfZEtnVl9YZ21FWUE&output=html

Pure Java has a slight advantage over pure Scala, and both show a good speedup over JNI. This was done against 0.8 using the same test code as in the patch. (A minimal sketch of the table-driven pure-Java approach follows the quoted issue description below.)

> Move to java CRC32 implementation
> ---------------------------------
>
>                 Key: KAFKA-374
>                 URL: https://issues.apache.org/jira/browse/KAFKA-374
>             Project: Kafka
>          Issue Type: New Feature
>          Components: core
>    Affects Versions: 0.8
>            Reporter: Jay Kreps
>            Priority: Minor
>              Labels: newbie
>         Attachments: KAFKA-374-draft.patch
>
>
> We keep a per-record crc32. This is a fairly cheap algorithm, but the Java
> implementation uses JNI and seems to be a bit expensive for small records.
> I have seen this before in Kafka profiles, and I noticed it in another
> application I was working on. Basically, with small records the native
> implementation can only checksum < 100 MB/sec. Hadoop has done some analysis
> of this and replaced it with a Java implementation that is 2x faster for
> large values and 5-10x faster for small values. Details are in HADOOP-6148.
> We should do a quick read/write benchmark on log and message set iteration
> and see if this improves things.
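For reference, here is a minimal sketch of the idea being benchmarked: a table-driven CRC-32 (IEEE polynomial) computed entirely in Java, compared against java.util.zip.CRC32, whose update call goes through JNI in this JDK era. Hadoop's PureJavaCrc32 from HADOOP-6148 is a more elaborate slicing-by-8 variant; the single-table version and the class/benchmark names below are illustrative only and are not the test code attached to this issue.

{code:java}
import java.util.zip.CRC32;
import java.util.zip.Checksum;

// Sketch of a table-driven CRC-32 (polynomial 0xEDB88320) computing the
// same checksum as java.util.zip.CRC32, but without crossing into JNI.
// Hadoop's PureJavaCrc32 uses a faster slicing-by-8 variant of this idea.
final class PureJavaCrc32Sketch implements Checksum {
    private static final int[] TABLE = new int[256];
    static {
        for (int i = 0; i < 256; i++) {
            int c = i;
            for (int k = 0; k < 8; k++)
                c = (c & 1) != 0 ? (c >>> 1) ^ 0xEDB88320 : c >>> 1;
            TABLE[i] = c;
        }
    }

    private int crc = 0xFFFFFFFF;

    public void update(int b) {
        crc = (crc >>> 8) ^ TABLE[(crc ^ b) & 0xFF];
    }

    public void update(byte[] b, int off, int len) {
        for (int i = off; i < off + len; i++)
            crc = (crc >>> 8) ^ TABLE[(crc ^ b[i]) & 0xFF];
    }

    public long getValue() { return (~crc) & 0xFFFFFFFFL; }

    public void reset() { crc = 0xFFFFFFFF; }
}

public class CrcCompare {
    public static void main(String[] args) {
        byte[] record = new byte[100]; // small record: the case where JNI call overhead dominates
        new java.util.Random(42).nextBytes(record);
        int rounds = 5_000_000;

        // Sanity check: both implementations must produce the same checksum.
        CRC32 jni = new CRC32();
        jni.update(record, 0, record.length);
        PureJavaCrc32Sketch pure = new PureJavaCrc32Sketch();
        pure.update(record, 0, record.length);
        if (jni.getValue() != pure.getValue())
            throw new AssertionError("CRC mismatch");

        System.out.printf("JNI CRC32: %.1f MB/sec%n",
                throughput(new CRC32(), record, rounds));
        System.out.printf("pure Java: %.1f MB/sec%n",
                throughput(new PureJavaCrc32Sketch(), record, rounds));
    }

    static double throughput(Checksum c, byte[] record, int rounds) {
        long start = System.nanoTime();
        for (int i = 0; i < rounds; i++) {
            c.reset();
            c.update(record, 0, record.length);
        }
        long elapsedNs = System.nanoTime() - start;
        // bytes/ns * 1000 = MB/sec (MB = 10^6 bytes)
        return (double) record.length * rounds * 1000 / elapsedNs;
    }
}
{code}

The point of the sketch is that for ~100-byte records the fixed cost of the JNI transition is paid once per record, so a plain Java loop over a lookup table can win even though the native CRC routine itself is fast on large buffers.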