Hi, I'm wondering: when you call kafka.javaapi.Producer.send() with a list of messages and also have compression on (snappy in this case), how does it decide how many messages to compress together as one?
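My rough understanding is that the producer compresses the entire list from one send() call into a single wrapped message, so the broker checks the size of the whole compressed batch rather than each individual message. If that's right, the arithmetic below would explain what I'm seeing (a minimal sketch; it uses gzip from the Python stdlib as a stand-in for snappy, and the message count of 15 is just an illustrative assumption):

```python
import gzip
import os

# Numbers from my setup: ~70 KB messages, and the broker's
# 1000012-byte limit reported in the exception below.
MESSAGE_SIZE = 70_000
BROKER_LIMIT = 1_000_012

# If the whole send() list is compressed as ONE wrapped message,
# the broker sees the size of the compressed batch, not 70 KB.
# Random bytes model payloads that don't compress well.
messages = [os.urandom(MESSAGE_SIZE) for _ in range(15)]
batch = b"".join(messages)

compressed = gzip.compress(batch)  # stand-in for snappy
print(len(batch), len(compressed))
# 15 x 70 KB = 1,050,000 bytes, and incompressible data stays
# roughly that size after compression -- over the broker limit.
```

So with poorly compressible payloads, a batch of only ~15 such messages would already trip the limit, even though each message alone is tiny.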
The reason I'm asking is that even though my messages are only 70 KB uncompressed, the broker complains that I'm hitting the 1 MB message limit:

kafka.common.MessageSizeTooLargeException: Message size is 1035608 bytes which exceeds the maximum configured message size of 1000012.
    at kafka.log.Log$$anonfun$analyzeAndValidateMessageSet$1.apply(Log.scala:378)
    at kafka.log.Log$$anonfun$analyzeAndValidateMessageSet$1.apply(Log.scala:361)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:32)
    at kafka.log.Log.analyzeAndValidateMessageSet(Log.scala:361)
    at kafka.log.Log.append(Log.scala:257)
    at kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:379)
    at kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:365)
    at kafka.utils.Utils$.inLock(Utils.scala:535)
    at kafka.utils.Utils$.inReadLock(Utils.scala:541)
    at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:365)
    at kafka.server.KafkaApis$$anonfun$appendToLocalLog$2.apply(KafkaApis.scala:291)
    at kafka.server.KafkaApis$$anonfun$appendToLocalLog$2.apply(KafkaApis.scala:282)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
    at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
    at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)

Thanks,
Jamie