Hi,
On Fri, 3 Jan 2020 at 19:48, Clark Sims <clark.norton.s...@gmail.com> wrote:
> Why do some people so strongly recommend cutting large messages into
> many small messages, as opposed to changing max.message.bytes?
>
> For example, Stéphane Maarek at
> https://www.quora.com/How-do-I-send-Large-messages-80-MB-in-Kafkam,
> says "Kafka isn’t meant to handle large messages and that’s why the
> message max size is 1MB (the setting in your brokers is called
> message.max.bytes). See Apache Kafka. If you wanted to you could
> increase that as well as make sure to increase the network buffers
> for your producers and consumers. Let me be clear, I DO NOT ENCOURAGE
> THIS."
>
> It seems to me that a 100 megabyte message should be fine on any large
> server.

It depends on what you are trying to do. In stream processing, or an
event-centric ecosystem, a 100MB payload sounds a bit extreme to me. I
would expect large message sizes when you are dealing with log
aggregation or e2e integration (e.g. via Kafka Connect).

It's not about who recommends what; it's about what your end goal is
with Kafka. This also isn't specific to Kafka: the same holds for any
other modern messaging system, e.g. Pulsar, NATS, etc. You will
determine what throughput/latency targets you have and what kind of
payload you are dealing with, and based on that there will always be an
upper bound on what you can and cannot realistically achieve.
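If you do decide to raise the limit, note that it has to be raised
consistently on the broker (or topic), the producer, and the consumer,
otherwise the producer or broker will reject the record with a
RecordTooLargeException. A minimal sketch of the knobs involved, using
the standard Java client config constants (the 100MB value is purely
illustrative, not a recommendation):

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class LargeMessageConfig {
        public static void main(String[] args) {
            int maxBytes = 104857600; // ~100 MB, illustrative only

            // Producer: the serialized record (plus overhead) must fit
            // into a single request.
            Properties producer = new Properties();
            producer.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, maxBytes);
            producer.put(ProducerConfig.BUFFER_MEMORY_CONFIG,
                    2L * maxBytes); // room to buffer at least one record

            // Consumer: fetch limits should be at least the size of the
            // largest record you expect.
            Properties consumer = new Properties();
            consumer.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG,
                    maxBytes);
            consumer.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, maxBytes);

            // Broker side (server.properties), or per topic via
            // max.message.bytes:
            //   message.max.bytes=104857600
            //   replica.fetch.max.bytes=104857600
            //   (the latter so followers can still replicate the records)
        }
    }

Even with all of that in place, a 100MB record occupies broker memory
and network threads for much longer than a small record, which is where
the throughput/latency upper bound above comes from.

Thanks,

>
> Thanks in Advance,
> Clark
>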