Also, even though the replica fetcher still makes progress, each
individual message must be less than or equal to message.max.bytes;
otherwise we will get a RecordTooLargeException.
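
For example, a broker configuration along these lines keeps the two
settings consistent (a minimal server.properties sketch; the values are
only illustrative, not recommendations):

  # Largest message the broker will accept from producers.
  message.max.bytes=1000012

  # Per-partition fetch size used by replica fetchers. It does not have
  # to exceed message.max.bytes, because an oversized first message is
  # still returned so replication can make progress (see the docs quote
  # below).
  replica.fetch.max.bytes=1048576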

On Thu, Mar 23, 2017 at 2:34 PM, Ben Stopford <b...@confluent.io> wrote:

> Hi Kostas - The docs for replica.fetch.max.bytes should be helpful here:
>
> The number of bytes of messages to attempt to fetch for each partition.
> This is not an absolute maximum, if the first message in the first
> non-empty partition of the fetch is larger than this value, the message
> will still be returned to ensure that progress can be made.
>
> -B
>
> On Thu, Mar 23, 2017 at 3:27 AM Kostas Christidis <kos...@gmail.com>
> wrote:
>
> > Can replica.fetch.max.bytes be equal to message.max.bytes?
> >
> > 1. The defaults in the official Kafka documentation [1] have the
> > parameter "replica.fetch.max.bytes" set to a higher value than
> > "message.max.bytes". However, nothing in the description of these
> > parameters implies that equality would be wrong.
> >
> > 2. The relevant passage on p. 41 of the Definitive Guide [2] does
> > not imply that the former needs to be larger than the latter
> > either.
> >
> > 3. A Cloudera doc [3] however notes that: "replica.fetch.max.bytes
> > [...] must be larger than message.max.bytes, or a broker can accept
> > messages it cannot replicate, potentially resulting in data loss."
> >
> > 4. The only other reference I could find to this strict inequality was
> > this StackOverflow comment [4].
> >
> > So:
> >
> > Does replica.fetch.max.bytes *have* to be strictly larger than
> > message.max.bytes?
> >
> > If so, what is the technical reason behind this?
> >
> > Thank you.
> >
> > [1] https://kafka.apache.org/documentation/
> > [2] https://shop.oreilly.com/product/0636920044123.do
> > [3] https://www.cloudera.com/documentation/kafka/latest/topics/kafka_performance.html
> > [4] http://stackoverflow.com/a/39026744/2363529
> >
>
