Thanks Colin.

I read this statement: "Fetch request from replicas will also be affected
by the *fetch.max.bytes* limit."

It made me wonder whether this was referring to the replica fetcher byte
size, but thanks for clarifying.
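
To make sure I have it straight, here is how I now picture the two sides
(property names as in the KIP discussion; the values below are purely
illustrative, not necessarily the real defaults):

    # broker side (server.properties)
    # new broker-wide cap on how much data a single fetch request may return
    fetch.max.bytes=57671680
    # size of the fetches this broker itself makes as a follower replica
    replica.fetch.max.bytes=1048576

    # consumer side
    # per-consumer request limit; the broker cap above bounds what is actually returned
    fetch.max.bytes=52428800

i.e. replica.fetch.max.bytes only governs the follower's own fetch
requests, while the new broker-level fetch.max.bytes bounds what consumers
can ask for in a single fetch.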

Regards,

On Tue, 22 Oct 2019 at 00:26, Colin McCabe <cmcc...@apache.org> wrote:

> On Mon, Oct 21, 2019, at 15:52, M. Manna wrote:
> > Hello Colin,
> >
> > The KIP looks concise. My comments are below.
> >
> > replica.fetch.max.bytes is relevant when there is replication involved,
> > so I am trying to understand how fetch.max.bytes for a broker will play
> > a role here. Apologies for any limited assumptions (always trying to
> > catch up with Kafka :)).
>
> Hi M. Manna,
>
> Thanks for taking a look.
>
> replica.fetch.max.bytes only controls how large the fetches that the
> replicas make to other brokers can be.  It does not act as an upper limit
> on the size of inbound fetches made by Kafka consumers.  It is only
> involved in the fetch requests that the broker itself initiates.
>
> >
> > Also, would you kindly suggest how (or if) the traditional performance
> > tests are affected by this change?
> > Regards,
> >
>
> There shouldn't be any effect at all, since the upper limit that we are
> setting is higher than the limit which the consumer sets by default.  The
> main goal here is to prevent clients from setting values which don't really
> make sense, not to find the optimum value.  The optimum value will depend
> somewhat on how fast the cluster's disks are, and other factors.
>
> best,
> Colin
>
> >
> > On Mon, 21 Oct 2019 at 22:57, Colin McCabe <cmcc...@apache.org> wrote:
> >
> > > Hi all,
> > >
> > > I wrote a KIP about creating a fetch.max.bytes configuration for the
> > > broker.  Please take a look here:
> > > https://cwiki.apache.org/confluence/x/4g73Bw
> > >
> > > thanks,
> > > Colin
> > >
> >
>
