If I want to get higher throughput, should I increase the
log.segment.bytes?

I don't see log.retention.check.interval.ms, but there is
log.cleanup.interval.mins. Is that what you mean?

If I set log.roll.ms or log.cleanup.interval.mins too small, will it hurt
throughput? Thanks.
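For reference, the aggressive-retention approach described in the reply below might look something like this in a broker's server.properties. All values here are illustrative assumptions, not recommendations, and note that retention only removes closed segments, so small segment/roll settings are what make very short retention windows effective:

```properties
# Hypothetical broker settings for very aggressive log cleanup (illustrative values)
log.retention.ms=10000                 # delete segments older than ~10 seconds
log.retention.bytes=53687091200        # ...or once a partition exceeds ~50 GB
log.segment.bytes=10485760             # roll small (10 MB) segments so old data becomes eligible for deletion quickly
log.roll.ms=5000                       # force a segment roll at least every 5 seconds
log.retention.check.interval.ms=1000   # check for deletable segments every second
```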

On Fri, Jul 24, 2015 at 11:03 PM, Ewen Cheslack-Postava <e...@confluent.io>
wrote:

> You'll want to set the log retention policy via
> log.retention.{ms,minutes,hours} or log.retention.bytes. If you want really
> aggressive collection (e.g., on the order of seconds, as you specified),
> you might also need to adjust log.segment.bytes/log.roll.{ms,hours} and
> log.retention.check.interval.ms.
>
> On Fri, Jul 24, 2015 at 12:49 PM, Yuheng Du <yuheng.du.h...@gmail.com>
> wrote:
>
> > Hi,
> >
> > I am testing Kafka producer performance, so I created a queue and am
> > writing a large amount of data to it.
> >
> > Is there a way to delete the data automatically, say whenever the data
> > size reaches 50 GB or the retention time exceeds 10 seconds, so that my
> > disk won't fill up and block new writes?
> >
> > Thanks!
> >
>
>
>
> --
> Thanks,
> Ewen
>
