Thanks, Ben, for the detailed explanation.
-Tao

On Fri, Aug 7, 2015 at 3:28 AM, Ben Stopford <b...@confluent.io> wrote:

> Hi Tao
>
> 1. I am wondering if the fsync operation is called by the last two routines
> internally?
> => Yes
>
> 2. If log.flush.interval.ms is not specified, is it true that Kafka lets
> the OS handle pagecache flushing in the background?
> => Yes
>
> 3. If we specify acks=1 or acks=-1 in the new producer, are those requests
> persisted only in the pagecache, or on actual disk?
> => Acknowledgements do not imply the channel is flushed. acks=-1 will
> increase durability through redundancy. If you really want to control fsync
> you can configure Kafka to force a flush after a defined number of
> messages using log.flush.interval.messages. I should add that this isn’t
> generally the best approach. You’ll get much better performance if you use
> multiple redundant replicas to manage your durability concerns, if you can.
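The fsync-versus-replication trade-off described above can be made concrete with the relevant config properties (a sketch; the property names are Kafka's documented configs, the values are illustrative only, not recommendations):

```properties
# Broker (server.properties): force an fsync after every N messages.
# Durable against power loss on a single machine, but costs throughput.
log.flush.interval.messages=10000

# Producer: rely on replication for durability instead.
# acks=-1 (alias "all") waits for all in-sync replicas to acknowledge.
acks=-1
```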
>
> B
>
> > On 7 Aug 2015, at 05:49, Tao Feng <fengta...@gmail.com> wrote:
> >
> > Hi ,
> >
> > I am trying to understand the Kafka log flush behavior. My understanding
> > is that when the broker config param "log.flush.interval.ms" is specified,
> > it sets the log config param "flush.ms" internally. In the LogManager
> > logic, when a log exceeds flush.ms, it calls Log.flush, which in turn
> > calls FileChannel.force(true) and MappedByteBuffer.force().
> >
> > A couple of questions:
> > 1. I am wondering if the fsync operation is called by the last two
> > routines internally?
> > 2. If log.flush.interval.ms is not specified, is it true that Kafka lets
> > the OS handle pagecache flushing in the background?
> > 3. If we specify acks=1 or acks=-1 in the new producer, are those requests
> > persisted only in the pagecache, or on actual disk?
> >
> > Thanks,
> > -Tao
>
>
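The java.nio calls discussed in this thread can be exercised standalone. Below is a minimal sketch (not Kafka's actual Log.flush code) showing both flavors of flush: FileChannel.force(true), which flushes data and metadata (an fsync() on POSIX systems), and MappedByteBuffer.force(), which flushes the dirty pages of a memory-mapped region (an msync()). Kafka uses the former for log segments and the latter for its mmap'd index files.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FlushDemo {
    // Writes a record and fsyncs it via FileChannel.force, then overwrites
    // it through a memory-mapped region and msyncs via MappedByteBuffer.force.
    static String demo() throws IOException {
        Path path = Files.createTempFile("flush-demo", ".log");

        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.WRITE)) {
            ch.write(ByteBuffer.wrap("record-1".getBytes(StandardCharsets.UTF_8)));
            // force(true) flushes data AND file metadata -- fsync() on POSIX.
            ch.force(true);
        }

        try (FileChannel ch = FileChannel.open(path,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Note the method is force(), not flush(); it pushes the dirty
            // pages of the mapped region to disk -- msync() on POSIX.
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, 8);
            map.put("record-2".getBytes(StandardCharsets.UTF_8));
            map.force();
        }

        return new String(Files.readAllBytes(path), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo()); // prints "record-2"
    }
}
```

Neither call is made per-write by Kafka unless the flush settings above are configured; by default the OS writes dirty pagecache pages back on its own schedule.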
