Hey, thanks for the update,

Do you know if there is any mention of this in the "official" docs?

It was a Golang client for Kafka.
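For what it's worth, the rule described in the quoted reply below (use the outer/batch timestamp when log append time is in effect, otherwise the record-level one) could be sketched in Go roughly like this. All type and field names here are hypothetical for illustration, not any real client's API:

```go
package main

import "fmt"

// TimestampType mirrors Kafka's timestamp-type attribute bit
// (hypothetical Go names; a real client decodes this from batch attributes).
type TimestampType int

const (
	CreateTime TimestampType = iota
	LogAppendTime
)

// RecordBatch holds the batch-level (outer) fields a consumer sees.
type RecordBatch struct {
	TimestampType TimestampType
	MaxTimestamp  int64 // set by the broker when LogAppendTime is in effect
}

// Record holds the record-level (inner) timestamp set by the producer.
type Record struct {
	Timestamp int64
}

// EffectiveTimestamp applies the rule from the reply below: when the broker
// uses log append time, the batch-level timestamp wins and the producer-set
// record timestamp is ignored; otherwise the record's own timestamp is used.
func EffectiveTimestamp(b RecordBatch, r Record) int64 {
	if b.TimestampType == LogAppendTime {
		return b.MaxTimestamp
	}
	return r.Timestamp
}

func main() {
	b := RecordBatch{TimestampType: LogAppendTime, MaxTimestamp: 1496130000000}
	r := Record{Timestamp: 1496129000000}
	// Prints the broker-assigned batch timestamp, not the producer's.
	fmt.Println(EffectiveTimestamp(b, r))
}
```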


On Tue, May 30, 2017 at 3:50 PM, Ismael Juma <ism...@juma.me.uk> wrote:

> Hi Dmitriy,
>
> Yes, the broker only updates the timestamp of the outer message (or record
> batch in message format V2) so that it does not need to recompress if log
> append time is used. Consumers should ignore the timestamp in the inner
> message (or record-level timestamp in message format V2) if log append time
> is used. Example in the Java implementation:
>
> https://github.com/apache/kafka/blob/trunk/clients/src/
> main/java/org/apache/kafka/common/record/DefaultRecord.java#L341
>
> Hope this helps.
>
> Out of curiosity, which clients do this differently?
>
> Ismael
>
> On Tue, May 30, 2017 at 8:30 AM, Dmitriy Vsekhvalnov <
> dvsekhval...@gmail.com
> > wrote:
>
> > Hi all,
> >
> > we noticed that when the Kafka broker is configured with:
> >
> >   log.message.timestamp.type=LogAppendTime
> >
> > to timestamp incoming messages on its own, and the producer is configured
> > to use any kind of compression, what we end up with on the wire for the
> > consumer is:
> >
> >   - outer compressed envelope - LogAppendTime, set by the broker
> >   - inner messages - CreateTime, set by the producer
> >
> > Tried versions 0.10 and 0.10.2 with producers in various languages.
> >
> > Is this by design? Does the broker just update the timestamp of the outer
> > compressed message and never look inside? Or is it a bug in the broker?
> >
> > Are there any guidelines for implementing clients that somebody can point
> > to? Should a client respect the outer timestamp or the inner one?
> >
> > The main reason we ask is that different client implementations return
> > different timestamps in this case.
> >
> > Thanks all.
> >
>