Ah. I had updated the Streams app, but the brokers were still running a slightly
older version. Updating the brokers seems to fix this. Thanks.
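
For anyone who hits this before they can bring their brokers up to date: one
possible client-side workaround, as far as I understand it, is to let Streams
stamp records with wall-clock time instead of the timestamp extracted from the
source record, so that writes to the internal *-repartition topics stay within
the range the broker accepts. Just a sketch (the application id and bootstrap
servers below are placeholders), and the real fix for us was upgrading the
brokers:

    import java.util.Properties;

    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.processor.WallclockTimestampExtractor;

    public class StreamsTimestampWorkaround {

        public static Properties streamsConfig() {
            Properties props = new Properties();
            // Placeholder application id and broker list.
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // Stamp records with the client's wall-clock time rather than the
            // timestamp embedded in the source record, so records produced to
            // the repartition topics are not rejected as out of range.
            props.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
                      WallclockTimestampExtractor.class.getName());
            return props;
        }
    }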


On Mon, May 15, 2017 at 8:50 AM, Eno Thereska <eno.there...@gmail.com>
wrote:

> Hi Frank,
>
> Could you confirm that you're using 0.10.2.1? This error was fixed as part
> of this JIRA, I believe: https://issues.apache.org/jira/browse/KAFKA-4861
>
> Thanks
> Eno
> > On 14 May 2017, at 23:09, Frank Lyaruu <flya...@gmail.com> wrote:
> >
> > Hi Kafka people...
> >
> > After a bit of tuning and an upgrade to Kafka 0.10.1.2, this error starts
> > showing up and the whole thing kind of dies.
> >
> > 2017-05-14 18:51:52,342 | ERROR | hread-3-producer | RecordCollectorImpl | 91 - com.dexels.kafka.streams - 0.0.115.201705131415 | task [1_0] Error sending record to topic KNBSB-test-generation-7-personcore-personcore-photopersongeneration-7-repartition. No more offsets will be recorded for this task and the exception will eventually be thrown
> > org.apache.kafka.common.errors.InvalidTimestampException: The timestamp of the message is out of acceptable range.
> > 2017-05-14 18:51:52,343 | INFO  | StreamThread-3   | StreamThread | 91 - com.dexels.kafka.streams - 0.0.115.201705131415 | stream-thread [StreamThread-3] Flushing state stores of task 1_0
> > 2017-05-14 18:51:52,345 | ERROR | StreamThread-3   | StreamThread | 91 - com.dexels.kafka.streams - 0.0.115.201705131415 | stream-thread [StreamThread-3] Failed while executing StreamTask 1_0 due to flush state:
> > org.apache.kafka.streams.errors.StreamsException: task [1_0] exception caught when producing
> >
> >   at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.checkForException(RecordCollectorImpl.java:121)[91:com.dexels.kafka.streams:0.0.115.201705131415]
> >   at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.flush(RecordCollectorImpl.java:129)[91:com.dexels.kafka.streams:0.0.115.201705131415]
> >   at org.apache.kafka.streams.processor.internals.StreamTask.flushState(StreamTask.java:422)[91:com.dexels.kafka.streams:0.0.115.201705131415]
> >   at org.apache.kafka.streams.processor.internals.StreamThread$4.apply(StreamThread.java:555)[91:com.dexels.kafka.streams:0.0.115.201705131415]
> >   at org.apache.kafka.streams.processor.internals.StreamThread.performOnTasks(StreamThread.java:501)[91:com.dexels.kafka.streams:0.0.115.201705131415]
> >   at org.apache.kafka.streams.processor.internals.StreamThread.flushAllState(StreamThread.java:551)[91:com.dexels.kafka.streams:0.0.115.201705131415]
> >   at org.apache.kafka.streams.processor.internals.StreamThread.shutdownTasksAndState(StreamThread.java:449)[91:com.dexels.kafka.streams:0.0.115.201705131415]
> >   at org.apache.kafka.streams.processor.internals.StreamThread.shutdown(StreamThread.java:391)[91:com.dexels.kafka.streams:0.0.115.201705131415]
> >   at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:372)[91:com.dexels.kafka.streams:0.0.115.201705131415]
> >
> > Caused by: org.apache.kafka.common.errors.InvalidTimestampException: The timestamp of the message is out of acceptable range.
> >
> > What does this mean? How can I debug this?
> >
> > Two observations:
> > - I only see this on *-repartition topics
> > - ... which are also the only topics with cleanup.policy = 'delete'
>
>
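
On the two observations above: as far as I understand it, on a topic that uses
CreateTime timestamps the broker rejects a record with InvalidTimestampException
when the record's timestamp differs from the broker's clock by more than the
topic's message.timestamp.difference.max.ms, so the per-topic configs on the
repartition topics are worth a look. Something along these lines (the zookeeper
host is a placeholder for your own) should list any overrides:

    bin/kafka-configs.sh --zookeeper localhost:2181 --describe \
        --entity-type topics \
        --entity-name KNBSB-test-generation-7-personcore-personcore-photopersongeneration-7-repartition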
