Re: Kafka Streams reports: "The timestamp of the message is out of acceptable range"

2017-05-14 Thread Eno Thereska
Hi Frank, could you confirm that you're using 0.10.2.1? This error was fixed as part of this JIRA, I believe: https://issues.apache.org/jira/browse/KAFKA-4861 Thanks, Eno > On 14 May 2017, at 23:09, Frank Lyaruu wrote: > > Hi Kafka people…

Re: session window bug not fixed in 0.10.2.1?

2017-05-14 Thread Guozhang Wang
Hello Ara, I think KAFKA-5172 could be the cause if your session windows grow larger over time with caching turned on. The fix itself has been committed to trunk and will be included in the June release; until then, could you try turning caching off and see whether this issue still shows up…
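
A minimal sketch of turning the Streams record cache off, assuming the 0.10.x-era StreamsConfig API (the application id and bootstrap servers are placeholders):

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class DisableCaching {
        public static void main(String[] args) {
            // Setting the record cache to zero bytes forwards every update
            // downstream instead of buffering it, which sidesteps the
            // session-window caching path until the fix ships.
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "session-window-app"); // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
            props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
        }
    }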

Re: Can state stores function as a caching layer for persistent storage

2017-05-14 Thread João Peixoto
Very useful links, thank you. Part of my original misunderstanding was that the at-least-once guarantee was considered fulfilled if the record reached a sink node. Thanks for all the feedback; you may consider my question answered. Feel free to ask further questions about the use case if found in…

Re: Order of punctuate() and process() in a stream processor

2017-05-14 Thread Mahendra Kariya
We use Kafka Streams for quite a few aggregation tasks, for instance counting the number of messages with a particular status in a 1-minute time window. We have noticed that whenever we restart a stream, we see a sudden spike in the aggregated numbers. After a few minutes, things are back to norm…
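
A minimal sketch of the kind of aggregation described, assuming the 0.10.2-era DSL, a hypothetical "events" topic, and records keyed by the status field:

    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KStreamBuilder;
    import org.apache.kafka.streams.kstream.TimeWindows;

    public class StatusCounts {
        public static void main(String[] args) {
            KStreamBuilder builder = new KStreamBuilder();
            // Assumes records arrive on an "events" topic with the status as key.
            KStream<String, String> events = builder.stream("events");
            // Count records per status in tumbling 1-minute windows.
            events.groupByKey()
                  .count(TimeWindows.of(60_000L), "status-counts-store");
        }
    }

Under at-least-once semantics, a restart can re-process records that were flushed but not yet committed, which could plausibly inflate windowed counts for a short period.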

Advantages of 0.10.0 protocol over 0.8.0

2017-05-14 Thread Milind Vaidya
Hi, we are using 0.8.1.1 for the producer and the broker (cluster), as well as for Storm integration. We are planning to upgrade to 0.10.0, the main reason being the producer API supporting flush(). That said, we have tested it in QA and it looks like, as long as the protocol is not bumped with newer dependencies, roll ba…
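
For reference, a minimal sketch of the flush() call motivating the upgrade, against the post-0.9 producer API (topic name and serializers are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class FlushExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            producer.send(new ProducerRecord<>("my-topic", "key", "value")); // placeholder topic
            producer.flush(); // blocks until all previously sent records complete
            producer.close();
        }
    }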

Re: Can state stores function as a caching layer for persistent storage

2017-05-14 Thread Matthias J. Sax
Yes. It is basically "documented", as Streams guarantees at-least-once semantics. Thus, we make sure you will not lose any data in case of failure (i.e., the overall guarantee is documented). To achieve this, we always flush before we commit offsets. (This is not explicitly documented as it's an…
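
A minimal sketch of the related knob, assuming the standard commit.interval.ms property; the flush-then-commit cycle described above runs once per commit interval:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class CommitInterval {
        public static void main(String[] args) {
            // A smaller commit interval shrinks the window of records
            // re-processed after a failure, at the cost of more frequent
            // flushes of state stores and the producer.
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");   // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 10_000);
        }
    }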

Re: Can state stores function as a caching layer for persistent storage

2017-05-14 Thread João Peixoto
I think I now understand what Matthias meant when he said "If you use a global remote store, you would not need to back your changes in a changelog topic, as the store would not be lost in case of failure". I had the misconception that if a state store threw an exception during "flush", all messag…
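
A minimal sketch of what skipping the changelog looks like, using the newer StoreBuilder API (Kafka 1.0+) rather than the 0.10.x Stores.create() factory; the store name is a placeholder:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.state.KeyValueStore;
    import org.apache.kafka.streams.state.StoreBuilder;
    import org.apache.kafka.streams.state.Stores;

    public class NoChangelogStore {
        public static void main(String[] args) {
            // With logging disabled there is no changelog topic; durability is
            // assumed to come from the external system backing the store.
            StoreBuilder<KeyValueStore<String, Long>> storeBuilder =
                Stores.keyValueStoreBuilder(
                          Stores.persistentKeyValueStore("remote-backed-store"), // placeholder
                          Serdes.String(),
                          Serdes.Long())
                      .withLoggingDisabled();
        }
    }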

Kafka Streams reports: "The timestamp of the message is out of acceptable range"

2017-05-14 Thread Frank Lyaruu
Hi Kafka people... After a bit of tuning and an upgrade to Kafka 0.10.1.2, this error starts showing up and the whole thing kind of dies. 2017-05-14 18:51:52,342 | ERROR | hread-3-producer | RecordCollectorImpl | 91 - com.dexels.kafka.streams - 0.0.115.201705131415 | task [1_0] Error s…