Hello John,

Thank you for your response.

I am using a custom timestamp extractor. In the final stage I persist the
streamed data into a time-series database, and when I double-checked there,
I confirmed that the time calculation appears to be correct.

What about the warning message I mentioned? How is it possible that I am
getting that warning? I could not identify the root cause ... or should I
just ignore the message without taking any action?

On 11 May 2020 Mon at 18:33 John Roesler <vvcep...@apache.org> wrote:

> Hello Baki,
>
> It looks like option 2 is really what you want. The purpose of the time
> window stores is to allow deleting old data when you need to group by a
> time dimension, which naturally results in an infinite key space.
>
> If you don’t want to wait for the final result, can you just not add the
> suppression? Its only purpose is to _not_ emit any data until _after_ the
> grace period expires. Without it, streams will still respect the grace
> period by updating the result whenever there is late arriving data.
>
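For anyone following the thread, the semantics described above can be sketched in plain Java. This is only an illustration of the behavior (eager emission of updates, late records accepted within the grace period, records past it dropped), not the Streams API; all class and field names here are made up:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of a 60s tumbling window with a 10s grace period.
// Without suppression, every accepted record emits an updated count;
// records arriving after windowEnd + grace are dropped as too late.
public class GraceSketch {
    static final long WINDOW_MS = 60_000L;
    static final long GRACE_MS = 10_000L;

    final Map<Long, Long> counts = new HashMap<>(); // window start -> count
    final List<String> emitted = new ArrayList<>(); // what downstream sees
    long streamTime = Long.MIN_VALUE;               // max event time seen so far

    void process(long eventTimeMs) {
        streamTime = Math.max(streamTime, eventTimeMs);
        long windowStart = (eventTimeMs / WINDOW_MS) * WINDOW_MS;
        long windowEnd = windowStart + WINDOW_MS;
        if (streamTime > windowEnd + GRACE_MS) {
            return; // past the grace period: record dropped, no update emitted
        }
        long updated = counts.merge(windowStart, 1L, Long::sum);
        emitted.add("window@" + windowStart + "=" + updated); // eager emit
    }

    public static void main(String[] args) {
        GraceSketch s = new GraceSketch();
        s.process(5_000);    // window [0,60000): count 1, emitted
        s.process(65_000);   // window [60000,120000): count 1, emitted
        s.process(10_000);   // late, but within grace: count 2, emitted again
        s.process(200_000);  // advances stream time far past the first window
        s.process(10_000);   // now past grace for window [0,60000): dropped
        System.out.println(s.emitted);
        // [window@0=1, window@60000=1, window@0=2, window@180000=1]
    }
}
```

Note that the late record at t=10s produces a second, updated result for the first window; adding suppression would instead hold back everything until the window closes.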
> Lastly, that is a check for overflow. The timestamp is supposed to be a
> timestamp in milliseconds since the epoch. If you’re getting an overflow,
> it means your timestamps are from the far future. You might want to
> manually inspect them.
>
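To make the overflow concrete: the check guards against `windowEnd + grace` overflowing a signed 64-bit epoch-millisecond timestamp. A minimal, self-contained way to reproduce that condition in plain Java (the timestamps below are made up; `wouldOverflow` is a hypothetical helper, not a Streams method):

```java
// Demonstrates how an epoch-millisecond timestamp from the "far future"
// overflows a long when a window size or grace period is added to it.
public class OverflowCheck {
    // Returns true if windowEndMs + graceMs would overflow a long --
    // the same kind of condition the Streams warning protects against.
    static boolean wouldOverflow(long windowEndMs, long graceMs) {
        try {
            Math.addExact(windowEndMs, graceMs);
            return false;
        } catch (ArithmeticException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        long sane = 1_589_200_000_000L;      // roughly May 2020 in epoch ms
        long farFuture = Long.MAX_VALUE - 5; // e.g. a badly extracted timestamp
        System.out.println(wouldOverflow(sane, 60_000));      // false
        System.out.println(wouldOverflow(farFuture, 60_000)); // true
    }
}
```

So if this warning fires, printing the raw values returned by the custom timestamp extractor (as John suggests) should quickly show whether they are plausible epoch milliseconds.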
> I hope this helps,
> John
>
>
> On Sun, May 10, 2020, at 05:29, Baki Hayat wrote:
> > Hello Friends,
> >
> > I posted this on Stack Overflow, but I am also asking here.
> >
> > I have a couple of questions about window operations, the grace period,
> > and late events.
> >
> > Could you please look at my problem: grouping with the time field added
> > to the key, versus windowing plus grouping without the time field?
> >
> > Here is a detailed explanation:
> >
> >
> https://stackoverflow.com/questions/61680407/kafka-streams-groupby-late-event-persistentwindowstore-windowby-with-gra
> >
>