Hi,
All this makes perfect sense now and I could not be clearer on how
Kafka and Streams handle time.
So if we use event-time semantics (with or without a custom timestamp
extractor), getting out-of-order records is something expected, and one's
stream topology design should take care of it.
Right?
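To make "the topology should take care of out-of-order records" concrete, here is a minimal Python simulation of the idea (not the actual Kafka Streams API, which is Java): a tumbling event-time window with a grace period, where an out-of-order record is still folded into its window as long as stream time has not passed the window end plus the grace period, and is dropped as late otherwise. Window size and grace values are made up for illustration.

```python
from collections import defaultdict

WINDOW_SIZE = 10   # event-time units per tumbling window
GRACE = 5          # how long past a window's end records are still accepted

def process(records):
    """records: iterable of (event_time, value) in arrival order."""
    windows = defaultdict(int)  # window start -> running sum
    stream_time = 0             # highest event time seen so far
    dropped = []
    for ts, value in records:
        stream_time = max(stream_time, ts)
        start = (ts // WINDOW_SIZE) * WINDOW_SIZE
        window_end = start + WINDOW_SIZE
        if stream_time >= window_end + GRACE:
            dropped.append((ts, value))   # late: window already closed
        else:
            windows[start] += value       # in-order or merely out-of-order
    return dict(windows), dropped

# ts=8 arrives after ts=12 (out of order), but its window [0, 10) is still
# within the grace period, so it is counted:
result, dropped = process([(1, 1), (12, 1), (8, 1), (2, 1)])
```

Here `result` ends up as `{0: 3, 10: 1}` with nothing dropped; if a record for a window arrives only after stream time has moved past the window end plus the grace period, it lands in `dropped` instead.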
>> This really helped me understand that the grace period takes care of
>> out-of-order records rather than late-arriving records.
Well, the grace period defines if (or when) an out-of-order record is
considered late. Of course, per definition of "late",
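One way to read that definition, sketched in Python (the real logic lives inside Kafka Streams; the function name and values here are illustrative): a record is out-of-order whenever its timestamp is behind the current stream time, and it additionally becomes late once stream time has passed its window's end plus the grace period.

```python
def classify(record_ts, stream_time, window_end, grace):
    """Label a record relative to stream time and its window's close."""
    if record_ts >= stream_time:
        return "in-order"
    if stream_time < window_end + grace:
        return "out-of-order"   # behind stream time, but still accepted
    return "late"               # window closed; Streams drops the record

# A record for window [0, 10) with grace 5 is merely out-of-order while
# stream time is below 15, and late once stream time reaches 15.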
Hi,
This really helped me understand that the grace period takes care of
out-of-order records rather than late-arriving records.
I do however have a question: why would a record arrive out of order?
Doesn't Kafka guarantee the order?
If we use the default timestamp extractor then it will use the embedded record timestamp
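For context on that question: a timestamp extractor is the hook Kafka Streams uses to pull event time out of each record. The default reads the timestamp embedded in the Kafka record by the producer or broker, while a custom extractor can read a field of the payload instead. A rough Python sketch of the distinction (the real interface is Java's TimestampExtractor; the record layout and field names here are assumptions for illustration):

```python
import json

def default_extractor(record):
    # Default behavior: trust the timestamp embedded in the Kafka record.
    return record["timestamp"]

def payload_extractor(record):
    # Custom behavior: pull event time out of the (JSON) value, falling
    # back to the embedded timestamp if the field is missing.
    payload = json.loads(record["value"])
    return payload.get("event_time", record["timestamp"])

record = {"timestamp": 100, "value": json.dumps({"event_time": 42})}
bare = {"timestamp": 100, "value": json.dumps({})}
```

With a payload-based extractor, records that are in offset order within a partition can still be out of order in event time, which is exactly why the grace period exists.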
Sachin,
"late" data is data that arrives after the grace period and is not
processed but dropped for this reason. What you mean is "out-of-order
data", for which you can use the grace period to process it -- increasing
the window size would be a semantic change, while increasing the grace
period allows more out-of-order data to be processed without changing the
window semantics.
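That semantic difference can be seen in a small Python sketch (tumbling windows, illustrative sizes; not the Streams API): widening the window changes which window a record belongs to, and therefore the results, while widening only the grace period leaves window assignment untouched and merely rescues records that would otherwise be dropped as late.

```python
def assign_window(ts, window_size):
    """Tumbling windows: event time -> [start, start + size)."""
    start = (ts // window_size) * window_size
    return (start, start + window_size)

def accepted(ts, stream_time, window_size, grace):
    """A record is processed unless its window closed more than `grace` ago."""
    _, end = assign_window(ts, window_size)
    return stream_time < end + grace

# Widening the window moves ts=12 into a different window -- a semantic change:
# size 10 puts it in [10, 20), size 20 puts it in [0, 20).
a = assign_window(12, window_size=10)
b = assign_window(12, window_size=20)

# Widening only the grace period keeps the same windows; it just changes
# whether an out-of-order record for a past window is still processed.
```

So the grace period is the knob for tolerating out-of-order data; the window size is the knob for what the result means.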