When it comes to event time processing and watermarks, I believe that if you stick to the lower-level APIs, then the milliseconds assumption is indeed arbitrary, but in the higher-level APIs that assumption is baked in.
In other words, that rules out using Flink SQL, or things like TumblingEventTimeWindows.of(Time.milliseconds(10)). It might not be difficult to build something to work around those assumptions, but I haven't given it much thought. But if you stick to KeyedProcessFunction, it should be fine.

Best,
David

On Fri, Nov 25, 2022 at 5:32 AM Salva Alcántara <salcantara...@gmail.com> wrote:

> As mentioned in the docs
> <https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/dev/datastream/event-time/generating_watermarks/#introduction-to-watermark-strategies>:
>
> Attention: Both timestamps and watermarks are specified as milliseconds
> since the Java epoch of 1970-01-01T00:00:00Z.
>
> Are there any plans for supporting higher time resolutions?
>
> Also, internally, Flink uses the `long` type for the timestamps, so maybe
> the milliseconds assumption is arbitrary and things would actually work
> just fine for higher resolutions provided that they fit into the long type
> (???). I found this SO post:
>
> https://stackoverflow.com/questions/54402759/streaming-data-processing-and-nano-second-time-resolution
>
> which touches upon this but it's a bit old already and there seems to be
> no clear answer in the end. So maybe we could touch base on it.
>
> Regards,
>
> Salva
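
To make the KeyedProcessFunction point above concrete, here is a minimal, purely illustrative sketch (not something from the thread) that treats the long timestamps as microseconds since the epoch instead of milliseconds. The class name and sample data are made up. It deliberately uses WatermarkStrategy.forMonotonousTimestamps(), since forBoundedOutOfOrderness takes a Duration that is (as far as I can tell) converted to milliseconds internally; for out-of-order data under a non-millisecond convention you would write your own WatermarkGenerator.

// Hypothetical sketch: interpret Flink's long timestamps as MICROseconds since epoch.
// This only stays consistent on the lower-level APIs (KeyedProcessFunction, raw timers);
// ms-based helpers like TumblingEventTimeWindows.of(Time.milliseconds(...)) would break it.

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class MicrosecondTimestamps {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // f0 = key, f1 = event time in MICROseconds since the epoch (our own convention).
        env.fromElements(
                Tuple2.of("a", 1_700_000_000_000_123L),
                Tuple2.of("a", 1_700_000_000_000_456L))
            // forMonotonousTimestamps() essentially tracks the max long seen so far and
            // makes no assumption about the unit, so the microsecond interpretation survives.
            .assignTimestampsAndWatermarks(
                WatermarkStrategy.<Tuple2<String, Long>>forMonotonousTimestamps()
                    .withTimestampAssigner((event, previousTs) -> event.f1))
            .keyBy(event -> event.f0)
            .process(new KeyedProcessFunction<String, Tuple2<String, Long>, String>() {
                @Override
                public void processElement(Tuple2<String, Long> value, Context ctx,
                                           Collector<String> out) {
                    // ctx.timestamp() and event-time timers are plain longs; nothing here
                    // cares whether they mean milliseconds or microseconds.
                    long ts = ctx.timestamp();
                    ctx.timerService().registerEventTimeTimer(ts + 1_000); // +1ms, expressed in "micros"
                }

                @Override
                public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) {
                    out.collect("timer fired at (micros) " + timestamp);
                }
            })
            .print();

        env.execute();
    }
}

The caveat, of course, is that you have to keep the convention yourself: every timestamp assigner, timer, and watermark generator in the job must agree on the unit, and none of the built-in ms-based utilities can be mixed in.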