Hi,
There are a few JIRA tickets that address this problem [1] [2].
Summary: the best execution strategy depends on the amount of data and on the window configuration.
Best, Fabian
[1] https://issues.apache.org/jira/browse/FLINK-7001
[2] https://issues.apache.org/jira/browse/FLINK-5387
Agree with Bowen on this note: you should probably use a more efficient way of handling the data in sliding windows, since every record is assigned to each overlapping sliding window by the window assigner and therefore costs extra memory.
BTW: since we are on this topic, I was wondering if there's any w
Hi Gabriel,
Yes, using the RocksDB state backend can relieve your RAM usage. I see a few issues with your job:
1) it's keeping track of 672 windows (28 x 24), which is a lot of data, so try to reduce the number of windows;
2) use reduce functions to incrementally aggregate state rather than buffering all elements in the window (see the sketch after this list).
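To illustrate point 2, here is a minimal sketch (the Transaction and ClientProfile types, the clientId keying, and the buildProfiles helper are hypothetical placeholders, not Gabriel's actual job). It uses an AggregateFunction, which serves the same incremental-aggregation purpose as a ReduceFunction: the window state holds one small accumulator per key and window instead of every buffered event.

import org.apache.flink.api.common.functions.AggregateFunction
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.SlidingEventTimeWindows
import org.apache.flink.streaming.api.windowing.time.Time

// Hypothetical event and profile types, only for illustration.
case class Transaction(clientId: String, amount: Double)
case class ClientProfile(clientId: String, count: Long, total: Double)

// Folds each event into one accumulator per key and window, so the window
// state holds a single ClientProfile instead of every buffered Transaction.
class ProfileAggregate extends AggregateFunction[Transaction, ClientProfile, ClientProfile] {
  override def createAccumulator(): ClientProfile = ClientProfile("", 0L, 0.0)
  override def add(t: Transaction, acc: ClientProfile): ClientProfile =
    ClientProfile(t.clientId, acc.count + 1, acc.total + t.amount)
  override def getResult(acc: ClientProfile): ClientProfile = acc
  override def merge(a: ClientProfile, b: ClientProfile): ClientProfile =
    ClientProfile(a.clientId, a.count + b.count, a.total + b.total)
}

// Hypothetical wiring, assuming `transactions` already has timestamps and
// watermarks assigned upstream.
def buildProfiles(transactions: DataStream[Transaction]): DataStream[ClientProfile] =
  transactions
    .keyBy(_.clientId)
    .window(SlidingEventTimeWindows.of(Time.days(28), Time.hours(1)))
    .aggregate(new ProfileAggregate)

For the RocksDB point, setting state.backend: rocksdb in flink-conf.yaml is one way to enable that backend, so the (now much smaller) accumulator state is kept by RocksDB on local disk rather than on the JVM heap.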
We use Flink to process transactional events. We created a job that aggregates information per client, day of week, and hour of day, building a profile as shown in the attached code.
val stream = env.addSource(consumer)
val result = stream
  .map(openTransaction => {
    val transac