Hi,

1) At the moment, state is kept on the JVM heap in a regular HashMap.

However, we have added an interface for pluggable state backends. State backends
store the operator state (Flink's built-in window operators are based on
operator state as well). A pull request that adds a RocksDB backend (which
spills state to disk) will be merged soon [1]. Another backend using Flink's
managed memory is planned.
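
To make this concrete, here is a minimal sketch of how keyed operator state is
used from a user function; the state backend then decides where that state
actually lives. The API names follow later Flink releases and may differ
slightly from the current snapshot, and the commented-out RocksDB line is only
illustrative until [1] is merged:

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class StateSketch {

  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // Default: state lives on the JVM heap. Once the RocksDB backend is in,
    // it could (hypothetically) be plugged in like this instead:
    // env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints"));

    env.fromElements(Tuple2.of("key", 1L), Tuple2.of("key", 2L))
        .keyBy(0)
        .flatMap(new RunningSum())
        .print();

    env.execute("state sketch");
  }

  // Keeps a per-key running sum in keyed state managed by the state backend.
  public static class RunningSum
      extends RichFlatMapFunction<Tuple2<String, Long>, Tuple2<String, Long>> {

    private transient ValueState<Long> sum;

    @Override
    public void open(Configuration parameters) {
      sum = getRuntimeContext().getState(
          new ValueStateDescriptor<>("sum", Long.class));
    }

    @Override
    public void flatMap(Tuple2<String, Long> in, Collector<Tuple2<String, Long>> out)
        throws Exception {
      Long current = sum.value();               // null on first access for a key
      long newSum = (current == null ? 0L : current) + in.f1;
      sum.update(newSum);
      out.collect(Tuple2.of(in.f0, newSum));
    }
  }
}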

2) I am not sure what you mean by triggering / scheduling a delayed event, but
here are a few pointers that might be helpful:
- Flink can handle late-arriving events. Check out the event-time feature [2].
- Flink's window triggers can be used to schedule window computations [3].
- You can implement a custom source function that emits / triggers events (see
the sketch after this list).
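
For the third point, here is a small sketch of what such a source could look
like, interpreting "delayed event" as holding each emission back by a fixed
interval. The class name, event format, and delay parameter are purely
illustrative:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

public class DelayedSourceSketch {

  // Emits one string event per interval until cancelled.
  public static class DelayedSource implements SourceFunction<String> {

    private volatile boolean running = true;
    private final long delayMillis;

    public DelayedSource(long delayMillis) {
      this.delayMillis = delayMillis;
    }

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
      long i = 0;
      while (running) {
        Thread.sleep(delayMillis);              // wait before the next event
        synchronized (ctx.getCheckpointLock()) { // emit under the checkpoint lock
          ctx.collect("delayed-event-" + i++);
        }
      }
    }

    @Override
    public void cancel() {
      running = false;
    }
  }

  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.addSource(new DelayedSource(1000L)).print();
    env.execute("delayed source sketch");
  }
}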

Best, Fabian

[1] https://github.com/apache/flink/pull/1562
[2] http://data-artisans.com/how-apache-flink-enables-new-streaming-applications-part-1/
[3] http://flink.apache.org/news/2015/12/04/Introducing-windows.html

2016-02-03 5:39 GMT+01:00 Soumya Simanta <soumya.sima...@gmail.com>:

> I'm getting started with Flink and have a very fundamental question.
>
> 1) Where does Flink capture/store intermediate state?
>
> For example, two streams of data have a common key. The streams can lag in
> time (seconds, hours, or even days). My understanding is that Flink somehow
> needs to store the data from the first (faster) stream so that it can match
> and join the data with the second (slower) stream.
>
> 2) Is there a mechanism to trigger/schedule a delayed event in Flink?
>
> Thanks
> -Soumya