If you use RocksDB, you will not run into OutOfMemory errors.
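For reference, a minimal sketch of switching a job to the RocksDB state
backend (the checkpoint URI below is just a placeholder, and it assumes the
flink-statebackend-rocksdb dependency is on the classpath):

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    // org.apache.flink.contrib.streaming.state.RocksDBStateBackend keeps keyed state
    // in embedded RocksDB instances on local disk instead of the JVM heap; the URI
    // only says where checkpoints of that state are written
    env.setStateBackend(new RocksDBStateBackend("file:///tmp/flink-checkpoints"));
    env.enableCheckpointing(60000);  // snapshot the state every 60 seconds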
On Wed, Aug 31, 2016 at 6:34 PM, Fabian Hueske wrote:
Hi Vinaj,
if you use user-defined state, you have to manually clear it.
Otherwise, it will stay in the state backend (heap or RocksDB) until the
job goes down (planned or due to an OOM error).
This is especially important to keep in mind when using keyed state.
If you have an unbounded, evolving key space
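To make the clearing part concrete, here is a rough sketch of user-defined
keyed state that is released with clear() once a key is finished (the class
name and the (key, endOfSession) tuple layout are made up for illustration):

    import org.apache.flink.api.common.functions.RichFlatMapFunction;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.util.Collector;

    public class SessionCounter extends RichFlatMapFunction<Tuple2<String, Boolean>, Long> {

        // one entry per key, kept in the configured state backend (heap or RocksDB)
        private transient ValueState<Long> count;

        @Override
        public void open(Configuration parameters) {
            count = getRuntimeContext().getState(
                    new ValueStateDescriptor<Long>("count", Long.class, 0L));
        }

        @Override
        public void flatMap(Tuple2<String, Boolean> event, Collector<Long> out) throws Exception {
            long newCount = count.value() + 1;
            if (event.f1) {                // true marks the end of this key's session
                out.collect(newCount);
                count.clear();             // without this, the entry stays until the job goes down
            } else {
                count.update(newCount);
            }
        }
    }

You would apply it on a keyed stream, e.g. stream.keyBy(0).flatMap(new
SessionCounter()); without the clear() call, every key that ever appears
keeps its entry in the backend.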
Hi Stephan,
Just wanted to jump into this discussion regarding state.
So do you mean that if we maintain user-defined state (for non-window
operators) and do not clear it explicitly, the data for that key remains in
RocksDB?
What happens in the case of a checkpoint? I read in the documen
In streaming, memory is mainly needed for state (key/value state). The
exact representation depends on the chosen StateBackend.
State is explicitly released: for windows, state is cleaned up
automatically (firing / expiry); for user-defined state, keys have to be
explicitly cleared (clear() method)
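For contrast, a small sketch of the window case, where Flink drops the
per-window state on its own once the window fires / expires (env as in a
normal job, Time is org.apache.flink.streaming.api.windowing.time.Time, the
names and numbers are arbitrary):

    env.fromElements(Tuple2.of("user-1", 1L), Tuple2.of("user-2", 1L))
       .keyBy(0)                       // window state is scoped per key
       .timeWindow(Time.minutes(5))    // tumbling 5-minute windows
       .sum(1)                         // Flink keeps the running sum as window state
       .print();
    // this window state is cleaned up automatically when the window fires / expires;
    // no clear() call is needed, unlike with user-defined state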
As per the docs, in Batch mode, dynamic memory allocation is avoided by storing
messages being processed in ByteBuffers via Unsafe methods.
Couldn't find any docs describing mem mgmt in Streaming mode. So...
- Am wondering if this is also the case with Streaming?
- If so, how does Flink dete