512mb is just too small for a TaskManager. You would need to either
increase it, or decrease the other memory components (which currently use
default values).
The 64mb Total Flink Memory comes from the 512mb Total Process Memory minus
192mb minimum JVM Overhead and 256mb default JVM Metaspace.
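To make the arithmetic explicit, here is a minimal sketch of that derivation, assuming Flink's default settings (JVM overhead fraction 0.1 clamped to a 192mb minimum and 1gb maximum, and a 256mb default JVM Metaspace):

```python
def total_flink_memory(total_process_mb: int) -> int:
    """Derive Total Flink Memory from Total Process Memory using Flink's defaults."""
    # JVM Overhead: 0.1 of total process size, clamped to [192mb, 1024mb] (defaults)
    jvm_overhead = min(max(int(total_process_mb * 0.1), 192), 1024)
    # JVM Metaspace: 256mb by default (taskmanager.memory.jvm-metaspace.size)
    jvm_metaspace = 256
    return total_process_mb - jvm_overhead - jvm_metaspace

# 512mb total process: 0.1 * 512 = ~51mb, clamped up to the 192mb minimum,
# so 512 - 192 - 256 leaves only 64mb of Total Flink Memory.
print(total_flink_memory(512))
```

This shows why 512mb leaves so little room: the fixed minimums for overhead and metaspace dominate, so either the total must grow or those components must be configured smaller.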
Best,
Hi John,
May I know which Flink version you are trying?
On Thu, 23 Jun 2022 at 3:43 AM, John Tipper wrote:
> Hi all,
>
> I'm wanting to run quite a number of PyFlink jobs on Kubernetes, where the
> amount of state and number of events being processed is small and therefore
> I'd like t
Hi all,
I'm wanting to run quite a number of PyFlink jobs on Kubernetes, where the
amount of state and number of events being processed is small and therefore I'd
like to use as little memory in my clusters as possible so I can bin pack most
efficiently. I'm running a Flink cluster per job. I'm
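For a small per-job cluster like this, one possible starting point is to shrink the fixed memory components explicitly rather than relying on defaults. The fragment below is illustrative only; the option names are standard Flink configuration keys, but the values are assumptions to experiment with, not recommended defaults:

```yaml
# flink-conf.yaml fragment for a small, single-slot TaskManager (example values)
taskmanager.memory.process.size: 1024m
taskmanager.memory.jvm-metaspace.size: 128m
taskmanager.memory.jvm-overhead.min: 128m
taskmanager.memory.managed.size: 64m
taskmanager.numberOfTaskSlots: 1
```

Lowering metaspace and the overhead minimum frees memory that would otherwise be reserved, but going too low can cause metaspace OOMs, so the values need to be validated against the actual jobs.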
Hi Tim,
Referencing a blog post from Ververica:
"When you choose RocksDB as your state backend, your state lives as a
serialized byte-string in either the off-heap memory or the local disk."
It also contains many tune config options you can consider.[1]
Best,
Vino
[1]: https://www.ververica.com
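To connect this to the memory-sizing question: in recent Flink versions RocksDB can be bounded by Flink's managed memory, so the managed-memory budget is not wasted when RocksDB is the state backend. A hedged sketch of the relevant options (names come from the Flink configuration reference; the values are examples, and the 0.4 fraction is the current default, which you could lower for jobs with little state):

```yaml
# illustrative flink-conf.yaml fragment
state.backend: rocksdb
# Let RocksDB's block cache and write buffers draw from Flink managed memory
state.backend.rocksdb.memory.managed: true
# Fraction of Total Flink Memory reserved as managed memory (0.4 is the default)
taskmanager.memory.managed.fraction: 0.4
```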
For streaming jobs that use RocksDB, my understanding is that state is
allocated off-heap via RocksDB.
If this is true, then does it still make sense to leave 70% (default
taskmanager.memory.fraction) of the heap for Flink Managed memory, given that
it is likely not being used for state? Or am I mi