Hi!
We are running a highly available Flink cluster in standalone mode with ZooKeeper,
with two JobManagers and five TaskManagers.
When the JobManager is killed, the standby JobManager takes over, but the job
is also restarted.
Is this the default behavior, and can we avoid job restarts (for JobManager
failovers)?
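
A restart on JobManager failover is expected with ZooKeeper HA: the standby
that wins leader election recovers the job graph from the HA store and
resubmits the job, which resumes from the latest completed checkpoint rather
than continuing in place. Below is a minimal sketch of a job prepared for such
a failover; the flink-conf.yaml keys in the comments are real, but the quorum,
storage path, source, and 10-second checkpoint interval are illustrative
assumptions, not the cluster described above.

import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class HaRecoverySketch {
    public static void main(String[] args) throws Exception {
        // Cluster-side HA settings live in flink-conf.yaml on every node;
        // the values here are placeholders:
        //   high-availability: zookeeper
        //   high-availability.zookeeper.quorum: zk1:2181,zk2:2181,zk3:2181
        //   high-availability.storageDir: hdfs:///flink/ha/

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // With checkpointing enabled, the job restarted by the new leader
        // resumes from the last completed checkpoint instead of from scratch.
        env.enableCheckpointing(10_000L, CheckpointingMode.EXACTLY_ONCE); // 10 s is illustrative

        env.socketTextStream("localhost", 9999) // placeholder source
           .map(String::toUpperCase)
           .print();

        env.execute("ha-recovery-sketch");
    }
}

With this in place the failover still restarts the job, but the recovery cost
is bounded by the checkpoint interval rather than a full reprocessing.
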
Dear community,
We have a Flink job that does some parsing, a join, and a window.
When we increase the load, CPU usage rises gradually with the throughput, but
at around 65% CPU there is suddenly a jump to 98%.
The job starts experiencing backpressure and becomes unstable (increasing
latency, growing memory usage).
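
For reference, here is a minimal sketch of the parse / join / window shape
described above; the class, sources, record format, and 10-second window are
illustrative assumptions rather than the original job. The windowed join is a
common saturation point in this shape, since its keyed state and serialization
cost grow with the input rate.

import org.apache.flink.api.common.functions.JoinFunction;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class ParseJoinWindowSketch {

    // Parsing stage: per-record CPU cost that scales roughly linearly
    // with throughput, matching the gradual part of the CPU curve.
    static final MapFunction<String, Tuple2<String, Long>> PARSE =
            new MapFunction<String, Tuple2<String, Long>>() {
                @Override
                public Tuple2<String, Long> map(String line) {
                    String[] f = line.split(",");
                    return Tuple2.of(f[0], Long.parseLong(f[1]));
                }
            };

    static final KeySelector<Tuple2<String, Long>, String> BY_KEY =
            new KeySelector<Tuple2<String, Long>, String>() {
                @Override
                public String getKey(Tuple2<String, Long> t) {
                    return t.f0;
                }
            };

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder sources; the real job reads from its own inputs.
        DataStream<Tuple2<String, Long>> a = env.socketTextStream("localhost", 9000).map(PARSE);
        DataStream<Tuple2<String, Long>> b = env.socketTextStream("localhost", 9001).map(PARSE);

        // Windowed join: buffers records per key until the window fires, so
        // state size and (de)serialization work grow with the input rate.
        // When this stage can no longer keep up, network buffers fill and
        // backpressure stalls the upstream parsing tasks.
        a.join(b)
         .where(BY_KEY)
         .equalTo(BY_KEY)
         .window(TumblingProcessingTimeWindows.of(Time.seconds(10))) // window size is illustrative
         .apply(new JoinFunction<Tuple2<String, Long>, Tuple2<String, Long>, String>() {
             @Override
             public String join(Tuple2<String, Long> left, Tuple2<String, Long> right) {
                 return left.f0 + ":" + (left.f1 + right.f1);
             }
         })
         .print();

        env.execute("parse-join-window-sketch");
    }
}
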
I am benchmarking stream processing frameworks as part of my research
work at Ghent University.
The frameworks included at this time are, in no particular order, Spark,
Flink, Kafka (Streams), Storm (Trident), and Drizzle. Any pointers to
previous work or relevant benchmarks would be appreciated.
Best regards,
Giselle van Dongen