How does Flink use memory? When we run a job on larger datasets, it throws OOM 
exceptions partway through. We're using the DataSet API. Shouldn't Flink be 
spilling from disk to disk? We work around this by using fewer slots, but it 
seems unintuitive that I need to change these settings, given that Flink != 
Spark. Why isn't Flink's memory usage constant? Why can't I run any size job 
with a single task and a single slot and have it succeed, with the only cost 
being that it takes much longer to run?
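To make the workaround concrete, here is a rough sketch of the kind of DataSet 
job we mean, with parallelism forced to 1 so it needs only one slot. The paths, 
job name, and word-count logic are placeholders, not our actual job; the real 
slot count is set via taskmanager.numberOfTaskSlots in flink-conf.yaml.

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

public class SingleSlotBatchJob {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Workaround: run the whole job as a single parallel task so it only
        // occupies one slot (we also lower taskmanager.numberOfTaskSlots in
        // flink-conf.yaml so each slot gets a larger share of the heap).
        env.setParallelism(1);

        // Placeholder input path.
        DataSet<String> lines = env.readTextFile("hdfs:///path/to/large/input");

        // Placeholder logic standing in for our real transformations.
        DataSet<Tuple2<String, Integer>> counts = lines
            .flatMap(new Tokenizer())
            .groupBy(0)
            .sum(1);

        // Placeholder output path.
        counts.writeAsCsv("hdfs:///path/to/output");
        env.execute("single-slot batch job");
    }

    // Simple tokenizer standing in for the actual job logic.
    public static final class Tokenizer implements FlatMapFunction<String, Tuple2<String, Integer>> {
        @Override
        public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
            for (String token : line.toLowerCase().split("\\W+")) {
                if (!token.isEmpty()) {
                    out.collect(new Tuple2<>(token, 1));
                }
            }
        }
    }
}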

Thanks
Billy

