Thanks for your attention. I have streaming jobs and use the RocksDB state backend. Do you mean that I don't need to worry about memory management even if the allocated memory is not released after cancellation?
Kind regards,
Nastaran Motavalli

________________________________
From: Kostas Kloudas <k.klou...@data-artisans.com>
Sent: Thursday, November 29, 2018 1:22:12 PM
To: Nastaran Motavali
Cc: user
Subject: Re: Memory is not released after job cancellation

Hi Nastaran,

Can you specify what further information you need? From the discussion that you posted:

1) If you run batch jobs, Flink does its own memory management (off-heap, so it is not subject to the JVM's GC). Although you do not see the memory being de-allocated when you cancel the job, that memory remains available to other jobs, and you do not have to de-allocate it manually.

2) If you use streaming, you should use one of the provided state backends, and they will do the memory management for you (see [1] and [2]).

Cheers,
Kostas

[1] https://ci.apache.org/projects/flink/flink-docs-release-1.6/ops/state/state_backends.html
[2] https://ci.apache.org/projects/flink/flink-docs-release-1.6/ops/state/large_state_tuning.html

On Wed, Nov 28, 2018 at 7:11 AM Nastaran Motavali <n.motav...@son.ir> wrote:

Hi,
I have a simple Java application that uses Flink 1.6.2. When I run the jar file, I can see that the job consumes part of the host's main memory. If I cancel the job, the consumed memory is not released until I stop the whole cluster. How can I release the memory after cancellation? I have followed the conversation around this issue in the mailing list archive [1] but still need more explanation.

[1] http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Need-help-to-understand-memory-consumption-td23821.html#a23926

Kind regards,
Nastaran Motavalli
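[Editor's note: for readers following this thread, a minimal sketch of selecting the RocksDB state backend cluster-wide via flink-conf.yaml, as suggested in the docs linked above. The checkpoint path is a placeholder, not a value taken from this thread:]

```yaml
# flink-conf.yaml (Flink 1.6) -- select RocksDB so large keyed state lives
# off-heap / on local disk instead of on the JVM heap
state.backend: rocksdb

# Durable location for checkpointed state; placeholder path, adjust to your setup
state.checkpoints.dir: hdfs:///flink/checkpoints
```

The same choice can also be made per job in code, e.g. `env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints"))` in the Flink 1.6 Java API.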