Hi.

I would like to know if there are any guidelines/recommendations for the
memory overhead we need to account for when taking a savepoint to S3. We
use the RocksDB state backend.

We run our job on relatively small task managers, and we see memory
problems when the state size per task manager gets "big" (we haven't found
the rule of thumb yet). We can remove the problem by reducing the state
size or increasing parallelism, and jobs with no state or small state don't
have any problems.
So I see a relation between the memory allocated to a task manager and the
state it can handle.
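For context, the kind of configuration we are tuning looks roughly like the
sketch below. It is not our exact setup; the key names assume the unified
memory model of Flink 1.10+, and the bucket name is a placeholder:

```yaml
# Sketch only - assumes Flink 1.10+ memory configuration keys.
taskmanager.memory.process.size: 4g        # total memory for the TM process
taskmanager.memory.managed.fraction: 0.4   # share given to managed memory (used by RocksDB)
state.backend: rocksdb
state.backend.rocksdb.memory.managed: true # keep RocksDB within the managed memory budget
state.savepoints.dir: s3://my-bucket/savepoints  # my-bucket is a placeholder
```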

So does anyone have any recommendations/best practices for this, and can
someone explain why taking a savepoint requires extra memory?

Thanks in advance

Lasse Nedergaard
