Hi all,

I have a Flink job running version 1.10.2; it simply reads from a Kafka
topic with 96 partitions and writes to another Kafka topic.

It runs in Kubernetes with 1 JM (not in HA mode) and 12 task managers,
each with 4 slots.
Checkpoints persist snapshots to Azure Blob Storage, with a checkpoint
interval of 3 seconds, a 10-second timeout, and a minimum pause of 1
second.
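For reference, the checkpoint settings described above roughly correspond to the following Flink 1.10 configuration (a sketch; the `wasbs://...` path and job setup are placeholders, not my actual values):

```java
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint every 3 seconds.
        env.enableCheckpointing(3_000L);
        // Abort a checkpoint if it takes longer than 10 seconds.
        env.getCheckpointConfig().setCheckpointTimeout(10_000L);
        // Wait at least 1 second between the end of one checkpoint
        // and the start of the next.
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(1_000L);

        // Snapshots go to Azure Blob Storage (placeholder URI).
        env.setStateBackend(
                new FsStateBackend("wasbs://container@account.blob.core.windows.net/checkpoints"));

        // ... Kafka source -> Kafka sink pipeline here ...
    }
}
```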

I observed that the job manager pod's memory usage grows over time; any
hints on why this is the case? The JM's memory usage is also significantly
higher than when checkpointing is disabled.
[image: image.png (JM pod memory usage growing over time)]

Thanks a lot!
Eleanore
