What you can always do to reduce pressure on the heap from large state is to use
the RocksDB state backend. Then all the state will be kept on disk.
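As a minimal sketch (assuming the flink-statebackend-rocksdb dependency is on the
classpath; the checkpoint URI is only a placeholder), switching to RocksDB looks
roughly like this:

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbBackendSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Keep keyed/operator state in RocksDB on local disk instead of the JVM heap.
        // The checkpoint URI is a placeholder for wherever checkpoints should go.
        env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints"));

        // ... define sources, transformations and sinks, then call env.execute(...)
    }
}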
On Thu, May 25, 2017 at 7:20 AM, Fritz Budiyanto wrote:
Hi Robert,
Yes, lots of buffering in the heap. The state backend is the JobManager heap
backend, and I disabled checkpointing to debug this issue.
I found a bug in my app during restart. On a restart, the app reads Kafka
from the earliest offset, with days of data, and it gets a burst of stream data.
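A hedged sketch of one way to avoid such a full replay from the earliest offset on
a restart, using the standard Kafka consumer properties with the FlinkKafkaConsumer09
connector of that era (the broker address, group id, and topic are placeholders):

import java.util.Properties;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.DiscardingSink;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class KafkaRestartOffsetSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "my-consumer-group");       // placeholder
        // When no committed offsets exist for the group, start from the latest
        // offset instead of replaying days of data from the earliest offset.
        props.setProperty("auto.offset.reset", "latest");

        env.addSource(new FlinkKafkaConsumer09<>("my-topic",      // placeholder topic
                        new SimpleStringSchema(), props))
           .addSink(new DiscardingSink<String>());                // placeholder sink

        env.execute("kafka-restart-offset-sketch");
    }
}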
Hi Fritz,
What are you doing on your TaskManager?
Are you keeping many objects on the heap in your application?
Are you using any of Flink's window operators? If so, which state backend are
you using?
On Tue, May 23, 2017 at 7:02 AM, Fritz Budiyanto wrote:
Hi Robert,
Thanks Robert, I’ll start using the logger.
I didn’t pay attention to whether the error occurred when I accessed the log from
the JobManager.
I will do that in my next test.
Does anyone have any suggestions on how to debug an out-of-memory exception on the
Flink JM/TM?
—
Fritz
Hi Fritz,
The TaskManagers are not buffering all stdout for the web interface (at
least I'm not aware of that). Did the error occur when accessing the log
from the JobManager?
Flink's web front end lazily loads the logs from the TaskManagers.
The suggested method for logging is to use slf4j.
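For example, a minimal sketch of slf4j logging inside a user function (the class
name is just a placeholder):

import org.apache.flink.api.common.functions.MapFunction;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Placeholder user function illustrating slf4j-based logging instead of print().
public class LoggingMapper implements MapFunction<String, String> {
    private static final Logger LOG = LoggerFactory.getLogger(LoggingMapper.class);

    @Override
    public String map(String value) {
        // Records go through the configured log4j/logback appenders, which can
        // rotate and cap files, instead of being written to stdout one by one.
        LOG.debug("Processing record: {}", value);
        return value;
    }
}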
Hi,
I noticed that when I enabled DataStreamSink’s print() for debugging (admittedly
excessive printing), it caused a Java heap out-of-memory error.
Is the TaskManager possibly buffering all stdout for the web interface? I
haven’t spent time debugging it, but I wonder if this is expected when printing
this heavily.
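For illustration, a minimal sketch of the print() debugging pattern described above
(the elements are placeholders for the real stream); print() adds a sink that writes
every record to the TaskManager's stdout, which can become very large on a
high-volume stream:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class PrintDebugSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Every element is written to the TaskManager's stdout; on a high-volume
        // stream this produces an enormous amount of output.
        env.fromElements("a", "b", "c")   // placeholder for the actual stream
           .print();

        env.execute("print-debug-sketch");
    }
}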