I tried to increase the memory:

flink run -m yarn-cluster -p 16 -ys 1 -ytm 200000 -yjm 8096 myjar

and still got the same OOM exception.
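
For reference, what those legacy (pre-1.10) YARN CLI flags request:

# -p 16        default parallelism
# -ys 1        task slots per TaskManager
# -ytm 200000  TaskManager container memory, in MB (~195 GB)
# -yjm 8096    JobManager container memory, in MB

Note that 200000 MB is more than the 62.9 GB of physical memory the UI below reports per machine, so YARN may cap or reject a container that large; if it does, the bigger -ytm never actually takes effect.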

My SQL is like:

select
  id,
  hop_end(created_at, interval '30' second, interval '1' minute),
  sum(field), ...  -- 20 of these sums
from table
group by id, hop(created_at, interval '30' second, interval '1' minute)
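
Since the error is "Hash window aggregate map OOM" from the blink batch planner, the limit that matters may be Flink's managed memory (which batch operators like the hash aggregate use) rather than raw container size. A rough flink-conf.yaml sketch, assuming the pre-1.10 memory model; table.exec.resource.hash-agg.memory is the blink planner's per-operator hash-agg budget and may not exist under that key in every version, so treat it as an assumption:

# Illustrative values only -- size these to your containers.
taskmanager.heap.size: 50000m
# Managed memory carved out of the TaskManager for batch operators
# (hash aggregate/join, sort) under the pre-1.10 memory model:
taskmanager.memory.size: 30000m
# Assumed blink-planner option: memory reserved per hash-agg operator:
table.exec.resource.hash-agg.memory: 1gb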



On Wed, Jan 22, 2020 at 3:19 PM Fanbin Bu <fanbin...@coinbase.com> wrote:

> Hi,
>
> I have a batch job using the blink planner and got the following error. I
> was able to run the same job successfully with Flink 1.8 on YARN.
>
> I set conf as:
> taskmanager.heap.size: 50000m
>
> and the Flink UI shows the TaskManager as:
>
> Last Heartbeat: 20-01-22 14:56:25
> ID: container_1579720108062_0018_01_000020
> Data Port: 41029
> Free Slots / All Slots: 1 / 0
> CPU Cores: 16
> Physical Memory: 62.9 GB
> JVM Heap Size: 10.3 GB
> Flink Managed Memory: 24.9 GB
>
> Any suggestions on how to move forward?
> Thanks,
> Fanbin
>
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>     at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146)
>     at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:259)
>     ... 25 more
>
> Caused by: java.io.IOException: Hash window aggregate map OOM.
>     at HashWinAggWithKeys$534.processElement(Unknown Source)
>     at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processElement(StreamOneInputProcessor.java:164)
>     at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:143)
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.performDefaultAction(StreamTask.java:276)
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.run(StreamTask.java:298)
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:403)
>     at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
>     at java.lang.Thread.run(Thread.java:748)
>
