Hi jieluo,
I think you need to check the network connectivity between the Flink
client (your local machine) and the JobManager REST endpoint (on the YARN
cluster). Usually you can use "telnet" to test this. Moreover, it will be
easier for others to help with debugging if you could provide the
JobManager logs.
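Besides telnet, a small Java check along these lines would also work (just
a sketch; the host and port below are placeholders, use the REST address
and port printed when the YARN session starts):

import java.net.InetSocketAddress;
import java.net.Socket;

// Minimal reachability check for the JobManager REST endpoint.
// Replace host/port with the values reported for your YARN session.
public class RestEndpointCheck {
    public static void main(String[] args) {
        String host = "jobmanager-host"; // placeholder
        int port = 8081;                 // placeholder; YARN often assigns a random port
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 5000);
            System.out.println("Reachable: " + host + ":" + port);
        } catch (java.io.IOException e) {
            System.out.println("Not reachable: " + e.getMessage());
        }
    }
}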
BTW, thanks, it looks good. Nice job!
Best,
Jingsong Lee
On Fri, Apr 10, 2020 at 5:56 PM wangl...@geekplus.com.cn <
wangl...@geekplus.com.cn> wrote:
>
> https://issues.apache.org/jira/browse/FLINK-17086
>
> It is my first time creating a Flink JIRA issue.
> Just point it out and correct it if I write s
Hi Mitch,
Have you configured 'state.backend.rocksdb.memory.managed'? The default
should be 'true', and if you have set it to 'false', the RocksDB memory
footprint might grow beyond the configured task manager memory size.
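Just to make the key explicit, here is a small sketch that loads the
effective configuration (from FLINK_CONF_DIR / conf/flink-conf.yaml) and
prints the value of that option; in practice the setting itself belongs in
flink-conf.yaml, not in code:

import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.GlobalConfiguration;

// Sketch: print the effective value of the RocksDB managed-memory switch.
// The default is "true"; setting it to "false" lets RocksDB size itself
// outside of Flink's managed memory budget.
public class ManagedMemoryCheck {
    public static void main(String[] args) {
        Configuration conf = GlobalConfiguration.loadConfiguration();
        System.out.println("state.backend.rocksdb.memory.managed = "
                + conf.getString("state.backend.rocksdb.memory.managed", "true"));
    }
}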
Besides, do your UDFs use any native memory by any chance? E.g., launch
anot
Hi Anuj,
It seems that you are using Hadoop version 2.4.1. I think "L" is not
supported in this version. Could you upgrade your Hadoop version to 2.8 and
have a try?
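If you want to double-check which Hadoop version the client actually picks
up, a quick sketch (assuming hadoop-common, or the flink-shaded-hadoop
uber jar, is on the classpath):

import org.apache.hadoop.util.VersionInfo;

// Prints the Hadoop version that is actually on the classpath.
public class HadoopVersionCheck {
    public static void main(String[] args) {
        System.out.println("Hadoop version on classpath: " + VersionInfo.getVersion());
    }
}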
If your YARN cluster version is 2.8+, then you can directly remove the
flink-shaded-hadoop jar from your lib directory. Otherwise,
This is a stateful stream join application using the RocksDB state backend
with incremental checkpoints enabled.
- JVM heap usage is pretty similar. The main difference is in non-heap
usage, probably related to RocksDB state.
- Also observed the cgroup memory failure count showing up in the 1
Hey Jark, thank you so much for confirming!
Out of curiosity: even though I agree that having too many config classes
is confusing, not knowing when the config values are used during pipeline
setup is also pretty confusing. For example, the name of 'TableConfig'
makes me feel it's global to th
Hi,
I have a quick question about the "EventTimeTrigger". I notice it's based
on TimeWindow instead of Window. Is there any reason why this cannot apply
to GlobalWindow?
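To make the question concrete, what I have in mind is roughly a trigger
like the sketch below (untested; the class name and the explicit fire
timestamp are just for illustration, not anything that exists in Flink):

import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.GlobalWindow;

// Hypothetical event-time trigger over GlobalWindow: fires once the
// watermark passes a user-supplied timestamp (GlobalWindow's maxTimestamp
// is Long.MAX_VALUE, so the fire time comes from the constructor instead).
public class GlobalEventTimeTrigger extends Trigger<Object, GlobalWindow> {

    private final long fireTimestamp;

    public GlobalEventTimeTrigger(long fireTimestamp) {
        this.fireTimestamp = fireTimestamp;
    }

    @Override
    public TriggerResult onElement(Object element, long timestamp,
                                   GlobalWindow window, TriggerContext ctx) {
        if (ctx.getCurrentWatermark() >= fireTimestamp) {
            return TriggerResult.FIRE;
        }
        ctx.registerEventTimeTimer(fireTimestamp);
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onEventTime(long time, GlobalWindow window, TriggerContext ctx) {
        return time == fireTimestamp ? TriggerResult.FIRE : TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, GlobalWindow window, TriggerContext ctx) {
        return TriggerResult.CONTINUE;
    }

    @Override
    public void clear(GlobalWindow window, TriggerContext ctx) {
        ctx.deleteEventTimeTimer(fireTimestamp);
    }
}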
Thanks,
Jiawei
Thank you for the quick response.
Your answer relates to the checkpoint folder that contains the _metadata
file, e.g. chk-1829.
What about the "shared" folder? How do I know which files in that folder
are still relevant and which are left over from a failed checkpoint? They
are not directly rel