Hi Till,
Currently I'm doing as you suggested for testing purposes, so it's not a big deal
at the moment.
But I hope this will be supported in Flink sooner or later, as we're going to
adopt Flink on a very large cluster in which GPU resources are very scarce.
Anyway, thank you for your attention.
Hi Till,
> It could be as simple as giving Flink the right role via
> `mesos.resourcemanager.framework.role`.
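If I understand correctly, that would amount to something like the following in
flink-conf.yaml, where "gpu-role" is just a placeholder for whatever role the GPU
resources would be reserved under on our cluster:

    # flink-conf.yaml (sketch)
    mesos.resourcemanager.framework.role: gpu-role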
The problem seems more related to resources (GPUs) than framework roles.
The cluster I'm working on consists of servers all equipped with GPUs.
When DC/OS is installed, a GPU-specific co
Hi Stefan,
I don't want to introduce Hadoop into my Flink clusters.
I think the exception is not that serious, as it is shown only when the log level
is set to DEBUG.
Do I have to set HADOOP_HOME to use Flink on DC/OS?
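If it is required, I assume it would just be a matter of something like the
following in the environment of the Flink processes (the path is only a
placeholder):

    export HADOOP_HOME=/path/to/hadoop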
Regards,
Dongwon
> On Jan 3, 2018, at 7:34 PM, Stefan Richter wrote:
>
> Hi,
>
> did you see
Hi,
did you see this exception right at the head of your log?
java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
    at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:265)
    at org.apache.hadoop.util.Shell.<clinit>(Shell.java:290)
    at org.apache.hadoop.util.StringU
Hi,
I'm trying to launch a Flink cluster on top of DC/OS, but TaskManagers are not
launched at all.
What I do to launch a Flink cluster is as follows:
- Click "flink" in "Catalog" on the left panel of the DC/OS GUI.
- Click "Run service" without modifying the configuration (just for testing
purposes).
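For what it's worth, I believe the CLI equivalent (assuming the DC/OS CLI is
installed and pointed at this cluster) is roughly:

    # install the Flink package from the DC/OS Catalog with default options
    dcos package install flink

In my case, though, I went through the GUI steps above.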