Hi Stefan,

I don't want to introduce Hadoop into my Flink clusters.
I think the exception is not that serious, as it is shown only when the log
level is set to DEBUG.
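(If the stack trace is just noise in the DEBUG output, one option would be to raise the log level for Hadoop's classes only. This is a sketch assuming Flink's default log4j.properties setup, not something I have verified on dc/os:)

```
# Keep Hadoop's classes at INFO even when the root logger is DEBUG
log4j.logger.org.apache.hadoop=INFO
```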

Do I have to set HADOOP_HOME to use Flink on dc/os?
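(For reference, in case it does turn out to be needed, here is a minimal sketch of setting it on the host running the JobManager. The path /opt/hadoop is only a placeholder, not a path from my setup:)

```shell
# Hypothetical sketch: point Hadoop's utilities at an existing installation.
# /opt/hadoop is a placeholder; substitute the actual install directory.
export HADOOP_HOME=/opt/hadoop
# Alternatively, the equivalent JVM property could be passed to Flink's JVMs:
# export FLINK_ENV_JAVA_OPTS="-Dhadoop.home.dir=/opt/hadoop"
echo "HADOOP_HOME is set to ${HADOOP_HOME}"
```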

Regards,
Dongwon

> On 2018. 1. 3. at 7:34 PM, Stefan Richter <s.rich...@data-artisans.com> wrote:
> 
> Hi,
> 
> did you see this exception right at the head of your log?
> 
> java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
>       at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:265)
>       at org.apache.hadoop.util.Shell.<clinit>(Shell.java:290)
>       at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
>       at org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:93)
>       at org.apache.hadoop.security.Groups.<init>(Groups.java:77)
>       at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
>       at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:255)
>       at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:232)
>       at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:718)
>       at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:703)
>       at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:605)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:498)
>       at org.apache.flink.runtime.util.EnvironmentInformation.getHadoopUser(EnvironmentInformation.java:96)
>       at org.apache.flink.runtime.util.EnvironmentInformation.logEnvironmentInfo(EnvironmentInformation.java:285)
>       at org.apache.flink.mesos.runtime.clusterframework.MesosApplicationMasterRunner.main(MesosApplicationMasterRunner.java:131)
> 
> I think you forgot to configure HADOOP_HOME properly. Does that solve
> your problem?
> 
> Best,
> Stefan
> 
> 
>> On 03.01.2018 at 07:12, 김동원 <eastcirc...@gmail.com 
>> <mailto:eastcirc...@gmail.com>> wrote:
>> 
>> Oops, I forgot to include files in the previous mail.
>> 
>> <Figure 3.png>
>> <Figure 2.png>
>> <Figure 1.png>
>> 
>> <log.txt>
>> 
>> 
>>> On 2018. 1. 3. at 3:10 PM, 김동원 <eastcirc...@gmail.com 
>>> <mailto:eastcirc...@gmail.com>> wrote:
>>> 
>>> Hi,
>>> 
>>> I'm trying to launch a Flink cluster on top of dc/os, but TaskManagers are 
>>> not launched at all.
>>> 
>>> What I do to launch a Flink cluster is as follows:
>>> - Click "flink" from "Catalog" on the left panel of dc/os GUI.
>>> - Click "Run service" without modifying any configuration, for testing 
>>> purposes (Figure 1).
>>> 
>>> Until now, everything seems okay as shown in Figure 2.
>>> However, Figure 3 shows that TaskManager has never been launched.
>>> 
>>> So I took a look at the JobManager log (see the attached "log.txt" for the 
>>> full log).
>>> LaunchCoordinator keeps emitting the same log messages while staying in the 
>>> "GatheringOffers" state, as follows:
>>> INFO  org.apache.flink.mesos.scheduler.LaunchCoordinator            - Processing 1 task(s) against 0 new offer(s) plus outstanding off$
>>> DEBUG com.netflix.fenzo.TaskScheduler                               - Found 0 VMs with non-zero offers to assign from
>>> INFO  org.apache.flink.mesos.scheduler.LaunchCoordinator            - Resources considered: (note: expired offers not deducted from be$
>>> DEBUG org.apache.flink.mesos.scheduler.LaunchCoordinator            - SchedulingResult{resultMap={}, failures={}, leasesAdded=0, lease$
>>> INFO  org.apache.flink.mesos.scheduler.LaunchCoordinator            - Waiting for more offers; 1 task(s) are not yet launched.
>>> (FYI, ConnectionMonitor is in its "ConnectedState" as you can see in the 
>>> full log file.)
>>> 
>>> Can anyone point out what's going wrong with my dc/os installation?
>>> Thank you for your attention. I'm really looking forward to running Flink 
>>> clusters on dc/os :-)
>>> 
>>> p.s. I tested whether dc/os is working correctly by using the following 
>>> script, and it works:
>>> {
>>>      "id": "simple-gpu-test",
>>>      "acceptedResourceRoles":["slave_public", "*"],
>>>      "cmd": "while [ true ] ; do nvidia-smi; sleep 5; done",
>>>      "cpus": 1,
>>>      "mem": 128,
>>>      "disk": 0,
>>>      "gpus": 1,
>>>      "instances": 8
>>> }
>>> 
>>> 
>> 
> 
