Okay, now it runs on my Hadoop.
How can I start my Flink job? And where must the jar file be saved, in
HDFS or as a local file?
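For reference, once a YARN session is up (see the `yarn-session.sh` command quoted below), a job is typically submitted with the `flink` CLI from the machine running the client; this is a hedged sketch, and the jar path, main class, and argument paths are placeholders, not values from this thread:

```shell
# Submit a job to the running Flink-on-YARN session.
# The jar is read from the LOCAL file system; the Flink client
# ships it to the cluster, so it does not need to live in HDFS.
# (Jar name, main class, and paths are illustrative placeholders.)
./bin/flink run -c com.example.KMeansJob /home/cloudera/my-flink-job.jar \
    hdfs:///path/to/input hdfs:///path/to/output
```

The input/output arguments, by contrast, usually point into HDFS, as discussed further down in the thread.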

2015-06-04 16:31 GMT+02:00 Robert Metzger <rmetz...@apache.org>:

> Yes, you have to run these commands in the command line of the Cloudera VM.
>
> On Thu, Jun 4, 2015 at 4:28 PM, Pa Rö <paul.roewer1...@googlemail.com>
> wrote:
>
>> You mean run these commands in a terminal/shell, and not define a Hue job?
>>
>> 2015-06-04 16:25 GMT+02:00 Robert Metzger <rmetz...@apache.org>:
>>
>>> It should certainly be possible to run Flink on a Cloudera Live VM.
>>>
>>> I think these are the commands you need to execute:
>>>
>>> wget
>>> http://stratosphere-bin.s3-website-us-east-1.amazonaws.com/flink-0.9-SNAPSHOT-bin-hadoop2.tgz
>>> tar xvzf flink-0.9-SNAPSHOT-bin-hadoop2.tgz
>>> cd flink-0.9-SNAPSHOT/
>>> export HADOOP_CONF_DIR=/usr/lib/hadoop/etc/hadoop/
>>> ./bin/yarn-session.sh -n 1 -jm 1024 -tm 1024
>>>
>>> If that is not working for you, please post the exact error message you
>>> are getting and I can help you to get it to run.
>>>
>>>
>>> On Thu, Jun 4, 2015 at 4:18 PM, Pa Rö <paul.roewer1...@googlemail.com>
>>> wrote:
>>>
>>>> Hi Robert,
>>>>
>>>> I think the problem is the Hue API; I had the same problem with the
>>>> Spark submit script, but the new Hue release has a Spark submit API.
>>>>
>>>> I asked the group about the same problem with Spark, but got no reply.
>>>>
>>>> I want to test my app on a local cluster before I run it on the big
>>>> cluster; for that I use Cloudera Live. Maybe there is another way to
>>>> test Flink on a local cluster VM?
>>>>
>>>> 2015-06-04 16:12 GMT+02:00 Robert Metzger <rmetz...@apache.org>:
>>>>
>>>>> Hi Paul,
>>>>>
>>>>> why did running Flink from the regular scripts not work for you?
>>>>>
>>>>> I'm not an expert on Hue, I would recommend asking in the Hue user
>>>>> forum / mailing list:
>>>>> https://groups.google.com/a/cloudera.org/forum/#!forum/hue-user.
>>>>>
>>>>> On Thu, Jun 4, 2015 at 4:09 PM, Pa Rö <paul.roewer1...@googlemail.com>
>>>>> wrote:
>>>>>
>>>>>> Thanks.
>>>>>> Now I want to run my app on a single-node Cloudera Live VM.
>>>>>> How can I define my Flink job with Hue?
>>>>>> I tried to run the Flink script from HDFS, but it doesn't work.
>>>>>>
>>>>>> Best regards,
>>>>>> Paul
>>>>>>
>>>>>> 2015-06-02 14:50 GMT+02:00 Robert Metzger <rmetz...@apache.org>:
>>>>>>
>>>>>>> I would recommend using HDFS.
>>>>>>> For that, you need to specify the paths like this:
>>>>>>> hdfs:///path/to/data.
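A minimal sketch of how such an `hdfs://` path is typically used inside a Flink program (assumed DataSet API of that era; the class name and paths are placeholders, not taken from this thread):

```java
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class ReadFromHdfs {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        // Paths on the cluster's HDFS are given with the hdfs:// scheme;
        // Flink resolves them via the Hadoop file system configuration.
        DataSet<String> lines = env.readTextFile("hdfs:///path/to/data");
        lines.writeAsText("hdfs:///path/to/output");
        env.execute("read from HDFS");
    }
}
```

With HADOOP_CONF_DIR set as shown earlier in the thread, the default file system from the cluster's core-site.xml is picked up, which is why a scheme-only path like `hdfs:///path/to/data` (no namenode host) works.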
>>>>>>>
>>>>>>> On Tue, Jun 2, 2015 at 2:48 PM, Pa Rö <
>>>>>>> paul.roewer1...@googlemail.com> wrote:
>>>>>>>
>>>>>>>> Nice.
>>>>>>>>
>>>>>>>> Which file system must I use for the cluster? java.io, hadoop.fs,
>>>>>>>> or Flink's?
>>>>>>>>
>>>>>>>> 2015-06-02 14:29 GMT+02:00 Robert Metzger <rmetz...@apache.org>:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>> you can start Flink on YARN on the Cloudera distribution.
>>>>>>>>>
>>>>>>>>> See here for more:
>>>>>>>>> http://ci.apache.org/projects/flink/flink-docs-master/setup/yarn_setup.html
>>>>>>>>>
>>>>>>>>> These are the commands you need to execute:
>>>>>>>>>
>>>>>>>>> wget
>>>>>>>>> http://stratosphere-bin.s3-website-us-east-1.amazonaws.com/flink-0.9-SNAPSHOT-bin-hadoop2.tgz
>>>>>>>>> tar xvzf flink-0.9-SNAPSHOT-bin-hadoop2.tgz
>>>>>>>>> cd flink-0.9-SNAPSHOT/
>>>>>>>>> ./bin/yarn-session.sh -n 4 -jm 1024 -tm 4096
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Jun 2, 2015 at 2:03 PM, Pa Rö <
>>>>>>>>> paul.roewer1...@googlemail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi community,
>>>>>>>>>>
>>>>>>>>>> I want to test my Flink k-means on a Hadoop cluster. I use the
>>>>>>>>>> Cloudera Live distribution. How can I run Flink on this cluster?
>>>>>>>>>> Maybe the Java dependencies alone are enough?
>>>>>>>>>>
>>>>>>>>>> Best regards,
>>>>>>>>>> Paul
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
