I am pointing to the dirs on my local machine; what I want is simply for my
jobs to be submitted to the remote YARN cluster.
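For context: with master set to yarn-client, the target cluster is determined
entirely by the Hadoop client configs under HADOOP_CONF_DIR, so the local
files just need to point at the remote ResourceManager. A minimal sketch of
the relevant yarn-site.xml entry (the hostname below is made up):

<!-- yarn-site.xml under HADOOP_CONF_DIR; hostname is hypothetical -->
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>rm.remote-cluster.example.com</value>
  </property>
</configuration>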
Thanks
On Wed, Nov 2, 2016 at 4:00 PM, Abhi Basu <9000r...@gmail.com> wrote:
I am assuming you are pointing to Hadoop/Spark on the remote host, right? Can
you not point the Hadoop conf and Spark dirs to the remote machine? Not sure
if this works, just suggesting; others may have tried.
On Wed, Nov 2, 2016 at 9:58 AM, Hyung Sung Shim wrote:
Hello.
You don't need to install Hadoop on your machine, but you do need a proper
version of Spark [0] to use spark-submit. Then set [1] SPARK_HOME to where
that Spark lives, set HADOOP_CONF_DIR, and set master to yarn-client for your
Spark interpreter in the interpreter menu.
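For example, a minimal sketch of those settings (the Spark path is a
placeholder; point it at wherever your Spark build actually lives):

# conf/zeppelin-env.sh
export SPARK_HOME=/usr/local/lib/spark                    # placeholder: local Spark matching the cluster version
export HADOOP_CONF_DIR=/usr/local/lib/hadoop/etc/hadoop   # Hadoop client configs for the target cluster
# then, in the interpreter menu, set the Spark interpreter's
# master property to yarn-client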
[0] http://spark.apache.or
I have only set HADOOP_CONF_DIR, as follows (my Hadoop conf files are in
/usr/local/lib/hadoop/etc/hadoop/, e.g.
/usr/local/lib/hadoop/etc/hadoop/yarn-site.xml):
#!/bin/bash
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See
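The rest of the file is presumably the stock license header and commented-out
template; going by the description above, the one line added, assuming the
standard export form, would be:

export HADOOP_CONF_DIR=/usr/local/lib/hadoop/etc/hadoop/   # conf dir given above; export form assumed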
Could you share your zeppelin-env.sh?
On Wed, Nov 2, 2016 at 4:57 PM, Benoit Hanotte wrote:
Thanks for your reply.
I have tried setting it in zeppelin-env.sh, but it doesn't work any better.
Thanks
On Wed, Nov 2, 2016 at 2:13 AM, Hyung Sung Shim wrote:
Hello.
You should set HADOOP_CONF_DIR to /usr/local/lib/hadoop/etc/hadoop/ in
conf/zeppelin-env.sh.
Thanks.
On Wed, Nov 2, 2016 at 5:07 AM, Benoit Hanotte wrote:
Hello,
I'd like to use Zeppelin on my local computer and have it run Spark executors
on a remote YARN cluster, since I can't easily install Zeppelin on the
cluster gateway.
I installed the correct Hadoop version (2.6) and compiled Zeppelin (from the
master branch) as follows:
mvn clean pac
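A typical invocation for a Spark-on-YARN build of that era would look
something like the following (profile names assumed from Zeppelin's build
instructions at the time; adjust the Spark/Hadoop versions to your cluster):

# assumed profiles -- check against your Zeppelin branch's README
mvn clean package -DskipTests -Pspark-1.6 -Phadoop-2.6 -Pyarn -Ppyspark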