Setting HADOOP_CONF_DIR as a global env variable affects the whole Zeppelin
instance, while defining it in an interpreter setting affects only that
interpreter.

And any property name in all capital letters in the interpreter setting is treated as an env variable.
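For example (interpreter names and paths below are just placeholders, point them at wherever each cluster's yarn-site.xml / core-site.xml actually live), with one Spark interpreter per cluster you would add a HADOOP_CONF_DIR property in each interpreter's setting:

  spark_cluster_a:  HADOOP_CONF_DIR=/etc/hadoop/conf.cluster-a
  spark_cluster_b:  HADOOP_CONF_DIR=/etc/hadoop/conf.cluster-b
  spark_cluster_c:  HADOOP_CONF_DIR=/etc/hadoop/conf.cluster-c

Because the property name is all upper-case, Zeppelin exports it as an env variable for that interpreter's process, so each interpreter submits to its own YARN cluster.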

On Sat, Jul 1, 2017 at 3:20 AM, Serega Sheypak <serega.shey...@gmail.com> wrote:

> Hi, thanks for your reply. How should I set this variable?
> I'm looking at the Spark interpreter config UI. It doesn't allow me to set an
> env variable.
>
> https://zeppelin.apache.org/docs/latest/interpreter/spark.html#1-export-spark_home
> says that HADOOP_CONF_DIR should be set once for the whole Zeppelin instance.
>
> What am I missing?
> Thanks!
>
> 2017-06-30 16:43 GMT+02:00 Jeff Zhang <zjf...@gmail.com>:
>
>>
> >> Right, create three Spark interpreters, one for each of your 3 YARN clusters.
>>
>>
>>
> >> On Fri, Jun 30, 2017 at 10:33 PM, Serega Sheypak <serega.shey...@gmail.com> wrote:
>>
>>> Hi, thanks for your reply!
>>> What do you mean by that?
>>> I can have only one env variable HADOOP_CONF_DIR...
> >>> And how can a user pick which env to run against?
>>>
> >>> Or do you mean I have to create three Spark interpreters, each with its
> >>> own HADOOP_CONF_DIR pointing to a single cluster's config?
>>>
>>> 2017-06-30 16:21 GMT+02:00 Jeff Zhang <zjf...@gmail.com>:
>>>
>>>>
> >>>> Try setting HADOOP_CONF_DIR for each YARN conf in the interpreter setting.
>>>>
> >>>> On Fri, Jun 30, 2017 at 10:11 PM, Serega Sheypak <serega.shey...@gmail.com> wrote:
>>>>
> >>>>> Hi, I have several different Hadoop clusters, each with its own
> >>>>> YARN.
> >>>>> Is it possible to configure a single Zeppelin instance to work with
> >>>>> different clusters?
> >>>>> I want to run Spark on cluster A if the data is there. Right now my
> >>>>> Zeppelin runs on a single cluster and pulls data from remote clusters,
> >>>>> which is inefficient. Zeppelin can easily access any HDFS cluster, but
> >>>>> what about YARN?
>>>>>
>>>>> What are the correct approaches to solve the problem?
>>>>>
>>>>
>>>
>
