If you don't have a Spark cluster, then you don't need to do 2).
After 1), the %spark.r interpreter should work.

If you do have a Spark cluster, export the SPARK_HOME environment variable in
conf/zeppelin-env.sh; that should be enough to make it work.
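For example, run something like this from your Zeppelin installation
directory (the /opt/spark path below is just a placeholder for wherever
your Spark is actually installed):

    cp conf/zeppelin-env.sh.template conf/zeppelin-env.sh
    cp conf/zeppelin-site.xml.template conf/zeppelin-site.xml
    # point Zeppelin at your Spark installation (adjust the path)
    echo 'export SPARK_HOME=/opt/spark' >> conf/zeppelin-env.sh
    # restart Zeppelin so it picks up the new configuration
    bin/zeppelin-daemon.sh restart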

Hope this helps.

Thanks,
moon

On Fri, Mar 17, 2017 at 2:41 PM Shanmukha Sreenivas Potti <
shanmu...@utexas.edu> wrote:

> Hello Group!
>
> I'm trying to leverage various R functions in Zeppelin but am having
> challenges in figuring out how to configure the Spark interpreter/
> SPARK_HOME variable.
>
> I'm going by this
> <https://zeppelin.apache.org/docs/0.6.0/interpreter/r.html> documentation
> for now, and specifically have issues with the following steps:
>
>    1. To run R code and visualize plots in Apache Zeppelin, you will need R
>    on your master node (or your dev laptop).
>
>    For Centos: yum install R R-devel libcurl-devel openssl-devel
>    For Ubuntu: apt-get install r-base
>
> How do I figure out the master node and install the R interpreter? Novice
> user here.
>
>
> 2. To run Zeppelin with the R Interpreter, the SPARK_HOME environment
> variable must be set. The best way to do this is by editing
> conf/zeppelin-env.sh. If it is not set, the R Interpreter will not be able
> to interface with Spark. You should also copy
> conf/zeppelin-site.xml.template to conf/zeppelin-site.xml. That will ensure
> that Zeppelin sees the R Interpreter the first time it starts up.
>
> No idea as to how to do step 2 either.
>
> Appreciate your help. If there is a video that you can point me to that
> talks about these steps, that would be fantabulous.
>
> Thanks! Shan
>
> --
> Shan S. Potti,
>
>
