The easiest way to figure out what your environment needs is:

1. Run SPARK_HOME/bin/sparkR in your shell and make sure it works on the
same host where Zeppelin is going to run.
2. Try %spark.r in Zeppelin with SPARK_HOME configured. It should
normally work once step 1 works; otherwise, take a look at the error
message and the error log for more information.
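The two steps above can be sketched in the shell; the Spark path below is a hypothetical example and will differ per install:

```shell
# Step 1: confirm SparkR itself starts on the same host that will run Zeppelin.
# /usr/lib/spark is a placeholder; point SPARK_HOME at your actual Spark install.
export SPARK_HOME=/usr/lib/spark
"$SPARK_HOME/bin/sparkR"

# Step 2: in a Zeppelin notebook, run a %spark.r paragraph, e.g.:
#   %spark.r
#   print(R.version.string)
# If it fails, check the interpreter log files under Zeppelin's logs directory.
```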

Thanks,
moon

On Sat, Mar 18, 2017 at 8:47 PM Shanmukha Sreenivas Potti <
shanmu...@utexas.edu> wrote:

> I'm not 100% sure as I haven't set it up, but it looks like I'm using
> Zeppelin preconfigured with Spark, and I've also taken a snapshot of the
> Spark interpreter configuration that I have access to/am using in Zeppelin.
> This interpreter comes with SQL and Python integration, and I'm figuring out
> how to get R working.
>
> On Sat, Mar 18, 2017 at 8:06 PM, moon soo Lee <m...@apache.org> wrote:
>
> AFAIK, Amazon EMR service has an option that launches Zeppelin
> (preconfigured) with Spark. Do you use Zeppelin provided by EMR or are you
> setting up Zeppelin separately?
>
> Thanks,
> moon
>
> On Sat, Mar 18, 2017 at 4:13 PM Shanmukha Sreenivas Potti <
> shanmu...@utexas.edu> wrote:
>
> Hi Moon,
>
> Thanks for responding. Exporting SPARK_HOME is exactly where I have a
> problem. I'm using a Zeppelin notebook with Spark on EMR clusters from an AWS
> account in the cloud. I'm not the master account holder for that AWS account;
> I'm guessing I have a client account with limited access. Can I
> still do it?
>
> If yes, can you explain where and how I should do that shell scripting to
> export the variable? Can I do this in the notebook itself by starting the
> paragraph with %sh, or do I need to do something else?
> If you can share any video, that would be great. I should mention that
> I'm a novice user just getting to explore Big Data.
>
> Sharing more info for better context.
>
> Here's my AWS account detail type:
> assumed-role/ConduitAccessClientRole-DO-NOT-DELETE/shan
>
> Spark Interpreter config in Zeppelin:
> [image: image.png]
>
> Thanks for your help.
>
> Shan
>
> On Sat, Mar 18, 2017 at 8:39 AM, moon soo Lee <m...@apache.org> wrote:
>
> If you don't have a Spark cluster, then you don't need to do 2).
> After 1), the %spark.r interpreter should work.
>
> If you do have a Spark cluster, export the SPARK_HOME env variable in
> conf/zeppelin-env.sh; that should be enough to make it work.
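A minimal sketch of that export, assuming a Spark install at /usr/lib/spark (a hypothetical path; adjust it for your cluster):

```shell
# Append to conf/zeppelin-env.sh (create it from conf/zeppelin-env.sh.template
# if it does not exist), then restart Zeppelin so the interpreter picks it up.
export SPARK_HOME=/usr/lib/spark
```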
>
> Hope this helps.
>
> Thanks,
> moon
>
> On Fri, Mar 17, 2017 at 2:41 PM Shanmukha Sreenivas Potti <
> shanmu...@utexas.edu> wrote:
>
> Hello Group!
>
> I'm trying to leverage various R functions in Zeppelin but am having
> challenges in figuring out how to configure the Spark interpreter/
> SPARK_HOME variable.
>
> I'm going by this
> <https://zeppelin.apache.org/docs/0.6.0/interpreter/r.html> documentation
> for now, and specifically have issues with the following steps:
>
>    1. To run R code and visualize plots in Apache Zeppelin, you will need R
>       on your master node (or your dev laptop).
>
>       For CentOS: yum install R R-devel libcurl-devel openssl-devel
>       For Ubuntu: apt-get install r-base
>
> How do I figure out the master node and install the R interpreter? Novice
> user here.
>
>
> 2. To run Zeppelin with the R Interpreter, the SPARK_HOME environment
> variable must be set. The best way to do this is by editing
> conf/zeppelin-env.sh. If it is not set, the R Interpreter will not be able
> to interface with Spark. You should also copy
> conf/zeppelin-site.xml.template to conf/zeppelin-site.xml. That will ensure
> that Zeppelin sees the R Interpreter the first time it starts up.
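Step 2 from the docs can be sketched as follows, assuming Zeppelin is unpacked at /opt/zeppelin (a hypothetical path; the Spark location is also a placeholder):

```shell
cd /opt/zeppelin

# Create zeppelin-env.sh from its template if it does not exist yet,
# then point SPARK_HOME at the Spark installation.
cp -n conf/zeppelin-env.sh.template conf/zeppelin-env.sh
echo 'export SPARK_HOME=/usr/lib/spark' >> conf/zeppelin-env.sh

# Copy the site template so Zeppelin registers the R interpreter on first start.
cp conf/zeppelin-site.xml.template conf/zeppelin-site.xml

# Restart Zeppelin to pick up the changes.
bin/zeppelin-daemon.sh restart
```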
>
> No idea as to how to do step 2 either.
>
> Appreciate your help. If there is a video that you can point me to that
> talks about these steps, that would be fantabulous.
>
> Thanks! Shan
>
> --
> Shan S. Potti,
>
>
>
>
> --
> Shan S. Potti,
> 737-333-1952
> https://www.linkedin.com/in/shanmukhasreenivas
>
>
> --
> Shan S. Potti,
> 737-333-1952
> https://www.linkedin.com/in/shanmukhasreenivas
>
