Hi 

I came across the following link about the Livy interpreter. The following is a
question that falls under both Zeppelin and Livy from an integration
standpoint.

https://zeppelin.apache.org/docs/0.6.0-SNAPSHOT/interpreter/livy.html

I am looking for a workflow along the following lines to create, test, and
deploy Spark jobs in production:

Step 1. A user logs in to Zeppelin.
Step 2. Authors a notebook with all the required steps after doing
interactive data analysis, and publishes the notebook.
Step 3. Creates a Spark job from the published notebook, containing the code
that runs on the Spark cluster.
Step 4. Posts the Spark job to the Livy server.
Step 5. Finally, deploys it to the Spark cluster via the Livy server.
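From the Livy docs, steps 4 and 5 look like they would map to Livy's batch API (POST /batches). Here is a rough sketch of the request I have in mind; the host name and the HDFS path are hypothetical, and the job file would be the code exported from the notebook:

```python
import json

# Hypothetical Livy endpoint -- adjust for your cluster.
LIVY_URL = "http://livy-server:8998/batches"

def build_batch_request(url, app_file, name, conf=None):
    """Build the URL, headers, and JSON body for a Livy batch submission
    (POST /batches). `app_file` must already sit on HDFS or another
    filesystem visible to the cluster."""
    body = {"file": app_file, "name": name}
    if conf:
        body["conf"] = conf
    headers = {"Content-Type": "application/json"}
    return url, headers, json.dumps(body)

url, headers, body = build_batch_request(
    LIVY_URL,
    "hdfs:///jobs/notebook_job.py",   # hypothetical: code exported from the notebook
    "notebook-batch-job",
    conf={"spark.executor.memory": "2g"},
)
print(body)
```

That payload would then be POSTed to the Livy server, which launches the job on the Spark cluster.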

I see that we can create a batch session on the Livy server, but if we author
a notebook in Zeppelin using the Livy interpreter, how can we reuse the same
code that was used during the interactive session, so that we can deploy it to
the Spark cluster?

Or is Livy used only in the context of security and multi-tenancy, and only
for interactive sessions?
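For reference, my understanding of the interactive path the Livy interpreter takes is roughly the following: create a session via POST /sessions, then send each notebook paragraph as a statement. A minimal sketch (host name hypothetical):

```python
import json

# Hypothetical Livy host, for contrast with the /batches API above.
LIVY_HOST = "http://livy-server:8998"

def session_create_request(kind="pyspark"):
    """JSON body for POST /sessions -- starts an interactive session."""
    return json.dumps({"kind": kind})

def statement_request(session_id, code):
    """URL and JSON body for POST /sessions/{id}/statements -- runs one
    code snippet, which is how I understand Zeppelin's Livy interpreter
    executes each paragraph."""
    url = "%s/sessions/%d/statements" % (LIVY_HOST, session_id)
    return url, json.dumps({"code": code})

url, body = statement_request(0, "df.count()")
print(url, body)
```

So the code lives in per-paragraph statements rather than in a deployable artifact, which is why I am unsure how to carry it over to a batch job.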

Any help is appreciated.

Thanks
Kishore





--
View this message in context: 
http://apache-zeppelin-users-incubating-mailing-list.75479.x6.nabble.com/Zeppelin-Integration-with-Livy-Server-tp3272.html
Sent from the Apache Zeppelin Users (incubating) mailing list
archive at Nabble.com.