Our thought is to use Zeppelin to create the algorithms, leveraging its
interactive data analysis, and eventually deploy those algorithms in the
production environment.

In that process, after the algorithm is created and published, we can
create a job artifact and post it to a REST API (using either the Spark
Job Server or the Livy server) to deploy the algorithm to the production
Spark cluster and run it at a scheduled interval. Since Zeppelin may not
support creating the job artifact out of the box, we thought of extending
its capability to publish to a repository.
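As a rough illustration of the submission step, here is a minimal sketch of building the JSON body that Livy's batch REST API (POST /batches) expects for a packaged job artifact. The artifact path, class name, and host below are hypothetical placeholders, not names from any real deployment:

```python
import json

def build_livy_batch_payload(jar_path, class_name, args=None):
    """Build the JSON body Livy expects for a batch submission."""
    payload = {
        "file": jar_path,         # cluster-visible (e.g. HDFS) path to the job artifact
        "className": class_name,  # main class of the Spark job
    }
    if args:
        payload["args"] = args    # command-line arguments passed to the job
    return payload

# Hypothetical artifact and entry point, for illustration only.
payload = build_livy_batch_payload(
    "hdfs:///jobs/my-algorithm.jar",
    "com.example.MyAlgorithmJob",
    args=["--date", "2016-01-01"],
)

# The actual deployment would POST this body to the Livy server, e.g.:
#   POST http://<livy-host>:8998/batches
#   Content-Type: application/json
print(json.dumps(payload))
```

A scheduler (cron, Oozie, etc.) could then repeat this POST at the desired interval, which is one way to get the "scheduled interval" behavior described above.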

Let me know if you have any suggestions on how a notebook created during
interactive analysis can eventually be deployed in a production
environment.

Thanks
Kishore



--
View this message in context: 
http://apache-zeppelin-users-incubating-mailing-list.75479.x6.nabble.com/Zeppelin-Integration-with-Livy-Server-tp3272p3292.html
Sent from the Apache Zeppelin Users (incubating) mailing list archive at
Nabble.com.
