A few more follow-up questions, in case anyone has insights on them:
1. What is the recommended interpreter binding mode setting for Zeppelin
over Livy in a multi-user environment?
2. Is there a way to allow a user to restart an interpreter but restrict
them from changing interpreter configs?
3.
Hi, I'd like to discuss best practices for using Zeppelin in a
multi-user environment. There are several naive approaches; I've tried each
for at least a couple of months, and not a single one worked:
*All users on one Zeppelin.*
- One Spark context - people really break sc, and when they are all i
Hello guys
Has anyone attempted to run Zeppelin inside Docker that connects to a real
Spark cluster running on the host machine?
I've spent a day trying to make it work, but without success: the job never
completes because the driver program (the Zeppelin Spark shell) is listening
on an internal IP address
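For context, the usual culprit is that the driver advertises a
container-internal address that the workers on the host cannot route back to.
A rough sketch of the knobs most often involved (placeholder values, not from
this thread), set on the Spark interpreter:

  spark.driver.host   <an address of the Docker host that the workers can reach>
  spark.driver.port   44000

together with making that port reachable from the cluster, e.g. by publishing
it (docker run -p 44000:44000) or running the container with --net=host.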
Can you share your Dockerfile? Also, are you using docker-compose or just
docker-machine?
On Fri, Aug 5, 2016 at 8:46 PM, DuyHai Doan wrote:
> Hello guys
>
> Has anyone attempted to run Zeppelin inside Docker that connects to a real
> Spark cluster running on the host machine ?
>
> I've spend a d
Egor,
Running a scale-out system like Spark with multiple users is always tricky.
Operating systems are designed to let multiple users share a single machine,
but for “big data” a single user requires the use of several machines, which
is the exact opposite. Having said that, I would suggest the following:
>
> - Use spark driver in “cluster mode” where driver runs on a worker instead
> of the node running Z
Even without the driver, Z is a heavy process. You need a lot of RAM to keep
big results from jobs. And most of all, Zeppelin 0.5.6 does not support
cluster mode, and I'm not ready to move to 0.6.
2016
Put your big results somewhere else, not in Z’s memory?
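As an illustration of that idea, a minimal %spark sketch; bigDf stands for the
user's DataFrame, and the column names and path below are placeholders, not
from this thread:

%spark
import org.apache.spark.sql.functions._

// Aggregate the raw data down before it ever reaches Zeppelin's JVM,
// and keep the full result on shared storage rather than in the notebook.
val report = bigDf
  .groupBy("country", "day", "product")
  .agg(sum("revenue").as("revenue"))
report.write.mode("overwrite").parquet("hdfs:///tmp/reports/latest")

// Pull back only a small, already-aggregated slice for plotting.
val slice = report.filter(col("day") >= "2016-07-26").limit(10000)
z.show(slice)   // Zeppelin's table/chart display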
> On Aug 5, 2016, at 12:26 PM, Egor Pahomov wrote:
>
> - Use spark driver in “cluster mode” where driver runs on a worker instead of
> the node running Z
>
> Even without driver Z is heavy process. You need a lot of RAM to keep big
> resu
No, using Docker compose is easy; what I want is:
1) Zeppelin running inside a Docker container
2) Spark deployed in stand-alone mode, running somewhere on bare-metal /
cloud / Docker but in another network
In this scenario, it's very hard to make the Zeppelin client that is living
inside the Doc
Just curious, what is the use case for running Z inside Docker?
On Friday, August 5, 2016, DuyHai Doan wrote:
> No, using Docker compose is easy, what I want is:
>
> 1) Zeppelin running inside a Docker container
> 2) Spark deployed in stand-alone mode, running somewhere on bare-metal /
> cloud / Dock
Not exactly what you want, but I have an example here:
https://github.com/lresende/docker-systemml-notebook
You should be able to accomplish what you want by playing with --link, which I
did in the example below (but just with YARN and HDFS):
https://github.com/lresende/docker-yarn-cluster
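As an illustration, a minimal sketch of the --link wiring; the image and
container names are placeholders, not taken from the repositories above:

  docker run -d --name spark-master my-spark-image
  docker run -d -p 8080:8080 --link spark-master:spark-master my-zeppelin-image

With the link in place, the Zeppelin container can resolve the name
spark-master, so the Spark interpreter's master can be set to
spark://spark-master:7077.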
On Fri, Au
I heard that the IBM guys were supposedly able to do this earlier this year.
Unfortunately, these folks may have been part of the continued layoffs at
IBM.
On Fri, Aug 5, 2016 at 11:46 AM, DuyHai Doan wrote:
> Hello guys
>
> Has anyone attempted to run Zeppelin inside Docker that connects to a re
On Fri, Aug 5, 2016 at 11:18 PM, Luciano Resende
wrote:
> Not exactly what you want, but I have an example here :
> https://github.com/lresende/docker-systemml-notebook
>
> You should be able to accomplish what you want playing with --link which I
> did in the example below (but just with Yarn an
I need to build a chart covering 10 days, all 200 countries, and several
products, broken down by some dimensions. I would need at least 4-6 GB per
Zeppelin for it.
2016-08-05 12:31 GMT-07:00 Mohit Jaggi :
> put your big results somewhere else not in Z’s memory?
>
> On Aug 5, 2016, at 12:26 PM, Egor Pahomov wr
One Zeppelin per user, in a Mesos container on a datanode-type server, works
fine for me. An Ansible script configures each instance with the user's
specifics and launches it in Marathon. A service-discovery script (a basic
shell script) updates an Apache server with basic auth and routes each user
to his instance. Mesos als
I was having some issues with this too.
Did you try using %sh to ping the machine running Spark, e.g. to check for a
networking issue?
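For example, something like the following %sh paragraph, where the host name
and port are placeholders for wherever the Spark master actually lives:

%sh
# basic reachability check from inside the Zeppelin host/container
ping -c 3 spark-master-host
# check that the standalone master port is open
nc -zv spark-master-host 7077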
Trevor Grant
Data Scientist
https://github.com/rawkintrevo
http://stackexchange.com/users/3002022/rawkintrevo
http://trevorgrant.org
*"Fortunate is he, who is able to
Hi Chen,
1. Currently, the interpreter binding mode is more about whether each note
will have a separate interpreter session (scoped/isolated) or not (shared),
rather than being per user.
https://issues.apache.org/jira/browse/ZEPPELIN-1210 will bring per-user
interpreter sessions on the Zeppelin server side, bu