Hi York,

Thanks for the question.

1. How you install Zeppelin is up to you and your use case. You can either
run a single Zeppelin instance, configure authentication, and let many
users log in, or let each user run their own Zeppelin instance.
I see both setups among users, and it really depends on your environment.

2. Since the 0.6.0 release, Zeppelin ships a Python interpreter. You can try
%python.
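
For example, a plain Python paragraph (no Spark involved) looks like the
sketch below; the variables are just an illustration:

%python
# runs in the plain Python interpreter, no SparkContext involved
import math

radii = [1.0, 2.5, 4.0]
areas = [math.pi * r ** 2 for r in radii]
print(areas)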

3. You can run Zeppelin on Windows by running bin/zeppelin.cmd.
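
For example, assuming you unpacked the binary distribution into a folder
named zeppelin-0.6.0-bin-all (the folder name is just an example):

cd zeppelin-0.6.0-bin-all
bin\zeppelin.cmd

Then open http://localhost:8080 in your browser (8080 is the default port).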

4. Interpreters can share data through the resource pool. You can think of
the resource pool as a distributed map shared across all interpreters.
Although every interpreter can access the resource pool, only a few
interpreters expose an API that lets users access it directly.

SparkInterpreter, PysparkInterpreter, and SparkRInterpreter are the
interpreters that expose the resource pool API to users. You can access the
resource pool via the z.get() and z.put() API. Check [1].
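
As a quick sketch, here are two %pyspark paragraphs sharing a value through
the resource pool (the key name and value are just examples):

%pyspark
# put a value into the resource pool under a key of your choice
z.put("trainingRowCount", 12345)

%pyspark
# read it back later, from this or another Spark-family interpreter
n = z.get("trainingRowCount")
print(n)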


Thanks,
moon

[1]
http://zeppelin.apache.org/docs/latest/interpreter/spark.html#object-exchange

On Sat, Sep 3, 2016 at 6:45 PM York Huang <yorkhuang.d...@gmail.com> wrote:

> Hi,
>
> I am new to Zeppelin and have a few questions.
> 1. Should I install Zeppelin on a Hadoop edge node and have every user
> access it from a browser? Or should every user install their own Zeppelin?
>
> 2. How do I run standard Python without using spark?
>
> 3. Can I install Zeppelin on Windows server?
>
> 4. Is it possible to share data between interpreters ?
>
> Thanks
>
> York
>
> Sent from my iPhone
