There is really basic usage [1] in the Spark interpreter docs; [2] is the
corresponding source code.
And this diagram [3] from the Helium proposal [4] might help a little bit more.
Currently, DistributedResourcePool [5] implements the ResourcePool API.
DistributedResourcePool keeps the data in the individual interpreter…
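To make the idea concrete, here is a toy sketch (my own illustration, not Zeppelin's actual code from [5]): each interpreter keeps its resources in a local pool, and a get() that misses locally falls back to asking the other interpreters' pools. The class names and the remote-lookup loop are assumptions for illustration only; in Zeppelin the remote lookup would go over the interpreter RPC layer.

```python
# Toy sketch of the DistributedResourcePool idea: data stays local to each
# interpreter, and a miss on get() is resolved by searching remote pools.
# (Illustrative only -- not Zeppelin's implementation.)

class LocalPool:
    def __init__(self):
        self._store = {}

    def put(self, name, value):
        self._store[name] = value

    def get(self, name):
        return self._store.get(name)

class DistributedPool(LocalPool):
    def __init__(self, remotes=()):
        super().__init__()
        self.remotes = list(remotes)   # stand-ins for remote interpreters' pools

    def get(self, name):
        value = super().get(name)      # try the local pool first
        if value is not None:
            return value
        for remote in self.remotes:    # in Zeppelin this would be an RPC call
            value = remote.get(name)
            if value is not None:
                return value
        return None

spark_pool = LocalPool()
spark_pool.put("model_accuracy", 0.93)          # produced in one interpreter
shell_pool = DistributedPool(remotes=[spark_pool])
print(shell_pool.get("model_accuracy"))          # 0.93, fetched remotely
```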
Moon: Are z.context and the resource pool different or the same?
> On Jun 16, 2016, at 10:23 PM, Cameron McBride wrote:
Appreciate the quick responses!
The patch (#pr836) does sound like a good solution, as it will generically
address this issue (and makes use of the existing ResourcePool framework).
The example of this working with the shell is also nice. Obviously my vote
is to include it in 0.6.0. :)
Are there…
ResourcePool [1] is a data store that all the different types of
interpreters can access.
It allows exchanging data between paragraphs/notebooks/interpreters.
However, not all interpreters use the ResourcePool at the moment.
For example, the Spark interpreter family provides an API to use it (spark,
pyspark, sp…
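In the Spark interpreter family that API is exposed through the ZeppelinContext object `z`, via z.put(name, value) and z.get(name). Outside a running notebook there is no real `z`, so the snippet below uses a stub class of my own to mimic the call pattern; the resource name and value are made up for the example.

```python
# Stub standing in for Zeppelin's ZeppelinContext (`z`), showing the
# put/get call pattern used to exchange data through the ResourcePool.
# (The stub is my own; only the put/get shape mirrors the real API.)

class ZeppelinContextStub:
    _pool = {}  # shared store, playing the role of the ResourcePool

    def put(self, name, value):
        self._pool[name] = value

    def get(self, name):
        return self._pool.get(name)

z = ZeppelinContextStub()

# In a %spark paragraph you would write:   z.put("training_rows", 1000)
z.put("training_rows", 1000)

# In a later %pyspark paragraph:           z.get("training_rows")
print(z.get("training_rows"))  # 1000
```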
I believe z.context() is the only way to share data between interpreters.
Within an interpreter, data is usually available across paragraphs, perhaps
even across notebooks, as I guess Zeppelin will create a single interpreter
in the backend unless you somehow make it use separate ones.
Greetings,
I'm brand new to Zeppelin, and this notebook technology looks great. I'm
evaluating using it for our data science team, and got it up and running
quickly using some PostgreSQL data and some Spark tests. The distributed
nature of each paragraph, and the naturally varying interpreters within…