Hi Deenar,

It is possible to use the Zeppelin context (z) via the PySpark interpreter.

Example (based on Zeppelin 0.6.0)

paragraph1
---------------
%spark

// do some stuff and store the result (an org.apache.spark.sql.DataFrame)
// in the Zeppelin context
...
z.put("scala_df", scala_df)   // scala_df: org.apache.spark.sql.DataFrame

paragraph2
---------------

%spark.pyspark

from pyspark.sql import DataFrame

# take the DataFrame from the Zeppelin context and wrap the underlying
# Java DataFrame in a PySpark DataFrame
df_pyspark = DataFrame(z.get("scala_df"), sqlContext)

# display first 5 rows
df_pyspark.show(5)
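
Regarding z.show: the sketch below is only an illustration, not something I
have verified against 0.6.0. It assumes z.show is exposed in the PySpark
interpreter and uses PySpark's _jdf attribute (the handle to the underlying
Java DataFrame) to hand a result back to Scala paragraphs; the "pyspark_df"
key is just a name chosen for the example.

paragraph3
---------------

%spark.pyspark

# display the wrapped DataFrame with Zeppelin's table renderer
# (assumes z.show is available from PySpark in this Zeppelin version)
z.show(df_pyspark)

# hand a PySpark DataFrame back to the Zeppelin context so a Scala
# paragraph can read it with z.get("pyspark_df"); _jdf is the underlying
# Java DataFrame
z.put("pyspark_df", df_pyspark._jdf)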

Regards,

Andres Koitmäe


On 17 January 2017 at 10:15, Deenar Toraskar <deenar.toras...@gmail.com>
wrote:

> Hi
>
> Is it possible to access the Zeppelin context via the Pyspark interpreter? Not
> all the methods available via the Spark Scala interpreter seem to be
> available in the Pyspark one (unless I am doing something wrong). I would
> like to do something like this from the Pyspark interpreter:
>
> z.show(df, 100)
>
> or
>
> z.run(z.listParagraphs.indexOf(z.getInterpreterContext().getParagraphId()) + 1)
>
>
