Re: Passing variables from %pyspark to %sh

2017-01-12 Thread t p
Is it possible to have similar support for exchanging checkbox/dropdown variables,
and can variables be exchanged with other interpreters such as PSQL (e.g. a variable
set by spark/pyspark and accessible in another paragraph that is running the PSQL
interpreter)?

I’m interested in doing this and would like to know whether there is a way to
accomplish it:
https://lists.apache.org/thread.html/a1b3530e5a20f983acd70f8fca029f90b6bfe8d0d999597342447e6f@%3Cusers.zeppelin.apache.org%3E
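
For the dropdown part specifically: a dynamic-form value is an ordinary variable in
the paragraph that creates it, so within the Spark interpreter group it can already
be shared through z.put/z.get. A minimal sketch (form and variable names are made
up; whether a PSQL paragraph can read the value back is exactly the open question):

%pyspark

# Dynamic form: renders a dropdown in this paragraph and returns the selection.
day = z.select("day", [("mon", "Monday"), ("tue", "Tuesday")])

# Put it into the resource pool so other Spark-group paragraphs can read it.
z.put("selected_day", day)

# In another %pyspark or %spark paragraph:
# z.get("selected_day")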
 



> On Jan 12, 2017, at 2:16 AM, Jongyoul Lee wrote:
> 
> There's no way to communicate between the spark and sh interpreters. It needs to 
> be implemented, but it isn't yet. But I agree that it would be helpful in 
> some cases. Can you create an issue?
> 
> On Thu, Jan 12, 2017 at 3:32 PM, Ruslan Dautkhanov wrote:
> It's possible to exchange variables between Scala (%spark) and %pyspark
> through z.put and z.get.
> 
> How to pass a variable to %sh?
> 
> In Jupyter it's possible to do, for example:
>   ! hadoop fs -put {localfile} {hdfsfile}
> 
> where localfile and hdfsfile are Python variables.
> 
> I can't find any reference to something similar for the Shell interpreter:
> https://zeppelin.apache.org/docs/0.7.0-SNAPSHOT/interpreter/shell.html 
> 
> 
> In many notebooks we have to pass small variables 
> from Zeppelin notes to external scripts as parameters.
> 
> It would be awesome to have something like
> 
> %sh
> /path/to/script --param8={var1} --param9={var2}
> 
> where var1 and var2 would implicitly be fetched as z.get('var1') 
> and z.get('var2') respectively.
> 
> Other thoughts?
> 
> 
> Thank you,
> Ruslan Dautkhanov
> 
> 
> 
> 
> -- 
> 이종열, Jongyoul Lee, 李宗烈
> http://madeng.net 
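
Until such substitution support exists, one workaround for the question quoted above
is to skip %sh and shell out from %pyspark itself, pulling the values from the
ZeppelinContext. A minimal sketch (the script path and flags are the placeholders
from the example above, and it assumes the values were stored earlier with z.put):

%pyspark

import subprocess

var1 = z.get('var1')   # stored earlier, e.g. z.put('var1', '2017-01-12')
var2 = z.get('var2')

# Runs inside the pyspark interpreter process rather than the sh interpreter,
# so %sh-specific settings (timeouts, etc.) do not apply.
subprocess.check_call(['/path/to/script',
                       '--param8={0}'.format(var1),
                       '--param9={0}'.format(var2)])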



Re: Passing variables from %pyspark to %sh

2017-01-12 Thread Jongyoul Lee
Yes, many users have suggested a feature for sharing results between paragraphs
and different interpreters. I think this will be one of the major features in
an upcoming release.

On Thu, Jan 12, 2017 at 10:30 PM, t p wrote:

> Is it possible to have similar support for exchanging checkbox/dropdown
> variables, and can variables be exchanged with other interpreters such as PSQL
> (e.g. a variable set by spark/pyspark and accessible in another paragraph that
> is running the PSQL interpreter)?
>
> I'm interested in doing this and would like to know whether there is a way to
> accomplish it:
> https://lists.apache.org/thread.html/a1b3530e5a20f983acd70f8fca029f90b6bfe8d0d999597342447e6f@%3Cusers.zeppelin.apache.org%3E


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Re: Passing variables from %pyspark to %sh

2017-01-12 Thread Jeff Zhang
I agree with sharing variables between interpreters. Currently Zeppelin launches
one JVM per interpreter group, so it is not possible to share variables between
spark and sh. For some interpreters like sh and md, though, it is not necessary
to create a separate JVM; we could embed them in the Spark interpreter's JVM.
But we could not do that for all interpreters, because it could cause jar
conflicts.
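
As a stopgap that works across those separate JVMs, the two interpreters can also
hand values over through the filesystem: a %pyspark paragraph writes a small env
file and the %sh paragraph sources it. A rough sketch (the file path and variable
names are arbitrary, and it assumes both interpreters run on the same host):

%pyspark

# Write out the values a later %sh paragraph needs.
vals = {'VAR1': z.get('var1'), 'VAR2': z.get('var2')}
with open('/tmp/zeppelin_note_vars.sh', 'w') as f:
    for k, v in vals.items():
        f.write('export {0}="{1}"\n'.format(k, v))

# Then, in the next paragraph:
# %sh
# . /tmp/zeppelin_note_vars.sh
# /path/to/script --param8="$VAR1" --param9="$VAR2"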



Jongyoul Lee wrote on Thu, Jan 12, 2017 at 10:18 PM:

> Yes, many users have suggested a feature for sharing results between paragraphs
> and different interpreters. I think this will be one of the major features in
> an upcoming release.


Re: Passing variables from %pyspark to %sh

2017-01-12 Thread t p
Is something like this feasible from the front-end perspective, i.e. the web UI 
(Angular)? No matter which process/JVM runs the interpreter, I'd assume that a 
notebook is executed in the context of a web browser, which unifies all the 
pages of the notebook...
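
One piece that already exists on the front-end side is the Angular display system: a
value bound with z.angularBind in one paragraph can be referenced from an %angular
paragraph in the same note. A small sketch (it assumes the pyspark ZeppelinContext in
your version exposes angularBind; if not, the same call works from %spark):

%pyspark

# Bind a value into the note's Angular scope so front-end paragraphs can use it.
z.angularBind('myVar', 'hello from pyspark')

# In another paragraph:
# %angular
# <div>myVar is {{myVar}}</div>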

> On Jan 12, 2017, at 9:56 AM, Jeff Zhang wrote:
> 
> 
> I agree with sharing variables between interpreters. Currently Zeppelin launches 
> one JVM per interpreter group, so it is not possible to share variables 
> between spark and sh. For some interpreters like sh and md, though, it is not 
> necessary to create a separate JVM; we could embed them in the Spark 
> interpreter's JVM. But we could not do that for all interpreters, because it 
> could cause jar conflicts.



Problem with spark.r plotting functionality

2017-01-12 Thread Andres Koitmäe
Hi,

I have a problem with R plots in Zeppelin. I am on Hortonworks Sandbox 2.5,
where I installed Zeppelin 0.6.2 (I'm not using the HDP-provided version of
Zeppelin there).

I followed the instructions from
https://zeppelin.apache.org/docs/0.6.2/interpreter/r.html

I can do all data manipulations but plotting is not working. For example:

%spark.r

data(mtcars)
mtcars

should, according to the documentation, display the results using Zeppelin's
built-in visualizations, but instead it just shows them as text:


Mazda RX4 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4
Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4
Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1
Hornet 4 Drive 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1

The second example

%spark.r

data(faithful)
f <- createDataFrame(sqlContext, faithful)
registerTempTable(f, "faithful")
ed <- sql(sqlContext, "SELECT * FROM faithful")
SparkR:::head(ed)

does not use Zeppelin's built-in visualizations either.

Is it somehow possible to make Zeppelin and spark.r work in the way shown in
the documentation?

Regards,

Andres


Re: Using CDH dynamic resource pools with Zeppelin

2017-01-12 Thread Yaar Reuveni
Is it known when v0.7.0 is expected to be released?

On Wed, Jan 11, 2017 at 4:09 PM, Paul Brenner wrote:

> My understanding is that this kind of user-specific control isn't coming
> until v0.7.0. Currently when we run Zeppelin, all tasks are submitted by the
> user that started the Zeppelin process (so we start Zeppelin from the yarn
> account and everything is submitted as yarn). At least for Spark there is a
> user queue parameter that can be set in the interpreter, which ensures that
> users only get the resources they are allowed. We just create a
> different interpreter for each user and set that parameter. It isn't
> perfect, and might not even be available for your JDBC case, but I thought the
> detail might help.
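
(For reference: the "user queue parameter" mentioned above is presumably Spark's
spark.yarn.queue property. Set on a per-user Spark interpreter it would look roughly
like the following, where the queue name is a placeholder.)

spark.yarn.queue    root.users.alice
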
>
> Paul Brenner
> Data Scientist, PlaceIQ
> (217) 390-3033
>
> On Wed, Jan 11, 2017 at 8:04 AM, Yaar Reuveni wrote:
>
>> Hey,
>> Since there's been no answer yet, I'll try a simpler question.
>> I have Zeppelin defined with a *JDBC* interpreter configured with
>> *Impala* that works against a CDH5.5 Hadoop cluster.
>> When I run queries from Zeppelin, they run without a user in Hadoop, and no
>> user is shown in Cloudera Manager.
>> How can I configure it so there is a user defined on the connection and
>> on the running queries?
>>
>> Thanks,
>> Yaar
>>
>> On Tue, Dec 20, 2016 at 10:25 AM, Yaar Reuveni wrote:
>>
>>> Hey,
>>> We're using a Cloudera distribution of Hadoop.
>>> We want to know how we can configure Zeppelin user authentication and
>>> link users to resource pools in our YARN Hadoop cluster.
>>>
>>> Thanks,
>>> Yaar
>>>
