RE: Spark-sql showing no table

2016-07-13 Thread Vikash Kumar
I am creating a sqlContext from an existing sc. var tables = sqlContext.sql("show tables") Thanks and regards, Vikash Kumar From: Mohit Jaggi [mailto:mohitja...@gmail.com] Sent: Wednesday, July 13, 2016 10:24 PM To: users@zeppelin.apache.org Subject: Re: Spark-sql showing no table

Connection refused when creating remote interpreter

2016-07-13 Thread Jeff Zhang
I use Zeppelin 0.6 but hit this weird issue when running the tutorial notebook. It seems this happens when creating the remote interpreter. This is a very basic feature to me, not sure why it would not work (actually I don't have this issue in master), so I suspect whether it is caused by config

Re: Order of paragraphs vs. different interpreters (spark vs. pyspark)

2016-07-13 Thread CloverHearts
Nice to meet you. I have created a . Do you need a feature to run all paragraphs in a note? I think that function is needed. I will implement it. Thank you. From: xiufeng liu Reply to: Date: Thursday, July 14, 2016 3:18 AM To: "users@zeppelin.

Re: Order of paragraphs vs. different interpreters (spark vs. pyspark)

2016-07-13 Thread Hyung Sung Shim
Hi. I think you can run the workflows that you defined by just running the paragraph, and I believe the view functionality is going to get better. :) On Thursday, July 14, 2016, xiufeng liu wrote: > It is easy to change the code. I did it myself and use it as an ETL tool. It > is very powerful > > Afancy

Re: Order of paragraphs vs. different interpreters (spark vs. pyspark)

2016-07-13 Thread xiufeng liu
It is easy to change the code. I did it myself and use it as an ETL tool. It is very powerful. Afancy On Wednesday, July 13, 2016, Ahmed Sobhi wrote: > I think this pr addresses what I need. Case 2 seems to describe the issue > I'm having, if I'm reading it correctly. > > The proposed solution, howev

Re: Order of paragraphs vs. different interpreters (spark vs. pyspark)

2016-07-13 Thread Ahmed Sobhi
I think this pr addresses what I need. Case 2 seems to describe the issue I'm having, if I'm reading it correctly. The proposed solution, however, is not that clear to me. Is it that you define workflows, where a workflow is a sequence of (notebook, paragraph) pairs that are to be run in a specific

Re: Clear results from Zeppelin notebook json

2016-07-13 Thread Ahmed Sobhi
I could not reproduce with 0.6.0. I reran several trials again with 0.5.6, and it now works as expected. I was trying both exporting and checking the notebooks directory at the same time, and that got me confused: directly grabbing the notebook json from the notebooks directory didn't seem t

Re: Spark-sql showing no table

2016-07-13 Thread Mohit Jaggi
Make sure you use a hive context. > On Jul 13, 2016, at 12:42 AM, Vikash Kumar wrote: > > Hi all, > I am using spark with scala to read phoenix tables and > register them as temporary tables, which I am able to do. > After that, when I am running the query: >
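The advice above can be sketched as follows: a minimal Spark 1.x Scala sketch, assuming a SparkContext named `sc`. The table name and ZooKeeper URL are hypothetical placeholders. The key point is that temp tables registered on one SQLContext are invisible to a separately created context, and Hive-backed tables need a HiveContext.

```scala
// Spark 1.x: use HiveContext (a SQLContext subclass) so that %sql
// sees Hive-backed tables in addition to registered temp tables.
import org.apache.spark.sql.hive.HiveContext

val sqlContext = new HiveContext(sc)

// Register the DataFrame on the SAME context that will later run
// "show tables" -- a newly created SQLContext keeps its own
// temp-table registry and will not see tables registered elsewhere.
val df = sqlContext.read
  .format("org.apache.phoenix.spark")
  .option("table", "MY_TABLE")        // hypothetical table name
  .option("zkUrl", "localhost:2181")  // hypothetical ZooKeeper URL
  .load()
df.registerTempTable("my_table")

sqlContext.sql("show tables").show()
```

In Zeppelin this mainly means checking that `zeppelin.spark.useHiveContext` is enabled, rather than constructing a second context inside a paragraph.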

Re: Pass parameters to paragraphs via URL

2016-07-13 Thread TEJA SRIVASTAV
PS: typo. On Wed, Jul 13, 2016 at 9:03 PM TEJA SRIVASTAV wrote: > We do have a workaround for that, but validate it. > You need to use angular binding to achieve it: > %angular > > var scope = angular.element(document.getElementById("main")).scope().$root.compiledScope; > scope.getLocationParams = fun

Re: Pass parameters to paragraphs via URL

2016-07-13 Thread TEJA SRIVASTAV
We do have a workaround for that, but validate it. You need to use angular binding to achieve it: %angular var scope = angular.element(document.getElementById("main")).scope().$root.compiledScope; scope.getLocationParams = function(){ var pairs = window.location.search.substring(1).split("&"), obj = {}
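The snippet above is cut off mid-definition; a self-contained sketch of the query-string parsing step it sets up might look like the following (the function name is illustrative, not the poster's exact code):

```javascript
// Parse a location.search string such as "?test=123&foo=bar" into an
// object, the way the getLocationParams helper above sets out to do.
function parseLocationParams(search) {
  var obj = {};
  // Drop the leading "?" if present.
  var query = search.charAt(0) === "?" ? search.substring(1) : search;
  if (query.length === 0) return obj;
  var pairs = query.split("&");
  for (var i = 0; i < pairs.length; i++) {
    var parts = pairs[i].split("=");
    obj[decodeURIComponent(parts[0])] =
      decodeURIComponent(parts[1] || "");
  }
  return obj;
}
```

Inside an %angular paragraph this would be called as `parseLocationParams(window.location.search)`, with the result made available on the bound scope.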

Re: Order of paragraphs vs. different interpreters (spark vs. pyspark)

2016-07-13 Thread Hyung Sung Shim
Hi. Maybe https://github.com/apache/zeppelin/pull/1176 is related to what you want. Please check this pr. On Wednesday, July 13, 2016, xiufeng liu wrote: > You have to change the source code to add dependencies between running > paragraphs. I think it is a really interesting feature; for example, it can >

Re: Order of paragraphs vs. different interpreters (spark vs. pyspark)

2016-07-13 Thread xiufeng liu
You have to change the source code to add dependencies between running paragraphs. I think it is a really interesting feature; for example, it can be used as an ETL tool. But, unfortunately, there is no configuration option right now. /afancy On Wed, Jul 13, 2016 at 12:27 PM, Ahmed Sobhi wrote:

Order of paragraphs vs. different interpreters (spark vs. pyspark)

2016-07-13 Thread Ahmed Sobhi
Hello, I have been working on a large Spark Scala notebook. I recently had the requirement to produce graphs/plots out of this data. Python and PySpark seemed like a natural fit, but since I've already invested a lot of time and effort into the Scala version, I want to restrict my usage of Python

Re: Pass parameters to paragraphs via URL

2016-07-13 Thread Rajesh Balamohan
+1 on this. I am not sure if this is possible; if so, it would be really helpful. ~Rajesh.B On Fri, Jul 8, 2016 at 11:33 PM, on wrote: > Hi, > > I am trying to pass parameters via URL to a published paragraph (and to > run it after that), e.g., I would like to get the variable test of > /paragraph

Spark-sql showing no table

2016-07-13 Thread Vikash Kumar
Hi all, I am using spark with scala to read phoenix tables and register them as temporary tables, which I am able to do. After that, when I am running the query: %sql show tables It's giving all possible output, but when I am run