RE: What's the best practice limit on query results?

2018-01-23 Thread Belousov Maksim Eduardovich
Hi Alexander! There was PR 2323 [1], "[ZEPPELIN-2411] Improve Table", which added UI-Grid [2]. UI-Grid handles huge amounts of data well and has nice functionality. [1] https://github.com/apache/zeppelin/pull/2323 [2] http://ui-grid.info/ Regards, Maksim Belousov From: alexander.m

Re: Supporting several spark versions simultaneously

2018-01-23 Thread Jeff Zhang
Zeppelin doesn't need to take care of py4j; that is Spark's responsibility. As long as you set SPARK_HOME, Spark will find py4j properly. Michaël Van de Borne wrote on Tue, Jan 23, 2018 at 10:37 PM: > As far as SPARK_HOME is concerned, this might work. > However, other variables, such as PYTHONPATH and SPARK_Y
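
For context, this is roughly what Spark's own launcher scripts (e.g. bin/pyspark) do once SPARK_HOME is set; a sketch only, and the py4j zip version varies by Spark release:

    export PYTHONPATH="$SPARK_HOME/python:$PYTHONPATH"
    # the exact py4j version differs per Spark release (e.g. py4j-0.9 for Spark 1.6)
    export PYTHONPATH="$SPARK_HOME/python/lib/py4j-0.10.4-src.zip:$PYTHONPATH"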

Re: Supporting several spark versions simultaneously

2018-01-23 Thread Michaël Van de Borne
As far as SPARK_HOME is concerned, this might work. However, other variables, such as PYTHONPATH and SPARK_YARN_USER_ENV, should be defined in zeppelin-env.sh as well. Without those variables, Zeppelin cannot access, for instance, py4j or pyspark. And guess where py4j sits? In SPARK_HOME/python/lib
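
A minimal zeppelin-env.sh sketch along the lines Michaël describes; the SPARK_HOME path and the py4j zip name are placeholders that depend on the Spark distribution in use:

    export SPARK_HOME=/opt/spark-1.6.3-bin-hadoop2.6   # placeholder path
    export PYTHONPATH="$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.9-src.zip:$PYTHONPATH"
    export SPARK_YARN_USER_ENV="PYTHONPATH=$PYTHONPATH"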

What's the best practice limit on query results?

2018-01-23 Thread Alexander.Meier
Hi guys, We're using Zeppelin to do some analysis of log files (Cloudera cluster, currently Zeppelin 0.7.1), and we're finding that Zeppelin tends to get really slow when notebooks / queries return large datasets. * Is there a best practice on what amounts of data / query results z
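
For reference, Zeppelin caps how many rows reach the front end through interpreter properties; a sketch of the knobs available around 0.7.x (values shown are the usual defaults and may differ per release):

    zeppelin.spark.maxResult = 1000   # max rows of SparkSQL output sent to the notebook
    common.max_count = 1000           # max rows returned by the %jdbc interpreter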

Re: JDBC connection to Impala fails

2018-01-23 Thread Abhi Basu
Yes, correct. On Jan 22, 2018 11:44 PM, "Jeff Zhang" wrote: > > Just curious to know, does Impala use HiveDriver? > > Abhi Basu <9000r...@gmail.com> wrote on Tue, Jan 23, 2018 at 12:50 AM: >> Seems like a 0.7.3 bug, please verify. >> >> The same configurations worked fine in 0.7.2. >> >> Thanks, >> >> Abhi >> >
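
For comparison, a sketch of a %jdbc interpreter setting that reaches Impala through the Hive driver, which appears to be the setup under discussion; the host, the auth suffix, and the artifact version are placeholders for an unsecured cluster:

    default.driver = org.apache.hive.jdbc.HiveDriver
    default.url    = jdbc:hive2://impala-host:21050/;auth=noSasl   # placeholder host; 21050 is Impala's usual HS2-compatible port
    # dependency artifact (placeholder version): org.apache.hive:hive-jdbc:1.1.0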

Re: Supporting several spark versions simultaneously

2018-01-23 Thread Jeff Zhang
Just remove SPARK_HOME from zeppelin-env.sh and instead define it in the Spark interpreter setting. You can create 2 Spark interpreters, one for Spark 1.6 and another for Spark 2; the only difference between them is the SPARK_HOME you define in each interpreter setting. Michaël Van de Borne, on Tue, Jan 23, 2018 at 6:42
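
A sketch of the two-interpreter setup Jeff suggests; the interpreter names are arbitrary and the SPARK_HOME paths are placeholders:

    # Interpreter "spark16" (used in notebooks as %spark16):
    SPARK_HOME = /opt/spark-1.6.3-bin-hadoop2.6
    # Interpreter "spark2" (used in notebooks as %spark2):
    SPARK_HOME = /opt/spark-2.1.0-bin-hadoop2.7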

Supporting several spark versions simultaneously

2018-01-23 Thread Michaël Van de Borne
Hi list, I'd like my notebooks to support both Spark 1.6 and Spark 2. I managed to get both versions working fine with the SPARK_HOME variable in zeppelin-env.sh, but only one at a time. So I need to change the variable and restart Zeppelin whenever I want to swap Spark versions. Is it possible to some