Hi,
I want to set up an environment for a group of users so that they can access
Zeppelin. Each of them should have their own space and should not interfere
with each other.
I installed Zeppelin on the MapR sandbox. If I access it from different
computers, even if I access different notebooks, the data are sti
Hi Team,
I am trying to integrate Zeppelin 0.6.0 with DataStax 4.8.8 (which has Spark
1.4.2). After I configured the following properties in zeppelin-env.sh, when I start
the zeppelin daemon it starts, and in the browser I can see Zeppelin is running, but
when I try to execute a spark query in the no
I think I found the cause. I think it is a font problem. In the docker
environment, only a small set of fonts is installed. But I have not
found out which font I should install... I will update you guys later.
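In case anyone wants to check their own image, here is a small sketch that lists the font files matplotlib can actually see (compare against a host where the plot renders correctly):

%python
from matplotlib import font_manager

# font files matplotlib has found on this system
for path in sorted(font_manager.findSystemFonts()):
    print(path)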
On Thu, Sep 15, 2016, 00:33 moon soo Lee wrote:
> Tried x = np.arange(100), x = np.linspac
I feel there is a Scala compatibility issue, and I will try compiling with
the right switches.
On Wed, Sep 14, 2016 at 1:54 PM, Abhi Basu <9000r...@gmail.com> wrote:
> Yes that fixed some of the problems.
>
> I am using Zeppelin 0.6.1 binaries against CDH 5.8 (Spark 1.6.0). Would
> there be a comp
Yes that fixed some of the problems.
I am using Zeppelin 0.6.1 binaries against CDH 5.8 (Spark 1.6.0). Would
there be a compatibility issue?
Thanks
Abhi
On Wed, Sep 14, 2016 at 12:55 PM, moon soo Lee wrote:
> Could you try to set full path of python command on zeppelin.python
> property? not
Could you try to set the full path of the python command in the zeppelin.python
property, not the bin directory?
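For example, if the Anaconda install mentioned in this thread is the one you want, the property would point at the binary itself:

zeppelin.python  /home/cloudera/anaconda2/bin/python

rather than just the /home/cloudera/anaconda2/bin directory.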
On Wed, Sep 14, 2016 at 10:19 AM Abhi Basu <9000r...@gmail.com> wrote:
> Tried pyspark command on same machine which uses Anaconda python and
> sc.version returned value.
>
> Zeppelin:
> zeppelin.
Tried the pyspark command on the same machine, which uses Anaconda python, and
sc.version returned a value.
Zeppelin:
zeppelin.python /home/cloudera/anaconda2/bin
In Zeppelin, nothing is returned.
On Wed, Sep 14, 2016 at 11:53 AM, moon soo Lee wrote:
> Did you export SPARK_HOME in conf/zeppelin-env.sh?
>
Did you export SPARK_HOME in conf/zeppelin-env.sh?
Could you verify the same code works with ${SPARK_HOME}/bin/pyspark, on the
same machine that Zeppelin runs on?
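For example, a line like the following in conf/zeppelin-env.sh (the path below is only a guess at a typical CDH parcel layout; point it at wherever Spark is actually installed):

export SPARK_HOME=/opt/cloudera/parcels/CDH/lib/spark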
Thanks,
moon
On Wed, Sep 14, 2016 at 8:07 AM Abhi Basu <9000r...@gmail.com> wrote:
> Oops sorry. the above code generated this error:
>
I think there should be code changes to address this problem.
Maybe the line chart could have a checkbox option so the user can choose to
ignore empty values or treat them as zero.
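Until such an option exists, one user-side workaround is to fill the gaps before display; a minimal sketch, assuming the data is prepared as a pandas DataFrame first (the frame below is made up for illustration):

%python
import pandas as pd

# hypothetical data with a missing y value
df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [10.0, None, 7.0, 3.0]})

# treat empty values as zero before charting
df["y"] = df["y"].fillna(0)
print(df)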
Do you mind filing an issue for it?
Thanks,
moon
On Mon, Sep 12, 2016 at 8:11 AM Ayestaran Nerea
wrote:
> Hi everyone!
Tried x = np.arange(100) and x = np.linspace(-2,2,1000) with both python2 and
python3 in the %python interpreter. I don't have any problem.
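Roughly the kind of paragraph I ran, for reference (the %html rendering below is just one common way to surface the figure from the %python interpreter; my exact code may have differed):

%python
import io, base64
import numpy as np
import matplotlib
matplotlib.use('Agg')            # headless backend
import matplotlib.pyplot as plt

x = np.linspace(-2, 2, 1000)     # negative values on the x axis
plt.plot(x, x ** 2)

buf = io.BytesIO()
plt.savefig(buf, format='png')
buf.seek(0)
print('%html <img src="data:image/png;base64,'
      + base64.b64encode(buf.read()).decode('ascii') + '"/>')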
On Wed, Sep 14, 2016 at 3:12 AM Xi Shen wrote:
> OK, for this problem, it is discussed at
> https://stackoverflow.com/questions/15538099/conversion-of-unicode-minu
Regarding data in the note.json:
If the user doesn't want to include data in the exported note.json, they can
clear the outputs before export, for now.
We might think about displaying two export options, with / without data, when clicking the
export button, if exporting a notebook without data is important and need
Oops, sorry. The above code generated this error:
ERROR [2016-09-14 10:04:27,121] ({qtp2003293121-11}
NotebookServer.java[onMessage]:221) - Can't handle message
org.apache.zeppelin.interpreter.InterpreterException:
org.apache.thrift.transport.TTransportException
at
org.apache.zeppelin.interpreter.re
%pyspark
input_file = "hdfs:tmp/filenname.gz"
raw_rdd = sc.textFile(input_file)
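For comparison, the same read with a fully-qualified HDFS URI, as a minimal sketch assuming the file actually sits under /tmp on HDFS (the file name is just the one from the snippet above):

%pyspark
# fully-qualified HDFS path: scheme plus an absolute path
input_file = "hdfs:///tmp/filenname.gz"
raw_rdd = sc.textFile(input_file)
# .gz files are not splittable, so this comes back as a single partition
print(raw_rdd.count())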
Hi Afancy,
if you want to build with Scala 2.11 by using the -Pscala-2.11 flag, you will
need to run `./dev/change_scala_version.sh 2.11` prior to running the mvn
command. Scala-dependent modules in Zeppelin have a _2.10 suffix in the artifact
id by default, and running ./dev/change_scala_version.sh will change t
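For example (the mvn flags below are taken from the command quoted elsewhere in this thread; adjust the profiles to your environment):

./dev/change_scala_version.sh 2.11
mvn clean package -Pbuild-distr -DskipTests -Pspark-2.0 -Phadoop-2.4 -Pyarn -Pscala-2.11 -Ppyspark -Psparkr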
OK, for this problem, it is discussed at
https://stackoverflow.com/questions/15538099/conversion-of-unicode-minus-sign-from-matplotlib-ticklabels
However, I just tried with a Jupyter notebook, and its matplotlib can plot
negative values on the axes correctly, and
matplotlib.rcParams['axes.unico
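A workaround commonly suggested for that unicode-minus issue is to fall back to a plain ASCII hyphen on the tick labels; a small sketch, not yet verified inside the docker image discussed here:

%python
import matplotlib
# render an ASCII hyphen instead of the unicode minus sign in tick labels
matplotlib.rcParams['axes.unicode_minus'] = False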
Hi Jeff,
I think there might be some examples here:
https://www.zeppelinhub.com/viewer/showcases/Visualization
But I'm sure others who have some of their own will post them here too.
On Wed, Sep 14, 2016 at 5:43 PM, Jeff Zhang wrote:
> I looked at the following link about angular display syste
Hi,
I worked it out... I had to start a new instance of Zeppelin; creating a
new notebook won't take effect. So all the Python code is executed in one
Python VM? Wouldn't separate ones be better?
After getting matplotlib to work, I have a new problem.
This code snippet works
%python
import num
I looked at the following link about the angular display system; it is very
interesting. I just wonder whether there are any more examples and small widgets
built upon it. Thanks
https://zeppelin.apache.org/docs/0.7.0-SNAPSHOT/displaysystem/back-end-angular.html
--
Best Regards
Jeff Zhang
Hello Folks,
I am using the command "mvn -X clean package -Pbuild-distr -DskipTests
-Pspark-2.0 -Phadoop-2.4 -Pyarn -Pscala-2.11 -Ppyspark -Psparkr" to build
the source code pulled from the master branch, but got the following error. Any
suggestions are appreciated if you have encountered the same problem. Th