. I am not completely sure if it was a
timeout or not but would still like to understand which timeouts are present
and how to change them.
Thanks,
Mohit.
> On May 17, 2016, at 6:15 AM, Felix Cheung wrote:
>
> Do you have the error message?
>
>
>
>
>
> On Mon, May 16
That's a great solution. If one of you doesn't mind opening a JIRA for this,
we should investigate and fix this line ending check issue.
On Tue, May 17, 2016 at 6:05 AM -0700, "Chris Winne"
wrote:
On Mon, May 16, 2016 at 10:06:18AM -0300, Guilherme Silveira wrote:
>> Hi Folks,
>>
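As a rough illustration of the kind of line-ending fix being discussed (the function name and placement are hypothetical, not Zeppelin's actual code), one could normalize endings before any line-based check:

```python
def normalize_line_endings(text):
    """Collapse CRLF and lone CR into LF so downstream line-based
    checks behave the same for notes created on any platform."""
    return text.replace("\r\n", "\n").replace("\r", "\n")
```

Normalizing once up front avoids scattering platform-specific checks through the parsing code.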
Do you have the error message?
On Mon, May 16, 2016 at 4:37 PM -0700, "Mohit Jaggi"
wrote:
Hi All,
I want to run a long job from zeppelin but it seems that it gets killed. My
guess is that it is due to the spark REPL timing out (because Spark logs say
killed by user). How can I increa
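For reference, one timeout that is commonly adjusted in this situation is the interpreter connect timeout in conf/zeppelin-site.xml (the value below is illustrative, and your Zeppelin version may expose additional timeouts):

```xml
<property>
  <name>zeppelin.interpreter.connect.timeout</name>
  <value>600000</value> <!-- milliseconds; illustrative value -->
</property>
```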
I'm not sure if this has been reported; could you please open a JIRA? Do you
know how to reproduce this?
On Fri, Apr 22, 2016 at 3:13 PM -0700, "Johnny W." wrote:
Hi zeppelin-users,
Our team is continuously encountering a bug which may disable the editor
after running a paragraph. We have
hi Scott
Vendor-repo would be the way to go. It is possible in this case CDH Spark 1.6
has some incompatible API changes, though I couldn't find it yet. Do you have
more from the logs on that NoSuchMethodException?
_
From: Scott Zelenka
Sent: Wednesday, April 13,
I think you would need to build all of Zeppelin, not just your interpreter.
For instance, you should run mvn clean package from the directory that has
zeppelin-server, zeppelin-interpreter, and so on.
On Sat, Apr 2, 2016 at 1:28 PM -0700, "John Omernik" wrote:
Hey all,
I am a very novice
You should be able to access that from Spark SQL through a package like
http://spark-packages.org/package/Huawei-Spark/Spark-SQL-on-HBase
This package doesn't seem to have been updated in a while, though.
On Tue, Mar 22, 2016 at 11:06 AM -0700, "Kumiko Yada"
wrote:
Hello,
Is there a way
min Kim <bbuil...@gmail.com>
> Sent: Tuesday, February 23, 2016 6:19 PM
> Subject: Re: HBase Interpreter
> To: <users@zeppelin.incubator.apache.org>
>
>
> Hi Felix,
>
> Any updates? Does the latest merged master have the hbase quorum
me. Would that code only be loaded in the
Zeppelin process?
On Fri, Mar 11, 2016 at 5:32 PM, Felix Cheung
wrote:
Not from the stack, I think the best way is to run
jps -v
Y
?
On Fri, Mar 11, 2016 at 1:55 PM, Felix Cheung
wrote:
As you can see in the stack below, it's just calling SQLContext.sql()
org.apache.spark.sql.SQLContext.sql(SQLContext.scala:725) at
It is possible this is caused by some issue with line parsing. I will try to
take a look.
_
From: Adam Hull
Sent: Friday, March 11
It looks like it might have timed out or disconnected?
On Thu, Mar 10, 2016 at 3:39 AM -0800, "Skanda"
wrote:
Hi
The zeppelin server hangs after a couple of days with the following
exceptions in the server log. After a restart everything looks good. I
understand it is a socket connection
Hi - this seems to be an issue with the way the python code is imported from a
jar or from a Spark package. I ran into the same issue. I tried but couldn't
find any guideline on how a Spark package should make its Python binding
available. If you would open an issue at GraphFrames, I could chime in the
by projjwal ? On Mar 4, 2016 7:31
AM, "Felix Cheung" < felixcheun...@hotmail.com> wrote:
Please do try the release builds.
As for pyspark issue, do you see any error? Do you have
SPARK_HOME set in
You might want to start a new email thread with CDH in the title.
On Thu, Mar 3, 2016 at 1:39 AM -0800, "pseudo oduesp"
wrote:
hi silvio , have you any idea ?
thank you
2016-03-02 14:39 GMT+01:00 Silvio Fiorito :
> This is probably the issue:
>
> export SPARK_SUBMIT_OPTIONS=SPARK_SU
On Mar 3, 2016, at 9:06 AM, Felix Cheung <
felixcheun...@hotmail.com> wrote:
Have you tried the release binaries on
https://zeppelin.incubator.apache.org/download.html?
On Thu, Mar 3, 2016 at 4:51 AM -0800, "pseudo oduesp"
wrote:
I don't know why the Zeppelin team can't provide binaries for people, to make
it easy. Are we different? (beginner, confirmed expert)
There should be a link on the Zeppelin homepage that says "import note".
On Wed, Mar 2, 2016 at 12:22 PM -0800, "cs user" wrote:
Hi All,
Within the notebook UI, there is a button to export a notebook. This seems
to work fine.
However, there doesn't seem to be a button to then import the
Sounds like it could be an interesting feature to add.
Would you like to contribute? :)
On Tue, Mar 1, 2016 at 3:49 AM -0800, "魏龙星" wrote:
In that case, users have to write code for every notebook.
Eran Witkon wrote on Tuesday, March 1, 2016 at 7:48 PM:
> I guess that if the scheduler can run a notebook
HBase Interpreter
To:
Hi Felix,
Any updates? Does the latest merged master have the hbase quorum
properties?
Thanks, Ben
On Feb 12, 2016, at 1:29 AM, Felix Cheung <
felixcheun...@hotmail.co
elin how to work with h2o
algorithms.
It sounds very good that I can work from Zeppelin notebook with Spark and
H2O algorithms inside one workplace.
On Sat, Feb 20, 2016 at 8:44 AM, Felix Cheung
wrote:
> H2o works in Python, Java, Scala or with Spark (Sparkling Water) as well.
>
>
>
&
Would %spark z.load() have the earlier limitation that it must be run before
anything to do with Spark?
On Sat, Feb 20, 2016 at 12:33 AM -0800, "Ankur Jain"
wrote:
LGTM…
Thanks
Ankur
From: moon soo Lee [mailto:m...@apache.org]
Sent: 20 February 2016 11:24 AM
To: users@zeppelin.in
H2o works in Python, Java, Scala or with Spark (Sparkling Water) as well.
On Fri, Feb 19, 2016 at 10:11 AM -0800, "Girish Reddy"
wrote:
You'll need an R interpreter - https://github.com/elbamos/Zeppelin-With-R
You can then load the H2O libraries just as you would from RStudio.
On Fr
If it is running, you should be able to see it on the YARN service
Applications tab in Cloudera Manager. It should also be on the YARN history
server page.
On Thu, Feb 18, 2016 at 2:37 PM -0800, "moon soo Lee" wrote:
'zeppelin-root-cdh-lsedge.local.com.out' file looks like it has your S
s that it is listening on that port 8080
· Firewall is turned off
· But still get 404 on localhost:8080
· Have tried other ports before with the same result
· Also, a jstack against the Zeppelin process cannot attach to it.
Almost indicating that it hang
..@gmail.com> wrote:
> Yes that is resolved now. Looks like I am missing npm on my system, which
> was not mentioned here -
> https://zeppelin.incubator.apache.org/docs/0.5.5-incubating/install/yarn_install.html.
> I will retry after I install npm.
>
> On Wed, Feb 17, 2016 at 1:42 P
Could you check in Task Manager that it is running? Also, could this be
blocked by firewall rules?
On Wed, Feb 17, 2016 at 2:30 PM -0800, "Rohit Jain"
wrote:
Hi folks,
I tried various ways to get Zeppelin to work and don’t seem to be having
any luck.
I tried these on my Windows 10 PC
It looks like the jar file is corrupted somehow?
So Spark 1.6 works when you run the pre-built official 0.5.6 release?
On Wed, Feb 17, 2016 at 10:53 AM -0800, "vincent gromakowski"
wrote:
Hi all,
I have built Zeppelin from source (master branch) but I cannot get it to work
with Spark.
Here
>> Compiled by jenkins on 2015-12-02T18:38Z
>> Compiled with protoc 2.5.0
>> From source with checksum 98e07176d1787150a6a9c087627562c
>> This command was run using
>> /opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/jars/hadoop-common-2.6.0-cdh5.5.1.jar
>>
>>
Try building with -Pvendor-repo
On Wed, Feb 17, 2016 at 11:38 AM -0800, "Abhi Basu" <9000r...@gmail.com> wrote:
[INFO]
[INFO] Reactor Summary:
[INFO]
[INFO] Zeppelin ... SUCCE
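As a sketch, the vendor-repo suggestion above for a CDH build might look like this (profile names and the hadoop.version value are illustrative; match them to your cluster):

```shell
mvn clean package -Pspark-1.6 -Phadoop-2.6 -Pyarn -Ppyspark -Pvendor-repo \
  -Dhadoop.version=2.6.0-cdh5.5.1 -DskipTests
```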
Cool, I think I have figured out how to set properties too. I might open a PR
tomorrow or later.
On Thu, Feb 11, 2016 at 9:24 PM -0800, "Rajat Venkatesh"
wrote:
Hi,
I'll take a look over the weekend. Sorry for the delay in replying.
On Wed, Feb 10, 2016 at 6:44
The HBase quorum is actually namenode001, namenode002,
hbase-master001. Where do I set this?
Thanks, Ben
On Feb 4, 2016, at 9:15 PM, Felix Cheung <
felixcheun...@hotmail.com> wrote:
tell me what values to put as the properties for hbase version?
Thanks, Ben
On Feb 4, 2016, at 9:15 PM, Felix Cheung <
felixcheun...@hotmail.com> wrote:
: Thursday, February 4, 2016 9:39 PM
Subject: Re: HBase Interpreter
To:
Please, tell me what values to put as the properties for hbase version?
Thanks, Ben
On Feb 4, 2016, at 9:15 PM, Felix Cheung <
felixcheun...@hotma
We hate that they do that without informing
> anyone.
>
> Thanks,
> Ben
>
>
>
> On Feb 4, 2016, at 9:18 AM, Felix Cheung
> wrote:
>
> CDH is known to cherry pick patches from later releases. Maybe it is
> because of that.
>
> Rajat do you have any lead on the rel
CDH is known to cherry pick patches from later releases. Maybe it is because of
that.
Rajat do you have any lead on the release compatibility issue?
_
From: Rajat Venkatesh
Sent: Wednesday, February 3, 2016 10:05 PM
Subject: Re: HBase Interpreter
To:
O
pecific version of Hadoop, I actually removed it from the
build command and still get the error, I just need spark 1.6
> On Feb 3, 2016, at 9:05 AM, Felix Cheung wrote:
>
> I think his build command only works with Cloudera CDH 5.4.8, as you can see.
> Mismatch Akka version i
n package -Pspark-1.6 -Dspark.version=1.6.0
>> -Dhadoop.version=2.6.0-cdh5.4.8 -Phadoop-2.6 -Pyarn -Ppyspark -Pvendor-repo
>> -DskipTests
>>
>> This worked for me.
>>
>> Cheers,
>> Ben
>>
>>
>>> On Feb 1, 2016, at 7:44 PM, Felix Che
Great, perhaps it helps to add this tip to the md interpreter doc?
On Tue, Feb 2, 2016 at 9:26 PM -0800, "류아영" wrote:
Glad to hear that you found the reason and solved it! 😁
Thanks,
Ahyoung
On Wed, Feb 3, 2016 at 2:13 PM, Zhong Wang wrote:
> Thank you Ahyoung for working on reproduc
Hi
You can see the build command-line example for the spark-1.6 profile here:
https://github.com/apache/incubator-zeppelin/blob/master/README.md
On Mon, Feb 1, 2016 at 3:59 PM -0800, "Daniel Valdivia"
wrote:
Hi,
I'd like to ask if there's an easy way to upgrade spark to 1.6.0 from the
curr
You probably would need to inspect the log files under the log directory of
your Zeppelin binaries.
What does it show in the notebook when you hit an error? Propagating error
messages better could be something we could improve on if it is an issue.
On Thu, Jan 28, 2016 at 12:26 PM -0800, "D
It doesn’t look like we have that functionality yet.
On Tue, Jan 26, 2016 at 12:51 AM -0800, "Marek Wiewiorka"
wrote:
Hi All - I've got a simple question - I found on the web page with the
release notes for 0.5.6 information:
- New features (Import/export notebook, read only serve
FYI
_
*Note, expedite your check in at Galvanize and register here
Talk 1: Using Spark MLlib To Predict Most Popular Tweets
Spark's Machine Learning Library (MLlib) enables running Machine Learning
algorithms in a scalable way on massive datase
You should be able to set up a client-only machine and assign the Spark and
Hive clients to it.
On Fri, Dec 4, 2015 at 1:15 PM -0800, "Hoc Phan" wrote:
When I setup Cloudera, there is no /hive dir in management node. I guess I had
to add that role in Cloudera Manager first?
As a best pra
users@zeppelin.incubator.apache.org
That is understandable, but what about if you stop execution by pressing the
button in the notebook? If you do that after you have cached some RDD or
broadcast a variable, the cleanup code won't be executed, right?
On Thu, Dec 3, 2015 at 6:25 PM, Felix Cheung wrote:
I think that's
On Dec 3, 2015, at 11:51 AM, Felix Cheung <
felixcheun...@hotmail.com> wrote:
Could you send us your configuration of the Spark
Interpreter in Zeppelin?
I can see how both jobs can be long-lived in
So far it seems it stopped after I started destroying them +
cachedRdd.unpersist
On Thu, Dec 3, 2015 at 5:52 PM, Felix Cheung
wrote:
> Do you know what version of spark you are running with?
>
>
>
>
>
> On Thu, Dec 3, 2015 at 12:52 AM -0800, "Kevin (Sangwoo) Kim" <
&
e it hasn't
> crashed since then, the following runs are always a little slower though.
>
> On Thu, Dec 3, 2015 at 8:08 AM, Felix Cheung
> wrote:
>
>> How are you running jobs? Do you schedule a notebook to run from Zeppelin?
>>
>>
bmit, it is still a single job there. I think it is
because of using Tez. Does Tez conflict with Zeppelin?
On Dec 3, 2015, at 2:15 AM, Felix Cheung <
felixcheun...@hotmail.com> wrote:
I don't know enough about
I don't know enough about HDP, but there should be a way to check the user
queue in YARN?
A Spark job shouldn't affect a Hive job, though. Have you tried running
spark-shell (--master yarn-client) and a Hive job at the same time?
From: will...@gmail.com
Subject: Re: zeppelin job is running all the time
How are you running jobs? Do you schedule a notebook to run from Zeppelin?
Date: Mon, 30 Nov 2015 12:42:16 +0100
Subject: Spark worker memory not freed up after zeppelin run finishes
From: liska.ja...@gmail.com
To: users@zeppelin.incubator.apache.org
Hey,
I'm connecting Zeppelin with a remote Sp
Hi
Please feel free to open an issue in JIRA or, better, make a contribution!
https://issues.apache.org/jira/browse/ZEPPELIN
On Tue, Nov 24, 2015 at 8:36 AM -0800, "Partridge, Lucas (GE Aviation)"
wrote:
I'd mistakenly downloaded the source version of Zeppelin 0.5.5 rather than the
binary
Hi,
Please see work in progress:
https://github.com/apache/incubator-zeppelin/pull/463
_
From: Oriol López Massaguer
Sent: Tuesday, November 24, 2015 7:57 AM
Subject: Spark 1.6 support
To:
Hi;
I'm interested in using the next Spa
In the short term, if you can't get the latest from the master branch, you
could work around it by having a Spark/Scala paragraph calling z.input and
then passing the returned value to PySpark/Python with z.put and z.get.
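The z.input/z.put/z.get workaround could be sketched as two Zeppelin paragraphs like this (variable and input names are illustrative):

```
%spark
// Scala paragraph: prompt for a value and share it via the ZeppelinContext
val threshold = z.input("threshold", "10").toString
z.put("threshold", threshold)

%pyspark
# Python paragraph: read the shared value back
threshold = int(z.get("threshold"))
print(threshold)
```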
On Mon, Nov 23, 2015 at 12:03 AM -0800, "moon soo Lee" wrote:
Re
Try %hive. We should be updating the text shortly.
On Tue, Nov 10, 2015 at 10:51 PM -0800, "Abhi Basu" <9000r...@gmail.com> wrote:
I have tried using the hive interpreter and changing the JDBC url to the
impala port, but it says %hql not found.
On Tue, Nov 10, 2015 at 7:18 PM, Silvio Fiorito
Thank you for your detailed and thoughtful mail.
As you can see at
https://github.com/apache/incubator-zeppelin/blob/master/spark/src/main/java/org/apache/zeppelin/spark/SparkInterpreter.java#L354,
the env var overrides the property setting, so if the env var is set in
zeppelin-env.sh then the user setting the
I'm not sure you could use that syntax with the Hive metastore. It would be
LOAD DATA LOCAL INPATH, like here:
http://spark.apache.org/docs/latest/sql-programming-guide.html#hive-tables
_
From: moon soo Lee
Sent: Saturday, November 7, 2015 10:12 PM
Subject: Re: sq
Yes, it should work
On Wed, Nov 4, 2015 at 11:09 PM -0800, "Fengdong Yu"
wrote:
Hi Team
Can I changed spark version to 1.5.1 and re-build? does that support
spark-1.5.1 currently?
It is possible there are some version mismatches. As you know, Zeppelin is
built with a version of Spark (and Jackson) - do you know how your copy of
Zeppelin is built?
_
From: Ji, Hao Wei Jeffery
Sent: Tuesday, November 3, 2015 7:38 AM
Subject: some questions
To:
And if you set SPARK_HOME, Zeppelin will launch Spark with spark-submit, and
you can customize it by setting SPARK_SUBMIT_OPTIONS in the environment. Check
spark-submit for options.
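For example, in conf/zeppelin-env.sh (the path and memory values below are illustrative):

```shell
export SPARK_HOME=/opt/spark            # point Zeppelin at your Spark install
export SPARK_SUBMIT_OPTIONS="--driver-memory 4g --executor-memory 4g"
```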
On Sun, Oct 25, 2015 at 6:15 AM -0700, "Jongyoul Lee"
wrote:
Hi,
Zeppelin runs Spark in two different ways
+1
Please see https://issues.apache.org/jira/browse/ZEPPELIN-299
_
From: Vinay Shukla
Sent: Tuesday, October 20, 2015 8:47 AM
Subject: Clear Notebook output
To: ,
Hi Guys,
Is there a way to clear the output of an existing Noteboo
let me know if I could help...
From: m...@apache.org
Date: Tue, 13 Oct 2015 08:20:52 +
Subject: Re: Question on converting pandas dataframe to spark frame
To: users@zeppelin.incubator.apache.org
Hi Bala,
Thanks for sharing the problem. I reproduced the same error; the same code works
in bin
Which version of hadoop is compatible with the zeppelin binary available on the
apache site? On 11 Oct 2015 09:05, "Felix Cheung" <felixcheun...@hotmail.com>
wrote:
Is your Zeppelin built with Hadoop 2.6?
On Sat, Oct 10, 2015
Is your Zeppelin built with Hadoop 2.6?
On Sat, Oct 10, 2015 at 7:35 PM -0700, "Ranveer kumar"
wrote:
Hi All,
I am new to Zeppelin and HDFS. I managed to install Zeppelin and it works fine
while loading data from a local directory. But when I try the same, loading
from HDFS (install locally s
+1
spark.executor.instances
http://spark.apache.org/docs/latest/running-on-yarn.html
Date: Fri, 9 Oct 2015 10:26:08 +0530
From: praag...@gmail.com
To: users@zeppelin.incubator.apache.org
Subject: Re: how to speed up zeppelin spark job?
try spark.executor.instances=N
and t
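When Zeppelin launches Spark via spark-submit, that suggestion can be applied through the environment, e.g. (value illustrative):

```shell
export SPARK_SUBMIT_OPTIONS="--conf spark.executor.instances=8"
```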
Do you have the %spark line in the middle of a notebook "box"? It should be
only at the beginning of a paragraph.
On Tue, Oct 6, 2015 at 6:42 AM -0700, "Alexander Bezzubov"
wrote:
Hi,
it's really hard to say more without looking into the logs of Zeppelin
Server and Spark interpreter in yo
I think he meant
%pyspark
It doesn't support embedded matplotlib plots by itself, but it should work
with a handy helper function to do some conversion. You can follow up with me
directly; I should be able to dig this up in git somewhere.
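A minimal sketch of such a helper, assuming you can obtain the figure's PNG bytes (e.g. via matplotlib's fig.savefig(buf, format="png")); the function name is hypothetical, and %html is Zeppelin's display hook:

```python
import base64

def show_png(png_bytes):
    """Return a %html paragraph output that renders raw PNG bytes inline."""
    encoded = base64.b64encode(png_bytes).decode("ascii")
    return '%html <img src="data:image/png;base64,{}"/>'.format(encoded)
```

Printing the returned string as a paragraph's output makes Zeppelin render the image inline.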
_
From: IT CTO
Sent: Tuesda
lho
wrote:
> Actually, I've done this already, but I'd forgotten the outcome.
>
> I get "pyspark is not responding" message.
>
>
> On Wed, Sep 16, 2015 at 4:01 PM Felix Cheung
> wrote:
>
>> Could you try setting zeppelin.pyspark.python in the interp
Could you try setting zeppelin.pyspark.python in the interpreter setting to the
matching Python 3? "python3" in your example below.
_
From: Paulo Cheadi Haddad Filho
Sent: Wednesday, September 16, 2015 9:21 AM
Subject: Fwd: Zeppelin error when trying to run pysp
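In the Spark interpreter settings this is a single property; as a sketch (the value is illustrative — use the python3 on your PATH or a full path):

```
zeppelin.pyspark.python = python3
```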
There is, stay tuned!
On Wed, Sep 16, 2015 at 9:31 AM -0700, "Sourav Mazumder"
wrote:
Hi,
Is there any plan to integrate Spark R interpreter soon ?
Regards,
Sourav
Could you check under logs/ and include any error there?
On Fri, Sep 11, 2015 at 8:45 AM -0700, "MrAsanjar ." wrote:
hi all,
After moving up to the spark 1.4.1, and rebuilding zeppelin accordingly,
pyspark snippet fails to run the Spark job sc.textFile("/etc/hosts").count(). I
get the following err
- anywhere else to look?
We've independently reproduced this on several machines/environments, all
Ubuntu 14.04.
zeppelin-*.out
zeppelin-*.log
zeppelin-interpreter-spark-*.log
/T
On 11 August 2015 at 08:27, Felix Cheung wrote:
Could you check under the log directory for log files to
Could you check under the log directory for log files to see if there is any
error?
On Mon, Aug 10, 2015 at 1:08 PM -0700, "Exception Badger"
wrote:
Hi all,
We've been using Zeppelin for a little while with CDH clusters and it's
great.
Recently a few of us have tried getting it working o
What are the errors?
Date: Sun, 9 Aug 2015 20:06:13 +
From: djilokui...@yahoo.fr
To: m...@apache.org
CC: users@zeppelin.incubator.apache.org
Subject: Convert Spark dataframe to pandas dataframe in Zeppelin
Hi
How do I convert a Spark dataframe to a pandas dataframe in Zeppelin?
I tried Mydatafram
btw, it should work better in python if you first convert it to Row as in the
example from the documentation
(http://spark.apache.org/docs/latest/sql-programming-guide.html#inferring-the-schema-using-reflection),
and use sqlContext.createDataFrame():
lines = sc.textFile("examples/src/main/resourc
Just a thought, try this instead?
wordcount = sc.textFile("some path to file")
wcDF = wordcount.toDF()
z.show(wcDF)
From: goi@gmail.com
Date: Mon, 20 Jul 2015 08:54:44 +
Subject: Re: Print RDD as table
To: users@zeppelin.incubator.apache.org
Here is the code first is a paragraph in pySpark