Let me know if I can help...
From: m...@apache.org
Date: Tue, 13 Oct 2015 08:20:52 +
Subject: Re: Question on converting pandas dataframe to spark frame
To: users@zeppelin.incubator.apache.org
Hi Bala,
Thanks for sharing the problem. I reproduced the same error; the same code works
in bin/pyspark but not in Zeppelin.
Hi,
I have Zeppelin running in AWS with latest EMR 4.1 and Spark 1.5.
Everything was fine. Then today all attempts to run a paragraph get stuck in
the pending state.
Is there any way to re-initialize the Zeppelin server state ? Thanks.
--
Nick
Good to know.
Post again if you have other problems or questions.
On Oct 14, 2015 12:27 AM, "Stephen Boesch" wrote:
> It was 'my bad': the problem was a mistake of mine with git. I use git
> fetch upstream and then git rebase as the process. This time I just
> neglected to do the second step.
It was 'my bad': the problem was a mistake of mine with git. I use git
fetch upstream and then git rebase as the process. This time I just
neglected to do the second step. Corneau's tip about 0.16 vs 0.23 made me
suspicious of the setup.
thanks for the help.
2015-10-13 5:12 GMT-07:00 Corneau
Thanks for your response. Looking forward to any update.
Thanks
Bala
On 13-Oct-2015 1:51 pm, "moon soo Lee" wrote:
> Hi Bala,
>
> Thanks for sharing the problem.
> I reproduced the same error, the same code works in bin/pyspark but not in
> Zeppelin.
I'm going to take a look and keep you updated here.
We can think of some mechanism to get the instance of InterpreterContext from
any user library. For example, store the instance of InterpreterContext in some
static field every time before running interpret().
If you think it would be useful, please don't hesitate to file an issue.
Best,
moon
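The static-field mechanism moon suggests could be sketched like this (a hypothetical illustration: `ContextHolder` and its methods are invented names, not Zeppelin's actual API, and a plain `Object` stands in for `InterpreterContext` so the snippet is self-contained):

```java
// Sketch of the suggested mechanism: Zeppelin would store the current
// InterpreterContext in a static holder just before each interpret() call,
// so any user library can read it without threading it through every method.
// A ThreadLocal is used here so concurrently running paragraphs would not
// clobber each other's context.
public class ContextHolder {
    private static final ThreadLocal<Object> CURRENT = new ThreadLocal<>();

    // Zeppelin would call this right before running interpret().
    public static void set(Object interpreterContext) {
        CURRENT.set(interpreterContext);
    }

    // User libraries would call this to get the active context.
    public static Object get() {
        return CURRENT.get();
    }
}
```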
On Tue, Oct 13, 2015
Hi moon,
Thanks very much for the workaround. Rather than go through all my methods to
add an extra parameter for the InterpreterContext, I decided to pass it in once
to a new setup() method which then keeps a reference to it for use by the other
methods. It’s not as user-friendly as before be
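That setup() approach might look roughly like the following (a minimal sketch: `MyHelpers`, `setup`, and `currentContext` are invented names, and a plain `Object` stands in for `InterpreterContext` so the snippet compiles on its own):

```java
// Sketch of the workaround Lucas describes: pass the InterpreterContext in
// once via setup(), keep a reference, and let the other helper methods read
// the stored reference instead of taking an extra parameter each.
public class MyHelpers {
    private static Object context; // would be InterpreterContext in Zeppelin

    // Called once per paragraph, e.g. MyHelpers.setup(z.getInterpreterContext())
    public static void setup(Object interpreterContext) {
        context = interpreterContext;
    }

    // Other methods can now use the stored context without an extra parameter.
    public static Object currentContext() {
        return context;
    }
}
```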
Can you try cleaning your ~/.m2 directory?
On Tue, Oct 13, 2015 at 6:35 PM, Stephen Boesch wrote:
> I have made no change to the build: this is a clean install from git
> sources. So the comment about Zeppelin using 0.0.23 vs 0.0.16 .. the
> question is: why is the *build* selecting that version
Hi,
"org.apache.zeppelin" % "zeppelin-interpreter" % "0.5.0-incubating'
Will provide class for InterpreterContext. However ZeppelinContext is
inside of "org.apache.zeppelin" % "spark" and it is not published to public
maven repository.
So, you can add "org.apache.zeppelin" % "zeppelin-interpreter
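In an sbt build, the resolvable part of that dependency would look something like this (a sketch based on the coordinates quoted above):

```scala
// build.sbt fragment (sketch). zeppelin-interpreter provides
// InterpreterContext; ZeppelinContext lives in the "spark" module, which
// is not on a public Maven repository, so it cannot be listed here.
libraryDependencies += "org.apache.zeppelin" % "zeppelin-interpreter" % "0.5.0-incubating"
```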
Hi moon,
Sorry – I should’ve asked this before. How can I make use of
“z.getInterpreterContext().getParagraphId()” in my own Scala code which I’m
building outside of Zeppelin, please?
I’ve added this line to the scala file where I make the calls:
import org.apache.zeppelin.spark.Zeppelin
I have made no change to the build: this is a clean install from git
sources. So the comment about Zeppelin using 0.0.23 vs 0.0.16 .. the
question is: why is the *build* selecting that version. I have made no
customizations.
re: "try again": I have tried to build this a few times before vent
Zeppelin is supposed to use 0.0.23, but your logs are showing 0.0.16
On Oct 13, 2015 6:21 PM, "moon soo Lee" wrote:
> I'm using os/x yosemite and maven 3.3.3, too and i have no problem
> building zeppelin-web module.
>
> Here's related issue
> https://github.com/eirslett/frontend-maven-plugin/iss
I am also using OS X and Maven 3.3.3 and found no issue. Please try again;
sometimes the build fails once, but trying again resolves it.
On Tue, 13 Oct 2015 14:51 moon soo Lee wrote:
> I'm using os/x yosemite and maven 3.3.3, too and i have no problem
> building zeppelin-web module.
>
> Here'
I'm using OS X Yosemite and Maven 3.3.3, too, and I have no problem building
the zeppelin-web module.
Here's related issue
https://github.com/eirslett/frontend-maven-plugin/issues/179. The issue is
resolved, and Zeppelin uses version 0.0.23, which includes the fix.
Can someone also help to try bu
Basically right. More precisely, it depends on how the individual interpreter
you use is implemented. For example, the Hive interpreter shipped in Zeppelin
uses the hive-jdbc driver to connect to HiveServer2. So in this case Zeppelin
will only need to connect to the node where HiveServer2 runs, not the entire
cluster.
OS X Yosemite and Maven 3.3.3
2015-10-13 1:06 GMT-07:00 moon soo Lee :
> Could you share your OS and maven version?
>
> Thanks,
> moon
>
> On Mon, Oct 12, 2015 at 4:39 PM Stephen Boesch wrote:
>
>> I have cloned from git and run:
>>
>> mvn clean package -DskipTests
>>
>> The core and engine
Thanks moon; that’s exactly what I wanted☺. I confirm it works for me in
standalone zeppelin-0.5.0-incubating. I get strings back with date/time stamps
and a unique ID that differs between paragraphs.
Thanks again,
Lucas.
From: moon soo Lee [mailto:m...@apache.org]
Sent: 12 October 2015 17:17
Thanks for sharing your use case.
Then, let's say Zeppelin runs the SparkInterpreter process using spark-submit
in yarn-cluster mode without error. SparkInterpreter then runs inside
an application master process managed by YARN on the cluster, and
ZeppelinServer can get host and port som
OK, then if for instance I want to access HDFS using Hive, the only
thing that I need to do is give permissions to the machine with Zeppelin
(so Zeppelin has permission to access the cluster) and create an
interpreter from Zeppelin, no?
2015-10-13 10:22 GMT+02:00 moon soo Lee :
> Depend
Depends on the version you use.
You'll need to export HADOOP_CONF_DIR if you're using the 0.5.0 version.
If you're on 0.6.0-SNAPSHOT, it's recommended to export SPARK_HOME so that
Zeppelin uses the same configuration as your Spark installation.
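In conf/zeppelin-env.sh that would look something like this (both paths are placeholders for your installation):

```bash
# conf/zeppelin-env.sh (sketch; paths are placeholders)
# On 0.5.0: point Zeppelin at the cluster's Hadoop client configuration
export HADOOP_CONF_DIR=/path/to/hadoop/conf
# On 0.6.0-SNAPSHOT: reuse the configuration of the local Spark installation
export SPARK_HOME=/path/to/spark
```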
Best,
moon
On Tue, Oct 13, 2015 at 10:16 AM Pablo Torre wrote:
Hi Bala,
Thanks for sharing the problem.
I reproduced the same error, the same code works in bin/pyspark but not in
Zeppelin.
I'm going to take a look and keep you updated here.
Thanks,
moon
On Wed, Oct 7, 2015 at 6:17 AM Balachandar R.A.
wrote:
> Hello
>
> I am new to zeppelin. Quite interest
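For context, the usual route for this conversion in a %pyspark paragraph is `sqlContext.createDataFrame` (the Spark 1.5 API). A sketch, with the Spark-side call commented out so the pandas part stands alone:

```python
import pandas as pd

# A small pandas DataFrame to convert.
pdf = pd.DataFrame({"name": ["a", "b"], "value": [1, 2]})

# In a Zeppelin %pyspark paragraph (Spark 1.5; sqlContext is predefined):
# sdf = sqlContext.createDataFrame(pdf)
# sdf.show()

# createDataFrame effectively iterates the pandas rows as tuples:
records = [tuple(row) for row in pdf.itertuples(index=False)]
```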
Moon, I have one question. Since I am going to run Zeppelin on a
different machine, shouldn't I configure the following environment
variable in conf/zeppelin-env.sh?
export HADOOP_CONF_DIR="path to the conf hadoop directory in the cluster"
Thanks!
2015-10-13 10:10 GMT+02:00 Pablo Torre :
> Thanks
Thanks for your help!! I will try it, and I will let you know if it works
for me!!
2015-10-12 23:17 GMT+02:00 moon soo Lee :
> Hi,
>
> Yes, of course. Zeppelin can run on different machine.
> It is recommended to install Spark on the machine that runs Zeppelin and
> point the Spark installation path in
Could you share your OS and maven version?
Thanks,
moon
On Mon, Oct 12, 2015 at 4:39 PM Stephen Boesch wrote:
> I have cloned from git and run:
>
> mvn clean package -DskipTests
>
> The core and engine build but the web Application does not:
>
>
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Zep
Did anyone solve that?
I'm still getting it on Spark 1.5
On Sat, Jun 27, 2015 at 5:03 AM, moon soo Lee wrote:
> That is really strange.
> Could you make sure your core-site.xml does not have anything related to
> Tachyon?
>
>
> On Fri, Jun 26, 2015 at 4:09 PM Udit Mehta wrote:
>
>> could this be r