I’ve done this previously; my answer is here:
https://stackoverflow.com/a/44238195/1335793
I was using an older version at the time, though.
Did you try appending a catalog, or a catalog and schema, to the URL? The
error message suggests the problem is with the “default.url” in the
interpreter settings.
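(If you’re on Presto, for example, the URL takes the shape
jdbc:presto://<host>:<port>/<catalog>/<schema>; the host, port, catalog and
schema here are placeholders, but that is the form default.url would need.)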
On another note, AW
This should be implemented as a DAG that is defined sequentially by default:
additional paragraphs are appended to the DAG, and reordering paragraphs
reorders the DAG. Implementing it as a DAG will make adding future
functionality easier. Later you can add the functionality to rearrange
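To make that concrete, here is a minimal sketch of the idea in Scala. The
Paragraph and NoteDag types are hypothetical, not Zeppelin's internal model;
the point is just that sequential order is the default edge structure.

case class Paragraph(id: String, code: String)

class NoteDag {
  private var paragraphs = Vector.empty[Paragraph]

  // Appending a paragraph implicitly adds an edge from the previous one.
  def append(p: Paragraph): Unit = paragraphs :+= p

  // Reordering paragraphs rebuilds the implicit sequential edges.
  def reorder(ids: Seq[String]): Unit =
    paragraphs = ids.flatMap(id => paragraphs.find(_.id == id)).toVector

  // Derived edges: each paragraph depends on its predecessor.
  def edges: Seq[(String, String)] =
    paragraphs.sliding(2).collect { case Vector(a, b) => (a.id, b.id) }.toSeq
}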
From: Cyril Scetbon [mailto:cyril.scet...@free.fr]
Sent: Thursday, 5 October 2017 8:14 AM
To: David Howell
Cc: users@zeppelin.apache.org
Subject: Re: python.docker interpreter not working
Oh thanks David, interesting, however that’s super counterintuitive... Then
how do you manage to use one
Hi Keiji,
In the paragraph you would write:
%sh
spark-submit myapp.jar ...
The %sh interpreter is a shell, and runs as the zeppelin service user with
whatever permissions that user has. You can run any shell commands in it.
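(A quick way to check: a paragraph containing just %sh and whoami will print
the OS user your commands actually run as.)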
That said, this is a fairly strange way to run zeppelin, so I’m not really sure
My experience is that to save to S3, you don't need to press that button at
all. Just executing any paragraph seems to be enough, and it doesn't matter
which interpreter. I often run a markdown paragraph if I want to save spark
code that I am not ready to execute.
That button is more of a workbook relo
-----Original Message-----
From: David Howell
Sent: Tuesday, 27 June 2017 4:44 PM
To: users@zeppelin.apache.org; us...@zeppelin.incubator.apache.org
Subject: RE: InvalidClassException using Zeppelin (master) and spark-2.1 on a
standalone spark cluster
Hi Jeff,
The ticket says it is fixed from Zeppelin 0.
On 6/27/17, 12:46 PM, "David Howell" wrote:
Hi,
I know this issue is resolved for reading from json, and tested for that use
case, but I'm seeing the exact same error message when writing to json.
java.io.InvalidClassException: org.apache.commons.lang3.time.FastDateParser;
local class incompatible: stream classdesc serialVersionUID = 2, lo
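When I've hit this before, it helped to check which jar the class is actually
loaded from and what the local serialVersionUID is, then compare that against
the one in the stream. A quick diagnostic sketch (plain JDK reflection, run in
a %spark paragraph):

import java.io.ObjectStreamClass
import org.apache.commons.lang3.time.FastDateParser

// Which jar the classloader resolved FastDateParser from:
println(classOf[FastDateParser].getProtectionDomain.getCodeSource.getLocation)
// The local serialVersionUID, to compare with the stream's classdesc:
println(ObjectStreamClass.lookup(classOf[FastDateParser]).getSerialVersionUID)

If driver and executors resolve the class from different jars (e.g.
conflicting commons-lang3 versions on the classpath), that mismatch is
exactly what produces this exception.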
Does spark really need it? What am I supposed to do?
2017-05-02 15:16 GMT+02:00 David Howell <david.how...@zipmoney.com.au>:
Hi Serega,
I see this in the error log “error: ';' expected but ',' found.”
Are you running the %sql in the same paragraph as the %spark? I don’t think
that is supported. I think you have to move the %sql to a new paragraph; you
can then run the spark and the sql separately.
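Something like this, in two paragraphs (the table name "events" is just a
placeholder, and I'm assuming Spark 2.x; on 1.6 it would be
registerTempTable):

%spark
df.createOrReplaceTempView("events")

%sql
select * from events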
From: Serega
:0.4.1
import org.apache.spark.sql.SQLContext
val sqlContext = new SQLContext(sc)
val df = sqlContext.read
  .format("com.databricks.spark.xml")
  …
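For reference, the elided part of that chain is typically a rowTag option and
a load path; with placeholder values it would look like:

val df = sqlContext.read
  .format("com.databricks.spark.xml")
  .option("rowTag", "record") // placeholder row element name
  .load("/path/to/input.xml") // placeholder path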
David Howell
Data Engineering
+61 477 150 379