I think this is a simple question. Appreciate any confirmation or pointers.
On Thu, Mar 21, 2019 at 10:32 PM Lian Jiang wrote:
> Thanks. I saw https://issues.apache.org/jira/browse/ZEPPELIN-3614 is
> still open, which makes me think Zeppelin does not support offline Helium.
> However, be…
Hi,
I am using Hortonworks HDP 3.0, which has Zeppelin 0.8.0. I followed
https://zeppelin.apache.org/docs/0.8.0/development/helium/writing_visualization_basic.html
to install Helium viz packages. Since my environment does not have internet
access, I have to install packages into a local registry. Here…
Hi,
I am trying to use the Oracle JDBC driver to read an Oracle database table.
I have added the property below in custom zeppelin-env:
SPARK_SUBMIT_OPTIONS="--jars /my/path/to/ojdbc8.jar"
But
val df = spark.read.format("jdbc").option("url", "jdbc:oracle:thin:@
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.9.44.9…
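The `spark.read` snippet above is cut off; for reference, a minimal pyspark sketch of the same kind of read. The helper `oracle_tns_url`, the host, port, service name, table, and credentials are all placeholders, not values from this thread, and the driver class name assumes ojdbc8.

```python
# Sketch: assemble an Oracle thin-driver JDBC URL from a TNS-style descriptor.
def oracle_tns_url(host, port, service_name):
    return (
        "jdbc:oracle:thin:@(DESCRIPTION="
        "(ADDRESS=(PROTOCOL=TCP)(HOST={h})(PORT={p}))"
        "(CONNECT_DATA=(SERVICE_NAME={s})))"
    ).format(h=host, p=port, s=service_name)

url = oracle_tns_url("db.example.com", 1521, "ORCLPDB1")

# With SPARK_SUBMIT_OPTIONS="--jars /my/path/to/ojdbc8.jar" in zeppelin-env,
# the driver jar is on the classpath and the read would look like:
# df = (spark.read.format("jdbc")
#       .option("url", url)
#       .option("dbtable", "MY_TABLE")
#       .option("user", "my_user")
#       .option("password", "my_password")
#       .option("driver", "oracle.jdbc.driver.OracleDriver")
#       .load())
```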
Hi,
Our HDP 3.0 Hadoop cluster is in an environment that has no internet access.
How can I install Helium packages offline? Thanks for any hint.
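One approach that may work for an air-gapped cluster (an assumption, since ZEPPELIN-3614 suggests offline support is incomplete) is to point `zeppelin.helium.registry` at a local directory of package descriptors in zeppelin-site.xml. The path below is an example, and the npm tarball referenced by each descriptor must also be reachable locally:

```xml
<!-- Sketch for conf/zeppelin-site.xml: replace the default S3 registry with a
     local directory containing one <package>.json descriptor per package. -->
<property>
  <name>zeppelin.helium.registry</name>
  <value>helium,/usr/hdp/current/zeppelin-server/helium</value>
</property>
```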
https://issues.apache.org/jira/browse/KNOX-1420
>
> Lian Jiang wrote on Sat, Sep 1, 2018 at 12:03 AM:
>
>> Jeff,
>>
>> This issue does not happen when I ssh to the zeppelin host and use
>> zeppelin via port forwarding. It happens when I use zeppelin via knox. Let
>> me know any other info you need. Thanks.
>>
>
…manually first to verify what's wrong.
>
> Lian Jiang wrote on Thu, Aug 30, 2018 at 10:34 AM:
>
>> Jeff,
>>
>> R is installed on the namenode and all data nodes. The R packages have been
>> copied to them all too. I am not sure if an R script launched by pyspark's…
b@https://mydomain.com/gateway/ui/zeppelin/scripts/vendor.49d751b0c72342f6.js:76:3748
a/n.prototype._onMessageHandler@https://mydomain.com/gateway/ui/zeppelin/scripts/vendor.49d751b0c72342f6.js:76:3960
R/<@https://mydomain.com/gateway/ui/zeppelin/scripts/vendor.49d751b0c72342f6.js:36:56
I am using HDP 3.0 (which has Zeppelin 0.8.0).
The queries below do not show any result, even though downloading the CSV
shows the data correctly (e.g. if there are no tables, it shows the header).
%livy2.sql
show tables
%spark2.sql
show tables
Infrequently I saw the table show for a short time and then disappear…
…Spark driver could be
> launched on any node of this cluster.
>
> Lian Jiang wrote on Thu, Aug 30, 2018 at 1:46 AM:
>
>> After calling a sample R script, we found another issue when running a
>> real R script. This R script failed to load changepoint library.
>>
>> I tried:
>>
The error: Error in library(changepoint) : there is no package called
‘changepoint’
test.r is simply:
library(changepoint)
Any idea how to make changepoint available for the R script? Thanks.
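Since the Spark driver and executors can land on any node, `changepoint` has to be installed into an R library path visible on every node (namenode and all data nodes, as mentioned above). A hedged sketch of the per-host install command one might run from a deploy script; the CRAN mirror URL is a placeholder:

```python
# Build the Rscript invocation that installs an R package on a host.
# Returned as an argv list so no shell quoting is needed.
def r_install_command(package, repo="https://cran.example.com"):
    expr = 'install.packages("{p}", repos="{r}")'.format(p=package, r=repo)
    return ["Rscript", "-e", expr]

cmd = r_install_command("changepoint")
# Run with: subprocess.run(cmd, check=True) on every node of the cluster.
```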
On Tue, Aug 28, 2018 at 10:07 PM Lian Jiang wrote:
> Thanks Jeff.
>
> This worked:
…the driver may run on any node of this cluster.
>
> Lian Jiang wrote on Wed, Aug 29, 2018 at 1:35 AM:
>
>> Thanks Lucas. We tried and got the same error. Below is the code:
>>
>> %livy2.pyspark
>> import subprocess
>> sc.addFile("hdfs:///user/zeppelin/test.r")
…at 1:13 AM Partridge, Lucas (GE Aviation) <lucas.partri...@ge.com> wrote:
> Have you tried SparkContext.addFile() (not addPyFile()) to add your R
> script?
>
>
> https://spark.apache.org/docs/2.2.0/api/python/pyspark.html#pyspark.SparkContext.addFile
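A hedged sketch of Lucas's suggestion, assuming a live SparkContext `sc` and a script already uploaded to HDFS; `run_r_script` and `rscript_command` are hypothetical helpers for illustration, not Zeppelin or Livy API:

```python
import subprocess

def rscript_command(local_path):
    # argv for invoking an R script; a list avoids shell-quoting issues
    return ["Rscript", local_path]

def run_r_script(sc, hdfs_path):
    # sc.addFile() ships the file to the driver and every executor;
    # SparkFiles.get() resolves its local path on whichever node runs this.
    from pyspark import SparkFiles
    sc.addFile(hdfs_path)  # e.g. "hdfs:///user/zeppelin/test.r"
    local = SparkFiles.get(hdfs_path.rsplit("/", 1)[-1])
    return subprocess.run(rscript_command(local), capture_output=True, text=True)
```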
Hi,
We are using HDP 3.0 (with Zeppelin 0.8.0) and are migrating Jupyter
notebooks to Zeppelin. One issue we came across is that a Python script
calling an R script does not work in Zeppelin.
%livy2.pyspark
import os
sc.addPyFile("hdfs:///user/zeppelin/my.py")
import my
my.test()
my.test() calls R…
…instead of a local file.
>
> Lian Jiang wrote on Thu, Aug 23, 2018 at 7:02 AM:
Hi,
I am using HDP 3.0 (Zeppelin 0.8.0) and my notebook using the livy2.pyspark
interpreter frequently crashes the Livy session (the RPC channel is stopped).
The YARN log says:
18/08/22 22:39:47 ERROR ApplicationMaster: RECEIVED SIGNAL TERM
18/08/22 22:39:47 INFO SparkContext: Invoking stop() from shutdown hook
Problem solved. Thanks.
On Sun, Aug 19, 2018 at 9:20 AM Lian Jiang wrote:
> Hi,
>
> I am using HDP 3.0 and trying to create a Zeppelin (0.8.0) interpreter
> setting by using the Zeppelin REST API:
>
> https://zeppelin.apache.org/docs/0.7.0/rest-api/rest-interpreter.html#create-a-new-interpreter-setting
>
> I copied the JSON from the example of "creating a new interpreter setting"…
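For reference, a sketch of such a call, assuming Zeppelin at a placeholder URL. The payload shape mirrors the docs example with illustrative values; note that 0.8 introduced typed interpreter properties, so each property may need to be an object with name/value/type rather than a bare string, which is worth trying if the 0.7-doc example is rejected.

```python
import json
from urllib import request

ZEPPELIN_URL = "http://localhost:8080"  # placeholder: adjust for your gateway

# Illustrative payload for POST /api/interpreter/setting (markdown example).
payload = {
    "name": "md3",
    "group": "md",
    "properties": {
        "propname": {"name": "propname", "value": "propvalue", "type": "textarea"},
    },
    "interpreterGroup": [
        {"class": "org.apache.zeppelin.markdown.Markdown", "name": "md"},
    ],
    "option": {"remote": True, "perNote": "shared", "perUser": "shared"},
}

def create_interpreter_setting(payload):
    # Send the setting as JSON; returns the HTTP response object.
    req = request.Request(
        ZEPPELIN_URL + "/api/interpreter/setting",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return request.urlopen(req)
```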
Hi,
How can I set shiro_ini_content using an Ambari blueprint?
This post raised the same question but it was not answered:
https://community.hortonworks.com/questions/150979/how-can-the-zeppelin-shiro-ini-content-property-be.html
I tried several ways but none worked.
Appreciate any clue!
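For what it's worth, a sketch of the blueprint fragment that should carry the property, assuming the HDP config type is `zeppelin-shiro-ini`; both the type name and the ini content below are assumptions, best verified against `/api/v1/clusters/<name>/configurations` on a working cluster:

```json
{
  "configurations": [
    {
      "zeppelin-shiro-ini": {
        "properties": {
          "shiro_ini_content": "[users]\nadmin = password1, admin\n[roles]\nadmin = *\n[urls]\n/** = authc"
        }
      }
    }
  ]
}
```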