Hi
Do you know how I can change the folder path where the interpreters are
executed?
The reason I want to change that default location (which is $ZEPPELIN_HOME)
is that we are getting very large core dump files in that location when the
interpreter process dies.
As we are in a k8s ecosystem…
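For what it's worth, on Linux the core dump location follows the kernel's
core_pattern setting rather than any Zeppelin property (the default relative
pattern is what makes dumps land in the process working directory, i.e.
$ZEPPELIN_HOME). One OS-level workaround, assuming you control the node or
container settings, is:

# redirect dumps to a fixed absolute path (the path here is illustrative)
sysctl -w kernel.core_pattern=/var/coredumps/core.%e.%p
# or disable dumps entirely in the interpreter launch environment:
ulimit -c 0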
> …configuration files and notebook files will be stored in
> the same storage layer.
>
>
> Jhon Anderson Cardenas Diaz wrote on Fri, Oct 5, 2018 at 6:21 AM:
>
> > Also maybe the *helium configuration* should be included in the
> > ConfigStorage component, in order to be persisted like other
> > configurations.
Also maybe the *helium configuration* should be included in the
ConfigStorage component, in order to be persisted like other
configurations.
On Thu, Oct 4, 2018 at 16:44, Jhon Anderson Cardenas Diaz
(<jhonderson2...@gmail.com>) wrote:
> Hi!
>
> Currently in the ConfigStorage…
Hi!
Currently in the ConfigStorage component there are methods to persist and
retrieve the credentials of the zeppelin users:
public abstract String loadCredentials() throws IOException;
public abstract void saveCredentials(String credentials) throws IOException;
But those methods are not being…
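For illustration, a minimal sketch of what an implementation of those two
hooks could look like, backed by a plain file; the class name and path below
are hypothetical, not Zeppelin's actual storage implementation:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical file-backed implementation of the two hooks quoted above.
public class FileCredentialsStorage {
  private final Path store = Paths.get("/data/zeppelin/credentials.json");

  public String loadCredentials() throws IOException {
    // read the whole credentials blob back as UTF-8 text
    return new String(Files.readAllBytes(store), StandardCharsets.UTF_8);
  }

  public void saveCredentials(String credentials) throws IOException {
    // overwrite the stored blob with the serialized credentials
    Files.write(store, credentials.getBytes(StandardCharsets.UTF_8));
  }
}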
…at 20:07, Jeff Zhang () wrote:
>
> This is the first time I have seen a user report this issue. What interpreter
> do you use? Is it easy to reproduce?
>
>
> Jhon Anderson Cardenas Diaz wrote on Fri, Aug 3, 2018 at 12:34 AM:
>
>> Hi!
>>
>> Has someone else experienced…
Hi!
Has someone else experienced this problem?
Sometimes *when a paragraph is executed it shows random output from another
notebook* (from other users as well).
We are using zeppelin 0.7.3, and Spark and all the other interpreters are
configured in "Per User - Scoped" mode.
Regards.
Hi!
Is there any way to configure a custom Spark UI URL in the new Spark
interpreter implementation?
That feature was introduced in
https://issues.apache.org/jira/browse/ZEPPELIN-2949, and it works in the
old Spark interpreter but not in the new one.
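If it helps as a data point, the property that ZEPPELIN-2949 added is set in
the Spark interpreter settings; the value below is only an illustration, and
whether the new interpreter reads the same property is presumably the bug:

zeppelin.spark.uiWebUrl = https://zeppelin.example.com/spark-ui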
Regards.
Hi!
Right now the Zeppelin startup time depends directly on the time it takes
to load the notebooks from the repository. If the user has a lot of
notebooks (e.g., more than 1000), startup begins to take too long.
Is there any plan to re-implement this notebook loading so that it is
done…
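As a sketch of the idea only (the names are hypothetical, not Zeppelin's
actual NotebookRepo API), loading could be deferred until a note is first
requested, so startup only needs the list of note ids:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative lazy cache: a note's content is read from the repository
// the first time it is requested instead of during startup.
public class LazyNoteCache {
  private final Map<String, String> loaded = new ConcurrentHashMap<>();
  private final Function<String, String> repoLoader; // noteId -> note content

  public LazyNoteCache(Function<String, String> repoLoader) {
    this.repoLoader = repoLoader;
  }

  public String getNote(String noteId) {
    // computeIfAbsent hits the repository only on the first access
    return loaded.computeIfAbsent(noteId, repoLoader);
  }
}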
> 1. Use per user scoped mode so that each user owns his own python process
> 2. Use IPySparkInterpreter of zeppelin 0.8, which is better for integrating
> python with zeppelin.
>
>
>
> Jhon Anderson Cardenas Diaz wrote on Wed, Jun 13, 2018 at 6:15 AM:
>
> > Hi!
> >
…be cancelled:

context.py:

# create a signal handler which would be invoked on receiving SIGINT
def signal_handler(signal, frame):
    self.cancelAllJobs()
    raise KeyboardInterrupt()
Is this a zeppelin bug?
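For what it's worth, Spark's own mechanism for scoping cancellation is job
groups; a minimal sketch against the Java API (the group ids are made up)
that cancels one user's jobs without touching anyone else's:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class JobGroupSketch {
  public static void main(String[] args) {
    JavaSparkContext sc = new JavaSparkContext(
        new SparkConf().setAppName("job-group-sketch").setMaster("local[2]"));
    // Tag everything submitted on this thread with a per-user group id.
    sc.setJobGroup("user-alice", "alice's paragraph", true);
    // ... alice's job would run here ...
    // Cancelling by group aborts only alice's jobs, unlike the
    // cancelAllJobs() call in the handler above.
    sc.cancelJobGroup("user-alice");
    sc.stop();
  }
}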
Thank you.
2018-06-12 17:12 GMT-05:00 Jhon Anderson Cardenas Diaz <
jhonderson2...@gmail.com>:
# create a signal handler which would be invoked on receiving SIGINT
def signal_handler(signal, frame):
    self.cancelAllJobs()
    raise KeyboardInterrupt()
2018-06-12 9:26 GMT-05:00 Jhon Anderson Cardenas Diaz <
jhonderson2...@gmail.com>:
> Hi!
> I have the 0.8.0 version, from September 2017.
>
> 2018-06-12 4:48 GMT-05:00 Jianfeng (Jeff) Zhang:
Hi!
I have the 0.8.0 version, from September 2017.
2018-06-12 4:48 GMT-05:00 Jianfeng (Jeff) Zhang:
>
> Which version do you use?
>
>
> Best Regard,
> Jeff Zhang
>
>
> From: Jhon Anderson Cardenas Diaz <jhonderson2...@gmail.com>
> Reply-To: …
Dear community,
Currently we are having problems with multiple users running paragraphs
associated with pyspark jobs.
The problem is that if a user aborts/cancels his pyspark paragraph (job),
the active pyspark jobs of the other users are canceled too.
Going into detail, I've seen that when you…
…would think that is another security issue of this approach. What do you
think about it?
2018-05-09 12:53 GMT-05:00 Jhon Anderson Cardenas Diaz <
jhonderson2...@gmail.com>:
>
> -- Forwarded message -
> From: Sam Nicholson
> Date: Wed, May 9, 2018, 12:04
>
>> …ystem,
>> but zeppelin web can access the zeppelin executable. So, don't put this
>> up for untrusted users!!!
>>
>> Here is my zeppelin start script:
>> #!/bin/sh
>>
>> cd /var/www/zeppelin/home
>>
>> sudo -u zeppelin /opt/apache/zeppelin/zep…
Dear Zeppelin Community,
Currently when a Zeppelin paragraph is executed, the code in it can read
sensitive config files and change them, including web app pages, etc. For
example:
%python
f = open("/usr/zeppelin/conf/credentials.json", "r")
f.read()
Do you know if there is a way to con…
Hi!
I am trying to implement a filter inside zeppelin in order to intercept the
requests and collect metrics about zeppelin performance. I registered the
javax servlet filter in the zeppelin-web/src/WEB-INF/web.xml, and the
filter works well for REST requests, but it does not intercept the
Web…
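For comparison, a minimal sketch of this kind of timing filter (the logging
is a placeholder); the servlet container only applies it to HTTP traffic, so
websocket frames bypass it entirely:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// Times each HTTP request that passes through the filter chain.
public class TimingFilter implements Filter {
  @Override public void init(FilterConfig config) {}
  @Override public void destroy() {}

  @Override
  public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
      throws IOException, ServletException {
    long start = System.nanoTime();
    try {
      chain.doFilter(req, res); // let the request proceed
    } finally {
      long elapsedMs = (System.nanoTime() - start) / 1_000_000;
      System.out.println("request took " + elapsedMs + " ms");
    }
  }
}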
Jhon Anderson Cardenas Diaz created ZEPPELIN-3419:
-------------------------------------------------------
Summary: Potential dependency conflict when the version of a
dependency is changed on zeppelin interpreters
Key: ZEPPELIN-3419
URL: https…
> September so not sure if you have that.
>
> Check out
> https://medium.com/@zjffdu/zeppelin-0-8-0-new-features-ea53e8810235 how
> to set this up.
>
>
>
> --
> Ruslan Dautkhanov
>
> On Tue, Mar 13, 2018 at 5:24 PM, Jhon Anderson Cardenas Diaz <
> jhonderson2…
Hi zeppelin users!
I am working with zeppelin pointing to a Spark standalone cluster. I am
trying to figure out a way to make zeppelin run the Spark driver outside of
the client process that submits the application.
According to the documentation (
http://spark.apache.org/docs/2.1.1/spark-standalone.h…
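For context, with a standalone master Spark itself only places the driver on
a worker when the application is submitted in cluster deploy mode; whether
Zeppelin's interpreter honors this setting is exactly the open question:

# Spark-side setting, not a Zeppelin one
spark.submit.deployMode=cluster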
Hi fellow Zeppelin users.
I would like to know if there is a way in zeppelin to set interpreter
properties that cannot be changed by the user from the graphical interface.
An example use case in which this can be useful: we do not want zeppelin
users to be able to kill jobs from the spark ui; for thi…
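For reference, the Spark-side switch this likely refers to (an assumption,
since the message is cut off) is a real Spark property; the missing piece is
a way to lock it in Zeppelin so users cannot change it back:

# disables the kill links in the Spark UI
spark.ui.killEnabled=false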
>>> …multiple Spark UIs and on top
>>> of that maintaining the security and privacy in a shared multi-tenant env
>>> will need all the flexibility we can get!
>>>
>>> Thanks
>>> Ankit
>>>
>>> On Feb 1, 2018, at 7:51 PM, Jeff Zhang wrote:
Hello!
I'm a software developer, and as part of a project I need to extend the
functionality of SparkInterpreter without modifying it. Instead, I need to
create a new interpreter that extends it or wraps its functionality.
I also need the spark sub-interpreters to use my new custom interpreter,
but…
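A minimal sketch of the wrapper approach; the constructor and method
signatures follow the 0.8-line Interpreter contract and are assumptions to
verify against the Zeppelin version in use:

import java.util.Properties;
import org.apache.zeppelin.interpreter.InterpreterContext;
import org.apache.zeppelin.interpreter.InterpreterException;
import org.apache.zeppelin.interpreter.InterpreterResult;
import org.apache.zeppelin.spark.SparkInterpreter;

// Extends the stock SparkInterpreter and decorates interpret().
public class CustomSparkInterpreter extends SparkInterpreter {

  public CustomSparkInterpreter(Properties properties) {
    super(properties);
  }

  @Override
  public InterpreterResult interpret(String code, InterpreterContext context)
      throws InterpreterException {
    // custom pre-processing (auditing, rewriting the snippet, ...) goes here
    InterpreterResult result = super.interpret(code, context);
    // custom post-processing (metrics, output filtering, ...) goes here
    return result;
  }
}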