ler. Let me know if you have any other questions.
>
>
>
> On Wed, Jul 25, 2018 at 10:27 PM, Ankit Jain wrote:
>> Jeff, what you said seems to be in conflict with what is detailed here -
>> https://medium.com/@leemoonsoo/apache-zeppelin-interpreter-mode-explained-bae0525d0555
>>
>> &
practice as well, we see one interpreter process for scoped mode.
Can you please clarify?
Adding Moon too.
Thanks
Ankit
On Tue, Jul 24, 2018 at 11:09 PM, Ankit Jain wrote:
> Aah, that makes sense - so only jobs from the same user will block in the
> FIFOScheduler.
>
> By moving to Parall
> same SparkContext, but they don't share the same FIFOScheduler; each
> SparkInterpreter uses its own FIFOScheduler.
>
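The scheduling behavior discussed here can be sketched with plain Java executors. This is only an illustration, not Zeppelin's actual Scheduler classes: a FIFO scheduler behaves like a single-thread executor (one long job blocks everything queued behind it), while a parallel scheduler behaves like a thread pool.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SchedulerSketch {
    public static void main(String[] args) throws InterruptedException {
        // FIFO-style: one worker thread, so submitted "paragraphs" run strictly
        // in submission order and a long-running job blocks everything behind it.
        ExecutorService fifo = Executors.newSingleThreadExecutor();
        List<Integer> order = Collections.synchronizedList(new ArrayList<>());
        for (int i = 0; i < 5; i++) {
            final int id = i;
            fifo.submit(() -> order.add(id));
        }
        fifo.shutdown();
        fifo.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("fifo order: " + order);

        // Parallel-style: a pool of workers, so jobs from different users can
        // proceed concurrently instead of queueing behind one another.
        ExecutorService parallel = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            parallel.submit(() -> { /* concurrent paragraph */ });
        }
        parallel.shutdown();
        parallel.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("parallel pool finished: " + parallel.isTerminated());
    }
}
```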
> On Wed, Jul 25, 2018 at 12:58 PM, Ankit Jain wrote:
>
>> Thanks for the quick feedback Jeff.
>>
>> Re:1 - I did see Zeppelin-3563 but we are not on .8 yet and
suggest you use scoped per-user mode. Then each user will share the same
> SparkContext, which means you can save resources, while each user gets a
> FIFOScheduler that is isolated from the others.
>
> On Wed, Jul 25, 2018 at 8:14 AM, Ankit Jain wrote:
>
Forgot to mention this is for shared scoped mode, so same Spark application and
context for all users on a single Zeppelin instance.
Thanks
Ankit
> On Jul 24, 2018, at 4:12 PM, Ankit Jain wrote:
Hi,
I am playing around with the execution policy of Spark jobs (and of all
Zeppelin paragraphs, actually).
Looks like there are a couple of control points:
1) Spark scheduling - FIFO vs Fair, as documented in
https://spark.apache.org/docs/2.1.1/job-scheduling.html#fair-scheduler-pools.
Since we are still o
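The Fair Scheduler pools from the Spark docs linked above are enabled by setting spark.scheduler.mode to FAIR (e.g. in the Zeppelin Spark interpreter settings) and, optionally, pointing spark.scheduler.allocation.file at a pool definition file. A minimal sketch; the pool name perUserPool and the weight/minShare values are illustrative assumptions:

```xml
<!-- fairscheduler.xml, referenced via spark.scheduler.allocation.file -->
<allocations>
  <pool name="perUserPool"> <!-- hypothetical pool name -->
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>0</minShare>
  </pool>
</allocations>
```

A paragraph can then be routed into a pool with sc.setLocalProperty("spark.scheduler.pool", "perUserPool") before it triggers a job.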
Add the library to interpreter settings?
On Wed, May 30, 2018 at 11:07 AM, Michael Segel wrote:
> Hi,
>
> Ok… I wanted to include the Apache Commons Compress libraries for use in
> my spark/scala note.
>
> I know I can include it in the first note by telling the interpreter to
> load… but I did
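For reference, besides adding the artifact under the Dependencies section of the interpreter settings, Zeppelin also supports loading an artifact from a note with the %dep interpreter, run before the first %spark paragraph (the version number below is only an example):

```
%dep
z.load("org.apache.commons:commons-compress:1.18")
```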
lin cluster if all files
> are in a shared NFS directory for example, not as active-active.
> One simple example - they would keep overwriting configuration files /
> notebook files etc.
>
>
>
> --
> Ruslan Dautkhanov
>
> On Mon, Apr 30, 2018 at 4:04 PM, ankit jain
>
You can probably deploy Zeppelin on n machines and manage them behind a
LoadBalancer?
Thanks
Ankit
On Mon, Apr 30, 2018 at 6:42 AM, Michael Segel wrote:
> Ok..
> The answer is no.
>
> You have a web interface. It runs on a web server. Does the web server
> run in a cluster?
>
>
> On Apr 30, 2018, a
We are seeing the same PENDING behavior despite running Spark Interpreter
in "Isolated per User" - we expected one SparkContext to be created per
user and indeed did see multiple SparkSubmit processes spun up on Zeppelin
pod.
But why go to PENDING if there are multiple contexts that can be run in
Also, Spark standalone cluster mode should work even before this new
release, right?
On Wed, Mar 14, 2018 at 8:43 AM, ankit jain wrote:
> Hi Jhang,
> Not clear on that - I thought spark-submit happened when we run a
> paragraph; how does the .sh file come into play?
>
> Thanks
It is expected to run the driver on a separate host, but it isn't
> guaranteed that Zeppelin supports this.
>
> On Wed, Mar 14, 2018 at 8:34 AM, Ankit Jain wrote:
Hi Jhang,
What is the expected behavior with standalone cluster mode? Should we see
separate driver processes in the cluster (one per user), or multiple
SparkSubmit processes?
I was trying to dig into the Zeppelin code and didn't see where Zeppelin does
the spark-submit to the cluster. Can you please poi
You can save the notebooks in something like S3 and then copy them to
Zeppelin during restart.
On Tue, Feb 13, 2018 at 9:28 AM, moon soo Lee wrote:
> Hi,
>
> Currently we don't have a way, I think. But it would be really nice to have.
> Especially with https://issues.apache.org/jira/browse/ZEPPEL
Any rough estimate Jeff - within next week or so or by end of Feb?
Thanks
Ankit
On Fri, Feb 2, 2018 at 6:35 AM, Jeff Zhang wrote:
>
> It would be pretty soon, I am preparing the release of 0.8.0
>
>
> On Fri, Feb 2, 2018 at 10:08 PM, wilsonr guevara arevalo wrote:
>
>> Hi,
>>
>> I'm currently working with a ze
need to do is just set SPARK_HOME
> properly in their interpreter setting.
>
>
> On Fri, Feb 2, 2018 at 1:36 PM, Ankit Jain wrote:
This is exactly what we want Jeff! A hook to plug in our own interpreters.
(I am on same team as Jhon btw)
Right now there are too many concrete references and injecting stuff is not
possible.
Eg of customizations -
1) Spark UI which works differently on EMR than standalone, so that logic will
We can handle it the way the notebook-security jar is handled, Jhon.
On Fri, Jan 26, 2018 at 2:21 PM, Jhon Anderson Cardenas Diaz <
jhonderson2...@gmail.com> wrote:
> Hi fellow Zeppelin users,
>
> I would like to create another implementation of
> org.apache.zeppelin.notebook.repo.NotebookRepo interface in
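The real org.apache.zeppelin.notebook.repo.NotebookRepo interface works with Zeppelin's Note and AuthenticationInfo types, so it can't be shown self-contained here. The sketch below uses a simplified stand-in interface just to show the plugin shape; the method signatures are illustrative, not the actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the NotebookRepo contract; the real interface
// deals in Zeppelin's Note/AuthenticationInfo types, not plain Strings.
interface SimpleNotebookRepo {
    void save(String noteId, String serializedNote);
    String get(String noteId);
    void remove(String noteId);
}

// A custom backend would implement the same contract against S3, a
// database, etc.; here an in-memory map stands in for the storage layer.
class InMemoryNotebookRepo implements SimpleNotebookRepo {
    private final Map<String, String> store = new HashMap<>();
    public void save(String noteId, String serializedNote) { store.put(noteId, serializedNote); }
    public String get(String noteId) { return store.get(noteId); }
    public void remove(String noteId) { store.remove(noteId); }
}

public class NotebookRepoSketch {
    public static void main(String[] args) {
        SimpleNotebookRepo repo = new InMemoryNotebookRepo();
        repo.save("2ABC123", "{\"name\":\"my note\"}");
        System.out.println(repo.get("2ABC123"));
    }
}
```

Zeppelin selects the active implementation via the zeppelin.notebook.storage property in zeppelin-site.xml, which is how a custom class would be wired in.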
Don't think that works; it just loads a blank page.
On Wed, Jan 24, 2018 at 11:06 PM, Jeff Zhang wrote:
>
> But if you don't set it in interpreter setting, it would get spark ui url
> dynamically.
>
>
>
> On Thu, Jan 25, 2018 at 3:03 PM, ankit jain wrote:
>
>> That
> url.
>
> https://github.com/apache/zeppelin/blob/master/spark/src/main/java/org/apache/zeppelin/spark/SparkInterpreter.java#L940
>
>
> On Thu, Jan 25, 2018 at 2:55 PM, ankit jain wrote:
>
>> The issue with the Spark UI when running on AWS EMR is that it requires SSH
>> tunneling to be setu
g, you can file a ticket to fix it. Although you can make a custom
> interpreter by extending the current Spark interpreter, it is not trivial
> work.
>
>
> On Thu, Jan 25, 2018 at 8:07 AM, ankit jain wrote:
Hi fellow Zeppelin users,
Has anyone tried to write a custom Spark interpreter, perhaps extending the one
that currently ships with Zeppelin
(spark/src/main/java/org/apache/zeppelin/spark/SparkInterpreter.java)?
We are coming across cases where we need the interpreter to do "more", e.g.
chang
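The extension point being discussed would look roughly like the sketch below. Since compiling against Zeppelin's real Interpreter/InterpreterResult classes needs Zeppelin on the classpath, simplified stand-in types are used here; class and method names are illustrative, and the ELB URL is a placeholder:

```java
// Simplified stand-ins for Zeppelin's interpreter types (not the real API).
abstract class BaseInterpreter {
    abstract String interpret(String paragraphText);
}

// Stands in for the stock SparkInterpreter that ships with Zeppelin.
class StockSparkInterpreter extends BaseInterpreter {
    String interpret(String paragraphText) {
        return "ran: " + paragraphText;
    }
}

// A custom subclass delegates to the stock behavior, then adds
// environment-specific logic, e.g. rewriting the Spark UI link for EMR.
class EmrSparkInterpreter extends StockSparkInterpreter {
    @Override
    String interpret(String paragraphText) {
        String result = super.interpret(paragraphText);
        return result + " [ui: https://elb.example.com/sparkui]"; // hypothetical ELB URL
    }
}

public class CustomInterpreterSketch {
    public static void main(String[] args) {
        BaseInterpreter interpreter = new EmrSparkInterpreter();
        System.out.println(interpreter.interpret("sc.version"));
    }
}
```

As Jeff notes elsewhere in the thread, doing this against the real SparkInterpreter is not trivial, because the current code has many concrete references rather than injection points.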
Hi Zeppelin users,
We are following https://issues.apache.org/jira/browse/ZEPPELIN-2949 to
launch the Spark UI.
Our Zeppelin instance is deployed on the AWS EMR master node, and we set
zeppelin.spark.uiWebUrl to a URL that the ELB maps to https://masternode:4040.
When a user clicks on the Spark URL within Zeppel
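For reference, the setup being described is a single interpreter-setting property; the host name below is a placeholder for the ELB endpoint:

```
# Zeppelin UI -> Interpreter -> spark -> Properties
zeppelin.spark.uiWebUrl = https://zeppelin-elb.example.com/sparkui
```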