ler. Let me know if you have any other questions.
>
> Ankit Jain wrote on Wednesday, July 25, 2018 at 10:27 PM:
>> Jeff, what you said seems to be in conflict with what is detailed here -
>> https://medium.com/@leemoonsoo/apache-zeppelin-interpreter-mode-explained-bae0525d0555
>>
>> & in practice as well we see one Interpreter process for scoped mode.
>> Can you please clarify?
>> Adding Moon too.
>> Thanks,
>> Ankit
On Tue, Jul 24, 2018 at 11:09 PM, Ankit Jain wrote:
> Aah, that makes sense - so only jobs from the same user will block in the
> FIFOScheduler.
>
> By moving to ParallelScheduler
> same SparkContext, but they don't share the same FIFOScheduler; each
> SparkInterpreter uses its own FIFOScheduler.
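As a rough illustration of the scheduler difference discussed above (plain Python, hypothetical names - these are not Zeppelin's actual classes), a single-worker pool behaves like a FIFOScheduler, while a multi-worker pool behaves like a ParallelScheduler:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch only, not Zeppelin code: a one-worker pool serializes
# jobs the way a FIFOScheduler does, so every paragraph waits for the one
# before it; a multi-worker pool lets independent paragraphs overlap.
fifo_like = ThreadPoolExecutor(max_workers=1)      # FIFOScheduler analogue
parallel_like = ThreadPoolExecutor(max_workers=4)  # ParallelScheduler analogue

order = []
futures = [fifo_like.submit(order.append, i) for i in range(5)]
for f in futures:
    f.result()  # wait for each job to finish
print(order)  # single worker preserves submission order: [0, 1, 2, 3, 4]
```

With `parallel_like` the completion order would not be guaranteed, which is exactly why a single shared FIFO queue makes one user's long-running paragraph block everyone queued behind it.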
>
> Ankit Jain wrote on Wednesday, July 25, 2018 at 12:58 PM:
>
>> Thanks for the quick feedback Jeff.
>>
>> Re: 1 - I did see Zeppelin-3563, but we are not on 0.8 yet and
suggest you use scoped per-user mode. Then each user will share the same
> SparkContext, which means you can save resources, and each user also has
> their own FIFOScheduler, isolated from the others.
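To make "shared SparkContext, isolated schedulers" concrete, here is a toy sketch (plain Python; `shared_context`, `user_queues`, and `submit` are hypothetical names, nothing Zeppelin-specific): all users see one context object, but each user gets a private single-worker queue, so their jobs serialize among themselves without blocking other users:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch of scoped per-user mode: one shared context for
# everyone, one private FIFO queue (single-worker pool) per user.
shared_context = {"app_id": "one-sparkcontext-for-everyone"}
user_queues = {}

def submit(user, job):
    # Lazily create the user's queue; same-user jobs run one after another,
    # while different users' jobs never wait on each other's queue.
    pool = user_queues.setdefault(user, ThreadPoolExecutor(max_workers=1))
    return pool.submit(job, shared_context)

a = submit("alice", lambda ctx: ctx["app_id"]).result()
b = submit("bob", lambda ctx: ctx["app_id"]).result()
print(a == b)  # both users ran against the same shared context
```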
>
> Ankit Jain wrote on Wednesday, July 25, 2018 at 8:14 AM:
>
Forgot to mention this is for shared scoped mode, so the same Spark application
and context are used for all users on a single Zeppelin instance.
Thanks
Ankit
> On Jul 24, 2018, at 4:12 PM, Ankit Jain wrote:
Hi,
I am playing around with the execution policy of Spark jobs (and of all
Zeppelin paragraphs, actually).
Looks like there are a couple of control points:
1) Spark scheduling - FIFO vs. Fair, as documented in
https://spark.apache.org/docs/2.1.1/job-scheduling.html#fair-scheduler-pools.
Since we are still o
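For reference, wiring up the fair-scheduler pools from point 1 takes an allocation file plus a couple of properties; a minimal sketch based on the Spark docs linked above (the pool name `notebook` and the values are made up):

```xml
<!-- conf/fairscheduler.xml (sketch; pool name and values are examples) -->
<allocations>
  <pool name="notebook">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>2</minShare>
  </pool>
</allocations>
```

Enable it with `spark.scheduler.mode=FAIR` (plus `spark.scheduler.allocation.file` if the XML lives in a non-default location), and select the pool for a paragraph's jobs with `sc.setLocalProperty("spark.scheduler.pool", "notebook")`.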
Also, Spark standalone cluster mode should work even before this new
release, right?
On Wed, Mar 14, 2018 at 8:43 AM, ankit jain wrote:
> Hi Jhang,
> Not clear on that - I thought spark-submit was done when we run a
> paragraph; how does the .sh file come into play?
>
> Thanks
> The driver is expected to run on a separate host, but it isn't
> guaranteed that Zeppelin supports this.
>
> Ankit Jain wrote on Wednesday, March 14, 2018 at 8:34 AM:
>
Hi Jhang,
What is the expected behavior with standalone cluster mode? Should we see
separate driver processes in the cluster (one per user) or multiple
SparkSubmit processes?
I was trying to dig into the Zeppelin code and didn't see where Zeppelin does
the spark-submit to the cluster. Can you please point me to it?
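For what it's worth, my understanding (worth verifying against the Zeppelin source for your version) is that Zeppelin starts each interpreter process through `bin/interpreter.sh`, and when `SPARK_HOME` is set that script hands the Spark interpreter off to `spark-submit`, roughly like:

```shell
# Hedged sketch of what bin/interpreter.sh effectively runs for the Spark
# interpreter; flags, jar path, and port are simplified examples.
${SPARK_HOME}/bin/spark-submit \
  --class org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer \
  ${SPARK_SUBMIT_OPTIONS} \
  /path/to/zeppelin/interpreter/spark/zeppelin-spark-shaded.jar \
  ${CALLBACK_PORT}
```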
This is exactly what we want, Jeff! A hook to plug in our own interpreters.
(I am on the same team as Jhon, btw.)
Right now there are too many concrete references, and injecting stuff is not
possible.
Examples of customizations:
1) The Spark UI, which works differently on EMR than standalone, so that logic will
> need to do is just setting SPARK_HOME
> properly in their interpreter setting.
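The two usual places to set it (the path below is just an example):

```shell
# Option 1: export it in conf/zeppelin-env.sh
export SPARK_HOME=/opt/spark

# Option 2: add SPARK_HOME as a property of the Spark interpreter in the
# Zeppelin UI (Interpreter menu -> spark), available in newer versions,
# which lets different interpreter settings point at different Spark builds.
```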
>
>
> Ankit Jain wrote on Friday, February 2, 2018 at 1:36 PM:
>> This is exactly what we want, Jeff! A hook to plug in our own interpreters.
>> (I am on the same team as Jhon, btw.)