Thank you.
I fully agree with you that we need a framework to support a distributed
version. IMHO, we cannot afford to develop our own. I'll dig into Atomix as
well.
On Tue, Jul 24, 2018 at 1:57 PM, liuxun wrote:
> @Jongyoul Lee:
> Thank you for your attention.
>
> Indeed, as you said, the `Cop
Hi,
I am playing around with the execution policy of Spark jobs (and of all
Zeppelin paragraphs, actually).
It looks like there are a couple of control points:
1) Spark scheduling - FIFO vs. FAIR, as documented in
https://spark.apache.org/docs/2.1.1/job-scheduling.html#fair-scheduler-pools.
Since we are still o
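For reference, a minimal sketch of the FAIR setup those Spark docs describe: set `spark.scheduler.mode=FAIR` and point `spark.scheduler.allocation.file` at an allocations file. The pool name, weight, and minShare below are illustrative assumptions, not anything Zeppelin ships:

```xml
<!-- Illustrative fairscheduler.xml; pool name and values are assumptions -->
<allocations>
  <pool name="notebookPool">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>2</minShare>
  </pool>
</allocations>
```

Jobs then opt into a pool per thread via `sc.setLocalProperty("spark.scheduler.pool", "notebookPool")`, per the Spark job-scheduling docs.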
Forgot to mention that this is for shared scoped mode, i.e. the same Spark
application and context for all users on a single Zeppelin instance.
Thanks
Ankit
> On Jul 24, 2018, at 4:12 PM, Ankit Jain wrote:
>
> Hi,
> I am playing around with execution policy of Spark jobs(and all Zeppelin
> paragraphs ac
Regarding 1. ZEPPELIN-3563 should be helpful. See
https://github.com/apache/zeppelin/blob/master/docs/interpreter/spark.md#running-spark-sql-concurrently
for more details.
https://issues.apache.org/jira/browse/ZEPPELIN-3563
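For anyone following along, the doc section linked above hinges on an interpreter property; a hedged sketch of the settings involved (values are examples, check the linked docs for your version):

```properties
# In the Spark interpreter settings (Zeppelin 0.8+):
zeppelin.spark.concurrentSQL = true
# Requires Spark's fair scheduler to be enabled:
spark.scheduler.mode = FAIR
```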
Regarding 2. If you use ParallelScheduler for SparkInterpreter, you may h
Thanks for the quick feedback Jeff.
Re: 1 - I did see ZEPPELIN-3563, but we are not on 0.8 yet, and we may also
want to force FAIR execution instead of letting the user control it.
Re: 2 - Is there an architectural issue here, or do we just need better
thread safety? Ideally the scheduler should be able to figure
1. ZEPPELIN-3563 forces FAIR scheduling and just allows you to specify the pool.
2. The scheduler cannot figure out the dependencies between paragraphs.
That's why SparkInterpreter uses FIFOScheduler.
If you use per-user scoped mode, SparkContext is shared between users, but
SparkInterpreter is not.
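That layout can be pictured with a toy stdlib sketch: one shared context object, one interpreter per user, each with its own FIFO queue. The class names are illustrative stand-ins, not Zeppelin's actual classes:

```python
# Toy sketch of per-user scoped mode as described above: the context is
# shared, but each user gets their own interpreter with its own FIFO queue.
# Names are illustrative, not Zeppelin's real SparkContext/SparkInterpreter.
from queue import Queue


class SharedContext:
    """Stands in for the single SparkContext shared by all users."""
    def __init__(self):
        self.jobs_run = []

    def run(self, user, job):
        self.jobs_run.append((user, job))
        return f"{user}:{job}"


class UserInterpreter:
    """One per user; its FIFO queue only serializes *that* user's jobs."""
    def __init__(self, user, context):
        self.user = user
        self.context = context
        self.queue = Queue()

    def submit(self, job):
        self.queue.put(job)

    def drain(self):
        out = []
        while not self.queue.empty():
            out.append(self.context.run(self.user, self.queue.get()))
        return out


ctx = SharedContext()  # shared across users
alice = UserInterpreter("alice", ctx)
bob = UserInterpreter("bob", ctx)
alice.submit("p1"); alice.submit("p2"); bob.submit("p1")
alice_out = alice.drain()  # alice's jobs run in order
bob_out = bob.drain()      # bob's queue is independent of alice's
```

So one user's queued paragraphs never block another user's, even though every job ultimately runs against the one shared context.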
Aah, that makes sense - so only jobs from the same user will block in the
FIFOScheduler.
By moving to ParallelScheduler, the only gain is that jobs from the same user
can also run in parallel, but they may then have dependency resolution issues.
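The trade-off above (FIFO preserves paragraph order; a parallel scheduler does not) can be illustrated with a stdlib sketch. These tiny schedulers are stand-ins, not Zeppelin's real FIFOScheduler/ParallelScheduler:

```python
# Illustrative sketch of the FIFO-vs-parallel trade-off discussed above.
from concurrent.futures import ThreadPoolExecutor


def run_fifo(paragraphs):
    """Run paragraphs one at a time, in submission order - safe when a
    paragraph depends on state written by an earlier one."""
    return [p() for p in paragraphs]


def run_parallel(paragraphs, workers=4):
    """Run paragraphs concurrently; completion order is not guaranteed,
    so a paragraph reading state written by an earlier one may race."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(p) for p in paragraphs]
        return [f.result() for f in futures]


shared = {}
paragraphs = [
    lambda: shared.setdefault("df", list(range(5))),  # paragraph 1: build data
    lambda: sum(shared.get("df", [])),                # paragraph 2: needs 1
]

fifo_out = run_fifo(paragraphs)  # paragraph 2 always sees paragraph 1's data
# run_parallel(paragraphs) may let paragraph 2 run before paragraph 1
# finishes, returning 0 instead of 10 - the dependency issue noted above.
```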
Just to confirm I have it right - If "Run all" notebook is not a
requ