Dear all,
Our team is using Zeppelin to submit ad hoc queries to our Spark cluster.
Many people use Zeppelin at the same time, so we sometimes have to wait for
each other and a task stays pending for a long time. Is there a place in
Zeppelin to see the task queue?
Thanks,
Wush
Hi Wush,
at the moment there is no way to see it, but it would be a nice feature...
thks
2015-08-25 10:03 GMT+02:00 Wush Wu :
Hi Eran/moon,
sorry for the late update, but the sad part is I am still facing an issue
installing Zeppelin on my machine.
Eran: I was running as sudo because of the following failed statement:
[INFO] Running 'bower --allow-root install' in
/home/nihal/bigdata/hadoop/incubator-zeppelin/zeppelin-web[ER
Hi Wush,
Spark SQL can run concurrently if you set 'zeppelin.spark.concurrentSQL'
to true on the Interpreter page.
Scala/Python code cannot run concurrently at the moment. Here's a related
discussion:
http://apache-zeppelin-users-incubating-mailing-list.75479.x6.nabble.com/why-zeppelin-SparkInterpreter
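As a minimal sketch, the 'zeppelin.spark.concurrentSQL' setting amounts to
editing the spark interpreter's properties (Interpreter menu in the Zeppelin
web UI) so that:

```
# Interpreter menu -> spark -> edit -> properties
zeppelin.spark.concurrentSQL = true
```

Zeppelin will prompt to restart the interpreter when the setting is saved;
the change applies to subsequent %sql paragraphs.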
Hi,
I am new to Zeppelin. I am linking a paragraph, e.g.
https://usdaspark.azurehdinsight.net:8003/#/notebook/2ARN12QUS/paragraph/20150617-190203_77786547?asIframe
Now, if I want to add a loading animation (a Font Awesome icon), where should I tweak?
Dipanjan Nag,
https://dipanjan.me
--
Hi, I installed Zeppelin on a remote machine, but the spark.home variable is
set to a Spark installation on my local machine.
Any ideas on how to change that, and why it happened?
Thanks in advance.
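(A common fix, assuming a standard Zeppelin build, is to point Zeppelin at
the remote machine's own Spark install via conf/zeppelin-env.sh; the path
below is only an example, substitute the actual install location:)

```shell
# conf/zeppelin-env.sh on the machine running Zeppelin
# (example path; replace with the real Spark location on that host)
export SPARK_HOME=/usr/local/spark
```

Restart the Zeppelin daemon after editing the file so the new value is
picked up.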
Sean Barzilay
Hi,
I am trying to use Zeppelin Charts to visualize some data.
But I could not find a way to put labels on the axes. Also, I have data
whose values range between, say, 30.2 and 30.8 on the Y-axis. Unfortunately,
the Y-axis starts from 0, so all I can see is a straight line, which is not
what I expected.
So
Dear Victors,
Thanks for your reply. I'll create a ticket for this feature.
Dear Moon,
Thanks for the information. Sadly, our tasks are usually written in Scala
and Python. FIFO is OK for us for now, but being able to see the job queue
would be helpful.
Wush
2015-08-25 23:20 GMT+08:00 moon soo Lee :
Hi Moon,
I think releasing SparkIMain and related objects
By packaging, I meant to ask: what is the process to "release SparkIMain
and related objects" for Zeppelin's code uptake?
I have one more question:
Most of the changes to allow SparkInterpreter to support ParallelScheduler are
implemented bu