Hi Sarthak Sharma,
You can log in to the Zeppelin server and run
./bin/spark-submit --class org.apache.spark.examples.SparkPi
to check whether there is a problem with the Spark runtime environment on the
Zeppelin server.
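A full invocation might look like the following; the jar path, queue name and
the final argument are placeholders to adapt to your Spark distribution:

    ./bin/spark-submit \
      --class org.apache.spark.examples.SparkPi \
      --master yarn \
      --deploy-mode cluster \
      --queue <your_yarn_queue> \
      examples/jars/spark-examples_*.jar 100

If SparkPi completes on yarn from the command line, the Spark runtime itself
is fine and the problem is more likely on the Zeppelin interpreter side.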
> On Nov 20, 2018, at 5:39 PM, Sarthak Sharma wrote:
Is it similar to an existing bug related to the interpreter processes
getting stuck? (wherein the workaround is to kill the application on yarn,
restart the interpreter from the interface and then try resubmitting the
query again).
The problem in this case is that it is intermittently happening on
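(For reference, the yarn side of that kill-and-restart workaround is done from
the CLI; the application id below is a placeholder.)

    # find the interpreter's application id
    yarn application -list -appStates ACCEPTED,RUNNING
    # kill it
    yarn application -kill application_1542700000000_0001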
If *zeppelin.interpreter.connect.timeout* is reached but the yarn app is
still in ACCEPTED state, then this should be a bug. The yarn app should be
killed if it cannot be created within the timeout threshold.
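As a stopgap you could raise that timeout in conf/zeppelin-site.xml; the value
below is only illustrative, so check the unit and default against your
release's zeppelin-site.xml.template:

    <property>
      <name>zeppelin.interpreter.connect.timeout</name>
      <value>120000</value>
    </property>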
Sarthak Sharma wrote on Tue, Nov 20, 2018 at 4:47 PM:
Hey,
Like you mentioned, I'm already using the *spark.yarn.queue* parameter,
hence I know which yarn queue it is getting scheduled in, and this queue has
resources available for applications, since other apps are also getting
scheduled there.
However, assuming the queue does NOT have resources for i
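(For context, the queue is pinned in the Spark interpreter settings, or in
conf/spark-defaults.conf, with a property along these lines; the queue name is
a placeholder.)

    spark.yarn.queue    analytics_queue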