Multiple concurrent spark notebooks

2016-10-05 Thread Mark Libucha
Hi everyone, I've got Zeppelin running against a Cloudera/Yarn/Spark cluster and everything seems to be working fine. Very cool. One minor issue, though. When one notebook is running, others queue up behind it. Is there a way to run multiple notebooks concurrently? Both notebooks are running the ...
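[A likely cause, assuming the default interpreter binding: Zeppelin's Spark interpreter runs all notes through one shared scheduler, so one note's job blocks the next. Two common workarounds are switching the interpreter binding to per-note ("scoped" or "isolated") instantiation, or enabling Spark's FAIR scheduler and giving each note its own pool. A minimal sketch of the pool approach, run inside a notebook paragraph; it assumes spark.scheduler.mode=FAIR is set in the interpreter's Spark conf, and the pool name "notebookA" is illustrative:

  // Route this note's jobs to a dedicated fair-scheduler pool so they
  // no longer queue behind jobs submitted from other notes.
  sc.setLocalProperty("spark.scheduler.pool", "notebookA")
  val rdd = sc.parallelize(1 to 1000000)
  rdd.map(_ * 2).count()  // this job now runs in pool "notebookA"
]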

Re: Multiple concurrent spark notebooks

2016-10-06 Thread Mark Libucha
Mich, thanks for the suggestion. I tried your settings, but they did not solve the problem. I'm running in yarn-client mode, not local or standalone, so the resources in the Spark cluster (which is very large) should not be an issue. Right? The problem seems to be that Zeppelin is not submitting ...
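[One quick way to separate interpreter-side queuing from cluster-side resource starvation is to print the effective configuration from a notebook paragraph. A sketch using standard Spark APIs (sc.master, SparkConf.getAll):

  // Confirm the note really runs against YARN and inspect the
  // allocation-related settings that were actually applied.
  println(sc.master)  // expect "yarn-client" in this setup
  sc.getConf.getAll
    .filter { case (k, _) => k.contains("dynamicAllocation") || k.contains("executor") }
    .foreach { case (k, v) => println(s"$k = $v") }

If the settings suggested earlier don't show up here, they were applied to the wrong interpreter or the interpreter wasn't restarted.]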

Re: Multiple concurrent spark notebooks

2016-10-06 Thread Mark Libucha
> The comment of Mohit might be important if you have
> spark.dynamicAllocation.enabled set to true and no limits on the number
> and resources of executors.
>
> Andreas
>
> On Thu, 6 Oct 2016 at 16:28 Mark Libucha wrote:
>> Mich, thanks for the suggestion. I tried your settings, but ...
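[This matches the symptom: with dynamic allocation on and no upper bound, the first note's job can claim every executor YARN will grant, so the next note waits even on a very large cluster. A sketch of the usual caps; the values are illustrative, and in Zeppelin these same keys belong in the Spark interpreter's properties rather than a SparkConf built in code:

  import org.apache.spark.SparkConf

  val conf = new SparkConf()
    .set("spark.dynamicAllocation.enabled", "true")
    .set("spark.shuffle.service.enabled", "true")      // dynamic allocation on YARN requires this
    .set("spark.dynamicAllocation.minExecutors", "1")
    .set("spark.dynamicAllocation.maxExecutors", "8")  // leave headroom for other notes
]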

No active SparkContext black hole

2016-10-06 Thread Mark Libucha
Hello again,

On "longer" running jobs (I'm using yarn-client mode), I sometimes get RPC timeouts. Seems like Zeppelin is losing connectivity with the Spark cluster. I can deal with that. But my notebook has sections stuck in the "Cancel" state, and I can't get them out. When I re-click on cancel, ...
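[For the RPC timeouts themselves, raising Spark's network timeouts sometimes helps in yarn-client setups where the driver (the Zeppelin host) is network-distant from the cluster. A sketch with illustrative values; both keys are standard Spark properties:

  import org.apache.spark.SparkConf

  val conf = new SparkConf()
    .set("spark.network.timeout", "300s")  // default is 120s
    .set("spark.rpc.askTimeout", "300s")   // falls back to spark.network.timeout if unset

The stuck-in-Cancel state is a separate, Zeppelin-side problem, discussed in the replies below.]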

Re: No active SparkContext black hole

2016-10-06 Thread Mark Libucha
Actually, it's stuck in the Running state. Trying to cancel it causes the "No active SparkContext" error to appear in the log. Seems like a bug.

On Thu, Oct 6, 2016 at 9:06 AM, Mark Libucha wrote:
> Hello again,
>
> On "longer" running jobs (I'm using yarn-client mode), I ...
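[One way to confirm the diagnosis from another paragraph in the same note, assuming a Spark version that exposes SparkContext.isStopped:

  // true means the driver-side SparkContext is gone; cancel then has
  // nothing to signal, which would explain the paragraph wedged in
  // Running. Restarting the Spark interpreter from Zeppelin's
  // Interpreter page is the usual way to clear it.
  println(sc.isStopped)
]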

Re: No active SparkContext black hole

2016-10-10 Thread Mark Libucha
> Best Regards,
> Jeff Zhang
>
> From: Mark Libucha
> Reply-To: "users@zeppelin.apache.org"
> Date: Friday, October 7, 2016 at 12:11 AM
> To: "users@zeppelin.apache.org"
> Subject: Re: No active SparkContext black hole
>
> Actually, it's ...