Hi Mark,

Zeppelin on Spark uses the Spark interpreter.

Edit the Spark interpreter settings. By default Zeppelin runs the interpreter in local mode (master = local[*]).

You can of course change that to standalone mode by setting

master spark://<IP_ADDRESS>:7077

and adjusting spark.cores.max and spark.executor.memory in the same interpreter settings.
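As a rough sketch, the relevant interpreter properties would look something like the following (the core and memory values are placeholders; tune them to your cluster). Capping spark.cores.max is also what stops one notebook's REPL from grabbing the whole cluster, which lets other applications run alongside it:

```
master                 spark://<IP_ADDRESS>:7077
spark.cores.max        4
spark.executor.memory  2g
```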

HTH



Dr Mich Talebzadeh



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 5 October 2016 at 19:20, Mohit Jaggi <mohitja...@gmail.com> wrote:

> Change your Spark settings so that the REPL does not get the whole
> cluster, e.g. by reducing the executor memory and CPU allocation.
>
> Mohit Jaggi
> Founder,
> Data Orchard LLC
> www.dataorchardllc.com
>
>
>
>
> > On Oct 5, 2016, at 11:02 AM, Mark Libucha <mlibu...@gmail.com> wrote:
> >
> > Hi everyone,
> >
> > I've got Zeppelin running against a Cloudera/Yarn/Spark cluster and
> everything seems to be working fine. Very cool.
> >
> > One minor issue, though. When one notebook is running, others queue up
> behind it. Is there a way to run multiple notebooks concurrently? Both
> notebooks are running the pyspark interpreter.
> >
> > Thanks,
> >
> > Mark
> >
>
>
