Hello,

I'm running Zeppelin in yarn-client mode, using the SQL and pyspark
interpreters to run query and Python jobs with the interpreter shared per
note. When I run multiple jobs at the same time, CPU usage gets very high.
I looked into the problem and found that it is because a separate Spark
driver is created for each notebook.
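
To make the setup concrete: since I'm in yarn-client mode, each interpreter
process is itself a Spark driver JVM running on the Zeppelin host, so every
active note adds another driver there. As far as I understand, the heap for
those JVMs is controlled from conf/zeppelin-env.sh, roughly like this (the
values below are placeholders, not my actual config):

    # conf/zeppelin-env.sh
    export ZEPPELIN_MEM="-Xms1024m -Xmx2048m"        # heap for the Zeppelin server itself
    export ZEPPELIN_INTP_MEM="-Xms1024m -Xmx4096m"   # heap for each interpreter process, i.e. each driver
    export SPARK_SUBMIT_OPTIONS="--driver-memory 4g" # passed to spark-submit when SPARK_HOME is set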


My questions are:
1. How can I tune Zeppelin to handle a large number of concurrent jobs and
avoid the "GC overhead limit exceeded" error? (The settings that look
relevant to me are sketched after this list.)
2. How can I scale Zeppelin with the number of users?
3. If memory or CPU is not available, is there any way to queue (backlog)
the jobs?
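
For question 1, these are the Spark interpreter properties (set on the
Interpreter page in the Zeppelin UI) that look relevant from the docs; I
have not confirmed that they solve the problem:

    zeppelin.spark.concurrentSQL  true   # allow %sql paragraphs to run concurrently instead of one at a time
    spark.scheduler.mode          FAIR   # fair scheduling of concurrent jobs inside a single driver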

Thanks & Regards
Chintan
