Re: Executors and Cores

2016-05-15 Thread Mich Talebzadeh
Hi Pradeep, Resources allocated to each Spark app can be capped to allow balanced resource sharing across all apps. However, you really need to monitor each app. One option would be to use the jmonitor package to look at resource usage (heap, CPU, memory, etc.) for each job. In general you should not allocate …
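
For concreteness, a minimal sketch of capping one app's footprint via SparkConf, assuming Spark on YARN; the app name and numbers are illustrative, not recommendations from this thread:

  import org.apache.spark.{SparkConf, SparkContext}

  val conf = new SparkConf()
    .setAppName("capped-etl-job")            // hypothetical app name
    .set("spark.executor.instances", "4")    // cap the number of executors (YARN)
    .set("spark.executor.cores", "2")        // cores per executor
    .set("spark.executor.memory", "4g")      // heap per executor

  val sc = new SparkContext(conf)

With hard caps like these, one app cannot starve the others, but it also cannot use spare capacity; dynamic allocation, discussed later in the thread, relaxes that trade-off.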

Re: Executors and Cores

2016-05-15 Thread Mail.com
Hi Mich, We have HDP 2.3.2, where Spark will run on 21 nodes, each having 250 GB of memory. Jobs run in yarn-client and yarn-cluster mode. We have other teams using the same cluster to build their applications. Regards, Pradeep

Re: Executors and Cores

2016-05-15 Thread Jacek Laskowski
On Sun, May 15, 2016 at 8:19 AM, Mail.com wrote:
> In all that I have seen, it seems each job has to be given the max resources allowed in the cluster.
Hi, I'm fairly sure that was because FIFO scheduling mode was used. You could change it to FAIR and make some adjustments. https://spark.apac…
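
A sketch of what that change looks like inside an application; the pool name is hypothetical, and pool weights/minShare would normally live in a fairscheduler.xml file referenced by spark.scheduler.allocation.file:

  import org.apache.spark.{SparkConf, SparkContext}

  val conf = new SparkConf()
    .setAppName("fair-scheduled-app")
    .set("spark.scheduler.mode", "FAIR")   // default is FIFO

  val sc = new SparkContext(conf)

  // Jobs submitted from this thread now run in the "reporting" pool.
  sc.setLocalProperty("spark.scheduler.pool", "reporting")

Note that spark.scheduler.mode only governs jobs within one SparkContext; fair sharing between separate applications on a YARN cluster is handled by YARN's own scheduler configuration.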

Re: Executors and Cores

2016-05-15 Thread Mich Talebzadeh
Hi Pradeep, In your case, what type of cluster are we talking about? A standalone cluster? HTH, Dr Mich Talebzadeh, LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

Re: Executors and Cores

2016-05-15 Thread Ted Yu
For the last question, have you looked at: https://spark.apache.org/docs/latest/configuration.html#dynamic-allocation FYI. On Sun, May 15, 2016 at 5:19 AM, Mail.com wrote:
> Hi,
> I have seen multiple videos on Spark tuning that show how to determine the # of cores, # of executors, and memory size o…
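
The linked page boils down to a handful of settings; a minimal sketch with illustrative bounds (the external shuffle service is a prerequisite for dynamic allocation on YARN):

  import org.apache.spark.{SparkConf, SparkContext}

  val conf = new SparkConf()
    .setAppName("elastic-app")                             // hypothetical app name
    .set("spark.dynamicAllocation.enabled", "true")
    .set("spark.shuffle.service.enabled", "true")          // required on YARN
    .set("spark.dynamicAllocation.minExecutors", "1")
    .set("spark.dynamicAllocation.initialExecutors", "2")
    .set("spark.dynamicAllocation.maxExecutors", "20")

  val sc = new SparkContext(conf)

With this in place, an application releases executors it has left idle and requests more when tasks back up, so no job has to be handed the cluster's maximum resources up front.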