I think he’s running Spark on Kubernetes, not YARN, as the cluster manager.


> On Jun 3, 2021, at 6:05 AM, Mich Talebzadeh <mich.talebza...@gmail.com> wrote:
> 
> 
> Please provide the Spark version, the environment you are running in 
> (on-prem, cloud, etc.), whether you are running on YARN, and your 
> spark-submit parameters.
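For reference, on Kubernetes those spark-submit parameters typically look something like the sketch below. This is illustrative only; the API-server URL, namespace, image name, and jar path are placeholders, not values from this thread:

```shell
# Illustrative Spark-on-Kubernetes submit (placeholder values throughout).
spark-submit \
  --master k8s://https://<api-server-host>:6443 \
  --deploy-mode cluster \
  --name my-spark-job \
  --conf spark.executor.instances=240 \
  --conf spark.executor.cores=4 \
  --conf spark.kubernetes.namespace=<namespace> \
  --conf spark.kubernetes.container.image=<spark-image> \
  --conf spark.kubernetes.executor.request.cores=4 \
  --conf spark.kubernetes.executor.limit.cores=4 \
  local:///opt/spark/examples/jars/spark-examples.jar
```

Sharing the equivalent of this (especially the executor instance/core counts and any request/limit settings) makes the quota numbers much easier to interpret.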
> 
> Have you checked the Spark UI (default port 4040), under the Stages and 
> Executors tabs?
> 
> HTH
> 
> 
> 
> 
>> On Thu, 3 Jun 2021 at 10:51, Subash Prabanantham <subashpraba...@gmail.com> 
>> wrote:
>> Hi Team,
>> 
>> I am trying to understand how to estimate Kubernetes CPU with respect to 
>> Spark executor cores.
>> 
>> For example,
>> Job configuration: (given to start)
>> cores/executor = 4
>> # of executors = 240
>> 
>> 
>> But the resources actually allocated when we ran the job were as follows:
>> cores/executor = 4
>> # of executors = 47
>> 
>> So the question: at the time the screenshot was taken, 60 tasks were 
>> running in parallel.
>> <Screenshot 2021-06-03 at 10.37.08.png>
>> 
>> (Apologies, the screenshot was taken of top in a terminal.)
>> 
>> 188 cores are allocated, with 60 tasks currently running.
>> 
>> When I checked the quota for the namespace, I got the below:
>> 
>> <Screenshot 2021-06-03 at 10.36.06.png>
>> 
>> 
>> How do I reconcile requests.cpu == 5290m (i.e. 5.29 CPUs) and limits == 97 
>> with 60 tasks running in parallel?
>> 
>> Say, for acquiring 512 cores in total across Spark executors, what would 
>> the corresponding Kube requests.cpu and limits.cpu configuration be?
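As a rough sanity check on the arithmetic (a sketch using the numbers from this thread; the assumption below is that spark.kubernetes.executor.request.cores is unset, so each executor pod requests spark.executor.cores full CPUs):

```python
# Sketch: relating Spark executor cores to Kubernetes CPU quota.
# Assumption: each executor pod requests spark.executor.cores whole CPUs
# (the default when spark.kubernetes.executor.request.cores is not set).

def millicores_to_cpu(millicores):
    """Kubernetes 'm' units: 1000m == 1 CPU, so 5290m == 5.29 CPUs."""
    return millicores / 1000

def quota_for_total_cores(total_cores, cores_per_executor):
    """Executors needed and the requests.cpu the namespace must allow."""
    executors = total_cores // cores_per_executor
    return executors, executors * cores_per_executor

print(millicores_to_cpu(5290))        # 5.29
print(quota_for_total_cores(512, 4))  # (128, 512): 128 executors, requests.cpu >= 512
```

Under that assumption, a 512-core target with 4 cores per executor needs 128 executor pods and a namespace quota allowing at least 512 in requests.cpu (plus the driver pod). Note that spark.kubernetes.executor.request.cores can also be set to a fractional or millicore value to request less than a full CPU per executor core, which changes this arithmetic.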
>> 
>> 
>> Thanks,
>> Subash
