Spark executor pods not getting killed after task completion

2019-10-23 Thread manish gupta
None of the executor pods are getting killed, whereas when I run a simple SparkPi application to test it with the same image, the executors are killed and the driver shows the status as Completed. Can someone please guide me on this issue? Regards Manish Gupta
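A common cause of this symptom (an assumption here, not confirmed in the thread) is an application that never stops its SparkContext: on Kubernetes the executor pods are only torn down once the driver calls stop(). A minimal Scala sketch:

    import org.apache.spark.sql.SparkSession

    object MyJob {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("my-job").getOrCreate()
        try {
          // ... the actual job ...
          spark.range(1000000L).selectExpr("sum(id)").show()
        } finally {
          // Without this, the driver can stay alive and the executor pods
          // on Kubernetes are never cleaned up.
          spark.stop()
        }
      }
    }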

Re: [External Sender] Spark Executor pod not getting created on kubernetes cluster

2019-10-01 Thread manish gupta
Kube-apiserver logs are not enabled. I will enable them, check, and get back on this. Regards Manish Gupta On Tue, Oct 1, 2019 at 9:05 PM Prudhvi Chennuru (CONT) <prudhvi.chenn...@capitalone.com> wrote: > If you are passing the service account for executors as a spark property > then e
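For context, the service account is normally granted pod-creation rights in the namespace and handed to the driver through Spark's Kubernetes properties. A hedged sketch of the relevant settings, written as SparkConf entries (property names from the Spark 2.4 Kubernetes docs; the account name "spark" and the namespace are assumptions):

    import org.apache.spark.SparkConf

    // These are usually passed as --conf flags to spark-submit rather than set in code.
    val conf = new SparkConf()
      // Service account the driver pod runs as; it needs RBAC permission
      // to create, list and watch executor pods in the target namespace.
      .set("spark.kubernetes.authenticate.driver.serviceAccountName", "spark")
      .set("spark.kubernetes.namespace", "default")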

Re: [External Sender] Spark Executor pod not getting created on kubernetes cluster

2019-10-01 Thread manish gupta
the cluster, if you check the kube-apiserver logs you > will know the issue > and try giving privileged access to the default service account in the > namespace where you are creating the executors, it should work. > > On Tue, Oct 1, 2019 at 10:25 AM manish gupta > wrote: > >> Hi

Re: [External Sender] Spark Executor pod not getting created on kubernetes cluster

2019-10-01 Thread manish gupta
sure why it cannot launch an executor pod even though it has ample resources. I don't see any error message in the logs apart from the warning message that I have provided above. Not even a single executor pod is getting launched. Regards Manish Gupta On Tue, Oct 1, 2019 at 6:31 PM Prudhvi Chennuru (CONT

Spark Executor pod not getting created on kubernetes cluster

2019-09-30 Thread manish gupta
issue would be of great help. Thanks and Regards Manish Gupta

RE: General configurations on CDH5 to achieve maximum Spark Performance

2015-04-16 Thread Manish Gupta 8
. Thanks, Manish From: Evo Eftimov [mailto:evo.efti...@isecc.com] Sent: Thursday, April 16, 2015 10:38 PM To: Manish Gupta 8; user@spark.apache.org Subject: RE: General configurations on CDH5 to achieve maximum Spark Performance Well, there are a number of performance tuning guidelines in dedicated

General configurations on CDH5 to achieve maximum Spark Performance

2015-04-16 Thread Manish Gupta 8
behind a single laptop running Spark. Having a standard checklist (taking a base node size of 4-CPU, 16GB RAM) would be really great. Any pointers in this regard will be really helpful. We are running Spark 1.2.0 on CDH 5.3.0. Thanks, Manish Gupta Specialist | Sapient Global Markets Green
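As a rough illustration of the kind of checklist being asked for, a baseline configuration for such a node might look like the sketch below (values are illustrative for a 4-CPU/16GB box, not recommendations from the thread):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .setAppName("cdh5-baseline")
      // Leave headroom for the OS and the HDFS/YARN daemons on a 16GB node.
      .set("spark.executor.memory", "8g")
      .set("spark.executor.cores", "3")
      // Kryo is usually much faster than Java serialization for shuffle and cache data.
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")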

RE: Spark 1.2.0 with Play/Activator

2015-04-07 Thread Manish Gupta 8
If I try to build spark-notebook with "spark.version"="1.2.0-cdh5.3.0", sbt throws these warnings before failing to compile: :: org.apache.spark#spark-yarn_2.10;1.2.0-cdh5.3.0: not found :: org.apache.spark#spark-repl_2.10;1.2.0-cdh5.3.0: not found Any suggestions? Thanks
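The CDH-suffixed artifacts are published to Cloudera's repository rather than Maven Central, so the usual fix (an assumption about this particular build, but the standard sbt approach) is to add a resolver:

    // build.sbt
    resolvers += "cloudera-repos" at "https://repository.cloudera.com/artifactory/cloudera-repos/"

    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-yarn" % "1.2.0-cdh5.3.0" % "provided",
      "org.apache.spark" %% "spark-repl" % "1.2.0-cdh5.3.0" % "provided"
    )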

RE: Spark 1.2.0 with Play/Activator

2015-04-06 Thread Manish Gupta 8
Thanks for the information, Andy. I will go through the versions mentioned in Dependencies.scala to identify the compatibility. Regards, Manish From: andy petrella [mailto:andy.petre...@gmail.com] Sent: Tuesday, April 07, 2015 11:04 AM To: Manish Gupta 8; user@spark.apache.org Subject: Re

Spark 1.2.0 with Play/Activator

2015-04-06 Thread Manish Gupta 8
Hi, We are trying to build a Play framework based web application integrated with Apache Spark. We are running Apache Spark 1.2.0 on CDH 5.3.0, but struggling with akka version conflicts (errors like java.lang.NoSuchMethodError in akka). We have tried Play 2.2.6 as well as Activator 1.3.2. If any
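One workaround commonly tried for this kind of clash (a sketch only; nothing in the thread confirms it works here) is to pin a single Akka version for the whole build so Play and Spark resolve to the same artifacts:

    // build.sbt -- the version number is illustrative; match it to the Akka
    // version your Spark build was compiled against.
    dependencyOverrides ++= Set(
      "com.typesafe.akka" %% "akka-actor"  % "2.3.4",
      "com.typesafe.akka" %% "akka-remote" % "2.3.4",
      "com.typesafe.akka" %% "akka-slf4j"  % "2.3.4"
    )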

RE: Port configuration for BlockManagerId

2015-03-29 Thread Manish Gupta 8
Has anyone else faced this issue of running spark-shell (yarn client mode) in an environment with strict firewall rules (only fixed incoming ports allowed)? How can this be rectified? Thanks, Manish From: Manish Gupta 8 Sent: Thursday, March 26, 2015 4:09 PM To: user@spark.apache.org Subject

Port configuration for BlockManagerId

2015-03-26 Thread Manish Gupta 8
Hi, I am running spark-shell and connecting to a yarn cluster with deploy mode as "client". In our environment, there are security policies that don't allow us to open all TCP ports. The issue I am facing is: the Spark Shell driver is using a random port for BlockManagerId - BlockManagerId(, ho
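For reference, Spark lets these ports be pinned instead of chosen at random; a sketch of the relevant properties (the port numbers are placeholders to be matched to the firewall rules):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      // Port the driver listens on for connections from executors.
      .set("spark.driver.port", "40000")
      // Port used by the BlockManager on both driver and executors.
      .set("spark.blockManager.port", "40001")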

RE: Column Similarity using DIMSUM

2015-03-19 Thread Manish Gupta 8
Thanks, Reza. It makes perfect sense. Regards, Manish From: Reza Zadeh [mailto:r...@databricks.com] Sent: Thursday, March 19, 2015 11:58 PM To: Manish Gupta 8 Cc: user@spark.apache.org Subject: Re: Column Similarity using DIMSUM Hi Manish, With 56431 columns, the output can be as large as 56431

RE: Column Similarity using DIMSUM

2015-03-19 Thread Manish Gupta 8
RAM). My question: do you think this is a hardware sizing issue, and should we test it on larger machines? Regards, Manish From: Manish Gupta 8 [mailto:mgupt...@sapient.com] Sent: Wednesday, March 18, 2015 11:20 PM To: Reza Zadeh Cc: user@spark.apache.org Subject: RE: Column Similarity using DI

RE: Column Similarity using DIMSUM

2015-03-18 Thread Manish Gupta 8
Hi Reza, I have only tried threshold values in the range of 0 to 1. I was not aware that the threshold could be set above 1. Will try and update. Thank You - Manish From: Reza Zadeh [mailto:r...@databricks.com] Sent: Wednesday, March 18, 2015 10:55 PM To: Manish Gupta 8 Cc: user@spark.apache.org
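For context, DIMSUM is exposed through RowMatrix.columnSimilarities(threshold); a minimal sketch in spark-shell (the matrix contents are made up):

    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.linalg.distributed.RowMatrix

    val rows = sc.parallelize(Seq(
      Vectors.dense(1.0, 2.0, 3.0),
      Vectors.dense(4.0, 5.0, 6.0)
    ))
    val mat = new RowMatrix(rows)

    // A higher threshold samples more aggressively, trading accuracy for
    // speed; values above 1 are accepted and prune even more computation.
    val similarities = mat.columnSimilarities(threshold = 5.0)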

Column Similarity using DIMSUM

2015-03-18 Thread Manish Gupta 8
cur? Thanks, Manish Gupta