none of the executor pods are getting killed, whereas when I run a
simple SparkPi application to test it with the same image, the executors
get killed and the driver shows the status as Completed.
Can someone please guide me on this issue?
Regards
Manish Gupta
Kube-apiserver logs are not enabled. I will enable them, check, and get back
on this.
Regards
Manish Gupta
On Tue, Oct 1, 2019 at 9:05 PM Prudhvi Chennuru (CONT) <
prudhvi.chenn...@capitalone.com> wrote:
> If you are passing the service account for executors as a Spark property,
> then e[...] the cluster. If you check the kube-apiserver logs you
> will know the issue. Try giving privileged access to the default service
> account in the namespace where you are creating the executors; it should work.
>
> On Tue, Oct 1, 2019 at 10:25 AM manish gupta
> wrote:
>
>> Hi
[Not] sure why it cannot launch an executor pod even though it has ample
resources. I don't see any error message in the logs apart from the warning
message that I have provided above.
Not even a single executor pod is getting launched.
Regards
Manish Gupta
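A minimal sketch of the RBAC setup being suggested above (the namespace and
service account names here are placeholders, and the exact role depends on the
cluster's policies):

    # hypothetical names: grant the 'edit' role so the driver's service
    # account is allowed to create and delete executor pods
    kubectl create serviceaccount spark -n spark-jobs
    kubectl create clusterrolebinding spark-role \
      --clusterrole=edit \
      --serviceaccount=spark-jobs:spark

    # then point the driver at that account when submitting
    spark-submit \
      --conf spark.kubernetes.namespace=spark-jobs \
      --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
      ...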
On Tue, Oct 1, 2019 at 6:31 PM Prudhvi Chennuru (CONT) wrote:
[...] issue would be of great help.
Thanks and Regards
Manish Gupta
Thanks,
Manish
From: Evo Eftimov [mailto:evo.efti...@isecc.com]
Sent: Thursday, April 16, 2015 10:38 PM
To: Manish Gupta 8; user@spark.apache.org
Subject: RE: General configurations on CDH5 to achieve maximum Spark Performance
Well, there are a number of performance tuning guidelines in dedicated
[...] behind a single laptop running Spark.
Having a standard checklist (taking a base node size of 4 CPUs, 16 GB RAM) would
be really great. Any pointers in this regard will be really helpful.
We are running Spark 1.2.0 on CDH 5.3.0.
Thanks,
Manish Gupta
Specialist | Sapient Global Markets
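For what it's worth, a checklist of that kind usually starts from a handful of
spark-defaults.conf entries; a sketch only (the numbers below are assumptions
for a 4-CPU/16 GB node that also hosts the OS and a YARN NodeManager, not
recommendations from this thread):

    # leave headroom for the OS and the YARN NodeManager (assumed split)
    spark.executor.cores                 2
    spark.executor.memory                10g
    spark.yarn.executor.memoryOverhead   1024
    # Kryo is generally faster than Java serialization for shuffle-heavy jobs
    spark.serializer                     org.apache.spark.serializer.KryoSerializer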
If I try to build spark-notebook with "spark.version"="1.2.0-cdh5.3.0", sbt
throws these warnings before failing to compile:
:: org.apache.spark#spark-yarn_2.10;1.2.0-cdh5.3.0: not found
:: org.apache.spark#spark-repl_2.10;1.2.0-cdh5.3.0: not found
Any suggestions?
Thanks
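One likely cause: the *-cdh artifacts are published to Cloudera's repository
rather than Maven Central, so sbt cannot resolve them without an extra
resolver. A build.sbt sketch (assuming that repository actually hosts
spark-yarn/spark-repl for this version):

    // add Cloudera's repo so 1.2.0-cdh5.3.0 artifacts can resolve
    resolvers += "cloudera" at "https://repository.cloudera.com/artifactory/cloudera-repos/"
    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-yarn" % "1.2.0-cdh5.3.0",
      "org.apache.spark" %% "spark-repl" % "1.2.0-cdh5.3.0"
    )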
Thanks for the information, Andy. I will go through the versions mentioned in
Dependencies.scala to check compatibility.
Regards,
Manish
From: andy petrella [mailto:andy.petre...@gmail.com]
Sent: Tuesday, April 07, 2015 11:04 AM
To: Manish Gupta 8; user@spark.apache.org
Subject: Re
Hi,
We are trying to build a Play framework based web application integrated with
Apache Spark. We are running Apache Spark 1.2.0 on CDH 5.3.0, but are struggling
with akka version conflicts (errors like java.lang.NoSuchMethodError in akka). We
have tried Play 2.2.6 as well as Activator 1.3.2.
If any
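One common way to attack NoSuchMethodError conflicts like this in sbt is to
force a single akka version across the whole build; a sketch, where 2.3.4 is an
assumption and the version to pin is whatever Spark 1.2.0's dependency tree
actually reports:

    // build.sbt sketch: make Play and Spark agree on one akka binary version
    dependencyOverrides ++= Set(
      "com.typesafe.akka" %% "akka-actor" % "2.3.4",  // assumed version
      "com.typesafe.akka" %% "akka-slf4j" % "2.3.4"
    )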
Has anyone else faced this issue of running spark-shell (yarn client mode) in
an environment with strict firewall rules (only fixed incoming ports are
allowed)? How can this be rectified?
Thanks,
Manish
From: Manish Gupta 8
Sent: Thursday, March 26, 2015 4:09 PM
To: user@spark.apache.org
Subject
Hi,
I am running spark-shell and connecting to a YARN cluster with deploy mode
"client". In our environment, there are security policies that don't allow us
to open all TCP ports.
The issue I am facing is that the Spark shell driver uses a random port for the
BlockManagerId - BlockManagerId(, ho[...]
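In Spark 1.x those ports can be pinned instead of left random, so the firewall
only needs a known set open; a sketch (the port numbers are placeholders):

    spark-shell --master yarn-client \
      --conf spark.driver.port=40000 \
      --conf spark.fileserver.port=40001 \
      --conf spark.broadcast.port=40002 \
      --conf spark.replClassServer.port=40003 \
      --conf spark.blockManager.port=40004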
Thanks Reza. It makes perfect sense.
Regards,
Manish
From: Reza Zadeh [mailto:r...@databricks.com]
Sent: Thursday, March 19, 2015 11:58 PM
To: Manish Gupta 8
Cc: user@spark.apache.org
Subject: Re: Column Similarity using DIMSUM
Hi Manish,
With 56431 columns, the output can be as large as 56431 x 56431 [...]
RAM).
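Rough back-of-the-envelope arithmetic on why that output size matters (not
from the original message):

    56431 x 56431 ≈ 3.18 billion similarity entries
    3.18e9 entries x 8 bytes/double ≈ 25 GB dense, before any thresholding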
My question – Do you think this is a hardware size issue and we should test it
on larger machines?
Regards,
Manish
From: Manish Gupta 8 [mailto:mgupt...@sapient.com]
Sent: Wednesday, March 18, 2015 11:20 PM
To: Reza Zadeh
Cc: user@spark.apache.org
Subject: RE: Column Similarity using DIMSUM
Hi Reza,
I have only tried thresholds in the range of 0 to 1. I was not aware that the
threshold can be set above 1.
I will try and update.
Thank You
- Manish
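The API in question is RowMatrix.columnSimilarities(threshold); a minimal
Scala sketch (the matrix and the threshold value are placeholders):

    import org.apache.spark.mllib.linalg.distributed.RowMatrix

    // mat: a RowMatrix with 56431 columns, assumed built elsewhere
    val mat: RowMatrix = ???
    // threshold > 1 samples more aggressively: cheaper but more approximate
    val approxSims = mat.columnSimilarities(100.0)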
From: Reza Zadeh [mailto:r...@databricks.com]
Sent: Wednesday, March 18, 2015 10:55 PM
To: Manish Gupta 8
Cc: user@spark.apache.org
[...]cur?
Thanks,
Manish Gupta