There’s at least one test (the persistent volumes one) that relies on Minikube-specific functionality. We run the integration tests for our $dayjob Spark image builds using Docker for Desktop instead, and that test fails there because of the Minikube dependency. That test could …
So the point Khalid was trying to make is that there are legitimate reasons you
might use different container images for the driver pod vs the executor pod.
It has nothing to do with Docker versions.
Since the bulk of the actual work happens on the executors, you may want additional libraries in the executor image only.
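For illustration, this is roughly what that looks like at submit time (the image names here are made up):

    spark-submit \
      --master k8s://https://my-cluster:6443 \
      --deploy-mode cluster \
      --conf spark.kubernetes.driver.container.image=myrepo/spark-driver:2.4.0 \
      --conf spark.kubernetes.executor.container.image=myrepo/spark-executor-ml:2.4.0 \
      ...

where the executor image layers the extra libraries on top of the base Spark image.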
Mich
I think you may just have a typo in your configuration.
These properties all have container in the name, e.g. spark.kubernetes.driver.container.image, BUT you seem to be replacing container with docker in your configuration files, so Spark doesn’t recognise the property (i.e. you have …
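i.e. something like the first property below (assumed from your description) instead of the second:

    # Not a property Spark 2.3+ knows about, so it is silently ignored
    spark.kubernetes.driver.docker.image=myrepo/spark:2.4.0
    # What Spark actually expects
    spark.kubernetes.driver.container.image=myrepo/spark:2.4.0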
Folks
For those using the Kubernetes support and building custom images are you using
a JDK or a JRE in the container images?
Using a JRE saves a reasonable chunk of image size (about 50MB with our preferred Linux distro) but I didn’t want to make this change if there was a reason to have the full JDK …
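For concreteness, the change would just be swapping the base image in the Dockerfile, something like this (the tags are illustrative, not a recommendation):

    # Current: full JDK base image
    FROM openjdk:8-jdk-alpine
    # Proposed: JRE-only base image, noticeably smaller
    FROM openjdk:8-jre-alpine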
On Wed, Apr 17, 2019 at 8:49 AM Rob Vesse wrote:
>
> Folks
>
> For those using the Kubernetes support and building custom images are you using a JDK or a JRE in the container images?
>
> …
I have seen issues with some versions of the Scala Maven plugin auto-detecting the wrong JAVA_HOME when both a JRE and a JDK are present on the system. Setting JAVA_HOME explicitly to a JDK skips the plugin’s auto-detect logic and avoids the problem.
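i.e. something along these lines before kicking off the build (the JDK path is obviously system-specific):

    # Point JAVA_HOME at a full JDK so the Scala Maven plugin skips its auto-detection
    export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
    ./build/mvn -DskipTests clean package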
This may be related - https://github.com/da
The difficulty with a custom Spark config is that you need to be careful that the Spark config the user provides does not conflict with the auto-generated portions of the Spark config necessary to make Spark on K8S work. So part of any “API” definition might need to be what Spark config is considered …
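For example (from memory, so worth double-checking against the submission client code), the backend auto-generates values for at least these properties, which a user-supplied config must not be allowed to clobber:

    spark.kubernetes.driver.pod.name   # derived from the app name plus a timestamp
    spark.driver.host                  # pointed at the driver pod’s headless service
    spark.driver.port                  # the port executors use to connect back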
Kubernetes support was only added as an experimental feature in Spark 2.3.0. It does not exist in the Apache Spark branch-2.2.
If you really must build for Spark 2.2 you will need to use branch-2.2-kubernetes from the apache-spark-on-k8s fork on GitHub.
Note that there are various functio…
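i.e. roughly (build flags are indicative, check the fork’s own README):

    git clone https://github.com/apache-spark-on-k8s/spark.git
    cd spark
    git checkout branch-2.2-kubernetes
    # then build as usual, e.g.
    ./build/mvn -Pkubernetes -DskipTests clean package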
Hey all
For those following the K8S backend you are probably aware of SPARK-24434 [1] (and PR 22416 [2]) which proposes a mechanism to allow for advanced pod customisation via pod templates. This is motivated by the fact that introducing additional Spark configuration properties for each aspect …
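To make that concrete, usage would look something like the following (property names are as per the PR and could still change before merge):

    # pod-template.yaml - supplies pod settings Spark has no config property for,
    # e.g. tolerations
    apiVersion: v1
    kind: Pod
    spec:
      tolerations:
      - key: dedicated
        operator: Equal
        value: spark
        effect: NoSchedule

    spark-submit ... \
      --conf spark.kubernetes.driver.podTemplateFile=pod-template.yaml \
      --conf spark.kubernetes.executor.podTemplateFile=pod-template.yaml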
Folks
One of the big limitations of the current Spark on K8S implementation is that it isn’t possible to use local dependencies (SPARK-23153 [1]) i.e. code, JARs, data etc. that only lives on the submission client. This basically leaves end users with several options on how to actually run their …
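To illustrate the limitation (paths and URIs made up):

    # Works: the JAR is already baked into the container image
    # (the local:// scheme means local to the container, not the client)
    spark-submit --master k8s://https://my-cluster:6443 --deploy-mode cluster ... \
      local:///opt/spark/jars/my-app.jar

    # Works: the JAR is fetched from a remote location
    spark-submit ... http://repo.example.com/my-app.jar

    # Does not work today (SPARK-23153): the JAR only exists on the submission client
    spark-submit ... /home/me/my-app.jar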
…
Rob
From: Felix Cheung
Date: Sunday, 7 October 2018 at 23:00
To: Yinan Li , Stavros Kontopoulos
Cc: Rob Vesse , dev
Subject: Re: [DISCUSS][K8S] Local dependencies with Kubernetes
Jars and libraries only accessible locally at the driver is fairly limited? Don’t you want the same on …
…?
I guess at this stage I am just throwing ideas out there and trying to figure out what’s practical/reasonable.
Rob
From: Yinan Li
Date: Monday, 8 October 2018 at 17:36
To: Rob Vesse
Cc: dev
Subject: Re: [DISCUSS][K8S] Local dependencies with Kubernetes
However, the pod must be up and running …
Right now the Kerberos support for Spark on K8S is only on master AFAICT, i.e. the feature is not present on branch-2.4.
Therefore I don’t see any point in adding the tests into branch-2.4 unless the plan is to also merge the Kerberos support to branch-2.4.
Rob
From: Erik Erlandson
Date: …