…today's question.
From: Matt Cheah
Reply: Matt Cheah
Date: July 1, 2019 at 5:14:05 PM
To: Pat Ferrel ,
user@spark.apache.org
Subject: Re: k8s orchestrating Spark service
> We’d like to deploy Spark Workers/Executors and Master (whatever master
> is easiest to talk about since we really don’t care) in pods as we do…

…then the ML server process has to create a SparkContext object
parameterized against the Kubernetes server in question.
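To make the client-mode shape concrete: below is a minimal sketch of the configuration such a SparkContext would carry when the driver lives inside the ML server process and the executors run as Kubernetes pods. All concrete values (API server URL, container image, driver service name, port) are hypothetical placeholders, not anything from this thread.

```python
# Sketch of client-mode Spark-on-Kubernetes configuration.
# Every value below is a hypothetical placeholder.
conf = {
    # Point the driver at the Kubernetes API server in question.
    "spark.master": "k8s://https://kubernetes.example.com:6443",
    "spark.submit.deployMode": "client",
    "spark.kubernetes.container.image": "example.com/spark:3.5.0",
    # Executor pods must be able to reach back to the driver; when the
    # driver runs in-cluster this is usually a headless service that
    # resolves to the ML server pod.
    "spark.driver.host": "ml-server.spark-jobs.svc.cluster.local",
    "spark.driver.port": "7078",
    "spark.executor.instances": "2",
}

# With pyspark installed, this dict would seed the session, roughly:
#   builder = SparkSession.builder
#   for k, v in conf.items():
#       builder = builder.config(k, v)
#   spark = builder.getOrCreate()
for k, v in conf.items():
    print(f"{k}={v}")
```

The key point of this sketch is `spark.driver.host`/`spark.driver.port`: in client mode the executors dial back to the long-running server process, so that process must be network-reachable from the executor pods.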
I hope this helps!
-Matt Cheah
From: Pat Ferrel
Date: Monday, July 1, 2019 at 5:05 PM
To: "user@spark.apache.org" , Matt Cheah
Subject: Re: k8s orchestrating Spark service
k8s as master would be nice but doesn’t solve the problem of running the
full cluster and is an orthogonal issue.
We’d like to deploy Spark Workers/Executors and Master (whatever master is
easiest to talk about since we really don’t care) in pods as we do our
other services. Does anyone have something they like?
From: Matt Cheah
Reply: Matt Cheah
Date: July 1, 2019 at 4:45:55 PM
To: Pat Ferrel ,
user@spark.apache.org
Subject: Re: k8s orchestrating Spark service
Sorry, I don’t quite follow – why use the Spark standalone cluster as an
in-between layer when one can just run the application against Kubernetes
directly?
> I would recommend looking into Spark’s native support for running on
> Kubernetes. One can just start the application against Kubernetes directly
> using spark-submit in cluster mode or starting the Spark context with the
> right parameters in client mode.
> …services including Spark. The rest work; we are asking if anyone has
> seen a good starting point for adding Spark as a k8s-managed service.
From: Matt Cheah
Reply: Matt Cheah
Date: July 1, 2019 at 3:26:20 PM
To: Pat Ferrel ,
user@spark.apache.org
Subject: Re: k8s orchestrating Spark service
I would recommend looking into Spark’s native support for running on
Kubernetes. One can just start the application against Kubernetes directly
using spark-submit in cluster mode or starting the Spark context with the right
parameters in client mode. See
https://spark.apache.org/docs/latest/run
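The cluster-mode path recommended above boils down to a single spark-submit invocation. As a sketch, the snippet below assembles such a command line; the API server URL, namespace, image, and jar path are hypothetical placeholders (the conf keys themselves are standard Spark-on-Kubernetes properties).

```python
# Sketch: assembling a cluster-mode spark-submit against Kubernetes.
# Concrete values are hypothetical placeholders.
cmd = [
    "spark-submit",
    "--master", "k8s://https://kubernetes.example.com:6443",
    "--deploy-mode", "cluster",
    "--name", "my-spark-app",
    "--conf", "spark.executor.instances=3",
    "--conf", "spark.kubernetes.container.image=example.com/spark:3.5.0",
    "--conf", "spark.kubernetes.namespace=spark-jobs",
    # local:// means the jar is already inside the container image.
    "local:///opt/spark/examples/jars/spark-examples.jar",
]
print(" ".join(cmd))
```

In this mode the driver itself runs as a pod that Kubernetes schedules, so no standalone Master/Worker layer is needed, which is the point of the recommendation above.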