Hi, Amin
In general, the Apache Spark community has received a lot of feedback and has been
moving toward the following:
- Use the latest Hadoop versions for more bug fixes, including CVE fixes.
- Use Hadoop's shaded clients to minimize dependency issues.
Since the above is not achievable with Hadoop 2 clients, I believe
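For reference, the shaded clients are the hadoop-client-api / hadoop-client-runtime
artifacts. As a small illustrative sketch (the version below is just an example, not a
recommendation), depending on them in a build looks like this:

    // build.sbt (illustrative): use the shaded Hadoop 3 client artifacts instead of
    // the classic hadoop-client, so Hadoop's own third-party dependencies
    // (Guava, Jetty, protobuf, ...) stay off the application classpath.
    libraryDependencies ++= Seq(
      "org.apache.hadoop" % "hadoop-client-api"     % "3.3.2",
      "org.apache.hadoop" % "hadoop-client-runtime" % "3.3.2" % Runtime
    )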
The official doc, https://spark.apache.org/docs/latest/job-scheduling.html,
doesn't mention whether this works on a Kubernetes cluster. Does it?
Can anyone quickly answer this?
TIA.
Jason
Hi,
On Mon, Apr 11, 2022 at 7:43 AM Jason Jun wrote:
> The official doc, https://spark.apache.org/docs/latest/job-scheduling.html,
> doesn't mention whether this works on a Kubernetes cluster. Does it?
>
You could use the Volcano scheduler for more advanced setups on Kubernetes.
Here is an article explaining how
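In the meantime, here is a minimal sketch of what enabling a custom scheduler like
Volcano typically looks like at submit time. It assumes Spark 3.3+ built with Volcano
support and Volcano already installed in the cluster; the config names come from that
release line, so please verify them against the docs for the version you actually run.
The API server URL, image, class, and jar path are placeholders:

    # Route driver and executor pods through Volcano instead of the
    # default kube-scheduler.
    spark-submit \
      --master k8s://https://<k8s-apiserver>:6443 \
      --deploy-mode cluster \
      --conf spark.kubernetes.container.image=<your-spark-image> \
      --conf spark.kubernetes.scheduler.name=volcano \
      --conf spark.kubernetes.driver.pod.featureSteps=org.apache.spark.deploy.k8s.features.VolcanoFeatureStep \
      --conf spark.kubernetes.executor.pod.featureSteps=org.apache.spark.deploy.k8s.features.VolcanoFeatureStep \
      --class <your.main.Class> \
      local:///path/to/your-app.jar

Note that Volcano itself has to be deployed in the cluster separately; these settings
only tell Spark to hand pod scheduling over to it.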