Hi,
I have a three-node k8s cluster (GKE) in Google Cloud with E2
standard machines that have 4 GB of system memory per vCPU, giving 4 vCPUs
and 16,384 MB of RAM per node.
Optimal sizing of the number of executors and of the CPU and memory
allocation per executor is important here. These are the assumptions:
1. You want to
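As a rough illustration of the per-node arithmetic involved, here is a sketch. The reserves for Kubernetes/OS daemons and the one-executor-per-node layout are assumptions for illustration, not a tuned recommendation:

```python
# Hypothetical sizing arithmetic for one e2-standard-4 node (4 vCPUs, 16 GB RAM).
# The daemon reserves and the overhead factor below are assumptions.
node_vcpus, node_mem_gb = 4, 16
reserved_vcpus, reserved_mem_gb = 1, 4   # assumed reserve per node for k8s/OS daemons
overhead_factor = 0.10                   # Spark's default non-heap memory overhead factor

executor_cores = node_vcpus - reserved_vcpus
# The JVM heap plus the memory overhead must fit in what is left on the node,
# so divide the remaining memory by (1 + overhead_factor).
executor_memory_gb = int((node_mem_gb - reserved_mem_gb) / (1 + overhead_factor))

print("--conf spark.executor.instances=3")  # one executor per node, 3 nodes
print(f"--conf spark.executor.cores={executor_cores}")
print(f"--conf spark.executor.memory={executor_memory_gb}g")
```

With these assumptions that works out to 3 executors of 3 cores and roughly 10 GB heap each; the right numbers for you depend on what else runs on the nodes.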
log4j 1.2.17 is not vulnerable to this. There is an existing CVE against it
from a log aggregation servlet; Cloudera products ship a patched release with
that servlet stripped out... ASF projects are not allowed to do that.
But: some recent Cloudera products do include log4j 2.x, so colleagues of
mine are busy patching.
FWIW, here is the Databricks statement on it. Databricks is not the same
as Spark, but it includes Spark of course.
https://databricks.com/blog/2021/12/13/log4j2-vulnerability-cve-2021-44228-research-and-assessment.html
Yes, the question is almost surely more whether user apps are affected, not
Spark itself.
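One way to start checking that is simply to look at which log4j jars a deployment or user app actually ships. A sketch; the jar-name pattern and the 2.15.0 cutoff for CVE-2021-44228 are my assumptions, and you would point it at $SPARK_HOME/jars or your application's dependency directory:

```python
# Hypothetical helper: list log4j jars in a directory and flag log4j-core
# 2.x below 2.15.0 (the CVE-2021-44228 JNDI-lookup range). log4j 1.x is
# not affected by that particular CVE.
import os
import re

def log4j_jars(jars_dir):
    """Return (jar_name, vulnerable_to_44228) pairs for log4j jars found."""
    results = []
    for name in sorted(os.listdir(jars_dir)):
        m = re.match(r"log4j(?:-core)?-(\d+)\.(\d+)\.(\d+)\.jar$", name)
        if not m:
            continue
        major, minor, patch = map(int, m.groups())
        vulnerable = (name.startswith("log4j-core")
                      and major == 2
                      and (minor, patch) < (15, 0))
        results.append((name, vulnerable))
    return results
```

Dependency-tree output from Maven or Gradle would catch shaded or transitive copies that a jar-directory scan misses, so treat this as a first pass only.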