Hi,

We have recently migrated from Spark 2.4.4 to Spark 3.0.1. We run Spark both
as a standalone deployment on virtual machines/bare metal and as a Kubernetes
deployment.

There is a kernel parameter named 'vm.swappiness', and we keep its value
at '1' in our standalone deployment. Now that we are moving to Kubernetes,
we see that this parameter is set to '60' on the Kubernetes worker nodes.
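For reference, this is how we check and adjust the value on a worker node (a
minimal sketch; the sysctl.d file name is just an example, and persisting the
change requires root on the node):

```shell
# Read the current swappiness (0-100; higher values make the kernel
# swap out anonymous memory more aggressively).
cat /proc/sys/vm/swappiness

# Lower it for the running kernel only (does not survive a reboot):
# sudo sysctl -w vm.swappiness=1

# Persist across reboots via a sysctl.d drop-in (example file name):
# echo 'vm.swappiness=1' | sudo tee /etc/sysctl.d/99-swappiness.conf
```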

My question is whether it is OK to keep such a high value
('vm.swappiness'=60) in a Kubernetes environment for Spark workloads.

Will such a high value of this kernel parameter have a performance impact on
Spark pods?
The Cloudera documentation below suggests not to set such a high value:

https://docs.cloudera.com/cloudera-manager/7.2.6/managing-clusters/topics/cm-setting-vmswappiness-linux-kernel-parameter.html

Any thoughts/suggestions on this are highly appreciated.

Regards
Jahar Tyagi
