We are going to deploy 20 physical Linux servers for use as an on-premises
Spark & HDFS on Kubernetes cluster. My question is: within this
architecture, is it best to run the pods directly on bare metal, under VMs
or system containers such as LXC, under an on-premises instance of
something like OpenStack, or something else altogether?
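
For concreteness, here is a minimal sketch of the kind of job the cluster
would run, assuming a hypothetical in-cluster HDFS service (the hostname
and path below are placeholders) - the question is really about what sits
underneath the driver and executor pods that execute it:

    import org.apache.spark.sql.SparkSession

    // Minimal sketch (hypothetical names): a job that reads from an
    // in-cluster HDFS service and would run as driver/executor pods
    // on whichever layer ends up underneath Kubernetes.
    object HdfsSmokeTest {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("hdfs-on-k8s-smoke-test")
          .getOrCreate()

        // "hdfs-namenode.hdfs.svc.cluster.local" is a placeholder
        // service address for an HDFS namenode inside the cluster.
        val lines = spark.read
          .text("hdfs://hdfs-namenode.hdfs.svc.cluster.local:8020/data/sample.txt")

        println(s"line count: ${lines.count()}")
        spark.stop()
      }
    }

Jobs like this would be submitted with spark-submit using
--master k8s://https://<api-server>:<port> and --deploy-mode cluster, so
the driver and executors all run as pods on whatever layer we choose.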

I am looking to gather any experience around this question as it relates
directly to the specific use case of Spark & HDFS on Kubernetes - I know
there are also general points to consider regardless of the use case.
