Hey, 

We have a cluster of 10 nodes, each with 128 GB of memory. We are about
to run Spark and Alluxio on the cluster. How should we allocate memory
between the Spark executor and the Alluxio worker on each machine? Are there
any recommendations? Thanks!
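
For concreteness, the kind of split we had in mind looks something like the
sketch below. The Spark property is from the standard Spark configuration;
the Alluxio key is our guess at the worker memory setting and may differ by
version, and the actual numbers are just placeholders, so please correct us:

```properties
# spark-defaults.conf (hypothetical split, not a recommendation)
# Leave headroom for the OS, Alluxio worker, and off-heap overhead.
spark.executor.memory        64g

# alluxio-site.properties (key name may vary by Alluxio version)
alluxio.worker.memory.size   32g
```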


Best,
Andy Li
