Hi

I'm using Spark 0.9.1 and Shark 0.9.1. My dataset does not fit into the memory
I have in my cluster setup, so I also want to use disk for caching. I believe
MEMORY_ONLY is the default storage level in Spark. If that's the case, how
can I change the storage level to MEMORY_AND_DISK in Spark?
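For a plain RDD, I would expect something along the lines of the sketch below
(untested against 0.9.1; the app name and sample data are just placeholders),
but I'm not sure whether this is the right approach, or how it carries over to
Shark's cached tables:

    import org.apache.spark.SparkContext
    import org.apache.spark.storage.StorageLevel

    object CacheDemo {
      def main(args: Array[String]) {
        val sc = new SparkContext("local[2]", "CacheDemo")

        // A small stand-in dataset; in practice this would be my real input.
        val rdd = sc.parallelize(1 to 1000000)

        // persist() with an explicit level: partitions that do not fit in
        // memory should be spilled to disk instead of being dropped.
        rdd.persist(StorageLevel.MEMORY_AND_DISK)

        println(rdd.count())
        sc.stop()
      }
    }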

Thanks,
Salih
