Hello everyone,

I use Spark 1.6.0 on YARN (EMR-4.3.0).

I persist my RDD with the MEMORY_AND_DISK_SER StorageLevel and use the Kryo serializer, roughly like this:
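(The app name and input path below are just placeholders; the persist/serializer setup is the part that matters.)

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    val conf = new SparkConf()
      .setAppName("MemoryAndDiskExample")  // placeholder name
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    val sc = new SparkContext(conf)

    // hypothetical input path, stands in for my real job's data
    val rdd = sc.textFile("hdfs:///path/to/input")
    rdd.persist(StorageLevel.MEMORY_AND_DISK_SER)
    rdd.count()  // materialize the RDD so the blocks get cached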

I noticed that Spark stores some RDD blocks on disk even though the executors have plenty of memory available. See the screenshot:
http://postimg.org/image/gxpsw1fk1/

Any ideas why it might happen?

Thank you
Alex
