There is one thing that I am confused about.
Spark itself is implemented in Scala. Does that mean we can run any
Scala code on the Spark framework? And what is the difference between
executing Scala code on a normal system and on Spark?
The reason for my question is the following:
I had a variable
*val temp = <some operations>*
created inside a loop. To manually evict it from the cache, I was
calling *temp.unpersist()* at the end of every iteration. This returned
an error saying that *value unpersist is not a member of Int*, which
means that temp is an Int.
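
Here is a minimal sketch of the pattern (the operations are just
placeholders, my real code is more involved) that reproduces the error
for me:

    import org.apache.spark.{SparkConf, SparkContext}

    object UnpersistQuestion {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("unpersist-question").setMaster("local[*]"))

        for (i <- 1 to 3) {
          // Placeholder for <some operations>. reduce is an action, so
          // temp ends up as a plain Scala Int rather than an RDD.
          val temp = sc.parallelize(1 to 100).map(_ * i).reduce(_ + _)
          // temp.unpersist() // does not compile: value unpersist is not a member of Int
        }

        sc.stop()
      }
    }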
Can someone explain why I was not able to call *unpersist* on *temp*?

Thank you
