IMHO, you are making a mistake. Spark manages tasks and cores internally. When you open new threads inside an executor, you are effectively over-provisioning the executor (e.g. tasks scheduled on the other cores will be preempted).
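For what it's worth, if a task really does need to spawn its own threads, the scheduler can at least be told about them via spark.task.cpus so the cores are not double-booked. A minimal sketch (my own illustration, not from the blog post; the thread and core counts are placeholders):

    import org.apache.spark.{SparkConf, SparkContext}

    object TaskCpusSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("task-cpus-sketch")
          // Reserve 4 cores for every task, matching the number of threads
          // a single task is assumed to spawn internally (placeholder value).
          .set("spark.task.cpus", "4")
          // Executor size chosen only for illustration.
          .set("spark.executor.cores", "8")

        val sc = new SparkContext(conf)
        // With these settings at most 8 / 4 = 2 tasks run concurrently per
        // executor, so the extra threads each task starts have cores to run on.
        sc.stop()
      }
    }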
On 26 January 2016 at 07:59, Elango Cheran <elango.che...@gmail.com> wrote:
> Hi everyone,
> I've gone through the effort of figuring out how to modify a Spark job to
> have an operation become multi-threaded inside an executor. I've written
> up an explanation of what worked, what didn't work, and why:
>
> http://www.elangocheran.com/blog/2016/01/using-clojure-to-create-multi-threaded-spark-jobs/
>
> I think the ideas there should be applicable generally -- which would
> include Scala and Java since the JVM is genuinely multi-threaded -- and
> therefore may be of interest to others. I will need to convert this code
> to Scala for personal requirements in the near future, anyways.
>
> I hope this helps.
>
> -- Elango
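For reference, the general pattern the post describes (per-partition work fanned out to a thread pool inside a task) might look roughly like the Scala sketch below. This is my own rewrite under assumptions, not the code from the blog post, which is in Clojure; slowSquare and the thread count of 4 are illustrative only:

    import java.util.concurrent.Executors
    import scala.concurrent.{Await, ExecutionContext, Future}
    import scala.concurrent.duration.Duration
    import org.apache.spark.{SparkConf, SparkContext}

    object MultiThreadedPartitionSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("mt-partition-sketch"))

        val result = sc.parallelize(1 to 1000, numSlices = 4).mapPartitions { iter =>
          // Thread pool local to this task; 4 threads is a placeholder.
          val pool = Executors.newFixedThreadPool(4)
          implicit val ec = ExecutionContext.fromExecutorService(pool)

          // Stand-in for whatever per-record work benefits from extra threads.
          def slowSquare(x: Int): Int = { Thread.sleep(1); x * x }

          // Submit every record in the partition to the pool, then wait.
          val futures = iter.map(x => Future(slowSquare(x))).toList
          val out = futures.map(f => Await.result(f, Duration.Inf))
          pool.shutdown()
          out.iterator
        }.collect()

        println(result.length)
        sc.stop()
      }
    }

As noted above, doing this without also raising spark.task.cpus means the threads compete with whatever other tasks Spark schedules on the same executor.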