The problem is that SparkPi uses Math.random(), which is a synchronized method,
so it can’t scale to multiple cores. In fact it will be slower on multiple
cores due to lock contention. Try another example and you’ll see better
scaling. I think we’ll have to update SparkPi to create a new Random
Hi,
Relatively new to Spark; I have tried running the SparkPi example on a
standalone 12-core, three-machine cluster. What I'm failing to understand is
that running this example with a single slice gives better performance than
using 12 slices. The same was the case when I was using parallelize