>> From: "Andy Konwinski"
>> To: dev@spark.apache.org
>> Sent: Tuesday, May 20, 2014 4:06:33 PM
>> Subject: Re: Scala examples for Spark do not work as written in documentation
>>
>> I fixed the bug, but I kept the parameter "i" instead of "_" [...]
We still need to change "spark" to "sc". (I noticed because this was a
speedbump for a colleague who is trying out Spark.)
thanks,
wb
----- Original Message -----
> From: "Andy Konwinski"
> To: dev@spark.apache.org
> Sent: Tuesday, May 20, 2014 4:06:33 PM
>
I fixed the bug, but I kept the parameter "i" instead of "_" since that (1)
keeps it more parallel to the Python and Java versions, which also use
functions with a named variable, and (2) doesn't require readers to know
this particular use of the "_" syntax in Scala.
Thanks for catching this, Glenn.
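For illustration, here is a minimal plain-Scala sketch of the two equivalent closure styles being discussed (no SparkContext needed; the range and values are hypothetical, chosen only to show that the named parameter and the placeholder behave identically):

```scala
object ClosureStyles extends App {
  // Named parameter, as kept in the docs example; "i" is bound but unused.
  val withNamed = (1 to 4).map { i => 1 }

  // Scala's placeholder form; "_" marks a parameter we deliberately ignore.
  val withPlaceholder = (1 to 4).map { _ => 1 }

  // Both produce the same result.
  println(withNamed == withPlaceholder) // true
}
```

Readers unfamiliar with `_` can always fall back on the named-parameter form; the compiler treats them the same.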
Why does the reduce function only work on sums of keys of the same type,
and not support other functional forms?
I am having trouble with another example where, instead of 1s and 0s, the
output of the map function is something like A=(1,2) and B=(3,4). I need a
reduce function that can return something of the same form.
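The question is cut off in the archive, but as a hedged sketch: `reduce` must return the same type as its inputs, so pair values can be combined element-wise. The data below is hypothetical, mirroring the A=(1,2), B=(3,4) example; the same combining function would work on an RDD's reduce:

```scala
object PairReduce extends App {
  // Hypothetical pair values, as produced by a map step in the question above.
  val pairs = Seq((1, 2), (3, 4))

  // Combine the tuples element-wise rather than collapsing to a single number.
  val summed = pairs.reduce { (a, b) => (a._1 + b._1, a._2 + b._2) }

  println(summed) // (4,6)
}
```

On an RDD the identical function can be passed to `rdd.reduce`, since Spark only requires the combiner to be associative and to preserve the element type.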
Sorry, looks like an extra line got inserted in there. One more try:
val count = spark.parallelize(1 to NUM_SAMPLES).map { _ =>
  val x = Math.random()
  val y = Math.random()
  if (x*x + y*y < 1) 1 else 0
}.reduce(_ + _)
On Fri, May 16, 2014 at 12:36 PM, Mark Hamstra wrote:
> Actually, the better way to write the multi-line closure would be:
Actually, the better way to write the multi-line closure would be:
val count = spark.parallelize(1 to NUM_SAMPLES).map { _ =>
  val x = Math.random()
  val y = Math.random()
  if (x*x + y*y < 1) 1 else 0
}.reduce(_ + _)
On Fri, May 16, 2014 at 9:41 AM, GlennStrycker wrote:
> On the webpage http
Thanks for pointing it out. We should update the website to fix the code.
val count = spark.parallelize(1 to NUM_SAMPLES).map { i =>
  val x = Math.random()
  val y = Math.random()
  if (x*x + y*y < 1) 1 else 0
}.reduce(_ + _)
println("Pi is roughly " + 4.0 * count / NUM_SAMPLES)
On Fri, May 16