> val x = random * 2 - 1 (breakpoint-1)
> val y = random * 2 - 1
> if (x*x + y*y < 1) 1 else 0
> }.reduce(_ + _)
> println("Pi is roughly " + 4.0 * count / (n - 1))
> spark.stop()
> }
> }
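The truncated snippet above matches the standard SparkPi Monte Carlo example. For reference, here is a plain-Scala sketch of the same computation that runs without a cluster (the object and method names `PiSketch` and `estimatePi`, and the fixed seed, are mine, not from the thread):

```scala
import scala.util.Random

object PiSketch {
  // Monte Carlo estimate of Pi: sample points in the [-1, 1] square and
  // count how many fall inside the unit circle.
  def estimatePi(n: Int, seed: Long = 42L): Double = {
    val rng = new Random(seed)
    val count = (1 to n).map { _ =>
      val x = rng.nextDouble() * 2 - 1
      val y = rng.nextDouble() * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.sum
    4.0 * count / n
  }

  def main(args: Array[String]): Unit =
    println("Pi is roughly " + estimatePi(100000))
}
```

Because this version uses eager Scala collections instead of an RDD, a breakpoint inside the lambda is hit immediately, which is the key difference from the Spark version discussed below.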
>
> --------------
> From:
> Sent: September 16, 2016, 22:27
> To: chen yong
> Cc: user@spark.apache.org
> Subject: Re: Reply: it does not stop at breakpoints which is in an anonymous function
No, that's not the right way of doing it.
Remember that RDD operations are lazy, for performance reasons. Only when
you call one of the action methods (count, reduce, collect, ...) do they
execute all the functions you have applied to create that RDD.
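The laziness described above can be demonstrated without a Spark cluster: Scala's `Iterator.map` is lazy in the same way an RDD transformation is, so a breakpoint (or side effect) in the lambda does not fire until a terminal operation, the analogue of an RDD action, is called. A minimal sketch (the names `LazyDemo` and `run` are mine):

```scala
object LazyDemo {
  // Returns (lambda runs before the action, runs after, computed total).
  def run(): (Int, Int, Int) = {
    var evaluated = 0
    // Declaring the transformation executes nothing: like an RDD
    // transformation, Iterator.map is lazy, so a breakpoint placed in
    // this lambda would not be hit at this point.
    val squares = Iterator(1, 2, 3).map { i => evaluated += 1; i * i }
    val before = evaluated   // still 0 here: the lambda has not run
    val total = squares.sum  // terminal op (the "action") forces evaluation
    (before, evaluated, total)
  }

  def main(args: Array[String]): Unit = {
    val (before, after, total) = run()
    println(s"before action: $before, after action: $after, total: $total")
  }
}
```

This is why the debugger only stops inside the anonymous function once `reduce` (or another action) is reached.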
It would help if you can post your c
Sorry, it wasn't the count, it was the reduce method that retrieves
information from the RDD.
It has to go through all the RDD values to return the result.
2016-09-16 11:18 GMT-03:00 chen yong :
> Dear Dirceu,
>
>
> I am totally confused. In your reply you mentioned "...the count does
> that, ..."
Also, I wonder what the right way to debug a Spark program is. If I use ten
anonymous functions in one Spark program, then to debug each of them I have to
place a COUNT action in advance and then remove it after debugging. Is that the
right way?
From: Dirceu Semig
Dear Dirceu,
I am totally confused. In your reply you mentioned "...the count does that,
..." However, in the code snippet shown in the attachment file
FelixProblem.png of your previous mail, I cannot find any 'count' ACTION
being called. Would you please clearly show me the line it is whi