Thank you Jeff.

I have filed a JIRA at the following link:

https://issues.apache.org/jira/browse/SPARK-13634

For some reason the SparkContext is being pulled into the referencing
environment of the closure.
I also had no problems with batch jobs.
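
As a stopgap in the shell, marking the alias @transient might be worth
trying; the idea (just a sketch, not a verified fix for the JIRA above) is
that the REPL's line wrapper would then skip the alias when serializing the
closure's enclosing object:

@transient val newSC = sc  // assumption: @transient keeps the alias out of the serialized wrapper
val temp = 10
val newRDD = newSC.parallelize(0 to 100).map(p => p + temp)  // only temp should be captured now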

On Wed, Mar 2, 2016 at 7:18 PM, Jeff Zhang <[email protected]> wrote:

> I can reproduce it in spark-shell, but it works for batch jobs. Looks
> like a Spark REPL issue.
>
> On Thu, Mar 3, 2016 at 10:43 AM, Rahul Palamuttam <[email protected]>
> wrote:
>
>> Hi All,
>>
>> We recently came across this issue when using the spark-shell and
>> Zeppelin.
>> If we assign the SparkContext variable (sc) to a new variable and
>> reference another variable in an RDD lambda expression, we get a
>> "Task not serializable" exception.
>>
>> The following three lines of code illustrate this:
>>
>> val temp = 10
>> val newSC = sc
>> val newRDD = newSC.parallelize(0 to 100).map(p => p + temp)
>>
>> I am not sure if this is a known issue or whether we should file a JIRA
>> for it.
>> We originally came across this bug in the SciSpark project.
>>
>> Best,
>>
>> Rahul P
>>
>
>
>
> --
> Best Regards
>
> Jeff Zhang
>
