… loves the type safety it provides). Not even sure if changing to
DataFrame will for sure solve the issue.

On Wed, Feb 3, 2016 at 1:33 PM, Mohammed Guller wrote:

> Nirav,
>
> Sorry to hear about your experience with Spark; however, "sucks" is a
> very strong word. Many organizations are processing a lot more than
> 150GB of data with Spark.
>
> Mohammed
> Author: Big Data Analytics with Spark
> <http://www.amazon.com/Big-Data-Analytics-Spark-Practitioners/dp/1484209656/>
From: Nirav Patel [mailto:npa...@xactlycorp.com]
Sent: Wednesday, February 3, 2016 11:31 AM
To: Stefan Panayotov
Cc: Jim Green; Ted Yu; Jakob Odersky; user@spark.apache.org
Subject: Re: Spark 1.5.2 memory error
Hi Stefan,
Welcome to the OOM - heap space club. I have been struggling with s…
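
A minimal sketch of what the "changing to DataFrame" idea above could look
like on Spark 1.5; the case class, path, and column names are invented for
illustration, since the actual job code never appears in this thread. The
DataFrame route gives up some compile-time type safety, but the aggregation
runs over Tungsten's compact binary rows, which tends to lower GC and heap
pressure compared to boxed JVM objects.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext
    import org.apache.spark.sql.functions.sum

    // Illustrative schema; the real data set is not shown in the thread.
    case class Sale(customerId: String, amount: Double)

    object DataFrameSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("df-sketch"))
        val sqlContext = new SQLContext(sc)
        import sqlContext.implicits._

        val sales = sc.textFile("hdfs:///tmp/sales.csv").map { line =>
          val Array(id, amt) = line.split(",")
          Sale(id, amt.toDouble)
        }

        // Typed RDD version: keeps compile-time type safety.
        val rddTotals = sales.map(s => (s.customerId, s.amount)).reduceByKey(_ + _)

        // DataFrame version: Catalyst plans the aggregation over binary rows,
        // which usually eases memory pressure for large shuffles.
        val dfTotals = sales.toDF().groupBy("customerId").agg(sum("amount"))

        dfTotals.show()
        rddTotals.take(5).foreach(println)
        sc.stop()
      }
    }
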
> …_01_01: 319.8 MB of 1.5 GB physical memory used; 1.7 GB of 3.1 GB
> virtual memory used
> 2016-02-03 17:33:22,627 INFO nodemanager.NodeStatusUpdaterImpl
> (NodeStatusUpdaterImpl.java:removeOrTrackCompletedContainersFromContext(529))
> - Removed completed containers from NM context:
> [container_1454509557526_0014_01_93]
>
> I'd appreciate any suggestions.
>
> Thanks,
>
> Stefan Panayotov, PhD
> Home: 610-355-0919
> Cell: 610-517-5586
> email:
For the memoryOverhead I have the default of 10% of 16g, and the Spark version
is 1.5.2.
Stefan Panayotov, PhD
Sent from Outlook Mail for Windows 10 phone
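
(For context: the running-on-yarn page linked in Ted's reply below gives the
default overhead as executor memory × 0.10 with a 384 MB floor, so with 16g
executors that works out to roughly 1.6 GB of overhead, i.e. about 17.6 GB
requested from YARN per executor container. Containers killed at that limit
usually mean off-heap usage has outgrown the 10% allowance.)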
From: Ted Yu
Sent: Tuesday, February 2, 2016 4:52 PM
To: Jakob Odersky
Cc: Stefan Panayotov; user@spark.apache.org
Subject: Re: Spark 1.5.2 memory error
What value do you use for spark.yarn.executor.memoryOverhead?
Please see https://spark.apache.org/docs/latest/running-on-yarn.html for a
description of the parameter.
Which Spark release are you using?
Cheers
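
A common first step when YARN kills executors for exceeding the container
limit is to raise that overhead explicitly instead of relying on the 10%
default. A minimal sketch; the 16g heap and 4096 MB overhead below are chosen
purely for illustration, nothing in this thread recommends those numbers.

    import org.apache.spark.{SparkConf, SparkContext}

    object OverheadSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("overhead-sketch")
          .set("spark.executor.memory", "16g")
          // Extra memory (MB) YARN reserves per executor container on top of
          // the JVM heap: off-heap buffers, netty, thread stacks, etc.
          .set("spark.yarn.executor.memoryOverhead", "4096")
        val sc = new SparkContext(conf)
        // ... job code ...
        sc.stop()
      }
    }

The same property can also be passed at submit time with
--conf spark.yarn.executor.memoryOverhead=4096 or set in spark-defaults.conf.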
On Tue, Feb 2, 2016 at 1:38 PM, Jakob Odersky wrote:
Can you share some code that produces the error? It is probably not
due to spark but rather the way data is handled in the user code.
Does your code call any reduceByKey actions? These are often a source
for OOM errors.
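
To make that point concrete with an invented example (again, the actual job
code never made it into the thread): reduceByKey is usually safe when the
merged value stays small, and becomes an OOM source when the merge function
accumulates large per-key collections.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.rdd.RDD

    object ReduceByKeySketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("reduceByKey-sketch"))

        // Toy stand-in for a large keyed data set.
        val events: RDD[(String, String)] =
          sc.parallelize(Seq("a" -> "x", "a" -> "y", "b" -> "z"))

        // Risky pattern: the merge function concatenates collections, so a
        // hot key grows into one huge in-memory Seq on a single executor.
        val perKeyValues = events.mapValues(Seq(_)).reduceByKey(_ ++ _)

        // Safer pattern: keep the merged value small (a running count here),
        // so memory per key stays constant regardless of key skew.
        val perKeyCounts = events.mapValues(_ => 1L).reduceByKey(_ + _)

        perKeyValues.collect().foreach(println)
        perKeyCounts.collect().foreach(println)
        sc.stop()
      }
    }
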
On Tue, Feb 2, 2016 at 1:22 PM, Stefan Panayotov wrote:
> Hi Guys,
>
> I need…