Thanks Imran. I will give it a shot when I have some time.
Nezih
On Thu, Apr 14, 2016 at 9:25 AM Imran Rashid wrote:
> Hi Nezih,
>
> I just reported a somewhat similar issue, and I have a potential fix --
> SPARK-14560, looks like you are already watching it :). You can try out
> that patch, you have to explicitly enable the change in behavior with
> "spark.shuffle.spillAfterRead=true".
Hi Nezih,
I just reported a somewhat similar issue, and I have a potential fix --
SPARK-14560, looks like you are already watching it :). You can try out
that patch, you have to explicitly enable the change in behavior with
"spark.shuffle.spillAfterRead=true". Honestly, I don't think these issue
Nope, I didn't have a chance to track the root cause, and IIRC we didn't
observe it when dyn. alloc. is off.
On Mon, Apr 4, 2016 at 6:16 PM Reynold Xin wrote:
> BTW do you still see this when dynamic allocation is off?
>
> On Mon, Apr 4, 2016 at 6:16 PM, Reynold Xin wrote:
>
>> Nezih,
>>
>> Have you had a chance to figure out why this is happening?
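Since the observation above is that the error did not show up with dynamic allocation off, one way to isolate that factor is to repeat the run with it explicitly disabled; a hedged sketch (app and class names are placeholders):

```shell
# Baseline run with dynamic allocation disabled, to check whether the
# "Unable to acquire memory" error still appears.
# com.example.MyJob / my-job.jar are placeholders.
spark-submit \
  --master yarn \
  --conf spark.dynamicAllocation.enabled=false \
  --conf spark.shuffle.service.enabled=false \
  --class com.example.MyJob \
  my-job.jar
```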
Nezih,
Have you had a chance to figure out why this is happening?
On Tue, Mar 22, 2016 at 1:32 AM, james wrote:
> I guess different workloads cause different results?
>
>
>
> --
> View this message in context:
> http://apache-spark-developers-list.1001551.n3.nabble.com/java-lang-OutOfMemoryError-Unable-to-acquire-bytes-of-memory-tp16773p16789.html
BTW do you still see this when dynamic allocation is off?
On Mon, Apr 4, 2016 at 6:16 PM, Reynold Xin wrote:
> Nezih,
>
> Have you had a chance to figure out why this is happening?
>
>
> On Tue, Mar 22, 2016 at 1:32 AM, james wrote:
>
>> I guess different workloads cause different results?
I guess different workloads cause different results?
--
View this message in context:
http://apache-spark-developers-list.1001551.n3.nabble.com/java-lang-OutOfMemoryError-Unable-to-acquire-bytes-of-memory-tp16773p16789.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com
Interesting. After experimenting with various parameters, increasing
spark.sql.shuffle.partitions and decreasing spark.buffer.pageSize helped my
job go through. BTW, I will be happy to help get this issue fixed.
Nezih
On Tue, Mar 22, 2016 at 1:07 AM james wrote:
> Hi,
> I also hit the 'Unable to acquire memory' issue using Spark 1.6.1 with
> dynamic allocation on YARN. My case happened when setting
> spark.sql.shuffle.partitions larger than 200.
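The workaround described above can be written as explicit configuration; a minimal sketch for Spark 1.6.x, where the specific values are illustrative rather than tuned recommendations, and the app and class names are placeholders:

```shell
# Illustrative values only: more shuffle partitions make each task's
# shuffle data smaller, and a smaller page size reduces the minimum
# memory each operator requests from the memory manager (Spark 1.6.x).
# com.example.MyJob / my-job.jar are placeholders.
spark-submit \
  --master yarn \
  --conf spark.sql.shuffle.partitions=800 \
  --conf spark.buffer.pageSize=2m \
  --class com.example.MyJob \
  my-job.jar
```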
Hi,
I also hit the 'Unable to acquire memory' issue using Spark 1.6.1 with
dynamic allocation on YARN. My case happened when setting
spark.sql.shuffle.partitions larger than 200. Judging from the error stack,
it differs from the issue reported by Nezih, so I'm not sure whether they
share the same root cause.
Thanks
James
Andrew, thanks for the suggestion, but unfortunately it didn't work --
still getting the same exception.
On Mon, Mar 21, 2016 at 10:32 AM Andrew Or wrote:
> @Nezih, can you try again after setting `spark.memory.useLegacyMode` to
> true? Can you still reproduce the OOM that way?
>
> 2016-03-21 10:29 GMT-07:00 Nezih Yigitbasi:
@Nezih, can you try again after setting `spark.memory.useLegacyMode` to
true? Can you still reproduce the OOM that way?
2016-03-21 10:29 GMT-07:00 Nezih Yigitbasi:
> Hi Spark devs,
> I am using 1.6.0 with dynamic allocation on yarn. I am trying to run a
> relatively big application with 10s of j
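Andrew's suggestion above amounts to switching back to the pre-1.6 static memory manager, which is one way to test whether the unified memory manager introduced in 1.6 is implicated; a minimal sketch (app and class names are placeholders):

```shell
# Fall back to the legacy (pre-1.6) static memory manager to check
# whether the unified memory manager is involved in the OOM.
# com.example.MyJob / my-job.jar are placeholders.
spark-submit \
  --master yarn \
  --conf spark.memory.useLegacyMode=true \
  --class com.example.MyJob \
  my-job.jar
```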