> ------------------ Original Mail ------------------
> *Sent:* Tuesday, August 4, 2015, 10:28 PM
> *To:* "Igor Berman";
> *Cc:* "Sea" <261810...@qq.com>; "Barak Gitsis"; "user@spark.apache.org"; "rxin"; "joshrosen"; "davies";
> *Subject:* Re: About memory leak in spark 1.4.1
w.r.t. spark.deploy.spreadOut, here is the scaladoc:

// As a temporary workaround before better ways of configuring memory, we allow users to set
// a flag that will perform round-robin scheduling across the nodes (spreading out each app
// among all the nodes) instead of trying to consolidate each app onto a small # of nodes.
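(For readers following along: spark.deploy.spreadOut defaults to true and is read by the standalone Master, so it is a cluster-level setting rather than a per-application one. A paraphrased sketch of how the Master picks up the flag described by the scaladoc above; the exact field name may differ across versions:)

// (sketch) inside the standalone Master, where `conf` is the Master's SparkConf:
val spreadOutApps = conf.getBoolean("spark.deploy.spreadOut", true)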
>> ...iles true
>> spark.io.compression.codec org.apache.spark.io.LZ4CompressionCodec
>>
>> ------------------ Original Mail ------------------
>> *From:* "Igor Berman";
>> *Sent:* Monday, August 3, 2015, 7:56 PM
>> *To:* "Sea" <261810...@qq.com>;
>> *Cc:* "Barak Gitsis"; "Ted Yu"; "user@spark.apache.org"; "rxin"; "joshrosen"; "davies";
>> *Subject:* Re: About memory leak in spark 1.4.1
> ------------------ Original Mail ------------------
> *From:* "Barak Gitsis";
> *Sent:* Sunday, August 2, 2015, 9:55 PM
> *To:* "Sea" <261810...@qq.com>; "Ted Yu";
> *Cc:* "user@spark.apache.org"; "rxin" <r...@databricks.com>; "joshrosen"; "davies" <dav...@databricks.com>;
> *Subject:* Re: About memory leak in spark 1.4.1
spark uses a lot more than heap memory, it is the expected behavior. In 1.4
off-heap memory usage is [...]

> because it is still in heap memory.
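(To make the heap-vs-process distinction concrete, here is an illustrative sketch, not from the thread, that logs what the JVM itself tracks. Native allocations, e.g. compression and network direct buffers, show up in the process RSS but only partially here:)

import java.lang.management.{BufferPoolMXBean, ManagementFactory}
import scala.collection.JavaConverters._

object MemCheck {
  def main(args: Array[String]): Unit = {
    val mx = ManagementFactory.getMemoryMXBean
    // Heap can look half-empty while the process as a whole keeps growing:
    println(s"heap used:     ${mx.getHeapMemoryUsage.getUsed / 1024 / 1024} MB")
    println(s"non-heap used: ${mx.getNonHeapMemoryUsage.getUsed / 1024 / 1024} MB")
    // Direct/mapped buffer pools are one visible slice of off-heap usage:
    ManagementFactory.getPlatformMXBeans(classOf[BufferPoolMXBean]).asScala
      .foreach(p => println(s"${p.getName} buffers: ${p.getMemoryUsed / 1024 / 1024} MB"))
  }
}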
http://spark.apache.org/docs/latest/tuning.html does mention
spark.storage.memoryFraction

>> ------------------ Original Mail ------------------
>> *From:* "Barak Gitsis";
>> *Sent:* Sunday, August 2, 2015, 4:11 PM
>> *To:* "Sea" <261810...@qq.com>; "user";
>> *Cc:* "rxin"; "joshrosen"; "davies";
>> *Subject:* Re: About memory leak in spark 1.4.1
@qq.com>; "user";
> *抄送:* "rxin"; "joshrosen";
> "davies";
> *主题:* Re: About memory leak in spark 1.4.1
>
> Hi,
> reducing spark.storage.memoryFraction did the trick for me. Heap doesn't
> get filled because it is reserved..
>
?) 4:11
??: "Sea"<261810...@qq.com>; "user";
: "rxin"; "joshrosen";
"davies";
: Re: About memory leak in spark 1.4.1
Hi,reducing spark.storage.memoryFraction did the trick for me. Heap doesn't get
filled because it
Hi,
reducing spark.storage.memoryFraction did the trick for me. Heap doesn't
get filled because it is reserved..
My reasoning is:
I give the executor all the memory I can give it, so that makes it a boundary.
From here I try to make the best use of memory I can.
storage.memoryFraction is in a sense us[...]
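(As an illustration of that kind of tuning, a minimal sketch with hypothetical values, using the 1.4-era keys; spark.storage.memoryFraction defaulted to 0.6 and these fractions were later superseded by unified memory management in Spark 1.6:)

import org.apache.spark.{SparkConf, SparkContext}

object MemoryFractionSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical values for illustration only: fix the executor heap as the
    // boundary, then shrink the fraction reserved for cached blocks.
    val conf = new SparkConf()
      .setAppName("memoryFraction-sketch")
      .set("spark.executor.memory", "8g")         // the boundary
      .set("spark.storage.memoryFraction", "0.2") // down from the 0.6 default
    val sc = new SparkContext(conf)
    try {
      println(sc.parallelize(1 to 1000).sum())
    } finally {
      sc.stop()
    }
  }
}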
Hi, all
I upgraded Spark to 1.4.1 and many applications failed... I find the heap
memory is not full, but the CoarseGrainedExecutorBackend process takes more
memory than I expect, and it keeps increasing as time goes on; finally it goes
beyond the max limit of the server, and the worker dies.
An[...]