Re: About memory leak in spark 1.4.1

2015-09-28 Thread Jon Chase
> *Sent:* Tuesday, August 4, 2015, 10:28 PM > *To:* "Igor Berman"; > *Cc:* "Sea"<261810...@qq.com>; "Barak Gitsis"; "user@spark.apache.org"; "rxin"; "joshrosen"; "davies"; > *Subject:* Re: About memory leak in spark 1.4.1

Re: About memory leak in spark 1.4.1

2015-08-05 Thread Sea
"Barak Gitsis"; "user@spark.apache.org"; "rxin"; "joshrosen"; "davies"; *Subject:* Re: About memory leak in spark 1.4.1 w.r.t. spark.deploy.spreadOut, here is the scaladoc: // As a temporary workaround before better ways of configuring memory, we al
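
A note on the spark.deploy.spreadOut mention above: it is a standalone-master property, typically set via SPARK_MASTER_OPTS in conf/spark-env.sh on the master host. A minimal sketch; the value shown is illustrative and not a recommendation from this thread:

    # conf/spark-env.sh on the standalone master (illustrative only)
    # spark.deploy.spreadOut=true (the default) spreads each application's executors
    # across all workers; false consolidates them onto as few workers as possible.
    export SPARK_MASTER_OPTS="-Dspark.deploy.spreadOut=false"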

Re: About memory leak in spark 1.4.1

2015-08-04 Thread Ted Yu
spark.io.compression.codec org.apache.spark.io.LZ4CompressionCodec >> >> -- Original Message -- >> *From:* "Igor Berman"; >> *Sent:* Monday, August 3, 2015, 7:56 PM >> *To:* "Sea"<261810...@qq.com>; >>

Re: About memory leak in spark 1.4.1

2015-08-04 Thread Barak Gitsis
spark.shuffle.consolidateFiles true >> spark.io.compression.codec org.apache.spark.io.LZ4CompressionCodec >> >> -- Original Message -- >> *From:* "Igor Berman"; >> *Sent:* Monday, August 3, 2015, 7:56 PM >> *To:* "Sea"<261810...
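
The configuration fragments quoted in the previews above appear to come from the reporter's spark-defaults.conf; reassembled from those fragments alone, the entries would look roughly like this:

    spark.shuffle.consolidateFiles    true
    spark.io.compression.codec        org.apache.spark.io.LZ4CompressionCodec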

Re: About memory leak in spark 1.4.1

2015-08-04 Thread Igor Berman
> *Sent:* Monday, August 3, 2015, 7:56 PM > *To:* "Sea"<261810...@qq.com>; > *Cc:* "Barak Gitsis"; "Ted Yu"; "user@spark.apache.org"; "rxin"; "joshrosen"; "davies"; > *Subject:* Re: About memory leak in spark 1.4.1

Re: About memory leak in spark 1.4.1

2015-08-04 Thread Sea
-- Original Message -- *From:* "Igor Berman"; *Sent:* Monday, August 3, 2015, 7:56 PM *To:* "Sea"<261810...@qq.com>; *Cc:* "Barak Gitsis"; "Ted Yu"; "user@spark.apache.org"; "rxin"; "joshrosen"; "

Re: About memory leak in spark 1.4.1

2015-08-03 Thread Igor Berman
> -- Original Message -- > *From:* "Barak Gitsis"; > *Sent:* Sunday, August 2, 2015, 9:55 PM > *To:* "Sea"<261810...@qq.com>; "Ted Yu"; > *Cc:* "user@spark.apache.org"; "rxin"<r...@databricks.com>; "

Re: About memory leak in spark 1.4.1

2015-08-03 Thread Barak Gitsis
9:55 PM > *To:* "Sea"<261810...@qq.com>; "Ted Yu"; > *Cc:* "user@spark.apache.org"; "rxin"<r...@databricks.com>; "joshrosen"; "davies"<dav...@databricks.com>; > *Subject:* Re: About memory leak in spark 1.4.1 > >

Re: About memory leak in spark 1.4.1

2015-08-02 Thread Sea
261810...@qq.com>; "Ted Yu"; *Cc:* "user@spark.apache.org"; "rxin"; "joshrosen"; "davies"; *Subject:* Re: About memory leak in spark 1.4.1 Spark uses a lot more than heap memory; it is the expected behavior. In 1.4 off-heap memory usage is
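
Background for the off-heap point above: direct (NIO/Netty) buffers are one common source of executor memory outside the heap, and the generic JVM flag -XX:MaxDirectMemorySize can cap that class of allocation when passed through spark.executor.extraJavaOptions. A hedged illustration only; the value is arbitrary and this is not a remedy proposed in the thread:

    # conf/spark-defaults.conf (illustrative; 512m is an assumed value, not from the thread)
    spark.executor.extraJavaOptions    -XX:MaxDirectMemorySize=512m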

Re: About memory leak in spark 1.4.1

2015-08-02 Thread Barak Gitsis
> because it is still in heap memory. >> >> -- Original Message -- >> *From:* "Barak Gitsis"; >> *Sent:* Sunday, August 2, 2015, 4:11 PM >> *To:* "Sea"<261810...@qq.com>; "user"; >> *Cc:* "rxin"

Re: About memory leak in spark 1.4.1

2015-08-02 Thread Sea
uot;Sea"<261810...@qq.com>; : "Barak Gitsis"; "user@spark.apache.org"; "rxin"; "joshrosen"; "davies"; : Re: About memory leak in spark 1.4.1 http://spark.apache.org/docs/latest/tuning.html does mention spark.storage.memoryFraction

Re: About memory leak in spark 1.4.1

2015-08-02 Thread Ted Yu
@qq.com>; "user"; > *抄送:* "rxin"; "joshrosen"; > "davies"; > *主题:* Re: About memory leak in spark 1.4.1 > > Hi, > reducing spark.storage.memoryFraction did the trick for me. Heap doesn't > get filled because it is reserved.. >

Re: About memory leak in spark 1.4.1

2015-08-02 Thread Sea
4:11 PM *To:* "Sea"<261810...@qq.com>; "user"; *Cc:* "rxin"; "joshrosen"; "davies"; *Subject:* Re: About memory leak in spark 1.4.1 Hi, reducing spark.storage.memoryFraction did the trick for me. Heap doesn't get filled because it

Re: About memory leak in spark 1.4.1

2015-08-02 Thread Barak Gitsis
Hi, reducing spark.storage.memoryFraction did the trick for me. Heap doesn't get filled because it is reserved. My reasoning is: I give the executor all the memory I can give it, so that makes it a boundary. From here I try to make the best use of memory I can. storage.memoryFraction is in a sense us
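
A minimal sketch of the tuning described above, assuming a Spark 1.x application where the executor heap is treated as the fixed boundary; the values are illustrative, not taken from the thread:

    // Spark 1.x: shrink the storage pool so more of the fixed executor heap is
    // left for execution and user code (spark.storage.memoryFraction defaults to 0.6).
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("storage-fraction-example")         // hypothetical application name
      .set("spark.executor.memory", "8g")             // the per-executor boundary
      .set("spark.storage.memoryFraction", "0.2")     // reduced from the 0.6 default
    val sc = new SparkContext(conf)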

About memory leak in spark 1.4.1

2015-08-01 Thread Sea
Hi, all. I upgraded Spark to 1.4.1 and many applications failed... I find the heap memory is not full, but the CoarseGrainedExecutorBackend process takes more memory than I expect, and it keeps increasing as time goes on, finally exceeding the max limit of the server, and the worker dies. An
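
One way to see the symptom described above (heap far from full while the executor process keeps growing) is to compare JVM heap usage with the process's resident set size. A small diagnostic sketch, assuming a Linux host with /proc; it is not part of the original report:

    // Compare JVM heap usage with process RSS (Linux only) to confirm that the
    // growth is happening off-heap rather than inside the heap.
    import scala.io.Source

    object HeapVsRss {
      def main(args: Array[String]): Unit = {
        val rt = Runtime.getRuntime
        val heapUsedMb = (rt.totalMemory - rt.freeMemory) / (1024 * 1024)
        // /proc/self/status contains a line like "VmRSS:   1234567 kB"
        val rssMb = Source.fromFile("/proc/self/status")
          .getLines()
          .find(_.startsWith("VmRSS:"))
          .map(_.split("\\s+")(1).toLong / 1024)
          .getOrElse(-1L)
        println(s"heap used: $heapUsedMb MB, process RSS: $rssMb MB")
      }
    }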