Re: Re: Re: Re: Re: Re: Re: Re: Re: Met OOM when fetching more than 1,000,000 rows.

2015-06-12 Thread Cheng Lian
…*To:* 姜超才; Hester wang; user@spark.apache.org *Subject:* Re: Re: Re: Re: Re: Re: Re: Re: Re: Met OOM when fetching more than 1,000,000 rows. Thanks for the extra details and explanations, Chaocai; I will try to reproduce this when I get a chance. Cheng On 6/12/15 3:44 PM, 姜超才 wrote: I said…

RE: Re: Re: Re: Re: Re: Re: Re: Re: Met OOM when fetching more than 1,000,000 rows.

2015-06-12 Thread Cheng, Hao
…姜超才; Hester wang; user@spark.apache.org Subject: Re: Re: Re: Re: Re: Re: Re: Re: Re: Met OOM when fetching more than 1,000,000 rows. Thanks for the extra details and explanations, Chaocai; I will try to reproduce this when I get a chance. Cheng On 6/12/15 3:44 PM, 姜超才 wrote: I said "OOM occurred…

RE: Re: Re: Re: Re: Re: Re: Re: Re: Met OOM when fetching more than 1,000,000 rows.

2015-06-12 Thread Cheng, Hao
…Re: Re: Re: Re: Re: Re: Met OOM when fetching more than 1,000,000 rows. Thanks for the extra details and explanations, Chaocai; I will try to reproduce this when I get a chance. Cheng On 6/12/15 3:44 PM, 姜超才 wrote: I said "OOM occurred on slave node", because I monitored memory utilization…

Re: Re: Re: Re: Re: Re: Re: Re: Re: Met OOM when fetching more than 1,000,000 rows.

2015-06-12 Thread Cheng Lian
才" , "Hester wang" , *主题:* Re: 回复: Re: 回复: Re: 回复: Re: Met OOM when fetching more than 1,000,000 rows. *日期:* 2015/06/12 15:30:08 (Fri) Hi Chaocai, Glad that 1.4 fixes your case. However, I'm a bit confused by your last comment saying "The OOM or lose heartbeat was oc

Re: Re: Re: Re: Re: Re: Re: Met OOM when fetching more than 1,000,000 rows.

2015-06-12 Thread Cheng Lian
…updated my Spark to v1.4. This issue is resolved. Thanks, SuperJ - Original Message - *From:* "姜超才" *To:* "Cheng Lian", "Hester wang", *Subject:* Re: Re: Re: Re: Re: Re: Met OOM when fetching more than 1,000,000 rows. *Date:* 2015/06/11 08:56:28 (Thu) No problem on L…

Re: Re: Re: Re: Re: Met OOM when fetching more than 1,000,000 rows.

2015-06-10 Thread Cheng Lian
…"Cheng Lian" *To:* "姜超才", "Hester wang", *Subject:* Re: Re: Re: Met OOM when fetching more than 1,000,000 rows. *Date:* 2015/06/10 16:37:34 (Wed) Also, if the data isn't confidential, would you mind sending me a compressed copy (don't cc user@spark.apache.org)…

Re: Re: Re: Met OOM when fetching more than 1,000,000 rows.

2015-06-10 Thread Cheng Lian
…send the result to you. Thanks, SuperJ - Original Message - *From:* "Cheng Lian" *To:* "Hester wang", *Subject:* Re: Met OOM when fetching more than 1,000,000 rows. *Date:* 2015/06/10 16:15:47 (Wed) Hi Xiaohan, Would you please try to set "spark.sql.thriftServer.incrementalCollect"…

Re: Re: Re: Met OOM when fetching more than 1,000,000 rows.

2015-06-10 Thread Cheng Lian
*发件人:* "Cheng Lian" *收件人:* "Hester wang" , *主题:* Re: Met OOM when fetching more than 1,000,000 rows. *日期:* 2015/06/10 16:15:47 (Wed) Hi Xiaohan, Would you please try to set "spark.sql.thriftServer.incrementalCollect" to "true" and increasing driver

Re: Met OOM when fetching more than 1,000,000 rows.

2015-06-10 Thread Cheng Lian
Hi Xiaohan, Would you please try to set "spark.sql.thriftServer.incrementalCollect" to "true" and increase the driver memory size? In this way, HiveThriftServer2 uses RDD.toLocalIterator rather than RDD.collect().iterator to return the result set. The key difference is that RDD.toLocalIterator fetches one partition at a time, so the driver only needs enough memory to hold a single partition rather than the entire result set.
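
To make that difference concrete, here is a small sketch of the two collection strategies; it assumes an existing SparkContext named sc, and the row and partition counts are illustrative, not taken from the thread:

    // Contrast of the two strategies described above (illustrative sizes).
    val rdd = sc.parallelize(1 to 2000000, numSlices = 200)

    // collect() pulls every partition to the driver at once: the driver heap
    // must hold all ~2,000,000 rows, which is what can trigger the OOM.
    val everything: Array[Int] = rdd.collect()

    // toLocalIterator fetches one partition at a time, so the driver holds at
    // most one partition (~10,000 rows here) while the client consumes rows.
    rdd.toLocalIterator.foreach { row =>
      // Stream each row back to the client incrementally.
      println(row)
    }

The trade-off is latency: toLocalIterator schedules a separate job per partition, so the full scan takes longer, but peak driver memory drops from the whole result set to a single partition.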