*To:* 姜超才; Hester wang; user@spark.apache.org
*Subject:* Re: Met OOM when fetching more than 1,000,000 rows.
Thanks for the extra details and explanations, Chaocai. I will try to reproduce this when I get a chance.
Cheng
On 6/12/15 3:44 PM, 姜超才 wrote:
I said "OOM occurred on slave node" because I monitored memory utilization…
才" , "Hester wang"
,
*主题:* Re: 回复: Re: 回复: Re: 回复: Re: Met OOM when fetching more
than 1,000,000 rows.
*日期:* 2015/06/12 15:30:08 (Fri)
Hi Chaocai,
Glad that 1.4 fixes your case. However, I'm a bit confused by your last comment saying "The OOM or lose heartbeat was oc…"
…I updated my Spark to v1.4. This issue is resolved.
Thanks,
SuperJ
- Original Message -
*From:* "姜超才"
*To:* "Cheng Lian", "Hester wang"
*Subject:* Re: Met OOM when fetching more than 1,000,000 rows.
*Date:* 2015/06/11 08:56:28 (Thu)
No problem on L…
- Original Message -
*From:* "Cheng Lian"
*To:* "姜超才", "Hester wang"
*Subject:* Re: Met OOM when fetching more than 1,000,000 rows.
*Date:* 2015/06/10 16:37:34 (Wed)
Also, if the data isn't confidential, would you mind sending me a compressed copy (don't cc user@spark.apache.org)…
…send the result to you.
Thanks,
SuperJ
- Original Message -
*From:* "Cheng Lian"
*To:* "Hester wang"
*Subject:* Re: Met OOM when fetching more than 1,000,000 rows.
*Date:* 2015/06/10 16:15:47 (Wed)
Hi Xiaohan,
Would you please try setting "spark.sql.thriftServer.incrementalCollect" to "true" and increasing the driver memory size? In this way, HiveThriftServer2 uses RDD.toLocalIterator rather than RDD.collect().iterator to return the result set. The key difference is that RDD.toLocalIterator…
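(Editorial aside, not part of the original mail: a minimal sketch of the difference Cheng describes, written against the 1.x RDD API. The property name `spark.sql.thriftServer.incrementalCollect` is the one quoted above; the app name and data are illustrative.)

```scala
import org.apache.spark.{SparkConf, SparkContext}

object IncrementalCollectSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("incremental-collect-sketch") // illustrative name
      .setMaster("local[2]")
      // The property discussed in the thread: with it set to "true",
      // HiveThriftServer2 streams results via RDD.toLocalIterator instead
      // of materializing them all at once via RDD.collect().
      .set("spark.sql.thriftServer.incrementalCollect", "true")
    val sc = new SparkContext(conf)

    // Illustrative data: 1,000,000 rows spread over 100 partitions.
    val rdd = sc.parallelize(1 to 1000000, numSlices = 100)

    // rdd.collect() would pull all 100 partitions onto the driver at once;
    // for large result sets this is what exhausts driver memory.

    // toLocalIterator fetches one partition at a time, so the driver only
    // holds a single partition's rows in memory at any given moment.
    val it: Iterator[Int] = rdd.toLocalIterator
    println(it.take(5).mkString(", "))

    sc.stop()
  }
}
```

Note that RDD.toLocalIterator runs one job per partition, so it trades driver memory for extra scheduling overhead; driver memory itself can still be raised with --driver-memory (or spark.driver.memory) when starting the Thrift server, as Cheng suggests.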