At Uber we have observed the same resource efficiency issue with dynamic
allocation. Our workload was migrated from Hive on MR to Hive on Spark, and
we saw a significant performance improvement (>2X). We also expected big
resource savings from this migration because there would be one sin
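
For reference, a minimal, purely illustrative sketch of the dynamic
allocation settings under discussion (assuming Spark on YARN with the
external shuffle service; keys are standard Spark configs, values are
examples only, not our production settings):

  val conf = new org.apache.spark.SparkConf()
    .set("spark.dynamicAllocation.enabled", "true")
    .set("spark.shuffle.service.enabled", "true")              // required for dynamic allocation on YARN
    .set("spark.dynamicAllocation.executorIdleTimeout", "60s") // release executors that sit idle
    .set("spark.dynamicAllocation.minExecutors", "0")
    .set("spark.dynamicAllocation.maxExecutors", "200")        // illustrative cap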
I saw this quite often in our clusters. We have increased
spark.executor.heartbeatInterval
to 60s from the default value, which should help. The problem seems to be
due to poor Spark driver performance and/or locking issues when the driver
cannot process incoming events quickly enough.
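
As a minimal sketch of that change (values illustrative; in Hive on Spark
these would normally be set through the Hive/Spark configuration files
rather than in application code):

  val conf = new org.apache.spark.SparkConf()
    .set("spark.executor.heartbeatInterval", "60s") // raised from the 10s default
    .set("spark.network.timeout", "600s")           // should stay well above the heartbeat interval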
Thanks,
Xuefu
On Thu, Se
Congratulations, Takuya!
--Xuefu
On Mon, Feb 13, 2017 at 11:25 AM, Xiao Li wrote:
> Congratulations, Takuya!
>
> Xiao
>
> 2017-02-13 11:24 GMT-08:00 Holden Karau :
>
>> Congratulations Takuya-san :D!
>>
>> On Mon, Feb 13, 2017 at 11:16 AM, Reynold Xin
>> wrote:
>>
>>> Hi all,
>>>
>>> Takuya-sa