[ https://issues.apache.org/jira/browse/HIVE-19937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16549704#comment-16549704 ]
Sahil Takiar commented on HIVE-19937:
-------------------------------------

The overheads probably won't grow in proportion to the heap, but the goal is to allow users to run Hive-on-Spark successfully even with low heap settings (e.g. 1g). The overheads are more a function of the workload. In this case, the workload is TPC-DS (a standard SQL benchmark). Hive users who run queries that scan more partitions can expect the overheads to increase.

> Intern fields in MapWork on deserialization
> -------------------------------------------
>
>                 Key: HIVE-19937
>                 URL: https://issues.apache.org/jira/browse/HIVE-19937
>             Project: Hive
>          Issue Type: Improvement
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Sahil Takiar
>            Priority: Major
>         Attachments: HIVE-19937.1.patch, HIVE-19937.2.patch, HIVE-19937.3.patch, post-patch-report.html, report.html
>
>
> When fixing HIVE-16395, we decided that each new Spark task should clone the {{JobConf}} object to prevent any {{ConcurrentModificationException}} from being thrown. However, this cloning comes at the cost of storing a duplicate {{JobConf}} object for each Spark task. These objects can take up a significant amount of memory; we should intern them so that Spark tasks running in the same JVM don't store duplicate copies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
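
To make "intern them" concrete, here is a minimal, self-contained sketch of the general technique: collapsing equal configuration strings to one canonical instance per JVM with Guava's weak interner. This is an illustration only, not the actual HIVE-19937 patch; the class name, method name, and use of a plain {{Properties}} object are hypothetical stand-ins for the real {{MapWork}}/{{JobConf}} code paths.

{code:java}
// Illustrative sketch only: deduplicate equal String values across per-task
// copies of a configuration using Guava's weak interner. The real HIVE-19937
// patch interns fields of MapWork during deserialization; the names below
// (ConfInternExample, internValues) are hypothetical.
import java.util.Map;
import java.util.Properties;

import com.google.common.collect.Interner;
import com.google.common.collect.Interners;

public class ConfInternExample {

  // JVM-wide weak interner: equal strings from different task copies collapse
  // to a single canonical instance, and unreferenced entries remain eligible
  // for garbage collection.
  private static final Interner<String> STRING_INTERNER = Interners.newWeakInterner();

  // Replace every String value in a cloned Properties object with its
  // canonical (interned) instance.
  public static void internValues(Properties props) {
    for (Map.Entry<Object, Object> entry : props.entrySet()) {
      if (entry.getValue() instanceof String) {
        entry.setValue(STRING_INTERNER.intern((String) entry.getValue()));
      }
    }
  }

  public static void main(String[] args) {
    // Simulate two per-task clones carrying equal but distinct String values.
    Properties taskCopy1 = new Properties();
    Properties taskCopy2 = new Properties();
    taskCopy1.setProperty("mapreduce.input.fileinputformat.inputdir",
        new String("/warehouse/tbl/part=1"));
    taskCopy2.setProperty("mapreduce.input.fileinputformat.inputdir",
        new String("/warehouse/tbl/part=1"));

    internValues(taskCopy1);
    internValues(taskCopy2);

    // After interning, both copies reference the same String instance.
    System.out.println(taskCopy1.getProperty("mapreduce.input.fileinputformat.inputdir")
        == taskCopy2.getProperty("mapreduce.input.fileinputformat.inputdir")); // prints true
  }
}
{code}

Applied to the string-heavy fields of {{MapWork}} and to the properties of each cloned {{JobConf}}, the same pattern lets every task launched in one executor JVM share a single copy of each recurring value instead of holding its own duplicate.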