[ https://issues.apache.org/jira/browse/HIVE-3387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13600092#comment-13600092 ]
Johndee Burks commented on HIVE-3387:
-------------------------------------

Adding this comment to make the issue easier to find; the error message is below.

java.io.IOException: Split metadata size exceeded 10000000.

> meta data file size exceeds limit
> ---------------------------------
>
>                 Key: HIVE-3387
>                 URL: https://issues.apache.org/jira/browse/HIVE-3387
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.7.1
>            Reporter: Alexander Alten-Lorenz
>            Assignee: Navis
>             Fix For: 0.10.0
>
>         Attachments: HIVE-3387.1.patch.txt
>
>
> The cause is almost certainly that we use an array list instead of a set
> structure in the split locations API. This looks like a bug in Hive's
> CombineFileInputFormat.
>
> Reproduce:
> Set mapreduce.jobtracker.split.metainfo.maxsize=100000000 when submitting the
> Hive query, then run a big Hive query that writes data into a partitioned
> table. Due to the large number of splits, the job submitted to Hadoop fails
> with an exception that says:
> meta data size exceeds 100000000.
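To illustrate the reported cause — duplicate hosts accumulating because split locations are collected in a list rather than a set — here is a minimal, self-contained sketch. The host names and block layout are hypothetical, not taken from Hive; the point is only that a combined split's location list grows with the number of blocks, while a set stays bounded by the number of distinct hosts, which is what inflates the serialized split metadata past the limit.

```java
import java.util.*;

public class SplitLocationsDemo {
    // Hypothetical per-block replica locations for one combined split;
    // the same host appears once for every block it holds.
    static final String[][] BLOCK_LOCATIONS = {
        {"host1", "host2"},
        {"host1", "host3"},
        {"host2", "host3"},
    };

    // List-based collection keeps every duplicate, so the serialized
    // location metadata grows with the block count.
    static List<String> locationsAsList() {
        List<String> locs = new ArrayList<>();
        for (String[] block : BLOCK_LOCATIONS) {
            locs.addAll(Arrays.asList(block));
        }
        return locs;
    }

    // Set-based collection stores each host once, keeping the metadata
    // proportional to the number of distinct hosts.
    static Set<String> locationsAsSet() {
        return new LinkedHashSet<>(locationsAsList());
    }

    public static void main(String[] args) {
        System.out.println(locationsAsList().size()); // 6: duplicates kept
        System.out.println(locationsAsSet().size());  // 3: distinct hosts
    }
}
```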
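The repro in the description raises the limit via mapreduce.jobtracker.split.metainfo.maxsize. As a hedged workaround sketch for affected versions (the property name is from the description; setting it to -1 to disable the check entirely is an assumption about the Hadoop-side configuration, not something stated in this issue):

```sql
-- Hypothetical workaround: raise the limit, or set -1 to disable the check,
-- for the current session before submitting the large query.
SET mapreduce.jobtracker.split.metainfo.maxsize=-1;
```

The patched fix in 0.10.0 addresses the root cause, so this setting would only paper over the symptom on older releases.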