[ https://issues.apache.org/jira/browse/HIVE-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13280317#comment-13280317 ]
Srinivas commented on HIVE-2907:
--------------------------------

Thanks for the timely clarification.

> Hive error when dropping a table with large number of partitions
> ----------------------------------------------------------------
>
>                 Key: HIVE-2907
>                 URL: https://issues.apache.org/jira/browse/HIVE-2907
>             Project: Hive
>          Issue Type: Bug
>          Components: Metastore
>    Affects Versions: 0.9.0
>         Environment: General. Hive Metastore bug.
>            Reporter: Mousom Dhar Gupta
>            Assignee: Mousom Dhar Gupta
>            Priority: Minor
>             Fix For: 0.10.0
>
>         Attachments: HIVE-2907.1.patch.txt, HIVE-2907.2.patch.txt, HIVE-2907.3.patch.txt, HIVE-2907.D2505.1.patch, HIVE-2907.D2505.2.patch, HIVE-2907.D2505.3.patch, HIVE-2907.D2505.4.patch, HIVE-2907.D2505.5.patch, HIVE-2907.D2505.6.patch, HIVE-2907.D2505.7.patch
>
>   Original Estimate: 10h
>  Remaining Estimate: 10h
>
> We run into an "Out of Memory" error when trying to drop a table with 128K partitions.
> The method dropTable in metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
> and dropTable in ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
> hit out-of-memory errors when dropping tables with many partitions, because they
> load the metadata for every partition into memory at once.
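The failure mode described in the issue (materializing the metadata of all 128K partitions before the drop) points toward processing partitions in bounded batches instead. The sketch below is not the attached patch and does not use the real metastore API; it is a minimal, self-contained Java illustration of the batching idea, with a hypothetical PartitionStore interface standing in for the actual ObjectStore calls.

    import java.util.List;

    /** Hypothetical, simplified stand-in for the metastore's partition access. */
    interface PartitionStore {
        /** Returns up to 'max' partition names of the table (empty list when none remain). */
        List<String> listPartitionNames(String db, String table, int max);

        /** Drops a single partition and its metadata. */
        void dropPartition(String db, String table, String partitionName);
    }

    public class BatchedDrop {
        private static final int BATCH_SIZE = 1000;

        /**
         * Drops all partitions of a table in fixed-size batches, so that only
         * BATCH_SIZE partition names (rather than the metadata of every
         * partition in the table) are held in memory at any time.
         */
        public static void dropAllPartitions(PartitionStore store, String db, String table) {
            while (true) {
                // Each batch is dropped before the next fetch, so repeatedly
                // asking for the "first" BATCH_SIZE names walks the whole table.
                List<String> batch = store.listPartitionNames(db, table, BATCH_SIZE);
                if (batch.isEmpty()) {
                    break;
                }
                for (String name : batch) {
                    store.dropPartition(db, table, name);
                }
            }
        }
    }

The point of the sketch is only that memory use is bounded by the batch size rather than by the partition count; the actual fix in the attached patches may structure the metastore and DDLTask changes differently.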