[ https://issues.apache.org/jira/browse/HIVE-16220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979402#comment-16979402 ]
Thomas Mann (FiduciaGAD) commented on HIVE-16220:
-------------------------------------------------

Can confirm the same issue for HDP 3.1.0 and Hive version 3.0.0.3.1.

> Memory leak when creating a table using location and NameNode in HA
> -------------------------------------------------------------------
>
>                 Key: HIVE-16220
>                 URL: https://issues.apache.org/jira/browse/HIVE-16220
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2
>    Affects Versions: 1.2.1
>         Environment: HDP-2.4.0.0
>            Reporter: Angel Alvarez Pascua
>            Priority: Major
>
> The following simple DDL
>
> CREATE TABLE `test`(`field` varchar(1)) LOCATION 'hdfs://benderHA/apps/hive/warehouse/test'
>
> ends up generating a huge memory leak in the HiveServer2 service. After two weeks without a restart, the service stops suddenly because of OutOfMemory errors.
>
> This only happens in an environment in which the NameNode is in HA; otherwise, nothing happens, which is odd. If the LOCATION clause is not present, everything is also fine.
>
> It seems multiple instances of the Hadoop Configuration class are created when we're in an HA environment:
>
> <AFTER ONE EXECUTION OF CREATE TABLE WITH LOCATION>
> 2,618 instances of "org.apache.hadoop.conf.Configuration", loaded by "sun.misc.Launcher$AppClassLoader @ 0x4d260de88", occupy 350,263,816 (81.66%) bytes. These instances are referenced from one instance of "java.util.HashMap$Node[]", loaded by "<system class loader>"
>
> <AFTER TWO EXECUTIONS OF CREATE TABLE WITH LOCATION>
> 5,216 instances of "org.apache.hadoop.conf.Configuration", loaded by "sun.misc.Launcher$AppClassLoader @ 0x4d260de88", occupy 699,901,416 (87.32%) bytes. These instances are referenced from one instance of "java.util.HashMap$Node[]", loaded by "<system class loader>"

-- This message was sent by Atlassian Jira (v8.3.4#803005)
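The heap dump above points at a classic cache-leak pattern: a single long-lived HashMap accumulating Configuration instances that are never evicted. The following is a minimal, hypothetical Java sketch of that pattern only (the class names `ConfigCacheLeakSketch` and `handleCreateTable`, and the use of an identity-keyed map, are illustrative assumptions, not Hive's actual code): if each request inserts under a fresh key that never equals a previous one, every lookup misses and the map grows without bound.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the leak pattern suggested by the heap dump:
// a long-lived HashMap keeps accumulating Configuration-like objects.
// NOT Hive's actual code; names and structure are illustrative only.
public class ConfigCacheLeakSketch {

    // Lightweight stand-in for org.apache.hadoop.conf.Configuration.
    static class Configuration {
        final byte[] payload = new byte[1024];
    }

    // Stand-in for the long-lived map the analyzer report points at
    // ("one instance of java.util.HashMap$Node[]").
    static final Map<Object, Configuration> CACHE = new HashMap<>();

    // Simulates one CREATE TABLE ... LOCATION request: the key is a fresh
    // object with identity equals/hashCode, so the lookup never hits and
    // the map gains one entry per call, with nothing ever evicted.
    static void handleCreateTable() {
        Object key = new Object(); // fresh key every call -> guaranteed miss
        CACHE.computeIfAbsent(key, k -> new Configuration());
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            handleCreateTable();
        }
        // Every call leaked one entry.
        System.out.println("cached instances: " + CACHE.size()); // prints 1000
    }
}
```

Under this reading, the per-execution jump in the report (2,618 to 5,216 instances) would correspond to one leaked entry per internally created Configuration, which matches the roughly linear growth seen between the two measurements.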