[ https://issues.apache.org/jira/browse/HIVE-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13779494#comment-13779494 ]
Hudson commented on HIVE-5318:
------------------------------

ABORTED: Integrated in Hive-trunk-hadoop2 #458 (See [https://builds.apache.org/job/Hive-trunk-hadoop2/458/])

HIVE-5318 : Import Throws Error when Importing from a table export Hive 0.9 to Hive 0.10 (Xuefu Zhang via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526325)
* /hive/trunk/build-common.xml
* /hive/trunk/data/files/exported_table
* /hive/trunk/data/files/exported_table/_metadata
* /hive/trunk/data/files/exported_table/data
* /hive/trunk/data/files/exported_table/data/data
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/CreateTableDesc.java
* /hive/trunk/ql/src/test/queries/clientpositive/import_exported_table.q
* /hive/trunk/ql/src/test/results/clientpositive/import_exported_table.q.out

> Import Throws Error when Importing from a table export Hive 0.9 to Hive 0.10
> ----------------------------------------------------------------------------
>
>                 Key: HIVE-5318
>                 URL: https://issues.apache.org/jira/browse/HIVE-5318
>             Project: Hive
>          Issue Type: Bug
>          Components: Import/Export
>    Affects Versions: 0.9.0, 0.10.0
>            Reporter: Brad Ruderman
>            Assignee: Xuefu Zhang
>            Priority: Critical
>             Fix For: 0.13.0
>
>         Attachments: HIVE-5318.1.patch, HIVE-5318.patch
>
>
> When a table is exported from Hive 0.9 with "EXPORT TABLE <table> TO 'hdfs_path'" and then imported into a Hive 0.10 instance with "IMPORT FROM 'hdfs_path'", Hive throws this error:
>
> 13/09/18 13:14:02 ERROR ql.Driver: FAILED: SemanticException Exception while processing
> org.apache.hadoop.hive.ql.parse.SemanticException: Exception while processing
> 	at org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer.analyzeInternal(ImportSemanticAnalyzer.java:277)
> 	at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
> 	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:459)
> 	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:349)
> 	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:938)
> 	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:902)
> 	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
> 	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
> 	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:412)
> 	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:347)
> 	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:706)
> 	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:613)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> 	at java.lang.reflect.Method.invoke(Method.java:597)
> 	at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
> Caused by: java.lang.NullPointerException
> 	at java.util.ArrayList.<init>(ArrayList.java:131)
> 	at org.apache.hadoop.hive.ql.plan.CreateTableDesc.<init>(CreateTableDesc.java:128)
> 	at org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer.analyzeInternal(ImportSemanticAnalyzer.java:99)
> 	... 16 more
> 13/09/18 13:14:02 INFO ql.Driver: </PERFLOG method=compile start=1379535241411 end=1379535242332 duration=921>
> 13/09/18 13:14:02 INFO ql.Driver: <PERFLOG method=releaseLocks>
> 13/09/18 13:14:02 INFO ql.Driver: </PERFLOG method=releaseLocks start=1379535242332 end=1379535242332 duration=0>
> 13/09/18 13:14:02 INFO ql.Driver: <PERFLOG method=releaseLocks>
> 13/09/18 13:14:02 INFO ql.Driver: </PERFLOG method=releaseLocks start=1379535242333 end=1379535242333 duration=0>
>
> This is likely a critical blocker for anyone testing Hive 0.10 in a staging environment before upgrading from 0.9.

--
This message is automatically generated by JIRA.
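For reference, the failing round trip described in the report can be sketched in HiveQL as follows; the table name and HDFS path below are hypothetical placeholders, not values from the report:

```sql
-- On the Hive 0.9 cluster: write the table's data plus a _metadata file
-- to an HDFS directory ("my_table" and the path are assumed names).
EXPORT TABLE my_table TO '/user/hive/export/my_table';

-- On the Hive 0.10 cluster: re-create the table from the exported metadata
-- and load its data. With HIVE-5318 unfixed, this step fails with the
-- NullPointerException in CreateTableDesc shown in the stack trace below.
IMPORT FROM '/user/hive/export/my_table';
```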