[ https://issues.apache.org/jira/browse/HIVE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792214#comment-13792214 ]
jeff little commented on HIVE-5245:
-----------------------------------

hive (test)> create table test_10 as
           > select a.* from test_01 a
           > join test_02 b
           > on (a.id=b.id);
13/10/11 09:17:16 INFO ql.Driver: <PERFLOG method=Driver.run>
13/10/11 09:17:16 INFO ql.Driver: <PERFLOG method=TimeToSubmit>
13/10/11 09:17:16 INFO ql.Driver: <PERFLOG method=compile>
13/10/11 09:17:17 INFO parse.ParseDriver: Parsing command: create table test_10 as select a.* from test_01 a join test_02 b on (a.id=b.id)
13/10/11 09:17:17 INFO parse.ParseDriver: Parse Completed
13/10/11 09:17:17 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
13/10/11 09:17:17 INFO parse.SemanticAnalyzer: Creating table test_10 position=13
13/10/11 09:17:17 INFO metastore.HiveMetaStore: 0: get_database: test
13/10/11 09:17:17 INFO HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: test
13/10/11 09:17:17 INFO metastore.HiveMetaStore: 0: get_table : db=test tbl=test_10
13/10/11 09:17:17 INFO HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_table : db=test tbl=test_10
13/10/11 09:17:17 ERROR metastore.RetryingHMSHandler: NoSuchObjectException(message:test.test_10 table not found)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1369)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102)
	at $Proxy10.get_table(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:838)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:74)
	at $Proxy11.getTable(Unknown Source)
	at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:948)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeCreateTable(SemanticAnalyzer.java:9385)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8647)
	at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:278)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:433)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:902)
	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:756)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
13/10/11 09:17:17 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic Analysis
13/10/11 09:17:17 INFO parse.SemanticAnalyzer: Get metadata for source tables
13/10/11 09:17:17 INFO metastore.HiveMetaStore: 0: get_table : db=test tbl=test_02
13/10/11 09:17:17 INFO HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_table : db=test tbl=test_02
13/10/11 09:17:17 INFO metastore.HiveMetaStore: 0: get_table : db=test tbl=test_01
13/10/11 09:17:17 INFO HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_table : db=test tbl=test_01
13/10/11 09:17:17 INFO parse.SemanticAnalyzer: Get metadata for subqueries
13/10/11 09:17:17 INFO parse.SemanticAnalyzer: Get metadata for destination tables
13/10/11 09:17:17 INFO metastore.HiveMetaStore: 0: get_database: test
13/10/11 09:17:17 INFO HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: test
13/10/11 09:17:17 INFO parse.SemanticAnalyzer: Completed getting MetaData in Semantic Analysis
13/10/11 09:17:17 WARN parse.TypeCheckProcFactory: Invalid type entry TOK_TABLE_OR_COL=null
13/10/11 09:17:17 WARN parse.TypeCheckProcFactory: Invalid type entry TOK_TABLE_OR_COL=null
13/10/11 09:17:17 INFO ppd.OpProcFactory: Processing for FS(6)
13/10/11 09:17:17 INFO ppd.OpProcFactory: Processing for SEL(5)
13/10/11 09:17:17 INFO ppd.OpProcFactory: Processing for JOIN(4)
13/10/11 09:17:17 INFO ppd.OpProcFactory: Processing for RS(3)
13/10/11 09:17:17 INFO ppd.OpProcFactory: Processing for TS(0)
13/10/11 09:17:17 INFO ppd.OpProcFactory: Processing for RS(2)
13/10/11 09:17:17 INFO ppd.OpProcFactory: Processing for TS(1)
13/10/11 09:17:17 INFO metastore.HiveMetaStore: 0: get_database: test
13/10/11 09:17:17 INFO HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: test
13/10/11 09:17:17 INFO metastore.HiveMetaStore: 0: get_database: test
13/10/11 09:17:17 INFO HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: test
13/10/11 09:17:17 INFO metastore.HiveMetaStore: 0: get_partitions_with_auth : db=test tbl=test_02
13/10/11 09:17:17 INFO HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_partitions_with_auth : db=test tbl=test_02
13/10/11 09:17:17 INFO metastore.HiveMetaStore: 0: get_partitions_with_auth : db=test tbl=test_01
13/10/11 09:17:17 INFO HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_partitions_with_auth : db=test tbl=test_01
13/10/11 09:17:17 INFO exec.Utilities: Cache Content Summary for hdfs://namenode:9000/user/hive/warehouse/test.db/test_01/record_day=20130812 length: 76 file count: 1 directory count: 1
13/10/11 09:17:17 INFO exec.Utilities: Cache Content Summary for hdfs://namenode:9000/user/hive/warehouse/test.db/test_02/record_day=20130813 length: 169 file count: 1 directory count: 1
13/10/11 09:17:17 INFO exec.Utilities: Cache Content Summary for hdfs://namenode:9000/user/hive/warehouse/test.db/test_01/record_day=20130813 length: 76 file count: 1 directory count: 1
13/10/11 09:17:17 INFO exec.Utilities: Cache Content Summary for hdfs://namenode:9000/user/hive/warehouse/test.db/test_02/record_day=20130812 length: 169 file count: 1 directory count: 1
13/10/11 09:17:17 INFO physical.MetadataOnlyOptimizer: Looking for table scans where optimization is applicable
13/10/11 09:17:17 INFO physical.MetadataOnlyOptimizer: Found 0 metadata only table scans
13/10/11 09:17:17 INFO physical.MetadataOnlyOptimizer: Looking for table scans where optimization is applicable
13/10/11 09:17:17 INFO physical.MetadataOnlyOptimizer: Found 0 metadata only table scans
13/10/11 09:17:17 INFO parse.SemanticAnalyzer: Completed plan generation
13/10/11 09:17:17 INFO ql.Driver: Semantic Analysis Completed
13/10/11 09:17:17 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:id, type:int, comment:null), FieldSchema(name:name, type:string, comment:null), FieldSchema(name:sex, type:string, comment:null), FieldSchema(name:record_day, type:string, comment:null)], properties:null)
13/10/11 09:17:17 INFO ql.Driver: </PERFLOG method=compile start=1381454236999 end=1381454237416 duration=417>
13/10/11 09:17:17 INFO ql.Driver: <PERFLOG method=Driver.execute>
13/10/11 09:17:17 INFO ql.Driver: Starting command: create table test_10 as select a.* from test_01 a join test_02 b on (a.id=b.id)
Total MapReduce jobs = 2
13/10/11 09:17:17 INFO ql.Driver: Total MapReduce jobs = 2
13/10/11 09:17:17 INFO ql.Driver: </PERFLOG method=TimeToSubmit start=1381454236999 end=1381454237422 duration=423>
13/10/11 09:17:17 INFO exec.MapredLocalTask: Generating plan file file:/tmp/hadoop/hive_2013-10-11_09-17-16_999_5875059535154038958/-local-10007/plan.xml
13/10/11 09:17:17 INFO exec.MapredLocalTask: Executing: /home/hadoop/package/hadoop-1.0.4/libexec/../bin/hadoop jar /home/hadoop/package/hive-0.11.0/lib/hive-exec-0.11.0.jar org.apache.hadoop.hive.ql.exec.ExecDriver -localtask -plan file:/tmp/hadoop/hive_2013-10-11_09-17-16_999_5875059535154038958/-local-10007/plan.xml -jobconffile file:/tmp/hadoop/hive_2013-10-11_09-17-16_999_5875059535154038958/-local-10008/jobconf.xml
setting HADOOP_USER_NAME hadoop
13/10/11 09:17:17 INFO exec.Task: setting HADOOP_USER_NAME hadoop
Execution log at: /tmp/hadoop/.log
2013-10-11 09:17:19 Starting to launch local task to process map join; maximum memory = 932118528
2013-10-11 09:17:19 Processing rows: 6 Hashtable size: 6 Memory usage: 111004256 rate: 0.119
2013-10-11 09:17:19 Dump the hashtable into file: file:/tmp/hadoop/hive_2013-10-11_09-17-16_999_5875059535154038958/-local-10005/HashTable-Stage-6/MapJoin-mapfile30--.hashtable
2013-10-11 09:17:19 Upload 1 File to: file:/tmp/hadoop/hive_2013-10-11_09-17-16_999_5875059535154038958/-local-10005/HashTable-Stage-6/MapJoin-mapfile30--.hashtable File size: 692
2013-10-11 09:17:19 End of local task; Time Taken: 0.44 sec.
Execution completed successfully
13/10/11 09:17:19 INFO exec.Task: Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
13/10/11 09:17:19 INFO exec.Task: Mapred Local Task Succeeded . Convert the Join into MapJoin
13/10/11 09:17:19 INFO exec.MapredLocalTask: Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
13/10/11 09:17:19 INFO exec.Task: Mapred Local Task Succeeded . Convert the Join into MapJoin
Launching Job 1 out of 2
13/10/11 09:17:19 INFO ql.Driver: Launching Job 1 out of 2
Number of reduce tasks is set to 0 since there's no reduce operator
13/10/11 09:17:19 INFO exec.Task: Number of reduce tasks is set to 0 since there's no reduce operator
13/10/11 09:17:19 INFO exec.ExecDriver: Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
13/10/11 09:17:19 INFO exec.ExecDriver: adding libjars: file:///home/hadoop/hive/lib/hive-contrib-0.11.0.jar
13/10/11 09:17:19 INFO exec.ExecDriver: Archive 1 hash table files to /tmp/hadoop/hive_2013-10-11_09-17-16_999_5875059535154038958/-local-10005/HashTable-Stage-6/Stage-6.tar.gz
13/10/11 09:17:19 INFO exec.ExecDriver: Upload 1 archive file from /tmp/hadoop/hive_2013-10-11_09-17-16_999_5875059535154038958/-local-10005/HashTable-Stage-6/Stage-6.tar.gz to: hdfs://namenode:9000/tmp/hive-hadoop/hive_2013-10-11_09-17-16_999_5875059535154038958/-mr-10006/HashTable-Stage-6/Stage-6.tar.gz
13/10/11 09:17:19 INFO exec.ExecDriver: Add 1 archive file to distributed cache. Archive file: hdfs://namenode:9000/tmp/hive-hadoop/hive_2013-10-11_09-17-16_999_5875059535154038958/-mr-10006/HashTable-Stage-6/Stage-6.tar.gz
13/10/11 09:17:19 INFO exec.ExecDriver: Processing alias b
13/10/11 09:17:19 INFO exec.ExecDriver: Adding input file hdfs://namenode:9000/user/hive/warehouse/test.db/test_02/record_day=20130812
13/10/11 09:17:19 INFO exec.Utilities: Content Summary hdfs://namenode:9000/user/hive/warehouse/test.db/test_02/record_day=20130812 length: 169 num files: 1 num directories: 1
13/10/11 09:17:19 INFO exec.ExecDriver: Adding input file hdfs://namenode:9000/user/hive/warehouse/test.db/test_02/record_day=20130813
13/10/11 09:17:19 INFO exec.Utilities: Content Summary hdfs://namenode:9000/user/hive/warehouse/test.db/test_02/record_day=20130813 length: 169 num files: 1 num directories: 1
13/10/11 09:17:21 INFO exec.ExecDriver: Making Temp Directory: hdfs://namenode:9000/tmp/hive-hadoop/hive_2013-10-11_09-17-16_999_5875059535154038958/-ext-10001
13/10/11 09:17:21 INFO exec.ExecDriver: Making Temp Directory: hdfs://namenode:9000/tmp/hive-hadoop/hive_2013-10-11_09-17-16_999_5875059535154038958/-ext-10001
13/10/11 09:17:21 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/10/11 09:17:21 INFO io.CombineHiveInputFormat: CombineHiveInputSplit creating pool for hdfs://namenode:9000/user/hive/warehouse/test.db/test_02/record_day=20130812; using filter path hdfs://namenode:9000/user/hive/warehouse/test.db/test_02/record_day=20130812
13/10/11 09:17:21 INFO io.CombineHiveInputFormat: CombineHiveInputSplit: pool is already created for hdfs://namenode:9000/user/hive/warehouse/test.db/test_02/record_day=20130813; using filter path hdfs://namenode:9000/user/hive/warehouse/test.db/test_02/record_day=20130813
13/10/11 09:17:21 INFO mapred.FileInputFormat: Total input paths to process : 2
13/10/11 09:17:21 INFO io.CombineHiveInputFormat: number of splits 2
Starting Job = job_201308241420_3712, Tracking URL = http://namenode:50030/jobdetails.jsp?jobid=job_201308241420_3712
13/10/11 09:17:21 INFO exec.Task: Starting Job = job_201308241420_3712, Tracking URL = http://namenode:50030/jobdetails.jsp?jobid=job_201308241420_3712
Kill Command = /home/hadoop/package/hadoop-1.0.4/libexec/../bin/hadoop job -kill job_201308241420_3712
13/10/11 09:17:21 INFO exec.Task: Kill Command = /home/hadoop/package/hadoop-1.0.4/libexec/../bin/hadoop job -kill job_201308241420_3712
Hadoop job information for Stage-6: number of mappers: 2; number of reducers: 0
13/10/11 09:17:34 INFO exec.Task: Hadoop job information for Stage-6: number of mappers: 2; number of reducers: 0
2013-10-11 09:17:34,882 Stage-6 map = 0%, reduce = 0%
13/10/11 09:17:34 INFO exec.Task: 2013-10-11 09:17:34,882 Stage-6 map = 0%, reduce = 0%
2013-10-11 09:17:40,967 Stage-6 map = 100%, reduce = 0%, Cumulative CPU 2.69 sec
13/10/11 09:17:40 INFO exec.Task: 2013-10-11 09:17:40,967 Stage-6 map = 100%, reduce = 0%, Cumulative CPU 2.69 sec
2013-10-11 09:17:41,975 Stage-6 map = 100%, reduce = 0%, Cumulative CPU 2.69 sec
13/10/11 09:17:41 INFO exec.Task: 2013-10-11 09:17:41,975 Stage-6 map = 100%, reduce = 0%, Cumulative CPU 2.69 sec
2013-10-11 09:17:42,981 Stage-6 map = 100%, reduce = 0%, Cumulative CPU 2.69 sec
13/10/11 09:17:42 INFO exec.Task: 2013-10-11 09:17:42,981 Stage-6 map = 100%, reduce = 0%, Cumulative CPU 2.69 sec
2013-10-11 09:17:43,987 Stage-6 map = 100%, reduce = 0%, Cumulative CPU 2.69 sec
13/10/11 09:17:43 INFO exec.Task: 2013-10-11 09:17:43,987 Stage-6 map = 100%, reduce = 0%, Cumulative CPU 2.69 sec
2013-10-11 09:17:44,993 Stage-6 map = 100%, reduce = 0%, Cumulative CPU 2.69 sec
13/10/11 09:17:44 INFO exec.Task: 2013-10-11 09:17:44,993 Stage-6 map = 100%, reduce = 0%, Cumulative CPU 2.69 sec
2013-10-11 09:17:46,005 Stage-6 map = 100%, reduce = 0%, Cumulative CPU 2.69 sec
13/10/11 09:17:46 INFO exec.Task: 2013-10-11 09:17:46,005 Stage-6 map = 100%, reduce = 0%, Cumulative CPU 2.69 sec
MapReduce Total cumulative CPU time: 2 seconds 690 msec
13/10/11 09:17:46 INFO exec.Task: MapReduce Total cumulative CPU time: 2 seconds 690 msec
Ended Job = job_201308241420_3712
13/10/11 09:17:46 INFO exec.Task: Ended Job = job_201308241420_3712
13/10/11 09:17:46 INFO exec.FileSinkOperator: Moving tmp dir: hdfs://namenode:9000/tmp/hive-hadoop/hive_2013-10-11_09-17-16_999_5875059535154038958/_tmp.-ext-10001 to: hdfs://namenode:9000/tmp/hive-hadoop/hive_2013-10-11_09-17-16_999_5875059535154038958/_tmp.-ext-10001.intermediate
13/10/11 09:17:46 INFO exec.FileSinkOperator: Moving tmp dir: hdfs://namenode:9000/tmp/hive-hadoop/hive_2013-10-11_09-17-16_999_5875059535154038958/_tmp.-ext-10001.intermediate to: hdfs://namenode:9000/tmp/hive-hadoop/hive_2013-10-11_09-17-16_999_5875059535154038958/-ext-10001
Stage-7 is filtered out by condition resolver.
13/10/11 09:17:46 INFO exec.Task: Stage-7 is filtered out by condition resolver.
24 Rows loaded to hdfs://namenode:9000/tmp/hive-hadoop/hive_2013-10-11_09-17-16_999_5875059535154038958/-ext-10000
13/10/11 09:17:46 INFO exec.HiveHistory: 24 Rows loaded to hdfs://namenode:9000/tmp/hive-hadoop/hive_2013-10-11_09-17-16_999_5875059535154038958/-ext-10000
13/10/11 09:17:46 INFO ql.Driver: </PERFLOG method=Driver.execute start=1381454237416 end=1381454266048 duration=28632>
MapReduce Jobs Launched:
13/10/11 09:17:46 INFO ql.Driver: MapReduce Jobs Launched:
Job 0: Map: 2 Cumulative CPU: 2.69 sec HDFS Read: 822 HDFS Write: 452 SUCCESS
13/10/11 09:17:46 INFO ql.Driver: Job 0: Map: 2 Cumulative CPU: 2.69 sec HDFS Read: 822 HDFS Write: 452 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 690 msec
13/10/11 09:17:46 INFO ql.Driver: Total MapReduce CPU Time Spent: 2 seconds 690 msec
OK
13/10/11 09:17:46 INFO ql.Driver: OK
13/10/11 09:17:46 INFO ql.Driver: <PERFLOG method=releaseLocks>
13/10/11 09:17:46 INFO ql.Driver: </PERFLOG method=releaseLocks start=1381454266049 end=1381454266049 duration=0>
13/10/11 09:17:46 INFO ql.Driver: </PERFLOG method=Driver.run start=1381454236999 end=1381454266050 duration=29051>
Time taken: 29.057 seconds
13/10/11 09:17:46 INFO CliDriver: Time taken: 29.057 seconds
13/10/11 09:17:46 INFO ql.Driver: <PERFLOG method=releaseLocks>
13/10/11 09:17:46 INFO ql.Driver: </PERFLOG method=releaseLocks start=1381454266057 end=1381454266057 duration=0>
hive (test)>

> Hive CREATE TABLE AS SELECT (CTAS) does not work with the JOIN operator
> ---------------------------------------------------------------------------------
>
>                 Key: HIVE-5245
>                 URL: https://issues.apache.org/jira/browse/HIVE-5245
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2
>    Affects Versions: 0.11.0
>            Reporter: jeff little
>              Labels: CTAS, hive
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Hello everyone, I recently ran into the following Hive problem:
> hive (test)> create table test_09 as
>            > select a.* from test_01 a
>            > join test_02 b
>            > on (a.id=b.id);
> Automatically selecting local only mode for query
> Total MapReduce jobs = 2
> setting HADOOP_USER_NAME hadoop
> 13/09/09 17:22:36 WARN conf.Configuration: file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10008/jobconf.xml:a attempt to override final parameter: mapred.system.dir; Ignoring.
> 13/09/09 17:22:36 WARN conf.Configuration: file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10008/jobconf.xml:a attempt to override final parameter: mapred.local.dir; Ignoring.
> Execution log at: /tmp/hadoop/.log
> 2013-09-09 05:22:36 Starting to launch local task to process map join; maximum memory = 932118528
> 2013-09-09 05:22:37 Processing rows: 4 Hashtable size: 4 Memory usage: 113068056 rate: 0.121
> 2013-09-09 05:22:37 Dump the hashtable into file: file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10005/HashTable-Stage-6/MapJoin-mapfile90--.hashtable
> 2013-09-09 05:22:37 Upload 1 File to: file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10005/HashTable-Stage-6/MapJoin-mapfile90--.hashtable File size: 788
> 2013-09-09 05:22:37 End of local task; Time Taken: 0.444 sec.
> Execution completed successfully
> Mapred Local Task Succeeded . Convert the Join into MapJoin
> Mapred Local Task Succeeded . Convert the Join into MapJoin
> Launching Job 1 out of 2
> Number of reduce tasks is set to 0 since there's no reduce operator
> 13/09/09 17:22:38 WARN conf.Configuration: file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10009/jobconf.xml:a attempt to override final parameter: mapred.system.dir; Ignoring.
> 13/09/09 17:22:38 WARN conf.Configuration: file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10009/jobconf.xml:a attempt to override final parameter: mapred.local.dir; Ignoring.
> Execution log at: /tmp/hadoop/.log
> Job running in-process (local Hadoop)
> Hadoop job information for null: number of mappers: 0; number of reducers: 0
> 2013-09-09 17:22:41,807 null map = 0%, reduce = 0%
> 2013-09-09 17:22:44,814 null map = 100%, reduce = 0%
> Ended Job = job_local_0001
> Execution completed successfully
> Mapred Local Task Succeeded . Convert the Join into MapJoin
> Stage-7 is filtered out by condition resolver.
> OK
> Time taken: 13.138 seconds
> hive (test)> select * from test_09;
> FAILED: SemanticException [Error 10001]: Line 1:14 Table not found 'test_09'
> hive (test)>
> Problem:
> I cannot query the table afterwards: the CTAS statement reports success, but the table is never actually created. Can anyone explain this? Thanks.

--
This message was sent by Atlassian JIRA
(v6.1#6144)
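One difference between the two sessions stands out: the failing run was automatically switched to local-only mode ("Automatically selecting local only mode for query", with the job reported as "Hadoop job information for null"), while the successful run in the comment above executed as a regular cluster MapReduce job (Stage-6 with 2 mappers). Assuming the bug is tied to the local-mode execution path, a possible workaround is to disable automatic local mode, or the map-join conversion, before the CTAS. The property names below are standard Hive settings, but that they avoid this particular bug is an assumption, not something confirmed in this issue:

```sql
-- Hypothetical workaround sketch (unverified): keep the CTAS off the
-- local-mode code path by disabling automatic local-mode selection.
set hive.exec.mode.local.auto=false;

-- Alternatively, skip the map-join conversion entirely (slower, but it
-- avoids the local hashtable task seen in the logs above):
-- set hive.auto.convert.join=false;

create table test_09 as
select a.*
from test_01 a
join test_02 b
  on (a.id = b.id);

-- Verify the table was really created before relying on it:
select * from test_09 limit 10;
```

If the table appears with one setting but not the other, that would help narrow the bug down to the auto-local-mode path.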