I am testing Sqoop loads into Hive from Oracle 11.2.0.3 (CDH4 with
hive-common-0.8.1-cdh4.0.1.jar) and getting the error in the subject line. The
BU_NM column is the partition key for the EDW_PROD table in Oracle, so I am not
sure what the error means in Hive "lingo" ;-)

Any suggestions or pointers to solutions are welcome.
Here is the complete output:
-bash-3.2$ sqoop import \
    --hive-import --hive-overwrite \
    --connect jdbc:oracle:thin:@devdw.hdsupply.net:1522:devdw \
    --username ah037047 -P \
    --table EDW_COMMON.EDW_PROD \
    --verbose \
    --split-by 'BU_NM' \
    --hive-table EDW_PROD \
    --hive-partition-key 'BU_NM'
Enter password:
12/09/21 16:22:59 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for
output. You can override
12/09/21 16:22:59 INFO tool.BaseSqoopTool: delimiters with
--fields-terminated-by, etc.
12/09/21 16:22:59 INFO manager.SqlManager: Using default fetchSize of 1000
12/09/21 16:22:59 INFO tool.CodeGenTool: Beginning code generation
12/09/21 16:23:00 INFO manager.OracleManager: Time zone has been set to GMT
12/09/21 16:23:00 INFO manager.SqlManager: Executing SQL statement: SELECT t.*
FROM EDW_COMMON.EDW_PROD t WHERE 1=0
12/09/21 16:23:00 INFO orm.CompilationManager: HADOOP_HOME is /usr/lib/hadoop
Note:
/tmp/sqoop-hdfs/compile/91c05551ea119c6f720f11c73c231449/EDW_COMMON_EDW_PROD.
java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
12/09/21 16:23:02 INFO orm.CompilationManager: Writing jar file:
/tmp/sqoop-hdfs/compile/91c05551ea119c6f720f11c73c231449/
EDW_COMMON.EDW_PROD.jar
12/09/21 16:23:02 INFO mapreduce.ImportJobBase: Beginning import of
EDW_COMMON.EDW_PROD
12/09/21 16:23:02 INFO manager.OracleManager: Time zone has been set to GMT
12/09/21 16:23:03 WARN mapred.JobClient: Use GenericOptionsParser for parsing
the arguments. Applications should implement Tool for the same.
12/09/21 16:23:04 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT
MIN(BU_NM), MAX(BU_NM) FROM EDW_COMMON.EDW_PROD
12/09/21 16:23:05 WARN db.TextSplitter: Generating splits for a textual index
column.
12/09/21 16:23:05 WARN db.TextSplitter: If your database sorts in a case-
insensitive order, this may result in a partial import or duplicate records.
12/09/21 16:23:05 WARN db.TextSplitter: You are strongly encouraged to choose 
an integral split column.
12/09/21 16:23:06 INFO mapred.JobClient: Running job: job_201209201408_0026
12/09/21 16:23:07 INFO mapred.JobClient:  map 0% reduce 0%
12/09/21 16:23:21 INFO mapred.JobClient:  map 20% reduce 0%
12/09/21 16:23:28 INFO mapred.JobClient:  map 40% reduce 0%
12/09/21 16:24:12 INFO mapred.JobClient:  map 60% reduce 0%
12/09/21 16:25:08 INFO mapred.JobClient:  map 80% reduce 0%
12/09/21 16:25:44 INFO mapred.JobClient:  map 100% reduce 0%
12/09/21 16:25:45 INFO mapred.JobClient: Job complete: job_201209201408_0026
12/09/21 16:25:45 INFO mapred.JobClient: Counters: 23
12/09/21 16:25:45 INFO mapred.JobClient:   File System Counters
12/09/21 16:25:45 INFO mapred.JobClient:     FILE: Number of bytes read=0
12/09/21 16:25:45 INFO mapred.JobClient:     FILE: Number of bytes 
written=335336
12/09/21 16:25:45 INFO mapred.JobClient:     FILE: Number of read operations=0
12/09/21 16:25:45 INFO mapred.JobClient:     FILE: Number of large read 
operations=0
12/09/21 16:25:45 INFO mapred.JobClient:     FILE: Number of write operations=0
12/09/21 16:25:45 INFO mapred.JobClient:     HDFS: Number of bytes read=654
12/09/21 16:25:45 INFO mapred.JobClient:     HDFS: Number of bytes
written=3999082946
12/09/21 16:25:45 INFO mapred.JobClient:     HDFS: Number of read operations=8
12/09/21 16:25:45 INFO mapred.JobClient:     HDFS: Number of large read 
operations=0
12/09/21 16:25:45 INFO mapred.JobClient:     HDFS: Number of write operations=5
12/09/21 16:25:45 INFO mapred.JobClient:   Job Counters
12/09/21 16:25:45 INFO mapred.JobClient:     Launched map tasks=5
12/09/21 16:25:45 INFO mapred.JobClient:     Total time spent by all maps in
occupied slots (ms)=362999
12/09/21 16:25:45 INFO mapred.JobClient:     Total time spent by all reduces in
occupied slots (ms)=0
12/09/21 16:25:45 INFO mapred.JobClient:     Total time spent by all maps
waiting after reserving slots (ms)=0
12/09/21 16:25:45 INFO mapred.JobClient:     Total time spent by all reduces
waiting after reserving slots (ms)=0
12/09/21 16:25:45 INFO mapred.JobClient:   Map-Reduce Framework
12/09/21 16:25:45 INFO mapred.JobClient:     Map input records=8070533
12/09/21 16:25:45 INFO mapred.JobClient:     Map output records=8070533
12/09/21 16:25:45 INFO mapred.JobClient:     Input split bytes=654
12/09/21 16:25:45 INFO mapred.JobClient:     Spilled Records=0
12/09/21 16:25:45 INFO mapred.JobClient:     CPU time spent (ms)=360920
12/09/21 16:25:45 INFO mapred.JobClient:     Physical memory (bytes)
snapshot=1933713408
12/09/21 16:25:45 INFO mapred.JobClient:     Virtual memory (bytes)
snapshot=7363780608
12/09/21 16:25:45 INFO mapred.JobClient:     Total committed heap usage
(bytes)=2506227712
12/09/21 16:25:45 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 162.6588
seconds (0 bytes/sec)
12/09/21 16:25:45 INFO mapreduce.ImportJobBase: Retrieved 8070533 records.
12/09/21 16:25:45 INFO hive.HiveImport: Removing temporary files from import
process: EDW_COMMON.EDW_PROD/_logs
12/09/21 16:25:45 INFO hive.HiveImport: Loading uploaded data into Hive
12/09/21 16:25:45 INFO manager.OracleManager: Time zone has been set to GMT
12/09/21 16:25:45 INFO manager.SqlManager: Executing SQL statement: SELECT t.*
FROM EDW_COMMON.EDW_PROD t WHERE 1=0
12/09/21 16:25:45 WARN hive.TableDefWriter: Column PROD_ID had to be cast to a
less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column ORG_HIER_ID had to be cast 
to a less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column SKU_START_DT had to be cast
to a less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column PO_REPL_COST had to be cast
to a less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column EFF_DT_FOR_PO_REPL_COST had
to be cast to a less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column STD_COST had to be cast to a
less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column EFF_DT_FOR_STD_COST had to be
cast to a less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column TRADE_PRC had to be cast to a
less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column EFF_DT_FOR_TRADE_PRC had to
be cast to a less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column PROD_PURCH_UOM_QTY had to be
cast to a less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column CON_FACT_BETW_PUR_UOM_REC_UOM
had to be cast to a less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column PRC_UOM_QTY had to be cast to
a less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column STK_UOM_QTY had to be cast to
a less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column UNT_PER_PKG had to be cast to
a less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column CASE_QTY had to be cast to a
less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column PALLET_QTY had to be cast to
a less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column LEN_OF_BUY_PKG had to be cast
to a less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column HGHT_OF_BUY_PKG had to be
cast to a less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column WID_OF_BUY_PKG had to be cast
to a less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column SINGLE_PC_WT_OF_BUY_PKG had
to be cast to a less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column CUB_OF_BUY_PKG had to be cast
to a less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column IMP_DUTY had to be cast to a
less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column MIN_ORD_QTY had to be cast to
a less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column PRC_MTRX had to be cast to a
less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column CRT_BTCH_ID had to be cast to
a less precise type in Hive
12/09/21 16:25:45 WARN hive.TableDefWriter: Column UPD_BTCH_ID had to be cast to
a less precise type in Hive
12/09/21 16:25:46 INFO hive.HiveImport: WARNING:
org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use
org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
12/09/21 16:25:47 INFO hive.HiveImport: Logging initialized using configuration
in jar:file:/usr/lib/hive/lib/hive-common-0.8.1-cdh4.0.1.jar!/hive-log4j.
properties
12/09/21 16:25:47 INFO hive.HiveImport: Hive history
file=/tmp/hdfs/hive_job_log_hdfs_201209211625_1831920.txt
12/09/21 16:25:50 INFO hive.HiveImport: OK
12/09/21 16:25:50 INFO hive.HiveImport: Time taken: 3.375 seconds
12/09/21 16:25:51 INFO hive.HiveImport: FAILED: Error in semantic analysis:
Non-Partition column appears in the partition specification:  bu_nm
12/09/21 16:25:51 ERROR tool.ImportTool: Encountered IOException running import
job: java.io.IOException: Hive exited with status 10
        at
org.apache.sqoop.hive.HiveImport.executeExternalHiveScript(HiveImport.java:388)
        at org.apache.sqoop.hive.HiveImport.executeScript(HiveImport.java:338)
        at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:249)
        at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:415)
        at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:476)
        at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
        at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
        at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)
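If it helps anyone diagnose this, my reading of the Hive error is that the script Sqoop generates names bu_nm in a PARTITION clause, while the table it creates (or finds already created) does not declare bu_nm under PARTITIONED BY, so Hive rejects the partition specification. A hedged sketch of the mismatch as I understand it, using column names from my schema (the partition value, HDFS path, and exact DDL Sqoop emits are my guesses, not copied from the generated script):

```sql
-- If Hive sees bu_nm declared as an ordinary data column...
CREATE TABLE edw_prod (
  bu_nm STRING,
  prod_id DOUBLE
  -- ... remaining columns
);

-- ...then a load that names it in the partition spec fails with
-- "Non-Partition column appears in the partition specification: bu_nm"
-- (path and value below are hypothetical)
LOAD DATA INPATH '/user/hdfs/EDW_COMMON.EDW_PROD'
  INTO TABLE edw_prod PARTITION (bu_nm='SOME_VALUE');

-- For that PARTITION clause to be legal, bu_nm would have to be a
-- partition column instead of a data column:
CREATE TABLE edw_prod_partitioned (
  prod_id DOUBLE
  -- ... remaining columns, without bu_nm
)
PARTITIONED BY (bu_nm STRING);
```

So my question may reduce to: how do I tell Sqoop to keep BU_NM out of the data columns when it is the Hive partition key?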