Hi,

I have been seeing errors at the OS level when running Sqoop import or HBase jobs to
get data into Hive and HBase respectively.

The gist of the error is in the last line:

2016-09-22 10:49:39,472 [myid:] - INFO  [main:Job@1356] - Job
job_1474535924802_0003 completed successfully
2016-09-22 10:49:39,611 [myid:] - ERROR [main:ImportTool@607] - Imported
Failed: No enum constant org.apache.hadoop.mapreduce.
JobCounter.MB_MILLIS_MAPS


In short, the first part of the Sqoop job (importing the RDBMS table data into
HDFS) finishes OK, then that error comes up and Sqoop stops short of
creating the Hive table and loading data into it.
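For context, "No enum constant" is exactly the message Enum.valueOf throws when the requested name is missing from the enum class actually on the classpath. A minimal sketch below uses a toy enum of my own (OldJobCounter is purely illustrative, standing in for an older Hadoop JobCounter that predates the MB_MILLIS_MAPS counter):

```java
// Toy enum standing in for an older JobCounter that lacks MB_MILLIS_MAPS.
enum OldJobCounter { MILLIS_MAPS, MILLIS_REDUCES }

public class EnumDemo {
    public static void main(String[] args) {
        try {
            // Asking for a counter name the enum does not define fails
            // with the same message shape as the Sqoop error above.
            Enum.valueOf(OldJobCounter.class, "MB_MILLIS_MAPS");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
            // prints: No enum constant OldJobCounter.MB_MILLIS_MAPS
        }
    }
}
```

If that reading is right, the failure would point at older Hadoop MapReduce classes on Sqoop's classpath rather than at the JDK itself.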

With HBase everything finishes OK, but you end up with a similar error.

It sounds like both Sqoop and HBase generate Java source files that are compiled and run against
MapReduce.

Now Sqoop compiles that Java with the available JDK (in my case java version
"1.8.0_77") and the produced jar file is run against Hadoop MapReduce:

Writing jar file:
/tmp/sqoop-hduser/compile/b3ed391a517259ba2d2434e6f6ee3542/QueryResult.jar

So it suggests there is an incompatibility between the Java version used to
compile the Sqoop-generated code and the one used by Hadoop? As far as I know
they are the same.
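To rule out the simplest case, one could print what the running JVM actually reports on each host and compare; a trivial check, nothing Sqoop-specific:

```java
public class VersionCheck {
    public static void main(String[] args) {
        // Print the JVM actually running this code; run once on the Sqoop
        // host and once on a Hadoop node to rule out a JDK mismatch.
        System.out.println("java.version       = " + System.getProperty("java.version"));
        System.out.println("java.class.version = " + System.getProperty("java.class.version"));
        System.out.println("java.home          = " + System.getProperty("java.home"));
    }
}
```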

I was wondering how I can investigate this further?
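One way to narrow it down might be to check which jar actually supplied org.apache.hadoop.mapreduce.JobCounter at runtime, and whether that enum even defines MB_MILLIS_MAPS. A rough sketch (the WhichJar helper is my own, not a Sqoop or Hadoop tool; the assumption is that you run it with the same classpath Sqoop uses):

```java
import java.security.CodeSource;

// Hypothetical helper: report where a class was loaded from and,
// if it is an enum, list its constants.
public class WhichJar {
    static String describe(String className) throws Exception {
        Class<?> c = Class.forName(className);
        CodeSource src = c.getProtectionDomain().getCodeSource();
        StringBuilder sb = new StringBuilder(className + " loaded from: "
                + (src == null ? "bootstrap class loader (JDK itself)" : src.getLocation()));
        // For an enum, list the constants so a missing MB_MILLIS_MAPS is obvious.
        Object[] consts = c.getEnumConstants();
        if (consts != null)
            for (Object e : consts) sb.append("\n  ").append(e);
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Assumption: invoked with the Hadoop jars Sqoop sees, e.g.
        //   java -cp "$(hadoop classpath):." WhichJar org.apache.hadoop.mapreduce.JobCounter
        String name = args.length > 0 ? args[0]
                : "org.apache.hadoop.mapreduce.JobCounter";
        System.out.println(describe(name));
    }
}
```

If the reported jar is an older hadoop-mapreduce-client-core than the cluster runs, or the constant is absent from the listing, that would explain the error better than a JDK mismatch.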

I have attached the generated file.

thanks




Dr Mich Talebzadeh



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.

Attachment: QueryResult.java
Description: Binary data

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscr...@spark.apache.org
