While I was doing a JOIN operation on three tables using Spark 1.1.1, I
always got the following error. However, I never hit this exception in
Spark 1.1.0 with the same operation and the same data. Has anyone else seen
this problem?
14/12/30 17:49:33 ERROR CliDriver:
org.apache.hadoop.hive.ql.metadata.Hi
While I was running a Spark MR job, there was FetchFailed(BlockManagerId(47,
xx.com, 40975, 0), shuffleId=2, mapId=5, reduceId=286), followed by many
retries, and the job finally failed.
The log showed the following error. Has anybody seen this error, or is
it a known issue in Spa
Got it, thanks a lot!
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Is-There-Any-Benchmarks-Comparing-Spark-SQL-and-Hive-tp17469p17484.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
--
Currently we are using Hive in some products; however, it seems Spark SQL
may be a better choice. Is there any official comparison between them? Thanks a
lot!