The effort of configuring an Apache big data system by hand for your particular needs is equivalent to herding rattlesnakes and cats into one small room. The documentation is poor, and most of the time the community developers don't really feel like helping you. Use Ambari or any other orchestration tool you can find. It will save you a lot of time and angry moments.
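For what it's worth, a minimal sketch of what the Ambari route looks like from the shell, assuming an Ambari server is already running; the host name, credentials, blueprint file, and cluster name below are placeholders for your own environment, not anything from this thread:

    # Register a cluster blueprint with Ambari instead of configuring each service by hand.
    # "ambari-host", admin/admin, blueprint.json and hostmapping.json are placeholders.
    curl -u admin:admin \
         -H "X-Requested-By: ambari" \
         -X POST \
         -d @blueprint.json \
         http://ambari-host:8080/api/v1/blueprints/my-hive-cluster

    # Then create a cluster from that blueprint with a host mapping file.
    curl -u admin:admin \
         -H "X-Requested-By: ambari" \
         -X POST \
         -d @hostmapping.json \
         http://ambari-host:8080/api/v1/clusters/my-hive-cluster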
On Tuesday, April 18, 2017 11:45 AM, Vihang Karajgaonkar <vih...@cloudera.com> wrote:

+sergio

Thank you for pointing this out. Based on what I see here, https://github.com/apache/hive/blob/branch-2.1/pom.xml#L179, Hive 2.1 supports Spark 1.6. There is a JIRA to add support for Spark 2.0, https://issues.apache.org/jira/browse/HIVE-14029, but that is available from Hive 2.2.x. I have created https://issues.apache.org/jira/browse/HIVE-16472 to fix the wiki for documentation issues and any bugs in the code if needed.

On Mon, Apr 17, 2017 at 6:19 PM, hernan saab <hernan_javier_s...@yahoo.com> wrote:

IMO, that page is a booby trap for newbies that makes them waste their time needlessly. As far as I know, Hive on Spark does not work today. I would say the reason that page is still up is that there is a level of shame in the Hive dev community that a feature like this should be functional by now. DO NOT USE HIVE ON SPARK. Instead, use Hive on Tez.

Hernan

On Monday, April 17, 2017 3:45 PM, Krishnanand Khambadkone <kkhambadk...@yahoo.com> wrote:

Hi, I am trying to run Hive queries using Spark as the execution engine. I am following the instructions on this page: https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started

When I try to run my query, which is a simple count(*) command, I get this error:

Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
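A rough sketch of switching execution engines from the Hive CLI, assuming the table name is a placeholder and that Tez (or a matching Spark build) is already installed and on the classpath:

    # Run the same count with Tez as the execution engine ("my_table" is a placeholder).
    hive --hiveconf hive.execution.engine=tez \
         -e "SELECT count(*) FROM my_table;"

    # If you still want to try Hive on Spark, the engine and Spark master are set the same way.
    # A "Failed to create spark client" error often points to a Spark version that does not
    # match what the Hive release was built against (per the pom.xml above, Spark 1.6 for Hive 2.1).
    hive --hiveconf hive.execution.engine=spark \
         --hiveconf spark.master=yarn \
         -e "SELECT count(*) FROM my_table;"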