Re: "You must build Spark with Hive. Export 'SPARK_HIVE=true'"

2016-11-26 Thread Ruslan Dautkhanov
Yes, I can work with HiveContext from spark-shell. Back to the original question: I'm still getting "You must *build Spark with Hive*. Export 'SPARK_HIVE=true'" (see full stack [2] above). Any ideas? -- Ruslan Dautkhanov On Thu, Nov 24, 2016 at 4:48 PM, Jeff Zhang wrote: > My point is that I suspect CDH
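For context: the check Ruslan describes amounts to constructing a HiveContext directly from spark-shell, which confirms whether the cluster's Spark binary was built with Hive support. A minimal sketch, assuming the Spark 1.6 API (the sample query is purely illustrative):

    import org.apache.spark.sql.hive.HiveContext

    // If this class loads and the context can be constructed, the Spark
    // binary includes Hive support; otherwise it fails with a class or
    // constructor error. `sc` is the SparkContext spark-shell provides.
    val hiveCtx = new HiveContext(sc)
    hiveCtx.sql("SHOW DATABASES").show()

If this succeeds from spark-shell but Zeppelin still raises the error, the interpreter process is most likely not running against the same Spark/Hive classpath.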

SparkInterpreter.java[getSQLContext_1]:256 - Can't create HiveContext. Fallback to SQLContext

2016-11-26 Thread Ruslan Dautkhanov
Getting "SparkInterpreter.java[getSQLContext_1]:256 - Can't create HiveContext. Fallback to SQLContext"; see full stack [1] below. Java 7, Zeppelin 0.6.2, CDH 5.8.3, Spark 1.6. How to fix this? Thank you. [1] WARN [2016-11-26 09:38:39,028] ({pool-2-thread-2} SparkInterpreter.java[getSQLContext_1]:
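The warning describes a fallback path: when a HiveContext cannot be built, the interpreter drops back to a plain SQLContext. A rough Scala sketch of that kind of fallback (illustrative only, not the actual SparkInterpreter source):

    import org.apache.spark.SparkContext
    import org.apache.spark.sql.SQLContext

    def getSQLContext(sc: SparkContext): SQLContext =
      try {
        // Load HiveContext reflectively so a missing Hive dependency
        // does not break the interpreter at class-load time.
        Class.forName("org.apache.spark.sql.hive.HiveContext")
          .getConstructor(classOf[SparkContext])
          .newInstance(sc)
          .asInstanceOf[SQLContext]
      } catch {
        case e: Throwable =>
          println(s"Can't create HiveContext. Fallback to SQLContext (${e.getMessage})")
          new SQLContext(sc)
      }

Hitting the fallback therefore usually means the Hive classes (or hive-site.xml) are not visible to the interpreter, even when spark-shell on the same host has them.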

JDBC Interpreter does not commit updates workaround?

2016-11-26 Thread Matt L
Hello All! From Zeppelin notes, is there a way to make the JDBCInterpreter commit after I run an upsert command? Or somehow call the connection.commit method? I know there's an open JIRA for this (https://issues.apache.org/jira/browse/ZEPPELIN-1645
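For reference, the workaround being asked about boils down to calling commit explicitly on the JDBC connection after the update. A minimal sketch of that outside the interpreter (the driver URL, credentials, table, and upsert statement below are placeholders):

    import java.sql.DriverManager

    val conn = DriverManager.getConnection(
      "jdbc:postgresql://db-host:5432/mydb", "user", "password")
    try {
      // Disable auto-commit so the commit below is explicit.
      conn.setAutoCommit(false)
      val stmt = conn.createStatement()
      stmt.executeUpdate(
        "INSERT INTO events (id, payload) VALUES (1, 'x') " +
        "ON CONFLICT (id) DO UPDATE SET payload = EXCLUDED.payload")
      conn.commit()   // the commit step the question is about
      stmt.close()
    } finally {
      conn.close()
    }

Whether the JDBC interpreter itself can be made to issue that commit is what the open JIRA mentioned above tracks.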