Hi,

Since you have many issues, let's focus on one issue first.
>>> not able to use the HiveContext to read the Hive table

Can you paste your code showing how you use HiveContext? Do you create it yourself? It should be created by Zeppelin, so you don't need to create it. What's in the interpreter log?

On Thu, Aug 18, 2016 at 7:35 PM, Nagasravanthi, Valluri <valluri.nagasravan...@pfizer.com> wrote:

> Hi,
>
> I am using Zeppelin 0.6.0. Please find below the issues along with a detailed explanation.
>
> Zeppelin 0.6.0 issues:
>
> a. Not able to execute DDL statements like CREATE/DROP TABLE using temp tables derived from the Hive table
>    Error log: "java.lang.RuntimeException: [1.1] failure: ``with'' expected but identifier drop found : When using sql interpreter to drop"
>
> b. Not able to use the HiveContext to read the Hive table
>    Error log: "error: object HiveContext in package hive cannot be accessed in package org.apache.spark.sql.hive"
>
> Detailed explanation:
>
> I upgraded from Zeppelin 0.5.6 to 0.6.0 last week and am facing some issues while using notebooks on 0.6.0. I am using Ambari 2.4.2 as my cluster manager and the Spark version is 1.6.
>
> The workflow of the notebook is as follows:
>
> 1. Create a Spark Scala DataFrame by reading a Hive table stored in Parquet/text format using sqlContext, e.g. sqlContext.read.parquet("/tablelocation/tablename")
> 2. Import sqlContext.implicits._
> 3. Register the DataFrame as a temp table
> 4. Write queries using the %sql interpreter or sqlContext.sql
>
> The issue I am facing right now is that although I can execute SELECT queries on the temp tables, I am not able to execute DDL statements like CREATE/DROP TABLE using temp tables derived from the Hive table.
>
> Following is my code:
>
> 1st case: sqlContext.sql("drop if exists tablename")
> 2nd case: %sql
>           drop if exists tablename
>
> I get the same error in both cases:
> java.lang.RuntimeException: [1.1] failure: ``with'' expected but identifier drop found : When using sql interpreter to drop
>
> Note that the same code used to work in Zeppelin 0.5.6.
>
> After researching a bit, I found that I need to use HiveContext to query a Hive table.
>
> The second issue is that I was able to import HiveContext using "import org.apache.spark.sql.hive.HiveContext", but I was not able to use the HiveContext to read the Hive table.
>
> This is the code I wrote:
> HiveContext.read.parquet("/tablelocation/tablename")
>
> I got the following error:
> error: object HiveContext in package hive cannot be accessed in package org.apache.spark.sql.hive
>
> I have not been able to dig deeper into this error as there is not much support online.
>
> Could anyone please suggest a fix for these errors?
>
> Thanks and Regards,
>
> Valluri Naga Sravanthi | On assignment to PfizerWorks
> Cell: +91 9008412366
> Email: pfizerwor...@pfizer.com; valluri.nagasravan...@pfizer.com
> Website: http://pfizerWorks.pfizer.com

--
Best Regards

Jeff Zhang
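
For reference, a minimal sketch (not from the original thread) of how this is typically done in Zeppelin with Spark 1.6, assuming the Spark interpreter was built with Hive support and zeppelin.spark.useHiveContext is left at its default of true, in which case the sqlContext that Zeppelin injects is already a HiveContext and no new context needs to be constructed. The path and the table names (my_temp_table, some_hive_table) are placeholders:

// Run in a %spark paragraph; sc and sqlContext are provided by Zeppelin.
// With useHiveContext enabled, sqlContext should report itself as
// org.apache.spark.sql.hive.HiveContext.
println(sqlContext.getClass.getName)

// Read the Parquet files backing the table into a DataFrame (placeholder path).
val df = sqlContext.read.parquet("/tablelocation/tablename")

// Make it visible to %sql paragraphs.
df.registerTempTable("my_temp_table")

// DDL goes through the Hive parser; note the full DROP TABLE IF EXISTS syntax
// (the snippet in the thread omitted the TABLE keyword).
sqlContext.sql("DROP TABLE IF EXISTS some_hive_table")

// If a separate HiveContext is really needed, it is a class and must be
// instantiated; calling HiveContext.read directly fails because HiveContext
// is not a singleton object, which matches the "cannot be accessed" error.
import org.apache.spark.sql.hive.HiveContext
val hiveCtx = new HiveContext(sc)
val df2 = hiveCtx.read.parquet("/tablelocation/tablename")

If the injected sqlContext turns out to be a plain SQLContext (i.e. Hive support is not enabled in the interpreter), that alone would likely explain both the DDL parse failure in the %sql paragraph and the need to construct a Hive-backed context by hand.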