Dear all,
I am following this article to try Hive on Spark
http://hortonworks.com/hadoop-tutorial/using-hive-with-orc-from-apache-spark/

My environment:
Hive 1.2.1
Spark 1.5.1

In a nutshell, I ran spark-shell and created a Hive table:

hiveContext.sql("create table yahoo_orc_table (date STRING, open_price
FLOAT, high_price FLOAT, low_price FLOAT, close_price FLOAT, volume INT,
adj_price FLOAT) stored as orc")

I also computed a DataFrame, and it shows the correct contents:
val results = sqlContext.sql("SELECT * FROM yahoo_stocks_temp")
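
(The temp table yahoo_stocks_temp was registered beforehand along the lines of the tutorial; a rough sketch of that setup, where the CSV path and case class are placeholders from the article rather than my exact code:)

// Rough sketch of the assumed setup for yahoo_stocks_temp (path is a placeholder).
// Assumes the header row has already been stripped from the CSV.
val yahoo_stocks = sc.textFile("hdfs:///tmp/yahoo_stocks.csv")
case class YahooStockPrice(date: String, open: Float, high: Float, low: Float,
  close: Float, volume: Int, adjClose: Float)
val stockprice = yahoo_stocks.map(_.split(","))
  .map(r => YahooStockPrice(r(0), r(1).trim.toFloat, r(2).trim.toFloat, r(3).trim.toFloat,
    r(4).trim.toFloat, r(5).trim.toInt, r(6).trim.toFloat)).toDF()
stockprice.registerTempTable("yahoo_stocks_temp")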

Then I executed the save command
results.write.format("orc").save("yahoo_stocks_orc")
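
(If the ORC files were actually produced, reading the same path back should show the rows; a quick check I would expect to work, assuming the same relative path:)

// Sketch: read the ORC output back to verify it contains data.
val check = sqlContext.read.format("orc").load("yahoo_stocks_orc")
check.show()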

I can see that a folder named "yahoo_stocks_orc" was created successfully and
there is a _SUCCESS file inside it, but no ORC files at all. I repeated this
many times with the same result.
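
(A relative path like "yahoo_stocks_orc" should resolve against the default filesystem, so a sketch of how one might check where the output is actually going, assuming the shell's SparkContext sc:)

// Sketch: print the default filesystem that relative output paths resolve against.
println(sc.hadoopConfiguration.get("fs.defaultFS"))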

However,
results.write.format("orc").save("hdfs://*****:8020/yahoo_stocks_orc")
writes the contents successfully.

Please kindly help.

Regards,
Sai
