See this thread:
http://search-hadoop.com/m/q3RTtwwjNxXvPEe1

A quick search of Spark JIRA didn't turn up any open issue on this subject.
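
A common cause of this symptom (not confirmed for this exact report) is that Hive never learns about the entity=/date= partition directories Spark wrote: the files exist in HDFS, but no partitions are registered in the metastore, so queries return nothing. Below is a minimal sketch of that workaround using the Spark 1.5-era HiveContext; the table name base_table, the (id INT, value STRING) columns, and the HDFS location are hypothetical placeholders, and only the entity/date partition columns and the "baseTable" directory name come from the original post.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.sql.hive.HiveContext;

    public class OrcPartitionRepair {
      public static void main(String[] args) {
        JavaSparkContext sc =
            new JavaSparkContext(new SparkConf().setAppName("orc-partition-repair"));
        HiveContext hive = new HiveContext(sc.sc());

        // Declare an external table over the directory Spark wrote to.
        // The (id, value) columns are placeholders and must match what the
        // job actually wrote; `date` is backquoted because it is a keyword
        // in HiveQL.
        hive.sql("CREATE EXTERNAL TABLE IF NOT EXISTS base_table "
            + "(id INT, value STRING) "
            + "PARTITIONED BY (entity STRING, `date` STRING) "
            + "STORED AS ORC LOCATION '/user/hive/warehouse/baseTable'");

        // Hive does not discover partition directories on its own. MSCK
        // REPAIR TABLE scans the table location and registers every
        // entity=.../date=... directory in the metastore; ALTER TABLE ...
        // ADD PARTITION does the same one partition at a time. Either
        // statement can also be run from the Hive CLI.
        hive.sql("MSCK REPAIR TABLE base_table");
      }
    }

Once the partitions are registered, a plain SELECT against base_table should see the Spark-written files, provided the declared column types match the ORC schema.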

On Tue, Oct 6, 2015 at 8:51 AM, unk1102 <umesh.ka...@gmail.com> wrote:

> Hi, I have a Spark job that creates ORC files in partitions using the
> following code:
>
>
> dataFrame.write().mode(SaveMode.Append).partitionBy("entity","date").format("orc").save("baseTable");
>
> The above code successfully creates ORC files, which are readable in a
> Spark DataFrame.
>
> But when I try to load the ORC files generated by the above code into a
> Hive ORC table or a Hive external table, nothing gets printed; the table
> appears to be empty. What's wrong here? I can see the ORC files in HDFS,
> but the Hive table does not read them. Please guide.
