I found a workaround: when I create the Hive table using Spark's "saveAsTable", I see
filters being pushed down.
The other approaches I tried, where filters are NOT pushed down, are:
1) when I create the Hive table upfront and load the ORC data into it using Spark SQL
2) when I create ORC files using Spark SQL and t
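For what it's worth, the working path can be sketched as below. This is a spark-shell sketch, not the poster's exact code; the input path, table name, and column name are assumptions:

```scala
// Hypothetical sketch of the path where pushdown was observed:
// write the DataFrame out as a Spark-managed ORC table via saveAsTable.
val df = spark.read.orc("/data/input")            // assumed input location
df.write.format("orc").saveAsTable("logs_orc")    // Spark-created Hive table

// Then check whether a filter reaches the ORC reader by inspecting the plan:
spark.table("logs_orc").filter($"id" === 42).explain(true)
// A pushed filter shows up in the physical plan as something like
// "PushedFilters: [IsNotNull(id), EqualTo(id,42)]".
```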
Well, the meta information is in the file, so I am not surprised that it reads
the file; but it should not read all of the content, which is probably also not
happening.
> On 24. Oct 2017, at 18:16, Siva Gudavalli wrote:
Hello, I have an update here. Spark SQL is pushing predicates down if I load
the ORC files into the Spark context, but not when I try to read the Hive
table directly. Please let me know if I am missing something here. Is this
supported in Spark? When I load the files in the Spark context:
scal
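One way to compare the two read paths is to inspect the physical plan for each. A minimal spark-shell sketch, where the warehouse path, table, and column names are assumptions, not the poster's actual ones:

```scala
// Note: in the Spark 2.x releases current at the time, ORC filter pushdown
// also had to be enabled explicitly (it defaulted to false):
spark.conf.set("spark.sql.orc.filterPushdown", "true")

// Case that worked: reading the ORC files directly into the Spark context.
val direct = spark.read.orc("/apps/hive/warehouse/mydb.db/mytable")
direct.filter($"key" === "x").explain(true)
// The physical plan should list the predicate under "PushedFilters: [...]".

// Problematic case: reading through the Hive metastore table.
spark.table("mydb.mytable").filter($"key" === "x").explain(true)
// Comparing the two plans shows whether the Hive read path drops the pushdown.
```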
Hello,
I am working with Spark SQL to query a Hive managed table (in ORC format).
I have my data organized by partitions, and I was asked to set an index for each
50,000 rows by setting 'orc.row.index.stride'='50000'.
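For reference, a stride setting like this is usually attached as a table property at creation time. A hedged sketch; the database, table, columns, and partition key here are made up for illustration:

```scala
// Hypothetical DDL sketch: set the ORC row-index stride as a table property
// so each stripe gets a row-group index entry every 50,000 rows.
spark.sql("""
  CREATE TABLE mydb.mytable (key STRING, value BIGINT)
  PARTITIONED BY (dt STRING)
  STORED AS ORC
  TBLPROPERTIES ('orc.row.index.stride' = '50000')
""")
```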
Let's say that, after evaluating partitions, there are around 50 files in which
the data is