> If I have an orc table bucketed and sorted on a column, where does hive keep
> the mapping from column value to bucket? Specifically, if I know the column
> value, and need to find the specific hdfs file, is there an api to do this?
The closest to an API is ObjectInspectorUtils.getBucketNumber.
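There is no stored lookup table to consult: the value-to-bucket mapping is purely arithmetic, so it can be recomputed from the value itself. A minimal sketch of the idea in Python (the file-naming pattern and the hash are assumptions based on Hive's classic default scheme, where bucket = (hash & Integer.MAX_VALUE) % numBuckets and bucket files are named 000000_0, 000001_0, ...; `java_string_hash` and `bucket_file_for` are illustrative names, and the real hash comes from ObjectInspectorUtils.hashCode):

```python
def java_string_hash(s: str) -> int:
    """32-bit Java-style String.hashCode, as a stand-in for Hive's hash.

    For plain ASCII strings this matches what Java's String.hashCode
    returns; Hive's actual hashing lives in ObjectInspectorUtils.
    """
    h = 0
    for ch in s:
        h = (h * 31 + ord(ch)) & 0xFFFFFFFF
    # Wrap to a signed 32-bit int, as Java would.
    return h - 0x100000000 if h >= 0x80000000 else h


def bucket_file_for(value: str, num_buckets: int) -> str:
    """Return the bucket file name a string value would land in,
    assuming the default naming convention 000000_0, 000001_0, ..."""
    bucket = (java_string_hash(value) & 0x7FFFFFFF) % num_buckets
    return "%06d_0" % bucket


print(bucket_file_for("abc", 4))  # -> 000002_0
```

Given the bucket number, the file can then be located under the table (or partition) directory in HDFS.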
Hi.
If I have an orc table bucketed and sorted on a column, where does hive
keep the mapping from column value to bucket? Specifically, if I know the
column value, and need to find the specific hdfs file, is there an api to
do this?
Related, is there any documentation on how the read path works f
Dear Apache enthusiast,
ApacheCon and Apache Big Data will be held at the Intercontinental in
Miami, Florida, May 16-18, 2017. Submit your talks, and register, at
http://apachecon.com/ Talks aimed at the Big Data section of the event
should go to
http://events.linuxfoundation.org/events/apache-bi
Looks like a Hive issue executing truncate. Can you please add the --trace
option to the command line? Maybe it will show more details. Did you try to
execute truncate on this table in the Hive CLI? Does it work? Is it a
managed or an external table?
Thanks,
Dmitry
On Wed, Nov 30, 2016 at 5:46 PM, Su Changfeng wrote:
Hi Alan,
I'm using Hive version 1.1.0. The metastore database is PostgreSQL.
I've reduced the rate and now I get the following error when compacting
manually:
ERROR org.apache.hadoop.hive.ql.txn.compactor.CompactorMR: [hvi1x0194-29]: No
delta files found to compact in
hdfs://hvi1x0220:8020/u