[ 
https://issues.apache.org/jira/browse/HIVE-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HIVE-6060:
--------------------------------

    Attachment: h-5317.patch

Row ids are not unique across buckets, so the unique identifier is the triple 
(transaction id, bucket id, row id). Alan suggested offline that I add the bucket 
id to the API so that we aren't forced to maintain the current restriction of one 
HDFS file per bucket. I've also added my thoughts on what the reader would 
look like.
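
As a rough sketch, the composite identifier and an updater that takes the bucket 
id explicitly might look something like the Java below. RecordUpdater comes from 
the issue title; RecordIdentifier, the method names, and the signatures are 
assumptions for illustration only, not the contents of h-5317.patch.

import java.io.IOException;

// Composite key for a row: (transaction id, bucket id, row id), as described
// above. Illustrative sketch; the name and the ordering are assumptions.
public class RecordIdentifier implements Comparable<RecordIdentifier> {
  private final long transactionId;
  private final int bucketId;
  private final long rowId;

  public RecordIdentifier(long transactionId, int bucketId, long rowId) {
    this.transactionId = transactionId;
    this.bucketId = bucketId;
    this.rowId = rowId;
  }

  @Override
  public int compareTo(RecordIdentifier other) {
    // Order by transaction id, then bucket id, then row id.
    int cmp = Long.compare(transactionId, other.transactionId);
    if (cmp == 0) cmp = Integer.compare(bucketId, other.bucketId);
    if (cmp == 0) cmp = Long.compare(rowId, other.rowId);
    return cmp;
  }
}

// One possible shape for the updater, with the bucket id passed explicitly so
// that a writer is not tied to exactly one HDFS file per bucket. The row is a
// plain Object here as a placeholder for whatever row representation is used.
interface RecordUpdater {
  void insert(long currentTransaction, int bucketId, Object row) throws IOException;
  void update(long currentTransaction, RecordIdentifier original, Object newRow) throws IOException;
  void delete(long currentTransaction, RecordIdentifier original) throws IOException;
  void flush() throws IOException;
  void close(boolean abort) throws IOException;
}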

I also need to look at what the API looks like for vectorization.

> Define API for RecordUpdater and UpdateReader
> ---------------------------------------------
>
>                 Key: HIVE-6060
>                 URL: https://issues.apache.org/jira/browse/HIVE-6060
>             Project: Hive
>          Issue Type: Sub-task
>            Reporter: Owen O'Malley
>            Assignee: Owen O'Malley
>         Attachments: h-5317.patch, h-5317.patch
>
>
> We need to define some new APIs for how Hive interacts with the file formats, 
> since they need to be much richer than the current RecordReader and 
> RecordWriter.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)