[ https://issues.apache.org/jira/browse/HIVE-3874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Phabricator updated HIVE-3874:
------------------------------

    Attachment: HIVE-3874.D8529.2.patch

omalley updated the revision "HIVE-3874 [jira] Create a new Optimized Row 
Columnar file format for Hive".

  Addressed Kevin's feedback.

  * Fixed 500+ checkstyle warnings; there are a few left that I couldn't
    avoid.
  * Fixed all of the cases that I could find where operators didn't have spaces
    around them. If we care about that, we should configure checkstyle to check
    for it.
  * Fixed the source directory to match the package name.
  * Removed the redundant loop initializing the boolean array in OrcInputFormat
    (see the sketch below).
  * Added code to include the dictionary size when estimating memory size.
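
For context on the boolean-array item above: Java zero-initializes array
elements, so a newly allocated boolean[] is already all false and an explicit
initialization loop adds nothing. A minimal, self-contained sketch, illustrative
only and not the actual OrcInputFormat code:

    // Illustrative sketch only -- not the actual OrcInputFormat code.
    public class BooleanArrayDefaults {
      public static void main(String[] args) {
        boolean[] included = new boolean[10];
        // Redundant: every element is already false after allocation,
        // which is why the equivalent loop could simply be removed.
        for (int i = 0; i < included.length; ++i) {
          included[i] = false;
        }
        System.out.println(included[0]);   // prints "false" either way
      }
    }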

Reviewers: JIRA

REVISION DETAIL
  https://reviews.facebook.net/D8529

CHANGE SINCE LAST DIFF
  https://reviews.facebook.net/D8529?vs=27621&id=28101#toc

AFFECTED FILES
  build.properties
  build.xml
  ivy/libraries.properties
  ql/build.xml
  ql/ivy.xml
  ql/src/gen/protobuf/gen-java/org/apache/hadoop/hive/ql/io/orc/OrcProto.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/BitFieldReader.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/BitFieldWriter.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/BooleanColumnStatistics.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/ColumnStatistics.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/ColumnStatisticsImpl.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/CompressionCodec.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/CompressionKind.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/DoubleColumnStatistics.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/DynamicByteArray.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/DynamicIntArray.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/FileDump.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/InStream.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/IntegerColumnStatistics.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcFile.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcOutputFormat.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcSerde.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcUnion.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OutStream.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/PositionProvider.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/PositionRecorder.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/PositionedOutputStream.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/Reader.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/ReaderImpl.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReader.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReaderImpl.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/RedBlackTree.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/RunLengthByteReader.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/RunLengthByteWriter.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/RunLengthIntegerReader.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/RunLengthIntegerWriter.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/SerializationUtils.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/SnappyCodec.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/StreamName.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/StringColumnStatistics.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/StringRedBlackTree.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/StripeInformation.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/Writer.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/ZlibCodec.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/package-info.java
  ql/src/protobuf/org/apache/hadoop/hive/ql/io/orc/orc_proto.proto
  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestBitFieldReader.java
  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestDynamicArray.java
  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestFileDump.java
  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestInStream.java
  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestInputOutputFormat.java
  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcFile.java
  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcStruct.java
  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestRunLengthByteReader.java
  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestRunLengthIntegerReader.java
  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestSerializationUtils.java
  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestStreamName.java
  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestStringRedBlackTree.java
  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestZlib.java
  ql/src/test/resources/orc-file-dump.out

To: JIRA, omalley
Cc: kevinwilfong, njain

                
> Create a new Optimized Row Columnar file format for Hive
> --------------------------------------------------------
>
>                 Key: HIVE-3874
>                 URL: https://issues.apache.org/jira/browse/HIVE-3874
>             Project: Hive
>          Issue Type: Improvement
>          Components: Serializers/Deserializers
>            Reporter: Owen O'Malley
>            Assignee: Owen O'Malley
>         Attachments: hive.3874.2.patch, HIVE-3874.D8529.1.patch, 
> HIVE-3874.D8529.2.patch, OrcFileIntro.pptx, orc.tgz
>
>
> There are several limitations of the current RC File format that I'd like to 
> address by creating a new format:
> * each column value is stored as a binary blob, which means:
> ** the entire column value must be read, decompressed, and deserialized
> ** the file format can't use smarter type-specific compression
> ** push-down filters can't be evaluated
> * the start of each row group needs to be found by scanning
> * user metadata can only be added to the file when the file is created
> * the file doesn't store the number of rows per file or row group
> * there is no mechanism for seeking to a particular row number, which is 
> required for external indexes
> * there is no mechanism for storing lightweight indexes within the file to 
> enable push-down filters to skip entire row groups
> * the types of the rows aren't stored in the file
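
To make the row-count and seeking points above concrete, a format that records
the row count and row-group offsets in its metadata can expose them through its
reader. The following is a hypothetical interface sketch; the names are
illustrative and are not taken from the patch:

    // Hypothetical sketch -- names are illustrative, not the actual
    // org.apache.hadoop.hive.ql.io.orc.Reader/RecordReader API.
    import java.io.IOException;

    interface RowSeekableReader {
      /** Total number of rows in the file, read from the file's metadata. */
      long getNumberOfRows();

      /** Jump to an absolute row number (e.g. from an external index) by
       *  locating the enclosing row group and skipping forward within it. */
      void seekToRow(long rowNumber) throws IOException;

      /** True if more rows remain. */
      boolean hasNext() throws IOException;

      /** Read the next row, optionally reusing a previously returned object. */
      Object next(Object previous) throws IOException;
    }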

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
