[ 
https://issues.apache.org/jira/browse/HIVE-5504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-5504:
-----------------------------------

    Attachment: HIVE-5504.patch

Attaching patch.

Changed ORC's RecordWriter instantiation to fall back to checking the jobConf 
when the ORC-specific properties are not present in the provided table 
properties. This way, users of OrcOutputFormat other than Hive have a way of 
passing these parameters, such as the compression specification, to ORC.
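
As a rough sketch of the fallback idea (the class and method names below are 
illustrative, not the actual patch code):

import java.util.Properties;
import org.apache.hadoop.conf.Configuration;

// Illustrative only: resolve an ORC setting from the table properties
// passed to getRecordWriter(), falling back to the jobConf so that
// callers other than Hive can still supply it.
public final class OrcSettingResolver {
  public static String resolve(String key, String defaultValue,
                               Properties tableProps, Configuration conf) {
    String value = (tableProps != null) ? tableProps.getProperty(key) : null;
    if (value == null && conf != null) {
      value = conf.get(key);               // e.g. conf.get("orc.compress")
    }
    return (value != null) ? value : defaultValue;
  }
}

The intent is that a caller which sets "orc.compress" only in the jobConf 
still gets that codec instead of the zlib default.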

In addition, changed HCat's FileOutputFormatContainer to have a way of 
special-casing instantiation. We already had one RCFile-specific special case; 
that has now been refactored out to a separate class intended to be a 
collection of such special cases. Also added the ability to copy ORC-specific 
table properties from the TableDesc to the job properties (which Hive later 
copies into the jobConf) so as to pass these parameters on to ORC.
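
The property-copying step amounts to something like the following (again, the 
class and method names are illustrative only):

import java.util.Map;
import java.util.Properties;

// Illustrative only: mirror any orc.* entries from the table's properties
// into the job properties map, which Hive later pushes into the jobConf.
public final class OrcTablePropertyCopier {
  public static void copyOrcProperties(Properties tableProps,
                                       Map<String, String> jobProperties) {
    for (String name : tableProps.stringPropertyNames()) {
      if (name.startsWith("orc.")) {         // e.g. "orc.compress"
        jobProperties.put(name, tableProps.getProperty(name));
      }
    }
  }
}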


> OrcOutputFormat honors compression properties only from within Hive
> -------------------------------------------------------------------
>
>                 Key: HIVE-5504
>                 URL: https://issues.apache.org/jira/browse/HIVE-5504
>             Project: Hive
>          Issue Type: Bug
>          Components: HCatalog
>    Affects Versions: 0.11.0, 0.12.0
>            Reporter: Venkat Ranganathan
>         Attachments: HIVE-5504.patch
>
>
> When we import data into an HCatalog table created with the following storage
> description
> .. stored as orc tblproperties ("orc.compress"="SNAPPY")
> the resultant ORC file still uses the default zlib compression.
> It looks like HCatOutputFormat is ignoring the tblproperties specified.
> show tblproperties shows that the table indeed has the properties properly
> saved.
> An insert/select into the table produces an ORC file that honors the table
> property.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
