[ https://issues.apache.org/jira/browse/HIVE-4014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13591350#comment-13591350 ]

Tamas Tarjanyi commented on HIVE-4014:
--------------------------------------

Hi Vinod,

As I stated above:

BAD: CDH 4.1.3 - which uses hadoop-2.0.0+556 / hive-0.9.0+158
GOOD: hadoop 1.0.3 / hive 0.10.0 (apache download)
GOOD: hadoop 1.0.4 / hive 0.10.0 (apache download)
Meanwhile I have also tried Hortonworks Data Platform 1.2.1:
GOOD: HDP 1.2.1 - Apache Hadoop 1.1.2-rc3 / Apache Hive 0.10.0

So it now seems that the issue is specific to hive-0.9.

My real problem is that both Hortonworks and Cloudera bundle hive-0.9 with 
hadoop-2.x.y, while I want to use hadoop-2.x.y with hive-0.10.x rather than hadoop-1.
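
For anyone who wants to confirm the behaviour on a given build, a rough sketch of the kind of check involved (the table and column names below are made up purely for illustration): create a small RCFile table, run a single-column projection, and compare the job's HDFS bytes read counter against a full SELECT *. With working column pruning the projection should read noticeably fewer bytes.

  -- hypothetical RCFile table, names for illustration only
  CREATE TABLE events_rc (id BIGINT, payload STRING, ts STRING)
  STORED AS RCFILE;

  -- populate it from some existing table (assumed to already exist)
  INSERT OVERWRITE TABLE events_rc
  SELECT id, payload, ts FROM events_text;

  -- single-column projection: with column pruning, the HDFS bytes read
  -- counter of this job should be far below that of SELECT * FROM events_rc
  SELECT id FROM events_rc;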

> Hive+RCFile is not doing column pruning and reading much more data than 
> necessary
> ---------------------------------------------------------------------------------
>
>                 Key: HIVE-4014
>                 URL: https://issues.apache.org/jira/browse/HIVE-4014
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Vinod Kumar Vavilapalli
>            Assignee: Vinod Kumar Vavilapalli
>
> With even simple projection queries, I see that HDFS bytes read counter 
> doesn't show any reduction in the amount of data read.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira