[
https://issues.apache.org/jira/browse/HIVE-9341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285368#comment-14285368
]
Hive QA commented on HIVE-9341:
-------------------------------
{color:red}Overall{color}: -1 no tests executed
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693542/HIVE-9341.3.patch.txt
Test results:
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2461/testReport
Console output:
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2461/console
Test logs:
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2461/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-2461/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
svn: Error converting entry in directory
'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn:
artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt"
+ rm -rf
+ svn update
svn: Error converting entry in directory
'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn:
artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt"
+ exit 1
'
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12693542 - PreCommit-HIVE-TRUNK-Build
> Apply ColumnPrunning for noop PTFs
> ----------------------------------
>
> Key: HIVE-9341
> URL: https://issues.apache.org/jira/browse/HIVE-9341
> Project: Hive
> Issue Type: Improvement
> Components: PTF-Windowing
> Reporter: Navis
> Assignee: Navis
> Priority: Trivial
> Attachments: HIVE-9341.1.patch.txt, HIVE-9341.2.patch.txt,
> HIVE-9341.3.patch.txt
>
>
> Currently, PTF disables the CP (column pruning) optimization, which can
> impose a significant burden. For example,
> {noformat}
> select p_mfgr, p_name, p_size,
> rank() over (partition by p_mfgr order by p_name) as r,
> dense_rank() over (partition by p_mfgr order by p_name) as dr,
> sum(p_retailprice) over (partition by p_mfgr order by p_name rows between
> unbounded preceding and current row) as s1
> from noop(on part
> partition by p_mfgr
> order by p_name
> );
> STAGE PLANS:
> Stage: Stage-1
> Map Reduce
> Map Operator Tree:
> TableScan
> alias: part
> Statistics: Num rows: 26 Data size: 3147 Basic stats: COMPLETE
> Column stats: NONE
> Reduce Output Operator
> key expressions: p_mfgr (type: string), p_name (type: string)
> sort order: ++
> Map-reduce partition columns: p_mfgr (type: string)
> Statistics: Num rows: 26 Data size: 3147 Basic stats: COMPLETE
> Column stats: NONE
> value expressions: p_partkey (type: int), p_name (type:
> string), p_mfgr (type: string), p_brand (type: string), p_type (type:
> string), p_size (type: int), p_container (type: string), p_retailprice (type:
> double), p_comment (type: string), BLOCK__OFFSET__INSIDE__FILE (type:
> bigint), INPUT__FILE__NAME (type: string), ROW__ID (type:
> struct<transactionid:bigint,bucketid:int,rowid:bigint>)
> ...
> {noformat}
> There should be a generic way to discern referenced columns, but until then,
> we know CP can be safely applied to noop functions.
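The effect of CP described above can be sketched in a few lines of Python. This is illustrative only, not Hive's actual optimizer code: it just shows how pruning the `part` table's columns down to the four the example query references shrinks the value expressions shipped through the Reduce Output Operator.

```python
# Minimal sketch of column pruning (CP): ship only the columns a query
# actually references, instead of every column of the table.
# Hypothetical helper -- not Hive's optimizer implementation.

# Full column list of the `part` table as it appears in the plan above,
# including Hive's virtual columns.
ALL_PART_COLUMNS = [
    "p_partkey", "p_name", "p_mfgr", "p_brand", "p_type", "p_size",
    "p_container", "p_retailprice", "p_comment",
    "BLOCK__OFFSET__INSIDE__FILE", "INPUT__FILE__NAME", "ROW__ID",
]

def prune_columns(all_columns, referenced):
    """Return only the referenced columns, preserving table order."""
    return [c for c in all_columns if c in referenced]

# The example query references just these four columns:
referenced = {"p_mfgr", "p_name", "p_size", "p_retailprice"}
print(prune_columns(ALL_PART_COLUMNS, referenced))
# -> ['p_name', 'p_mfgr', 'p_size', 'p_retailprice']
```

With CP applied, the noop PTF's Reduce Output Operator would carry these four value expressions rather than all twelve columns shown in the plan.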
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)