[ https://issues.apache.org/jira/browse/HIVE-8454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185633#comment-14185633 ]
Hive QA commented on HIVE-8454:
-------------------------------

{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12677367/HIVE-8454.6.patch

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1481/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1481/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1481/

Messages:
{noformat}
**** This message was trimmed, see log for full details ****
[INFO] CP: /data/hive-ptest/working/maven/org/apache/hadoop/hadoop-yarn-common/2.5.0/hadoop-yarn-common-2.5.0.jar
[INFO] CP: /data/hive-ptest/working/maven/org/apache/hadoop/hadoop-yarn-api/2.5.0/hadoop-yarn-api-2.5.0.jar
[INFO] CP: /data/hive-ptest/working/maven/javax/xml/bind/jaxb-api/2.2.2/jaxb-api-2.2.2.jar
[INFO] CP: /data/hive-ptest/working/maven/javax/xml/stream/stax-api/1.0-2/stax-api-1.0-2.jar
[INFO] CP: /data/hive-ptest/working/maven/javax/activation/activation/1.1/activation-1.1.jar
[INFO] CP: /data/hive-ptest/working/maven/com/google/inject/guice/3.0/guice-3.0.jar
[INFO] CP: /data/hive-ptest/working/maven/javax/inject/javax.inject/1/javax.inject-1.jar
[INFO] CP: /data/hive-ptest/working/maven/aopalliance/aopalliance/1.0/aopalliance-1.0.jar
[INFO] CP: /data/hive-ptest/working/maven/com/sun/jersey/contribs/jersey-guice/1.9/jersey-guice-1.9.jar
[INFO] CP: /data/hive-ptest/working/maven/com/google/inject/extensions/guice-servlet/3.0/guice-servlet-3.0.jar
[INFO] CP: /data/hive-ptest/working/maven/io/netty/netty/3.4.0.Final/netty-3.4.0.Final.jar
[INFO] CP: /data/hive-ptest/working/maven/org/slf4j/slf4j-api/1.7.5/slf4j-api-1.7.5.jar
[INFO] CP: /data/hive-ptest/working/maven/org/slf4j/slf4j-log4j12/1.7.5/slf4j-log4j12-1.7.5.jar
DataNucleus Enhancer (version 3.2.10) for API "JDO" using JRE "1.7"
DataNucleus Enhancer : Classpath
>>  /usr/local/apache-maven-3.0.5/boot/plexus-classworlds-2.4.jar
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MDatabase
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MFieldSchema
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MType
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MTable
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MSerDeInfo
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MOrder
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MColumnDescriptor
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MStringList
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MStorageDescriptor
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MPartition
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MIndex
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MRole
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MRoleMap
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MGlobalPrivilege
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MDBPrivilege
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MTablePrivilege
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MPartitionPrivilege
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MTableColumnPrivilege
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MPartitionColumnPrivilege
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MPartitionEvent
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MMasterKey
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MDelegationToken
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MTableColumnStatistics
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MPartitionColumnStatistics
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MVersionTable
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MResourceUri
ENHANCED (PersistenceCapable) : org.apache.hadoop.hive.metastore.model.MFunction
DataNucleus Enhancer completed with success for 27 classes. Timings : input=486 ms, enhance=1048 ms, total=1534 ms. Consult the log for full details
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ hive-metastore ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /data/hive-ptest/working/apache-svn-trunk-source/metastore/src/test/resources
[INFO] Copying 3 resources
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-metastore ---
[INFO] Executing tasks
main:
    [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/metastore/target/tmp
    [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/metastore/target/warehouse
    [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/metastore/target/tmp/conf
     [copy] Copying 7 files to /data/hive-ptest/working/apache-svn-trunk-source/metastore/target/tmp/conf
[INFO] Executed tasks
[INFO]
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ hive-metastore ---
[INFO] Compiling 15 source files to /data/hive-ptest/working/apache-svn-trunk-source/metastore/target/test-classes
[INFO]
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hive-metastore ---
[INFO] Tests are skipped.
[INFO]
[INFO] --- maven-jar-plugin:2.2:jar (default-jar) @ hive-metastore ---
[INFO] Building jar: /data/hive-ptest/working/apache-svn-trunk-source/metastore/target/hive-metastore-0.15.0-SNAPSHOT.jar
[INFO]
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hive-metastore ---
[INFO]
[INFO] --- maven-jar-plugin:2.2:test-jar (default) @ hive-metastore ---
[INFO] Building jar: /data/hive-ptest/working/apache-svn-trunk-source/metastore/target/hive-metastore-0.15.0-SNAPSHOT-tests.jar
[INFO]
[INFO] --- maven-install-plugin:2.4:install (default-install) @ hive-metastore ---
[INFO] Installing /data/hive-ptest/working/apache-svn-trunk-source/metastore/target/hive-metastore-0.15.0-SNAPSHOT.jar to /data/hive-ptest/working/maven/org/apache/hive/hive-metastore/0.15.0-SNAPSHOT/hive-metastore-0.15.0-SNAPSHOT.jar
[INFO] Installing /data/hive-ptest/working/apache-svn-trunk-source/metastore/pom.xml to /data/hive-ptest/working/maven/org/apache/hive/hive-metastore/0.15.0-SNAPSHOT/hive-metastore-0.15.0-SNAPSHOT.pom
[INFO] Installing /data/hive-ptest/working/apache-svn-trunk-source/metastore/target/hive-metastore-0.15.0-SNAPSHOT-tests.jar to /data/hive-ptest/working/maven/org/apache/hive/hive-metastore/0.15.0-SNAPSHOT/hive-metastore-0.15.0-SNAPSHOT-tests.jar
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Hive Ant Utilities 0.15.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-ant ---
[INFO] Deleting /data/hive-ptest/working/apache-svn-trunk-source/ant (includes = [datanucleus.log, derby.log], excludes = [])
[INFO]
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-ant ---
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ hive-ant ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /data/hive-ptest/working/apache-svn-trunk-source/ant/src/main/resources
[INFO] Copying 3 resources
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-ant ---
[INFO] Executing tasks
main:
[INFO] Executed tasks
[INFO]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-ant ---
[INFO] Compiling 5 source files to /data/hive-ptest/working/apache-svn-trunk-source/ant/target/classes
[WARNING] /data/hive-ptest/working/apache-svn-trunk-source/ant/src/org/apache/hadoop/hive/ant/QTestGenTask.java: /data/hive-ptest/working/apache-svn-trunk-source/ant/src/org/apache/hadoop/hive/ant/QTestGenTask.java uses or overrides a deprecated API.
[WARNING] /data/hive-ptest/working/apache-svn-trunk-source/ant/src/org/apache/hadoop/hive/ant/QTestGenTask.java: Recompile with -Xlint:deprecation for details.
[WARNING] /data/hive-ptest/working/apache-svn-trunk-source/ant/src/org/apache/hadoop/hive/ant/DistinctElementsClassPath.java: /data/hive-ptest/working/apache-svn-trunk-source/ant/src/org/apache/hadoop/hive/ant/DistinctElementsClassPath.java uses unchecked or unsafe operations.
[WARNING] /data/hive-ptest/working/apache-svn-trunk-source/ant/src/org/apache/hadoop/hive/ant/DistinctElementsClassPath.java: Recompile with -Xlint:unchecked for details.
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ hive-ant ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /data/hive-ptest/working/apache-svn-trunk-source/ant/src/test/resources
[INFO] Copying 3 resources
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-ant ---
[INFO] Executing tasks
main:
    [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/ant/target/tmp
    [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/ant/target/warehouse
    [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/ant/target/tmp/conf
     [copy] Copying 7 files to /data/hive-ptest/working/apache-svn-trunk-source/ant/target/tmp/conf
[INFO] Executed tasks
[INFO]
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ hive-ant ---
[INFO] No sources to compile
[INFO]
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hive-ant ---
[INFO] Tests are skipped.
[INFO]
[INFO] --- maven-jar-plugin:2.2:jar (default-jar) @ hive-ant ---
[INFO] Building jar: /data/hive-ptest/working/apache-svn-trunk-source/ant/target/hive-ant-0.15.0-SNAPSHOT.jar
[INFO]
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hive-ant ---
[INFO]
[INFO] --- maven-install-plugin:2.4:install (default-install) @ hive-ant ---
[INFO] Installing /data/hive-ptest/working/apache-svn-trunk-source/ant/target/hive-ant-0.15.0-SNAPSHOT.jar to /data/hive-ptest/working/maven/org/apache/hive/hive-ant/0.15.0-SNAPSHOT/hive-ant-0.15.0-SNAPSHOT.jar
[INFO] Installing /data/hive-ptest/working/apache-svn-trunk-source/ant/pom.xml to /data/hive-ptest/working/maven/org/apache/hive/hive-ant/0.15.0-SNAPSHOT/hive-ant-0.15.0-SNAPSHOT.pom
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Hive Query Language 0.15.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
Downloading: http://repository.apache.org/snapshots/org/apache/calcite/calcite-core/0.9.2-incubating-SNAPSHOT/maven-metadata.xml
[WARNING] Could not transfer metadata org.apache.calcite:calcite-core:0.9.2-incubating-SNAPSHOT/maven-metadata.xml from/to apache.snapshots (http://repository.apache.org/snapshots): Failed to transfer file: http://repository.apache.org/snapshots/org/apache/calcite/calcite-core/0.9.2-incubating-SNAPSHOT/maven-metadata.xml. Return code is: 503 , ReasonPhrase:Service Temporarily Unavailable.
[WARNING] Failure to transfer org.apache.calcite:calcite-core:0.9.2-incubating-SNAPSHOT/maven-metadata.xml from http://repository.apache.org/snapshots was cached in the local repository, resolution will not be reattempted until the update interval of apache.snapshots has elapsed or updates are forced. Original error: Could not transfer metadata org.apache.calcite:calcite-core:0.9.2-incubating-SNAPSHOT/maven-metadata.xml from/to apache.snapshots (http://repository.apache.org/snapshots): Failed to transfer file: http://repository.apache.org/snapshots/org/apache/calcite/calcite-core/0.9.2-incubating-SNAPSHOT/maven-metadata.xml. Return code is: 503 , ReasonPhrase:Service Temporarily Unavailable.
Downloading: http://repository.apache.org/snapshots/org/apache/calcite/calcite-core/0.9.2-incubating-SNAPSHOT/calcite-core-0.9.2-incubating-SNAPSHOT.pom
Downloading: http://repository.apache.org/snapshots/org/apache/calcite/calcite-avatica/0.9.2-incubating-SNAPSHOT/maven-metadata.xml
[WARNING] Could not transfer metadata org.apache.calcite:calcite-avatica:0.9.2-incubating-SNAPSHOT/maven-metadata.xml from/to apache.snapshots (http://repository.apache.org/snapshots): Failed to transfer file: http://repository.apache.org/snapshots/org/apache/calcite/calcite-avatica/0.9.2-incubating-SNAPSHOT/maven-metadata.xml. Return code is: 503 , ReasonPhrase:Service Temporarily Unavailable.
[WARNING] Failure to transfer org.apache.calcite:calcite-avatica:0.9.2-incubating-SNAPSHOT/maven-metadata.xml from http://repository.apache.org/snapshots was cached in the local repository, resolution will not be reattempted until the update interval of apache.snapshots has elapsed or updates are forced. Original error: Could not transfer metadata org.apache.calcite:calcite-avatica:0.9.2-incubating-SNAPSHOT/maven-metadata.xml from/to apache.snapshots (http://repository.apache.org/snapshots): Failed to transfer file: http://repository.apache.org/snapshots/org/apache/calcite/calcite-avatica/0.9.2-incubating-SNAPSHOT/maven-metadata.xml. Return code is: 503 , ReasonPhrase:Service Temporarily Unavailable.
Downloading: http://repository.apache.org/snapshots/org/apache/calcite/calcite-avatica/0.9.2-incubating-SNAPSHOT/calcite-avatica-0.9.2-incubating-SNAPSHOT.pom
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Hive .............................................. SUCCESS [12.567s]
[INFO] Hive Shims Common ................................. SUCCESS [6.790s]
[INFO] Hive Shims 0.20 ................................... SUCCESS [3.799s]
[INFO] Hive Shims Secure Common .......................... SUCCESS [4.770s]
[INFO] Hive Shims 0.20S .................................. SUCCESS [2.291s]
[INFO] Hive Shims 0.23 ................................... SUCCESS [6.919s]
[INFO] Hive Shims ........................................ SUCCESS [1.906s]
[INFO] Hive Common ....................................... SUCCESS [11.361s]
[INFO] Hive Serde ........................................ SUCCESS [20.761s]
[INFO] Hive Metastore .................................... SUCCESS [36.692s]
[INFO] Hive Ant Utilities ................................ SUCCESS [1.727s]
[INFO] Hive Query Language ............................... FAILURE [4:05.164s]
[INFO] Hive Service ...................................... SKIPPED
[INFO] Hive Accumulo Handler ............................. SKIPPED
[INFO] Hive JDBC ......................................... SKIPPED
[INFO] Hive Beeline ...................................... SKIPPED
[INFO] Hive CLI .......................................... SKIPPED
[INFO] Hive Contrib ...................................... SKIPPED
[INFO] Hive HBase Handler ................................ SKIPPED
[INFO] Hive HCatalog ..................................... SKIPPED
[INFO] Hive HCatalog Core ................................ SKIPPED
[INFO] Hive HCatalog Pig Adapter ......................... SKIPPED
[INFO] Hive HCatalog Server Extensions ................... SKIPPED
[INFO] Hive HCatalog Webhcat Java Client ................. SKIPPED
[INFO] Hive HCatalog Webhcat ............................. SKIPPED
[INFO] Hive HCatalog Streaming ........................... SKIPPED
[INFO] Hive HWI .......................................... SKIPPED
[INFO] Hive ODBC ......................................... SKIPPED
[INFO] Hive Shims Aggregator ............................. SKIPPED
[INFO] Hive TestUtils .................................... SKIPPED
[INFO] Hive Packaging .................................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 5:57.454s
[INFO] Finished at: Mon Oct 27 15:05:32 EDT 2014
[INFO] Final Memory: 65M/359M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project hive-exec: Could not resolve dependencies for project org.apache.hive:hive-exec:jar:0.15.0-SNAPSHOT: Failed to collect dependencies for [org.apache.hive:hive-ant:jar:0.15.0-SNAPSHOT (compile), org.apache.hive:hive-common:jar:0.15.0-SNAPSHOT (compile), org.apache.hive:hive-metastore:jar:0.15.0-SNAPSHOT (compile), org.apache.hive:hive-serde:jar:0.15.0-SNAPSHOT (compile), org.apache.hive:hive-shims:jar:0.15.0-SNAPSHOT (compile), com.esotericsoftware.kryo:kryo:jar:2.22 (compile), com.twitter:parquet-hadoop-bundle:jar:1.5.0 (compile), commons-codec:commons-codec:jar:1.4 (compile), commons-httpclient:commons-httpclient:jar:3.0.1 (compile), commons-io:commons-io:jar:2.4 (compile), org.apache.commons:commons-lang3:jar:3.1 (compile), commons-lang:commons-lang:jar:2.6 (compile), commons-logging:commons-logging:jar:1.1.3 (compile), javolution:javolution:jar:5.5.1 (compile), log4j:log4j:jar:1.2.16 (compile), org.antlr:antlr-runtime:jar:3.4 (compile), org.antlr:ST4:jar:4.0.4 (compile), org.apache.avro:avro:jar:1.7.5 (compile), org.apache.avro:avro-mapred:jar:hadoop2:1.7.5 (compile), org.apache.ant:ant:jar:1.9.1 (compile), org.apache.commons:commons-compress:jar:1.4.1 (compile), org.apache.thrift:libfb303:jar:0.9.0 (compile), org.apache.thrift:libthrift:jar:0.9.0 (compile), org.apache.zookeeper:zookeeper:jar:3.4.5 (compile), org.codehaus.groovy:groovy-all:jar:2.1.6 (compile), org.codehaus.jackson:jackson-core-asl:jar:1.9.2 (compile), org.jodd:jodd-core:jar:3.5.2 (compile), org.codehaus.jackson:jackson-mapper-asl:jar:1.9.2 (compile), org.datanucleus:datanucleus-core:jar:3.2.10 (compile), org.apache.calcite:calcite-core:jar:0.9.2-incubating-SNAPSHOT (compile), org.apache.calcite:calcite-avatica:jar:0.9.2-incubating-SNAPSHOT (compile), com.google.guava:guava:jar:11.0.2 (compile), com.google.protobuf:protobuf-java:jar:2.5.0 (compile), com.googlecode.javaewah:JavaEWAH:jar:0.3.2 (compile), org.iq80.snappy:snappy:jar:0.2 (compile), org.json:json:jar:20090211 (compile), stax:stax-api:jar:1.0.1 (compile), net.sf.opencsv:opencsv:jar:2.3 (compile), com.twitter:parquet-column:jar:tests:1.5.0 (test), junit:junit:jar:4.10 (test), org.mockito:mockito-all:jar:1.9.5 (test), org.apache.tez:tez-api:jar:0.5.1 (compile?), org.apache.tez:tez-runtime-library:jar:0.5.1 (compile?), org.apache.tez:tez-runtime-internals:jar:0.5.1 (compile?), org.apache.tez:tez-mapreduce:jar:0.5.1 (compile?), org.apache.hadoop:hadoop-common:jar:2.5.0 (compile?), org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.5.0 (compile?), org.apache.hadoop:hadoop-mapreduce-client-common:jar:2.5.0 (test?), org.apache.hadoop:hadoop-hdfs:jar:2.5.0 (compile?), org.apache.hadoop:hadoop-yarn-api:jar:2.5.0 (compile?), org.apache.hadoop:hadoop-yarn-common:jar:2.5.0 (compile?), org.apache.hadoop:hadoop-yarn-client:jar:2.5.0 (compile?), org.slf4j:slf4j-api:jar:1.7.5 (compile), org.slf4j:slf4j-log4j12:jar:1.7.5 (compile)]: Failed to read artifact descriptor for org.apache.calcite:calcite-core:jar:0.9.2-incubating-SNAPSHOT: Could not transfer artifact org.apache.calcite:calcite-core:pom:0.9.2-incubating-SNAPSHOT from/to apache.snapshots (http://repository.apache.org/snapshots): Failed to transfer file: http://repository.apache.org/snapshots/org/apache/calcite/calcite-core/0.9.2-incubating-SNAPSHOT/calcite-core-0.9.2-incubating-SNAPSHOT.pom. Return code is: 503 , ReasonPhrase:Service Temporarily Unavailable. -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hive-exec
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12677367 - PreCommit-HIVE-TRUNK-Build

> Select Operator does not rename column stats properly in case of select star
> ----------------------------------------------------------------------------
>
>                 Key: HIVE-8454
>                 URL: https://issues.apache.org/jira/browse/HIVE-8454
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Physical Optimizer
>    Affects Versions: 0.14.0
>            Reporter: Mostafa Mokhtar
>            Assignee: Prasanth J
>            Priority: Critical
>             Fix For: 0.14.0
>
>         Attachments: HIVE-8454.1.patch, HIVE-8454.2.patch, HIVE-8454.3.patch, HIVE-8454.3.patch, HIVE-8454.4.patch, HIVE-8454.5.patch, HIVE-8454.6.patch
>
>
> The estimated data size of some Select Operators is 0. BytesBytesHashMap uses data size to determine the estimated initial number of entries in the hashmap.
> If this data size is 0 then exception is thrown (refer below)
> Query
> {code}
> select count(*) from
>     store_sales
>     JOIN store_returns ON store_sales.ss_item_sk = store_returns.sr_item_sk and store_sales.ss_ticket_number = store_returns.sr_ticket_number
>     JOIN customer ON store_sales.ss_customer_sk = customer.c_customer_sk
>     JOIN date_dim d1 ON store_sales.ss_sold_date_sk = d1.d_date_sk
>     JOIN date_dim d2 ON customer.c_first_sales_date_sk = d2.d_date_sk
>     JOIN date_dim d3 ON customer.c_first_shipto_date_sk = d3.d_date_sk
>     JOIN store ON store_sales.ss_store_sk = store.s_store_sk
>     JOIN item ON store_sales.ss_item_sk = item.i_item_sk
>     JOIN customer_demographics cd1 ON store_sales.ss_cdemo_sk = cd1.cd_demo_sk
>     JOIN customer_demographics cd2 ON customer.c_current_cdemo_sk = cd2.cd_demo_sk
>     JOIN promotion ON store_sales.ss_promo_sk = promotion.p_promo_sk
>     JOIN household_demographics hd1 ON store_sales.ss_hdemo_sk = hd1.hd_demo_sk
>     JOIN household_demographics hd2 ON customer.c_current_hdemo_sk = hd2.hd_demo_sk
>     JOIN customer_address ad1 ON store_sales.ss_addr_sk = ad1.ca_address_sk
>     JOIN customer_address ad2 ON customer.c_current_addr_sk = ad2.ca_address_sk
>     JOIN income_band ib1 ON hd1.hd_income_band_sk = ib1.ib_income_band_sk
>     JOIN income_band ib2 ON hd2.hd_income_band_sk = ib2.ib_income_band_sk
>     JOIN
>     (select cs_item_sk
>         ,sum(cs_ext_list_price) as sale,sum(cr_refunded_cash+cr_reversed_charge+cr_store_credit) as refund
>      from catalog_sales JOIN catalog_returns
>      ON catalog_sales.cs_item_sk = catalog_returns.cr_item_sk
>      and catalog_sales.cs_order_number = catalog_returns.cr_order_number
>      group by cs_item_sk
>      having sum(cs_ext_list_price)>2*sum(cr_refunded_cash+cr_reversed_charge+cr_store_credit)) cs_ui
>     ON store_sales.ss_item_sk = cs_ui.cs_item_sk
> WHERE
>     cd1.cd_marital_status <> cd2.cd_marital_status and
>     i_color in ('maroon','burnished','dim','steel','navajo','chocolate') and
>     i_current_price between 35 and 35 + 10 and
>     i_current_price between 35 + 1 and 35 + 15
>     and d1.d_year = 2001;
> {code}
> {code}
> ], TaskAttempt 3 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.RuntimeException: java.lang.AssertionError: Capacity must be a power of two
>         at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:187)
>         at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:142)
>         at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:324)
>         at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:180)
>         at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:172)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>         at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:172)
>         at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:167)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> Caused by: java.lang.RuntimeException: java.lang.AssertionError: Capacity must be a power of two
>         at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:93)
>         at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:70)
>         at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:273)
>         at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:164)
>         ... 13 more
> Caused by: java.lang.AssertionError: Capacity must be a power of two
>         at org.apache.hadoop.hive.ql.exec.persistence.BytesBytesMultiHashMap.validateCapacity(BytesBytesMultiHashMap.java:302)
>         at org.apache.hadoop.hive.ql.exec.persistence.BytesBytesMultiHashMap.<init>(BytesBytesMultiHashMap.java:159)
>         at org.apache.hadoop.hive.ql.exec.persistence.MapJoinBytesTableContainer.<init>(MapJoinBytesTableContainer.java:73)
>         at org.apache.hadoop.hive.ql.exec.persistence.MapJoinBytesTableContainer.<init>(MapJoinBytesTableContainer.java:64)
>         at org.apache.hadoop.hive.ql.exec.tez.HashTableLoader.load(HashTableLoader.java:145)
>         at org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:201)
>         at org.apache.hadoop.hive.ql.exec.MapJoinOperator.cleanUpInputFileChangedOp(MapJoinOperator.java:236)
>         at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1035)
>         at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1039)
>         at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1039)
>         at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1039)
>         at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1039)
>         at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1039)
>         at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:37)
>         at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:85)
>         ... 16 more
> {code}
> Plan
> {code}
> OK
> STAGE DEPENDENCIES:
>   Stage-1 is a root stage
>   Stage-0 depends on stages: Stage-1
> STAGE PLANS:
>   Stage: Stage-1
>     Tez
>       Edges:
>         Map 11 <- Map 10 (BROADCAST_EDGE), Map 20 (BROADCAST_EDGE)
>         Map 12 <- Map 14 (BROADCAST_EDGE)
>         Map 16 <- Map 4 (BROADCAST_EDGE), Map 7 (BROADCAST_EDGE)
>         Map 4 <- Map 1 (BROADCAST_EDGE), Map 15 (BROADCAST_EDGE), Map 18 (BROADCAST_EDGE), Map 19 (BROADCAST_EDGE), Map 2 (BROADCAST_EDGE), Map 3 (BROADCAST_EDGE), Map 5 (BROADCAST_EDGE), Map 6 (BROADCAST_EDGE), Map 8 (BROADCAST_EDGE), Map 9 (BROADCAST_EDGE), Reducer 13 (BROADCAST_EDGE)
>         Map 5 <- Map 11 (BROADCAST_EDGE)
>         Map 6 <- Map 21 (BROADCAST_EDGE)
>         Reducer 13 <- Map 12 (SIMPLE_EDGE)
>         Reducer 17 <- Map 16 (SIMPLE_EDGE)
>       DagName: mmokhtar_20141013195656_e993c552-4b66-4bc4-8f22-3ca49c8727bb:14
>       Vertices:
>         Map 1
>             Map Operator Tree:
>                 TableScan
>                   alias: d1
>                   filterExpr: ((d_year = 2001) and d_date_sk is not null) (type: boolean)
>                   Statistics: Num rows: 73049 Data size: 81741831 Basic stats: COMPLETE Column stats: COMPLETE
>                   Filter Operator
>                     predicate: ((d_year = 2001) and d_date_sk is not null) (type: boolean)
>                     Statistics: Num rows: 652 Data size: 5216 Basic stats: COMPLETE Column stats: COMPLETE
>                     Select Operator
>                       expressions: d_date_sk (type: int)
>                       outputColumnNames: _col0
>                       Statistics: Num rows: 652 Data size: 2608 Basic stats: COMPLETE Column stats: COMPLETE
>                       Reduce Output Operator
>                         key expressions: _col0 (type: int)
>                         sort order: +
>                         Map-reduce partition columns: _col0 (type: int)
>                         Statistics: Num rows: 652 Data size: 2608 Basic stats: COMPLETE Column stats: COMPLETE
>                       Select Operator
>                         expressions: _col0 (type: int)
>                         outputColumnNames: _col0
>                         Statistics: Num rows: 652 Data size: 0 Basic stats: PARTIAL Column stats: COMPLETE
>                         Group By Operator
>                           keys: _col0 (type: int)
>                           mode: hash
>                           outputColumnNames: _col0
>                           Statistics: Num rows: 652 Data size: 0 Basic stats: PARTIAL Column stats: COMPLETE
>                           Dynamic Partitioning Event Operator
>                             Target Input: store_sales
>                             Partition key expr: ss_sold_date_sk
>                             Statistics: Num rows: 652 Data size: 0 Basic stats: PARTIAL Column stats: COMPLETE
>                             Target column: ss_sold_date_sk
>                             Target Vertex: Map 4
>             Execution mode: vectorized
>         Map 10
>             Map Operator Tree:
>                 TableScan
>                   alias: d1
>                   filterExpr: d_date_sk is not null (type: boolean)
>                   Statistics: Num rows: 73049 Data size: 81741831 Basic stats: COMPLETE Column stats: COMPLETE
>                   Filter Operator
>                     predicate: d_date_sk is not null (type: boolean)
>                     Statistics: Num rows: 73049 Data size: 292196 Basic stats: COMPLETE Column stats: COMPLETE
>                     Select Operator
>                       expressions: d_date_sk (type: int)
>                       outputColumnNames: _col0
>                       Statistics: Num rows: 73049 Data size: 292196 Basic stats: COMPLETE Column stats: COMPLETE
>                       Reduce Output Operator
>                         key expressions: _col0 (type: int)
>                         sort order: +
>                         Map-reduce partition columns: _col0 (type: int)
>                         Statistics: Num rows: 73049 Data size: 292196 Basic stats: COMPLETE Column stats: COMPLETE
>             Execution mode: vectorized
>         Map 11
>             Map Operator Tree:
>                 TableScan
>                   alias: customer
>                   filterExpr: (((((c_first_sales_date_sk is not null and c_first_shipto_date_sk is not null) and c_current_cdemo_sk is not null) and c_customer_sk is not null) and c_current_addr_sk is not null) and c_current_hdemo_sk is not null) (type: boolean)
>                   Statistics: Num rows: 1600000 Data size: 1241633212 Basic stats: COMPLETE Column stats: COMPLETE
>                   Filter Operator
>                     predicate: (((((c_first_sales_date_sk is not null and c_first_shipto_date_sk is not null) and c_current_cdemo_sk is not null) and c_customer_sk is not null) and c_current_addr_sk is not null) and c_current_hdemo_sk is not null) (type: boolean)
>                     Statistics: Num rows: 1387729 Data size: 32529300 Basic stats: COMPLETE Column stats: COMPLETE
>                     Select Operator
>                       expressions: c_customer_sk (type: int), c_current_cdemo_sk (type: int), c_current_hdemo_sk (type: int), c_current_addr_sk (type: int), c_first_shipto_date_sk (type: int), c_first_sales_date_sk (type: int)
>                       outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5
>                       Statistics: Num rows: 1387729 Data size: 32529300 Basic stats: COMPLETE Column stats: COMPLETE
>                       Map Join Operator
>                         condition map:
>                              Inner Join 0 to 1
>                         condition expressions:
>                           0 {_col0} {_col1} {_col2} {_col3} {_col4}
>                           1
>                         keys:
>                           0 _col5 (type: int)
>                           1 _col0 (type: int)
>                         outputColumnNames: _col0, _col1, _col2, _col3, _col4
>                         input vertices:
>                           1 Map 10
>                         Statistics: Num rows: 1551647 Data size: 31032940 Basic stats: COMPLETE Column stats: COMPLETE
>                         Map Join Operator
>                           condition map:
>                                Inner Join 0 to 1
>                           condition expressions:
>                             0 {_col0} {_col1} {_col2} {_col3}
>                             1
>                           keys:
>                             0 _col4 (type: int)
>                             1 _col0 (type: int)
>                           outputColumnNames: _col0, _col1, _col2, _col3
>                           input vertices:
>                             1 Map 20
>                           Statistics: Num rows: 1734927 Data size: 27758832 Basic stats: COMPLETE Column stats: COMPLETE
>                           Select Operator
>                             expressions: _col0 (type: int), _col1 (type: int), _col2 (type: int), _col3 (type: int)
>                             outputColumnNames: _col0, _col1, _col2, _col3
>                             Statistics: Num rows: 1734927 Data size: 27758832 Basic stats: COMPLETE Column stats: COMPLETE
>                             Reduce Output Operator
>                               key expressions: _col1 (type: int)
>                               sort order: +
>                               Map-reduce partition columns: _col1 (type: int)
>                               Statistics: Num rows: 1734927 Data size: 27758832 Basic stats: COMPLETE Column stats: COMPLETE
>                               value expressions: _col0 (type: int), _col2 (type: int), _col3 (type: int)
>             Execution mode: vectorized
>         Map 12
>             Map Operator Tree:
>                 TableScan
>                   alias: catalog_sales
>                   filterExpr: (cs_item_sk is not null and cs_order_number is not null) (type: boolean)
>                   Statistics: Num rows: 286549727 Data size: 37743959324 Basic stats: COMPLETE Column stats: COMPLETE
>                   Filter Operator
>                     predicate: (cs_item_sk is not null and cs_order_number is not null) (type: boolean)
>                     Statistics: Num rows: 286549727 Data size: 3435718732 Basic stats: COMPLETE Column stats: COMPLETE
>                     Select Operator
>                       expressions: cs_item_sk (type: int), cs_order_number (type: int), cs_ext_list_price (type: float)
>                       outputColumnNames: _col0, _col1, _col2
>                       Statistics: Num rows: 286549727 Data size: 3435718732 Basic stats: COMPLETE Column stats: COMPLETE
>                       Map Join Operator
>                         condition map:
>                              Inner Join 0 to 1
>                         condition expressions:
>                           0 {_col0} {_col2}
>                           1 {_col2} {_col3} {_col4}
>                         keys:
>                           0 _col0 (type: int), _col1 (type: int)
>                           1 _col0 (type: int), _col1 (type: int)
>                         outputColumnNames: _col0, _col2, _col5, _col6, _col7
>                         input vertices:
>                           1 Map 14
>                         Statistics: Num rows: 7733966 Data size: 123743456 Basic stats: COMPLETE Column stats: COMPLETE
>                         Select Operator
>                           expressions: _col0 (type: int), _col2 (type: float), ((_col5 + _col6) + _col7) (type: float)
>                           outputColumnNames: _col0, _col1, _col2
>                           Statistics: Num rows: 7733966 Data size: 123743456 Basic stats: COMPLETE Column stats: COMPLETE
>                           Group By Operator
>                             aggregations: sum(_col1), sum(_col2)
>                             keys: _col0 (type: int)
>                             mode: hash
>                             outputColumnNames: _col0, _col1, _col2
>                             Statistics: Num rows: 14754 Data size: 295080 Basic stats: COMPLETE Column stats: COMPLETE
>                             Reduce Output Operator
>                               key expressions: _col0 (type: int)
>                               sort order: +
>                               Map-reduce partition columns: _col0 (type: int)
>                               Statistics: Num rows: 14754 Data size: 295080 Basic stats: COMPLETE Column stats: COMPLETE
>                               value expressions: _col1 (type: double), _col2 (type: double)
>             Execution mode: vectorized
>         Map 14
>             Map Operator Tree:
>                 TableScan
>                   alias: catalog_returns
>                   filterExpr: (cr_item_sk is not null and cr_order_number is not null) (type: boolean)
>                   Statistics: Num rows: 28798881 Data size: 2942039156 Basic stats: COMPLETE Column stats: COMPLETE
>                   Filter Operator
>                     predicate: (cr_item_sk is not null and cr_order_number is not null) (type: boolean)
>                     Statistics: Num rows: 28798881 Data size: 569059536 Basic stats: COMPLETE Column stats: COMPLETE
>                     Select Operator
>                       expressions: cr_item_sk (type: int), cr_order_number (type: int), cr_refunded_cash (type: float), cr_reversed_charge (type: float), cr_store_credit (type: float)
>                       outputColumnNames: _col0, _col1, _col2, _col3, _col4
>                       Statistics: Num rows: 28798881 Data size: 569059536 Basic stats: COMPLETE Column stats: COMPLETE
>                       Reduce Output Operator
>                         key expressions: _col0 (type: int), _col1 (type: int)
>                         sort order: ++
>                         Map-reduce partition columns: _col0 (type: int), _col1 (type: int)
>                         Statistics: Num rows: 28798881 Data size: 569059536 Basic stats: COMPLETE Column stats: COMPLETE
>                         value expressions: _col2 (type: float), _col3 (type: float), _col4 (type: float)
>             Execution mode: vectorized
>         Map 15
>             Map Operator Tree:
>                 TableScan
>                   alias: ad1
>                   filterExpr: ca_address_sk is not null (type: boolean)
>                   Statistics: Num rows: 800000 Data size: 811903688 Basic stats: COMPLETE Column stats: COMPLETE
>                   Filter Operator
>                     predicate: ca_address_sk is not null (type: boolean)
>                     Statistics: Num rows: 800000 Data size: 3200000 Basic stats: COMPLETE Column stats: COMPLETE
>                     Select Operator
>                       expressions: ca_address_sk (type: int)
>                       outputColumnNames: _col0
>                       Statistics: Num rows: 800000 Data size: 3200000 Basic stats: COMPLETE Column stats: COMPLETE
>                       Reduce Output Operator
>                         key expressions: _col0 (type: int)
>                         sort order: +
>                         Map-reduce partition columns: _col0 (type: int)
>                         Statistics: Num rows: 800000 Data size: 3200000 Basic stats: COMPLETE Column stats: COMPLETE
>             Execution mode: vectorized
>         Map 16
>             Map Operator Tree:
>                 TableScan
>                   alias: hd1
>                   filterExpr: (hd_income_band_sk is not null and hd_demo_sk is not null) (type: boolean)
>                   Statistics: Num rows: 7200 Data size: 770400 Basic stats: COMPLETE Column stats: COMPLETE
>                   Filter Operator
>                     predicate: (hd_income_band_sk is not null and hd_demo_sk is not null) (type: boolean)
>                     Statistics: Num rows: 7200 Data size: 57600 Basic stats: COMPLETE Column stats: COMPLETE
>                     Select Operator
>                       expressions: hd_demo_sk
(type: int), hd_income_band_sk > (type: int) > outputColumnNames: _col0, _col1 > Statistics: Num rows: 7200 Data size: 57600 Basic > stats: COMPLETE Column stats: COMPLETE > Map Join Operator > condition map: > Inner Join 0 to 1 > condition expressions: > 0 {_col0} > 1 > keys: > 0 _col1 (type: int) > 1 _col0 (type: int) > outputColumnNames: _col0 > input vertices: > 1 Map 7 > Statistics: Num rows: 8000 Data size: 32000 Basic > stats: COMPLETE Column stats: COMPLETE > Map Join Operator > condition map: > Inner Join 0 to 1 > condition expressions: > 0 > 1 > keys: > 0 _col0 (type: int) > 1 _col19 (type: int) > input vertices: > 1 Map 4 > Statistics: Num rows: 90416698032652288 Data size: > 0 Basic stats: PARTIAL Column stats: NONE > Select Operator > Statistics: Num rows: 90416698032652288 Data > size: 0 Basic stats: PARTIAL Column stats: NONE > Group By Operator > aggregations: count() > mode: hash > outputColumnNames: _col0 > Statistics: Num rows: 1 Data size: 8 Basic > stats: COMPLETE Column stats: NONE > Reduce Output Operator > sort order: > Statistics: Num rows: 1 Data size: 8 Basic > stats: COMPLETE Column stats: NONE > value expressions: _col0 (type: bigint) > Execution mode: vectorized > Map 18 > Map Operator Tree: > TableScan > alias: promotion > filterExpr: p_promo_sk is not null (type: boolean) > Statistics: Num rows: 450 Data size: 530848 Basic stats: > COMPLETE Column stats: COMPLETE > Filter Operator > predicate: p_promo_sk is not null (type: boolean) > Statistics: Num rows: 450 Data size: 1800 Basic stats: > COMPLETE Column stats: COMPLETE > Select Operator > expressions: p_promo_sk (type: int) > outputColumnNames: _col0 > Statistics: Num rows: 450 Data size: 1800 Basic stats: > COMPLETE Column stats: COMPLETE > Reduce Output Operator > key expressions: _col0 (type: int) > sort order: + > Map-reduce partition columns: _col0 (type: int) > Statistics: Num rows: 450 Data size: 1800 Basic > stats: COMPLETE Column stats: COMPLETE > Execution mode: 
vectorized > Map 19 > Map Operator Tree: > TableScan > alias: ad1 > filterExpr: ca_address_sk is not null (type: boolean) > Statistics: Num rows: 800000 Data size: 811903688 Basic > stats: COMPLETE Column stats: COMPLETE > Filter Operator > predicate: ca_address_sk is not null (type: boolean) > Statistics: Num rows: 800000 Data size: 3200000 Basic > stats: COMPLETE Column stats: COMPLETE > Select Operator > expressions: ca_address_sk (type: int) > outputColumnNames: _col0 > Statistics: Num rows: 800000 Data size: 3200000 Basic > stats: COMPLETE Column stats: COMPLETE > Reduce Output Operator > key expressions: _col0 (type: int) > sort order: + > Map-reduce partition columns: _col0 (type: int) > Statistics: Num rows: 800000 Data size: 3200000 Basic > stats: COMPLETE Column stats: COMPLETE > Execution mode: vectorized > Map 2 > Map Operator Tree: > TableScan > alias: cd1 > filterExpr: cd_demo_sk is not null (type: boolean) > Statistics: Num rows: 1920800 Data size: 718379200 Basic > stats: COMPLETE Column stats: COMPLETE > Filter Operator > predicate: cd_demo_sk is not null (type: boolean) > Statistics: Num rows: 1920800 Data size: 170951200 Basic > stats: COMPLETE Column stats: COMPLETE > Select Operator > expressions: cd_demo_sk (type: int), cd_marital_status > (type: string) > outputColumnNames: _col0, _col1 > Statistics: Num rows: 1920800 Data size: 170951200 > Basic stats: COMPLETE Column stats: COMPLETE > Reduce Output Operator > key expressions: _col0 (type: int) > sort order: + > Map-reduce partition columns: _col0 (type: int) > Statistics: Num rows: 1920800 Data size: 170951200 > Basic stats: COMPLETE Column stats: COMPLETE > value expressions: _col1 (type: string) > Execution mode: vectorized > Map 20 > Map Operator Tree: > TableScan > alias: d1 > filterExpr: d_date_sk is not null (type: boolean) > Statistics: Num rows: 73049 Data size: 81741831 Basic > stats: COMPLETE Column stats: COMPLETE > Filter Operator > predicate: d_date_sk is not null (type: 
boolean) > Statistics: Num rows: 73049 Data size: 292196 Basic > stats: COMPLETE Column stats: COMPLETE > Select Operator > expressions: d_date_sk (type: int) > outputColumnNames: _col0 > Statistics: Num rows: 73049 Data size: 292196 Basic > stats: COMPLETE Column stats: COMPLETE > Reduce Output Operator > key expressions: _col0 (type: int) > sort order: + > Map-reduce partition columns: _col0 (type: int) > Statistics: Num rows: 73049 Data size: 292196 Basic > stats: COMPLETE Column stats: COMPLETE > Execution mode: vectorized > Map 21 > Map Operator Tree: > TableScan > alias: ib1 > filterExpr: ib_income_band_sk is not null (type: boolean) > Statistics: Num rows: 20 Data size: 240 Basic stats: > COMPLETE Column stats: COMPLETE > Filter Operator > predicate: ib_income_band_sk is not null (type: boolean) > Statistics: Num rows: 20 Data size: 80 Basic stats: > COMPLETE Column stats: COMPLETE > Select Operator > expressions: ib_income_band_sk (type: int) > outputColumnNames: _col0 > Statistics: Num rows: 20 Data size: 80 Basic stats: > COMPLETE Column stats: COMPLETE > Reduce Output Operator > key expressions: _col0 (type: int) > sort order: + > Map-reduce partition columns: _col0 (type: int) > Statistics: Num rows: 20 Data size: 80 Basic stats: > COMPLETE Column stats: COMPLETE > Execution mode: vectorized > Map 3 > Map Operator Tree: > TableScan > alias: item > filterExpr: ((((i_color) IN ('maroon', 'burnished', 'dim', > 'steel', 'navajo', 'chocolate') and i_current_price BETWEEN 35 AND 45) and > i_current_price BETWEEN 36 AND 50) and i_item_sk is not null) (type: boolean) > Statistics: Num rows: 48000 Data size: 68732712 Basic > stats: COMPLETE Column stats: COMPLETE > Filter Operator > predicate: ((((i_color) IN ('maroon', 'burnished', 'dim', > 'steel', 'navajo', 'chocolate') and i_current_price BETWEEN 35 AND 45) and > i_current_price BETWEEN 36 AND 50) and i_item_sk is not null) (type: boolean) > Statistics: Num rows: 6000 Data size: 581936 Basic stats: > 
COMPLETE Column stats: COMPLETE > Select Operator > expressions: i_item_sk (type: int) > outputColumnNames: _col0 > Statistics: Num rows: 6000 Data size: 24000 Basic > stats: COMPLETE Column stats: COMPLETE > Reduce Output Operator > key expressions: _col0 (type: int) > sort order: + > Map-reduce partition columns: _col0 (type: int) > Statistics: Num rows: 6000 Data size: 24000 Basic > stats: COMPLETE Column stats: COMPLETE > Execution mode: vectorized > Map 4 > Map Operator Tree: > TableScan > alias: store_sales > filterExpr: (((((((ss_item_sk is not null and ss_store_sk > is not null) and ss_cdemo_sk is not null) and ss_customer_sk is not null) and > ss_ticket_number is not null) and ss_addr_sk is not null) and ss_promo_sk is > not null) and ss_hdemo_sk is not null) (type: boolean) > Statistics: Num rows: 550076554 Data size: 47370018896 > Basic stats: COMPLETE Column stats: COMPLETE > Filter Operator > predicate: (((((((ss_item_sk is not null and ss_store_sk > is not null) and ss_cdemo_sk is not null) and ss_customer_sk is not null) and > ss_ticket_number is not null) and ss_addr_sk is not null) and ss_promo_sk is > not null) and ss_hdemo_sk is not null) (type: boolean) > Statistics: Num rows: 476766967 Data size: 14987001212 > Basic stats: COMPLETE Column stats: COMPLETE > Select Operator > expressions: ss_item_sk (type: int), ss_customer_sk > (type: int), ss_cdemo_sk (type: int), ss_hdemo_sk (type: int), ss_addr_sk > (type: int), ss_store_sk (type: int), ss_promo_sk (type: int), > ss_ticket_number (type: int), ss_sold_date_sk (type: int) > outputColumnNames: _col0, _col1, _col2, _col3, _col4, > _col5, _col6, _col7, _col8 > Statistics: Num rows: 476766967 Data size: 16894069080 > Basic stats: COMPLETE Column stats: COMPLETE > Map Join Operator > condition map: > Inner Join 0 to 1 > condition expressions: > 0 {_col0} {_col1} {_col2} {_col3} {_col4} {_col5} > {_col6} {_col7} {_col8} > 1 > keys: > 0 _col0 (type: int) > 1 _col0 (type: int) > outputColumnNames: 
_col0, _col1, _col2, _col3, _col4, > _col5, _col6, _col7, _col8 > input vertices: > 1 Map 3 > Statistics: Num rows: 365759084 Data size: > 13167327024 Basic stats: COMPLETE Column stats: COMPLETE > Map Join Operator > condition map: > Inner Join 0 to 1 > condition expressions: > 0 {_col0} {_col1} {_col2} {_col3} {_col4} {_col5} > {_col6} {_col7} > 1 > keys: > 0 _col8 (type: int) > 1 _col0 (type: int) > outputColumnNames: _col0, _col1, _col2, _col3, > _col4, _col5, _col6, _col7 > input vertices: > 1 Map 1 > Statistics: Num rows: 408347470 Data size: > 13067119040 Basic stats: COMPLETE Column stats: COMPLETE > Select Operator > expressions: _col0 (type: int), _col1 (type: > int), _col2 (type: int), _col3 (type: int), _col4 (type: int), _col5 (type: > int), _col6 (type: int), _col7 (type: int) > outputColumnNames: _col0, _col1, _col2, _col3, > _col4, _col5, _col6, _col7 > Statistics: Num rows: 408347470 Data size: > 13067119040 Basic stats: COMPLETE Column stats: COMPLETE > Map Join Operator > condition map: > Inner Join 0 to 1 > condition expressions: > 0 > 1 {_col0} {_col1} {_col2} {_col3} {_col4} > {_col6} {_col7} > keys: > 0 _col0 (type: int) > 1 _col5 (type: int) > outputColumnNames: _col1, _col2, _col3, _col4, > _col5, _col7, _col8 > input vertices: > 0 Map 9 > Statistics: Num rows: 1095818527 Data size: > 30682918756 Basic stats: COMPLETE Column stats: COMPLETE > Select Operator > expressions: _col1 (type: int), _col2 (type: > int), _col3 (type: int), _col4 (type: int), _col5 (type: int), _col7 (type: > int), _col8 (type: int) > outputColumnNames: _col1, _col2, _col3, > _col4, _col5, _col7, _col8 > Statistics: Num rows: 1095818527 Data size: > 30682918756 Basic stats: COMPLETE Column stats: COMPLETE > Map Join Operator > condition map: > Inner Join 0 to 1 > condition expressions: > 0 {_col1} > 1 {_col1} {_col2} {_col4} {_col5} {_col7} > {_col8} > keys: > 0 _col0 (type: int) > 1 _col3 (type: int) > outputColumnNames: _col1, _col3, _col4, > _col6, _col7, _col9, 
_col10 > input vertices: > 0 Map 2 > Statistics: Num rows: 26284318514 Data > size: 2864990718026 Basic stats: COMPLETE Column stats: COMPLETE > Select Operator > expressions: _col1 (type: string), _col10 > (type: int), _col3 (type: int), _col4 (type: int), _col6 (type: int), _col7 > (type: int), _col9 (type: int) > outputColumnNames: _col1, _col10, _col3, > _col4, _col6, _col7, _col9 > Statistics: Num rows: 26284318514 Data > size: 2864990718026 Basic stats: COMPLETE Column stats: COMPLETE > Map Join Operator > condition map: > Inner Join 0 to 1 > condition expressions: > 0 {_col1} {_col4} {_col5} > 1 {_col1} {_col3} {_col6} {_col7} > {_col9} {_col10} > keys: > 0 _col2 (type: int) > 1 _col4 (type: int) > outputColumnNames: _col1, _col4, _col5, > _col11, _col13, _col16, _col17, _col19, _col20 > input vertices: > 0 Map 5 > Statistics: Num rows: 1259845072505 > Data size: 137323112903045 Basic stats: COMPLETE Column stats: COMPLETE > Filter Operator > predicate: (_col11 <> _col1) (type: > boolean) > Statistics: Num rows: 1259845072505 > Data size: 137323112903045 Basic stats: COMPLETE Column stats: COMPLETE > Select Operator > expressions: _col13 (type: int), > _col16 (type: int), _col17 (type: int), _col19 (type: int), _col20 (type: > int), _col4 (type: int), _col5 (type: int) > outputColumnNames: _col13, _col16, > _col17, _col19, _col20, _col4, _col5 > Statistics: Num rows: 1259845072505 > Data size: 30236281740120 Basic stats: COMPLETE Column stats: COMPLETE > Map Join Operator > condition map: > Inner Join 0 to 1 > condition expressions: > 0 > 1 {_col4} {_col5} {_col13} > {_col16} {_col17} {_col19} > keys: > 0 _col0 (type: int), _col1 > (type: int) > 1 _col13 (type: int), _col20 > (type: int) > outputColumnNames: _col6, _col7, > _col15, _col18, _col19, _col21 > input vertices: > 0 Map 8 > Statistics: Num rows: > 102517810489 Data size: 2050356209780 Basic stats: COMPLETE Column stats: > COMPLETE > Select Operator > expressions: _col15 (type: > int), _col6 (type: 
int), _col7 (type: int), _col18 (type: int), _col19 (type: > int), _col21 (type: int) > outputColumnNames: _col0, > _col13, _col14, _col3, _col4, _col6 > Statistics: Num rows: > 102517810489 Data size: 2050356209780 Basic stats: COMPLETE Column stats: > COMPLETE > Map Join Operator > condition map: > Inner Join 0 to 1 > condition expressions: > 0 > 1 {_col0} {_col3} {_col6} > {_col13} {_col14} > keys: > 0 _col0 (type: int) > 1 _col4 (type: int) > outputColumnNames: _col1, > _col4, _col7, _col14, _col15 > input vertices: > 0 Map 15 > Statistics: Num rows: > 13141203075020 Data size: 210259249200320 Basic stats: COMPLETE Column stats: > COMPLETE > Select Operator > expressions: _col1 (type: > int), _col14 (type: int), _col15 (type: int), _col4 (type: int), _col7 (type: > int) > outputColumnNames: _col1, > _col14, _col15, _col4, _col7 > Statistics: Num rows: > 13141203075020 Data size: 210259249200320 Basic stats: COMPLETE Column stats: > COMPLETE > Map Join Operator > condition map: > Inner Join 0 to 1 > condition expressions: > 0 > 1 {_col1} {_col4} > {_col7} {_col14} > keys: > 0 _col0 (type: int) > 1 _col15 (type: int) > outputColumnNames: _col2, > _col5, _col8, _col15 > input vertices: > 0 Map 19 > Statistics: Num rows: > 239649914744597 Data size: 2875798976935164 Basic stats: COMPLETE Column > stats: COMPLETE > Map Join Operator > condition map: > Inner Join 0 to 1 > condition expressions: > 0 {_col5} {_col8} > {_col15} > 1 > keys: > 0 _col2 (type: int) > 1 _col0 (type: int) > outputColumnNames: > _col5, _col8, _col15 > input vertices: > 1 Reducer 13 > Statistics: Num rows: > 239649914744597 Data size: 1917199317956776 Basic stats: COMPLETE Column > stats: COMPLETE > Select Operator > expressions: _col15 > (type: int), _col5 (type: int), _col8 (type: int) > outputColumnNames: > _col15, _col5, _col8 > Statistics: Num rows: > 239649914744597 Data size: 1917199317956776 Basic stats: COMPLETE Column > stats: COMPLETE > Map Join Operator > condition map: > Inner Join 
0 > to 1 > condition > expressions: > 0 > 1 {_col5} {_col15} > keys: > 0 _col0 (type: > int) > 1 _col8 (type: > int) > outputColumnNames: > _col6, _col16 > input vertices: > 0 Map 18 > Statistics: Num > rows: 6740153852191791 Data size: 26960615408767164 Basic stats: COMPLETE > Column stats: COMPLETE > Select Operator > expressions: > _col16 (type: int), _col6 (type: int) > > outputColumnNames: _col16, _col6 > Statistics: Num > rows: 6740153852191791 Data size: 26960615408767164 Basic stats: COMPLETE > Column stats: COMPLETE > Map Join Operator > condition map: > Inner Join > 0 to 1 > condition > expressions: > 0 > 1 {_col16} > keys: > 0 _col0 > (type: int) > 1 _col6 > (type: int) > > outputColumnNames: _col19 > input vertices: > 0 Map 6 > Statistics: Num > rows: 82196998197460864 Data size: 0 Basic stats: PARTIAL Column stats: > COMPLETE > Select Operator > expressions: > _col19 (type: int) > > outputColumnNames: _col19 > Statistics: > Num rows: 82196998197460864 Data size: 0 Basic stats: PARTIAL Column stats: > COMPLETE > Reduce Output > Operator > key > expressions: _col19 (type: int) > sort order: > + > Map-reduce > partition columns: _col19 (type: int) > Statistics: > Num rows: 82196998197460864 Data size: 0 Basic stats: PARTIAL Column stats: > COMPLETE > Execution mode: vectorized > Map 5 > Map Operator Tree: > TableScan > alias: cd1 > filterExpr: cd_demo_sk is not null (type: boolean) > Statistics: Num rows: 1920800 Data size: 718379200 Basic > stats: COMPLETE Column stats: COMPLETE > Filter Operator > predicate: cd_demo_sk is not null (type: boolean) > Statistics: Num rows: 1920800 Data size: 170951200 Basic > stats: COMPLETE Column stats: COMPLETE > Select Operator > expressions: cd_demo_sk (type: int), cd_marital_status > (type: string) > outputColumnNames: _col0, _col1 > Statistics: Num rows: 1920800 Data size: 170951200 > Basic stats: COMPLETE Column stats: COMPLETE > Map Join Operator > condition map: > Inner Join 0 to 1 > condition expressions: > 0 
{_col1} > 1 {_col0} {_col2} {_col3} > keys: > 0 _col0 (type: int) > 1 _col1 (type: int) > outputColumnNames: _col1, _col2, _col4, _col5 > input vertices: > 1 Map 11 > Statistics: Num rows: 3675622 Data size: 44107464 > Basic stats: COMPLETE Column stats: COMPLETE > Reduce Output Operator > key expressions: _col2 (type: int) > sort order: + > Map-reduce partition columns: _col2 (type: int) > Statistics: Num rows: 3675622 Data size: 44107464 > Basic stats: COMPLETE Column stats: COMPLETE > value expressions: _col1 (type: string), _col4 > (type: int), _col5 (type: int) > Execution mode: vectorized > Map 6 > Map Operator Tree: > TableScan > alias: hd1 > filterExpr: (hd_income_band_sk is not null and hd_demo_sk > is not null) (type: boolean) > Statistics: Num rows: 7200 Data size: 770400 Basic stats: > COMPLETE Column stats: COMPLETE > Filter Operator > predicate: (hd_income_band_sk is not null and hd_demo_sk > is not null) (type: boolean) > Statistics: Num rows: 7200 Data size: 57600 Basic stats: > COMPLETE Column stats: COMPLETE > Select Operator > expressions: hd_demo_sk (type: int), hd_income_band_sk > (type: int) > outputColumnNames: _col0, _col1 > Statistics: Num rows: 7200 Data size: 57600 Basic > stats: COMPLETE Column stats: COMPLETE > Map Join Operator > condition map: > Inner Join 0 to 1 > condition expressions: > 0 {_col0} > 1 > keys: > 0 _col1 (type: int) > 1 _col0 (type: int) > outputColumnNames: _col0 > input vertices: > 1 Map 21 > Statistics: Num rows: 8000 Data size: 32000 Basic > stats: COMPLETE Column stats: COMPLETE > Reduce Output Operator > key expressions: _col0 (type: int) > sort order: + > Map-reduce partition columns: _col0 (type: int) > Statistics: Num rows: 8000 Data size: 32000 Basic > stats: COMPLETE Column stats: COMPLETE > Execution mode: vectorized > Map 7 > Map Operator Tree: > TableScan > alias: ib1 > filterExpr: ib_income_band_sk is not null (type: boolean) > Statistics: Num rows: 20 Data size: 240 Basic stats: > COMPLETE Column 
stats: COMPLETE > Filter Operator > predicate: ib_income_band_sk is not null (type: boolean) > Statistics: Num rows: 20 Data size: 80 Basic stats: > COMPLETE Column stats: COMPLETE > Select Operator > expressions: ib_income_band_sk (type: int) > outputColumnNames: _col0 > Statistics: Num rows: 20 Data size: 80 Basic stats: > COMPLETE Column stats: COMPLETE > Reduce Output Operator > key expressions: _col0 (type: int) > sort order: + > Map-reduce partition columns: _col0 (type: int) > Statistics: Num rows: 20 Data size: 80 Basic stats: > COMPLETE Column stats: COMPLETE > Execution mode: vectorized > Map 8 > Map Operator Tree: > TableScan > alias: store_returns > filterExpr: (sr_item_sk is not null and sr_ticket_number is > not null) (type: boolean) > Statistics: Num rows: 55578005 Data size: 4155315616 Basic > stats: COMPLETE Column stats: COMPLETE > Filter Operator > predicate: (sr_item_sk is not null and sr_ticket_number > is not null) (type: boolean) > Statistics: Num rows: 55578005 Data size: 444624040 Basic > stats: COMPLETE Column stats: COMPLETE > Select Operator > expressions: sr_item_sk (type: int), sr_ticket_number > (type: int) > outputColumnNames: _col0, _col1 > Statistics: Num rows: 55578005 Data size: 444624040 > Basic stats: COMPLETE Column stats: COMPLETE > Reduce Output Operator > key expressions: _col0 (type: int), _col1 (type: int) > sort order: ++ > Map-reduce partition columns: _col0 (type: int), > _col1 (type: int) > Statistics: Num rows: 55578005 Data size: 444624040 > Basic stats: COMPLETE Column stats: COMPLETE > Execution mode: vectorized > Map 9 > Map Operator Tree: > TableScan > alias: store > filterExpr: s_store_sk is not null (type: boolean) > Statistics: Num rows: 212 Data size: 405680 Basic stats: > COMPLETE Column stats: COMPLETE > Filter Operator > predicate: s_store_sk is not null (type: boolean) > Statistics: Num rows: 212 Data size: 848 Basic stats: > COMPLETE Column stats: COMPLETE > Select Operator > expressions: s_store_sk 
(type: int) > outputColumnNames: _col0 > Statistics: Num rows: 212 Data size: 848 Basic stats: > COMPLETE Column stats: COMPLETE > Reduce Output Operator > key expressions: _col0 (type: int) > sort order: + > Map-reduce partition columns: _col0 (type: int) > Statistics: Num rows: 212 Data size: 848 Basic stats: > COMPLETE Column stats: COMPLETE > Execution mode: vectorized > Reducer 13 > Reduce Operator Tree: > Group By Operator > aggregations: sum(VALUE._col0), sum(VALUE._col1) > keys: KEY._col0 (type: int) > mode: mergepartial > outputColumnNames: _col0, _col1, _col2 > Statistics: Num rows: 14754 Data size: 354096 Basic stats: > COMPLETE Column stats: COMPLETE > Filter Operator > predicate: (_col1 > (UDFToDouble(2) * _col2)) (type: > boolean) > Statistics: Num rows: 4918 Data size: 118032 Basic stats: > COMPLETE Column stats: COMPLETE > Select Operator > expressions: _col0 (type: int) > outputColumnNames: _col0 > Statistics: Num rows: 4918 Data size: 39344 Basic stats: > COMPLETE Column stats: COMPLETE > Reduce Output Operator > key expressions: _col0 (type: int) > sort order: + > Map-reduce partition columns: _col0 (type: int) > Statistics: Num rows: 4918 Data size: 39344 Basic > stats: COMPLETE Column stats: COMPLETE > Execution mode: vectorized > Reducer 17 > Reduce Operator Tree: > Group By Operator > aggregations: count(VALUE._col0) > mode: mergepartial > outputColumnNames: _col0 > Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE > Column stats: NONE > Select Operator > expressions: _col0 (type: bigint) > outputColumnNames: _col0 > Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE > Column stats: NONE > File Output Operator > compressed: false > Statistics: Num rows: 1 Data size: 8 Basic stats: > COMPLETE Column stats: NONE > table: > input format: org.apache.hadoop.mapred.TextInputFormat > output format: > org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat > serde: > org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe > Execution 
mode: vectorized > Stage: Stage-0 > Fetch Operator > limit: -1 > Processor Tree: > ListSink > Time taken: 12.6 seconds, Fetched: 738 row(s)
> {code}
> Looks like an overflow is happening: the key count estimated from statistics gets clamped to Integer.MAX_VALUE, and then nextHighestPowerOfTwo overflows to Integer.MIN_VALUE:
> {code}
> 2014-10-13 23:18:08,215 INFO [TezChild] org.apache.hadoop.hive.ql.exec.persistence.HashMapWrapper: Key count from statistics is 82196998197460864; setting map size to 2147483647
> 2014-10-13 23:18:08,215 INFO [TezChild] org.apache.hadoop.hive.ql.exec.persistence.BytesBytesMultiHashMap: initialCapacity in :2147483647
> 2014-10-13 23:18:08,215 INFO [TezChild] org.apache.hadoop.hive.ql.exec.persistence.BytesBytesMultiHashMap: initialCapacity out :-2147483648
> {code}

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
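The overflow in the log excerpt can be reproduced in isolation. The sketch below is an assumption about the mechanism, not the actual Hive source: `clampKeyCount` and `nextHighestPowerOfTwo` here are stand-ins for whatever HashMapWrapper and BytesBytesMultiHashMap do internally, using the standard bit-twiddling next-power-of-two idiom. Under that assumption it yields exactly the in/out capacities seen in the log.

```java
public class CapacityOverflowDemo {

    // Mirrors the "setting map size to 2147483647" log line: a huge
    // statistics-based key count is clamped into an int.
    static int clampKeyCount(long keyCount) {
        return keyCount > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) keyCount;
    }

    // A common next-power-of-two implementation (hypothetical stand-in for
    // the method named in the comment). For v = Integer.MAX_VALUE the shift
    // amount is 31, so the result is 1 << 31 == Integer.MIN_VALUE: the
    // capacity wraps to a negative number instead of rounding up.
    static int nextHighestPowerOfTwo(int v) {
        return 1 << (32 - Integer.numberOfLeadingZeros(v - 1));
    }

    public static void main(String[] args) {
        long statsKeyCount = 82196998197460864L;  // value from the log
        int in = clampKeyCount(statsKeyCount);
        int out = nextHighestPowerOfTwo(in);
        System.out.println("initialCapacity in :" + in);    // 2147483647
        System.out.println("initialCapacity out :" + out);  // -2147483648
    }
}
```

The root cause, under this reading, is the pair of narrowing steps: the 82-quadrillion-row estimate (itself a symptom of the PARTIAL/NONE column stats visible in the plan) is first clamped to `Integer.MAX_VALUE`, and the subsequent power-of-two rounding has no representable answer in a signed 32-bit int, so it silently wraps negative.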