[ https://issues.apache.org/jira/browse/HIVE-18976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16403131#comment-16403131 ]
Hive QA commented on HIVE-18976:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12914906/HIVE-18976.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9672/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9672/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9672/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-03-16 23:51:01.270
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-9672/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-03-16 23:51:01.274
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at e7480d7 HIVE-18633: Service discovery for Active/Passive HA mode (Prasanth Jayachandran reviewed by Sergey Shelukhin)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at e7480d7 HIVE-18633: Service discovery for Active/Passive HA mode (Prasanth Jayachandran reviewed by Sergey Shelukhin)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-03-16 23:51:04.500
+ rm -rf ../yetus_PreCommit-HIVE-Build-9672
+ mkdir ../yetus_PreCommit-HIVE-Build-9672
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-9672
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-9672/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch
error: a/common/src/java/org/apache/hadoop/hive/conf/Constants.java: does not exist in index
error: a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java: does not exist in index
error: a/druid-handler/pom.xml: does not exist in index
error: a/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandlerUtils.java: does not exist in index
error: a/itests/qtest-druid/pom.xml: does not exist in index
error: a/itests/qtest-druid/src/main/java/org/apache/hive/druid/MiniDruidCluster.java: does not exist in index
error: a/itests/util/src/main/java/org/apache/hadoop/hive/cli/control/CliConfigs.java: does not exist in index
error: a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java: does not exist in index
error: a/pom.xml: does not exist in index
Going to apply patch with: git apply -p1
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q -Dmaven.repo.local=/data/hiveptest/working/maven
[ERROR] Failed to execute goal on project hive-shims-common: Could not resolve dependencies for project org.apache.hive.shims:hive-shims-common:jar:3.0.0-SNAPSHOT: Could not find artifact commons-codec:commons-codec:jar:1.7 in datanucleus (http://www.datanucleus.org/downloads/maven2) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hive-shims-common
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12914906 - PreCommit-HIVE-Build

> Add ability to setup Druid Kafka Ingestion from Hive
> ----------------------------------------------------
>
>                 Key: HIVE-18976
>                 URL: https://issues.apache.org/jira/browse/HIVE-18976
>             Project: Hive
>          Issue Type: Bug
>          Components: Druid integration
>            Reporter: Nishant Bangarwa
>            Assignee: Nishant Bangarwa
>            Priority: Major
>         Attachments: HIVE-18976.patch
>
> Add the ability to set up Druid Kafka ingestion using a Hive CREATE TABLE statement.
> e.g. the query below can submit a Kafka supervisor spec to the Druid overlord, after which Druid can start ingesting events from Kafka.
> {code:java}
> CREATE TABLE druid_kafka_test(`__time` timestamp, page string, language string, `user` string, added int, deleted int, delta int)
> STORED BY 'org.apache.hadoop.hive.druid.DruidKafkaStreamingStorageHandler'
> TBLPROPERTIES (
>   "druid.segment.granularity" = "HOUR",
>   "druid.query.granularity" = "MINUTE",
>   "kafka.bootstrap.servers" = "localhost:9092",
>   "kafka.topic" = "test-topic",
>   "druid.kafka.ingest.useEarliestOffset" = "true"
> );
> {code}
> Design - this can be done via a DruidKafkaStreamingStorageHandler that extends the existing DruidStorageHandler and adds the additional functionality for streaming.
> Testing - add a DruidKafkaMiniCluster, which will consist of a DruidMiniCluster plus a single-node Kafka broker. The broker can be populated with a test topic that has some predefined data.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
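As a rough illustration of the design quoted above, a minimal standalone sketch of how such a storage handler might translate the table's TBLPROPERTIES into the ioConfig section of a Druid Kafka supervisor spec. The class and method names here are hypothetical and not taken from the attached patch; only the property keys come from the example table.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: map Hive TBLPROPERTIES onto the ioConfig section
// of a Druid Kafka supervisor spec. Names are illustrative only.
public class KafkaIngestionSpecSketch {

    // Build the ioConfig map from the table properties used in the example.
    static Map<String, Object> buildIoConfig(Map<String, String> tblProps) {
        Map<String, Object> ioConfig = new HashMap<>();
        ioConfig.put("type", "kafka");
        ioConfig.put("topic", tblProps.get("kafka.topic"));

        // Kafka consumer settings are passed through under consumerProperties.
        Map<String, String> consumerProps = new HashMap<>();
        consumerProps.put("bootstrap.servers", tblProps.get("kafka.bootstrap.servers"));
        ioConfig.put("consumerProperties", consumerProps);

        // Default to latest offsets when the table property is absent.
        ioConfig.put("useEarliestOffset",
            Boolean.parseBoolean(
                tblProps.getOrDefault("druid.kafka.ingest.useEarliestOffset", "false")));
        return ioConfig;
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("kafka.topic", "test-topic");
        props.put("kafka.bootstrap.servers", "localhost:9092");
        props.put("druid.kafka.ingest.useEarliestOffset", "true");

        Map<String, Object> io = buildIoConfig(props);
        System.out.println(io.get("topic"));             // test-topic
        System.out.println(io.get("useEarliestOffset")); // true
    }
}
```

In the real handler this map would presumably be serialized to JSON and POSTed to the Druid overlord's supervisor endpoint; that submission step is omitted here.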