[ https://issues.apache.org/jira/browse/HIVE-20377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16581731#comment-16581731 ]
Hive QA commented on HIVE-20377:
--------------------------------

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
|| || || || master Compile Tests ||
| 0 | mvndep | 0m 36s | Maven dependency ordering for branch |
| +1 | mvninstall | 8m 11s | master passed |
| +1 | compile | 9m 24s | master passed |
| +1 | checkstyle | 2m 8s | master passed |
| 0 | findbugs | 0m 21s | itests/qtest-druid in master has 6 extant Findbugs warnings. |
| 0 | findbugs | 0m 50s | itests/util in master has 52 extant Findbugs warnings. |
| 0 | findbugs | 0m 46s | llap-server in master has 84 extant Findbugs warnings. |
| 0 | findbugs | 4m 7s | ql in master has 2305 extant Findbugs warnings. |
| +1 | javadoc | 9m 29s | master passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 7s | Maven dependency ordering for patch |
| +1 | mvninstall | 11m 10s | the patch passed |
| +1 | compile | 9m 55s | the patch passed |
| +1 | javac | 9m 55s | the patch passed |
| -1 | checkstyle | 0m 13s | llap-server: The patch generated 1 new + 26 unchanged - 4 fixed = 27 total (was 30) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 5s | The patch has no ill-formed XML file. |
| -1 | findbugs | 0m 21s | patch/itests/qtest-druid cannot run setBugDatabaseInfo from findbugs |
| -1 | findbugs | 0m 47s | patch/itests/util cannot run setBugDatabaseInfo from findbugs |
| -1 | findbugs | 0m 23s | patch/kafka-handler cannot run setBugDatabaseInfo from findbugs |
| -1 | findbugs | 0m 43s | patch/llap-server cannot run setBugDatabaseInfo from findbugs |
| -1 | findbugs | 7m 6s | patch/ql cannot run setBugDatabaseInfo from findbugs |
| -1 | javadoc | 5m 41s | root in the patch failed. |
|| || || || Other Tests ||
| +1 | asflicense | 0m 17s | The patch does not generate ASF License warnings. |
| | | 80m 59s | |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc xml compile findbugs checkstyle |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13246/dev-support/hive-personality.sh |
| git revision | master / b7b5cb4 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13246/yetus/diff-checkstyle-llap-server.txt |
| findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13246/yetus/patch-findbugs-itests_qtest-druid.txt |
| findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13246/yetus/patch-findbugs-itests_util.txt |
| findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13246/yetus/patch-findbugs-kafka-handler.txt |
| findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13246/yetus/patch-findbugs-llap-server.txt |
| findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13246/yetus/patch-findbugs-ql.txt |
| javadoc | http://104.198.109.242/logs//PreCommit-HIVE-Build-13246/yetus/patch-javadoc-root.txt |
| modules | C: . itests itests/qtest itests/qtest-druid itests/util kafka-handler llap-server packaging ql U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13246/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.

> Hive Kafka Storage Handler
> --------------------------
>
> Key: HIVE-20377
> URL: https://issues.apache.org/jira/browse/HIVE-20377
> Project: Hive
> Issue Type: New Feature
> Affects Versions: 4.0.0
> Reporter: slim bouguerra
> Assignee: slim bouguerra
> Priority: Major
> Attachments: HIVE-20377.4.patch, HIVE-20377.5.patch, HIVE-20377.6.patch, HIVE-20377.8.patch, HIVE-20377.8.patch, HIVE-20377.patch
>
> h1. Goal
> * Read streaming data from Kafka as an external table.
> * Allow streaming navigation by pushing down filters on the Kafka record partition id, offset, and timestamp.
> * Insert streaming data from Kafka into an actual Hive internal table using a CTAS statement (see the sketch after this list).
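>
> A minimal sketch of the CTAS-style ingest mentioned above, assuming the kafka_table definition from the example below; the target table name (wiki_edits_snapshot), the ORC storage format, and the 24-hour window are illustrative and not part of the patch:
> {code}
> -- Hypothetical: materialize recent Kafka records into a managed Hive table via CTAS.
> CREATE TABLE wiki_edits_snapshot
> STORED AS ORC
> AS
> SELECT `timestamp`, page, `user`, added, deleted,
>        `__partition`, `__offset`, `__timestamp`
> FROM kafka_table
> WHERE `__timestamp` > 1000 * to_unix_timestamp(CURRENT_TIMESTAMP - interval '24' hours);
> {code}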
> h1. Example
> h2. Create the external table
> {code}
> CREATE EXTERNAL TABLE kafka_table (`timestamp` timestamp, page string, `user` string,
>   language string, added int, deleted int, flags string, comment string, namespace string)
> STORED BY 'org.apache.hadoop.hive.kafka.KafkaStorageHandler'
> TBLPROPERTIES (
>   "kafka.topic" = "wikipedia",
>   "kafka.bootstrap.servers" = "brokeraddress:9092",
>   "kafka.serde.class" = "org.apache.hadoop.hive.serde2.JsonSerDe");
> {code}
> h2. Kafka Metadata
> In order to keep track of Kafka records, the storage handler automatically adds the Kafka row metadata, e.g. the partition id, record offset, and record timestamp.
> {code}
> DESCRIBE EXTENDED kafka_table
> timestamp      timestamp   from deserializer
> page           string      from deserializer
> user           string      from deserializer
> language       string      from deserializer
> country        string      from deserializer
> continent      string      from deserializer
> namespace      string      from deserializer
> newpage        boolean     from deserializer
> unpatrolled    boolean     from deserializer
> anonymous      boolean     from deserializer
> robot          boolean     from deserializer
> added          int         from deserializer
> deleted        int         from deserializer
> delta          bigint      from deserializer
> __partition    int         from deserializer
> __offset       bigint      from deserializer
> __timestamp    bigint      from deserializer
> {code}
> h2. Filter push down
> Newer Kafka consumers (0.11.0 and higher) allow seeking on the stream based on a given offset. The proposed storage handler will be able to leverage this API by pushing down filters over the metadata columns, namely __partition (int), __offset (long), and __timestamp (long).
> For instance, a query like
> {code}
> select `__offset` from kafka_table
> where (`__offset` < 10 and `__offset` > 3 and `__partition` = 0)
>    or (`__partition` = 0 and `__offset` < 105 and `__offset` > 99)
>    or (`__offset` = 109);
> {code}
> will result in a scan of partition 0 only, reading only records with offsets between 4 and 109.
> h2. With timestamp seeks
> Seeking based on the internal timestamps allows the handler to run on recently arrived data only, for example:
> {code}
> select count(*) from kafka_table
> where `__timestamp` > 1000 * to_unix_timestamp(CURRENT_TIMESTAMP - interval '20' hours);
> {code}
> This allows implicit relationships between event timestamps and Kafka timestamps to be expressed in queries (e.g. the event timestamp is always less than the Kafka __timestamp, and the Kafka __timestamp is never more than 15 minutes after the event, etc.).
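>
> As a hedged illustration of that last point (the two-hour event window and the extra one-hour margin are made-up numbers, not from the patch), a query can filter on the event-time column while adding a slightly wider, redundant bound on the Kafka __timestamp so the seek can still be pushed down:
> {code}
> -- Hypothetical: event-time filter plus a redundant, wider __timestamp bound for seeking.
> select count(*) from kafka_table
> where `timestamp` > CURRENT_TIMESTAMP - interval '2' hours
>   and `__timestamp` > 1000 * to_unix_timestamp(CURRENT_TIMESTAMP - interval '3' hours);
> {code}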