[ https://issues.apache.org/jira/browse/HIVE-19772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501350#comment-16501350 ]
Prasanth Jayachandran commented on HIVE-19772:
----------------------------------------------

With the .2 patch, I am able to hit ~40-45 million rows/sec (64 threads writing to 64 static partitions, each committing after 100000 rows) using the test app [https://github.com/prasanthj/culvert/]
{code:java}
./culvert -u thrift://localhost:9183 -db prasanth -table culvert -p 64 -n 100000

hive> select count(*) from culvert;
OK
44700000
Time taken: 0.26 seconds, Fetched: 1 row(s)
{code}
The destination table schema is (from [https://yahooeng.tumblr.com/post/135321837876/benchmarking-streaming-computation-engines-at]):
{code:java}
create table if not exists culvert (
  user_id string,
  page_id string,
  ad_id string,
  ad_type string,
  event_type string,
  event_time string,
  ip_address string)
partitioned by (year int, month int)
stored as orc
tblproperties ("transactional"="true");
{code}

> Streaming ingest V2 API can generate invalid orc file if interrupted
> --------------------------------------------------------------------
>
>                 Key: HIVE-19772
>                 URL: https://issues.apache.org/jira/browse/HIVE-19772
>             Project: Hive
>          Issue Type: Bug
>          Components: Transactions
>    Affects Versions: 3.1.0, 3.0.1, 4.0.0
>            Reporter: Gopal V
>            Assignee: Prasanth Jayachandran
>            Priority: Critical
>         Attachments: HIVE-19772.1.patch, HIVE-19772.2.patch
>
> Hive streaming ingest generated 0-length and 3-byte files, which are invalid ORC files.
> This will throw the following exception during compaction:
> {code}
> Error: org.apache.orc.FileFormatException: Not a valid ORC file hdfs://cn105-10.l42scl.hortonworks.com:8020/apps/hive/warehouse/culvert/year=2018/month=7/delta_0000025_0000025/bucket_00005 (maxFileLength= 3)
> 	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:546)
> 	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
> 	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:60)
> 	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:90)
> 	at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.<init>(OrcRawRecordMerger.java:1124)
> 	at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRawReader(OrcInputFormat.java:2373)
> 	at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:1000)
> 	at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:977)
> 	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
> 	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:460)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:344)
> 	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
> 	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> {code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
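For context on the failure above: an ORC file begins with the 3-byte magic "ORC" and must also contain data stripes, a footer, and a postscript at the tail, which is what ReaderImpl.extractFileTail goes looking for. A 0-byte file, or a 3-byte file holding only the header magic (the maxFileLength= 3 case in the stack trace), can never satisfy that. A minimal sketch of such a sanity check (the class and method names here are illustrative, not part of Hive or the ORC library):
{code:java}
import java.nio.charset.StandardCharsets;

public class OrcSanityCheck {
    // ORC files start with the 3-byte ASCII magic "ORC".
    static final byte[] ORC_MAGIC = "ORC".getBytes(StandardCharsets.US_ASCII);

    // Returns false for files that cannot possibly be complete ORC files:
    // anything no longer than the magic alone (covers the 0-byte and 3-byte
    // bucket files from this bug), or anything not starting with "ORC".
    // This is a necessary condition only; it does not validate the footer.
    static boolean isPlausiblyCompleteOrcFile(byte[] contents) {
        if (contents.length <= ORC_MAGIC.length) {
            return false;
        }
        for (int i = 0; i < ORC_MAGIC.length; i++) {
            if (contents[i] != ORC_MAGIC[i]) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // 0-length delta file: rejected
        System.out.println(isPlausiblyCompleteOrcFile(new byte[0]));
        // 3-byte file containing only the magic: rejected
        System.out.println(isPlausiblyCompleteOrcFile(ORC_MAGIC));
        // Magic followed by further bytes: passes this cheap check
        System.out.println(isPlausiblyCompleteOrcFile(
                "ORCmorebytes".getBytes(StandardCharsets.US_ASCII)));
    }
}
{code}
A check along these lines could let a compactor skip obviously-truncated bucket files instead of failing the whole job, though the real fix is to stop the interrupted writer from leaving such files behind.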