[ https://issues.apache.org/jira/browse/HIVE-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13094294#comment-13094294 ]
Krishna Kumar commented on HIVE-2417:
-------------------------------------

Yes, the test is designed to produce the error when run without the change. Are you finding that that's not the case? I get an EOFException when running the same steps in my development environment (i.e., not as a unit test).

1. This is needed so that the rcfiles in the target table are compressed with Bzip2. Do you mean that we should be using the default compression codec instead? Fine with me, but why is that important?

2. tgt does contain more than one file.

[before alter]
+POSTHOOK: query: show table extended like `tgt_rc_merge_test`
...
+totalNumberFiles:2
...

[after alter]
+POSTHOOK: query: show table extended like `tgt_rc_merge_test`
...
+totalNumberFiles:1

The 'create' adds one file, and the insert adds another file. [OT: Does it make sense to append a block merge task after a non-overwrite insert? Dunno...]

> Merging of compressed rcfiles fails to write the valuebuffer part correctly
> ----------------------------------------------------------------------------
>
>                 Key: HIVE-2417
>                 URL: https://issues.apache.org/jira/browse/HIVE-2417
>             Project: Hive
>          Issue Type: Bug
>          Components: Query Processor
>            Reporter: Krishna Kumar
>            Assignee: Krishna Kumar
>         Attachments: HIVE-2417.v0.patch
>
>
> The blockmerge task does not create proper rc files when merging compressed rc files as the valuebuffer writing is incorrect.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
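
[Editor's note] For readers reproducing the scenario discussed in the comment above, the following is a minimal HiveQL sketch, not the exact .q test attached to the issue. It assumes a pre-existing source table named src (hypothetical here), and the compression properties use the era-appropriate mapred.* names; treat those names as assumptions rather than quotes from the test.

    -- Write bzip2-compressed output so the target rcfiles are compressed.
    SET hive.exec.compress.output=true;
    SET mapred.output.compression.codec=org.apache.hadoop.io.compress.BZip2Codec;
    SET mapred.output.compression.type=BLOCK;

    -- The CTAS writes the first file into the RCFile table...
    CREATE TABLE tgt_rc_merge_test STORED AS RCFILE
    AS SELECT key, value FROM src;

    -- ...and the non-overwrite insert adds a second file.
    INSERT INTO TABLE tgt_rc_merge_test SELECT key, value FROM src;

    -- Before the merge: expect totalNumberFiles:2 in the output.
    SHOW TABLE EXTENDED LIKE `tgt_rc_merge_test`;

    -- Triggers the block merge task that rewrites the rcfiles into one file.
    ALTER TABLE tgt_rc_merge_test CONCATENATE;

    -- After the merge: expect totalNumberFiles:1.
    SHOW TABLE EXTENDED LIKE `tgt_rc_merge_test`;

    -- Without the patch, reading the merged file back fails (e.g. with an
    -- EOFException) because the value buffer part was written incorrectly.
    SELECT COUNT(*) FROM tgt_rc_merge_test;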