[ https://issues.apache.org/jira/browse/HIVE-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13511199#comment-13511199 ]
Hudson commented on HIVE-3645:
------------------------------

Integrated in Hive-0.9.1-SNAPSHOT-h0.21 #219 (See [https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/219/])
HIVE-3645 : RCFileWriter does not implement the right function to support Federation (Arup Malakar via Ashutosh Chauhan) (Revision 1417220)

Result = FAILURE
hashutosh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1417220
Files :
* /hive/branches/branch-0.9/ql/src/java/org/apache/hadoop/hive/ql/io/RCFile.java
* /hive/branches/branch-0.9/shims/src/0.20/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java
* /hive/branches/branch-0.9/shims/src/0.20S/java/org/apache/hadoop/hive/shims/Hadoop20SShims.java
* /hive/branches/branch-0.9/shims/src/0.23/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java
* /hive/branches/branch-0.9/shims/src/common-secure/java/org/apache/hadoop/hive/shims/HadoopShimsSecure.java
* /hive/branches/branch-0.9/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java
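The list of touched shim files points at the shape of the change: instead of calling FileSystem's no-argument getDefaultReplication() directly, RCFile asks the shim layer, and each Hadoop version's shim decides whether a path-aware call is available. The snippet below is only a minimal sketch of that pattern; the method name and exact signatures here are assumptions, and the authoritative change is the attached patches / revision 1417220.

{code}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Common interface (cf. shims/src/common/.../HadoopShims.java): callers such as
// RCFile.Writer go through the shim rather than FileSystem directly.
interface HadoopShims {
  short getDefaultReplication(FileSystem fs, Path path);
}

// Hadoop 0.20 has no path-aware API, so its shim keeps the old call
// (cf. shims/src/0.20/.../Hadoop20Shims.java).
class Hadoop20Shims implements HadoopShims {
  public short getDefaultReplication(FileSystem fs, Path path) {
    return fs.getDefaultReplication();
  }
}

// Hadoop 0.23 can resolve the path, which is what viewfs (federation) requires
// (cf. shims/src/0.23/.../Hadoop23Shims.java).
class Hadoop23Shims implements HadoopShims {
  public short getDefaultReplication(FileSystem fs, Path path) {
    return fs.getDefaultReplication(path);
  }
}
{code}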
> RCFileWriter does not implement the right function to support Federation
> ------------------------------------------------------------------------
>
>                 Key: HIVE-3645
>                 URL: https://issues.apache.org/jira/browse/HIVE-3645
>             Project: Hive
>          Issue Type: Bug
>          Components: Serializers/Deserializers
>    Affects Versions: 0.9.0, 0.10.0
>         Environment: Hadoop 0.23.3 federation, Hive 0.9 and Pig 0.10
>            Reporter: Viraj Bhat
>            Assignee: Arup Malakar
>             Fix For: 0.11
>
>         Attachments: HIVE_3645_branch_0.patch, HIVE_3645_trunk_0.patch
>
>
> Create a table using Hive DDL:
> {code}
> CREATE TABLE tmp_hcat_federated_numbers_part_1 (
>   id int,
>   intnum int,
>   floatnum float
> ) partitioned by (
>   part1 string,
>   part2 string
> )
> STORED AS rcfile
> LOCATION 'viewfs:///database/tmp_hcat_federated_numbers_part_1';
> {code}
> Populate it using Pig:
> {code}
> A = load 'default.numbers_pig' using org.apache.hcatalog.pig.HCatLoader();
> B = filter A by id <= 500;
> C = foreach B generate (int)id, (int)intnum, (float)floatnum;
> store C into 'default.tmp_hcat_federated_numbers_part_1'
>     using org.apache.hcatalog.pig.HCatStorer
>     ('part1=pig, part2=hcat_pig_insert',
>      'id: int,intnum: int,floatnum: float');
> {code}
> Generates the following error when running on a Federated Cluster:
> {quote}
> 2012-10-29 20:40:25,011 [main] ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR 2997: Unable to recreate exception from backed error: AttemptID:attempt_1348522594824_0846_m_000000_3
> Info:Error: org.apache.hadoop.fs.viewfs.NotInMountpointException: getDefaultReplication on empty path is invalid
>   at org.apache.hadoop.fs.viewfs.ViewFileSystem.getDefaultReplication(ViewFileSystem.java:479)
>   at org.apache.hadoop.hive.ql.io.RCFile$Writer.<init>(RCFile.java:723)
>   at org.apache.hadoop.hive.ql.io.RCFile$Writer.<init>(RCFile.java:705)
>   at org.apache.hadoop.hive.ql.io.RCFileOutputFormat.getRecordWriter(RCFileOutputFormat.java:86)
>   at org.apache.hcatalog.mapreduce.FileOutputFormatContainer.getRecordWriter(FileOutputFormatContainer.java:100)
>   at org.apache.hcatalog.mapreduce.HCatOutputFormat.getRecordWriter(HCatOutputFormat.java:228)
>   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.getRecordWriter(PigOutputFormat.java:84)
>   at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:587)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:706)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:157)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1212)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:152)
> {quote}

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
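For context on the quoted failure: ViewFileSystem (the viewfs:// client used with HDFS federation) has no single default replication, so its no-argument getDefaultReplication() throws NotInMountpointException; it needs a path it can map to a mount point and underlying namespace. The sketch below shows the path-aware call that avoids this, assuming Hadoop 0.23.3+ APIs; the helper class and method here are hypothetical illustrations, not the actual RCFile.Writer code from the patch.

{code}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class ReplicationExample {
  // 'fs' is the output FileSystem and 'name' is the RCFile being created.
  static short defaultReplication(FileSystem fs, Path name) {
    // fs.getDefaultReplication() (no argument) is the call seen in the trace:
    // on a federated cluster fs is a ViewFileSystem, which has no path to
    // resolve and throws NotInMountpointException.
    //
    // Passing the path lets viewfs map the call to the right mount point:
    return fs.getDefaultReplication(name);
  }
}
{code}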