[ 
https://issues.apache.org/jira/browse/HIVE-18623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yangfang updated HIVE-18623:
----------------------------
    Attachment: HIVE-18623.1.patch

> Hive throws an exception "Renames across Mount points not supported" when 
> running in a federated cluster
> --------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-18623
>                 URL: https://issues.apache.org/jira/browse/HIVE-18623
>             Project: Hive
>          Issue Type: Bug
>          Components: Query Processor
>    Affects Versions: 2.2.0, 2.3.0, 2.3.1, 2.3.2
>         Environment: hadoop 2.7.5, HDFS Federation enabled
> hive 3.0.0
>            Reporter: yangfang
>            Assignee: yangfang
>            Priority: Major
>         Attachments: HIVE-18623.1.patch
>
>
> I ran a SQL query in a federated cluster with two namespaces, nameservice 
> and nameservice1. I set hive.exec.stagingdir=/nameservice1/hive_tmp in 
> hive-site.xml while my data tables are located under nameservice, and I got 
> the exception shown in the session log below (a sketch of a cross-mount 
> fallback follows the log):
> hive> create external table test_par6(id int,name string) partitioned by(p int);
> OK
> Time taken: 1.527 seconds
> hive> insert into table test_par6 partition(p = 10000) values(1,'Jack');
> Moving data to directory viewfs://nsX/nameservice1/hive_tmp_hive_2018-02-05_14-09-36_416_3075179128063595297-1/-ext-10000
> Loading data to table default.test_par6 partition (p=10000)
> Failed with exception java.io.IOException: Renames across Mount points not supported
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. java.io.IOException: Renames across Mount points not supported
> MapReduce Jobs Launched: 
> Stage-Stage-1: Map: 1  Cumulative CPU: 2.08 sec  HDFS Read: 3930  HDFS Write: 7  SUCCESS
> Total MapReduce CPU Time Spent: 2 seconds 80 msec
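For context, the "Renames across Mount points not supported" IOException comes from Hadoop's ViewFileSystem, which by default refuses to rename a path from one mount point to another; because hive.exec.stagingdir resolves to nameservice1 while the table lives under nameservice, MoveTask's rename crosses mounts and fails. Below is a minimal, hedged sketch of one possible workaround, not the attached patch: fall back to FileUtil.copy with deleteSource=true when the rename fails. The class and method names (CrossMountMove, moveAcrossMounts) are illustrative only.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

// Hedged sketch, not the attached HIVE-18623.1.patch: move a staging
// directory to its final location, falling back to copy+delete when a
// ViewFS rename would cross mount points.
public class CrossMountMove {

  public static void moveAcrossMounts(Configuration conf, Path src, Path dst)
      throws IOException {
    FileSystem srcFs = src.getFileSystem(conf);
    FileSystem dstFs = dst.getFileSystem(conf);
    try {
      // Fast path: a plain rename succeeds when src and dst resolve to the
      // same namespace (same ViewFS mount point).
      if (srcFs.rename(src, dst)) {
        return;
      }
    } catch (IOException e) {
      // ViewFileSystem throws "Renames across Mount points not supported"
      // when the two paths resolve to different mounts; fall through to copy.
    }
    // Slow path: copy the data into the destination namespace, then delete
    // the source (deleteSource = true), preserving move semantics.
    if (!FileUtil.copy(srcFs, src, dstFs, dst,
        true /* deleteSource */, false /* overwrite */, conf)) {
      throw new IOException("Failed to move " + src + " to " + dst);
    }
  }
}

The copy-based fallback is slower than a rename (it rewrites the data rather than updating namespace metadata), which is why a rename is still attempted first.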



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
