I had to resort to a shell script; the crux of it was something like this:

--
echo "Getting the DB_ID for the source database:"
source_db_id=`mysql -u $mysql_user -h $mysql_host -p$mysql_passwd $mysql_db -e \
  "select DBS.DB_ID from TBLS, DBS where TBLS.TBL_NAME=\"$tbl_name\" \
   and TBLS.DB_ID = DBS.DB_ID and DBS.NAME=\"$source_db\";" | grep -v DB_ID`

echo "Getting the DB_ID for the target database:"
target_db_id=`mysql -u $mysql_user -h $mysql_host -p$mysql_passwd $mysql_db -e \
  "select DBS.DB_ID from DBS where DBS.NAME=\"$target_db\";" | grep -v DB_ID`

echo "Migrating table to the $target_db database:"
mysql -u $mysql_user -h $mysql_host -p$mysql_passwd $mysql_db -e \
  "update TBLS set DB_ID=$target_db_id where TBL_NAME=\"$tbl_name\" \
   and DB_ID=$source_db_id;"
--

The parameters source_db and target_db are the names of the source and target
databases respectively, and tbl_name is the name of the table to migrate. I
should note that this worked for us on Hive 0.7.1, and we were using a MySQL
database for the metastore.

It's certainly hacky, but it did the job. Before you actually run the update in
the last step, you may want to test it thoroughly :).
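A minimal sketch of such a check (the table name and DB_ID value below are
hypothetical, and the commented-out mysql call assumes the same $mysql_*
variables and Hive 0.7.x metastore schema as the script above):

```shell
# Hypothetical dry run: build and inspect the query that lists the rows
# the final UPDATE would touch, before actually changing anything.
tbl_name="my_table"   # hypothetical table name
source_db_id=42       # value returned by the first query in the script
check_sql="SELECT TBL_ID, TBL_NAME, DB_ID FROM TBLS WHERE TBL_NAME='${tbl_name}' AND DB_ID=${source_db_id};"
echo "$check_sql"
# mysql -u "$mysql_user" -h "$mysql_host" -p"$mysql_passwd" "$mysql_db" -e "$check_sql"
```

If that query returns exactly one row, the UPDATE should be safe to run.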

Sriram

From: hadoop hive <hadooph...@gmail.com>
Reply-To: <user@hive.apache.org>
Date: Tue, 31 Jan 2012 11:19:14 +0530
To: <user@hive.apache.org>
Subject: Re: move tables into different database

Thanks All,

Sriram, can you explain the shell script that directly updates the metadata?

Actually, the data is quite large, with many tables, so it's not feasible to
move the data and recreate each table in a different database.

The tables also have some partitions.

Regards
Vikas Srivastava

On Tue, Jan 31, 2012 at 7:22 AM, Matt Tucker <matthewt...@gmail.com> wrote:
Someone recently posted a command to convert a managed table to an external 
table. If you also use that command, there's no expensive copy operation.

You'd probably want to move the data into a different HDFS directory though, to
keep your namespace as consistent as possible.
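A rough sketch of that approach (the table name and warehouse paths below are
made up for illustration; the hive and hadoop invocations are left commented
out, and you'd want to check your table's actual location first):

```shell
# Sketch: flip the table to EXTERNAL so a later DROP TABLE won't delete its
# files, then move the data directory in HDFS to match the new database.
tbl="web_logs"   # hypothetical table name
alter_cmd="ALTER TABLE old_db.${tbl} SET TBLPROPERTIES('EXTERNAL'='TRUE');"
echo "$alter_cmd"
# hive -e "$alter_cmd"
# hadoop fs -mv /user/hive/warehouse/old_db.db/${tbl} /user/hive/warehouse/new_db.db/${tbl}
```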

Matt



On Jan 30, 2012, at 8:48 PM, Sriram Krishnan <skrish...@netflix.com> wrote:

You could do that if you don't mind moving the data (which may be expensive
depending on how much data you have). Also, if your table is partitioned, you
may have to do an additional step to recover the partitions.
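For that partition-recovery step, one option is MSCK REPAIR TABLE, which scans
the table's HDFS location and registers any partition directories missing from
the metastore (the table name here is hypothetical, and availability of the
command may depend on your Hive version):

```shell
# Hypothetical partition-recovery step after moving a partitioned table.
repair_cmd="MSCK REPAIR TABLE new_db.web_logs;"
echo "$repair_cmd"
# hive -e "$repair_cmd"
```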

Sriram

From: Aniket Mokashi <aniket...@gmail.com>
Reply-To: <user@hive.apache.org>
Date: Mon, 30 Jan 2012 16:32:03 -0800
To: <user@hive.apache.org>
Subject: Re: move tables into different database

If you are on HDFS, how about:

use db1;
create table table1 like db2.table1;

and move the data?
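Put together as one sketch (db1, db2, and table1 are from the statements
above; the warehouse paths are assumptions, so check your tables' actual
locations before moving anything):

```shell
# Recreate the table definition in db1, then move the data files in HDFS.
hiveql="USE db1; CREATE TABLE table1 LIKE db2.table1;"
echo "$hiveql"
# hive -e "$hiveql"
# hadoop fs -mv /user/hive/warehouse/db2.db/table1/* /user/hive/warehouse/db1.db/table1/
```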

Thanks,
Aniket

On Mon, Jan 30, 2012 at 8:09 AM, Sriram Krishnan <skrish...@netflix.com> wrote:
AFAIK there is no way in HiveQL to do this. We had a similar requirement in the 
past, and we wrote a shell script to update the MySQL metastore directly 
(obviously not the cleanest or recommended way to go).

Cheers,
Sriram

From: hadoop hive <hadooph...@gmail.com>
Reply-To: <user@hive.apache.org>
Date: Mon, 30 Jan 2012 20:05:13 +0530
To: <user@hive.apache.org>
Subject: move tables into different database

Hey folks,

Is it possible to move tables into a different Hive database?

regards
Vikas Srivastava

--
"...:::Aniket:::... Quetzalco@tl"
