Make sure your Hive metadata database is a separate one, and that the new one
has the tables pointing to the new cluster. I had this situation: "hive comes
up fine and shows tables etc., but the hive location is still pointing to the
old cluster", so all MR jobs for Hive queries were pulling data over the
network from the old cluster.
Thanks. I changed the DDL manually, changing bit to boolean, and it works.
2011/8/21, wd :
> You can try Hive 0.5; after creating the metadata, use the upgrade SQL
> file in Hive 0.7.1 to upgrade to 0.7.1.
>
> On Sat, Aug 20, 2011 at 2:20 PM, Xiaobo Gu wrote:
>> Hi,
>> I have just set up a PostgreSQL 9.0.2
You can try Hive 0.5; after creating the metadata, use the upgrade SQL file
in Hive 0.7.1 to upgrade to 0.7.1.
On Sat, Aug 20, 2011 at 2:20 PM, Xiaobo Gu wrote:
> Hi,
> I have just set up a PostgreSQL 9.0.2 server for the Hive 0.7.1 metastore,
> and I am using the postgresql-9.0-801.jdbc4.jar JDBC driver, …
Hi,
I have a custom InputFormat that reads multiple lines as one row, based on
the number of columns in a table.
I want to dynamically pass the table properties to it (like the number of
columns in the table, their data types, etc. — just like what you get in a
SerDe). How can I do that?
If that is not possible, …
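For what it's worth, the grouping itself is straightforward once the column count is known; the hard part is getting the properties into the InputFormat, since Hive hands table properties to the SerDe rather than the InputFormat. Below is a minimal Python sketch of just the grouping logic, under the assumption that a properties dict like the one a SerDe receives (`columns`, `columns.types`) is somehow available; a real implementation would live in a Hadoop RecordReader in Java:

```python
# Hypothetical sketch, not Hive API: group a stream of input lines into
# rows, one line per column, driven by SerDe-style table properties.
def group_lines_into_rows(lines, table_props):
    """table_props mimics the Properties a SerDe receives, e.g.
    {"columns": "id,name,price", "columns.types": "int,string,double"}."""
    num_columns = len(table_props["columns"].split(","))
    row = []
    for line in lines:
        row.append(line.rstrip("\n"))
        if len(row) == num_columns:
            yield row
            row = []

props = {"columns": "id,name,price", "columns.types": "int,string,double"}
lines = ["1", "widget", "9.99", "2", "gadget", "3.50"]
print(list(group_lines_into_rows(lines, props)))
# → [['1', 'widget', '9.99'], ['2', 'gadget', '3.50']]
```

Any trailing lines that do not fill a complete row are silently dropped here; a production reader would want to flag them instead.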
I did have the exact same scenario, but without partitioned tables. I took
the more hackish but definitive approach: I wrote a script to run "desc
formatted" for every table in every db, parse the output, replace the
name-node string, and re-fire the create table call on the new cluster. It
worked well for me.
-Ayon
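The replace step of that approach can be sketched in a few lines. This is a hypothetical illustration, not Ayon's actual script: the URIs and the DDL string are made up, and a real version would capture the DDL from the Hive CLI before rewriting it:

```python
import re

# Sketch of the "parse and replace" step: rewrite the old name-node URI
# in captured DDL before re-running it on the new cluster. The URIs and
# table below are illustrative only.
def relocate_ddl(ddl, old_nn, new_nn):
    # Replace only the name-node authority in hdfs:// URIs,
    # leaving the warehouse paths intact.
    return re.sub(re.escape(old_nn), new_nn, ddl)

ddl = ("CREATE EXTERNAL TABLE logs (line STRING)\n"
       "LOCATION 'hdfs://old-nn:8020/warehouse/logs'")
print(relocate_ddl(ddl, "hdfs://old-nn:8020", "hdfs://new-nn:8020"))
```

Using `re.escape` keeps characters like `:` and `/` in the URI from being treated as regex syntax.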
Hello,
Where do you keep your metadata?
If it's a regular RDBMS, you could update the tables directly.
The location is in the partitions table inside your metadata database.
Florin
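If you go the direct-update route, the change amounts to a single UPDATE against the metastore. A sketch of the statement one might run, with made-up name-node URIs; the table and column names (SDS, LOCATION) match common metastore schemas but vary by Hive version, so inspect your own schema first and back up the metastore before touching it:

```python
# Hypothetical sketch: build the UPDATE one might run directly against
# the metastore RDBMS to repoint locations at the new cluster.
# The SDS/LOCATION names and URIs are assumptions, not verified schema.
old_nn = "hdfs://old-nn:8020"
new_nn = "hdfs://new-nn:8020"
sql = (
    "UPDATE SDS SET LOCATION = "
    f"REPLACE(LOCATION, '{old_nn}', '{new_nn}') "
    f"WHERE LOCATION LIKE '{old_nn}%';"
)
print(sql)
```

The `WHERE` clause limits the rewrite to rows still pointing at the old name node, so re-running the statement is harmless.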
On Aug 20, 2011, at 3:52 AM, Aggarwal, Vaibhav wrote:
> You could also specify a fully qualified HDFS path in the CREATE TABLE
> statement.