Hey Rohith,
the last time I used Postgres was with postgresql-8.4-701.jdbc4.jar, and it was
working great. But I'd guess all 8.x versions should work. I personally
wouldn't choose Postgres 9.x.
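If it helps, pointing the metastore at Postgres is just the standard JDO
properties in hive-site.xml; something like this (host, port, database name,
and user are placeholders), plus the JDBC jar on Hive's classpath:
javax.jdo.option.ConnectionURL = jdbc:postgresql://dbhost:5432/hive_metastore
javax.jdo.option.ConnectionDriverName = org.postgresql.Driver
javax.jdo.option.ConnectionUserName = hiveuser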
best,
Alex
On Nov 6, 2012, at 5:29 AM, rohithsharma wrote:
> Hi Guys,
>
> I am planning to use postgres as meta
Hi Guys,
I am planning to use Postgres as the metastore with Hive. Can anyone point me
to which Postgres version is compatible with Hive?
Regards
Rohith Sharma K S
Mark, thank you so much for your suggestion.
I've already added the necessary jars to my Hive aux path, so I
can execute my SQL in Hive CLI mode without getting any errors.
But when I use a Java client to access the tables through the Thrift
service, I still need to add these jars manually.
I execut
Hi all,
When I map to an existing HBase table, I get the following error:
FAILED: Error in metadata: MetaException(message:Column Family
data is not defined in hbase table df_money_files)
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask
my h
Hi Mark,
I just started restoring the data to a separate MySQL 5.1 schema; I will try
to create a table and post back here.
I copied the error stack trace below.
Nov 5 22:24:02 127.0.0.1/127.0.0.1 local3:[ETLManager] ERROR [pool-2-thread-1]
exec.MoveTask - Failed with exception Insert of object
Venkatesh,
What's the exact integrity constraint error you are seeing?
I'd be curious to see whether you still get the error if you restore the data
from the mysqldump onto a separate schema/db on a MySQL 5.1 server.
Mark
On Mon, Nov 5, 2012 at 3:37 PM, Venkatesh Kavuluri wrote:
> Sorry f
Cheng,
You will have to add the appropriate HBase-related jars to your classpath.
You can do that by running "add jar" command(s) or by putting them in aux_lib.
See this thread for reference:
http://mail-archives.apache.org/mod_mbox/hive-user/201103.mbox/%3caanlktingqlgknqmizgoi+szfnexgcat8caqtovf8j...@ma
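For example, from the Hive CLI (jar names, versions, and paths below are
placeholders; match them to your HBase/Hive install):
ADD JAR /usr/lib/hive/lib/hive-hbase-handler-0.9.0.jar;
ADD JAR /usr/lib/hbase/hbase-0.92.0.jar;
ADD JAR /usr/lib/zookeeper/zookeeper-3.4.3.jar;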
Sorry for the confusion; the problem is not with the MySQL version upgrade
itself. I did perform the upgrade by doing a mysqldump and restoring the data.
The problem is with how Hive 0.7.1 interacts with the same metastore data
on a different version of MySQL server.
> Date: Mon, 5 Nov
Moving the underlying data files around is not the correct way to perform
an upgrade.
https://dev.mysql.com/doc/refman/5.5/en/upgrading-from-previous-series.html
I would do a mysqldump and then re-insert the data for maximum compatibility.
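Roughly like this (database name, host, and user are placeholders):
mysqldump -u hiveuser -p hive_metastore > metastore_dump.sql
mysql -h new-db-host -u hiveuser -p hive_metastore < metastore_dump.sql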
On Mon, Nov 5, 2012 at 6:21 PM, Venkatesh Kavuluri
wrote:
>
I am working on copying existing Hive metadata (Hive 0.7.1 with MySQL 5.1) to a
new cluster environment (Hive 0.7.1 with MySQL 5.5). I copied over the
metastore tables and modified the data in the SDS (storage descriptors) table
to reflect the new data path. However, I am getting MySQL integrity const
Chunky,
I have used the "recover partitions" command on EMR, and it worked fine.
However, take a look at https://issues.apache.org/jira/browse/HIVE-874. It
seems like the msck command in Apache Hive does the same thing. Try it out
and let us know how it goes.
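Something along these lines, assuming a table named logs (the name is just an
example):
MSCK TABLE logs;         -- reports partitions on the filesystem missing from the metastore
MSCK REPAIR TABLE logs;  -- adds them to the metastore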
Mark
On Mon, Nov 5, 2012 at 7:56 AM, Edward Capri
Compression is a confusing issue. Sequence files that are in block
format are always splittable regardless of which compression codec is
chosen for the block. The Programming Hive book has an entire section
dedicated to the permutations of compression options.
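For example, to get block-compressed sequence file output from Hive (the
codec choice here is just an illustration):
SET hive.exec.compress.output=true;
SET mapred.output.compression.type=BLOCK;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;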
Edward
On Mon, Nov 5, 2012 at 10:57 AM, Kris
Hi all,
I'm looking into finding a suitable format to store data in HDFS, so that
it's available for processing by Hive. Ideally I would like to satisfy the
following:
1. store the data in a format that is readable by multiple Hadoop projects
(e.g. Pig, Mahout, etc.), not just Hive
2. work with a
Recover partitions should work the same way for different file systems.
Edward
On Mon, Nov 5, 2012 at 9:33 AM, Dean Wampler
wrote:
> Writing a script to add the external partitions individually is the only way
> I know of.
>
> Sent from my rotary phone.
>
>
> On Nov 5, 2012, at 8:19 AM, Chunky G
Hi all. I have a Hive+HBase integration cluster.
When I try to execute a query through the Hive Java client, sometimes
a ClassNotFoundException happens.
My Java code:
final Connection conn = DriverManager.getConnection(URL);
final Statement stmt = conn.createStatement();
final ResultSet rs = stmt.executeQuery("SELECT count(*) FROM
test_ta
Writing a script to add the external partitions individually is the only way I
know of.
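A minimal sketch of such a script (table name, partition column, and S3 paths
are made-up examples):
for dt in 2012-11-01 2012-11-02 2012-11-03; do
  hive -e "ALTER TABLE logs ADD IF NOT EXISTS PARTITION (dt='$dt') LOCATION 's3://my-bucket/logs/dt=$dt/'"
done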
Sent from my rotary phone.
On Nov 5, 2012, at 8:19 AM, Chunky Gupta wrote:
> Hi Dean,
>
> Actually I was having Hadoop and Hive cluster on EMR and I have S3 storage
> containing logs which updates dail
Hi Dean,
Actually, I had a Hadoop and Hive cluster on EMR, with S3 storage containing
logs that update daily, partitioned by date (dt), and I was using this
recover partitions feature.
Now I want to shift to EC2 and run my own Hadoop and Hive cluster. So,
what is the alternate of usi
RECOVER PARTITIONS is an enhancement added by Amazon to their version
of Hive.
http://docs.amazonwebservices.com/ElasticMapReduce/latest/DeveloperGuide/emr-hive-additional-features.html
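On EMR's Hive the statement is a one-liner (the table name is just an
example):
ALTER TABLE logs RECOVER PARTITIONS;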
Chapter 21 of Programming Hive discusses this feature and other aspects
of using Hive in EMR.
dean
On
Hi,
I have a cluster setup on EC2 with Hadoop version 0.20.2 and Hive
version 0.8.1 (I configured everything). I have created a table using:
CREATE EXTERNAL TABLE XXX ( YYY )
PARTITIONED BY ( ZZZ )
ROW FORMAT DELIMITED FIELDS TERMINATED BY 'WWW'
LOCATION 's3://my-location/data/';
Now I am