An RDBMS works on the basis of changes being written to a redo or transaction
log before commits.
To get a true feed into Hive you will need the committed log deliveries in the
form of text-delimited files, loaded into Hive temporary tables and then
inserted into the Hive table following the initial load using…
Thank you!
r7raul1...@163.com
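A minimal HiveQL sketch of that staging pattern, assuming hypothetical table
names (tx_log_stage, tx_log), hypothetical columns, and a hypothetical landing
path for the delimited log deliveries:

  -- External staging table over the delimited log-delivery files
  CREATE EXTERNAL TABLE tx_log_stage (
    id BIGINT,
    payload STRING,
    update_time STRING
  )
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  LOCATION '/data/landing/tx_log';  -- hypothetical path

  -- Append the committed changes to the target Hive table
  INSERT INTO TABLE tx_log
  SELECT id, payload, update_time FROM tx_log_stage;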
From: Gerald-G
Date: 2015-05-06 10:35
To: user
Subject: Re: How to config hive when using namenode HA
Upgrading the Hive Metastore to Use HDFS HA Using the Command Line
To configure the Hive metastore to use HDFS HA, change the records to
reflect the location specified in the dfs.nameservices property, using the
Hive metatool to obtain and change the locations.
*Note*: Before attempting to upgra…
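For example, the metatool invocations look like this (a sketch; the
nameservice and the old NameNode host are placeholders):

  # List the FS root locations currently recorded in the metastore
  hive --service metatool -listFSRoot

  # Rewrite them to the HA nameservice (new location first, then old location)
  hive --service metatool -updateLocation hdfs://nameservice1 \
    hdfs://oldnamenode.example.com:8020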
I changed the SQL WHERE condition to (where t.update_time >= '2015-05-04'),
and the SQL can return a result after a while, because t.update_time >=
'2015-05-04' filters out many rows during the table scan. But why does changing
the where condition to (where t.update_time >= '2015-05-04' or
length(t8.end_user_id)>0)…
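For reference, a hedged sketch of the two predicate shapes being compared
(names taken from the fragment above); the likely difference is that a
single-table predicate can be applied while scanning t, whereas an OR that
also references t8 cannot prune t's scan before the join:

  -- Selective single-table predicate: applied during the scan of t
  WHERE t.update_time >= '2015-05-04'

  -- OR across two tables: the filter can only run after the join
  WHERE t.update_time >= '2015-05-04' OR length(t8.end_user_id) > 0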
Please find the attached error log for the same.
On Tue, May 5, 2015 at 11:36 PM, Jason Dere wrote:
> Looks like you are running into
> https://issues.apache.org/jira/browse/HIVE-8321, fixed in Hive 0.14.
> You might be stuck having to use Kryo; what are the issues you are having
> with Kryo?
You have to write a different HQL query that will handle updates and deletes;
you cannot do this directly from Sqoop. One common shape for such a query is
sketched below.
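As a hedged sketch: assume the delta rows carry an op flag ('I'/'U'/'D') and
an updated_at timestamp (all table and column names here are hypothetical):

  -- Keep only the newest version of each key; drop keys whose latest op is a delete
  INSERT OVERWRITE TABLE customers_merged
  SELECT id, name, email, updated_at
  FROM (
    SELECT id, name, email, updated_at, op,
           ROW_NUMBER() OVER (PARTITION BY id ORDER BY updated_at DESC) AS rn
    FROM (
      SELECT id, name, email, updated_at, 'I' AS op FROM customers
      UNION ALL
      SELECT id, name, email, updated_at, op FROM customers_delta
    ) unioned
  ) ranked
  WHERE rn = 1 AND op <> 'D';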
On Wed, May 6, 2015 at 12:02 AM, Divakar Reddy
wrote:
> As far as I know, Sqoop doesn't support updates and deletes.
>
> We are handling it like this:
>
> 1) drop the particular data from the *partiti…
As far as I know, Sqoop doesn't support updates and deletes.
We are handling it like this:
1) drop the particular data from the *partitioned* table (by the partition
column) and load it again, with conditions in Sqoop like --query "select *
from xyz where date = '2015-04-02'" (spelled out in the sketch after this
message)
Thanks,
Divakar
On Tue, May 5, 201…
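That drop-and-reload approach, spelled out as a shell sketch (connection
string, credentials, column and path names are all placeholders; note that
Sqoop's --query form requires the $CONDITIONS token and a --split-by column
or -m 1):

  # 1) Drop the partition that is about to be reloaded
  hive -e "ALTER TABLE xyz DROP IF EXISTS PARTITION (dt='2015-04-02');"

  # 2) Re-import just that day's rows from the RDBMS
  sqoop import \
    --connect jdbc:oracle:thin:@dbhost:1521/ORCL \
    --username etl_user -P \
    --query "SELECT * FROM xyz WHERE date_col = '2015-04-02' AND \$CONDITIONS" \
    --split-by id \
    --target-dir /user/hive/warehouse/xyz/dt=2015-04-02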
Looks like you are running into
https://issues.apache.org/jira/browse/HIVE-8321, fixed in Hive 0.14.
You might be stuck having to use Kryo; what are the issues you are having with
Kryo?
Thanks,
Jason
On May 5, 2015, at 4:28 AM, Bhagwan S. Soni <bhgwnsson...@gmail.com> wrote:
Bottom o…
Hi gurus,
I can use Sqoop import to get RDBMS data, say from Oracle, into Hive first and
then use incremental append for new rows with a PK and a last value.
However, how do you account for updates and deletes with Sqoop without a full
load of the table from the RDBMS to Hive?
Thanks
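For context, the incremental-append import mentioned above looks roughly like
this (a sketch; the connection string, table, and key column are placeholders):

  sqoop import \
    --connect jdbc:oracle:thin:@dbhost:1521/ORCL \
    --username etl_user -P \
    --table CUSTOMERS \
    --hive-import --hive-table customers \
    --incremental append \
    --check-column ID \
    --last-value 100000   # highest ID already imported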
Hi,
I've just upgraded to Hive 1.1.0 and it looks like there is a problem with
the distributed cache.
I use ADD FILE, then a UDF that wants to read the file. The following
syntax works in Hive 1.0.0, but Hive can't find the file in 1.1.0 (testfile
exists on HDFS; the built-in UDF in_file is just a…
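A minimal reproduction of that usage, as a sketch (the HDFS path and table
name are placeholders; in_file(str, filename) is the built-in that returns
true if str appears as an entire line in the given file):

  ADD FILE hdfs:///tmp/testfile.txt;
  -- The added file is localized to each worker and referenced by name
  SELECT in_file('some_line', 'testfile.txt') FROM some_table LIMIT 1;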
Bottom on the log:
at java.beans.Encoder.writeObject(Encoder.java:74)
at java.beans.XMLEncoder.writeObject(XMLEncoder.java:327)
at java.beans.Encoder.writeExpression(Encoder.java:330)
at java.beans.XMLEncoder.writeExpression(XMLEncoder.java:454)
at java.…
kryo/javaXML are the only available options. What are the errors you see with
each setting?
On May 1, 2015, at 9:41 AM, Bhagwan S. Soni <bhgwnsson...@gmail.com> wrote:
Hi Hive Users,
I'm using Cloudera's Hive 0.13 version, which by default provides the Kryo
plan serialization format.
hive…
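The serializer is selected by a single setting; switching to the XML
serializer looks like this (valid values, per the thread above, are kryo and
javaXML):

  -- Per-session override; can also be set in hive-site.xml
  SET hive.plan.serialization.format=javaXML;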
I want to turn my single NameNode into NameNode HA. How do I configure Hive?
r7raul1...@163.com
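For background, the client-side HA pieces Hive ends up depending on after the
switch look roughly like this (a sketch with a placeholder nameservice; the
per-NameNode dfs.namenode.rpc-address.* entries are omitted for brevity):

  <!-- hdfs-site.xml: a logical nameservice replacing the single NameNode -->
  <property>
    <name>dfs.nameservices</name>
    <value>nameservice1</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.nameservice1</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.nameservice1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

  <!-- core-site.xml: point the default FS at the nameservice -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://nameservice1</value>
  </property>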