Then you will get a copy of each sub-table.
My project uses external partitions instead; that way, you don't
need to copy the sub-tables.
1) Create a table some_table
2) ALTER TABLE some_table ADD PARTITION (partition_col =
'partition_col_value1') LOCATION '/user/data/subTable1'
ALTER TABLE some_ta
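The truncated steps above, sketched in full (the column names and the second partition's values are illustrative assumptions; the table name, first partition value, and paths follow the example):

```sql
-- Sketch of the external-partition approach described above.
-- col1/col2 and the second partition's value/path are hypothetical.
CREATE EXTERNAL TABLE some_table (
  col1 STRING,
  col2 INT
)
PARTITIONED BY (partition_col STRING);

-- Map each existing sub-table directory to a partition; no data is copied.
ALTER TABLE some_table ADD PARTITION (partition_col = 'partition_col_value1')
  LOCATION '/user/data/subTable1';
ALTER TABLE some_table ADD PARTITION (partition_col = 'partition_col_value2')
  LOCATION '/user/data/subTable2';
```

Each partition simply points at a directory that already exists in HDFS, so dropping the table or a partition leaves the underlying files untouched.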
I have created an Apache JIRA for this issue here:
https://issues.apache.org/jira/browse/HIVE-3453 (apparently I cannot assign
this to myself)
Please feel free to upvote this if you feel it would be helpful to you
or the community. Also please be sure to leave comments or suggestions on
the implementation.
Hi Ashok,
AFAIK, there is no property that will get you this functionality on the fly.
Regards,
Bejoy KS
From: "ashok.sa...@wipro.com"
To: user@hive.apache.org; bejoy...@yahoo.com
Sent: Thursday, September 13, 2012 2:42 AM
Subject: RE: Performance: hive+hbase integration
Yes Bejoy, I did it today and it's working. But I was thinking that by setting
some property we could achieve it.
Is there anything like that?
Thanks
Ashok
From: Bejoy KS [mailto:bejoy...@yahoo.com]
Sent: 13 September 2012 02:40
To: user@hive.apache.org
Subject: Re: Performance: hive+hbase integration q
Hi Ashok
'LOAD DATA INPATH ..' issues an HDFS move under the hood, which is why the
original data is no longer present in HDFS after the load operation. If you want
to preserve the data in some HDFS location and use the same with Hive, why not
create an external table and point it to the required HDFS location?
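A minimal sketch of that suggestion (the table name, columns, delimiter, and path are hypothetical):

```sql
-- External table over data that already lives in HDFS. Dropping the
-- table later leaves the files in place, unlike LOAD DATA INPATH,
-- which moves them into the warehouse directory.
CREATE EXTERNAL TABLE my_events (
  event_time STRING,
  user_id    INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/user/data/preserved';
```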
Hi
Just follow the steps
1) Create an external table with location as /user/aggregatedTable
2) Move the contents of 3 monthly tables to this location
hadoop fs -cp /user/monthlyTable1/* /user/aggregatedTable
hadoop fs -cp /user/monthlyTable2/* /user/aggregatedTable
...
Replace the hdfs dirs in
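The steps above can be sketched end to end from the Hive CLI (the table name and schema are assumptions and must match the monthly tables; the paths follow the example):

```sql
-- 1) External table over the aggregation directory (hypothetical schema).
CREATE EXTERNAL TABLE aggregated (
  metric_name  STRING,
  metric_value BIGINT
)
LOCATION '/user/aggregatedTable';

-- 2) Copy each month's files into that directory; the Hive CLI can run
--    HDFS shell commands directly via `dfs`.
dfs -cp /user/monthlyTable1/* /user/aggregatedTable;
dfs -cp /user/monthlyTable2/* /user/aggregatedTable;
dfs -cp /user/monthlyTable3/* /user/aggregatedTable;
```

Because the table is external and points at the directory the files were copied into, no further load step is needed; queries on `aggregated` see all three months' data immediately.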
Thanks Bejoy.
Yes, they have the same schema.
Can you explain further how to? I am new to hive.
On Wed, Sep 12, 2012 at 1:06 PM, Bejoy KS wrote:
> Hi
>
> If all the 3 tables have the same schema, create an external table and
> move the data from all the 3 tables to this new table's location.
Hi
If all the 3 tables have the same schema, create an external table and move
the data from all the 3 tables to this new table's location. A plain HDFS copy
or move is not that expensive.
Regards
Bejoy KS
Sent from handheld, please excuse typos.
-Original Message-
From: zuohua zhang
Hey Matt,
We did something similar at Facebook to capture information on who ran
what on the clusters, and dumped that out to an audit DB. Specifically, we
were using Hive post-execution hooks to achieve that:
http://hive.apache.org/docs/r0.7.0/api/org/apache/hadoop/hive/ql/hooks/PostExecute.html
All,
I looked in the Hive JIRA and saw nothing like what we are looking to
implement, so I am interested in getting feedback on whether this overlaps
with any other current efforts.
Currently our Hive warehouse is open to querying from any of our business
analysts and we pool the