Thanks.
Once the hive.compactor.initiator.on property is set to true, does the merge
operation take place and reuse the blocks after each update, or do we need
to run the ALTER statement mentioned in
https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-ConfigurationValues ?
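For context, the statement being asked about has this form per that wiki page
(the table name here is illustrative, not from the thread):

  -- metastore-side settings that enable the automatic compactor:
  --   hive.compactor.initiator.on = true
  --   hive.compactor.worker.threads = 1   (must be > 0, or no compaction runs)

  -- manual compaction, for when the automatic initiator is not enabled:
  ALTER TABLE my_acid_table COMPACT 'minor';   -- or 'major'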
The compact operation will merge the data, and then the blocks may be reused.
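(One way to watch this happen, offered here as a suggestion rather than
something from the thread: Hive exposes the compaction queue through a
built-in statement.)

  SHOW COMPACTIONS;   -- lists requested/working/done compactions and their type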
At 2014-12-02 17:10:43, "unmesha sreeveni" wrote:
So that block will not be reused, right? If we are updating the entire
block, and at some point we don't need that record, the block will be
wasted, right?
They need to release the blocks for further writes, right?
Am I correct?
On Tue, Dec 2, 2014 at 2:36 PM, vic0777 wrote:
The document describes how transactions work and what the data layout is:
https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions. See the
"Basic design" section. HDFS files are immutable. Hive creates a delta directory for
every transaction and merges it on read, so the update is not written to the same block.
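To make that layout concrete, here is a sketch of what a transactional
table's directory can look like, following the "Basic design" section (the
path and transaction ids are illustrative):

  /user/hive/warehouse/t/base_0000010/bucket_00000             -- compacted base data
  /user/hive/warehouse/t/delta_0000011_0000011/bucket_00000    -- delta from one transaction
  /user/hive/warehouse/t/delta_0000012_0000012/bucket_00000    -- delta from a later transaction

Readers merge the base with all deltas at read time; a major compaction
rewrites them into a new base_* directory, at which point the old files
(and their blocks) can be freed.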
Why is Hive "UPDATE" not reusing the blocks?
The update is not written to the same block; why is that?
On Tue, Dec 2, 2014 at 10:50 AM, unmesha sreeveni
wrote:
I tried to update my record in a previous Hive version and also tried out
update in Hive 0.14.0, the newer version which supports ACID transactions.
I created a table with 3 buckets holding 180 MB. In my warehouse the data
gets stored into 3 different blocks:
delta_012_012
--- Block ID: 1073751752
--- Block I
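For anyone reproducing the setup described above, a minimal sketch of the
DDL involved, assuming illustrative table and column names (Hive 0.14 ACID
requires a bucketed, ORC-backed table with the transactional property):

  SET hive.support.concurrency = true;
  SET hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
  SET hive.enforce.bucketing = true;   -- needed on Hive 0.x/1.x for bucketed writes

  CREATE TABLE test_acid (id INT, name STRING)
  CLUSTERED BY (id) INTO 3 BUCKETS
  STORED AS ORC
  TBLPROPERTIES ('transactional' = 'true');

  UPDATE test_acid SET name = 'updated' WHERE id = 1;   -- writes a new delta_* directory

Each UPDATE lands in its own delta_* directory rather than rewriting the
existing files, which is why new blocks appear after every update.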