[ 
https://issues.apache.org/jira/browse/HIVE-11388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14999937#comment-14999937
 ] 

Eugene Koifman commented on HIVE-11388:
---------------------------------------

A simpler idea:
The Initiator places a row in COMPACTION_QUEUE to 'schedule' a compaction.
Before that, it generates an ID from NEXT_COMPACTION_QUEUE_ID.
With HIVE-11948 we have support for SELECT FOR UPDATE on every DB except Derby,
though even without it we have Serializable isolation.  Either way we should be
able to check that the (db, table, partition, state=Initiated/Working) combination
is unique.
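
To make that concrete, here is a rough JDBC sketch of the check-and-insert, not
the actual metastore code.  The CQ_* column names and the single-character
state codes ('i' = initiated, 'w' = working) are assumptions for illustration.

// Run by the Initiator against the metastore RDBMS inside one transaction.
// SELECT ... FOR UPDATE locks any matching row so two Initiators cannot both
// pass the check; with Serializable isolation the same guarantee holds even
// where FOR UPDATE is unavailable (e.g. Derby).
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CompactionScheduler {

  // Returns true if a compaction was queued, false if an equivalent request
  // is already in the Initiated or Working state.
  public boolean scheduleCompaction(Connection conn, long nextId,
                                    String db, String table, String partition)
      throws SQLException {
    conn.setAutoCommit(false);
    String check =
        "SELECT CQ_ID FROM COMPACTION_QUEUE " +
        "WHERE CQ_DATABASE = ? AND CQ_TABLE = ? AND CQ_PARTITION = ? " +
        "AND CQ_STATE IN ('i', 'w') FOR UPDATE";
    try (PreparedStatement ps = conn.prepareStatement(check)) {
      ps.setString(1, db);
      ps.setString(2, table);
      ps.setString(3, partition);
      try (ResultSet rs = ps.executeQuery()) {
        if (rs.next()) {            // another Initiator already scheduled it
          conn.rollback();
          return false;
        }
      }
    }
    String insert =
        "INSERT INTO COMPACTION_QUEUE (CQ_ID, CQ_DATABASE, CQ_TABLE, CQ_PARTITION, CQ_STATE) " +
        "VALUES (?, ?, ?, ?, 'i')";
    try (PreparedStatement ps = conn.prepareStatement(insert)) {
      ps.setLong(1, nextId);        // id allocated earlier from NEXT_COMPACTION_QUEUE_ID
      ps.setString(2, db);
      ps.setString(3, table);
      ps.setString(4, partition);
      ps.executeUpdate();
    }
    conn.commit();
    return true;
  }
}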

This allows the Initiator to run on any/every metastore while ensuring that no
double-compaction happens.

We still need a solution for the Cleaner.  We could add another state, such as
'bc' = being cleaned, so that one Cleaner marks a row as 'bc' and the others
ignore it.  (This state just needs to be reset on failure.)
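
Again only as an illustrative sketch (the 'r' = ready-for-cleaning code, the
column names, and the method names are assumptions), a single atomic UPDATE is
enough for one Cleaner to claim a row while the others skip it:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class CleanerClaim {

  // Returns true if this Cleaner won the row and should do the cleaning;
  // an update count of 0 means another Cleaner already marked it 'bc'.
  public boolean claim(Connection conn, long cqId) throws SQLException {
    String sql =
        "UPDATE COMPACTION_QUEUE SET CQ_STATE = 'bc' " +
        "WHERE CQ_ID = ? AND CQ_STATE = 'r'";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
      ps.setLong(1, cqId);
      return ps.executeUpdate() == 1;
    }
  }

  // On failure, flip the row back so another Cleaner can retry it later.
  public void release(Connection conn, long cqId) throws SQLException {
    String sql =
        "UPDATE COMPACTION_QUEUE SET CQ_STATE = 'r' " +
        "WHERE CQ_ID = ? AND CQ_STATE = 'bc'";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
      ps.setLong(1, cqId);
      ps.executeUpdate();
    }
  }
}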

This avoids the need for ZooKeeper.

> there should only be 1 Initiator for compactions per Hive installation
> ----------------------------------------------------------------------
>
>                 Key: HIVE-11388
>                 URL: https://issues.apache.org/jira/browse/HIVE-11388
>             Project: Hive
>          Issue Type: Bug
>          Components: Transactions
>    Affects Versions: 1.0.0
>            Reporter: Eugene Koifman
>            Assignee: Eugene Koifman
>
> org.apache.hadoop.hive.ql.txn.compactor.Initiator is a thread that runs 
> inside the metastore service to manage compactions of ACID tables.  There 
> should be exactly 1 instance of this thread (even with multiple Thrift 
> services).
> This is documented in 
> https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-Configuration
>  but not enforced.
> Should add enforcement, since more than 1 Initiator could cause concurrent 
> attempts to compact the same table/partition - which will not work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
