[ 
https://issues.apache.org/jira/browse/HIVE-12352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15099182#comment-15099182
 ] 

Hive QA commented on HIVE-12352:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12782361/HIVE-12352.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10019 tests executed
*Failed tests:*
{noformat}
TestHWISessionManager - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_union
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hadoop.hive.ql.exec.spark.session.TestSparkSessionManagerImpl.testMultiSessionMultipleUse
org.apache.hadoop.hive.ql.exec.spark.session.TestSparkSessionManagerImpl.testSingleSessionMultipleUse
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6627/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6627/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6627/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12782361 - PreCommit-HIVE-TRUNK-Build

> CompactionTxnHandler.markCleaned() may delete too much
> ------------------------------------------------------
>
>                 Key: HIVE-12352
>                 URL: https://issues.apache.org/jira/browse/HIVE-12352
>             Project: Hive
>          Issue Type: Bug
>          Components: Transactions
>    Affects Versions: 1.0.0
>            Reporter: Eugene Koifman
>            Assignee: Eugene Koifman
>            Priority: Blocker
>         Attachments: HIVE-12352.2.patch, HIVE-12352.3.patch, HIVE-12352.patch
>
>
>    The Worker will start with the DB in state X (w.r.t. this partition). While 
> it is working, more txns will commit against the partition it is compacting. 
> markCleaned() will then delete state up to X *and everything since*: there may 
> be new delta files created between the compaction starting and the cleaning, 
> and those will not be compacted until more transactions happen. So ideally 
> this should only delete up to the TXN_ID that was actually compacted (i.e. the 
> HWM in the Worker?). Then this could also run at READ_COMMITTED. This means 
> we'd want to store the HWM in COMPACTION_QUEUE when the Worker picks up the 
> job.
> Actually the problem is even worse (but also solved using the HWM as above): 
> suppose some transactions (against the same partition) have started and 
> aborted since the time the Worker ran the compaction job. That means there 
> are never-compacted delta files with data belonging to these aborted txns. 
> The following query will pick up these aborted txns:
> {noformat}
> s = "select txn_id from TXNS, TXN_COMPONENTS where txn_id = tc_txnid and txn_state = '" +
>     TXN_ABORTED + "' and tc_database = '" + info.dbname + "' and tc_table = '" +
>     info.tableName + "'";
> if (info.partName != null) s += " and tc_partition = '" + info.partName + "'";
> {noformat}
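For reference, the concatenation above can be restated as a small stand-alone helper. This is only an illustrative sketch, not the actual CompactionTxnHandler code: the `'a'` state literal stands in for the real TXN_ABORTED constant, and the dbname/tableName/partName parameters stand in for the CompactionInfo fields referenced in the snippet.

```java
public class AbortedTxnQuery {
    // Sketch of the query-building logic quoted above; partName may be null
    // for table-level compactions, in which case no tc_partition filter is added.
    static String buildQuery(String dbname, String tableName, String partName) {
        String s = "select txn_id from TXNS, TXN_COMPONENTS where txn_id = tc_txnid"
                + " and txn_state = 'a'"   // stands in for the TXN_ABORTED constant
                + " and tc_database = '" + dbname + "'"
                + " and tc_table = '" + tableName + "'";
        if (partName != null) {
            s += " and tc_partition = '" + partName + "'";
        }
        return s;
    }

    public static void main(String[] args) {
        System.out.println(buildQuery("default", "t1", "p=1"));
        System.out.println(buildQuery("default", "t1", null));
    }
}
```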
> The logic after that will delete the relevant data from TXN_COMPONENTS, and if 
> one of these txns becomes empty, it will be picked up by cleanEmptyAbortedTxns(). 
> At that point any metadata about an aborted txn is gone and the system will 
> think it's committed.
> The HWM in this case would be (in ValidCompactorTxnList):
> {noformat}
> if (minOpenTxn > 0)
>     min(highWaterMark, minOpenTxn)
> else
>     highWaterMark
> {noformat}
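The HWM rule quoted above can be sketched as a tiny helper. This is a hedged illustration only: the class and method names are hypothetical (the real logic lives in ValidCompactorTxnList), and a minOpenTxn of 0 is assumed to mean "no open transactions recorded".

```java
public class CompactorHwm {
    /**
     * Effective high water mark for the compactor: everything up to the
     * compaction's HWM is eligible, but never at or above the lowest
     * still-open transaction. minOpenTxn <= 0 is treated as "no open txns".
     */
    public static long effectiveHwm(long highWaterMark, long minOpenTxn) {
        if (minOpenTxn > 0) {
            return Math.min(highWaterMark, minOpenTxn);
        }
        return highWaterMark;
    }

    public static void main(String[] args) {
        // An open txn below the HWM caps the range the Cleaner may touch.
        System.out.println(effectiveHwm(100L, 40L)); // 40
        // No open txns: the full HWM applies.
        System.out.println(effectiveHwm(100L, 0L));  // 100
    }
}
```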



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
