What version of Hive are you running?  In the current codebase at least, 
TxnHandler.checkRetryable() keeps track of how many retries it allowed – it 
will not retry forever.
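
Roughly, the idea is a capped counter around transient (deadlock/serialization) failures. A minimal sketch of that shape, only to illustrate the point – the names, the limit, and the SQLState check below are illustrative and are not the exact identifiers in TxnHandler:

import java.sql.SQLException;

/** Sketch of a bounded retry check; not the actual Hive code. */
public class BoundedRetrySketch {

    /** Signals the caller that the failed operation may be attempted again. */
    static class RetryException extends Exception {}

    private static final int RETRY_LIMIT = 10;   // would normally come from configuration
    private int retryNum = 0;

    /** Decide whether a SQLException is worth another attempt. */
    void checkRetryable(SQLException e, String caller) throws RetryException {
        if (isTransient(e) && retryNum < RETRY_LIMIT) {
            retryNum++;
            System.err.println("Retryable error in " + caller
                    + ", attempt " + retryNum + " of " + RETRY_LIMIT);
            throw new RetryException();
        }
        // Counter exhausted or error not transient: fall through and let the
        // caller surface the failure instead of retrying forever.
    }

    /** Treat deadlocks / serialization failures (SQLState class 40) as transient. */
    private boolean isTransient(SQLException e) {
        String state = e.getSQLState();
        return state != null && state.startsWith("40");
    }
}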


From: Steve Howard <stevedhow...@gmail.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>
Date: Tuesday, September 22, 2015 at 1:32 PM
To: "user@hive.apache.org" <user@hive.apache.org>
Subject: Transaction deadlocks


Thread A…


"HiveServer2-Background-Pool: Thread-35" #35 prio=5 os_prio=0 
tid=0x00007fd150e40000 nid=0x2c97 runnable [0x00007fd146e0a000]
   java.lang.Thread.State: RUNNABLE
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
        at java.net.SocketInputStream.read(SocketInputStream.java:170)
        at java.net.SocketInputStream.read(SocketInputStream.java:141)
        at oracle.net.ns.Packet.receive(Packet.java:300)
        at oracle.net.ns.DataPacket.receive(DataPacket.java:106)
        at oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:315)
        at oracle.net.ns.NetInputStream.read(NetInputStream.java:260)
        at oracle.net.ns.NetInputStream.read(NetInputStream.java:185)
        at oracle.net.ns.NetInputStream.read(NetInputStream.java:102)
        at 
oracle.jdbc.driver.T4CSocketInputStreamWrapper.readNextPacket(T4CSocketInputStreamWrapper.java:124)
        at 
oracle.jdbc.driver.T4CSocketInputStreamWrapper.read(T4CSocketInputStreamWrapper.java:80)
        at oracle.jdbc.driver.T4CMAREngine.unmarshalUB1(T4CMAREngine.java:1137)
        at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:290)
        at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:192)
        at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
        at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:193)
        at 
oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:1033)
        at 
oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1329)
        at 
oracle.jdbc.driver.OracleStatement.executeUpdateInternal(OracleStatement.java:1838)
        at 
oracle.jdbc.driver.OracleStatement.executeUpdate(OracleStatement.java:1803)
        - locked <0x00000000c09fcda0> (a oracle.jdbc.driver.T4CConnection)
        at 
oracle.jdbc.driver.OracleStatementWrapper.executeUpdate(OracleStatementWrapper.java:294)
        at 
org.apache.commons.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
        at 
org.apache.commons.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
        at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:1432)
        - locked <0x00000000c09fcc28> (a java.lang.Object)
        at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:422)
        at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:433)
        at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:433)
        at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:433)
        at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:433)
        at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:433)
        at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:433)
        at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:433)



…blocks Thread B, resulting in a deadlock message being written to the logs 
over and over again…




"HiveServer2-Background-Pool: Thread-51" #51 prio=5 os_prio=0 
tid=0x000000000279f000 nid=0x7227 waiting for monitor entry [0x00007fd146a47000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:1361)
        - waiting to lock <0x00000000c09fcc28> (a java.lang.Object)
        at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:422)
        at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:433)
        at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:433)
        at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:433)
        at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:433)
        at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:433)
        at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:433)
        at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:433)
        at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:433)
        at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:433)



…but my guess is that the bigger issue is that the stack will eventually be exhausted, since the method below calls itself recursively on every retry (notice the repeated lock() frames in the stacks above)…


  public LockResponse lock(LockRequest rqst)
      throws NoSuchTxnException, TxnAbortedException, MetaException {
    this.deadlockCnt = 0;
    try {
      Connection dbConn = null;
      try {
        // 8 == Connection.TRANSACTION_SERIALIZABLE
        dbConn = getDbConn(8);
        return lock(dbConn, rqst, true);
      } catch (SQLException e) {
        LOG.debug("Going to rollback");
        rollbackDBConn(dbConn);
        checkRetryable(dbConn, e, "lock(" + rqst + ")");
        throw new MetaException("Unable to update transaction database " +
            StringUtils.stringifyException(e));
      } finally {
        closeDbConn(dbConn);
      }
    } catch (RetryException e) {
      // On a retryable failure the method calls itself again -- this is the
      // recursion visible in the stacks above.
      return lock(rqst);
    }
  }
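
For what it's worth, the stack-growth concern goes away if the retry is a loop rather than a recursive call, since every attempt then reuses the same frame. A sketch only, with simplified stand-in names rather than the real TxnHandler API:

import java.sql.SQLException;
import java.util.concurrent.Callable;

/** Sketch of a loop-based retry; illustrative only, not the actual Hive code. */
public class IterativeRetrySketch {

    private static final int RETRY_LIMIT = 10;

    static <T> T withRetry(Callable<T> work) throws Exception {
        Exception last = null;
        // Each failed attempt loops here instead of calling the method again,
        // so the stack depth stays constant no matter how many retries happen.
        for (int attempt = 1; attempt <= RETRY_LIMIT; attempt++) {
            try {
                return work.call();          // the actual lock / database work
            } catch (SQLException e) {
                last = e;                    // transient failure: try again
            }
        }
        throw last;                          // retry budget exhausted
    }
}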



This effectively freezes all access to a table.

Any ideas?
