The error you are seeing comes from the Hive metastore. Issues like this
usually trace back to one of two causes:
1) Load on the metastore
2) DataNucleus configuration

For now, if possible, restart the Hive metastore and see whether that
resolves your issue.




On Tue, Jun 10, 2014 at 3:27 PM, Fernando Agudo <fag...@pragsis.com> wrote:

> I have problems upgrading to hive-0.13 or 0.12 because the cluster is in
> production. I only have this DataNucleus configuration:
>
>         <property>
>                 <name>datanucleus.fixedDatastore</name>
>                 <value>true</value>
>         </property>
>         <property>
>                 <name>datanucleus.autoCreateSchema</name>
>                 <value>false</value>
>         </property>
>
> Is this relevant to the problem?
>
> Thanks,
>
>
> On 10/06/14 10:53, Nitin Pawar wrote:
>
>> Hive 0.9.0 with CDH4.1 <--- this is a very old release.
>>
>> I would recommend upgrading to hive-0.13, or at least 0.12, and seeing
>> whether that helps.
>>
>> The error you are seeing occurs while loading data into a partition: the
>> metastore alter/add partition call is failing.
>>
>> Can you try upgrading and see if that resolves your issue?
>> If not, can you share your DataNucleus-related settings in Hive?
>>
>>
>> On Tue, Jun 10, 2014 at 2:16 PM, Fernando Agudo <fag...@pragsis.com>
>> wrote:
>>
>>  Hello,
>>>
>>> I'm working with Hive 0.9.0 with CDH4.1. I have a process which loads
>>> data into Hive every minute, creating the partition if necessary.
>>> I have been monitoring this process for three days and noticed that
>>> there is a method (*listStorageDescriptorsWithCD*) whose execution time
>>> keeps increasing. The first execution of this method took about 15
>>> milliseconds, and by the end (three days later) it took more than 3
>>> seconds; after that, Hive throws an exception and starts working again.
>>>
>>> I have checked this method but haven't found anything suspicious.
>>> Could it be a bug?
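[Editorial sketch, not part of the original thread: the growth Fernando measured can be extracted from the metastore DEBUG log by pairing each "Executing" line with its "Done executing" line and diffing the timestamps. The timestamp pattern below matches the excerpts quoted in this thread; if your log4j layout differs, the regex is an assumption to adjust.]

```python
import re
from datetime import datetime, timedelta

# Match the metastore DEBUG lines for listStorageDescriptorsWithCD, as they
# appear in the excerpts below (timestamp pattern is an assumption otherwise).
LINE_RE = re.compile(
    r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) DEBUG .*"
    r"(Executing|Done executing query for) listStorageDescriptorsWithCD"
)

def call_durations_ms(lines):
    """Return elapsed milliseconds for each Executing/Done pair found."""
    durations, start = [], None
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S,%f")
        if m.group(2) == "Executing":
            start = ts
        elif start is not None:
            # timedelta division gives an exact float number of milliseconds
            durations.append((ts - start) / timedelta(milliseconds=1))
            start = None
    return durations
```

Running this over the three days of logs would show the per-call latency climbing from milliseconds toward seconds, which is the symptom described above.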
>>>
>>>
>>>
>>> *2014-06-05 09:58:20,921* DEBUG metastore.ObjectStore (ObjectStore.java:listStorageDescriptorsWithCD(2036)) - Executing listStorageDescriptorsWithCD
>>> *2014-06-05 09:58:20,928* DEBUG metastore.ObjectStore (ObjectStore.java:listStorageDescriptorsWithCD(2045)) - Done executing query for listStorageDescriptorsWithCD
>>>
>>>
>>> *2014-06-08 20:15:33,867* DEBUG metastore.ObjectStore (ObjectStore.java:listStorageDescriptorsWithCD(2036)) - Executing listStorageDescriptorsWithCD
>>> *2014-06-08 20:15:36,134* DEBUG metastore.ObjectStore (ObjectStore.java:listStorageDescriptorsWithCD(2045)) - Done executing query for listStorageDescriptorsWithCD
>>>
>>>
>>>
>>> 2014-06-08 20:16:34,600 DEBUG metastore.ObjectStore (ObjectStore.java:removeUnusedColumnDescriptor(1989)) - execute removeUnusedColumnDescriptor
>>> *2014-06-08 20:16:34,600 DEBUG metastore.ObjectStore (ObjectStore.java:listStorageDescriptorsWithCD(2036)) - Executing listStorageDescriptorsWithCD*
>>> 2014-06-08 20:16:34,805 ERROR metadata.Hive (Hive.java:getPartition(1453)) - org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter partition.
>>>          at org.apache.hadoop.hive.ql.metadata.Hive.alterPartition(Hive.java:429)
>>>          at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1446)
>>>          at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1158)
>>>          at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:304)
>>>          at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
>>>          at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
>>>          at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1331)
>>>          at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1117)
>>>          at org.apache.hadoop.hive.ql.Driver.run(Driver.java:950)
>>>          at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:191)
>>>          at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:630)
>>>          at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:618)
>>>          at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
>>>          at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
>>>          at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:176)
>>>          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>          at java.lang.Thread.run(Thread.java:662)
>>> Caused by: MetaException(message:The transaction for alter partition did not commit successfully.)
>>>          at org.apache.hadoop.hive.metastore.ObjectStore.alterPartition(ObjectStore.java:1927)
>>>          at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
>>>          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>          at java.lang.reflect.Method.invoke(Method.java:597)
>>>          at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
>>>          at $Proxy0.alterPartition(Unknown Source)
>>>          at org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartition(HiveAlterHandler.java:254)
>>>          at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.rename_partition(HiveMetaStore.java:1816)
>>>          at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.rename_partition(HiveMetaStore.java:1788)
>>>          at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partition(HiveMetaStore.java:1771)
>>>          at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_partition(HiveMetaStoreClient.java:834)
>>>          at org.apache.hadoop.hive.ql.metadata.Hive.alterPartition(Hive.java:425)
>>>          ... 17 more
>>>
>>> 2014-06-08 20:16:34,827 ERROR exec.Task (SessionState.java:printError(403)) - Failed with exception org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter partition.
>>> org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter partition.
>>>          at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1454)
>>>          at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1158)
>>>          at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:304)
>>>          at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
>>>          at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
>>>          at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1331)
>>>          at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1117)
>>>          at org.apache.hadoop.hive.ql.Driver.run(Driver.java:950)
>>>          at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:191)
>>>          at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:630)
>>>          at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:618)
>>>          at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
>>>          at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
>>>          at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:176)
>>>          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>          at java.lang.Thread.run(Thread.java:662)
>>> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter partition.
>>>          at org.apache.hadoop.hive.ql.metadata.Hive.alterPartition(Hive.java:429)
>>>          at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1446)
>>>          ... 16 more
>>> Caused by: MetaException(message:The transaction for alter partition did not commit successfully.)
>>>          at org.apache.hadoop.hive.metastore.ObjectStore.alterPartition(ObjectStore.java:1927)
>>>          at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
>>>          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>          at java.lang.reflect.Method.invoke(Method.java:597)
>>>          at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
>>>          at $Proxy0.alterPartition(Unknown Source)
>>>          at org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartition(HiveAlterHandler.java:254)
>>>          at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.rename_partition(HiveMetaStore.java:1816)
>>>          at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.rename_partition(HiveMetaStore.java:1788)
>>>          at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partition(HiveMetaStore.java:1771)
>>>          at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_partition(HiveMetaStoreClient.java:834)
>>>          at org.apache.hadoop.hive.ql.metadata.Hive.alterPartition(Hive.java:425)
>>>          ... 17 more
>>> 2014-06-08 20:16:34,852 ERROR ql.Driver (SessionState.java:printError(403)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
>>>
>>>
>>> --
>>> *Fernando Agudo Tarancón*
>>> /Big Data Software Engineer/
>>>
>>> Telf.: +34 917 680 490
>>> Fax: +34 913 833 301
>>> C/ Manuel Tovar, 49-53 - 28034 Madrid - Spain
>>>
>>> _http://www.bidoop.es_
>>>
>>>
>>>
>>
>
> --
> *Fernando Agudo Tarancón*
> /Big Data Software Engineer/
>
> Telf.: +34 917 680 490
> Fax: +34 913 833 301
> C/ Manuel Tovar, 49-53 - 28034 Madrid - Spain
>
> _http://www.bidoop.es_
>
>


-- 
Nitin Pawar
