Hi Alan,
 
I'm using Hive version 1.1.0. The metastore database is PostgreSQL.

I've reduced the rate and now I get the following error when compacting 
manually:

ERROR org.apache.hadoop.hive.ql.txn.compactor.CompactorMR: [hvi1x0194-29]: No 
delta files found to compact in 
hdfs://hvi1x0220:8020/user/hive/warehouse/biguardian_prod.db/events/day=2016-11-30

And then:

ERROR org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler: [Thread-11]: 
Expected to remove at least one row from completed_txn_components when marking 
compaction entry as clean!

On the other hand, if I execute SHOW TRANSACTIONS there are tons of them... 
Maybe we are not closing them appropriately... How can I kill them?
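
For context, this is roughly the pattern we follow from Flink when writing 
batches (a simplified sketch, not our real sink; the metastore URI, columns 
and partition value below are placeholders):

// Sketch of the hive-hcatalog-streaming usage we believe is needed so that
// every transaction batch is committed and closed. Metastore URI, column
// names and partition value are placeholders, not our real configuration.
import org.apache.hive.hcatalog.streaming.DelimitedInputWriter;
import org.apache.hive.hcatalog.streaming.HiveEndPoint;
import org.apache.hive.hcatalog.streaming.StreamingConnection;
import org.apache.hive.hcatalog.streaming.TransactionBatch;

import java.util.Arrays;
import java.util.List;

public class StreamingCloseSketch {
  public static void main(String[] args) throws Exception {
    List<String> partition = Arrays.asList("2016-11-30");   // placeholder partition
    HiveEndPoint endPoint = new HiveEndPoint(
        "thrift://metastore-host:9083",                      // placeholder metastore URI
        "biguardian_prod", "events", partition);

    StreamingConnection conn = endPoint.newConnection(true);
    DelimitedInputWriter writer =
        new DelimitedInputWriter(new String[]{"col1", "col2"}, ",", endPoint);

    TransactionBatch batch = conn.fetchTransactionBatch(10, writer);
    try {
      while (batch.remainingTransactions() > 0) {
        batch.beginNextTransaction();
        batch.write("a,b".getBytes());
        batch.commit();            // or batch.abort() if the write fails
      }
    } finally {
      batch.close();               // otherwise the open txns linger until they time out
      conn.close();
    }
  }
}

Is it possible that batches left uncommitted/unclosed like this are what keeps 
piling up in SHOW TRANSACTIONS?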

Many thanks,

Diego




-----Original Message-----
From: Alan Gates [mailto:alanfga...@gmail.com] 
Sent: Tuesday, November 29, 2016 19:03
To: user@hive.apache.org
Subject: Re: Problems with Hive Streaming. Compactions not working. Out of 
memory errors.

I’m guessing that this is an issue in the metastore database where it is unable 
to read from the transaction tables due to the ingestion rate.  What version of 
Hive are you using?  What database are you storing the metadata in?

Alan.

> On Nov 29, 2016, at 00:05, Diego Fustes Villadóniga <dfus...@oesia.com> wrote:
> 
> Hi all,
>  
> We are trying to use Hive streaming to ingest data in real time from Flink. 
> We send batches of data every 5 seconds to Hive. We are working with version 
> 1.1.0-cdh5.8.2.
>  
> The ingestion works fine. However, compactions are not working; the log shows 
> this error:
>  
> Unable to select next element for compaction, ERROR: could not 
> serialize access due to concurrent update
>  
> In addition, when we run simple queries like SELECT COUNT(1) FROM 
> events, we are getting OutOfMemory errors, even though we have assigned 10GB 
> to each Mapper/Reducer. Looking at the logs, each map task tries to load all 
> of the delta files until it fails, which does not make much sense to me.
>  
>  
> I think that we have followed all the steps described in the documentation, 
> so we are blocked at this point.
>  
> Could you help us?
