Please forgive the cross post, but I could really use some help. I have Hive set up with a remote metastore backed by H2, and I am able to create tables, load data, and query them without issue. However, after I restart the remote metastore I can no longer query previously created tables: 'show tables' still lists them, but a simple select like 'select * from test_table limit 5' fails with "FAILED: Error in semantic analysis: Unable to fetch table test_table".
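In case it helps, here is roughly the sequence that reproduces the problem for me (the table definition and data path below are simplified stand-ins; any simple table behaves the same way):

-- from the Hive CLI, while the remote metastore is up
CREATE TABLE test_table (id INT, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
LOAD DATA INPATH '/path/to/data.csv' INTO TABLE test_table;
SELECT * FROM test_table LIMIT 5;   -- returns rows as expected

-- restart the remote metastore process, then reconnect from the Hive CLI
SHOW TABLES;                        -- still lists test_table
SELECT * FROM test_table LIMIT 5;   -- now fails with the semantic analysis error above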
When I look at the logs on the metastore, I see the following exception repeating until the retry limit is exceeded:

13/06/03 19:02:06 INFO HiveMetaStore.audit: ugi=rtws ip=unknown-ip-addr cmd=get_table : db=default tbl=test_table
13/06/03 19:02:06 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MSerDeInfo
13/06/03 19:02:06 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MStorageDescriptor
13/06/03 19:02:06 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MTable
13/06/03 19:02:06 INFO DataNucleus.JDO: Exception thrown
Illegal null value in column SDS.IS_COMPRESSED
org.datanucleus.exceptions.NucleusDataStoreException: Illegal null value in column SDS.IS_COMPRESSED
    at org.datanucleus.store.rdbms.mapping.CharRDBMSMapping.getBoolean(CharRDBMSMapping.java:374)
    at org.datanucleus.store.mapped.mapping.SingleFieldMapping.getBoolean(SingleFieldMapping.java:122)
    at org.datanucleus.store.rdbms.fieldmanager.ResultSetGetter.fetchBooleanField(ResultSetGetter.java:64)
    at org.datanucleus.state.AbstractStateManager.replacingBooleanField(AbstractStateManager.java:1038)
    at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoReplaceField(MStorageDescriptor.java)
    at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoReplaceFields(MStorageDescriptor.java)
    at org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2860)
    at org.datanucleus.store.rdbms.query.PersistentClassROF$2.fetchFields(PersistentClassROF.java:487)
    at org.datanucleus.jdo.state.JDOStateManagerImpl.loadFieldValues(JDOStateManagerImpl.java:858)
    at org.datanucleus.jdo.state.JDOStateManagerImpl.initialiseForHollow(JDOStateManagerImpl.java:258)
    at org.datanucleus.state.StateManagerFactory.newStateManagerForHollowPopulated(StateManagerFactory.java:87)
    at org.datanucleus.ObjectManagerImpl.findObject(ObjectManagerImpl.java:2389)
    at org.datanucleus.store.rdbms.query.PersistentClassROF.getObjectForDatastoreId(PersistentClassROF.java:481)
    at org.datanucleus.store.rdbms.query.PersistentClassROF.getObject(PersistentClassROF.java:366)
    at org.datanucleus.store.rdbms.fieldmanager.ResultSetGetter.fetchObjectField(ResultSetGetter.java:144)
    at org.datanucleus.state.AbstractStateManager.replacingObjectField(AbstractStateManager.java:1183)
    at org.apache.hadoop.hive.metastore.model.MTable.jdoReplaceField(MTable.java)
    at org.apache.hadoop.hive.metastore.model.MTable.jdoReplaceFields(MTable.java)
    at org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2860)
    at org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2879)
    at org.datanucleus.store.rdbms.request.FetchRequest.execute(FetchRequest.java:335)
    at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.fetchObject(RDBMSPersistenceHandler.java:240)
    at org.datanucleus.jdo.state.JDOStateManagerImpl.loadFieldsFromDatastore(JDOStateManagerImpl.java:1929)
    at org.datanucleus.jdo.state.JDOStateManagerImpl.loadUnloadedFields(JDOStateManagerImpl.java:1597)
    at org.datanucleus.jdo.state.Hollow.transitionRetrieve(Hollow.java:168)
    at org.datanucleus.state.AbstractStateManager.retrieve(AbstractStateManager.java:470)
    at org.datanucleus.ObjectManagerImpl.retrieveObject(ObjectManagerImpl.java:1131)
    at org.datanucleus.jdo.JDOPersistenceManager.jdoRetrieve(JDOPersistenceManager.java:534)
    at org.datanucleus.jdo.JDOPersistenceManager.retrieve(JDOPersistenceManager.java:551)
    at org.datanucleus.jdo.JDOPersistenceManager.retrieve(JDOPersistenceManager.java:560)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:776)
    at org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:709)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$17.run(HiveMetaStore.java:1076)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$17.run(HiveMetaStore.java:1073)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:307)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1073)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table.process(ThriftHiveMetastore.java:5457)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor.process(ThriftHiveMetastore.java:4789)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:253)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:679)

What is odd is that when I query the SDS table directly (the exact queries I ran are at the end of this message), the row for the table does not contain a null value in IS_COMPRESSED:

SD_ID,INPUT_FORMAT,IS_COMPRESSED,LOCATION,NUM_BUCKETS,OUTPUT_FORMAT,SERDE_ID
1,org.apache.hadoop.mapred.TextInputFormat,false,hdfs://namenode/tmp/hivedata/stuff,-1,org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat,1

So I'm guessing it has something to do with the metastore initialization code, but I'm not able to figure it out. Here is the hive-site.xml section related to the metastore:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:h2:tcp://metastore:8161/metastrdb;SCHEMA_SEARCH_PATH=METASTORE</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.h2.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>changeme</value>
</property>
<property>
  <name>datanucleus.autoCreateSchema</name>
  <value>false</value>
</property>
<property>
  <name>datanucleus.fixedDatastore</name>
  <value>true</value>
</property>

Attached is the H2 schema (hive-schema-0.7.0.h2.sql) I used to populate the metastore; I translated it from the MySQL version without changing any table or column names. I am using Hive 0.7.1 from the CDH3u4 release. Any help will be greatly appreciated.

Thanks
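P.S. For reference, this is roughly what I ran against the H2 server to produce the SDS row shown above, plus a direct check for NULLs in the column the exception complains about; the METASTORE schema qualifier comes from the SCHEMA_SEARCH_PATH in my connection URL:

-- dump the storage descriptor row for the table
SELECT SD_ID, INPUT_FORMAT, IS_COMPRESSED, LOCATION, NUM_BUCKETS, OUTPUT_FORMAT, SERDE_ID
FROM METASTORE.SDS;

-- look explicitly for NULLs in IS_COMPRESSED; this returns no rows
SELECT SD_ID, IS_COMPRESSED
FROM METASTORE.SDS
WHERE IS_COMPRESSED IS NULL;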