Well, there lies your answer.  So what you might consider doing is
altering that table column and changing it to TINYINT.  That might be just
enough to trick it, since I think the Java code is expecting either a 0 or 1,
not 'false'.

Might be worth a try.
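
Untested, but if the H2 column is currently holding the string 'false', the
change might look something like this (H2 ALTER syntax; treat it as a sketch
and back up the metastore db first):

{code}
-- normalize any existing string values first (assumes 'false'/'true' is what's stored)
UPDATE SDS SET IS_COMPRESSED = '0' WHERE IS_COMPRESSED = 'false';
UPDATE SDS SET IS_COMPRESSED = '1' WHERE IS_COMPRESSED = 'true';
-- then retype the column so the JDO layer reads back 0/1
ALTER TABLE SDS ALTER COLUMN IS_COMPRESSED TINYINT NOT NULL;
{code}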


On Mon, Jun 3, 2013 at 5:47 PM, Jamal B <jm151...@gmail.com> wrote:

> It treats it as a boolean.
>
> http://www.h2database.com/html/datatypes.html#boolean_type
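>
> A quick way to see the behavior (hypothetical scratch table, just to
> illustrate; per that page, BIT is an alias for BOOLEAN in H2):
>
> {code}
> CREATE TABLE bit_test(b BIT NOT NULL);
> INSERT INTO bit_test VALUES (1);
> SELECT b FROM bit_test;
> -- H2 hands this back as TRUE, not 0/1
> {code}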
>
>
> On Mon, Jun 3, 2013 at 8:16 PM, Stephen Sprague <sprag...@gmail.com> wrote:
>
>> It does.  So it's a bit datatype; that's a tad non-standard, I'd say.
>> What does your backend store (H2) do with that?
>>
>> {code}
>> | SDS   | CREATE TABLE `SDS` (
>>   `SD_ID` bigint(20) NOT NULL,
>>   `INPUT_FORMAT` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
>>   `IS_COMPRESSED` bit(1) NOT NULL,
>>   `LOCATION` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
>>   `NUM_BUCKETS` int(11) NOT NULL,
>>   `OUTPUT_FORMAT` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
>>   `SERDE_ID` bigint(20) DEFAULT NULL,
>>   PRIMARY KEY (`SD_ID`),
>>   KEY `SDS_N49` (`SERDE_ID`),
>>   CONSTRAINT `SDS_FK1` FOREIGN KEY (`SERDE_ID`) REFERENCES `SERDES` (`SERDE_ID`)
>> ) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
>> {code}
>>
>>
>> On Mon, Jun 3, 2013 at 4:57 PM, Jamal B <jm151...@gmail.com> wrote:
>>
>>> I gave that a shot, but it didn't work.  Could you run a describe on the
>>> table to see if it matches this schema (assuming MySQL):
>>> --
>>> -- Table structure for table `SDS`
>>> --
>>>
>>> /*!40101 SET @saved_cs_client     = @@character_set_client */;
>>> /*!40101 SET character_set_client = utf8 */;
>>> CREATE TABLE IF NOT EXISTS `SDS` (
>>>   `SD_ID` bigint(20) NOT NULL,
>>>   `INPUT_FORMAT` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
>>>   `IS_COMPRESSED` bit(1) NOT NULL,
>>>   `LOCATION` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
>>>   `NUM_BUCKETS` int(11) NOT NULL,
>>>   `OUTPUT_FORMAT` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
>>>   `SERDE_ID` bigint(20) DEFAULT NULL,
>>>   PRIMARY KEY (`SD_ID`),
>>>   KEY `SDS_N49` (`SERDE_ID`),
>>>   CONSTRAINT `SDS_FK1` FOREIGN KEY (`SERDE_ID`) REFERENCES `SERDES` (`SERDE_ID`)
>>> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
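>>>
>>> On my H2 side, something like this should dump what actually got created
>>> so we can compare (both commands per the H2 docs; untested here):
>>>
>>> {code}
>>> SHOW COLUMNS FROM SDS;
>>> -- or dump the full DDL:
>>> SCRIPT NODATA;
>>> {code}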
>>>
>>>
>>>
>>> On Mon, Jun 3, 2013 at 4:20 PM, Stephen Sprague <sprag...@gmail.com> wrote:
>>>
>>>> Not that this is any consolation, but on my MySQL instance, same CDH
>>>> release, I have the following for IS_COMPRESSED where you have 'false':
>>>>
>>>> {code}
>>>> {metastore@etl1 root 13:13}>select SD_ID, is_compressed from SDS limit 10;
>>>> +-------+---------------+
>>>> | SD_ID | is_compressed |
>>>> +-------+---------------+
>>>> |     1 |               |
>>>> |     2 |               |
>>>> |     6 |               |
>>>> |    11 |               |
>>>> |    26 |               |
>>>> |    31 |               |
>>>> |    36 |               |
>>>> |    41 |               |
>>>> |    46 |               |
>>>> |    47 |               |
>>>> +-------+---------------+
>>>> 10 rows in set (0.00 sec)
>>>> {code}
>>>>
>>>> So *maybe* that 'false' value there is something specific to your
>>>> backend store?  Might you experiment, change it to a null string, and
>>>> see if that changes anything?  Something to think about, anyway.
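>>>>
>>>> e.g. something along these lines (untested; take a backup of the
>>>> metastore db first):
>>>>
>>>> {code}
>>>> UPDATE SDS SET IS_COMPRESSED = '' WHERE IS_COMPRESSED = 'false';
>>>> {code}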
>>>>
>>>>
>>>>
>>>> On Mon, Jun 3, 2013 at 12:49 PM, Jamal B <jm151...@gmail.com> wrote:
>>>>
>>>>> Please forgive the cross-post, but I could really use some help.
>>>>>
>>>>> I have Hive set up using a remote metastore, backed by H2, and am able
>>>>> to create tables, load data, and query them without issue.  However,
>>>>> when I restart the remote metastore, I can no longer query previously
>>>>> created tables.  'show tables' shows them, but when I run a simple
>>>>> 'select * from test_table limit 5' I receive a "FAILED: Error in
>>>>> semantic analysis: Unable to fetch table test_table" error.  When I
>>>>> look at the logs in the metastore, I noticed the following exceptions
>>>>> repeat until the retry limit is exceeded:
>>>>>
>>>>> 13/06/03 19:02:06 INFO HiveMetaStore.audit: ugi=rtws ip=unknown-ip-addr cmd=get_table : db=default tbl=test_table
>>>>> 13/06/03 19:02:06 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MSerDeInfo
>>>>> 13/06/03 19:02:06 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MStorageDescriptor
>>>>> 13/06/03 19:02:06 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MTable
>>>>> 13/06/03 19:02:06 INFO DataNucleus.JDO: Exception thrown
>>>>> Illegal null value in column SDS.IS_COMPRESSED
>>>>> org.datanucleus.exceptions.NucleusDataStoreException: Illegal null value in column SDS.IS_COMPRESSED
>>>>>     at org.datanucleus.store.rdbms.mapping.CharRDBMSMapping.getBoolean(CharRDBMSMapping.java:374)
>>>>>     at org.datanucleus.store.mapped.mapping.SingleFieldMapping.getBoolean(SingleFieldMapping.java:122)
>>>>>     at org.datanucleus.store.rdbms.fieldmanager.ResultSetGetter.fetchBooleanField(ResultSetGetter.java:64)
>>>>>     at org.datanucleus.state.AbstractStateManager.replacingBooleanField(AbstractStateManager.java:1038)
>>>>>     at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoReplaceField(MStorageDescriptor.java)
>>>>>     at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoReplaceFields(MStorageDescriptor.java)
>>>>>     at org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2860)
>>>>>     at org.datanucleus.store.rdbms.query.PersistentClassROF$2.fetchFields(PersistentClassROF.java:487)
>>>>>     at org.datanucleus.jdo.state.JDOStateManagerImpl.loadFieldValues(JDOStateManagerImpl.java:858)
>>>>>     at org.datanucleus.jdo.state.JDOStateManagerImpl.initialiseForHollow(JDOStateManagerImpl.java:258)
>>>>>     at org.datanucleus.state.StateManagerFactory.newStateManagerForHollowPopulated(StateManagerFactory.java:87)
>>>>>     at org.datanucleus.ObjectManagerImpl.findObject(ObjectManagerImpl.java:2389)
>>>>>     at org.datanucleus.store.rdbms.query.PersistentClassROF.getObjectForDatastoreId(PersistentClassROF.java:481)
>>>>>     at org.datanucleus.store.rdbms.query.PersistentClassROF.getObject(PersistentClassROF.java:366)
>>>>>     at org.datanucleus.store.rdbms.fieldmanager.ResultSetGetter.fetchObjectField(ResultSetGetter.java:144)
>>>>>     at org.datanucleus.state.AbstractStateManager.replacingObjectField(AbstractStateManager.java:1183)
>>>>>     at org.apache.hadoop.hive.metastore.model.MTable.jdoReplaceField(MTable.java)
>>>>>     at org.apache.hadoop.hive.metastore.model.MTable.jdoReplaceFields(MTable.java)
>>>>>     at org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2860)
>>>>>     at org.datanucleus.jdo.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2879)
>>>>>     at org.datanucleus.store.rdbms.request.FetchRequest.execute(FetchRequest.java:335)
>>>>>     at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.fetchObject(RDBMSPersistenceHandler.java:240)
>>>>>     at org.datanucleus.jdo.state.JDOStateManagerImpl.loadFieldsFromDatastore(JDOStateManagerImpl.java:1929)
>>>>>     at org.datanucleus.jdo.state.JDOStateManagerImpl.loadUnloadedFields(JDOStateManagerImpl.java:1597)
>>>>>     at org.datanucleus.jdo.state.Hollow.transitionRetrieve(Hollow.java:168)
>>>>>     at org.datanucleus.state.AbstractStateManager.retrieve(AbstractStateManager.java:470)
>>>>>     at org.datanucleus.ObjectManagerImpl.retrieveObject(ObjectManagerImpl.java:1131)
>>>>>     at org.datanucleus.jdo.JDOPersistenceManager.jdoRetrieve(JDOPersistenceManager.java:534)
>>>>>     at org.datanucleus.jdo.JDOPersistenceManager.retrieve(JDOPersistenceManager.java:551)
>>>>>     at org.datanucleus.jdo.JDOPersistenceManager.retrieve(JDOPersistenceManager.java:560)
>>>>>     at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:776)
>>>>>     at org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:709)
>>>>>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$17.run(HiveMetaStore.java:1076)
>>>>>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$17.run(HiveMetaStore.java:1073)
>>>>>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:307)
>>>>>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1073)
>>>>>     at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table.process(ThriftHiveMetastore.java:5457)
>>>>>     at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor.process(ThriftHiveMetastore.java:4789)
>>>>>     at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:253)
>>>>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>>>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>>     at java.lang.Thread.run(Thread.java:679)
>>>>>
>>>>> What is odd is that when I look at the SDS table, the row for the
>>>>> table does not contain a null value:
>>>>>
>>>>> SD_ID,INPUT_FORMAT,IS_COMPRESSED,LOCATION,NUM_BUCKETS,OUTPUT_FORMAT,SERDE_ID
>>>>>
>>>>> 1,org.apache.hadoop.mapred.TextInputFormat,false,hdfs://namenode/tmp/hivedata/stuff,-1,org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat,1
>>>>>
>>>>> So I'm guessing it has something to do with the metastore
>>>>> initialization code, but I'm not able to figure it out.  Here is the
>>>>> hive-site.xml config section related to the metastore:
>>>>> <property>
>>>>>   <name>javax.jdo.option.ConnectionURL</name>
>>>>>   <value>jdbc:h2:tcp://metastore:8161/metastrdb;SCHEMA_SEARCH_PATH=METASTORE</value>
>>>>>   <description>JDBC connect string for a JDBC metastore</description>
>>>>> </property>
>>>>>
>>>>> <property>
>>>>>   <name>javax.jdo.option.ConnectionDriverName</name>
>>>>>   <value>org.h2.Driver</value>
>>>>>   <description>Driver class name for a JDBC metastore</description>
>>>>> </property>
>>>>>
>>>>> <property>
>>>>>   <name>javax.jdo.option.ConnectionUserName</name>
>>>>>   <value>hiveuser</value>
>>>>> </property>
>>>>>
>>>>> <property>
>>>>>   <name>javax.jdo.option.ConnectionPassword</name>
>>>>>   <value>changeme</value>
>>>>> </property>
>>>>>
>>>>> <property>
>>>>>   <name>datanucleus.autoCreateSchema</name>
>>>>>   <value>false</value>
>>>>> </property>
>>>>>
>>>>> <property>
>>>>>   <name>datanucleus.fixedDatastore</name>
>>>>>   <value>true</value>
>>>>> </property>
>>>>>
>>>>> Attached is the H2 schema used to populate the metastore.  I
>>>>> translated it from the MySQL version without changing any table/column
>>>>> names.
>>>>>
>>>>>
>>>>>
>>>>> I am using Hive 0.7.1 from the CDH3u4 release.  Any help will be
>>>>> greatly appreciated.
>>>>>
>>>>> Thanks
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
