[ https://issues.apache.org/jira/browse/HIVE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767205#comment-13767205 ]
Brock Noland commented on HIVE-5218:
------------------------------------

Hi,

I don't want to block SQL Server support. However, SQL Server is just as supported as DB2, Informix, Sybase, SQLite, Teradata, Netezza, etc. It might work or it might not. There is no official support for any database; all "support" is [de facto|https://issues.apache.org/jira/browse/HIVE-1391?focusedCommentId=12975068&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12975068]. Given that there are no SQL scripts for SQL Server shipped with Hive, we cannot even claim de facto support for SQL Server.

I'd be more than happy to review SQL Server scripts, and to review any changes required to upgrade to a version of DN in which SQL Server support is fixed, but reverting a change that has [widespread|https://issues.apache.org/jira/browse/HIVE-3632?focusedCommentId=13708673&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13708673] community support is not the way towards SQL Server support.

Until DN is able to provide a fix or workaround that we can incorporate in Hive, I suggest SQL Server users apply your patch on top of the Hive 0.12 release (a sketch of that workflow follows the quoted issue below). Minor customization of Apache software is extremely common, as all Apache releases are [source code|http://www.apache.org/dev/release.html], with the binaries provided only as a convenience to users.

Brock

> datanucleus does not work with SQLServer in Hive metastore
> ----------------------------------------------------------
>
>                 Key: HIVE-5218
>                 URL: https://issues.apache.org/jira/browse/HIVE-5218
>             Project: Hive
>          Issue Type: Bug
>          Components: Metastore
>    Affects Versions: 0.12.0
>            Reporter: shanyu zhao
>         Attachments: 0001-HIVE-5218-datanucleus-does-not-work-with-SQLServer-i.patch, HIVE-5218.patch
>
>
> HIVE-3632 upgraded the datanucleus version to 3.2.x; however, this version of datanucleus does not work with SQL Server as the metastore. The problem is that datanucleus tries to use the fully qualified object name to find a table in the database but cannot find it.
> If I downgrade the version to the one from HIVE-2084, SQL Server works fine.
> It could be a bug in datanucleus.
> This is the detailed exception I'm getting when using datanucleus 3.2.x with SQL Server:
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
> MetaException(message:javax.jdo.JDOException: Exception thrown calling table.exists() for a2ee36af45e9f46c19e995bfd2d9b5fd1hivemetastore..SEQUENCE_TABLE
>     at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:596)
>     at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:732)
>     …
>     at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
>     at $Proxy0.createTable(Unknown Source)
>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1071)
>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1104)
>     …
>     at $Proxy11.create_table_with_environment_context(Unknown Source)
>     at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:6417)
>     at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:6401)
> NestedThrowablesStackTrace:
> com.microsoft.sqlserver.jdbc.SQLServerException: There is already an object named 'SEQUENCE_TABLE' in the database.
>     at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:197)
>     at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1493)
>     at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:775)
>     at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:676)
>     at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:4615)
>     at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1400)
>     at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:179)
>     at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:154)
>     at com.microsoft.sqlserver.jdbc.SQLServerStatement.execute(SQLServerStatement.java:649)
>     at com.jolbox.bonecp.StatementHandle.execute(StatementHandle.java:300)
>     at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(AbstractTable.java:760)
>     at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatementList(AbstractTable.java:711)
>     at org.datanucleus.store.rdbms.table.AbstractTable.create(AbstractTable.java:425)
>     at org.datanucleus.store.rdbms.table.AbstractTable.exists(AbstractTable.java:488)
>     at org.datanucleus.store.rdbms.valuegenerator.TableGenerator.repositoryExists(TableGenerator.java:242)
>     at org.datanucleus.store.rdbms.valuegenerator.AbstractRDBMSGenerator.obtainGenerationBlock(AbstractRDBMSGenerator.java:86)
>     at org.datanucleus.store.valuegenerator.AbstractGenerator.obtainGenerationBlock(AbstractGenerator.java:197)
>     at org.datanucleus.store.valuegenerator.AbstractGenerator.next(AbstractGenerator.java:105)
>     at org.datanucleus.store.rdbms.RDBMSStoreManager.getStrategyValueForGenerator(RDBMSStoreManager.java:2019)
>     at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:1385)
>     at org.datanucleus.ExecutionContextImpl.newObjectId(ExecutionContextImpl.java:3727)
>     at org.datanucleus.state.JDOStateManager.setIdentity(JDOStateManager.java:2574)
>     at org.datanucleus.state.JDOStateManager.initialiseForPersistentNew(JDOStateManager.java:526)
>     at org.datanucleus.state.ObjectProviderFactoryImpl.newForPersistentNew(ObjectProviderFactoryImpl.java:202)
>     at org.datanucleus.ExecutionContextImpl.newObjectProviderForPersistentNew(ExecutionContextImpl.java:1326)
>     at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2123)
>     at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:1972)
>     at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1820)
>     at org.datanucleus.ExecutionContextThreadedImpl.persistObject(ExecutionContextThreadedImpl.java:217)
>     at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:727)
>     at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752)
>     at org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:646)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:601)
>     at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
>     at $Proxy0.createTable(Unknown Source)
>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1071)
>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1104)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:601)
>     at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:103)
>     at $Proxy11.create_table_with_environment_context(Unknown Source)
>     at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:6417)
>     at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:6401)
>     at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>     at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>     at org.apache.hadoop.hive.metastore.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:48)
>     at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>     at java.lang.Thread.run(Thread.java:722)
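To make the "apply your patch on top of the Hive 0.12 release" suggestion concrete, here is a minimal sketch of that workflow. The file names, the patch strip level, and the Ant-based build that 0.12-era releases used are assumptions; adjust them to the actual source tarball and to how the attached diff was generated.

{code}
# Hypothetical sketch: rebuild Hive 0.12 from source with the attached patch applied.
tar xzf hive-0.12.0.tar.gz
cd hive-0.12.0

# Apply the patch attached to this issue (use -p0 instead of -p1 if the diff
# was generated without a/ b/ path prefixes).
patch -p1 < ../0001-HIVE-5218-datanucleus-does-not-work-with-SQLServer-i.patch

# Hive releases of this era built with Ant; this produces a patched distribution
# (typically under build/dist) that can replace the convenience binary.
ant clean package
{code}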
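Separately, for context on the reported setup: pointing the metastore at SQL Server goes through the standard JDO connection properties in hive-site.xml. The fragment below is only an illustration of such a configuration; the host, port, database name, and credentials are hypothetical placeholders, and nothing here works around the DataNucleus 3.2.x problem described in this issue.

{code:xml}
<!-- Illustrative hive-site.xml fragment for a SQL Server-backed metastore.
     Host, port, database name, and credentials are placeholders. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:sqlserver://metastore-host:1433;databaseName=hivemetastore</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.microsoft.sqlserver.jdbc.SQLServerDriver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepassword</value>
</property>
{code}

The Microsoft SQL Server JDBC driver jar also has to be placed on the metastore's classpath (for example under lib/), since Hive does not ship it.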