[ https://issues.apache.org/jira/browse/HIVE-24396?focusedWorklogId=573586&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573586 ]

ASF GitHub Bot logged work on HIVE-24396:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 29/Mar/21 16:15
            Start Date: 29/Mar/21 16:15
    Worklog Time Spent: 10m 
      Work Description: nrg4878 commented on a change in pull request #2037:
URL: https://github.com/apache/hive/pull/2037#discussion_r602892256



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/AbstractAlterDataConnectorOperation.java
##########
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.dataconnector.alter;
+
+import java.util.Map;
+
+import org.apache.hadoop.hive.metastore.api.DataConnector;
+import org.apache.hadoop.hive.ql.ErrorMsg;
+import org.apache.hadoop.hive.ql.ddl.DDLOperation;
+import org.apache.hadoop.hive.ql.ddl.DDLOperationContext;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+
+/**
+ * Operation process of altering a data connector.
+ */
+public abstract class AbstractAlterDataConnectorOperation<T extends AbstractAlterDataConnectorDesc> extends DDLOperation<T> {
+  public AbstractAlterDataConnectorOperation(DDLOperationContext context, T desc) {
+    super(context, desc);
+  }
+
+  @Override
+  public int execute() throws HiveException {
+    String dcName = desc.getConnectorName();
+    DataConnector connector = context.getDb().getDataConnector(dcName);
+    if (connector == null) {
+      throw new HiveException(ErrorMsg.DATACONNECTOR_ALREADY_EXISTS, dcName);
+    }
+
+    Map<String, String> params = connector.getParameters();
+    if ((null != desc.getReplicationSpec()) &&
+        !desc.getReplicationSpec().allowEventReplacementInto(params)) {
+      LOG.debug("DDLTask: Alter Connector {} is skipped as connector is newer 
than update", dcName);
+      return 0; // no replacement, the existing connector state is newer than 
our update.
+    }
+
+    doAlteration(connector, params);
+
+    context.getDb().alterDataConnector(connector.getName(), connector);

Review comment:
       That's correct. doAlteration calls the concrete implementation to alter the appropriate value on the DataConnector object, while alterDataConnector is the call that eventually reaches HMS to persist the new metadata. This is a common pattern across all alter* operations. But I will add comments.
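   For reference, the two-step flow in the abstract execute() looks roughly like this (condensed from this PR; the inline comments sketch what I plan to add):

       @Override
       public int execute() throws HiveException {
         DataConnector connector = context.getDb().getDataConnector(desc.getConnectorName());
         Map<String, String> params = connector.getParameters();
         // Step 1: the concrete subclass mutates the in-memory DataConnector only.
         doAlteration(connector, params);
         // Step 2: persist the mutated object via HMS.
         context.getDb().alterDataConnector(connector.getName(), connector);
         return 0;
       }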

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/AbstractAlterDataConnectorOperation.java
##########
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.dataconnector.alter;
+
+import java.util.Map;
+
+import org.apache.hadoop.hive.metastore.api.DataConnector;
+import org.apache.hadoop.hive.ql.ErrorMsg;
+import org.apache.hadoop.hive.ql.ddl.DDLOperation;
+import org.apache.hadoop.hive.ql.ddl.DDLOperationContext;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+
+/**
+ * Operation process of altering a data connector.
+ */
+public abstract class AbstractAlterDataConnectorOperation<T extends AbstractAlterDataConnectorDesc> extends DDLOperation<T> {
+  public AbstractAlterDataConnectorOperation(DDLOperationContext context, T desc) {
+    super(context, desc);
+  }
+
+  @Override
+  public int execute() throws HiveException {
+    String dcName = desc.getConnectorName();
+    DataConnector connector = context.getDb().getDataConnector(dcName);
+    if (connector == null) {
+      throw new HiveException(ErrorMsg.DATACONNECTOR_ALREADY_EXISTS, dcName);
+    }
+
+    Map<String, String> params = connector.getParameters();
+    if ((null != desc.getReplicationSpec()) &&
+        !desc.getReplicationSpec().allowEventReplacementInto(params)) {
+      LOG.debug("DDLTask: Alter Connector {} is skipped as connector is newer 
than update", dcName);
+      return 0; // no replacement, the existing connector state is newer than 
our update.

Review comment:
       We are relying on an exception being thrown when an alter fails. In that case, we wouldn't be returning 0, right?

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/properties/AlterDataConnectorSetPropertiesDesc.java
##########
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.dataconnector.alter.properties;
+
+import java.util.Map;
+
+import org.apache.hadoop.hive.ql.ddl.dataconnector.alter.AbstractAlterDataConnectorDesc;
+import org.apache.hadoop.hive.ql.parse.ReplicationSpec;
+import org.apache.hadoop.hive.ql.plan.Explain;
+import org.apache.hadoop.hive.ql.plan.Explain.Level;
+
+/**
+ * DDL task description for ALTER CONNECTOR ... SET PROPERTIES ... commands.
+ */
+@Explain(displayName = "Set Connector Properties", explainLevels = { 
Level.USER, Level.DEFAULT, Level.EXTENDED })
+public class AlterDataConnectorSetPropertiesDesc extends 
AbstractAlterDataConnectorDesc {
+  private static final long serialVersionUID = 1L;
+
+  private final Map<String, String> dcProperties;
+
+  public AlterDataConnectorSetPropertiesDesc(String connectorName, Map<String, String> dcProperties,
+      ReplicationSpec replicationSpec) {

Review comment:
       I have already filed HIVE-24941 for the replication stuff for connectors. We can follow up on this as part of that jira.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/url/AlterDataConnectorSetUrlAnalyzer.java
##########
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.dataconnector.alter.url;
+
+import org.apache.hadoop.hive.ql.QueryState;
+import org.apache.hadoop.hive.ql.ddl.DDLSemanticAnalyzerFactory.DDLType;
+import org.apache.hadoop.hive.ql.ddl.dataconnector.alter.AbstractAlterDataConnectorAnalyzer;
+import org.apache.hadoop.hive.ql.parse.ASTNode;
+import org.apache.hadoop.hive.ql.parse.HiveParser;
+import org.apache.hadoop.hive.ql.parse.SemanticException;
+
+/**
+ * Analyzer for connector set url commands.
+ */
+@DDLType(types = HiveParser.TOK_ALTERDATACONNECTOR_URL)
+public class AlterDataConnectorSetUrlAnalyzer extends AbstractAlterDataConnectorAnalyzer {
+  public AlterDataConnectorSetUrlAnalyzer(QueryState queryState) throws SemanticException {
+    super(queryState);
+  }
+
+  @Override
+  public void analyzeInternal(ASTNode root) throws SemanticException {
+    String connectorName = getUnescapedName((ASTNode) root.getChild(0));

Review comment:
       connectorName cannot be null; it's required to be provided.
   Also, in the SetUrlOperation we already check for a blank or unchanged URL value:
         if (StringUtils.isBlank(newUrl) || newUrl.equals(connector.getUrl())) {
           throw new HiveException("New URL for data connector cannot be blank or the same as the current one");
         }
   
   So we are covered.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/properties/AlterDataConnectorSetPropertiesOperation.java
##########
@@ -0,0 +1,49 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.dataconnector.alter.properties;
+
+import java.util.Map;
+
+import org.apache.hadoop.hive.metastore.api.DataConnector;
+import org.apache.hadoop.hive.ql.ddl.DDLOperationContext;
+import org.apache.hadoop.hive.ql.ddl.dataconnector.alter.AbstractAlterDataConnectorOperation;
+
+/**
+ * Operation process of altering a dataconnector's properties.
+ */
+public class AlterDataConnectorSetPropertiesOperation
+    extends AbstractAlterDataConnectorOperation<AlterDataConnectorSetPropertiesDesc> {
+  public AlterDataConnectorSetPropertiesOperation(DDLOperationContext context, AlterDataConnectorSetPropertiesDesc desc) {
+    super(context, desc);
+  }
+
+  @Override
+  protected void doAlteration(DataConnector connector, Map<String, String> params) {
+    Map<String, String> newParams = desc.getConnectorProperties();
+
+    // if both old and new params are not null, merge them
+    if (params != null && newParams != null) {
+      params.putAll(newParams);
+      connector.setParameters(params);
+    } else {
+      // if one of them is null, replace the old params with the new one
+      connector.setParameters(newParams);

Review comment:
       The comment is
   // if one of them is null, replace the old params with the new one
   
   So if the user executes an alter connector set dcproperties with null, then we assume the properties are to be deleted on the existing connector (same for databases as well). This might be a means for users to reset all current properties.
   I am not sure I understand where the NPE would be thrown from if params == null. Could you please elaborate?
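   To spell out my reading of the branch (a sketch of the code above; the inline comments are mine):

       if (params != null && newParams != null) {
         params.putAll(newParams);           // both non-null: merge new into old
         connector.setParameters(params);
       } else {
         // params == null (or newParams == null) lands here, so putAll is never
         // invoked on a null map; a null newParams effectively clears the properties.
         connector.setParameters(newParams);
       }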

##########
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/AbstractJDBCConnectorProvider.java
##########
@@ -0,0 +1,311 @@
+package org.apache.hadoop.hive.metastore.dataconnector.jdbc;
+
+import org.apache.hadoop.hive.metastore.ColumnType;
+import org.apache.hadoop.hive.metastore.Warehouse;
+import org.apache.hadoop.hive.metastore.api.DataConnector;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.MetaException;
+import org.apache.hadoop.hive.metastore.api.Table;
+import org.apache.hadoop.hive.metastore.api.hive_metastoreConstants;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.apache.hadoop.hive.metastore.dataconnector.AbstractDataConnectorProvider;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.net.ConnectException;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public abstract class AbstractJDBCConnectorProvider extends AbstractDataConnectorProvider {
+  private static Logger LOG = LoggerFactory.getLogger(AbstractJDBCConnectorProvider.class);
+  protected static Warehouse warehouse = null;
+
+  // duplicate constants from Constants.java to avoid a dependency on hive-common
+  public static final String JDBC_HIVE_STORAGE_HANDLER_ID =
+      "org.apache.hive.storage.jdbc.JdbcStorageHandler";
+  public static final String JDBC_CONFIG_PREFIX = "hive.sql";
+  public static final String JDBC_CATALOG = JDBC_CONFIG_PREFIX + ".catalog";
+  public static final String JDBC_SCHEMA = JDBC_CONFIG_PREFIX + ".schema";
+  public static final String JDBC_TABLE = JDBC_CONFIG_PREFIX + ".table";
+  public static final String JDBC_DATABASE_TYPE = JDBC_CONFIG_PREFIX + ".database.type";
+  public static final String JDBC_URL = JDBC_CONFIG_PREFIX + ".jdbc.url";
+  public static final String JDBC_DRIVER = JDBC_CONFIG_PREFIX + ".jdbc.driver";
+  public static final String JDBC_USERNAME = JDBC_CONFIG_PREFIX + ".dbcp.username";
+  public static final String JDBC_PASSWORD = JDBC_CONFIG_PREFIX + ".dbcp.password";
+  public static final String JDBC_KEYSTORE = JDBC_CONFIG_PREFIX + ".dbcp.password.keystore";
+  public static final String JDBC_KEY = JDBC_CONFIG_PREFIX + ".dbcp.password.key";
+  public static final String JDBC_QUERY = JDBC_CONFIG_PREFIX + ".query";
+  public static final String JDBC_QUERY_FIELD_NAMES = JDBC_CONFIG_PREFIX + ".query.fieldNames";
+  public static final String JDBC_QUERY_FIELD_TYPES = JDBC_CONFIG_PREFIX + ".query.fieldTypes";
+  public static final String JDBC_SPLIT_QUERY = JDBC_CONFIG_PREFIX + ".query.split";
+  public static final String JDBC_PARTITION_COLUMN = JDBC_CONFIG_PREFIX + ".partitionColumn";
+  public static final String JDBC_NUM_PARTITIONS = JDBC_CONFIG_PREFIX + ".numPartitions";
+  public static final String JDBC_LOW_BOUND = JDBC_CONFIG_PREFIX + ".lowerBound";
+  public static final String JDBC_UPPER_BOUND = JDBC_CONFIG_PREFIX + ".upperBound";
+
+  private static final String JDBC_INPUTFORMAT_CLASS = "org.apache.hive.storage.jdbc.JdbcInputFormat".intern();
+  private static final String JDBC_OUTPUTFORMAT_CLASS = "org.apache.hive.storage.jdbc.JdbcOutputFormat".intern();
+
+  String type = null; // MYSQL, POSTGRES, ORACLE, DERBY, MSSQL, DB2 etc.
+  String driverClassName = null;
+  String jdbcUrl = null;
+  String username = null;
+  String password = null; // TODO convert to byte array
+
+  public AbstractJDBCConnectorProvider(String dbName, DataConnector dataConn) {
+    super(dbName, dataConn);
+    this.type = connector.getType().toUpperCase(); // TODO
+    this.jdbcUrl = connector.getUrl();
+    this.username = connector.getParameters().get(JDBC_USERNAME);
+    this.password = connector.getParameters().get(JDBC_PASSWORD);
+    if (this.password == null) {
+      String keystore = connector.getParameters().get(JDBC_KEYSTORE);
+      String key = connector.getParameters().get(JDBC_KEY);
+      try {
+        char[] keyValue = MetastoreConf.getValueFromKeystore(keystore, key);
+        if (keyValue != null)
+          this.password = new String(keyValue);
+      } catch (IOException i) {
+        LOG.warn("Could not read key value from keystore");
+      }
+    }
+
+    try {
+      warehouse = new Warehouse(MetastoreConf.newMetastoreConf());
+    } catch (MetaException e) { /* ignore */ }
+  }
+
+  @Override public void open() throws ConnectException {
+    try {
+      Class.forName(driverClassName);
+      handle = DriverManager.getConnection(jdbcUrl, username, password);
+      isOpen = true;
+    } catch (ClassNotFoundException cnfe) {
+      LOG.warn("Driver class not found in classpath:" + driverClassName);
+      throw new RuntimeException("Driver class not found:" + driverClassName);

Review comment:
       fixed.

##########
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/AbstractJDBCConnectorProvider.java
##########
@@ -0,0 +1,311 @@
+package org.apache.hadoop.hive.metastore.dataconnector.jdbc;
+
+import org.apache.hadoop.hive.metastore.ColumnType;
+import org.apache.hadoop.hive.metastore.Warehouse;
+import org.apache.hadoop.hive.metastore.api.DataConnector;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.MetaException;
+import org.apache.hadoop.hive.metastore.api.Table;
+import org.apache.hadoop.hive.metastore.api.hive_metastoreConstants;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.apache.hadoop.hive.metastore.dataconnector.AbstractDataConnectorProvider;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.net.ConnectException;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public abstract class AbstractJDBCConnectorProvider extends AbstractDataConnectorProvider {
+  private static Logger LOG = LoggerFactory.getLogger(AbstractJDBCConnectorProvider.class);
+  protected static Warehouse warehouse = null;
+
+  // duplicate constants from Constants.java to avoid a dependency on hive-common
+  public static final String JDBC_HIVE_STORAGE_HANDLER_ID =
+      "org.apache.hive.storage.jdbc.JdbcStorageHandler";
+  public static final String JDBC_CONFIG_PREFIX = "hive.sql";
+  public static final String JDBC_CATALOG = JDBC_CONFIG_PREFIX + ".catalog";
+  public static final String JDBC_SCHEMA = JDBC_CONFIG_PREFIX + ".schema";
+  public static final String JDBC_TABLE = JDBC_CONFIG_PREFIX + ".table";
+  public static final String JDBC_DATABASE_TYPE = JDBC_CONFIG_PREFIX + ".database.type";
+  public static final String JDBC_URL = JDBC_CONFIG_PREFIX + ".jdbc.url";
+  public static final String JDBC_DRIVER = JDBC_CONFIG_PREFIX + ".jdbc.driver";
+  public static final String JDBC_USERNAME = JDBC_CONFIG_PREFIX + ".dbcp.username";
+  public static final String JDBC_PASSWORD = JDBC_CONFIG_PREFIX + ".dbcp.password";
+  public static final String JDBC_KEYSTORE = JDBC_CONFIG_PREFIX + ".dbcp.password.keystore";
+  public static final String JDBC_KEY = JDBC_CONFIG_PREFIX + ".dbcp.password.key";
+  public static final String JDBC_QUERY = JDBC_CONFIG_PREFIX + ".query";
+  public static final String JDBC_QUERY_FIELD_NAMES = JDBC_CONFIG_PREFIX + ".query.fieldNames";
+  public static final String JDBC_QUERY_FIELD_TYPES = JDBC_CONFIG_PREFIX + ".query.fieldTypes";
+  public static final String JDBC_SPLIT_QUERY = JDBC_CONFIG_PREFIX + ".query.split";
+  public static final String JDBC_PARTITION_COLUMN = JDBC_CONFIG_PREFIX + ".partitionColumn";
+  public static final String JDBC_NUM_PARTITIONS = JDBC_CONFIG_PREFIX + ".numPartitions";
+  public static final String JDBC_LOW_BOUND = JDBC_CONFIG_PREFIX + ".lowerBound";
+  public static final String JDBC_UPPER_BOUND = JDBC_CONFIG_PREFIX + ".upperBound";
+
+  private static final String JDBC_INPUTFORMAT_CLASS = "org.apache.hive.storage.jdbc.JdbcInputFormat".intern();
+  private static final String JDBC_OUTPUTFORMAT_CLASS = "org.apache.hive.storage.jdbc.JdbcOutputFormat".intern();
+
+  String type = null; // MYSQL, POSTGRES, ORACLE, DERBY, MSSQL, DB2 etc.
+  String driverClassName = null;
+  String jdbcUrl = null;
+  String username = null;
+  String password = null; // TODO convert to byte array
+
+  public AbstractJDBCConnectorProvider(String dbName, DataConnector dataConn) {
+    super(dbName, dataConn);
+    this.type = connector.getType().toUpperCase(); // TODO
+    this.jdbcUrl = connector.getUrl();
+    this.username = connector.getParameters().get(JDBC_USERNAME);
+    this.password = connector.getParameters().get(JDBC_PASSWORD);
+    if (this.password == null) {
+      String keystore = connector.getParameters().get(JDBC_KEYSTORE);
+      String key = connector.getParameters().get(JDBC_KEY);
+      try {
+        char[] keyValue = MetastoreConf.getValueFromKeystore(keystore, key);
+        if (keyValue != null)
+          this.password = new String(keyValue);
+      } catch (IOException i) {
+        LOG.warn("Could not read key value from keystore");
+      }
+    }
+
+    try {
+      warehouse = new Warehouse(MetastoreConf.newMetastoreConf());
+    } catch (MetaException e) { /* ignore */ }
+  }
+
+  @Override public void open() throws ConnectException {
+    try {
+      Class.forName(driverClassName);
+      handle = DriverManager.getConnection(jdbcUrl, username, password);
+      isOpen = true;
+    } catch (ClassNotFoundException cnfe) {
+      LOG.warn("Driver class not found in classpath:" + driverClassName);
+      throw new RuntimeException("Driver class not found:" + driverClassName);
+    } catch (SQLException sqle) {
+      LOG.warn("Could not connect to remote data source at " + jdbcUrl);
+      throw new ConnectException("Could not connect to remote datasource at " 
+ jdbcUrl + ",cause:" + sqle.getMessage());
+    }
+  }
+
+  protected Connection getConnection() {
+    try {
+      if (!isOpen)
+        open();
+    } catch (ConnectException ce) {
+      throw new RuntimeException(ce.getMessage());
+    }
+
+    if (handle instanceof Connection)
+      return (Connection)handle;
+
+    throw new RuntimeException("unexpected type for connection handle");
+  }
+
+  @Override public void close() {
+    if (isOpen) {
+      try {
+        ((Connection)handle).close();
+      } catch (SQLException sqle) {
+        LOG.warn("Could not close jdbc connection to " + jdbcUrl, sqle);

Review comment:
       Throwing a new RuntimeException with the underlying exception now. FIXED.
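   i.e., something along these lines (illustrative):

       throw new RuntimeException("Could not close jdbc connection to " + jdbcUrl, sqle);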

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/owner/AlterDataConnectorSetOwnerOperation.java
##########
@@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.dataconnector.alter.owner;
+
+import java.util.Map;
+
+import org.apache.hadoop.hive.metastore.api.DataConnector;
+import org.apache.hadoop.hive.ql.ddl.DDLOperationContext;
+import org.apache.hadoop.hive.ql.ddl.dataconnector.alter.AbstractAlterDataConnectorOperation;
+
+/**
+ * Operation process of altering a connector's owner.
+ */
+public class AlterDataConnectorSetOwnerOperation extends
+    AbstractAlterDataConnectorOperation<AlterDataConnectorSetOwnerDesc> {
+  public AlterDataConnectorSetOwnerOperation(DDLOperationContext context, AlterDataConnectorSetOwnerDesc desc) {
+    super(context, desc);
+  }
+
+  @Override
+  protected void doAlteration(DataConnector connector, Map<String, String> params) {
+    connector.setOwnerName(desc.getOwnerPrincipal().getName());

Review comment:
       So we use the same signature across all the alter operations because all operations extend the abstract operation. The new values for all alter operations come from the descriptor; they are NOT in the map being passed in. The map contains the current properties of the existing connector plus any modifications that are to be merged with the new properties in the descriptor. So ideally, we should not even need the params. But in the case of set properties, the old properties might need to be modified based on the replicationSpec. At this point, I do not know whether connectors are to be replicated.
   
   So if these are not to be replicated, it would make sense to remove the Map being passed in.
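   For context, the shared hook is roughly the following (signature inferred from the subclasses; the comment is mine):

       // Subclasses read the new values from 'desc' and mutate 'connector' in memory;
       // 'params' carries the connector's current properties and is only really
       // needed by the set-properties variant.
       protected abstract void doAlteration(DataConnector connector, Map<String, String> params);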

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/properties/AlterDataConnectorSetPropertiesDesc.java
##########
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.dataconnector.alter.properties;
+
+import java.util.Map;
+
+import org.apache.hadoop.hive.ql.ddl.dataconnector.alter.AbstractAlterDataConnectorDesc;
+import org.apache.hadoop.hive.ql.parse.ReplicationSpec;
+import org.apache.hadoop.hive.ql.plan.Explain;
+import org.apache.hadoop.hive.ql.plan.Explain.Level;
+
+/**
+ * DDL task description for ALTER CONNECTOR ... SET PROPERTIES ... commands.
+ */
+@Explain(displayName = "Set Connector Properties", explainLevels = { 
Level.USER, Level.DEFAULT, Level.EXTENDED })
+public class AlterDataConnectorSetPropertiesDesc extends 
AbstractAlterDataConnectorDesc {
+  private static final long serialVersionUID = 1L;
+
+  private final Map<String, String> dcProperties;
+
+  public AlterDataConnectorSetPropertiesDesc(String connectorName, Map<String, String> dcProperties,
+      ReplicationSpec replicationSpec) {

Review comment:
       So the replication stuff has not been implemented for connectors yet. From what I know, only databases that have certain properties set on them are replicated, so not all databases are replicated automatically. Remote databases will therefore not be replicated by default. This is something we will need to follow up on via a separate jira, perhaps.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/AbstractAlterDataConnectorOperation.java
##########
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.dataconnector.alter;
+
+import java.util.Map;
+
+import org.apache.hadoop.hive.metastore.api.DataConnector;
+import org.apache.hadoop.hive.ql.ErrorMsg;
+import org.apache.hadoop.hive.ql.ddl.DDLOperation;
+import org.apache.hadoop.hive.ql.ddl.DDLOperationContext;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+
+/**
+ * Operation process of altering a data connector.
+ */
+public abstract class AbstractAlterDataConnectorOperation<T extends AbstractAlterDataConnectorDesc> extends DDLOperation<T> {
+  public AbstractAlterDataConnectorOperation(DDLOperationContext context, T desc) {
+    super(context, desc);
+  }
+
+  @Override
+  public int execute() throws HiveException {
+    String dcName = desc.getConnectorName();
+    DataConnector connector = context.getDb().getDataConnector(dcName);
+    if (connector == null) {
+      throw new HiveException(ErrorMsg.DATACONNECTOR_ALREADY_EXISTS, dcName);
+    }
+
+    Map<String, String> params = connector.getParameters();
+    if ((null != desc.getReplicationSpec()) &&
+        !desc.getReplicationSpec().allowEventReplacementInto(params)) {
+      LOG.debug("DDLTask: Alter Connector {} is skipped as connector is newer 
than update", dcName);
+      return 0; // no replacement, the existing connector state is newer than 
our update.
+    }
+
+    doAlteration(connector, params);
+
+    context.getDb().alterDataConnector(connector.getName(), connector);
+    return 0;

Review comment:
       If the alter fails, alterDataConnector() would throw an exception, which would be propagated back. We will not be returning 0 in that case. Make sense?

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java
##########
@@ -1719,6 +1720,23 @@ protected Database getDatabase(String dbName, boolean throwException) throws Sem
     return database;
   }
 
+  protected DataConnector getDataConnector(String dbName) throws SemanticException {
+    return getDataConnector(dbName, true);
+  }
+
+  protected DataConnector getDataConnector(String dcName, boolean throwException) throws SemanticException {
+    DataConnector connector;
+    try {
+      connector = db.getDataConnector(dcName);
+    } catch (Exception e) {
+      throw new SemanticException(e.getMessage(), e);

Review comment:
       fixed.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/alter/properties/AlterDataConnectorSetPropertiesAnalyzer.java
##########
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.dataconnector.alter.properties;
+
+import java.util.Map;
+
+import org.apache.hadoop.hive.ql.QueryState;
+import org.apache.hadoop.hive.ql.ddl.DDLSemanticAnalyzerFactory.DDLType;
+import org.apache.hadoop.hive.ql.ddl.dataconnector.alter.AbstractAlterDataConnectorAnalyzer;
+import org.apache.hadoop.hive.ql.parse.ASTNode;
+import org.apache.hadoop.hive.ql.parse.HiveParser;
+import org.apache.hadoop.hive.ql.parse.SemanticException;
+
+/**
+ * Analyzer for connector set properties commands.
+ */
+@DDLType(types = HiveParser.TOK_ALTERDATACONNECTOR_PROPERTIES)
+public class AlterDataConnectorSetPropertiesAnalyzer extends AbstractAlterDataConnectorAnalyzer {
+  public AlterDataConnectorSetPropertiesAnalyzer(QueryState queryState) throws SemanticException {
+    super(queryState);
+  }
+
+  @Override
+  public void analyzeInternal(ASTNode root) throws SemanticException {
+    String connectorName = unescapeIdentifier(root.getChild(0).getText());
+
+    Map<String, String> dbProps = null;
+    for (int i = 1; i < root.getChildCount(); i++) {
+      ASTNode childNode = (ASTNode) root.getChild(i);
+      switch (childNode.getToken().getType()) {

Review comment:
       fixed.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/dataconnector/show/ShowDataConnectorsOperation.java
##########
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.dataconnector.show;
+
+import java.io.DataOutputStream;
+import java.util.List;
+import java.util.regex.Pattern;
+import java.util.stream.Collectors;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.ql.ErrorMsg;
+import org.apache.hadoop.hive.ql.ddl.DDLOperation;
+import org.apache.hadoop.hive.ql.ddl.DDLOperationContext;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.udf.UDFLike;
+import org.apache.hadoop.io.IOUtils;
+
+/**
+ * Operation process of showing data connectors.
+ */
+public class ShowDataConnectorsOperation extends DDLOperation<ShowDataConnectorsDesc> {
+  public ShowDataConnectorsOperation(DDLOperationContext context, ShowDataConnectorsDesc desc) {
+    super(context, desc);
+  }
+
+  @Override
+  public int execute() throws HiveException {
+    List<String> connectors = context.getDb().getAllDataConnectors();
+    if (desc.getPattern() != null) {
+      LOG.debug("pattern: {}", desc.getPattern());
+      Pattern pattern = Pattern.compile(UDFLike.likePatternToRegExp(desc.getPattern()), Pattern.CASE_INSENSITIVE);
+      connectors = connectors.stream().filter(name -> pattern.matcher(name).matches()).collect(Collectors.toList());
+    }
+
+    LOG.info("Found {} connector(s) matching the SHOW CONNECTORS statement.", 
connectors.size());

Review comment:
       This should be debug; I will fix this.
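   i.e., just swapping the level (illustrative):

       LOG.debug("Found {} connector(s) matching the SHOW CONNECTORS statement.", connectors.size());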

##########
File path: ql/src/test/results/clientpositive/llap/alter_change_db_location.q.out
##########
@@ -11,7 +11,7 @@ PREHOOK: Input: database:newdb
 POSTHOOK: query: describe database extended newDB
 POSTHOOK: type: DESCDATABASE
 POSTHOOK: Input: database:newdb
-newdb          location/in/test                hive_test_user  USER    
+newdb          location/in/test                hive_test_user  USER            
        

Review comment:
       Yes, now the test is printing out whitespace for the connector name and remoteDbName columns. Because this is a NATIVE database, those columns are expected to be blank (whitespace).

##########
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/AbstractJDBCConnectorProvider.java
##########
@@ -0,0 +1,311 @@
+package org.apache.hadoop.hive.metastore.dataconnector.jdbc;
+
+import org.apache.hadoop.hive.metastore.ColumnType;
+import org.apache.hadoop.hive.metastore.Warehouse;
+import org.apache.hadoop.hive.metastore.api.DataConnector;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.MetaException;
+import org.apache.hadoop.hive.metastore.api.Table;
+import org.apache.hadoop.hive.metastore.api.hive_metastoreConstants;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.apache.hadoop.hive.metastore.dataconnector.AbstractDataConnectorProvider;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.net.ConnectException;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public abstract class AbstractJDBCConnectorProvider extends AbstractDataConnectorProvider {
+  private static Logger LOG = LoggerFactory.getLogger(AbstractJDBCConnectorProvider.class);
+  protected static Warehouse warehouse = null;
+
+  // duplicate constants from Constants.java to avoid a dependency on hive-common
+  public static final String JDBC_HIVE_STORAGE_HANDLER_ID =
+      "org.apache.hive.storage.jdbc.JdbcStorageHandler";
+  public static final String JDBC_CONFIG_PREFIX = "hive.sql";
+  public static final String JDBC_CATALOG = JDBC_CONFIG_PREFIX + ".catalog";
+  public static final String JDBC_SCHEMA = JDBC_CONFIG_PREFIX + ".schema";
+  public static final String JDBC_TABLE = JDBC_CONFIG_PREFIX + ".table";
+  public static final String JDBC_DATABASE_TYPE = JDBC_CONFIG_PREFIX + ".database.type";
+  public static final String JDBC_URL = JDBC_CONFIG_PREFIX + ".jdbc.url";
+  public static final String JDBC_DRIVER = JDBC_CONFIG_PREFIX + ".jdbc.driver";
+  public static final String JDBC_USERNAME = JDBC_CONFIG_PREFIX + ".dbcp.username";
+  public static final String JDBC_PASSWORD = JDBC_CONFIG_PREFIX + ".dbcp.password";
+  public static final String JDBC_KEYSTORE = JDBC_CONFIG_PREFIX + ".dbcp.password.keystore";
+  public static final String JDBC_KEY = JDBC_CONFIG_PREFIX + ".dbcp.password.key";
+  public static final String JDBC_QUERY = JDBC_CONFIG_PREFIX + ".query";
+  public static final String JDBC_QUERY_FIELD_NAMES = JDBC_CONFIG_PREFIX + ".query.fieldNames";
+  public static final String JDBC_QUERY_FIELD_TYPES = JDBC_CONFIG_PREFIX + ".query.fieldTypes";
+  public static final String JDBC_SPLIT_QUERY = JDBC_CONFIG_PREFIX + ".query.split";
+  public static final String JDBC_PARTITION_COLUMN = JDBC_CONFIG_PREFIX + ".partitionColumn";
+  public static final String JDBC_NUM_PARTITIONS = JDBC_CONFIG_PREFIX + ".numPartitions";
+  public static final String JDBC_LOW_BOUND = JDBC_CONFIG_PREFIX + ".lowerBound";
+  public static final String JDBC_UPPER_BOUND = JDBC_CONFIG_PREFIX + ".upperBound";
+
+  private static final String JDBC_INPUTFORMAT_CLASS = "org.apache.hive.storage.jdbc.JdbcInputFormat".intern();
+  private static final String JDBC_OUTPUTFORMAT_CLASS = "org.apache.hive.storage.jdbc.JdbcOutputFormat".intern();
+
+  String type = null; // MYSQL, POSTGRES, ORACLE, DERBY, MSSQL, DB2 etc.
+  String driverClassName = null;
+  String jdbcUrl = null;
+  String username = null;
+  String password = null; // TODO convert to byte array
+
+  public AbstractJDBCConnectorProvider(String dbName, DataConnector dataConn) {
+    super(dbName, dataConn);
+    this.type = connector.getType().toUpperCase(); // TODO
+    this.jdbcUrl = connector.getUrl();
+    this.username = connector.getParameters().get(JDBC_USERNAME);
+    this.password = connector.getParameters().get(JDBC_PASSWORD);
+    if (this.password == null) {
+      String keystore = connector.getParameters().get(JDBC_KEYSTORE);
+      String key = connector.getParameters().get(JDBC_KEY);
+      try {
+        char[] keyValue = MetastoreConf.getValueFromKeystore(keystore, key);
+        if (keyValue != null)
+          this.password = new String(keyValue);
+      } catch (IOException i) {
+        LOG.warn("Could not read key value from keystore");
+      }
+    }
+
+    try {
+      warehouse = new Warehouse(MetastoreConf.newMetastoreConf());
+    } catch (MetaException e) { /* ignore */ }
+  }
+
+  @Override public void open() throws ConnectException {
+    try {
+      Class.forName(driverClassName);
+      handle = DriverManager.getConnection(jdbcUrl, username, password);
+      isOpen = true;
+    } catch (ClassNotFoundException cnfe) {
+      LOG.warn("Driver class not found in classpath:" + driverClassName);
+      throw new RuntimeException("Driver class not found:" + driverClassName);
+    } catch (SQLException sqle) {
+      LOG.warn("Could not connect to remote data source at " + jdbcUrl);
+      throw new ConnectException("Could not connect to remote datasource at " 
+ jdbcUrl + ",cause:" + sqle.getMessage());
+    }
+  }
+
+  protected Connection getConnection() {
+    try {
+      if (!isOpen)
+        open();
+    } catch (ConnectException ce) {
+      throw new RuntimeException(ce.getMessage());

Review comment:
       fixed.

##########
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java
##########
@@ -1160,6 +1163,38 @@ private static void updateTableAggregatePartitionColStats(RawStore rawStore, Str
     return sharedCache.listCachedDatabases(catName);
   }
 
+  @Override public void createDataConnector(DataConnector connector) throws InvalidObjectException, MetaException {
+    rawStore.createDataConnector(connector);
+    // in case of event based cache update, cache will be updated during commit.
+    /*

Review comment:
       done.

##########
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/JDBCConnectorProviderFactory.java
##########
@@ -0,0 +1,45 @@
+package org.apache.hadoop.hive.metastore.dataconnector;
+
+import org.apache.hadoop.hive.metastore.api.DataConnector;
+import org.apache.hadoop.hive.metastore.dataconnector.jdbc.DerbySQLConnectorProvider;
+import org.apache.hadoop.hive.metastore.dataconnector.jdbc.MySQLConnectorProvider;
+import org.apache.hadoop.hive.metastore.dataconnector.jdbc.PostgreSQLConnectorProvider;
+
+import static org.apache.hadoop.hive.metastore.dataconnector.IDataConnectorProvider.*;
+
+public class JDBCConnectorProviderFactory {
+
+  public static IDataConnectorProvider get(String dbName, DataConnector connector) {
+    IDataConnectorProvider provider = null;
+    switch(connector.getType().toLowerCase()) {
+    case MYSQL_TYPE:
+      provider = new MySQLConnectorProvider(dbName, connector);
+      /*

Review comment:
       done

##########
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/dataconnector/jdbc/AbstractJDBCConnectorProvider.java
##########
@@ -0,0 +1,311 @@
+package org.apache.hadoop.hive.metastore.dataconnector.jdbc;
+
+import org.apache.hadoop.hive.metastore.ColumnType;
+import org.apache.hadoop.hive.metastore.Warehouse;
+import org.apache.hadoop.hive.metastore.api.DataConnector;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.MetaException;
+import org.apache.hadoop.hive.metastore.api.Table;
+import org.apache.hadoop.hive.metastore.api.hive_metastoreConstants;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.apache.hadoop.hive.metastore.dataconnector.AbstractDataConnectorProvider;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.net.ConnectException;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public abstract class AbstractJDBCConnectorProvider extends AbstractDataConnectorProvider {
+  private static Logger LOG = LoggerFactory.getLogger(AbstractJDBCConnectorProvider.class);
+  protected static Warehouse warehouse = null;
+
+  // duplicate constants from Constants.java to avoid a dependency on hive-common
+  public static final String JDBC_HIVE_STORAGE_HANDLER_ID =
+      "org.apache.hive.storage.jdbc.JdbcStorageHandler";
+  public static final String JDBC_CONFIG_PREFIX = "hive.sql";
+  public static final String JDBC_CATALOG = JDBC_CONFIG_PREFIX + ".catalog";
+  public static final String JDBC_SCHEMA = JDBC_CONFIG_PREFIX + ".schema";
+  public static final String JDBC_TABLE = JDBC_CONFIG_PREFIX + ".table";
+  public static final String JDBC_DATABASE_TYPE = JDBC_CONFIG_PREFIX + ".database.type";
+  public static final String JDBC_URL = JDBC_CONFIG_PREFIX + ".jdbc.url";
+  public static final String JDBC_DRIVER = JDBC_CONFIG_PREFIX + ".jdbc.driver";
+  public static final String JDBC_USERNAME = JDBC_CONFIG_PREFIX + ".dbcp.username";
+  public static final String JDBC_PASSWORD = JDBC_CONFIG_PREFIX + ".dbcp.password";
+  public static final String JDBC_KEYSTORE = JDBC_CONFIG_PREFIX + ".dbcp.password.keystore";
+  public static final String JDBC_KEY = JDBC_CONFIG_PREFIX + ".dbcp.password.key";
+  public static final String JDBC_QUERY = JDBC_CONFIG_PREFIX + ".query";
+  public static final String JDBC_QUERY_FIELD_NAMES = JDBC_CONFIG_PREFIX + ".query.fieldNames";
+  public static final String JDBC_QUERY_FIELD_TYPES = JDBC_CONFIG_PREFIX + ".query.fieldTypes";
+  public static final String JDBC_SPLIT_QUERY = JDBC_CONFIG_PREFIX + ".query.split";
+  public static final String JDBC_PARTITION_COLUMN = JDBC_CONFIG_PREFIX + ".partitionColumn";
+  public static final String JDBC_NUM_PARTITIONS = JDBC_CONFIG_PREFIX + ".numPartitions";
+  public static final String JDBC_LOW_BOUND = JDBC_CONFIG_PREFIX + ".lowerBound";
+  public static final String JDBC_UPPER_BOUND = JDBC_CONFIG_PREFIX + ".upperBound";
+
+  private static final String JDBC_INPUTFORMAT_CLASS = "org.apache.hive.storage.jdbc.JdbcInputFormat".intern();
+  private static final String JDBC_OUTPUTFORMAT_CLASS = "org.apache.hive.storage.jdbc.JdbcOutputFormat".intern();
+
+  String type = null; // MYSQL, POSTGRES, ORACLE, DERBY, MSSQL, DB2 etc.
+  String driverClassName = null;
+  String jdbcUrl = null;
+  String username = null;
+  String password = null; // TODO convert to byte array
+
+  public AbstractJDBCConnectorProvider(String dbName, DataConnector dataConn) {
+    super(dbName, dataConn);
+    this.type = connector.getType().toUpperCase(); // TODO
+    this.jdbcUrl = connector.getUrl();
+    this.username = connector.getParameters().get(JDBC_USERNAME);
+    this.password = connector.getParameters().get(JDBC_PASSWORD);
+    if (this.password == null) {
+      String keystore = connector.getParameters().get(JDBC_KEYSTORE);
+      String key = connector.getParameters().get(JDBC_KEY);
+      try {
+        char[] keyValue = MetastoreConf.getValueFromKeystore(keystore, key);
+        if (keyValue != null)
+          this.password = new String(keyValue);
+      } catch (IOException i) {
+        LOG.warn("Could not read key value from keystore");
+      }
+    }
+
+    try {
+      warehouse = new Warehouse(MetastoreConf.newMetastoreConf());
+    } catch (MetaException e) { /* ignore */ }
+  }
+
+  @Override public void open() throws ConnectException {
+    try {
+      Class.forName(driverClassName);
+      handle = DriverManager.getConnection(jdbcUrl, username, password);
+      isOpen = true;
+    } catch (ClassNotFoundException cnfe) {
+      LOG.warn("Driver class not found in classpath:" + driverClassName);
+      throw new RuntimeException("Driver class not found:" + driverClassName);
+    } catch (SQLException sqle) {
+      LOG.warn("Could not connect to remote data source at " + jdbcUrl);
+      throw new ConnectException("Could not connect to remote datasource at " 
+ jdbcUrl + ",cause:" + sqle.getMessage());

Review comment:
       ConnectException does not have a constructor that takes another exception.
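   If we want to preserve the underlying SQLException anyway, one option (a sketch using Throwable.initCause) would be:

       ConnectException ce = new ConnectException("Could not connect to remote datasource at " + jdbcUrl);
       ce.initCause(sqle); // attach the cause, since ConnectException has no (String, Throwable) constructor
       throw ce;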




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 573586)
    Time Spent: 9h 20m  (was: 9h 10m)

> [New Feature] Add data connector support for remote datasources
> ---------------------------------------------------------------
>
>                 Key: HIVE-24396
>                 URL: https://issues.apache.org/jira/browse/HIVE-24396
>             Project: Hive
>          Issue Type: Improvement
>          Components: Hive
>            Reporter: Naveen Gangam
>            Assignee: Naveen Gangam
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 9h 20m
>  Remaining Estimate: 0h
>
> This feature work adds support in Hive Metastore for configuring data 
> connectors to remote datasources and for mapping databases. We currently 
> have support for remote tables via StorageHandlers like JDBCStorageHandler 
> and HBaseStorageHandler.
> Data connectors are a natural extension to this, where we can map an entire 
> database or catalog instead of individual tables. The tables within are 
> automagically mapped at runtime. The metadata for these tables is not 
> persisted in Hive; it is always mapped and built at runtime.
> With this feature, we introduce a concept of type for databases in Hive: 
> NATIVE vs REMOTE. All current databases are NATIVE. To create a REMOTE 
> database, the following syntax is to be used:
> CREATE REMOTE DATABASE remote_db USING <dataconnector> WITH DCPROPERTIES 
> (....);
> Will attach a design doc to this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
