wolf-hunter404 opened a new issue, #34965: URL: https://github.com/apache/shardingsphere/issues/34965
After upgrading the project from shardingsphere-jdbc-core-spring-boot-starter 5.2.1 to ShardingSphere 5.5.2. The production environment has been running for many years, so to reduce the impact of the upgrade the configuration file was left unchanged; the ShardingSphere API is used to read that configuration file in code and to define the data sources and sharding rules (a sketch of the configuration shape this code expects follows the code at the end of this report).

**Development Environment**
JDK 17
Spring Boot 3.4.1
JPA/Hibernate

**Problem description**
When the project starts, a `TableExistsException` is occasionally thrown while Hibernate runs its schema migration. After debugging, I found that the affected tables do not need to be sharded at all, and the default data source and the tables that do not require sharding are already configured in the configuration file. The failure is not deterministic: out of roughly 5 startups, about 2 fail this way.

**Exception**
```
Caused by: jakarta.persistence.PersistenceException: [PersistenceUnit: default] Unable to build Hibernate SessionFactory; nested exception is org.apache.shardingsphere.infra.exception.dialect.exception.syntax.table.TableExistsException
    at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.buildNativeEntityManagerFactory(AbstractEntityManagerFactoryBean.java:431)
    at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:400)
    at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.afterPropertiesSet(LocalContainerEntityManagerFactoryBean.java:366)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1855)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1804)
    ... 15 common frames omitted
Caused by: org.apache.shardingsphere.infra.exception.dialect.exception.syntax.table.TableExistsException: null
    at org.apache.shardingsphere.infra.binder.engine.segment.dml.from.type.SimpleTableSegmentBinder.lambda$checkTableExists$5(SimpleTableSegmentBinder.java:138)
    at org.apache.shardingsphere.infra.exception.core.ShardingSpherePreconditions.checkState(ShardingSpherePreconditions.java:44)
    at org.apache.shardingsphere.infra.binder.engine.segment.dml.from.type.SimpleTableSegmentBinder.checkTableExists(SimpleTableSegmentBinder.java:137)
    at org.apache.shardingsphere.infra.binder.engine.segment.dml.from.type.SimpleTableSegmentBinder.bind(SimpleTableSegmentBinder.java:90)
    at org.apache.shardingsphere.infra.binder.engine.statement.ddl.CreateTableStatementBinder.bind(CreateTableStatementBinder.java:41)
    at org.apache.shardingsphere.infra.binder.engine.type.DDLStatementBindEngine.bind(DDLStatementBindEngine.java:74)
    at org.apache.shardingsphere.infra.binder.engine.SQLBindEngine.bindSQLStatement(SQLBindEngine.java:71)
    at org.apache.shardingsphere.infra.binder.engine.SQLBindEngine.bind(SQLBindEngine.java:58)
    at org.apache.shardingsphere.driver.jdbc.core.statement.ShardingSphereStatement.createQueryContext(ShardingSphereStatement.java:261)
    at org.apache.shardingsphere.driver.jdbc.core.statement.ShardingSphereStatement.execute(ShardingSphereStatement.java:248)
    at org.apache.shardingsphere.driver.jdbc.core.statement.ShardingSphereStatement.execute(ShardingSphereStatement.java:196)
    at org.hibernate.tool.schema.internal.exec.GenerationTargetToDatabase.accept(GenerationTargetToDatabase.java:80)
    at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.applySqlString(AbstractSchemaMigrator.java:576)
    at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.applySqlStrings(AbstractSchemaMigrator.java:516)
    at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.createTable(AbstractSchemaMigrator.java:316)
    at org.hibernate.tool.schema.internal.GroupedSchemaMigratorImpl.performTablesMigration(GroupedSchemaMigratorImpl.java:80)
    at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.performMigration(AbstractSchemaMigrator.java:233)
    at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.doMigration(AbstractSchemaMigrator.java:112)
    at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.performDatabaseAction(SchemaManagementToolCoordinator.java:280)
    at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.lambda$process$5(SchemaManagementToolCoordinator.java:144)
    at java.base/java.util.HashMap.forEach(HashMap.java:1421)
    at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.process(SchemaManagementToolCoordinator.java:141)
    at org.hibernate.boot.internal.SessionFactoryObserverForSchemaExport.sessionFactoryCreated(SessionFactoryObserverForSchemaExport.java:37)
    at org.hibernate.internal.SessionFactoryObserverChain.sessionFactoryCreated(SessionFactoryObserverChain.java:35)
    at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:324)
    at org.hibernate.boot.internal.SessionFactoryBuilderImpl.build(SessionFactoryBuilderImpl.java:463)
    at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:1506)
    at org.springframework.orm.jpa.vendor.SpringHibernateJpaPersistenceProvider.createContainerEntityManagerFactory(SpringHibernateJpaPersistenceProvider.java:66)
    at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:390)
    at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.buildNativeEntityManagerFactory(AbstractEntityManagerFactoryBean.java:419)
    ... 19 common frames omitted
```

**Code**
```java
package cn.cx.sys.sharding;

import com.zaxxer.hikari.HikariDataSource;
import org.apache.shardingsphere.driver.ShardingSphereDriver;
import org.apache.shardingsphere.driver.api.ShardingSphereDataSourceFactory;
import org.apache.shardingsphere.infra.algorithm.core.config.AlgorithmConfiguration;
import org.apache.shardingsphere.infra.config.mode.ModeConfiguration;
import org.apache.shardingsphere.infra.config.mode.PersistRepositoryConfiguration;
import org.apache.shardingsphere.infra.config.rule.RuleConfiguration;
import org.apache.shardingsphere.infra.datasource.pool.creator.DataSourcePoolReflection;
import org.apache.shardingsphere.infra.datasource.pool.metadata.DataSourcePoolMetaData;
import org.apache.shardingsphere.infra.spi.type.typed.TypedSPILoader;
import org.apache.shardingsphere.mode.repository.standalone.StandalonePersistRepositoryConfiguration;
import org.apache.shardingsphere.sharding.api.config.ShardingRuleConfiguration;
import org.apache.shardingsphere.sharding.api.config.rule.ShardingTableRuleConfiguration;
import org.apache.shardingsphere.sharding.api.config.strategy.sharding.ComplexShardingStrategyConfiguration;
import org.apache.shardingsphere.sharding.spi.ShardingAlgorithm;
import org.apache.shardingsphere.single.config.SingleRuleConfiguration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.autoconfigure.AutoConfigureBefore;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;

import javax.sql.DataSource;
import java.sql.SQLException;
import java.util.*;
import java.util.Map.Entry;

/**
 * Sharding (split-database / split-table) configuration; takes effect when the spring.shardingsphere.enabled
 * property is true.
 * For historical reasons the configuration file is not changed; the data sources and sharding rules are
 * created through the API instead.
 */
@Component
@ConditionalOnProperty(
        prefix = "spring.shardingsphere",
        name = {"enabled"},
        havingValue = "true",
        matchIfMissing = true
)
@ConfigurationProperties(prefix = "spring.shardingsphere")
@AutoConfigureBefore(DataSourceAutoConfiguration.class)
public class ShardingSphereDataSourceConfig {

    private static final Logger logger = LoggerFactory.getLogger(ShardingSphereDataSourceConfig.class);

    private static final String DATABASE_STRATEGY = "database-strategy";
    private static final String TABLE_STRATEGY = "table-strategy";
    private static final String COMPLEX = "complex";
    private static final String SHARDING_COLUMNS = "sharding-columns";
    private static final String ALGORITHM_CLASS_NAME = "algorithm-class-name";

    private final Map<String, Object> datasource = new HashMap<>();
    private final Map<String, Object> sharding = new HashMap<>();
    private final Map<String, Object> props = new HashMap<>();

    public Map<String, Object> getDatasource() {
        return datasource;
    }

    public void setDatasource(Map<String, Object> datasource) {
        this.datasource.clear();
        this.datasource.putAll(datasource);
    }

    public Map<String, Object> getSharding() {
        return sharding;
    }

    public void setSharding(Map<String, Object> sharding) {
        this.sharding.clear();
        this.sharding.putAll(sharding);
    }
    public Map<String, Object> getProps() {
        return props;
    }

    public void setProps(Map<String, Object> props) {
        this.props.clear();
        this.props.putAll(props);
    }

    /**
     * Creates the data source, overriding the default Spring Boot data source.
     * @return
     * @throws SQLException
     */
    @Bean
    public DataSource createDataSource() throws SQLException {
        // 1. Build the map of actual data sources
        Map<String, DataSource> dataSourceMap = createDataSourceMap();
        // 2. Load the sharding rule configuration
        ShardingRuleConfiguration shardingRuleConfig = loadShardingRuleConfiguration();
        String defaultDataSourceName = (String) sharding.get("default-data-source-name");
        if (defaultDataSourceName == null || defaultDataSourceName.trim().isEmpty()) {
            throw new IllegalArgumentException("Default data source name is missing or empty.");
        }
        // Associate the tables that do not need sharding with the default data source
        SingleRuleConfiguration singleRuleConfiguration = new SingleRuleConfiguration();
        singleRuleConfiguration.setDefaultDataSource(defaultDataSourceName);
        singleRuleConfiguration.setTables(Collections.singletonList(defaultDataSourceName + ".*"));
        List<RuleConfiguration> ruleConfigs = new ArrayList<>();
        ruleConfigs.add(shardingRuleConfig);
        ruleConfigs.add(singleRuleConfiguration);
        Map<String, String> sqlMap = (Map<String, String>) props.get("sql");
        if (!(sqlMap instanceof Map)) {
            throw new IllegalArgumentException("SQL properties are missing or invalid.");
        }
        Properties props = new Properties();
        props.setProperty("sql-show", sqlMap.getOrDefault("show", "false"));
        PersistRepositoryConfiguration persistRepositoryConfiguration = new StandalonePersistRepositoryConfiguration("JDBC", new Properties());
        ModeConfiguration modeConfiguration = new ModeConfiguration("Standalone", persistRepositoryConfiguration);
        // 3. Create the ShardingSphere data source
        logger.info("Creating ShardingSphere data source with default data source: {}", defaultDataSourceName);
        return ShardingSphereDataSourceFactory.createDataSource(defaultDataSourceName, modeConfiguration, dataSourceMap, ruleConfigs, props);
    }

    /**
     * Creates and returns a map of data source names to the corresponding data source objects.
     *
     * <p>This method reads the list of data source names from the configuration and creates a HikariDataSource
     * instance for each name. The per-data-source settings are read from `datasource`; the properties of each
     * data source must be a non-empty Map. If the configuration is invalid or a data source cannot be created,
     * an IllegalArgumentException is thrown.
     *
     * @return a Map whose keys are data source names and whose values are the corresponding DataSource objects.
     * @throws IllegalArgumentException if the configuration is invalid or a data source cannot be created.
     */
    private Map<String, DataSource> createDataSourceMap() {
        Map<String, DataSource> dataSourceMap = new HashMap<>();
        // List of data source names
        Object namesObject = datasource.get("names");
        if (!(namesObject instanceof String)) {
            throw new IllegalArgumentException("Invalid 'names' configuration in datasource.");
        }
        String[] dsNames = ((String) namesObject).split(",");
        for (String dsName : dsNames) {
            dsName = dsName.trim();
            if (dsName.isEmpty()) {
                continue;
            }
            try {
                Object dsPropsObject = datasource.get(dsName);
                if (!(dsPropsObject instanceof Map)) {
                    throw new IllegalArgumentException("Invalid configuration for data source: " + dsName);
                }
                Map<String, String> dsProps = (Map<String, String>) dsPropsObject;
                if (dsProps.isEmpty()) {
                    throw new IllegalArgumentException("DataSource properties for '" + dsName + "' are missing or empty.");
                }
                HikariDataSource ds = createHikariDataSource(dsProps);
                dataSourceMap.put(dsName, ds);
            } catch (ClassCastException | NullPointerException e) {
                logger.error("Error configuring data source: {}", dsName, e);
                throw new IllegalArgumentException("Invalid configuration for data source: " + dsName, e);
            }
        }
        return dataSourceMap;
    }

    /**
     * Creates and configures a Hikari data source instance.
     * Currently uses the HikariDataSource class; property values are set via reflection on the corresponding setters.
     * @param dsProps key-value map of data source settings; the key names should match the field names of the
     *                HikariDataSource class.
     * @throws IllegalArgumentException if the configuration contains an invalid numeric value (for example a
     *                                  malformed minimum-idle or maximum-pool-size)
     */
    private HikariDataSource createHikariDataSource(Map<String, String> dsProps) {
        HikariDataSource ds = new HikariDataSource();
        try {
            DataSourcePoolReflection dataSourcePoolReflection = new DataSourcePoolReflection(ds);
            Set<Map.Entry<String, String>> entrySet = dsProps.entrySet();
            for (Map.Entry<String, String> entry : entrySet) {
                String key = convertToCamelCase(entry.getKey());
                dataSourcePoolReflection.setField(key, entry.getValue());
            }
        } catch (NumberFormatException e) {
            logger.error("Invalid numeric value in data source properties.", e);
            throw new IllegalArgumentException("Invalid numeric value in data source properties.", e);
        }
        return ds;
    }

    private static String convertToCamelCase(String input) {
        // If the input is empty, return it as-is
        if (input == null || input.isEmpty()) {
            return input;
        }
        if (!input.contains("-")) {
            return input;
        }
        // Split the input string on hyphens
        String[] parts = input.split("-");
        StringBuilder result = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            String part = parts[i];
            if (i == 0) {
                // Keep the first word unchanged
                result.append(part);
            } else {
                // Capitalize the first letter of subsequent words
                result.append(Character.toUpperCase(part.charAt(0)));
                if (part.length() > 1) {
                    result.append(part.substring(1));
                }
            }
        }
        return result.toString();
    }

    /**
     * Loads and builds the sharding rule configuration object.
     * This method parses the per-table sharding rules from the `sharding` configuration and produces the
     * corresponding sharding rule configuration. The main steps are:
     * 1. Parse the `tables` configuration, validate its format, and build the sharding rule for each table.
     * 2. Configure the default sharding algorithms.
     * 3. Assemble all table rules and algorithm configurations into the final sharding rule configuration.
     *
     * @return ShardingRuleConfiguration containing all table sharding rules and algorithm configurations.
     * @throws IllegalArgumentException if the configuration format is incorrect or required fields are missing.
     */
    private ShardingRuleConfiguration loadShardingRuleConfiguration() {
        ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
        List<ShardingTableRuleConfiguration> tableRuleConfigs = new ArrayList<>();
        Map<String, Class<?>> algorithmCache = new HashMap<>(); // cache of classes loaded via reflection
        Object tablesObject = sharding.get("tables");
        if (!(tablesObject instanceof Map)) {
            throw new IllegalArgumentException("Invalid 'tables' configuration in sharding.");
        }
        Map<String, Object> tables = (Map<String, Object>) tablesObject;
        Map<String, String> shardingAlgorithmMap = new HashMap<>();
        for (Map.Entry<String, Object> entry : tables.entrySet()) {
            String tableName = entry.getKey();
            try {
                Object shardingPropsObject = entry.getValue();
                validateShardingProps(tableName, shardingPropsObject);
                Map<String, Object> shardingProps = (Map<String, Object>) shardingPropsObject;
                String actualDataNodes = getAndValidateActualDataNodes(tableName, shardingProps);
                String databaseColumns = getDatabaseColumns(shardingProps);
                String databaseAlgClassName = getDatabaseAlgClassName(shardingProps);
                String tableColumns = getTableColumns(shardingProps);
                String tableAlgClassName = getTableAlgClassName(shardingProps);
                ShardingAlgorithm dataBaseShardingAlgorithm = createShardingAlgorithm(databaseAlgClassName, algorithmCache);
                ShardingAlgorithm tableShardingAlgorithm = createShardingAlgorithm(tableAlgClassName, algorithmCache);
                shardingAlgorithmMap.put(dataBaseShardingAlgorithm.getType(), databaseAlgClassName);
                shardingAlgorithmMap.put(tableShardingAlgorithm.getType(), tableAlgClassName);
                ShardingTableRuleConfiguration tableRuleConfig = new ShardingTableRuleConfiguration(tableName, actualDataNodes);
                tableRuleConfig.setDatabaseShardingStrategy(new ComplexShardingStrategyConfiguration(databaseColumns, dataBaseShardingAlgorithm.getType()));
                tableRuleConfig.setTableShardingStrategy(new ComplexShardingStrategyConfiguration(tableColumns, tableShardingAlgorithm.getType()));
                tableRuleConfigs.add(tableRuleConfig);
            } catch (IllegalArgumentException e) {
                logger.error("Error configuring table '{}': {}", tableName, e.getMessage());
                throw e;
            } catch (Exception e) {
                logger.error("Unexpected error configuring table '{}'", tableName, e);
                throw new RuntimeException("Error configuring table: " + tableName, e);
            }
        }
        // Register the algorithm configurations for the ShardingAlgorithm instances
        Map<String, AlgorithmConfiguration> shardingAlgorithms = shardingRuleConfig.getShardingAlgorithms();
        shardingAlgorithmMap.forEach((key, value) -> {
            Properties properties = new Properties();
            properties.put("algorithmClassName", value);
            properties.put("strategy", "COMPLEX");
            AlgorithmConfiguration algorithmConfiguration = new AlgorithmConfiguration("CLASS_BASED", properties);
            shardingAlgorithms.put(key, algorithmConfiguration);
        });
        shardingRuleConfig.setTables(tableRuleConfigs);
        return shardingRuleConfig;
    }

    private String getDatabaseColumns(Map<String, Object> shardingProps) {
        Map<String, Object> databaseStrategy = (Map<String, Object>) shardingProps.get(DATABASE_STRATEGY);
        if (databaseStrategy != null && databaseStrategy.get(COMPLEX) instanceof Map) {
            Map<String, Object> complexStrategy = (Map<String, Object>) databaseStrategy.get(COMPLEX);
            return (String) complexStrategy.get(SHARDING_COLUMNS);
        }
        return null;
    }

    private String getDatabaseAlgClassName(Map<String, Object> shardingProps) {
        Map<String, Object> databaseStrategy = (Map<String, Object>) shardingProps.get(DATABASE_STRATEGY);
        if (databaseStrategy != null && databaseStrategy.get(COMPLEX) instanceof Map) {
            Map<String, Object> complexStrategy = (Map<String, Object>) databaseStrategy.get(COMPLEX);
            return (String) complexStrategy.get(ALGORITHM_CLASS_NAME);
        }
        return null;
    }

    private String getTableColumns(Map<String, Object> shardingProps) {
        Map<String, Object> tableStrategy = (Map<String, Object>) shardingProps.get(TABLE_STRATEGY);
        if (tableStrategy != null && tableStrategy.get(COMPLEX) instanceof Map) {
            Map<String, Object> complexStrategy = (Map<String, Object>) tableStrategy.get(COMPLEX);
            return (String) complexStrategy.get(SHARDING_COLUMNS);
        }
        return null;
    }

    private String getTableAlgClassName(Map<String, Object> shardingProps) {
        Map<String, Object> tableStrategy = (Map<String, Object>) shardingProps.get(TABLE_STRATEGY);
        if (tableStrategy != null && tableStrategy.get(COMPLEX) instanceof Map) {
            Map<String, Object> complexStrategy = (Map<String, Object>) tableStrategy.get(COMPLEX);
            return (String) complexStrategy.get(ALGORITHM_CLASS_NAME);
        }
        return null;
    }

    // Validate that shardingProps is well-formed
    private void validateShardingProps(String tableName, Object shardingPropsObject) {
        if (!(shardingPropsObject instanceof Map)) {
            throw new IllegalArgumentException("Invalid configuration for table: " + tableName);
        }
        Map<String, Object> shardingProps = (Map<String, Object>) shardingPropsObject;
        if (shardingProps.isEmpty()) {
            throw new IllegalArgumentException("Sharding properties for table '" + tableName + "' are missing or empty.");
        }
    }

    // Read and validate actual-data-nodes
    private String getAndValidateActualDataNodes(String tableName, Map<String, Object> shardingProps) {
        String actualDataNodes = (String) shardingProps.get("actual-data-nodes");
        if (actualDataNodes == null || actualDataNodes.trim().isEmpty()) {
            throw new IllegalArgumentException("Actual data nodes for table '" + tableName + "' are missing or empty.");
        }
        return actualDataNodes;
    }

    // Create a ShardingAlgorithm instance, caching the loaded Class object
    private ShardingAlgorithm createShardingAlgorithm(String algClassName, Map<String, Class<?>> algorithmCache) throws Exception {
        Class<?> algClass =
                algorithmCache.computeIfAbsent(algClassName, className -> {
                    try {
                        return Class.forName(className);
                    } catch (ClassNotFoundException e) {
                        throw new RuntimeException("Failed to load class: " + className, e);
                    }
                });
        return (ShardingAlgorithm) algClass.getDeclaredConstructor().newInstance();
    }
}
```
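For reference, the key structure this class binds to (via `@ConfigurationProperties(prefix = "spring.shardingsphere")`) looks roughly like the sketch below. Only the key names are taken from the code above; every data source name, table name, algorithm class name, and value is a hypothetical placeholder, not the actual production configuration:

```yaml
spring:
  shardingsphere:
    enabled: true
    datasource:
      names: ds0,ds1                   # hypothetical data source names
      ds0:
        driver-class-name: com.mysql.cj.jdbc.Driver   # kebab-case keys are converted to HikariDataSource field names
        jdbc-url: jdbc:mysql://localhost:3306/db0
        username: root
        password: root
        maximum-pool-size: "20"
      ds1:
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbc-url: jdbc:mysql://localhost:3306/db1
        username: root
        password: root
        maximum-pool-size: "20"
    sharding:
      default-data-source-name: ds0    # non-sharded tables are routed here via SingleRuleConfiguration
      tables:
        t_order:                       # hypothetical sharded table
          actual-data-nodes: ds$->{0..1}.t_order_$->{0..3}
          database-strategy:
            complex:
              sharding-columns: user_id,order_id
              algorithm-class-name: cn.cx.sys.sharding.MyDatabaseShardingAlgorithm   # hypothetical COMPLEX algorithm class
          table-strategy:
            complex:
              sharding-columns: user_id,order_id
              algorithm-class-name: cn.cx.sys.sharding.MyTableShardingAlgorithm      # hypothetical COMPLEX algorithm class
    props:
      sql:
        show: "true"
```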