TeslaCN opened a new issue, #17806:
URL: https://github.com/apache/shardingsphere/issues/17806

   ## Bug Report
   
   ### Which version of ShardingSphere did you use?
   
   master - https://github.com/apache/shardingsphere/tree/a393962f5b3eec2d154ad1c2c5d15a758efc499b
   
   ### Which project did you use? ShardingSphere-JDBC or ShardingSphere-Proxy?
   
   ShardingSphere-Proxy
   
   ### Expected behavior
   
   Scaling completes successfully and the data consistency check passes.
   
   ### Actual behavior
   
   ```
   [ERROR] 2022-05-19 17:35:46.888 [ShardingSphere-Scaling-Incremental-0130317c30317c3054317c626d73716c5f7368617264696e67-2] o.a.s.d.p.c.i.AbstractImporter - flush failed 3/3 times.
   java.sql.BatchUpdateException: Batch entry 0 INSERT INTO "public"."bmsql_item_3"("i_id","i_name") VALUES(999999, 'ull i_price[numeric]:null i_data[character varying]:null i_im_id[integer]:null') ON CONFLICT (i_id) DO UPDATE SET "i_name"=EXCLUDED."i_name" was aborted: ERROR: value too long for type character varying(24)  Call getNextException to see other errors in the batch.
        at org.postgresql.jdbc.BatchResultHandler.handleError(BatchResultHandler.java:165)
        at org.postgresql.core.ResultHandlerDelegate.handleError(ResultHandlerDelegate.java:52)
        at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2366)
        at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:559)
        at org.postgresql.jdbc.PgStatement.internalExecuteBatch(PgStatement.java:887)
        at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:910)
        at org.postgresql.jdbc.PgPreparedStatement.executeBatch(PgPreparedStatement.java:1638)
        at com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:128)
        at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java)
        at org.apache.shardingsphere.driver.executor.batch.BatchPreparedStatementExecutor$1.executeSQL(BatchPreparedStatementExecutor.java:141)
        at org.apache.shardingsphere.driver.executor.batch.BatchPreparedStatementExecutor$1.executeSQL(BatchPreparedStatementExecutor.java:137)
        at org.apache.shardingsphere.infra.executor.sql.execute.engine.driver.jdbc.JDBCExecutorCallback.execute(JDBCExecutorCallback.java:84)
        at org.apache.shardingsphere.infra.executor.sql.execute.engine.driver.jdbc.JDBCExecutorCallback.execute(JDBCExecutorCallback.java:64)
        at org.apache.shardingsphere.infra.executor.kernel.ExecutorEngine.syncExecute(ExecutorEngine.java:135)
        at org.apache.shardingsphere.infra.executor.kernel.ExecutorEngine.serialExecute(ExecutorEngine.java:121)
        at org.apache.shardingsphere.infra.executor.kernel.ExecutorEngine.execute(ExecutorEngine.java:115)
        at org.apache.shardingsphere.infra.executor.sql.execute.engine.driver.jdbc.JDBCExecutor.execute(JDBCExecutor.java:65)
        at org.apache.shardingsphere.infra.executor.sql.execute.engine.driver.jdbc.JDBCExecutor.execute(JDBCExecutor.java:49)
        at org.apache.shardingsphere.driver.executor.batch.BatchPreparedStatementExecutor.executeBatch(BatchPreparedStatementExecutor.java:150)
        at org.apache.shardingsphere.driver.jdbc.core.statement.ShardingSpherePreparedStatement.executeBatch(ShardingSpherePreparedStatement.java:560)
        at org.apache.shardingsphere.data.pipeline.core.importer.AbstractImporter.executeBatchInsert(AbstractImporter.java:166)
        at org.apache.shardingsphere.data.pipeline.core.importer.AbstractImporter.doFlush(AbstractImporter.java:140)
        at org.apache.shardingsphere.data.pipeline.core.importer.AbstractImporter.tryFlush(AbstractImporter.java:125)
        at org.apache.shardingsphere.data.pipeline.core.importer.AbstractImporter.flushInternal(AbstractImporter.java:116)
        at org.apache.shardingsphere.data.pipeline.core.importer.AbstractImporter.lambda$flush$2(AbstractImporter.java:107)
        at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
        at org.apache.shardingsphere.data.pipeline.core.importer.AbstractImporter.flush(AbstractImporter.java:105)
        at org.apache.shardingsphere.data.pipeline.core.importer.AbstractImporter.write(AbstractImporter.java:88)
        at org.apache.shardingsphere.data.pipeline.core.importer.AbstractImporter.doStart(AbstractImporter.java:74)
        at org.apache.shardingsphere.data.pipeline.api.executor.AbstractLifecycleExecutor.start(AbstractLifecycleExecutor.java:41)
        at org.apache.shardingsphere.data.pipeline.api.executor.AbstractLifecycleExecutor.run(AbstractLifecycleExecutor.java:61)
        at java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.base/java.lang.Thread.run(Thread.java:833)
   Caused by: org.postgresql.util.PSQLException: ERROR: value too long for type character varying(24)
        at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2675)
        at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2365)
        ... 32 common frames omitted
   ```
   
   Note that in the generated INSERT above, the value bound to `i_name` looks like the tail of a record's string representation (the leading `ull` followed by `column[type]:null` pairs for `i_price`, `i_data` and `i_im_id`), suggesting the NULL column values were serialized into a single string that exceeds `varchar(24)`.

   The data consistency check also failed.
   ```
   [INFO ] 2022-05-19 17:36:50.048 [_finished_check_Worker-1] o.a.s.d.p.core.job.FinishedCheckJob - scaling job 0130317c30317c3054317c626d73716c5f7368617264696e67 almost finished.
   [INFO ] 2022-05-19 17:36:50.048 [_finished_check_Worker-1] o.a.s.d.p.s.r.s.DefaultSourceWritingStopLock - lock, databaseName=bmsql_sharding, jobId=0130317c30317c3054317c626d73716c5f7368617264696e67
   [INFO ] 2022-05-19 17:36:50.061 [_finished_check_Worker-1] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - stopClusterWriteDB, tryLockSuccess=true
   [INFO ] 2022-05-19 17:36:50.063 [_finished_check_Worker-1] o.a.s.d.p.core.job.FinishedCheckJob - dataConsistencyCheck for job 0130317c30317c3054317c626d73716c5f7368617264696e67
   [INFO ] 2022-05-19 17:36:50.064 [_finished_check_Worker-1] o.a.s.d.p.a.c.TableNameSchemaNameMapping - mapping={pg_trigger=pg_catalog, bmsql_district=public, bmsql_oorder=public, bmsql_stock=public, columns=information_schema, pg_class=pg_catalog, pg_tablespace=pg_catalog, pg_database=pg_catalog, bmsql_customer=public, bmsql_item=public, tables=information_schema, bmsql_new_order=public, bmsql_history=public, bmsql_warehouse=public, pg_inherits=pg_catalog, bmsql_order_line=public, views=information_schema, bmsql_config=public}
   [INFO ] 2022-05-19 17:36:50.329 [_finished_check_Worker-1] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - Scaling job 0130317c30317c3054317c626d73716c5f7368617264696e67 with check algorithm 'DATA_MATCH' data consistency checker result {bmsql_item=DataConsistencyCheckResult(countCheckResult=DataConsistencyCountCheckResult(sourceRecordsCount=100001, targetRecordsCount=100000, matched=false), contentCheckResult=DataConsistencyContentCheckResult(matched=false))}
   [ERROR] 2022-05-19 17:36:50.329 [_finished_check_Worker-1] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - Scaling job: 0130317c30317c3054317c626d73716c5f7368617264696e67, table: bmsql_item data consistency check failed, count matched: false, content matched: false
   [INFO ] 2022-05-19 17:36:50.329 [_finished_check_Worker-1] o.a.s.d.p.c.a.i.GovernanceRepositoryAPIImpl - persist job check result 'false' for job 0130317c30317c3054317c626d73716c5f7368617264696e67
   [ERROR] 2022-05-19 17:36:50.338 [_finished_check_Worker-1] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - Scaling job: 0130317c30317c3054317c626d73716c5f7368617264696e67, table: bmsql_item data consistency check failed, count matched: false, content matched: false
   [ERROR] 2022-05-19 17:36:50.338 [_finished_check_Worker-1] o.a.s.d.p.core.job.FinishedCheckJob - data consistency check failed, job 0130317c30317c3054317c626d73716c5f7368617264696e67
   [INFO ] 2022-05-19 17:36:50.338 [_finished_check_Worker-1] o.a.s.d.p.s.r.s.DefaultSourceWritingStopLock - releaseLock, databaseName=bmsql_sharding, jobId=0130317c30317c3054317c626d73716c5f7368617264696e67
   [INFO ] 2022-05-19 17:36:50.339 [_finished_check_Worker-1] o.a.s.d.p.c.a.i.RuleAlteredJobAPIImpl - restoreClusterWriteDB, before releaseLock, databaseName=bmsql_sharding, jobId=0130317c30317c3054317c626d73716c5f7368617264696e67
   ```
   
   Full logs can be found here: https://paste.ubuntu.com/p/wFpnZcgYym/
   
   ### Steps to reproduce the behavior, such as: SQL to execute, sharding rule configuration, when the exception occurred, etc.
   
   #### 1. Prepare ShardingSphere-Proxy
   
   `config-sharding.yaml`:
   ```yaml
   schemaName: bmsql_sharding
   dataSources:
     ds_0:
       url: jdbc:postgresql://127.0.0.1:25432/bmsql_0
       username: postgres
       password: postgres
       connectionTimeoutMilliseconds: 3000
       idleTimeoutMilliseconds: 60000
       maxLifetimeMilliseconds: 1800000
       maxPoolSize: 32
       minPoolSize: 0
     ds_1:
       url: jdbc:postgresql://127.0.0.1:25432/bmsql_1
       username: postgres
       password: postgres
       connectionTimeoutMilliseconds: 3000
       idleTimeoutMilliseconds: 60000
       maxLifetimeMilliseconds: 1800000
       maxPoolSize: 32
       minPoolSize: 0
   
   rules:
     - !SHARDING
       defaultDatabaseStrategy:
         none:
       defaultTableStrategy:
         none:
       keyGenerators:
       autoTables:  
         bmsql_item:
           actualDataSources: ds_${0..1}
           shardingStrategy:
             standard:
               shardingColumn: i_id
               shardingAlgorithmName: mod_4
       shardingAlgorithms:
         mod_4:
           type: MOD
           props:
             sharding-count: 4
       scalingName: bmsql_scaling
       scaling:
         bmsql_scaling:
           completionDetector:
             props:
               incremental-task-idle-seconds-threshold: '60'
             type: IDLE
           dataConsistencyChecker:
             props:
               chunk-size: '1000'
             type: DATA_MATCH
   ```
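
   Once the Proxy is started, the effective rule can be double-checked from a Proxy connection. A minimal sketch, assuming the DistSQL syntax of the 5.1.x-era master branch:

   ```sql
   -- Run from a client connected to the Proxy (schema bmsql_sharding).
   -- Lists the sharding table rules in effect; bmsql_item should show
   -- the MOD algorithm with sharding-count 4.
   SHOW SHARDING TABLE RULES;
   ```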
   
   #### 2. Prepare inventory data
   
   ```sql
   create table bmsql_item (
     i_id     integer      not null,
     i_name   varchar(24),
     i_price  decimal(5,2),
     i_data   varchar(50),
     i_im_id  integer
   );
   alter table bmsql_item add constraint bmsql_item_pkey
     primary key (i_id);
   
   -- Insert 100000 rows into table.
   ```
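
   The `bmsql_*` tables follow the BenchmarkSQL schema; as a stand-in for the benchmark loader, the 100000 rows can be generated directly in PostgreSQL. A minimal sketch with arbitrary placeholder values (not the original benchmark data):

   ```sql
   -- Populate bmsql_item with 100000 synthetic rows.
   -- Values are placeholders chosen to fit the column definitions:
   -- i_name stays within varchar(24), i_data within varchar(50).
   insert into bmsql_item (i_id, i_name, i_price, i_data, i_im_id)
   select n, 'item_' || n, (n % 100) + 0.99, 'data_' || n, (n % 10000) + 1
   from generate_series(1, 100000) as n;
   ```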
   
   #### 3. Add data sources and alter rule to trigger Scaling
   
   ```sql
    ADD RESOURCE target_0 (
        URL="jdbc:postgresql://127.0.0.1:35432/bmsql_0",
        USER=postgres,
        PASSWORD=postgres,
        PROPERTIES("maximumPoolSize"=32, "minimumIdle"=0, "idleTimeout"="60000")
    ), target_1 (
        URL="jdbc:postgresql://127.0.0.1:35432/bmsql_1",
        USER=postgres,
        PASSWORD=postgres,
        PROPERTIES("maximumPoolSize"=32, "minimumIdle"=0, "idleTimeout"="60000")
    ), target_2 (
        URL="jdbc:postgresql://127.0.0.1:35432/bmsql_2",
        USER=postgres,
        PASSWORD=postgres,
        PROPERTIES("maximumPoolSize"=32, "minimumIdle"=0, "idleTimeout"="60000")
    );
   
   ALTER SHARDING TABLE RULE bmsql_item(
       RESOURCES(target_0, target_1, target_2),
       SHARDING_COLUMN=i_id,
       TYPE(NAME=MOD,PROPERTIES("sharding-count"=12))
   );
   ```
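
   After the ALTER statement triggers the scaling job, progress can be watched from the Proxy. A sketch, assuming the scaling RAL statements available on this master branch (the job ID is the one from the logs above):

   ```sql
   -- List scaling jobs and pick up the generated job ID.
   SHOW SCALING LIST;
   -- Inspect inventory/incremental progress of the job.
   SHOW SCALING STATUS 0130317c30317c3054317c626d73716c5f7368617264696e67;
   ```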
   
   #### 4. Insert a row before Scaling switches to the new rules
   
   ```sql
   insert into bmsql_item (i_id) values (999999);
   ```
   Then the error shown above occurred.
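
   The count mismatch reported by the consistency check (sourceRecordsCount=100001 vs targetRecordsCount=100000) corresponds to this single row: it reaches the source tables, but the incremental importer's flush aborts on the target side. One way to confirm, assuming direct access to the target PostgreSQL instances:

   ```sql
   -- Run against each target database (127.0.0.1:35432, bmsql_0..bmsql_2).
   -- Under the new MOD(12) rule, 999999 % 12 = 3, so the row should land in
   -- shard table bmsql_item_3 (matching the failing INSERT in the error log),
   -- but it is absent because the batch insert aborted.
   select * from bmsql_item_3 where i_id = 999999;
   ```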

