liaojiexin commented on issue #25182:
URL: 
https://github.com/apache/shardingsphere/issues/25182#issuecomment-1519065678

   > I tested it locally, the `ds_attach_segment` migration works as expected
   > 
   > ```
   > ➜  ~ psql --host=localhost --port=3307 postgres root
   > Password for user root:
   > psql (14.7, server 12.3-ShardingSphere-Proxy 5.3.3-SNAPSHOT-dirty-dfb83fc)
   > Type "help" for help.
   > 
   > postgres=> CREATE DATABASE sharding_db;
   > CREATE DATABASE
   > postgres=> \c sharding_db
   > psql (14.7, server 12.3-ShardingSphere-Proxy 5.3.3-SNAPSHOT-dirty-dfb83fc)
   > You are now connected to database "sharding_db" as user "root".
   > sharding_db=> REGISTER STORAGE UNIT ds_0 (
   > sharding_db(>     URL="jdbc:postgresql://127.0.0.1:5432/ds_0",
   > sharding_db(>     USER="postgres",
   > sharding_db(>     PASSWORD="postgres"
   > sharding_db(> );
   > SUCCESS
   > sharding_db=>
   > sharding_db=> REGISTER STORAGE UNIT ds_1 (
   > sharding_db(>     URL="jdbc:postgresql://127.0.0.1:5432/ds_1",
   > sharding_db(>     USER="postgres",
   > sharding_db(>     PASSWORD="postgres"
   > sharding_db(> );
   > SUCCESS
   > sharding_db=> REGISTER MIGRATION SOURCE STORAGE UNIT ds (
   > sharding_db(>     URL="jdbc:postgresql://127.0.0.1:5432/cdc_db",
   > sharding_db(>     USER="postgres",
   > sharding_db(>     PASSWORD="201314",
   > sharding_db(>     PROPERTIES("minPoolSize"="1","maxPoolSize"="20","idleTimeout"="60000")
   > sharding_db(> );
   > SUCCESS
   > sharding_db=> CREATE SHARDING TABLE RULE ds_attach_segment (
   > sharding_db(> STORAGE_UNITS(ds_0,ds_1),
   > sharding_db(> SHARDING_COLUMN=md5,TYPE(NAME="hash_mod",PROPERTIES("sharding-count"="2"))
   > sharding_db(> );
   > SUCCESS
   > sharding_db=> MIGRATE TABLE ds.ds_attach_segment into sharding_db.ds_attach_segment;
   > SUCCESS
   > sharding_db=> SHOW MIGRATION LIST;
   >                      id                     |           tables            | job_item_count | active |     create_time     | stop_time
   > --------------------------------------------+-----------------------------+----------------+--------+---------------------+-----------
   >  j0102p0000de4d44121708195f51503f6c146053fd | ds.public.ds_attach_segment | 1              | true   | 2023-04-23 15:32:05 |
   > (1 row)
   > 
   > sharding_db=> select * from ds_attach_segment;
   >  userid | md5 | filename |      createtime       | filepath | partcount | uuid | isdelete | length | sid
   > --------+-----+----------+-----------------------+----------+-----------+------+----------+--------+-----
   >  1      | 2   | 3        | 2023-04-04 15:29:35.0 | cccc     |         1 | 123  | true     |   1111 | dd
   > (1 row)
   > 
   > sharding_db=>
   > ```
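   For completeness, the job shown above can be inspected and finished with the standard migration DistSQL (a sketch based on the 5.3.x DistSQL reference, not re-run here; the job id is the one from `SHOW MIGRATION LIST`):
   ```sql
   -- Inspect per-item progress of the running job
   SHOW MIGRATION STATUS 'j0102p0000de4d44121708195f51503f6c146053fd';
   -- Verify source and target data consistency
   CHECK MIGRATION 'j0102p0000de4d44121708195f51503f6c146053fd' BY TYPE (NAME='DATA_MATCH');
   -- Switch over and finish the job
   COMMIT MIGRATION 'j0102p0000de4d44121708195f51503f6c146053fd';
   ```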
   
   I've dealt with the second issue: I found that after executing the above `ds_attach_segment` rule, two new tables were created, `ds_0.ds_attach_segment_0` and `ds_1.ds_attach_segment_1`. I had initially expected the data to be migrated into `ds_0.ds_attach_segment` and `ds_1.ds_attach_segment`, so I changed to the following rules:
   ```sql
   # Configure the sharding algorithm for ds_attach_extend
   ALTER SHARDING TABLE RULE ds_attach_extend (
   DATANODES("ds_${0..1}.ds_attach_extend"),
   DATABASE_STRATEGY(TYPE="standard",SHARDING_COLUMN=id,SHARDING_ALGORITHM(TYPE(NAME="inline",PROPERTIES("algorithm-expression"="ds_$->{id.hashCode() & Integer.MAX_VALUE % 2}")))));
   # Configure the sharding algorithm for ds_attach_segment
   ALTER SHARDING TABLE RULE ds_attach_segment (
   DATANODES("ds_${0..1}.ds_attach_segment"),
   DATABASE_STRATEGY(TYPE="standard",SHARDING_COLUMN=md5,SHARDING_ALGORITHM(TYPE(NAME="inline",PROPERTIES("algorithm-expression"="ds_$->{md5.hashCode() & Integer.MAX_VALUE % 2}")))));
   ```
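   A side note on the inline expression above: in both Java and Groovy, `%` binds tighter than `&`, so `md5.hashCode() & Integer.MAX_VALUE % 2` evaluates as `md5.hashCode() & 1`, which is already always 0 or 1 even for negative hash codes. A minimal Java sketch of what the expression yields (the class and method names here are hypothetical, for illustration only):
   ```java
   public class ShardRouting {
       // "%" has higher precedence than "&", so this is
       // hashCode() & (Integer.MAX_VALUE % 2), i.e. hashCode() & 1:
       // the low bit of the hash, always 0 or 1 even for negative hashes.
       static int shardIndex(String md5) {
           return md5.hashCode() & Integer.MAX_VALUE % 2;
       }

       public static void main(String[] args) {
           System.out.println("md5=\"2\" routes to ds_" + shardIndex("2"));
           System.out.println("md5=\"3\" routes to ds_" + shardIndex("3"));
       }
   }
   ```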
   
   Also, regarding the first issue, I saw the error message below:
   ```log
   java.sql.BatchUpdateException: Batch entry 0 INSERT INTO public.ds_attach_metadata_0(id,fmd5,total,uuid,finished,progress,birthday,path,source_file_nid,pid,bimid) VALUES('CBklG3C4tRxRjZQW6Of', '4f91dcd437ff415eb1c8eeaa0941c381', 169752, 'ada7f8d7-a156-4a3b-a863-1c93665fdbc5', 1, 169752, '2022-04-21 20:38:54.57+00'::timestamp, '/2022/04/21', NULL, NULL, 'BM00004001') ON CONFLICT (id,fmd5) DO UPDATE SET total=EXCLUDED.total,uuid=EXCLUDED.uuid,finished=EXCLUDED.finished,progress=EXCLUDED.progress,birthday=EXCLUDED.birthday,path=EXCLUDED.path,source_file_nid=EXCLUDED.source_file_nid,pid=EXCLUDED.pid,bimid=EXCLUDED.bimid was aborted: ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification  Call getNextException to see other errors in the batch.
   2023-04-23 20:42:22     at org.postgresql.jdbc.BatchResultHandler.handleError(BatchResultHandler.java:165)
   2023-04-23 20:42:22     at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2367)
   2023-04-23 20:42:22     at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2099)
   2023-04-23 20:42:22     at org.postgresql.core.v3.QueryExecutorImpl.flushIfDeadlockRisk(QueryExecutorImpl.java:1456)
   2023-04-23 20:42:22     at org.postgresql.core.v3.QueryExecutorImpl.sendQuery(QueryExecutorImpl.java:1481)
   2023-04-23 20:42:22     at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:546)
   2023-04-23 20:42:22     at org.postgresql.jdbc.PgStatement.internalExecuteBatch(PgStatement.java:887)
   2023-04-23 20:42:22     at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:910)
   2023-04-23 20:42:22     at org.postgresql.jdbc.PgPreparedStatement.executeBatch(PgPreparedStatement.java:1663)
   2023-04-23 20:42:22     at com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:127)
   2023-04-23 20:42:22     at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java)
   2023-04-23 20:42:22     at org.apache.shardingsphere.driver.executor.batch.BatchPreparedStatementExecutor$1.executeSQL(BatchPreparedStatementExecutor.java:143)
   2023-04-23 20:42:22     at org.apache.shardingsphere.driver.executor.batch.BatchPreparedStatementExecutor$1.executeSQL(BatchPreparedStatementExecutor.java:139)
   2023-04-23 20:42:22     at org.apache.shardingsphere.infra.executor.sql.execute.engine.driver.jdbc.JDBCExecutorCallback.execute(JDBCExecutorCallback.java:86)
   2023-04-23 20:42:22     at org.apache.shardingsphere.infra.executor.sql.execute.engine.driver.jdbc.JDBCExecutorCallback.execute(JDBCExecutorCallback.java:65)
   2023-04-23 20:42:22     at org.apache.shardingsphere.infra.executor.kernel.ExecutorEngine.syncExecute(ExecutorEngine.java:133)
   2023-04-23 20:42:22     at org.apache.shardingsphere.infra.executor.kernel.ExecutorEngine.serialExecute(ExecutorEngine.java:119)
   2023-04-23 20:42:22     at org.apache.shardingsphere.infra.executor.kernel.ExecutorEngine.execute(ExecutorEngine.java:113)
   2023-04-23 20:42:22     at org.apache.shardingsphere.infra.executor.sql.execute.engine.driver.jdbc.JDBCExecutor.execute(JDBCExecutor.java:67)
   2023-04-23 20:42:22     at org.apache.shardingsphere.infra.executor.sql.execute.engine.driver.jdbc.JDBCExecutor.execute(JDBCExecutor.java:51)
   2023-04-23 20:42:22     at org.apache.shardingsphere.driver.executor.batch.BatchPreparedStatementExecutor.executeBatch(BatchPreparedStatementExecutor.java:152)
   2023-04-23 20:42:22     at org.apache.shardingsphere.driver.jdbc.core.statement.ShardingSpherePreparedStatement.executeBatch(ShardingSpherePreparedStatement.java:687)
   2023-04-23 20:42:22     at org.apache.shardingsphere.data.pipeline.core.importer.DataSourceImporter.executeBatchInsert(DataSourceImporter.java:191)
   2023-04-23 20:42:22     at org.apache.shardingsphere.data.pipeline.core.importer.DataSourceImporter.doFlush(DataSourceImporter.java:158)
   2023-04-23 20:42:22     at org.apache.shardingsphere.data.pipeline.core.importer.DataSourceImporter.tryFlush(DataSourceImporter.java:140)
   2023-04-23 20:42:22     at org.apache.shardingsphere.data.pipeline.core.importer.DataSourceImporter.flushInternal(DataSourceImporter.java:133)
   2023-04-23 20:42:22     at org.apache.shardingsphere.data.pipeline.core.importer.DataSourceImporter.flush(DataSourceImporter.java:123)
   2023-04-23 20:42:22     at org.apache.shardingsphere.data.pipeline.core.importer.DataSourceImporter.runBlocking(DataSourceImporter.java:99)
   2023-04-23 20:42:22     at org.apache.shardingsphere.data.pipeline.api.executor.AbstractLifecycleExecutor.start(AbstractLifecycleExecutor.java:52)
   2023-04-23 20:42:22     at org.apache.shardingsphere.data.pipeline.api.executor.AbstractLifecycleExecutor.run(AbstractLifecycleExecutor.java:90)
   2023-04-23 20:42:22     at java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804)
   2023-04-23 20:42:22     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
   2023-04-23 20:42:22     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
   2023-04-23 20:42:22     at java.base/java.lang.Thread.run(Thread.java:833)
   2023-04-23 20:42:22 Caused by: org.postgresql.util.PSQLException: ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification
   2023-04-23 20:42:22     at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2676)
   2023-04-23 20:42:22     at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2366)
   2023-04-23 20:42:22     ... 32 common frames omitted
   ```
   This is the SQL that creates the `ds_attach_metadata` table:
   ```sql
   create table ds_attach_metadata
   (
        id varchar(20) not null
                constraint ds_attach_metadata_pkey
                        primary key,
        fmd5 varchar(40) default 'NULL'::character varying not null,
        total bigint default '0'::bigint,
        uuid varchar(40),
        finished smallint default '0'::smallint,
        progress bigint default '0'::bigint,
        birthday timestamp(6) default CURRENT_TIMESTAMP,
        path varchar(256),
        source_file_nid varchar(50),
        pid varchar(50),
        bimid varchar(32)
   );
   
   alter table ds_attach_metadata owner to postgres;
   
   create unique index ds_attach_metadata_fmd5_uindex
        on ds_attach_metadata (fmd5);
   
   create index ds_attach_metadata_bimid_index
        on ds_attach_metadata (bimid);
   ```
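   That DDL explains the error: the migration inserts with `ON CONFLICT (id, fmd5)`, but the table only has a primary key on `id` and a separate unique index on `fmd5` alone; PostgreSQL requires a single unique or exclusion constraint covering exactly the arbiter columns. A sketch of an index that would satisfy the arbiter (whether a composite key fits this data model is a separate question; the index name here is made up):
   ```sql
   -- Hypothetical: a composite unique index matching the ON CONFLICT (id, fmd5) arbiter
   create unique index ds_attach_metadata_id_fmd5_uindex
        on ds_attach_metadata (id, fmd5);
   ```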
