MOBIN-F opened a new pull request, #3925:
URL: https://github.com/apache/flink-cdc/pull/3925

   Steps to reproduce:
   1. ALTER TABLE table_name ADD COLUMN new_first_column BIGINT FIRST
   2. An ArrayIndexOutOfBoundsException is thrown, because the [add column first] event is converted into an [add column before] event, and the lookup of the existingColumnIndex for the referenced column then fails (see the sketch after this list).
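
   For illustration only, here is a minimal, self-contained sketch of the failing lookup pattern (not the actual PaimonMetadataApplier code; the column names below are made up). When the column referenced by the BEFORE position is not present in the sink table's column list, indexOf returns -1 and the subsequent get(-1) throws the exception shown in the log:
   ```java
   import java.util.ArrayList;
   import java.util.Arrays;
   import java.util.List;

   // Illustrative sketch only; this is NOT the Flink CDC / Paimon source code.
   public class AddColumnBeforeSketch {
       public static void main(String[] args) {
           // Columns currently known to the sink table (hypothetical example).
           List<String> existingColumns = new ArrayList<>(Arrays.asList("id", "name", "ts"));

           // "ADD COLUMN new_first_column BIGINT FIRST" is rewritten as an
           // "add column BEFORE <existedColumnName>" event. If the referenced
           // column is not found in the sink's column list, indexOf returns -1.
           String existedColumnName = "first_column";
           int existingColumnIndex = existingColumns.indexOf(existedColumnName);

           // On Java 8 (as in the log below), ArrayList.get(-1) throws
           // java.lang.ArrayIndexOutOfBoundsException: -1.
           String beforeColumn = existingColumns.get(existingColumnIndex);
           System.out.println(beforeColumn);
       }
   }
   ```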
   
   log:
   ```
   2025-02-19 11:32:16,586 INFO  
org.apache.flink.cdc.runtime.operators.schema.regular.SchemaCoordinator [] - 
All sink subtask have flushed for table dw_app.cdc_sink19. Start to apply 
schema change request: 
       SchemaChangeRequest{tableId=dw_app.cdc_sink19, 
schemaChangeEvent=AddColumnEvent{tableId=dw_app.cdc_sink19, 
addedColumns=[ColumnWithPosition{column=`new_first_column` BIGINT, 
position=BEFORE, existedColumnName=first_column}]}, subTaskId=0}
   that extracts to:
       AddColumnEvent{tableId=rt_ods.cdc_sink19_add_column_EVOLVE4, 
addedColumns=[ColumnWithPosition{column=`new_first_column` BIGINT, 
position=BEFORE, existedColumnName=first_column}]}
   2025-02-19 11:32:16,716 ERROR 
org.apache.flink.cdc.runtime.operators.schema.common.SchemaRegistry [] - An 
exception was triggered from Schema change applying task. Job will fail now.
   org.apache.flink.util.FlinkRuntimeException: Failed to apply schema change 
event.
       at 
org.apache.flink.cdc.runtime.operators.schema.regular.SchemaCoordinator.lambda$startSchemaChangesEvolve$0(SchemaCoordinator.java:290)
 ~[flink-cdc-dist-3.3.0.jar:3.3.0]
       at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[?:1.8.0_65]
       at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_65]
       at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_65]
       at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_65]
       at java.lang.Thread.run(Thread.java:745) [?:1.8.0_65]
   Caused by: java.lang.ArrayIndexOutOfBoundsException: -1
       at java.util.ArrayList.elementData(ArrayList.java:418) ~[?:1.8.0_65]
       at java.util.ArrayList.get(ArrayList.java:431) ~[?:1.8.0_65]
       at 
org.apache.flink.cdc.connectors.paimon.sink.PaimonMetadataApplier.applyAddColumnWithBeforePosition(PaimonMetadataApplier.java:275)
 ~[flink-cdc-pipeline-connector-paimon-3.3.0.jar:3.3.0]
       at 
org.apache.flink.cdc.connectors.paimon.sink.PaimonMetadataApplier.applyAddColumnEventWithPosition(PaimonMetadataApplier.java:237)
 ~[flink-cdc-pipeline-connector-paimon-3.3.0.jar:3.3.0]
       at 
org.apache.flink.cdc.connectors.paimon.sink.PaimonMetadataApplier.applyAddColumn(PaimonMetadataApplier.java:206)
 ~[flink-cdc-pipeline-connector-paimon-3.3.0.jar:3.3.0]
       at 
org.apache.flink.cdc.connectors.paimon.sink.PaimonMetadataApplier.lambda$applySchemaChange$0(PaimonMetadataApplier.java:127)
 ~[flink-cdc-pipeline-connector-paimon-3.3.0.jar:3.3.0]
       at 
org.apache.flink.cdc.common.event.visitor.SchemaChangeEventVisitor.visit(SchemaChangeEventVisitor.java:47)
 ~[flink-cdc-dist-3.3.0.jar:3.3.0]
       at 
org.apache.flink.cdc.connectors.paimon.sink.PaimonMetadataApplier.applySchemaChange(PaimonMetadataApplier.java:124)
 ~[flink-cdc-pipeline-connector-paimon-3.3.0.jar:3.3.0]
       at 
org.apache.flink.cdc.runtime.operators.schema.regular.SchemaCoordinator.applyAndUpdateEvolvedSchemaChange(SchemaCoordinator.java:434)
 ~[flink-cdc-dist-3.3.0.jar:3.3.0]
       at 
org.apache.flink.cdc.runtime.operators.schema.regular.SchemaCoordinator.applySchemaChange(SchemaCoordinator.java:401)
 ~[flink-cdc-dist-3.3.0.jar:3.3.0]
       at 
org.apache.flink.cdc.runtime.operators.schema.regular.SchemaCoordinator.lambda$startSchemaChangesEvolve$0(SchemaCoordinator.java:288)
 ~[flink-cdc-dist-3.3.0.jar:3.3.0]
       ... 5 more 
   ```
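
   One possible way to handle this (an illustrative sketch with hypothetical names, assuming the same indexOf-style lookup; not necessarily the exact change in this PR) is to guard against the missing referenced column and fall back to inserting at the first position, which matches the original FIRST intent:
   ```java
   import java.util.Arrays;
   import java.util.List;

   // Hypothetical guard, for illustration only.
   public class BeforePositionGuardSketch {
       /**
        * Returns the insert index for a column placed BEFORE existedColumnName,
        * falling back to 0 (i.e. FIRST) when the referenced column is not found.
        */
       static int insertIndexFor(List<String> columnNames, String existedColumnName) {
           int existingColumnIndex = columnNames.indexOf(existedColumnName);
           return existingColumnIndex < 0 ? 0 : existingColumnIndex;
       }

       public static void main(String[] args) {
           List<String> columns = Arrays.asList("id", "name", "ts");
           System.out.println(insertIndexFor(columns, "first_column")); // 0, no exception
           System.out.println(insertIndexFor(columns, "name"));         // 1
       }
   }
   ```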

