chengwei977 opened a new issue, #7221: URL: https://github.com/apache/seatunnel/issues/7221
### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/seatunnel/issues?q=is%3Aissue+label%3A%22bug%22) and found no similar issues.

### What happened

A table in my Iceberg source has 81 fields, but only 80 of them are written when sinking to Doris, using the query:

`insert into doris_db.doris_table(id1,id2,...,id80) select id1,id2,...,id80 from iceberg_catalog.iceberg_db.iceberg_table;`

During the sink phase, an exception is thrown: `java.sql.SQLException: Parameter index out of range (81 > number of parameters, which is 80).`

### SeaTunnel Version

2.3.5

### SeaTunnel Config

```conf
{ "env" : { "job.name" : "1813354955321581569", "job.mode" : "BATCH", "checkpoint.interval" : 10000, "parallelism" : 1 }, "source" : [ { "catalog_name" : "ice_05", "iceberg.catalog.config" : { "type" : "hive", "uri" : "thrift://iceberg-hms-uri:port", "warehouse" : "hdfs://nameservice1/iceberg-warehouse/ice_05" }, "namespace" : "ads", "plugin_name" : "Iceberg", "iceberg.hadoop-conf-path" : "/hadoop_conf", "table" : "iceberg_table_name" } ], "transform" : [], "sink" : [ { "password" : "******", "driver" : "com.mysql.cj.jdbc.Driver", "query" : "INSERT INTO 
doris_db.doris_table_name(zyear,zmonth,zyearmonth,node_code,node_name,node_path,name_path,tree_level,root_code,root_name,level01_code,level01_name,level02_code,level02_name,level03_code,level03_name,level04_code,level04_name,level05_code,level05_name,level06_code,level06_name,customer_project_no,customer_project_name,kunnr,kunnr_name,industrial_sector,cus_association,industrial_unit,dept,market_manager,export_type,export_region,export_countries,product_line1,product_name,last_year_actual_account_receivable,current_receivable_balance,this_year_yj_hk_nc,this_year_yj_hk_by,this_year_accumulate_finish_hk,hk_hj,hk_yj,hk_lx,month1_yj_hk,month2_yj_hk,month3_yj_hk,month4_yj_hk,month5_yj_hk,month6_yj_hk,month7_yj_hk,month8_yj_hk,month9_yj_hk,month10_yj_hk,month11_yj_hk,month12_yj_hk,month1_sj_hk,month2_sj_hk,month3_sj_hk,month4_sj_hk,month5_sj_hk,month6_sj_hk,month7_sj_hk,month8_sj_hk,month9_sj_hk,month10_sj_hk,month11_sj_hk,month12_sj_hk,month1_hk_xz,month2_hk_xz,month3_hk_xz,month4_hk_xz,month5_hk_xz,month6_hk_xz,month7_hk_xz,month8_hk_xz,month9_hk_xz,month10_hk_xz,month11_hk_xz,month12_hk_xz) VALUES(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)", "plugin_name" : "Jdbc", "user" : "doris_user", "url" : "jdbc:mysql://doris_fe:port/doris_db?" } ] }
```

### Running Command

```shell
/usr/bin/seatunnel-2.3.5/seatunnel/bin/seatunnel.sh --config /tmp/dolphinscheduler/exec/process/admin/1731933618335571970/13101056104384_19/148524/1305442/seatunnel_148524_1305442.conf
The conf file is the above content.
```

### Error Exception

```log
[LOG-PATH]: /opt/dolphinscheduler/logs/20240717/13101056104384_19-148524-1305442.log, [HOST]: Host{address='dolphinscheduler-worker-v2-0.dolphinscheduler-worker-v2-e36v:1234', ip='dolphinscheduler-worker-v2-0.dolphinscheduler-worker-v2-e36v', port=1234} [INFO] 2024-07-17 07:28:04.884 +0800 - Begin to pulling task [INFO] 2024-07-17 07:28:04.885 +0800 - Begin to initialize task [INFO] 2024-07-17 07:28:04.886 +0800 - Set task startTime: Wed Jul 17 07:28:04 CST 2024 [INFO] 2024-07-17 07:28:04.886 +0800 - Set task envFile: /opt/dolphinscheduler/conf/dolphinscheduler_env.sh [INFO] 2024-07-17 07:28:04.886 +0800 - Set task appId: 148524_1305442 [INFO] 2024-07-17 07:28:04.886 +0800 - End initialize task [INFO] 2024-07-17 07:28:04.887 +0800 - Set task status to TaskExecutionStatus{code=1, desc='running'} [INFO] 2024-07-17 07:28:04.887 +0800 - TenantCode:admin check success [INFO] 2024-07-17 07:28:04.888 +0800 - ProcessExecDir:/tmp/dolphinscheduler/exec/process/admin/1731933618335571970/13101056104384_19/148524/1305442 check success [INFO] 2024-07-17 07:28:04.888 +0800 - Resources:{} check success [INFO] 2024-07-17 07:28:04.888 +0800 - Task plugin: DATAINTEGRATION create success [INFO] 2024-07-17 07:28:04.901 +0800 - Success initialized task plugin instance success [INFO] 2024-07-17 07:28:04.902 +0800 - Success set taskVarPool: null [INFO] 2024-07-17 07:28:06.511 +0800 - raw custom config content : env { job.name = "${jobName}" job.mode = "BATCH" checkpoint.interval = 10000 parallelism = 1 } source { Iceberg { catalog_name = "ice_05" iceberg.catalog.config = { type = "hive" uri = "thrift://iceberg-hms-uri:port" warehouse = "hdfs://nameservice1/iceberg-warehouse/ice_05" } namespace = "ads" table = "iceberg_table_name" iceberg.hadoop-conf-path = "/hadoop_conf" } } transform { } sink { Jdbc { url = "jdbc:mysql://doris_fe:port/doris_db?" 
driver = "com.mysql.cj.jdbc.Driver" user = "doris_user" password = "******" query = "INSERT INTO doris_db.doris_table_name(zyear,zmonth,zyearmonth,node_code,node_name,node_path,name_path,tree_level,root_code,root_name,level01_code,level01_name,level02_code,level02_name,level03_code,level03_name,level04_code,level04_name,level05_code,level05_name,level06_code,level06_name,customer_project_no,customer_project_name,kunnr,kunnr_name,industrial_sector,cus_association,industrial_unit,dept,market_manager,export_type,export_region,export_countries,product_line1,product_name,last_year_actual_account_receivable,current_receivable_balance,this_year_yj_hk_nc,this_year_yj_hk_by,this_year_accumulate_finish_hk,hk_hj,hk_yj,hk_lx,month1_yj_hk,month2_yj_hk,month3_yj_hk,month4_yj_hk,month5_yj_hk,month6_yj_hk,month7_yj_hk,month8_yj_hk,month9_yj_hk,month10_yj_hk,month11_yj_hk,month12_yj_hk,month1_sj_hk,month2_sj_hk,month3_sj_hk,month4_sj_hk,month5_sj_hk,month6_sj_hk,month7_sj_hk,month8_sj_hk,month9_sj_hk,month10_sj_hk,month11_sj_hk,month12_sj_hk,month1_hk_xz,month2_hk_xz, month3_hk_xz,month4_hk_xz,month5_hk_xz,month6_hk_xz,month7_hk_xz,month8_hk_xz,month9_hk_xz,month10_hk_xz,month11_hk_xz,month12_hk_xz) VALUES(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)" } } [INFO] 2024-07-17 07:28:06.511 +0800 - tenantCode :admin, task dir:/tmp/dolphinscheduler/exec/process/admin/1731933618335571970/13101056104384_19/148524/1305442 [INFO] 2024-07-17 07:28:06.511 +0800 - generate script file:/tmp/dolphinscheduler/exec/process/admin/1731933618335571970/13101056104384_19/148524/1305442/seatunnel_148524_1305442.conf [INFO] 2024-07-17 07:28:06.512 +0800 - SeaTunnel task command: /usr/bin/seatunnel-2.3.5/seatunnel/bin/seatunnel.sh --config /tmp/dolphinscheduler/exec/process/admin/1731933618335571970/13101056104384_19/148524/1305442/seatunnel_148524_1305442.conf [INFO] 2024-07-17 07:28:06.512 
+0800 - Executing pre-SQL: truncate table doris_db.doris_table_name; [INFO] 2024-07-17 07:28:06.627 +0800 - log path /opt/dolphinscheduler/logs/20240717/13101056104384_19-148524-1305442.log,task log name taskAppId=TASK-20240717-13101056104384_19-148524-1305442 [INFO] 2024-07-17 07:28:06.627 +0800 - Begin to create command file:/tmp/dolphinscheduler/exec/process/admin/1731933618335571970/13101056104384_19/148524/1305442/148524_1305442.command [INFO] 2024-07-17 07:28:06.628 +0800 - Success create command file, command: #!/bin/bash BASEDIR=$(cd `dirname $0`; pwd) cd $BASEDIR source /opt/dolphinscheduler/conf/dolphinscheduler_env.sh /usr/bin/seatunnel-2.3.5/seatunnel/bin/seatunnel.sh --config /tmp/dolphinscheduler/exec/process/admin/1731933618335571970/13101056104384_19/148524/1305442/seatunnel_148524_1305442.conf [INFO] 2024-07-17 07:28:06.635 +0800 - task run command: bash /tmp/dolphinscheduler/exec/process/admin/1731933618335571970/13101056104384_19/148524/1305442/148524_1305442.command [INFO] 2024-07-17 07:28:06.636 +0800 - process start, process id is: 17528 [INFO] 2024-07-17 07:28:07.637 +0800 - -> Jul 17, 2024 7:28:07 AM com.hazelcast.internal.config.AbstractConfigLocator INFO: Loading configuration '/usr/bin/seatunnel-2.3.5/seatunnel/config/seatunnel.yaml' from System property 'seatunnel.config' Jul 17, 2024 7:28:07 AM com.hazelcast.internal.config.AbstractConfigLocator INFO: Using configuration file at /usr/bin/seatunnel-2.3.5/seatunnel/config/seatunnel.yaml Jul 17, 2024 7:28:07 AM org.apache.seatunnel.engine.common.config.SeaTunnelConfig INFO: seatunnel.home is /tmp/dolphinscheduler/exec/process/admin/1731933618335571970/13101056104384_19/148524/1305442 Jul 17, 2024 7:28:07 AM com.hazelcast.internal.config.AbstractConfigLocator INFO: Loading configuration '/usr/bin/seatunnel-2.3.5/seatunnel/config/hazelcast.yaml' from System property 'hazelcast.config' Jul 17, 2024 7:28:07 AM com.hazelcast.internal.config.AbstractConfigLocator INFO: Using configuration file at 
/usr/bin/seatunnel-2.3.5/seatunnel/config/hazelcast.yaml [INFO] 2024-07-17 07:28:08.640 +0800 - -> Jul 17, 2024 7:28:07 AM com.hazelcast.internal.config.AbstractConfigLocator INFO: Loading configuration '/usr/bin/seatunnel-2.3.5/seatunnel/config/hazelcast-client.yaml' from System property 'hazelcast.client.config' Jul 17, 2024 7:28:07 AM com.hazelcast.internal.config.AbstractConfigLocator INFO: Using configuration file at /usr/bin/seatunnel-2.3.5/seatunnel/config/hazelcast-client.yaml 2024-07-17 07:28:08,032 INFO [.c.i.s.ClientInvocationService] [main] - hz.client_1 [seatunnel] [5.1] Running with 2 response threads, dynamic=true 2024-07-17 07:28:08,103 INFO [c.h.c.LifecycleService ] [main] - hz.client_1 [seatunnel] [5.1] HazelcastClient 5.1 (20220228 - 21f20e7) is STARTING 2024-07-17 07:28:08,104 INFO [c.h.c.LifecycleService ] [main] - hz.client_1 [seatunnel] [5.1] HazelcastClient 5.1 (20220228 - 21f20e7) is STARTED 2024-07-17 07:28:08,128 INFO [.c.i.c.ClientConnectionManager] [main] - hz.client_1 [seatunnel] [5.1] Trying to connect to cluster: seatunnel 2024-07-17 07:28:08,131 INFO [.c.i.c.ClientConnectionManager] [main] - hz.client_1 [seatunnel] [5.1] Trying to connect to [10.96.74.17]:30030 2024-07-17 07:28:08,166 INFO [c.h.c.LifecycleService ] [main] - hz.client_1 [seatunnel] [5.1] HazelcastClient 5.1 (20220228 - 21f20e7) is CLIENT_CONNECTED 2024-07-17 07:28:08,167 INFO [.c.i.c.ClientConnectionManager] [main] - hz.client_1 [seatunnel] [5.1] Authenticated with server [localhost]:5801:798ca61f-7550-4270-bd35-496c78671f48, server version: 5.1, local address: /10.233.103.87:35712 2024-07-17 07:28:08,169 INFO [c.h.i.d.Diagnostics ] [main] - hz.client_1 [seatunnel] [5.1] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments. 
2024-07-17 07:28:08,179 INFO [c.h.c.i.s.ClientClusterService] [hz.client_1.event-2] - hz.client_1 [seatunnel] [5.1] Members [1] { Member [localhost]:5801 - 798ca61f-7550-4270-bd35-496c78671f48 } 2024-07-17 07:28:08,206 INFO [.c.i.s.ClientStatisticsService] [main] - Client statistics is enabled with period 5 seconds. 2024-07-17 07:28:08,355 INFO [.ClientJobExecutionEnvironment] [main] - add common jar in plugins :[] 2024-07-17 07:28:08,372 INFO [o.a.s.c.s.u.ConfigBuilder ] [main] - Loading config file from path: /tmp/dolphinscheduler/exec/process/admin/1731933618335571970/13101056104384_19/148524/1305442/seatunnel_148524_1305442.conf 2024-07-17 07:28:08,421 INFO [o.a.s.c.s.u.ConfigShadeUtils ] [main] - Load config shade spi: [base64] 2024-07-17 07:28:08,495 INFO [o.a.s.c.s.u.ConfigBuilder ] [main] - Parsed config file: { "env" : { "job.name" : "1813354955321581569", "job.mode" : "BATCH", "checkpoint.interval" : 10000, "parallelism" : 1 }, "source" : [ { "catalog_name" : "ice_05", "iceberg.catalog.config" : { "type" : "hive", "uri" : "thrift://iceberg-hms-uri:port", "warehouse" : "hdfs://nameservice1/iceberg-warehouse/ice_05" }, "namespace" : "ads", "plugin_name" : "Iceberg", "iceberg.hadoop-conf-path" : "/hadoop_conf", "table" : "iceberg_table_name" } ], "transform" : [], "sink" : [ { "password" : "******", "driver" : "com.mysql.cj.jdbc.Driver", "query" : "INSERT INTO 
doris_db.doris_table_name(zyear,zmonth,zyearmonth,node_code,node_name,node_path,name_path,tree_level,root_code,root_name,level01_code,level01_name,level02_code,level02_name,level03_code,level03_name,level04_code,level04_name,level05_code,level05_name,level06_code,level06_name,customer_project_no,customer_project_name,kunnr,kunnr_name,industrial_sector,cus_association,industrial_unit,dept,market_manager,export_type,export_region,export_countries,product_line1,product_name,last_year_actual_account_receivable,current_receivable_balance,this_year_yj_hk_nc,this_year_yj_hk_by,this_year_accumulate_finish_hk,hk_hj,hk_yj,hk_lx,month1_yj_hk,month2_yj_hk,month3_yj_hk,month4_yj_hk,month5_yj_hk,month6_yj_hk,month7_yj_hk,month8_yj_hk,month9_yj_hk,month10_yj_hk,month11_yj_hk,month12_yj_hk,month1_sj_hk,month2_sj_hk,month3_sj_hk,month4_sj_hk,month5_sj_hk,month6_sj_hk,month7_sj_hk,month8_sj_hk,month9_sj_hk,month10_sj_hk,month11_sj_hk,month12_sj_hk,month1_hk_xz,month2_hk_xz,month3_hk_xz,month4_hk_xz,month5_hk_xz,month6_hk_xz,month7_hk_xz,month8_hk_xz,month9_hk_xz,month10_hk_xz,month11_hk_xz,month12_hk_xz) VALUES(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)", "plugin_name" : "Jdbc", "user" : "doris_user", "url" : "jdbc:mysql://doris_fe:port/doris_db?" 
} ] } 2024-07-17 07:28:08,511 INFO [.s.p.d.AbstractPluginDiscovery] [main] - Load SeaTunnelSink Plugin from /usr/bin/seatunnel-2.3.5/seatunnel/connectors 2024-07-17 07:28:08,518 INFO [.s.p.d.AbstractPluginDiscovery] [main] - Discovery plugin jar for: PluginIdentifier{engineType='seatunnel', pluginType='source', pluginName='Iceberg'} at: file:/usr/bin/seatunnel-2.3.5/seatunnel/connectors/connector-iceberg-2.3.5.jar 2024-07-17 07:28:08,519 INFO [.s.p.d.AbstractPluginDiscovery] [main] - Discovery plugin jar for: PluginIdentifier{engineType='seatunnel', pluginType='sink', pluginName='Jdbc'} at: file:/usr/bin/seatunnel-2.3.5/seatunnel/connectors/connector-jdbc-2.3.5.jar 2024-07-17 07:28:08,526 INFO [p.MultipleTableJobConfigParser] [main] - start generating all sources. [INFO] 2024-07-17 07:28:09.641 +0800 - -> 2024-07-17 07:28:08,664 INFO [a.s.c.s.i.IcebergCatalogLoader] [main] - Hadoop config initialized: org.apache.hadoop.hdfs.HdfsConfiguration 2024-07-17 07:28:08,857 INFO [o.a.h.h.c.HiveConf ] [main] - Found configuration file null 2024-07-17 07:28:09,154 WARN [o.a.h.u.NativeCodeLoader ] [main] - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2024-07-17 07:28:09,203 INFO [o.a.h.h.m.HiveMetaStoreClient ] [main] - Trying to connect to metastore with URI thrift://dcopro017:31105 2024-07-17 07:28:09,224 INFO [o.a.h.h.m.HiveMetaStoreClient ] [main] - Opened a connection to metastore, current connections: 1 2024-07-17 07:28:09,258 INFO [o.a.h.h.m.HiveMetaStoreClient ] [main] - Connected to metastore. 
2024-07-17 07:28:09,258 INFO [.h.h.m.RetryingMetaStoreClient] [main] - RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.metastore.HiveMetaStoreClient ugi=root (auth:SIMPLE) retries=1 delay=1 lifetime=0 2024-07-17 07:28:09,412 INFO [i.BaseMetastoreTableOperations] [main] - Refreshing table metadata from new version: hdfs://nameservice1/iceberg-warehouse/ice_05/ads.db/icberg_table_name-ac84fb0d63e146a8a0721cdc5294313f/metadata/00010-4fd1e9fb-cf90-4681-89fb-6154d2b10cb6.metadata.json [INFO] 2024-07-17 07:28:10.642 +0800 - -> 2024-07-17 07:28:10,098 INFO [o.a.i.BaseMetastoreCatalog ] [main] - Table loaded by catalog: ice_05.ads.iceberg_table_name 2024-07-17 07:28:10,103 INFO [o.a.s.c.s.i.c.IcebergCatalog ] [main] - Fetched table details for: ads.ads_yygk_hk_detailed_list 2024-07-17 07:28:10,110 INFO [a.s.c.s.i.IcebergCatalogLoader] [main] - Hadoop config initialized: org.apache.hadoop.hdfs.HdfsConfiguration 2024-07-17 07:28:10,142 INFO [i.BaseMetastoreTableOperations] [main] - Refreshing table metadata from new version: hdfs://nameservice1/iceberg-warehouse/ice_05/ads.db/icberg_table_name-ac84fb0d63e146a8a0721cdc5294313f/metadata/00010-4fd1e9fb-cf90-4681-89fb-6154d2b10cb6.metadata.json 2024-07-17 07:28:10,156 INFO [o.a.i.BaseMetastoreCatalog ] [main] - Table loaded by catalog: ice_05.ads.iceberg_table_name 2024-07-17 07:28:10,160 INFO [o.a.s.a.t.f.FactoryUtil ] [main] - get the CatalogTable from source Iceberg: Iceberg.ads.ads_yygk_hk_detailed_list 2024-07-17 07:28:10,175 INFO [.s.p.d.AbstractPluginDiscovery] [main] - Load SeaTunnelSource Plugin from /usr/bin/seatunnel-2.3.5/seatunnel/connectors 2024-07-17 07:28:10,176 INFO [.s.p.d.AbstractPluginDiscovery] [main] - Discovery plugin jar for: PluginIdentifier{engineType='seatunnel', pluginType='source', pluginName='Iceberg'} at: file:/usr/bin/seatunnel-2.3.5/seatunnel/connectors/connector-iceberg-2.3.5.jar 2024-07-17 07:28:10,178 INFO [p.MultipleTableJobConfigParser] [main] - start generating all transforms. 
2024-07-17 07:28:10,178 INFO [p.MultipleTableJobConfigParser] [main] - start generating all sinks. 2024-07-17 07:28:10,181 INFO [.s.p.d.AbstractPluginDiscovery] [main] - Load SeaTunnelSink Plugin from /usr/bin/seatunnel-2.3.5/seatunnel/connectors 2024-07-17 07:28:10,182 INFO [.s.p.d.AbstractPluginDiscovery] [main] - Discovery plugin jar for: PluginIdentifier{engineType='seatunnel', pluginType='sink', pluginName='Jdbc'} at: file:/usr/bin/seatunnel-2.3.5/seatunnel/connectors/connector-jdbc-2.3.5.jar 2024-07-17 07:28:10,255 INFO [o.a.s.e.c.j.ClientJobProxy ] [main] - Start submit job, job id: 865736592944267265, with plugin jar [file:/usr/bin/seatunnel-2.3.5/seatunnel/connectors/connector-jdbc-2.3.5.jar, file:/usr/bin/seatunnel-2.3.5/seatunnel/connectors/connector-iceberg-2.3.5.jar] 2024-07-17 07:28:10,592 INFO [o.a.s.e.c.j.ClientJobProxy ] [main] - Submit job finished, job id: 865736592944267265, job name: 1813354955321581569 2024-07-17 07:28:10,621 WARN [o.a.s.e.c.j.JobMetricsRunner ] [job-metrics-runner-865736592944267265] - Failed to get job metrics summary, it maybe first-run [INFO] 2024-07-17 07:28:38.645 +0800 - -> 2024-07-17 07:28:38,566 INFO [o.a.s.e.c.j.ClientJobProxy ] [main] - Job (865736592944267265) end with state FAILED 2024-07-17 07:28:38,567 INFO [c.h.c.LifecycleService ] [main] - hz.client_1 [seatunnel] [5.1] HazelcastClient 5.1 (20220228 - 21f20e7) is SHUTTING_DOWN 2024-07-17 07:28:38,572 INFO [.c.i.c.ClientConnectionManager] [main] - hz.client_1 [seatunnel] [5.1] Removed connection to endpoint: [localhost]:5801:798ca61f-7550-4270-bd35-496c78671f48, connection: ClientConnection{alive=false, connectionId=1, channel=NioChannel{/10.233.103.87:35712->/10.96.74.17:30030}, remoteAddress=[localhost]:5801, lastReadTime=2024-07-17 07:28:38.548, lastWriteTime=2024-07-17 07:28:38.214, closedTime=2024-07-17 07:28:38.569, connected server version=5.1} 2024-07-17 07:28:38,572 INFO [c.h.c.LifecycleService ] [main] - hz.client_1 [seatunnel] [5.1] HazelcastClient 
5.1 (20220228 - 21f20e7) is CLIENT_DISCONNECTED 2024-07-17 07:28:38,575 INFO [c.h.c.LifecycleService ] [main] - hz.client_1 [seatunnel] [5.1] HazelcastClient 5.1 (20220228 - 21f20e7) is SHUTDOWN 2024-07-17 07:28:38,575 INFO [s.c.s.s.c.ClientExecuteCommand] [main] - Closed SeaTunnel client...... 2024-07-17 07:28:38,575 INFO [s.c.s.s.c.ClientExecuteCommand] [main] - Closed metrics executor service ...... 2024-07-17 07:28:38,575 ERROR [o.a.s.c.s.SeaTunnel ] [main] - =============================================================================== 2024-07-17 07:28:38,575 ERROR [o.a.s.c.s.SeaTunnel ] [main] - Fatal Error, 2024-07-17 07:28:38,575 ERROR [o.a.s.c.s.SeaTunnel ] [main] - Please submit bug report in https://github.com/apache/seatunnel/issues 2024-07-17 07:28:38,575 ERROR [o.a.s.c.s.SeaTunnel ] [main] - Reason:SeaTunnel job executed failed 2024-07-17 07:28:38,577 ERROR [o.a.s.c.s.SeaTunnel ] [main] - Exception StackTrace:org.apache.seatunnel.core.starter.exception.CommandExecuteException: SeaTunnel job executed failed at org.apache.seatunnel.core.starter.seatunnel.command.ClientExecuteCommand.execute(ClientExecuteCommand.java:202) at org.apache.seatunnel.core.starter.SeaTunnel.run(SeaTunnel.java:40) at org.apache.seatunnel.core.starter.seatunnel.SeaTunnelClient.main(SeaTunnelClient.java:34) Caused by: org.apache.seatunnel.engine.common.exception.SeaTunnelEngineException: java.lang.RuntimeException: java.lang.RuntimeException: org.apache.seatunnel.connectors.seatunnel.jdbc.exception.JdbcConnectorException: ErrorCode:[COMMON-08], ErrorDescription:[Sql operation failed, such as (execute,addBatch,close) etc...] - Writing records to JDBC failed. 
at org.apache.seatunnel.engine.server.task.flow.SinkFlowLifeCycle.received(SinkFlowLifeCycle.java:262) at org.apache.seatunnel.engine.server.task.flow.SinkFlowLifeCycle.received(SinkFlowLifeCycle.java:68) at org.apache.seatunnel.engine.server.task.SeaTunnelTransformCollector.collect(SeaTunnelTransformCollector.java:39) at org.apache.seatunnel.engine.server.task.SeaTunnelTransformCollector.collect(SeaTunnelTransformCollector.java:27) at org.apache.seatunnel.engine.server.task.group.queue.IntermediateBlockingQueue.handleRecord(IntermediateBlockingQueue.java:75) at org.apache.seatunnel.engine.server.task.group.queue.IntermediateBlockingQueue.collect(IntermediateBlockingQueue.java:50) at org.apache.seatunnel.engine.server.task.flow.IntermediateQueueFlowLifeCycle.collect(IntermediateQueueFlowLifeCycle.java:51) at org.apache.seatunnel.engine.server.task.TransformSeaTunnelTask.collect(TransformSeaTunnelTask.java:73) at org.apache.seatunnel.engine.server.task.SeaTunnelTask.stateProcess(SeaTunnelTask.java:168) at org.apache.seatunnel.engine.server.task.TransformSeaTunnelTask.call(TransformSeaTunnelTask.java:78) at org.apache.seatunnel.engine.server.TaskExecutionService$BlockingWorker.run(TaskExecutionService.java:703) at org.apache.seatunnel.engine.server.TaskExecutionService$NamedTaskWrapper.run(TaskExecutionService.java:1004) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.RuntimeException: org.apache.seatunnel.connectors.seatunnel.jdbc.exception.JdbcConnectorException: ErrorCode:[COMMON-08], ErrorDescription:[Sql operation failed, such as (execute,addBatch,close) etc...] - Writing records to JDBC failed. 
at org.apache.seatunnel.connectors.seatunnel.common.multitablesink.MultiTableSinkWriter.subSinkErrorCheck(MultiTableSinkWriter.java:121) at org.apache.seatunnel.connectors.seatunnel.common.multitablesink.MultiTableSinkWriter.write(MultiTableSinkWriter.java:158) at org.apache.seatunnel.connectors.seatunnel.common.multitablesink.MultiTableSinkWriter.write(MultiTableSinkWriter.java:43) at org.apache.seatunnel.engine.server.task.flow.SinkFlowLifeCycle.received(SinkFlowLifeCycle.java:252) ... 16 more Caused by: org.apache.seatunnel.connectors.seatunnel.jdbc.exception.JdbcConnectorException: ErrorCode:[COMMON-08], ErrorDescription:[Sql operation failed, such as (execute,addBatch,close) etc...] - Writing records to JDBC failed. at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.writeRecord(JdbcOutputFormat.java:109) at org.apache.seatunnel.connectors.seatunnel.jdbc.sink.JdbcSinkWriter.write(JdbcSinkWriter.java:129) at org.apache.seatunnel.connectors.seatunnel.jdbc.sink.JdbcSinkWriter.write(JdbcSinkWriter.java:47) at org.apache.seatunnel.connectors.seatunnel.common.multitablesink.MultiTableWriterRunnable.run(MultiTableWriterRunnable.java:62) ... 5 more Caused by: org.apache.seatunnel.connectors.seatunnel.jdbc.exception.JdbcConnectorException: ErrorCode:[COMMON-10], ErrorDescription:[Flush data operation that in sink connector failed] at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.flush(JdbcOutputFormat.java:142) at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.writeRecord(JdbcOutputFormat.java:106) ... 8 more Caused by: java.sql.SQLException: Parameter index out of range (81 > number of parameters, which is 80). 
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:129) at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97) at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:89) at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:63) at com.mysql.cj.jdbc.ClientPreparedStatement.checkBounds(ClientPreparedStatement.java:1396) at com.mysql.cj.jdbc.ClientPreparedStatement.getCoreParameterIndex(ClientPreparedStatement.java:1409) at com.mysql.cj.jdbc.ClientPreparedStatement.setObject(ClientPreparedStatement.java:1693) at org.apache.seatunnel.shade.com.zaxxer.hikari.pool.HikariProxyPreparedStatement.setObject(HikariProxyPreparedStatement.java) at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.executor.FieldNamedPreparedStatement.setObject(FieldNamedPreparedStatement.java:167) at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.converter.AbstractJdbcRowConverter.toExternal(AbstractJdbcRowConverter.java:133) at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.executor.SimpleBatchStatementExecutor.addToBatch(SimpleBatchStatementExecutor.java:45) at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.executor.SimpleBatchStatementExecutor.addToBatch(SimpleBatchStatementExecutor.java:31) at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.executor.BufferedBatchStatementExecutor.executeBatch(BufferedBatchStatementExecutor.java:51) at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.attemptFlush(JdbcOutputFormat.java:172) at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.flush(JdbcOutputFormat.java:136) ... 9 more at org.apache.seatunnel.core.starter.seatunnel.command.ClientExecuteCommand.execute(ClientExecuteCommand.java:194) ... 2 more
```

### Zeta or Flink or Spark Version

Zeta

### Java or Scala Version

openjdk 1.8.0_312

### Screenshots

_No response_

### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
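For context, the mismatch can be illustrated standalone. The sketch below is hypothetical helper code (not SeaTunnel or Connector/J source): it mimics the 1-based bounds check that `ClientPreparedStatement.checkBounds` performs, showing why binding the 81st row field against an INSERT with only 80 `?` placeholders produces exactly the logged error.

```java
// Hypothetical repro sketch: the JDBC sink binds every field of the upstream
// row, but the configured INSERT only declares 80 '?' placeholders.
public class ParamBoundsSketch {

    /**
     * Simulates binding rowFields values against a prepared statement with
     * 'placeholders' '?' marks. Returns a Connector/J-style error message on
     * overflow, or null if every bind index is in range.
     */
    static String bindError(int placeholders, int rowFields) {
        for (int i = 1; i <= rowFields; i++) { // JDBC parameter indexes are 1-based
            if (i > placeholders) {            // the condition checkBounds rejects
                return "Parameter index out of range (" + i
                        + " > number of parameters, which is " + placeholders + ").";
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // 80 placeholders in the sink query, 81 fields in the Iceberg row:
        System.out.println(bindError(80, 81));
        // → Parameter index out of range (81 > number of parameters, which is 80).
    }
}
```

Dropping the extra source field before the sink (e.g. selecting only the 80 needed columns in a transform) would keep the bound-field count equal to the placeholder count.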