dik111 opened a new issue, #4515:
URL: https://github.com/apache/incubator-seatunnel/issues/4515

   ### Search before asking
   
   - [X] I had searched in the [issues](https://github.com/apache/incubator-seatunnel/issues?q=is%3Aissue+label%3A%22bug%22) and found no similar issues.
   
   
   ### What happened
   
   I tested syncing data from Paimon to SAP HANA, and the job fails with `SQLDataExceptionSapDB: Invalid number: NaN`, even though every field in the HANA table is VARCHAR. Here is my HANA table schema:
   ```
   create column table EBS.SFY_WSH_PACKSLIP_HEADERS_ALL_FTS_TEST (
   "COPY_NUMBER" VARCHAR(240),
   "HEADER_ID" VARCHAR(240),
   "ORG_ID" VARCHAR(240),
   "TAG_NUMBER" VARCHAR(30),
   "STRUCTURE_CODE" VARCHAR(30),
   "ORGANIZATION_ID" VARCHAR(30),
   "OE_HEADER_ID" VARCHAR(240),
   "CUSTOMER_ID" VARCHAR(240),
   "APPROVER" VARCHAR(240),
   "TYPE" VARCHAR(30),
   "REFERENCE_ID" VARCHAR(240),
   "PACKSLIP_NUMBER" VARCHAR(30),
   "STATUS" VARCHAR(30),
   "CREATION_DATE" VARCHAR(240),
   "LAST_UPDATE_DATE" VARCHAR(240),
   "PT" VARCHAR(240),
   "FTS_PT" VARCHAR(240),
   "FTS_TS" VARCHAR(240)
   )
   ```
   
   ### SeaTunnel Version
   
   2.3.1
   
   ### SeaTunnel Config
   
   ```conf
   env {
     # You can set SeaTunnel environment configuration here
     execution.parallelism = 1
     job.mode = "BATCH"
     checkpoint.interval = 10000
     #execution.checkpoint.interval = 10000
     #execution.checkpoint.data-uri = "hdfs://localhost:9000/checkpoint"
   }
   
    source {
      Paimon {
        warehouse = "hdfs://hacluster/warehouse/tablespace/managed/hive/table_store"
        database = "ods_ebs"
        table = "pt_sfy_wsh_packslip_headers_all_fts"
        hdfs_site_path = "/data/software/spark/spark-3.2.1-bin-hadoop2.7/conf/hdfs-site.xml"
      }
    }

    sink {
      jdbc {
        url = "jdbc:sap://xx:30015?reconnect=true"
        driver = "com.sap.db.jdbc.Driver"
        user = "xx"
        password = "xx"
        query = "insert into EBS.SFY_WSH_PACKSLIP_HEADERS_ALL_FTS_TEST(COPY_NUMBER,HEADER_ID,ORG_ID,TAG_NUMBER,STRUCTURE_CODE,ORGANIZATION_ID,OE_HEADER_ID,CUSTOMER_ID,APPROVER,TYPE,REFERENCE_ID,PACKSLIP_NUMBER,STATUS,CREATION_DATE,LAST_UPDATE_DATE,PT,FTS_PT,FTS_TS) values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)"
      }
    }
   ```
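
   Since the Paimon source reports the affected field as a numeric type while every sink column is VARCHAR, one possible workaround is to cast the numeric fields to strings before the JDBC sink. This is only a sketch, assuming SeaTunnel's `Sql` transform is available in 2.3.1; the table names and the two columns shown are illustrative, not taken from the job above:

   ```conf
   transform {
     Sql {
       # Hypothetical: cast fields that arrive as DOUBLE to STRING so the
       # sink binds them with setString instead of setDouble.
       source_table_name = "paimon_source"
       result_table_name = "stringified"
       query = "select cast(COPY_NUMBER as string) as COPY_NUMBER, cast(HEADER_ID as string) as HEADER_ID from paimon_source"
     }
   }
   ```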
   
   
   ### Running Command
   
   ```shell
   bin/start-seatunnel-spark-3-connector-v2.sh \
   --master yarn \
   --deploy-mode client \
   --config ./config/paimon_hana_pt_sfy_wsh_packslip_headers_all_fts.config
   ```
   
   
   ### Error Exception
   
   ```log
    23/04/07 16:27:37 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) (jtbihdp09.sogal.com executor 1): org.apache.seatunnel.connectors.seatunnel.jdbc.exception.JdbcConnectorException: ErrorCode:[COMMON-08], ErrorDescription:[Sql operation failed, such as (execute,addBatch,close) etc...] - Writing records to JDBC failed.
        at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.writeRecord(JdbcOutputFormat.java:148)
        at org.apache.seatunnel.connectors.seatunnel.jdbc.sink.JdbcSinkWriter.write(JdbcSinkWriter.java:81)
        at org.apache.seatunnel.connectors.seatunnel.jdbc.sink.JdbcSinkWriter.write(JdbcSinkWriter.java:44)
        at org.apache.seatunnel.translation.spark.sink.write.SeaTunnelSparkDataWriter.write(SeaTunnelSparkDataWriter.java:59)
        at org.apache.seatunnel.translation.spark.sink.write.SeaTunnelSparkDataWriter.write(SeaTunnelSparkDataWriter.java:37)
        at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$1(WriteToDataSourceV2Exec.scala:419)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1496)
        at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:457)
        at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:358)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:131)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: org.apache.seatunnel.connectors.seatunnel.jdbc.exception.JdbcConnectorException: ErrorCode:[COMMON-10], ErrorDescription:[Flush data operation that in sink connector failed] - com.sap.db.jdbc.exceptions.SQLDataExceptionSapDB: Invalid number: NaN
        at com.sap.db.jdbc.exceptions.SQLExceptionSapDB._newInstance(SQLExceptionSapDB.java:172)
        at com.sap.db.jdbc.exceptions.SQLExceptionSapDB.newInstance(SQLExceptionSapDB.java:26)
        at com.sap.db.jdbc.converters.AbstractConverter._newSetNumericValueInvalidException(AbstractConverter.java:815)
        at com.sap.db.jdbc.converters.CharacterConverter.setDouble(CharacterConverter.java:353)
        at com.sap.db.jdbc.converters.CharacterConverter.setDouble(CharacterConverter.java:23)
        at com.sap.db.jdbc.PreparedStatementSapDB._setDouble(PreparedStatementSapDB.java:2323)
        at com.sap.db.jdbc.PreparedStatementSapDB.setDouble(PreparedStatementSapDB.java:868)
        at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.executor.FieldNamedPreparedStatement.setDouble(FieldNamedPreparedStatement.java:109)
        at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.converter.AbstractJdbcRowConverter.toExternal(AbstractJdbcRowConverter.java:151)
        at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.executor.SimpleBatchStatementExecutor.addToBatch(SimpleBatchStatementExecutor.java:45)
        at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.executor.SimpleBatchStatementExecutor.addToBatch(SimpleBatchStatementExecutor.java:31)
        at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.executor.BufferedBatchStatementExecutor.executeBatch(BufferedBatchStatementExecutor.java:51)
        at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.attemptFlush(JdbcOutputFormat.java:197)
        at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.flush(JdbcOutputFormat.java:162)
        at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.writeRecord(JdbcOutputFormat.java:145)
        at org.apache.seatunnel.connectors.seatunnel.jdbc.sink.JdbcSinkWriter.write(JdbcSinkWriter.java:81)
        at org.apache.seatunnel.connectors.seatunnel.jdbc.sink.JdbcSinkWriter.write(JdbcSinkWriter.java:44)
        at org.apache.seatunnel.translation.spark.sink.write.SeaTunnelSparkDataWriter.write(SeaTunnelSparkDataWriter.java:59)
        at org.apache.seatunnel.translation.spark.sink.write.SeaTunnelSparkDataWriter.write(SeaTunnelSparkDataWriter.java:37)
        at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$1(WriteToDataSourceV2Exec.scala:419)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1496)
        at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:457)
        at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:358)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:131)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

        at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.flush(JdbcOutputFormat.java:168)
        at org.apache.seatunnel.connectors.seatunnel.jdbc.internal.JdbcOutputFormat.writeRecord(JdbcOutputFormat.java:145)
        ... 16 more
   ```
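
   From the stack trace, `AbstractJdbcRowConverter.toExternal` binds the failing field via `setDouble`, so the Paimon source is presenting it as a DOUBLE even though the HANA column is VARCHAR, and HANA's `CharacterConverter` rejects NaN. The following is a minimal Python sketch (a hypothetical pre-write guard, not SeaTunnel code) of the kind of conversion that would avoid the error by stringifying doubles and mapping NaN to NULL:

   ```python
   import math

   def to_varchar_param(value):
       """Hypothetical guard for binding a value into a VARCHAR column.

       NaN cannot be bound as a number by the HANA driver, so map it to
       NULL (None); every other value is bound in its string form.
       """
       if value is None:
           return None
       if isinstance(value, float) and math.isnan(value):
           return None
       return str(value)

   # NaN would trigger "Invalid number: NaN" if bound via setDouble;
   # nulling or stringifying it first sidesteps the driver-side check.
   print(to_varchar_param(float("nan")))  # None
   print(to_varchar_param(240.0))         # 240.0
   ```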
   
   
   ### Flink or Spark Version
   
   spark version 3.2.1
   
   ### Java or Scala Version
   
   java 8
   
   ### Screenshots
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
   

