qifanlili opened a new issue, #8775:
URL: https://github.com/apache/seatunnel/issues/8775

   ### Search before asking
   
   - [x] I had searched in the [issues](https://github.com/apache/seatunnel/issues?q=is%3Aissue+label%3A%22bug%22) and found no similar issues.
   
   
   ### What happened
   
   The collStats value is too large: the Long type cannot parse a numeric string expressed in scientific notation.
   
   The error occurs here:
   
   <img width="815" alt="Image" src="https://github.com/user-attachments/assets/db6d6a06-e285-4b3b-a7f9-ae74e2ce8601" />
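   
   For reference, here is a minimal Java sketch of the failing conversion, using the value `"1.3360484963E10"` taken from the stack trace below. The `BigDecimal` path is only an illustration of a possible workaround, not the connector's current code:
   
   ```java
   public class CollStatsParseDemo {
       public static void main(String[] args) {
           // Scientific-notation string as returned by collStats for a large collection.
           String avgObjSize = "1.3360484963E10";
   
           // Mirrors the Long.parseLong call seen in SamplingSplitStrategy: it throws.
           try {
               Long.parseLong(avgObjSize);
           } catch (NumberFormatException e) {
               System.out.println("Long.parseLong fails: " + e.getMessage());
           }
   
           // Going through BigDecimal handles the scientific notation and yields the long value.
           long parsed = new java.math.BigDecimal(avgObjSize).longValue();
           System.out.println("BigDecimal path: " + parsed); // prints 13360484963
       }
   }
   ```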
   
   ### SeaTunnel Version
   
   2.3.9
   
   ### SeaTunnel Config
   
   ```conf
   env {
     job.mode = "BATCH"
     spark.yarn.queue = "root.jwth.sync"
     spark.task.maxFailures = 1
     spark.executor.memory = 4g
     spark.executor.cores = 1
     spark.yarn.maxAppAttempts = 1
     spark.shuffle.service.enabled=true
   }
   
   source {
     MongoDB {
           parallelism = 3
        uri = mongodb://user:password@hosts:27017/database?readPreference=secondary&slaveOk=true.
           database = db_name
           collection = student_detail
           schema = {
             fields {
                "_id" = "String"
             }
           } 
       }
   }
   
   sink {
     Console {}
   }
   ```
   
   ### Running Command
   
   ```shell
   sh /opt/seatunnel/bin/start-seatunnel-spark-3-connector-v2.sh --user username --master yarn --deploy-mode cluster --name jobName --config mongo2hive.conf
   ```
   
   ### Error Exception
   
   ```log
   Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: SourceSplitEnumerator run failed.
           at org.apache.seatunnel.translation.spark.source.partition.batch.SeaTunnelBatchPartitionReader.next(SeaTunnelBatchPartitionReader.java:38)
           at org.apache.spark.sql.execution.datasources.v2.PartitionIterator.hasNext(DataSourceRDD.scala:119)
           at org.apache.spark.sql.execution.datasources.v2.MetricsIterator.hasNext(DataSourceRDD.scala:156)
           at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.$anonfun$hasNext$1(DataSourceRDD.scala:63)
           at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.$anonfun$hasNext$1$adapted(DataSourceRDD.scala:63)
           at scala.Option.exists(Option.scala:376)
           at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.hasNext(DataSourceRDD.scala:63)
           at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.advanceToNextIter(DataSourceRDD.scala:97)
           at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.hasNext(DataSourceRDD.scala:63)
           at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
           at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
           at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
           at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
           at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
           at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$1(WriteToDataSourceV2Exec.scala:435)
           at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1538)
           at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:480)
           at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:381)
           at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
           at org.apache.spark.scheduler.Task.run(Task.scala:136)
           at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
           at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
           at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:748)
   Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: SourceSplitEnumerator run failed.
           at java.util.concurrent.FutureTask.report(FutureTask.java:122)
           at java.util.concurrent.FutureTask.get(FutureTask.java:192)
           at org.apache.seatunnel.translation.source.ParallelSource.run(ParallelSource.java:142)
           at org.apache.seatunnel.translation.spark.source.partition.batch.ParallelBatchPartitionReader.lambda$prepare$0(ParallelBatchPartitionReader.java:117)
           at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
           at java.util.concurrent.FutureTask.run(FutureTask.java:266)
           at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
           at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
           ... 3 more
   Caused by: java.lang.RuntimeException: SourceSplitEnumerator run failed.
           at org.apache.seatunnel.translation.source.ParallelSource.lambda$run$0(ParallelSource.java:136)
           ... 7 more
   Caused by: java.lang.NumberFormatException: For input string: "1.3360484963E10"
           at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
           at java.lang.Long.parseLong(Long.java:589)
           at java.lang.Long.parseLong(Long.java:631)
           at org.apache.seatunnel.connectors.seatunnel.mongodb.source.split.SamplingSplitStrategy.lambda$getDocumentNumAndAvgSize$2(SamplingSplitStrategy.java:115)
           at java.util.Optional.map(Optional.java:215)
           at org.apache.seatunnel.connectors.seatunnel.mongodb.source.split.SamplingSplitStrategy.getDocumentNumAndAvgSize(SamplingSplitStrategy.java:115)
           at org.apache.seatunnel.connectors.seatunnel.mongodb.source.split.SamplingSplitStrategy.split(SamplingSplitStrategy.java:73)
           at org.apache.seatunnel.connectors.seatunnel.mongodb.source.enumerator.MongodbSplitEnumerator.run(MongodbSplitEnumerator.java:78)
           at org.apache.seatunnel.translation.source.ParallelSource.lambda$run$0(ParallelSource.java:134)
           ... 7 more
   ```
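   
   One possible direction for a fix, sketched below: replace the direct Long.parseLong on the collStats value in SamplingSplitStrategy.getDocumentNumAndAvgSize with a more tolerant conversion. The helper is hypothetical (its name and placement are not from the existing code), and the real change may need to work on the BSON document returned by collStats rather than on a string:
   
   ```java
   import java.math.BigDecimal;
   
   final class NumberParsing {
       private NumberParsing() {}
   
       // Hypothetical helper: convert a numeric string from collStats to long,
       // falling back to BigDecimal when the value is written in scientific
       // notation (e.g. "1.3360484963E10") or is otherwise not a plain integer.
       static long toLong(String value) {
           try {
               return Long.parseLong(value);
           } catch (NumberFormatException e) {
               return new BigDecimal(value).longValue();
           }
       }
   }
   ```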
   
   ### Zeta or Flink or Spark Version
   
   _No response_
   
   ### Java or Scala Version
   
   _No response_
   
   ### Screenshots
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [x] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [x] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
   

