804e opened a new issue, #13049: URL: https://github.com/apache/hudi/issues/13049
**Describe the problem you faced**

Following the [Flink quick start guide](https://hudi.apache.org/cn/docs/flink-quick-start-guide), inserting data is very slow: the first `INSERT` takes about 11 minutes, and I found that the first checkpoint times out. Please help me look into the problem.

**To Reproduce**

Steps to reproduce the behavior (Flink SQL client):

```
Flink SQL> set sql-client.verbose = true;
[INFO] Session property has been set.

Flink SQL> SET execution.checkpointing.interval = 1min;
[INFO] Session property has been set.

Flink SQL> set sql-client.execution.result-mode = tableau;
[INFO] Session property has been set.

Flink SQL> CREATE TABLE hudi_table(
>   ts BIGINT,
>   uuid VARCHAR(40) PRIMARY KEY NOT ENFORCED,
>   rider VARCHAR(20),
>   driver VARCHAR(20),
>   fare DOUBLE,
>   city VARCHAR(20)
> )
> PARTITIONED BY (`city`)
> WITH (
>   'connector' = 'hudi',
>   'path' = 'file:///opt/flink-store/data/hudi_table',
>   'table.type' = 'MERGE_ON_READ'
> );
[INFO] Execute statement succeed.

Flink SQL> INSERT INTO hudi_table
> VALUES
> (1695159649087,'334e26e9-8355-45cc-97c6-c31daf0df330','rider-A','driver-K',19.10,'san_francisco'),
> (1695091554788,'e96c4396-3fad-413a-a942-4cb36106d721','rider-C','driver-M',27.70,'san_francisco'),
> (1695046462179,'9909a8b1-2d15-4d3d-8ec9-efc48c536a00','rider-D','driver-L',33.90,'san_francisco'),
> (1695332066204,'1dced545-862b-4ceb-8b43-d2a568f6616b','rider-E','driver-O',93.50,'san_francisco'),
> (1695516137016,'e3cf430c-889d-4015-bc98-59bdce1e530c','rider-F','driver-P',34.15,'sao_paulo'),
> (1695376420876,'7a84095f-737f-40bc-b62f-6b69664712d2','rider-G','driver-Q',43.40,'sao_paulo'),
> (1695173887231,'3eeb61f7-c2b0-4636-99bd-5d7a5a1d2c04','rider-I','driver-S',41.06,'chennai'),
> (1695115999911,'c8abbe79-8d89-47ea-b4ce-4d224bae5bfa','rider-J','driver-T',17.85,'chennai');
[INFO] Submitting SQL update statement to the cluster...
2025-03-28 02:36:13,495 DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory [] - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"GetGroups"})
2025-03-28 02:36:13,496 DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory [] - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of failed kerberos logins and latency (milliseconds)"})
2025-03-28 02:36:13,496 DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory [] - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of successful kerberos logins and latency (milliseconds)"})
2025-03-28 02:36:13,497 DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory [] - field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since last successful login"})
2025-03-28 02:36:13,497 DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory [] - field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since startup"})
2025-03-28 02:36:13,498 DEBUG org.apache.hadoop.metrics2.impl.MetricsSystemImpl [] - UgiMetrics, User and group related metrics
2025-03-28 02:36:13,617 DEBUG org.apache.hadoop.util.Shell [] - setsid exited with exit code 0
2025-03-28 02:36:13,639 DEBUG org.apache.hadoop.security.SecurityUtil [] - Setting hadoop.security.token.service.use_ip to true
2025-03-28 02:36:13,678 DEBUG org.apache.hadoop.security.Groups [] - Creating new Groups object
2025-03-28 02:36:13,679 DEBUG org.apache.hadoop.util.PerformanceAdvisory [] - Falling back to shell based
2025-03-28 02:36:13,681 DEBUG org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback [] - Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
2025-03-28 02:36:13,739 DEBUG org.apache.hadoop.security.Groups [] - Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
2025-03-28 02:36:13,776 DEBUG org.apache.hadoop.security.UserGroupInformation [] - Hadoop login
2025-03-28 02:36:13,779 DEBUG org.apache.hadoop.security.UserGroupInformation [] - hadoop login commit
2025-03-28 02:36:13,779 DEBUG org.apache.hadoop.security.UserGroupInformation [] - Using kerberos user: hdfs-jaf...@jafron.com
2025-03-28 02:36:13,780 DEBUG org.apache.hadoop.security.UserGroupInformation [] - Using user: "hdfs-jaf...@jafron.com" with name: hdfs-jaf...@jafron.com
2025-03-28 02:36:13,780 DEBUG org.apache.hadoop.security.UserGroupInformation [] - User entry: "hdfs-jaf...@jafron.com"
2025-03-28 02:36:13,780 DEBUG org.apache.hadoop.security.UserGroupInformation [] - UGI loginUser: hdfs-jaf...@jafron.com (auth:KERBEROS)
2025-03-28 02:36:13,782 DEBUG org.apache.hadoop.security.UserGroupInformation [] - Current time is 1743129373782, next refresh is 1743196381400
2025-03-28 02:36:13,784 DEBUG org.apache.hadoop.fs.FileSystem [] - Starting: Acquiring creator semaphore for file:///opt/flink-store/data/hudi_table
2025-03-28 02:36:13,785 DEBUG org.apache.hadoop.fs.FileSystem [] - Acquiring creator semaphore for file:///opt/flink-store/data/hudi_table: duration 0:00.003s
2025-03-28 02:36:13,786 DEBUG org.apache.hadoop.fs.FileSystem [] - Starting: Creating FS file:///opt/flink-store/data/hudi_table
2025-03-28 02:36:13,786 DEBUG org.apache.hadoop.fs.FileSystem [] - Loading filesystems
2025-03-28 02:36:13,796 DEBUG org.apache.hadoop.fs.FileSystem [] - file:// = class org.apache.hadoop.fs.LocalFileSystem from /opt/flink-store/hadoop/lib/hadoop/hadoop-common-3.3.4.jar
2025-03-28 02:36:13,800 DEBUG org.apache.hadoop.fs.FileSystem [] - viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /opt/flink-store/hadoop/lib/hadoop/hadoop-common-3.3.4.jar
2025-03-28 02:36:13,803 DEBUG org.apache.hadoop.fs.FileSystem [] - har:// = class org.apache.hadoop.fs.HarFileSystem from /opt/flink-store/hadoop/lib/hadoop/hadoop-common-3.3.4.jar
2025-03-28 02:36:13,804 DEBUG org.apache.hadoop.fs.FileSystem [] - http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /opt/flink-store/hadoop/lib/hadoop/hadoop-common-3.3.4.jar
2025-03-28 02:36:13,805 DEBUG org.apache.hadoop.fs.FileSystem [] - https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /opt/flink-store/hadoop/lib/hadoop/hadoop-common-3.3.4.jar
2025-03-28 02:36:13,815 DEBUG org.apache.hadoop.fs.FileSystem [] - hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /opt/flink-store/hadoop/lib/hadoop-hdfs/hadoop-hdfs-client-3.3.4.jar
2025-03-28 02:36:13,823 DEBUG org.apache.hadoop.fs.FileSystem [] - webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /opt/flink-store/hadoop/lib/hadoop-hdfs/hadoop-hdfs-client-3.3.4.jar
2025-03-28 02:36:13,825 DEBUG org.apache.hadoop.fs.FileSystem [] - swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /opt/flink-store/hadoop/lib/hadoop-hdfs/hadoop-hdfs-client-3.3.4.jar
2025-03-28 02:36:13,825 DEBUG org.apache.hadoop.fs.FileSystem [] - Looking for FS supporting file
2025-03-28 02:36:13,825 DEBUG org.apache.hadoop.fs.FileSystem [] - looking for configuration option fs.file.impl
2025-03-28 02:36:13,871 DEBUG org.apache.hadoop.fs.FileSystem [] - Looking in service filesystems for implementation class
2025-03-28 02:36:13,871 DEBUG org.apache.hadoop.fs.FileSystem [] - FS for file is class org.apache.hadoop.fs.LocalFileSystem
2025-03-28 02:36:13,875 DEBUG org.apache.hadoop.fs.FileSystem [] - Creating FS file:///opt/flink-store/data/hudi_table: duration 0:00.089s
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.flink.api.java.ClosureCleaner (file:/opt/flink/lib/flink-dist-1.16.1.jar) to field java.lang.Class.ANNOTATION
WARNING: Please consider reporting this to the maintainers of org.apache.flink.api.java.ClosureCleaner
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2025-03-28 02:36:16,253 DEBUG org.apache.flink.kubernetes.kubeclient.FlinkKubeClientFactory [] - Trying to load default kubernetes config.
2025-03-28 02:36:16,262 DEBUG org.apache.flink.kubernetes.kubeclient.FlinkKubeClientFactory [] - Setting Kubernetes client namespace: data, userAgent: flink
2025-03-28 02:36:17,140 WARN org.apache.flink.kubernetes.KubernetesClusterDescriptor [] - Please note that Flink client operations(e.g. cancel, list, stop, savepoint, etc.) won't work from outside the Kubernetes cluster since 'kubernetes.rest-service.exposed.type' has been set to ClusterIP.
2025-03-28 02:36:17,143 INFO org.apache.flink.kubernetes.KubernetesClusterDescriptor [] - Retrieve flink cluster flink-session-test successfully, JobManager Web Interface: http://flink-session-test-rest.data:8081
2025-03-28 02:36:17,177 WARN org.apache.flink.kubernetes.KubernetesClusterDescriptor [] - Please note that Flink client operations(e.g. cancel, list, stop, savepoint, etc.) won't work from outside the Kubernetes cluster since 'kubernetes.rest-service.exposed.type' has been set to ClusterIP.
[INFO] SQL update statement has been successfully submitted to the cluster:
Job ID: ac0c704cdbb67e9d84642d99bd38e03d

Flink SQL>
```

**Environment Description**

* Hudi version : 0.14.1
* Flink version : 1.16.1

**Additional context**

Flink job logs:
[flink--kubernetes-session-0-flink-session-test-5fb8cc6568-n6whc.log](https://github.com/user-attachments/files/19497552/flink--kubernetes-session-0-flink-session-test-5fb8cc6568-n6whc.log)
[flink--kubernetes-taskmanager-0-flink-session-test-taskmanager-1-1.log](https://github.com/user-attachments/files/19497553/flink--kubernetes-taskmanager-0-flink-session-test-taskmanager-1-1.log)
[flink--sql-client-flink-session-test-5fb8cc6568-n6whc.log](https://github.com/user-attachments/files/19497554/flink--sql-client-flink-session-test-5fb8cc6568-n6whc.log)

**Stacktrace**

No stacktrace; see the attached logs.
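Editor's note, not part of the original report: since the first insert reportedly ran about 11 minutes while the first checkpoint timed out, one setting worth checking when reproducing is Flink's checkpoint timeout (`execution.checkpointing.timeout`, which defaults to 10 minutes) alongside the 1-minute interval set above. A minimal sketch for the same SQL client session, with the 30-minute value being an arbitrary example:

```sql
-- Keep the 1-minute checkpoint interval from the report.
SET execution.checkpointing.interval = 1min;

-- Raise the checkpoint timeout above the observed ~11-minute first commit
-- (Flink's default timeout is 10 minutes, just under that duration).
SET execution.checkpointing.timeout = 30min;
```

This only works around the symptom; why the first Hudi commit takes that long is still the underlying question.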