[jira] [Resolved] (FLINK-36201) StateLocalitySlotAssigner should be only used when local recovery is enabled for Adaptive Scheduler

2024-09-04 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-36201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Morávek resolved FLINK-36201.
---
Fix Version/s: 2.0.0
   Resolution: Fixed

> StateLocalitySlotAssigner should be only used when local recovery is enabled 
> for Adaptive Scheduler
> ---
>
> Key: FLINK-36201
> URL: https://issues.apache.org/jira/browse/FLINK-36201
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>
> SlotSharingSlotAllocator creates the StateLocalitySlotAssigner[1] instead of 
> DefaultSlotAssigner whenever a failover happens.
> I'm curious why we use StateLocalitySlotAssigner when local recovery is 
> disabled. 
> As I understand it, local recovery doesn't take effect if Flink doesn't 
> back up state on the TM's local disk. So StateLocalitySlotAssigner should 
> only be used when local recovery is enabled.
>  
> [1] 
> [https://github.com/apache/flink/blob/c869326d089705475481c2c2ea42a6efabb8c828/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/allocator/SlotSharingSlotAllocator.java#L136]
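The proposed fix can be sketched as a simple guard when the allocator picks its assigner. This is a minimal illustration only: the class names mirror Flink's, but the wiring below is hypothetical and simplified.

```java
// Hypothetical sketch of the FLINK-36201 fix: fall back to the default
// assigner unless local recovery is enabled, because state locality only
// matters when state was backed up on the TaskManager's local disk.
interface SlotAssigner {}

class DefaultSlotAssigner implements SlotAssigner {}

class StateLocalitySlotAssigner implements SlotAssigner {}

class SlotAssignerChooser {
    static SlotAssigner forRecovery(boolean localRecoveryEnabled) {
        return localRecoveryEnabled
                ? new StateLocalitySlotAssigner()
                : new DefaultSlotAssigner();
    }

    public static void main(String[] args) {
        // With local recovery disabled, the locality-aware assigner is skipped.
        System.out.println(forRecovery(false).getClass().getSimpleName()); // DefaultSlotAssigner
    }
}
```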



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-36201) StateLocalitySlotAssigner should be only used when local recovery is enabled for Adaptive Scheduler

2024-09-04 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-36201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17879116#comment-17879116
 ] 

David Morávek commented on FLINK-36201:
---

master: 57bc16948bef75cf0fb483efb8bb9959d72cf513



[jira] [Commented] (FLINK-36201) StateLocalitySlotAssigner should be only used when local recovery is enabled for Adaptive Scheduler

2024-09-04 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17879136#comment-17879136
 ] 

Rui Fan commented on FLINK-36201:
-

{quote} This makes me wonder whether we should default 
`execution.state-recovery.from-local` to true in 2.0. WDYT?
{quote}
Thanks [~dmvk] for pointing it out! 

Actually, most of our production jobs do not enable local recovery for now, so 
I'm not sure whether it's better to enable it by default.

But I think 2.0 is a good time to update the default value if some contributors 
think it's useful (i.e. the pros outweigh the cons). It would be best to collect 
some feedback from the dev mailing list if we want to update it. :)
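For reference, local recovery is controlled by the option quoted above; enabling it is a one-line config change (assuming the option name as quoted, with its current default of `false`):

```yaml
# flink-conf.yaml — opt in to local recovery explicitly (current default: false)
execution.state-recovery.from-local: true
```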



[jira] [Comment Edited] (FLINK-36201) StateLocalitySlotAssigner should be only used when local recovery is enabled for Adaptive Scheduler

2024-09-04 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17879116#comment-17879116
 ] 

Rui Fan edited comment on FLINK-36201 at 9/4/24 8:34 AM:
-

master: 57bc16948bef75cf0fb483efb8bb9959d72cf513

1.20: 5d9a1c5de9dd30220e7e9cf3fbe5bf1bdedb6749


was (Author: davidmoravek):
master: 57bc16948bef75cf0fb483efb8bb9959d72cf513



[PR] [FLINK-35660][dynamodb][BugFix] Removing dependency on sorting of splits [flink-connector-aws]

2024-09-04 Thread via GitHub


gguptp opened a new pull request, #158:
URL: https://github.com/apache/flink-connector-aws/pull/158

   …lits
   
   
   
   ## Purpose of the change
   
   Removing the dependency on sorting of splits. This also makes sure that if a 
parent and child shard arrive in the same DescribeStream call, the child gets a 
TRIM_HORIZON shard iterator, instead of hitting a race condition where both 
parent and child are assigned a LATEST shard iterator.
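The race described above can be sketched as follows. This is a hypothetical simplification, not the connector's actual code (the names `StartingPositionChooser` and `forShard` are illustrative): a child shard whose parent appears in the same DescribeStream response must start from TRIM_HORIZON, otherwise records between the parent's end and "now" would be skipped.

```java
import java.util.Set;

// Hypothetical sketch: decide the starting position for a newly discovered shard.
class StartingPositionChooser {
    enum StartingPosition { LATEST, TRIM_HORIZON }

    static StartingPosition forShard(String parentShardId, Set<String> shardsInSameResponse) {
        // A genuinely new shard (no known parent) starts at LATEST; a child
        // whose parent is visible in the same response must read the shard
        // from the beginning to avoid dropping records.
        if (parentShardId != null && shardsInSameResponse.contains(parentShardId)) {
            return StartingPosition.TRIM_HORIZON;
        }
        return StartingPosition.LATEST;
    }
}
```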
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is already covered by existing tests.
   
   ## Significant changes
   *(Please check any boxes [x] if the answer is "yes". You can first publish 
the PR and check them afterwards, for convenience.)*
   - [ ] Dependencies have been added or upgraded
   - [ ] Public API has been changed (Public API is any class annotated with 
`@Public(Evolving)`)
   - [ ] Serializers have been changed
   - [ ] New feature has been introduced
 - If yes, how is this documented? (not applicable / docs / JavaDocs / not 
documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-36214) Error log when building flink-cdc-pipeline-udf-examples from source code

2024-09-04 Thread lincoln lee (Jira)
lincoln lee created FLINK-36214:
---

 Summary: Error log when building flink-cdc-pipeline-udf-examples 
from source code
 Key: FLINK-36214
 URL: https://issues.apache.org/jira/browse/FLINK-36214
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Reporter: lincoln lee


There's an error log when building from source code (encountered on the 3.2.0 RC 
and the master branch), but it does not fail the build. 

{code}

[INFO] --< org.apache.flink:flink-cdc-pipeline-udf-examples >--
[INFO] Building flink-cdc-pipeline-udf-examples 3.2.0                    [3/42]
[INFO] [ jar ]-
[INFO]
[INFO] --- maven-clean-plugin:3.1.0:clean (default-clean) @ 
flink-cdc-pipeline-udf-examples ---
[INFO] Deleting 
/Users/lilin/Downloads/veri-cdc/flink-cdc-3.2.0/flink-cdc-pipeline-udf-examples/target
[INFO]
[INFO] --- flatten-maven-plugin:1.5.0:clean (flatten.clean) @ 
flink-cdc-pipeline-udf-examples ---
[INFO] Deleting 
/Users/lilin/Downloads/veri-cdc/flink-cdc-3.2.0/flink-cdc-pipeline-udf-examples/.flattened-pom.xml
[INFO]
[INFO] --- spotless-maven-plugin:2.4.2:check (spotless-check) @ 
flink-cdc-pipeline-udf-examples ---
[INFO]
[INFO] --- maven-checkstyle-plugin:2.17:check (validate) @ 
flink-cdc-pipeline-udf-examples ---
[INFO]
[INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-maven-version) @ 
flink-cdc-pipeline-udf-examples ---
[INFO]
[INFO] --- maven-remote-resources-plugin:1.5:process (process-resource-bundles) 
@ flink-cdc-pipeline-udf-examples ---
[INFO]
[INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ 
flink-cdc-pipeline-udf-examples ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/Users/lilin/Downloads/veri-cdc/flink-cdc-3.2.0/flink-cdc-pipeline-udf-examples/src/main/resources
[INFO] Copying 3 resources
[INFO]
[INFO] --- flatten-maven-plugin:1.5.0:flatten (flatten) @ 
flink-cdc-pipeline-udf-examples ---
[INFO] Generating flattened POM of project 
org.apache.flink:flink-cdc-pipeline-udf-examples:jar:3.2.0...
[INFO]
[INFO] --- scala-maven-plugin:4.9.2:add-source (scala-compile-first) @ 
flink-cdc-pipeline-udf-examples ---
[INFO] Add Source directory: 
/Users/lilin/Downloads/veri-cdc/flink-cdc-3.2.0/flink-cdc-pipeline-udf-examples/src/main/scala
[INFO] Add Test Source directory: 
/Users/lilin/Downloads/veri-cdc/flink-cdc-3.2.0/flink-cdc-pipeline-udf-examples/src/test/scala
[INFO]
[INFO] --- scala-maven-plugin:4.9.2:compile (scala-compile-first) @ 
flink-cdc-pipeline-udf-examples ---
[INFO] Compiler bridge file: 
/Users/lilin/.sbt/1.0/zinc/org.scala-sbt/org.scala-sbt-compiler-bridge_2.12-1.10.0-bin_2.12.16__52.0-1.10.0_20240505T232140.jar
[INFO] compiling 8 Scala sources and 8 Java sources to 
/Users/lilin/Downloads/veri-cdc/flink-cdc-3.2.0/flink-cdc-pipeline-udf-examples/target/classes
 ...
[ERROR] -release is only supported on Java 9 and higher
[INFO] done compiling
[INFO] compile in 8.2 s

{code}





[jira] [Created] (FLINK-36215) Child shards are getting started with LATEST shard iterator even when parents are not expired

2024-09-04 Thread Abhi Gupta (Jira)
Abhi Gupta created FLINK-36215:
--

 Summary: Child shards are getting started with LATEST shard 
iterator even when parents are not expired
 Key: FLINK-36215
 URL: https://issues.apache.org/jira/browse/FLINK-36215
 Project: Flink
  Issue Type: Bug
  Components: Connectors / DynamoDB
Reporter: Abhi Gupta


While testing, we found out that child shards are also taking LATEST as their 
start position.





[jira] [Assigned] (FLINK-36215) Child shards are getting started with LATEST shard iterator even when parents are not expired

2024-09-04 Thread Hong Liang Teoh (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hong Liang Teoh reassigned FLINK-36215:
---

Assignee: Abhi Gupta



[jira] [Updated] (FLINK-36196) DDB Streams connector should be able to resolve the stream ARN from given Table name

2024-09-04 Thread Hong Liang Teoh (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hong Liang Teoh updated FLINK-36196:

Summary: DDB Streams connector should be able to resolve the stream ARN 
from given Table name  (was: Have a UX for customer where customer just gives 
table name and gets the stream arn which is enabled)

> DDB Streams connector should be able to resolve the stream ARN from given 
> Table name
> 
>
> Key: FLINK-36196
> URL: https://issues.apache.org/jira/browse/FLINK-36196
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / DynamoDB
>Reporter: Abhi Gupta
>Priority: Not a Priority
>






[jira] [Resolved] (FLINK-36196) DDB Streams connector should be able to resolve the stream ARN from given Table name

2024-09-04 Thread Hong Liang Teoh (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hong Liang Teoh resolved FLINK-36196.
-
Resolution: Duplicate

https://issues.apache.org/jira/browse/FLINK-36205



[jira] [Updated] (FLINK-36195) Generalized retry configuration in flink-connector-base

2024-09-04 Thread Hong Liang Teoh (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hong Liang Teoh updated FLINK-36195:

Summary: Generalized retry configuration in flink-connector-base  (was: 
Migrate DDB Streams specific retry configuration to flink-connector-base)

> Generalized retry configuration in flink-connector-base
> ---
>
> Key: FLINK-36195
> URL: https://issues.apache.org/jira/browse/FLINK-36195
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / DynamoDB
>Reporter: Abhi Gupta
>Priority: Minor
>
> In DynamoDBStreamsSource.java, we have put a custom retry configuration; move 
> that into flink-connector-aws-base.





[jira] [Resolved] (FLINK-36215) Child shards are getting started with LATEST shard iterator even when parents are not expired

2024-09-04 Thread Hong Liang Teoh (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hong Liang Teoh resolved FLINK-36215.
-
Resolution: Fixed

merged commit 
[{{4d847fe}}|https://github.com/apache/flink-connector-aws/commit/4d847fee5b18780e2d729975698bf0897205bafe]
 into apache:main



[PR] [FLINK-36206][Connectors/AWS] Adding support for custom override configuration [flink-connector-aws]

2024-09-04 Thread via GitHub


gguptp opened a new pull request, #159:
URL: https://github.com/apache/flink-connector-aws/pull/159

   
   
   ## Purpose of the change
   
   Adding support for custom override configuration in AWSClientUtil.java
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   This change added tests and can be verified as follows:
   
   - *Added unit tests*
   
   ## Significant changes
   *(Please check any boxes [x] if the answer is "yes". You can first publish 
the PR and check them afterwards, for convenience.)*
   - [ ] Dependencies have been added or upgraded
   - [ ] Public API has been changed (Public API is any class annotated with 
`@Public(Evolving)`)
   - [ ] Serializers have been changed
   - [ ] New feature has been introduced
 - If yes, how is this documented? (not applicable / docs / JavaDocs / not 
documented)
   





[jira] [Updated] (FLINK-36206) Support flink-connector-aws-base to allow custom override configuration

2024-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-36206:
---
Labels: pull-request-available  (was: )

> Support flink-connector-aws-base to allow custom override configuration
> ---
>
> Key: FLINK-36206
> URL: https://issues.apache.org/jira/browse/FLINK-36206
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / DynamoDB, Connectors / Kinesis
>Reporter: Abhi Gupta
>Priority: Major
>  Labels: pull-request-available
>
> In flink-connector-aws-base, the file 
> flink-connector-aws-base/src/main/java/org/apache/flink/connector/aws/util/AWSClientUtil.java
> sets the client override configuration to the default value even if the 
> customer supplies a custom override config. We should fix this behaviour to 
> support custom override configurations.
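The gist of the fix can be sketched as follows. This is a hedged stand-in: `OverrideConfig` below mimics the builder pattern of the AWS SDK's `ClientOverrideConfiguration` but is not the real class; the point is to honour a caller-supplied builder instead of unconditionally starting from the default one.

```java
// Stand-in for the AWS SDK v2 ClientOverrideConfiguration builder pattern
// (illustrative only, not the SDK class).
final class OverrideConfig {
    final String apiCallTimeout;

    private OverrideConfig(Builder b) {
        this.apiCallTimeout = b.apiCallTimeout;
    }

    static Builder builder() {
        return new Builder();
    }

    static final class Builder {
        private String apiCallTimeout = "default";

        Builder apiCallTimeout(String timeout) {
            this.apiCallTimeout = timeout;
            return this;
        }

        OverrideConfig build() {
            return new OverrideConfig(this);
        }
    }

    // Before the fix the util always called builder() itself, discarding any
    // customer-supplied settings; after the fix the supplied builder is used.
    static OverrideConfig create(Builder supplied) {
        return (supplied != null ? supplied : builder()).build();
    }
}
```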





Re: [PR] [FLINK-36206][Connectors/AWS] Adding support for custom override configuration [flink-connector-aws]

2024-09-04 Thread via GitHub


hlteoh37 commented on code in PR #159:
URL: 
https://github.com/apache/flink-connector-aws/pull/159#discussion_r1743572674


##
flink-connector-aws-base/src/main/java/org/apache/flink/connector/aws/util/AWSClientUtil.java:
##
@@ -207,6 +207,39 @@ S createAwsSyncClient(
 .build();
 }
 
+public static <

Review Comment:
   can we collapse the previous `createAwsSyncClient()` into this one?






Re: [PR] [FLINK-36206][Connectors/AWS] Adding support for custom override configuration [flink-connector-aws]

2024-09-04 Thread via GitHub


gguptp commented on code in PR #159:
URL: 
https://github.com/apache/flink-connector-aws/pull/159#discussion_r1743580734


##
flink-connector-aws-base/src/main/java/org/apache/flink/connector/aws/util/AWSClientUtil.java:
##
@@ -207,6 +207,39 @@ S createAwsSyncClient(
 .build();
 }
 
+public static <

Review Comment:
   sure






Re: [PR] [FLINK-36206][Connectors/AWS] Adding support for custom override configuration [flink-connector-aws]

2024-09-04 Thread via GitHub


hlteoh37 commented on code in PR #159:
URL: 
https://github.com/apache/flink-connector-aws/pull/159#discussion_r1743584656


##
flink-connector-aws-base/src/main/java/org/apache/flink/connector/aws/util/AWSClientUtil.java:
##
@@ -194,7 +196,7 @@ S createAwsSyncClient(
 final ClientOverrideConfiguration overrideConfiguration =
 createClientOverrideConfiguration(
 clientConfiguration,
-ClientOverrideConfiguration.builder(),
+clientOverrideConfigurationBuilder,

Review Comment:
   Let's keep the old method to keep backwards compatibility






Re: [PR] [FLINK-36206][Connectors/AWS] Adding support for custom override configuration [flink-connector-aws]

2024-09-04 Thread via GitHub


gguptp commented on code in PR #159:
URL: 
https://github.com/apache/flink-connector-aws/pull/159#discussion_r1743591005


##
flink-connector-aws-base/src/main/java/org/apache/flink/connector/aws/util/AWSClientUtil.java:
##
@@ -194,7 +196,7 @@ S createAwsSyncClient(
 final ClientOverrideConfiguration overrideConfiguration =
 createClientOverrideConfiguration(
 clientConfiguration,
-ClientOverrideConfiguration.builder(),
+clientOverrideConfigurationBuilder,

Review Comment:
   my bad, fixed this in next revision






Re: [PR] [FLINK-36206][Connectors/AWS] Adding support for custom override configuration [flink-connector-aws]

2024-09-04 Thread via GitHub


hlteoh37 merged PR #159:
URL: https://github.com/apache/flink-connector-aws/pull/159





[jira] [Comment Edited] (FLINK-36216) RocksDB: Compaction sees out-of-order keys

2024-09-04 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17879221#comment-17879221
 ] 

Robert Metzger edited comment on FLINK-36216 at 9/4/24 12:48 PM:
-

Some initial research on the issue:
{code}
at GlobalWindowAggsHandler$473.merge_split229(Unknown Source)
at GlobalWindowAggsHandler$473.merge(Unknown Source)
{code}
--> Seems to be generated code from the Table API

This issue seems related: https://github.com/facebook/rocksdb/issues/8248

Questions in that ticket:

bq. Were the input files created by a pre-6.14 RocksDB version,

Flink 1.19 uses 6.20.3-ververica-2.0, so NO.

bq. or without check_flush_compaction_key_order,
bq. or with a different comparator?

I couldn't find any info on that from the default rocksdb logs. I 
[suspect|https://rocksdb.org/blog/2021/05/26/online-validation.html] those are 
ColumnFamilyOptions, which are not printed by default.



> RocksDB: Compaction sees out-of-order keys
> --
>
> Key: FLINK-36216
> URL: https://issues.apache.org/jira/browse/FLINK-36216
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.19.0
>Reporter: Robert Metzger
>Priority: Major
>
> {code}
> org.rocksdb.RocksDBException: Compaction sees out-of-order keys.
>   at org.rocksdb.RocksDB.put(Native Method)
>   at org.rocksdb.RocksDB.put(RocksDB.java:955)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBMapState.put(RocksDBMapState.java:139)
>   at 
> org.apache.flink.table.runtime.dataview.StateMapView$StateMapViewWithKeysNullable.put(StateMapView.java:168)
>   at 
> org.apache.flink.table.runtime.dataview.StateMapView$NamespacedStateMapViewWithKeysNullable.put(StateMapView.java:392)
>   at GlobalWindowAggsHandler$473.merge_split229(Unknown Source)
>   at GlobalWindowAggsHandler$473.merge(Unknown Source)
>   at 
> org.apache.flink.table.runtime.operators.aggregate.window.combines.GlobalAggCombiner.combineAccumulator(GlobalAggCombiner.java:99)
>   at 
> org.apache.flink.table.runtime.operators.aggregate.window.combines.GlobalAggCombiner.combine(GlobalAggCombiner.java:85)
>   at 
> org.apache.flink.table.runtime.operators.aggregate.window.buffers.RecordsWindowBuffer.flush(RecordsWindowBuffer.java:112)
> {code}





Re: [PR] [FLINK-27355][runtime] Unregister JobManagerRunner after it's closed [flink]

2024-09-04 Thread via GitHub


XComp commented on code in PR #25027:
URL: https://github.com/apache/flink/pull/25027#discussion_r1703278478


##
flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/TestingJobManagerRunnerRegistry.java:
##
@@ -188,11 +197,26 @@ public static class Builder {
 private Supplier<Collection<JobManagerRunner>> getJobManagerRunnersSupplier =
 Collections::emptyList;
 private Function<JobID, JobManagerRunner> unregisterFunction = ignoredJobId -> null;
-private BiFunction<JobID, Executor, CompletableFuture<Void>> localCleanupAsyncFunction =
-(ignoredJobId, ignoredExecutor) -> FutureUtils.completedVoidFuture();
+private TriFunction<JobID, Executor, ComponentMainThreadExecutor, CompletableFuture<Void>>
+localCleanupAsyncFunction =
+(ignoredJobId, ignoredExecutor, mainThreadExecutor) ->
+FutureUtils.completedVoidFuture();
 private BiFunction<JobID, Executor, CompletableFuture<Void>> globalCleanupAsyncFunction =
 (ignoredJobId, ignoredExecutor) -> FutureUtils.completedVoidFuture();
 
+private Builder fromDefaultJobManagerRunnerRegistry(

Review Comment:
   What's the purpose of this? It looks like we map each of the 
`DefaultJobManagerRunnerRegistry` methods to the 
`TestingJobManagerRunnerRegistry` callback generating actually a "new" 
`DefaultJobManagerRunnerRegistry` instance wrapped in a 
`TestingJobManagerRunnerRegistry`. We could just use 
`DefaultJobManagerRunnerRegistry`, instead. Don't you think? :thinking: 






[jira] [Commented] (FLINK-36169) Support namespace level resource check before scaling up

2024-09-04 Thread Gang Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17879296#comment-17879296
 ] 

Gang Li commented on FLINK-36169:
-

Thanks for bringing this up.

> Support namespace level resource check before scaling up
> 
>
> Key: FLINK-36169
> URL: https://issues.apache.org/jira/browse/FLINK-36169
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: 1.10.0
>Reporter: Zhenqiu Huang
>Assignee: Zhenqiu Huang
>Priority: Minor
>






Re: [PR] [FLINK-14068][streaming] Removes deprecated org.apache.flink.streaming.api.windowing.time.Time [flink]

2024-09-04 Thread via GitHub


XComp commented on code in PR #25261:
URL: https://github.com/apache/flink/pull/25261#discussion_r1744113081


##
flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/AllWindowedStream.scala:
##
@@ -70,7 +71,7 @@ class AllWindowedStream[T, W <: Window](javaStream: 
JavaAllWStream[T, W]) {
* thrown.
*/
   @PublicEvolving
-  def allowedLateness(lateness: Time): AllWindowedStream[T, W] = {
+  def allowedLateness(lateness: Duration): AllWindowedStream[T, W] = {

Review Comment:
   I recovered the disabling of japicmp for Scala in FLINK-36207






[jira] [Created] (FLINK-36217) Remove powermock usage

2024-09-04 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-36217:
---

 Summary: Remove powermock usage
 Key: FLINK-36217
 URL: https://issues.apache.org/jira/browse/FLINK-36217
 Project: Flink
  Issue Type: Technical Debt
  Components: Tests
Reporter: Sergey Nuyanzin
Assignee: Sergey Nuyanzin


Most of the tests have either been moved to a different repo (such as the 
connectors) or rewritten in a PowerMock-free way.
PowerMock itself has become unmaintained: the latest release was in 2020 
(https://github.com/powermock/powermock/releases/tag/powermock-2.0.9)
and the latest commit was 2 years ago (https://github.com/powermock/powermock).

There is also no JUnit 5 support (a request to support it, and even a PR from 
the JUnit 5 maintainers, has been ready for review since Feb 2023: 
https://github.com/powermock/powermock/pull/1146).





[jira] [Updated] (FLINK-36217) Remove powermock usage

2024-09-04 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-36217:

Description: 
Most of the tests are either moved to a different repo like connectors or 
rewritten in powermock free way.
Powermock itself became unmaintained (latest release was in 2020 
https://github.com/powermock/powermock/releases/tag/powermock-2.0.9)
and latest commit 2 years ago https://github.com/powermock/powermock

also there is no support for junit5 (the request to support it and even PR from 
junit5 maintainers is ready for review since Feb 2023 
https://github.com/powermock/powermock/pull/1146, however still no feedback 
from maintainers...)

  was:
Most of the tests have either been moved to a different repo (like connectors) or 
rewritten in a PowerMock-free way.
PowerMock itself became unmaintained: the latest release was in 2020 
(https://github.com/powermock/powermock/releases/tag/powermock-2.0.9)
and the latest commit was two years ago (https://github.com/powermock/powermock).

Also there is no JUnit 5 support (the request, and even a PR from the JUnit 5 
maintainers, has been ready for review since Feb 2023: 
https://github.com/powermock/powermock/pull/1146)


> Remove powermock usage
> --
>
> Key: FLINK-36217
> URL: https://issues.apache.org/jira/browse/FLINK-36217
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Tests
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>
> Most of the tests have either been moved to a different repo (like connectors) 
> or rewritten in a PowerMock-free way.
> PowerMock itself became unmaintained: the latest release was in 2020 
> (https://github.com/powermock/powermock/releases/tag/powermock-2.0.9)
> and the latest commit was two years ago (https://github.com/powermock/powermock).
> Also there is no JUnit 5 support (the request, and even a PR from the JUnit 5 
> maintainers, has been ready for review since Feb 2023: 
> https://github.com/powermock/powermock/pull/1146; however, there is still no 
> feedback from the maintainers...)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-36217][tests] Remove powermock usage [flink]

2024-09-04 Thread via GitHub


snuyanzin opened a new pull request, #25287:
URL: https://github.com/apache/flink/pull/25287

   
   ## What is the purpose of the change
   
   Remove `powermock` from usages
   
   ## Verifying this change
   
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): ( no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: ( no)
 - The serializers: ( no)
 - The runtime per-record code paths (performance sensitive): ( no )
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: ( no)
 - The S3 file system connector: ( no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-36217) Remove powermock usage

2024-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-36217:
---
Labels: pull-request-available  (was: )

> Remove powermock usage
> --
>
> Key: FLINK-36217
> URL: https://issues.apache.org/jira/browse/FLINK-36217
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Tests
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
> Most of the tests have either been moved to a different repo (like connectors) 
> or rewritten in a PowerMock-free way.
> PowerMock itself became unmaintained: the latest release was in 2020 
> (https://github.com/powermock/powermock/releases/tag/powermock-2.0.9)
> and the latest commit was two years ago (https://github.com/powermock/powermock).
> Also there is no JUnit 5 support (the request, and even a PR from the JUnit 5 
> maintainers, has been ready for review since Feb 2023: 
> https://github.com/powermock/powermock/pull/1146; however, there is still no 
> feedback from the maintainers...)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [BP-1.19][FLINK-36191][tests] FsMergingCheckpointStorageLocationTest generates test data not in tmp folder [flink]

2024-09-04 Thread via GitHub


snuyanzin commented on PR #25278:
URL: https://github.com/apache/flink/pull/25278#issuecomment-2330160555

   the reason for the failure was an improper base commit; after a rebase, CI passed


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-36191) FsMergingCheckpointStorageLocationTest generates test data not in tmp folder

2024-09-04 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17878475#comment-17878475
 ] 

Sergey Nuyanzin edited comment on FLINK-36191 at 9/4/24 9:17 PM:
-

Merged to master as 
[339f97c458a9eb44755b84ce067a31134ed9cfc0|https://github.com/apache/flink/commit/339f97c458a9eb44755b84ce067a31134ed9cfc0]
1.19: 
[194df6c4067c1ee4712839afdf699e051d237c74|https://github.com/apache/flink/commit/194df6c4067c1ee4712839afdf699e051d237c74]


was (Author: sergey nuyanzin):
Merged as 
[339f97c458a9eb44755b84ce067a31134ed9cfc0|https://github.com/apache/flink/commit/339f97c458a9eb44755b84ce067a31134ed9cfc0]

> FsMergingCheckpointStorageLocationTest generates test data not in tmp folder
> 
>
> Key: FLINK-36191
> URL: https://issues.apache.org/jira/browse/FLINK-36191
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Tests
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0-preview
>
>
> To reproduce, the following command can be used (assuming that something like 
> {{./mvnw clean verify -DskipTests -Dfast}} was run before):
> {code:bash}
> cd flink-runtime && ./../mvnw -Dtest=FsMergingCheckpointStorageLocationTest 
> test && git status
> {code}
> the output will contain something like
> {noformat}
> Untracked files:
>   (use "git add ..." to include in what will be committed)
>   org.junit.rules.TemporaryFolder@3cc1435c/
>   org.junit.rules.TemporaryFolder@625732/
>   org.junit.rules.TemporaryFolder@bef2d72/
> {noformat}
> the reason is that in {{@Before}} a *non-absolute* path is in use
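The effect can be sketched with plain JDK calls (an illustration only, not Flink's actual test code; the class and file names below are invented): a relative `File` resolves against the working directory, i.e. the source checkout during a Maven run, while an explicitly created temp directory lands under `java.io.tmpdir`, which is what the fix amounts to.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class TempDirSketch {
    public static void main(String[] args) throws IOException {
        // A relative path, like the TemporaryFolder names in the git status
        // output above, resolves against the current working directory:
        File relative = new File("org.junit.rules.TemporaryFolder@cafe").getAbsoluteFile();
        System.out.println(relative.getParent().equals(System.getProperty("user.dir")));

        // An absolute temp directory lands under java.io.tmpdir instead:
        File tmp = Files.createTempDirectory("flink-test").toFile();
        System.out.println(tmp.getAbsolutePath().startsWith(System.getProperty("java.io.tmpdir")));
        tmp.delete();
    }
}
```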



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-36191) FsMergingCheckpointStorageLocationTest generates test data not in tmp folder

2024-09-04 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-36191:

Fix Version/s: 1.19.2

> FsMergingCheckpointStorageLocationTest generates test data not in tmp folder
> 
>
> Key: FLINK-36191
> URL: https://issues.apache.org/jira/browse/FLINK-36191
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Tests
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.2, 2.0-preview
>
>
> To reproduce, the following command can be used (assuming that something like 
> {{./mvnw clean verify -DskipTests -Dfast}} was run before):
> {code:bash}
> cd flink-runtime && ./../mvnw -Dtest=FsMergingCheckpointStorageLocationTest 
> test && git status
> {code}
> the output will contain something like
> {noformat}
> Untracked files:
>   (use "git add ..." to include in what will be committed)
>   org.junit.rules.TemporaryFolder@3cc1435c/
>   org.junit.rules.TemporaryFolder@625732/
>   org.junit.rules.TemporaryFolder@bef2d72/
> {noformat}
> the reason is that in {{@Before}} a *non-absolute* path is in use



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-36191) FsMergingCheckpointStorageLocationTest generates test data not in tmp folder

2024-09-04 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17878475#comment-17878475
 ] 

Sergey Nuyanzin edited comment on FLINK-36191 at 9/4/24 9:57 PM:
-

Merged to master as 
[339f97c458a9eb44755b84ce067a31134ed9cfc0|https://github.com/apache/flink/commit/339f97c458a9eb44755b84ce067a31134ed9cfc0]
1.19: 
[194df6c4067c1ee4712839afdf699e051d237c74|https://github.com/apache/flink/commit/194df6c4067c1ee4712839afdf699e051d237c74]
1.20: 
[bf393918e5c2a81d4dc49ee82e7019f38f64b517|https://github.com/apache/flink/commit/bf393918e5c2a81d4dc49ee82e7019f38f64b517]


was (Author: sergey nuyanzin):
Merged to master as 
[339f97c458a9eb44755b84ce067a31134ed9cfc0|https://github.com/apache/flink/commit/339f97c458a9eb44755b84ce067a31134ed9cfc0]
1.19: 
[194df6c4067c1ee4712839afdf699e051d237c74|https://github.com/apache/flink/commit/194df6c4067c1ee4712839afdf699e051d237c74]

> FsMergingCheckpointStorageLocationTest generates test data not in tmp folder
> 
>
> Key: FLINK-36191
> URL: https://issues.apache.org/jira/browse/FLINK-36191
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Tests
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.2, 2.0-preview
>
>
> To reproduce, the following command can be used (assuming that something like 
> {{./mvnw clean verify -DskipTests -Dfast}} was run before):
> {code:bash}
> cd flink-runtime && ./../mvnw -Dtest=FsMergingCheckpointStorageLocationTest 
> test && git status
> {code}
> the output will contain something like
> {noformat}
> Untracked files:
>   (use "git add ..." to include in what will be committed)
>   org.junit.rules.TemporaryFolder@3cc1435c/
>   org.junit.rules.TemporaryFolder@625732/
>   org.junit.rules.TemporaryFolder@bef2d72/
> {noformat}
> the reason is that in {{@Before}} a *non-absolute* path is in use



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29240) Unify the ClassLoader in StreamExecutionEnvironment and TableEnvironment

2024-09-04 Thread Georgios Kousouris (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17879383#comment-17879383
 ] 

Georgios Kousouris commented on FLINK-29240:


Hi [~lsy], do you know if there are any updates on providing a generic 
classloader at runtime for Flink 1.16?

> Unify the ClassLoader in StreamExecutionEnvironment and TableEnvironment
> 
>
> Key: FLINK-29240
> URL: https://issues.apache.org/jira/browse/FLINK-29240
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Task, Table SQL / API
>Affects Versions: 1.16.0
>Reporter: dalongliu
>Priority: Major
> Fix For: 2.0.0
>
>
> Since [FLINK-15635|https://issues.apache.org/jira/browse/FLINK-15635], we 
> have introduced a user classloader in the table module to manage all user jars, 
> such as jars added by `ADD JAR` or `CREATE FUNCTION ... USING JAR` syntax. 
> However, in a Table API program the user can create a `StreamExecutionEnvironment` 
> first and then create a `TableEnvironment` based on it; the classloaders in 
> `StreamExecutionEnvironment` and `TableEnvironment` are not the same. If the 
> user uses `ADD JAR` syntax in a SQL query, a ClassNotFoundException may occur 
> while compiling the StreamGraph to a JobGraph because of the different 
> classloaders, so we need to unify the classloader and make sure it is the same.
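The failure mode described above can be shown with plain JDK classloading (this is an illustration, not Flink code): a class visible to one classloader is not found by an isolated classloader that neither contains it nor delegates to a parent that does.

```java
import java.net.URL;
import java.net.URLClassLoader;

public class LoaderSketch {
    public static void main(String[] args) {
        // No URLs and a null parent (bootstrap loader only), so classes
        // loaded by the application classloader are invisible here -- the
        // same situation as a jar registered with only one environment's loader.
        ClassLoader isolated = new URLClassLoader(new URL[0], null);
        try {
            isolated.loadClass("LoaderSketch"); // loaded by the app loader
            System.out.println("found");
        } catch (ClassNotFoundException e) {
            System.out.println("ClassNotFoundException");
        }
    }
}
```

This prints `ClassNotFoundException`, even though the class is plainly loaded in the same JVM.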



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-36214) Error log when building flink-cdc-pipeline-udf-examples from source code

2024-09-04 Thread yux (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17879401#comment-17879401
 ] 

yux commented on FLINK-36214:
-

Seems it was brought in by FLINK-34876; will investigate this.

> Error log when building flink-cdc-pipeline-udf-examples from source code
> 
>
> Key: FLINK-36214
> URL: https://issues.apache.org/jira/browse/FLINK-36214
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: lincoln lee
>Priority: Minor
>
> There's an error log when building from source code (encountered on the 3.2.0 
> RC & master branch), but it does not fail the build. 
> {code}
> [INFO] --< org.apache.flink:flink-cdc-pipeline-udf-examples 
> >--
> [INFO] Building flink-cdc-pipeline-udf-examples 3.2.0                    
> [3/42]
> [INFO] [ jar 
> ]-
> [INFO]
> [INFO] --- maven-clean-plugin:3.1.0:clean (default-clean) @ 
> flink-cdc-pipeline-udf-examples ---
> [INFO] Deleting 
> /Users/lilin/Downloads/veri-cdc/flink-cdc-3.2.0/flink-cdc-pipeline-udf-examples/target
> [INFO]
> [INFO] --- flatten-maven-plugin:1.5.0:clean (flatten.clean) @ 
> flink-cdc-pipeline-udf-examples ---
> [INFO] Deleting 
> /Users/lilin/Downloads/veri-cdc/flink-cdc-3.2.0/flink-cdc-pipeline-udf-examples/.flattened-pom.xml
> [INFO]
> [INFO] --- spotless-maven-plugin:2.4.2:check (spotless-check) @ 
> flink-cdc-pipeline-udf-examples ---
> [INFO]
> [INFO] --- maven-checkstyle-plugin:2.17:check (validate) @ 
> flink-cdc-pipeline-udf-examples ---
> [INFO]
> [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-maven-version) @ 
> flink-cdc-pipeline-udf-examples ---
> [INFO]
> [INFO] --- maven-remote-resources-plugin:1.5:process 
> (process-resource-bundles) @ flink-cdc-pipeline-udf-examples ---
> [INFO]
> [INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ 
> flink-cdc-pipeline-udf-examples ---
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] skip non existing resourceDirectory 
> /Users/lilin/Downloads/veri-cdc/flink-cdc-3.2.0/flink-cdc-pipeline-udf-examples/src/main/resources
> [INFO] Copying 3 resources
> [INFO]
> [INFO] --- flatten-maven-plugin:1.5.0:flatten (flatten) @ 
> flink-cdc-pipeline-udf-examples ---
> [INFO] Generating flattened POM of project 
> org.apache.flink:flink-cdc-pipeline-udf-examples:jar:3.2.0...
> [INFO]
> [INFO] --- scala-maven-plugin:4.9.2:add-source (scala-compile-first) @ 
> flink-cdc-pipeline-udf-examples ---
> [INFO] Add Source directory: 
> /Users/lilin/Downloads/veri-cdc/flink-cdc-3.2.0/flink-cdc-pipeline-udf-examples/src/main/scala
> [INFO] Add Test Source directory: 
> /Users/lilin/Downloads/veri-cdc/flink-cdc-3.2.0/flink-cdc-pipeline-udf-examples/src/test/scala
> [INFO]
> [INFO] --- scala-maven-plugin:4.9.2:compile (scala-compile-first) @ 
> flink-cdc-pipeline-udf-examples ---
> [INFO] Compiler bridge file: 
> /Users/lilin/.sbt/1.0/zinc/org.scala-sbt/org.scala-sbt-compiler-bridge_2.12-1.10.0-bin_2.12.16__52.0-1.10.0_20240505T232140.jar
> [INFO] compiling 8 Scala sources and 8 Java sources to 
> /Users/lilin/Downloads/veri-cdc/flink-cdc-3.2.0/flink-cdc-pipeline-udf-examples/target/classes
>  ...
> [ERROR] -release is only supported on Java 9 and higher
> [INFO] done compiling
> [INFO] compile in 8.2 s
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35049][state] Implement Map Async State API for ForStStateBackend [flink]

2024-09-04 Thread via GitHub


Zakelly commented on code in PR #24812:
URL: https://github.com/apache/flink/pull/24812#discussion_r1742988791


##
flink-state-backends/flink-statebackend-forst/src/main/java/org/apache/flink/state/forst/ForStDBGetRequest.java:
##
@@ -45,6 +47,12 @@ public abstract class ForStDBGetRequest {
 this.future = future;
 }
 
+public void process(RocksDB db) throws IOException, RocksDBException {

Review Comment:
   I suggest a generic interface `ForStDBRequest` defining basic methods for 
all requests. WDYT?
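A minimal sketch of the kind of unification suggested here, in plain Java (the interface shape and `DummyDb` stand-in are hypothetical; the real request classes operate on a RocksDB handle):

```java
// Hypothetical sketch of a generic request interface; DummyDb stands in
// for the RocksDB handle the real ForSt requests operate on.
interface ForStDBRequest<DB> {
    void process(DB db) throws Exception;
    void completeStateFutureExceptionally(String message, Throwable ex);
}

final class DummyDb {
    String get(String key) { return "value-of-" + key; }
}

public class RequestSketch {
    public static void main(String[] args) throws Exception {
        ForStDBRequest<DummyDb> getRequest = new ForStDBRequest<DummyDb>() {
            @Override
            public void process(DummyDb db) {
                // a "get"-style request: read a value from the DB
                System.out.println(db.get("k1"));
            }

            @Override
            public void completeStateFutureExceptionally(String message, Throwable ex) {
                System.err.println(message);
            }
        };
        getRequest.process(new DummyDb()); // prints value-of-k1
    }
}
```

As the later replies in this thread note, the concrete `process()` signatures differ per request type, which is the main obstacle to such a unification.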
   



##
flink-state-backends/flink-statebackend-forst/src/main/java/org/apache/flink/state/forst/ForStDBMapEntryIterRequest.java:
##
@@ -0,0 +1,145 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.state.forst;
+
+import org.apache.flink.api.common.state.v2.StateIterator;
+import org.apache.flink.core.state.InternalStateFuture;
+import org.apache.flink.runtime.asyncprocessing.StateRequestHandler;
+import org.apache.flink.runtime.asyncprocessing.StateRequestType;
+
+import org.rocksdb.RocksDB;
+import org.rocksdb.RocksIterator;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+
+/** The ForSt {@link ForStDBIterRequest} which returns the entries of a 
ForStMapState. */
+public class ForStDBMapEntryIterRequest extends 
ForStDBIterRequest {
+
+private final InternalStateFuture>> future;
+
+public ForStDBMapEntryIterRequest(
+ContextKey contextKey,
+ForStMapState table,
+StateRequestHandler stateRequestHandler,
+byte[] toSeekBytes,
+InternalStateFuture>> future) {
+super(contextKey, table, stateRequestHandler, toSeekBytes);
+this.future = future;
+}
+
+@Override
+public void completeStateFutureExceptionally(String message, Throwable ex) 
{
+future.completeExceptionally(message, ex);
+}
+
+@Override
+public void process(RocksDB db, int cacheSizeLimit) throws IOException {
+try (RocksIterator iter = 
db.newIterator(table.getColumnFamilyHandle())) {

Review Comment:
   Is it possible to keep and reuse this iterator in the next request?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36214] Downgrade scala-maven-plugin version to 4.8.0 to keep compatibility with Java 8 [flink-cdc]

2024-09-04 Thread via GitHub


yuxiqian commented on PR #3594:
URL: https://github.com/apache/flink-cdc/pull/3594#issuecomment-2330513582

   cc @leonardBang @lincoln-lil 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-36219) Add Flink compatibility matrix of CDC releases

2024-09-04 Thread yux (Jira)
yux created FLINK-36219:
---

 Summary: Add Flink compatibility matrix of CDC releases
 Key: FLINK-36219
 URL: https://issues.apache.org/jira/browse/FLINK-36219
 Project: Flink
  Issue Type: Sub-task
Reporter: yux
 Fix For: cdc-3.2.0


Now, CDC releases have their own preferences over Flink versions. For example, 
CDC 3.1.0 and earlier doesn't work with Flink 1.19.

Adding a compatibility table would be much cleaner.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-36221) Add specification about CAST ... AS ... built-in functions

2024-09-04 Thread yux (Jira)
yux created FLINK-36221:
---

 Summary: Add specification about CAST ... AS ... built-in functions
 Key: FLINK-36221
 URL: https://issues.apache.org/jira/browse/FLINK-36221
 Project: Flink
  Issue Type: Sub-task
Reporter: yux
 Fix For: cdc-3.2.0


FLINK-34877 adds CAST ... AS ... syntax in transform expressions, but there's 
no corresponding documentation yet.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-36221] Add `CAST ... AS ...` documentations [flink-cdc]

2024-09-04 Thread via GitHub


yuxiqian opened a new pull request, #3596:
URL: https://github.com/apache/flink-cdc/pull/3596

   This closes FLINK-36221.
   
   [FLINK-34877](https://issues.apache.org/jira/browse/FLINK-34877) adds `CAST 
... AS ...` syntax in transform expressions, but there's no corresponding 
documentation yet. Adding it would make it easier for users to write 
expressions.
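A minimal example of the syntax being documented, in the shape of a Flink CDC pipeline transform rule (the table, column, and description values below are invented for illustration):

```yaml
# Hypothetical transform rule illustrating CAST ... AS ... in a projection;
# the table and column names are invented for this example.
transform:
  - source-table: app_db.orders
    projection: order_id, CAST(price AS STRING) AS price_str
    description: cast price to string
```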


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-36221) Add specification about CAST ... AS ... built-in functions

2024-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-36221:
---
Labels: pull-request-available  (was: )

> Add specification about CAST ... AS ... built-in functions
> --
>
> Key: FLINK-36221
> URL: https://issues.apache.org/jira/browse/FLINK-36221
> Project: Flink
>  Issue Type: Sub-task
>Reporter: yux
>Priority: Minor
>  Labels: pull-request-available
> Fix For: cdc-3.2.0
>
>
> FLINK-34877 adds CAST ... AS ... syntax in transform expressions, but there's 
> no corresponding documentation yet.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35049][state] Implement Map Async State API for ForStStateBackend [flink]

2024-09-04 Thread via GitHub


fredia commented on code in PR #24812:
URL: https://github.com/apache/flink/pull/24812#discussion_r174481


##
flink-state-backends/flink-statebackend-forst/src/main/java/org/apache/flink/state/forst/ForStDBGetRequest.java:
##
@@ -45,6 +47,12 @@ public abstract class ForStDBGetRequest {
 this.future = future;
 }
 
+public void process(RocksDB db) throws IOException, RocksDBException {

Review Comment:
   The `process()` functions of `ForStDBPutRequest`, `ForStDBGetRequest` and 
`ForStDBIterRequest` have different parameters, so it is not convenient to 
unify them with a generic interface; I think it is better to keep it as it is.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35049][state] Implement Map Async State API for ForStStateBackend [flink]

2024-09-04 Thread via GitHub


Zakelly commented on code in PR #24812:
URL: https://github.com/apache/flink/pull/24812#discussion_r1744799308


##
flink-state-backends/flink-statebackend-forst/src/main/java/org/apache/flink/state/forst/ForStDBGetRequest.java:
##
@@ -45,6 +47,12 @@ public abstract class ForStDBGetRequest {
 this.future = future;
 }
 
+public void process(RocksDB db) throws IOException, RocksDBException {

Review Comment:
   OK



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33749][core] Remove deprecated getter and setter method in Configuration. [flink]

2024-09-04 Thread via GitHub


JunRuiLee commented on PR #25288:
URL: https://github.com/apache/flink/pull/25288#issuecomment-2330639410

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-36201) StateLocalitySlotAssigner should be only used when local recovery is enabled for Adaptive Scheduler

2024-09-04 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-36201:
-
Fix Version/s: 2.0-preview
   (was: 2.0.0)

> StateLocalitySlotAssigner should be only used when local recovery is enabled 
> for Adaptive Scheduler
> ---
>
> Key: FLINK-36201
> URL: https://issues.apache.org/jira/browse/FLINK-36201
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0-preview
>
>
> SlotSharingSlotAllocator creates the StateLocalitySlotAssigner[1] instead of 
> DefaultSlotAssigner whenever a failover happens.
> I'm curious why we use StateLocalitySlotAssigner when local recovery is 
> disabled. 
> As I understand, local recovery doesn't take effect if Flink doesn't back up 
> state on the TM local disk. So StateLocalitySlotAssigner should only be used 
> when local recovery is enabled.
>  
> [1] 
> [https://github.com/apache/flink/blob/c869326d089705475481c2c2ea42a6efabb8c828/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/allocator/SlotSharingSlotAllocator.java#L136]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-36201) StateLocalitySlotAssigner should be only used when local recovery is enabled for Adaptive Scheduler

2024-09-04 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17879439#comment-17879439
 ] 

Xintong Song commented on FLINK-36201:
--

Hi [~dmvk], for code changes merged before the preview release, please mark the 
FixVersion as 2.0-preview.

> StateLocalitySlotAssigner should be only used when local recovery is enabled 
> for Adaptive Scheduler
> ---
>
> Key: FLINK-36201
> URL: https://issues.apache.org/jira/browse/FLINK-36201
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0-preview
>
>
> SlotSharingSlotAllocator creates the StateLocalitySlotAssigner[1] instead of 
> DefaultSlotAssigner whenever a failover happens.
> I'm curious why we use StateLocalitySlotAssigner when local recovery is 
> disabled. 
> As I understand, local recovery doesn't take effect if Flink doesn't back up 
> state on the TM local disk. So StateLocalitySlotAssigner should only be used 
> when local recovery is enabled.
>  
> [1] 
> [https://github.com/apache/flink/blob/c869326d089705475481c2c2ea42a6efabb8c828/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/allocator/SlotSharingSlotAllocator.java#L136]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35049][state] Implement Map Async State API for ForStStateBackend [flink]

2024-09-04 Thread via GitHub


fredia commented on code in PR #24812:
URL: https://github.com/apache/flink/pull/24812#discussion_r1744859290


##
flink-state-backends/flink-statebackend-forst/src/main/java/org/apache/flink/state/forst/ForStDBMapEntryIterRequest.java:
##
@@ -0,0 +1,145 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.state.forst;
+
+import org.apache.flink.api.common.state.v2.StateIterator;
+import org.apache.flink.core.state.InternalStateFuture;
+import org.apache.flink.runtime.asyncprocessing.StateRequestHandler;
+import org.apache.flink.runtime.asyncprocessing.StateRequestType;
+
+import org.rocksdb.RocksDB;
+import org.rocksdb.RocksIterator;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+
+/** The ForSt {@link ForStDBIterRequest} which returns the entries of a 
ForStMapState. */
+public class ForStDBMapEntryIterRequest extends 
ForStDBIterRequest {
+
+private final InternalStateFuture>> future;
+
+public ForStDBMapEntryIterRequest(
+ContextKey contextKey,
+ForStMapState table,
+StateRequestHandler stateRequestHandler,
+byte[] toSeekBytes,
+InternalStateFuture>> future) {
+super(contextKey, table, stateRequestHandler, toSeekBytes);
+this.future = future;
+}
+
+@Override
+public void completeStateFutureExceptionally(String message, Throwable ex) 
{
+future.completeExceptionally(message, ex);
+}
+
+@Override
+public void process(RocksDB db, int cacheSizeLimit) throws IOException {
+try (RocksIterator iter = 
db.newIterator(table.getColumnFamilyHandle())) {

Review Comment:
   Yes👍, I put the `iterator` into `nextPayloadForContinuousLoading()`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35827][table-planner] Correcting equality comparisons between rowType fields and constants [flink]

2024-09-04 Thread via GitHub


xuyangzhong commented on PR #25229:
URL: https://github.com/apache/flink/pull/25229#issuecomment-2330686539

   @lincoln-lil Agree with you. I have updated this pr and removed the first 
commit.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-36222) Support the use of LLM models for data processing in the transform module.

2024-09-04 Thread LvYanquan (Jira)
LvYanquan created FLINK-36222:
-

 Summary: Support the use of LLM models for data processing in the 
transform module.
 Key: FLINK-36222
 URL: https://issues.apache.org/jira/browse/FLINK-36222
 Project: Flink
  Issue Type: New Feature
  Components: Flink CDC
Affects Versions: cdc-3.3.0
Reporter: LvYanquan
 Fix For: cdc-3.3.0


The transform module allows us to generate calculated columns. By combining 
this with the vector data generation capability provided by LLM models, we can 
provide the ability to write vector data for scenarios such as RAG.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-26821) Refactor Cassandra Sink implementation to the ASync Sink

2024-09-04 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17879442#comment-17879442
 ] 

Robert Metzger commented on FLINK-26821:


I've unassigned Marco. Lorenzo, are you still interested?

> Refactor Cassandra Sink implementation to the ASync Sink
> 
>
> Key: FLINK-26821
> URL: https://issues.apache.org/jira/browse/FLINK-26821
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Cassandra
>Reporter: Martijn Visser
>Priority: Major
>
> The current Cassandra connector is using the SinkFunction. This needs to be 
> ported to the correct Flink API, which for Cassandra is most likely the Async 
> Sink. More details about this API can be found in FLIP-171: 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-36223) SplitFetcher thread 0 received unexpected exception while polling the records

2024-09-04 Thread zjf (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zjf updated FLINK-36223:

Attachment: image-2024-09-05-14-36-12-672.png

> SplitFetcher thread 0 received unexpected exception while polling the records
> -
>
> Key: FLINK-36223
> URL: https://issues.apache.org/jira/browse/FLINK-36223
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.1
> Environment: # JDK 1.8
>  # SQL SERVER 2019
>  # FLINK CDC 3X
>Reporter: zjf
>Priority: Major
> Attachments: image-2024-09-05-14-17-56-066.png, 
> image-2024-09-05-14-23-48-144.png, image-2024-09-05-14-35-58-759.png, 
> image-2024-09-05-14-36-12-672.png
>
>
> 1. A SQL Server dynamic table error occurred. The trigger condition: after 
> saving a checkpoint, I added a new table to my Flink CDC job; the exception 
> then occurs when restoring the CDC task from that checkpoint.
> 2. The error log is as follows:
> 024-09-04 18:08:56,577 INFO tracer[] [debezium-reader-0] 
> i.d.c.s.SqlServerStreamingChangeEventSource:? - CDC is enabled for table 
> Capture instance "T_BD_SUPPLIER_L" 
> [sourceTableId=AIS20231222100348.dbo.T_BD_SUPPLIER_L, 
> changeTableId=AIS20231222100348.cdc.T_BD_SUPPLIER_L_CT, 
> startLsn=000abdbd:192b:0001, changeTableObjectId=627568271, stopLsn=NULL] 
> but the table is not whitelisted by connector
> 2024-09-04 18:08:56,947 ERROR tracer[] [Source Data Fetcher for Source: 
> kingdee-cdc-supply_test-source (1/1)#0] o.a.f.c.b.s.r.f.SplitFetcherManager:? 
> - Received uncaught exception.
> java.lang.RuntimeException: SplitFetcher thread 0 received unexpected 
> exception while polling the records
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:168)
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:117)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
>     at java.util.concurrent.FutureTask.run(FutureTask.java)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:750)
> Caused by: 
> com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.errors.ConnectException:
>  An exception occurred in the change event producer. This connector will be 
> stopped.
>     at 
> io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:50)
>     at 
> io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource.executeIteration(SqlServerStreamingChangeEventSource.java:459)
>     at 
> io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource.execute(SqlServerStreamingChangeEventSource.java:138)
>     at 
> com.ververica.cdc.connectors.sqlserver.source.reader.fetch.SqlServerStreamFetchTask$StreamSplitReadTask.execute(SqlServerStreamFetchTask.java:161)
>     at 
> com.ververica.cdc.connectors.sqlserver.source.reader.fetch.SqlServerStreamFetchTask.execute(SqlServerStreamFetchTask.java:69)
>     at 
> com.ververica.cdc.connectors.base.source.reader.external.IncrementalSourceStreamFetcher.lambda$submitTask$0(IncrementalSourceStreamFetcher.java:89)
>     ... 6 common frames omitted
> Caused by: 
> com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.errors.DataException:
>  file is not a valid field name
>     at 
> com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.data.Struct.lookupField(Struct.java:254)
>     at 
> com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.data.Struct.getCheckType(Struct.java:261)
>     at 
> com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.data.Struct.getString(Struct.java:158)
>     at 
> com.ververica.cdc.connectors.base.relational.JdbcSourceEventDispatcher$SchemaChangeEventReceiver.schemaChangeRecordValue(JdbcSourceEventDispatcher.java:193)
>     at 
> com.ververica.cdc.connectors.base.relational.JdbcSourceEventDispatcher$SchemaChangeEventReceiver.schemaChangeEvent(JdbcSourceEventDispatcher.java:223)
>     at 
> io.debezium.connector.sqlserver.SqlServerSchemaChangeEventEmitter.emitSchemaChangeEvent(SqlServerSchemaChangeEventEmitter.java:47)
>     at 
> com.ververica.cdc.connectors.base.relational.JdbcSourceEventDispatcher.dispatchSchemaChangeEvent(JdbcSourceEventDispatcher.java:147)
>     at 
> com.ververica.cdc.connectors.base.relational.JdbcSourceEventDispatcher.dispatchSchemaChangeEvent(JdbcSourceEventDispatcher.java:62)
>     at 
> io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource.getChangeTablesToQuery(SqlServerStreamingChangeEventS

Re: [PR] [FLINK-36151] Add schema evolution related docs [flink-cdc]

2024-09-04 Thread via GitHub


gtk96 commented on PR #3575:
URL: https://github.com/apache/flink-cdc/pull/3575#issuecomment-2330716899

   LGTM ! 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36151] Add schema evolution related docs [flink-cdc]

2024-09-04 Thread via GitHub


gtk96 commented on code in PR #3575:
URL: https://github.com/apache/flink-cdc/pull/3575#discussion_r1730587567


##
docs/content.zh/docs/core-concept/schema-evolution.md:
##
@@ -0,0 +1,115 @@
+---
+title: "Schema Evolution"
+weight: 7
+type: docs
+aliases:
+  - /core-concept/schema-evolution/
+---
+
+
+# Definition
+
+The **Schema Evolution** feature synchronizes upstream DDL change events to the 
downstream, such as creating new tables, adding new columns, renaming columns or 
changing column types, dropping columns, and truncating or dropping tables.
+
+## Parameters
+
+The behavior of Schema Evolution can be configured with the following parameter:
+
+```yaml
+pipeline:
+  schema.change.behavior: evolve
+```
+
+`schema.change.behavior` is an enumeration that can be set to 
`exception`, `evolve`, `try_evolve`, `lenient`, or `ignore`.
+
+## Schema Evolution Behaviors
+
+### Exception Mode
+
+In this mode, no schema change is allowed.
+As soon as a schema change event is received, the `SchemaOperator` throws an exception.
+Use this mode when your downstream sink cannot handle any schema change.
+
+### Evolve Mode
+
+In this mode, the `SchemaOperator` applies all upstream schema change events to the downstream sink.
+If an attempt fails, an exception is thrown from the `SchemaRegistry` and a global failover restart is triggered.
+
+### TryEvolve Mode
+
+In this mode, the schema operator also tries to apply upstream schema change events to the downstream sink.
+However, if the downstream sink does not support a specific schema change event and reports a failure,
+the `SchemaOperator` tolerates the event and, when the upstream and downstream schemas diverge, tries to convert all subsequent data records.
+
+> Warning: such data conversion is not guaranteed to be lossless. Fields with incompatible data types may be lost.
+
+### Lenient Mode
+
+In this mode, the schema operator converts all upstream schema change events before applying them to the downstream sink, ensuring that no data is lost.
+For example, an `AlterColumnTypeEvent` is converted into two separate schema change events, `RenameColumnEvent` and 
`AddColumnEvent`:
+the previous column (with the pre-change type) is kept, and a new column (with the new type) is added.
+
+This is the default schema evolution behavior.
+
+> Note: in this mode, `TruncateTableEvent` and `DropTableEvent` 
are not sent downstream by default, to avoid unexpected data loss. This behavior can be adjusted via [Per-Event Type 
Control](#per-event-type-control).
+
+### Ignore Mode
+
+In this mode, all schema change events are silently accepted by the `SchemaOperator` and never applied to the downstream sink.
+This is useful when your downstream sink is not yet ready for any schema change but you still want to receive data from the unchanged columns.
+
+## Per-Event Type Control
+
+Sometimes it is not appropriate to synchronize all schema change events to the downstream.
+For example, allowing `AddColumnEvent` but forbidding `DropColumnEvent` is a common way to avoid deleting existing data.
+This can be achieved by setting the `include.schema.changes` and `exclude.schema.changes` options in the `sink` block.
+
+### Options
+
+| Option Key               | Notes                                                                                   | Optional |
+|--------------------------|-----------------------------------------------------------------------------------------|----------|
+| `include.schema.changes` | Schema change event types to apply. Defaults to all types if not specified.             | Yes      |
+| `exclude.schema.changes` | Schema change event types not to apply. Takes precedence over `include.schema.changes`. | Yes      |
+
+> In Lenient mode, `TruncateTableEvent` and `DropTableEvent` 
are ignored by default. In any other mode, no event is ignored by default.
+
+Here is the full list of configurable schema change event types:
+
+| Event Type          | Notes                              |
+|---------------------|------------------------------------|
+| `add.column`        | Appends a column to the table.     |
+| `alter.column.type` | Changes the data type of a column. |
+| `create.table`      | Creates a new table.               |
+| `drop.column`       | Drops a column.                    |
+| `drop.table`        | Drops a table.                     |
+| `rename.column`     | Renames a column.                  |
+| `truncate.table`    | Truncates all data in the table.   |
+
+Partial matching is supported. For example, passing `drop` into the options above is equivalent to passing both `drop.column` and `drop.table`.
+
+### Example
+
+The following YAML configuration includes `CreateTableEvent` and column-related events, except `DropColumnEvent`.
+
+```yaml
+sink:
+  include.schema.changes: [create.table.event, column]

Review Comment:
   > create.table   --create.table.event 
   
   What is the difference between them?
   
   I noticed that the enumerated value in the documentation is `create.table`, 
but in the example, it is given as `create.table.event`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-36223) SplitFetcher thread 0 received unexpected exception while polling the records

2024-09-04 Thread zjf (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zjf updated FLINK-36223:

Description: 
1. A SQL Server dynamic table error occurred. The trigger condition: after 
saving a checkpoint, I added a new table to my Flink CDC job; the exception then 
occurs when restoring the CDC task from that checkpoint.
2. The error log is as follows:

024-09-04 18:08:56,577 INFO tracer[] [debezium-reader-0] 
i.d.c.s.SqlServerStreamingChangeEventSource:? - CDC is enabled for table 
Capture instance "T_BD_SUPPLIER_L" 
[sourceTableId=AIS20231222100348.dbo.T_BD_SUPPLIER_L, 
changeTableId=AIS20231222100348.cdc.T_BD_SUPPLIER_L_CT, 
startLsn=000abdbd:192b:0001, changeTableObjectId=627568271, stopLsn=NULL] 
but the table is not whitelisted by connector
2024-09-04 18:08:56,947 ERROR tracer[] [Source Data Fetcher for Source: 
kingdee-cdc-supply_test-source (1/1)#0|#0] 
o.a.f.c.b.s.r.f.SplitFetcherManager:? - Received uncaught exception.
java.lang.RuntimeException: SplitFetcher thread 0 received unexpected exception 
while polling the records
    at 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:168)
    at 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:117)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
    at java.util.concurrent.FutureTask.run(FutureTask.java)
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
Caused by: 
com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.errors.ConnectException:
 An exception occurred in the change event producer. This connector will be 
stopped.
    at 
io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:50)
    at 
io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource.executeIteration(SqlServerStreamingChangeEventSource.java:459)
    at 
io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource.execute(SqlServerStreamingChangeEventSource.java:138)
    at 
com.ververica.cdc.connectors.sqlserver.source.reader.fetch.SqlServerStreamFetchTask$StreamSplitReadTask.execute(SqlServerStreamFetchTask.java:161)
    at 
com.ververica.cdc.connectors.sqlserver.source.reader.fetch.SqlServerStreamFetchTask.execute(SqlServerStreamFetchTask.java:69)
    at 
com.ververica.cdc.connectors.base.source.reader.external.IncrementalSourceStreamFetcher.lambda$submitTask$0(IncrementalSourceStreamFetcher.java:89)
    ... 6 common frames omitted
Caused by: 
com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.errors.DataException:
 file is not a valid field name
    at 
com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.data.Struct.lookupField(Struct.java:254)
    at 
com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.data.Struct.getCheckType(Struct.java:261)
    at 
com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.data.Struct.getString(Struct.java:158)
    at 
com.ververica.cdc.connectors.base.relational.JdbcSourceEventDispatcher$SchemaChangeEventReceiver.schemaChangeRecordValue(JdbcSourceEventDispatcher.java:193)
    at 
com.ververica.cdc.connectors.base.relational.JdbcSourceEventDispatcher$SchemaChangeEventReceiver.schemaChangeEvent(JdbcSourceEventDispatcher.java:223)
    at 
io.debezium.connector.sqlserver.SqlServerSchemaChangeEventEmitter.emitSchemaChangeEvent(SqlServerSchemaChangeEventEmitter.java:47)
    at 
com.ververica.cdc.connectors.base.relational.JdbcSourceEventDispatcher.dispatchSchemaChangeEvent(JdbcSourceEventDispatcher.java:147)
    at 
com.ververica.cdc.connectors.base.relational.JdbcSourceEventDispatcher.dispatchSchemaChangeEvent(JdbcSourceEventDispatcher.java:62)
    at 
io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource.getChangeTablesToQuery(SqlServerStreamingChangeEventSource.java:581)
    at 
io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource.executeIteration(SqlServerStreamingChangeEventSource.java:237)
    ... 10 common frames omitted
3. The Maven configuration I introduced is as follows:

    <properties>
        <flink.version>1.19.1</flink.version>
        <sql-connector.version>3.0.1</sql-connector.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-base</artifactId>
            <version>${flink.version}</version>
        </dependency>

        <dependency>
            <groupId>com.ververica</groupId>
            <artifactId>flink-cdc-base</artifactId>
            <version>${flink.version}</version>
        </dependency>

        <dependency>
            <groupId>com.ververica</groupId>
            <artifactId>flink-sql-connector-mysql-cdc</artifactId>
            <version>${sql-connector.version}</version>
            <scope>compile</scope>
        </dependency>

        <dependency>
            <groupId>com.ververica</groupId>
            <artifactId>flink-sql-connector-sqlserver-cdc</artifactId>
            <version>${sql-connector.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-java</artifactId>
            <version>${flink.ver

[jira] [Updated] (FLINK-36223) SplitFetcher thread 0 received unexpected exception while polling the records

2024-09-04 Thread zjf (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zjf updated FLINK-36223:

Attachment: image-2024-09-05-14-39-07-070.png

> SplitFetcher thread 0 received unexpected exception while polling the records
> -
>
> Key: FLINK-36223
> URL: https://issues.apache.org/jira/browse/FLINK-36223
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.1
> Environment: # JDK 1.8
>  # SQL SERVER 2019
>  # FLINK CDC 3X
>Reporter: zjf
>Priority: Major
> Attachments: image-2024-09-05-14-17-56-066.png, 
> image-2024-09-05-14-23-48-144.png, image-2024-09-05-14-35-58-759.png, 
> image-2024-09-05-14-36-12-672.png, image-2024-09-05-14-37-46-581.png, 
> image-2024-09-05-14-38-30-542.png, image-2024-09-05-14-38-49-424.png, 
> image-2024-09-05-14-39-07-070.png
>
>
> 1. A SQL Server dynamic table error occurred. The trigger condition: after 
> saving a checkpoint, I added a new table to my Flink CDC job; the exception 
> then occurs when restoring the CDC task from that checkpoint.
> 2. The error log is as follows:
> 024-09-04 18:08:56,577 INFO tracer[] [debezium-reader-0] 
> i.d.c.s.SqlServerStreamingChangeEventSource:? - CDC is enabled for table 
> Capture instance "T_BD_SUPPLIER_L" 
> [sourceTableId=AIS20231222100348.dbo.T_BD_SUPPLIER_L, 
> changeTableId=AIS20231222100348.cdc.T_BD_SUPPLIER_L_CT, 
> startLsn=000abdbd:192b:0001, changeTableObjectId=627568271, stopLsn=NULL] 
> but the table is not whitelisted by connector
> 2024-09-04 18:08:56,947 ERROR tracer[] [Source Data Fetcher for Source: 
> kingdee-cdc-supply_test-source (1/1)#0] o.a.f.c.b.s.r.f.SplitFetcherManager:? 
> - Received uncaught exception.
> java.lang.RuntimeException: SplitFetcher thread 0 received unexpected 
> exception while polling the records
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:168)
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:117)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
>     at java.util.concurrent.FutureTask.run(FutureTask.java)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:750)
> Caused by: 
> com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.errors.ConnectException:
>  An exception occurred in the change event producer. This connector will be 
> stopped.
>     at 
> io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:50)
>     at 
> io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource.executeIteration(SqlServerStreamingChangeEventSource.java:459)
>     at 
> io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource.execute(SqlServerStreamingChangeEventSource.java:138)
>     at 
> com.ververica.cdc.connectors.sqlserver.source.reader.fetch.SqlServerStreamFetchTask$StreamSplitReadTask.execute(SqlServerStreamFetchTask.java:161)
>     at 
> com.ververica.cdc.connectors.sqlserver.source.reader.fetch.SqlServerStreamFetchTask.execute(SqlServerStreamFetchTask.java:69)
>     at 
> com.ververica.cdc.connectors.base.source.reader.external.IncrementalSourceStreamFetcher.lambda$submitTask$0(IncrementalSourceStreamFetcher.java:89)
>     ... 6 common frames omitted
> Caused by: 
> com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.errors.DataException:
>  file is not a valid field name
>     at 
> com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.data.Struct.lookupField(Struct.java:254)
>     at 
> com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.data.Struct.getCheckType(Struct.java:261)
>     at 
> com.ververica.cdc.connectors.shaded.org.apache.kafka.connect.data.Struct.getString(Struct.java:158)
>     at 
> com.ververica.cdc.connectors.base.relational.JdbcSourceEventDispatcher$SchemaChangeEventReceiver.schemaChangeRecordValue(JdbcSourceEventDispatcher.java:193)
>     at 
> com.ververica.cdc.connectors.base.relational.JdbcSourceEventDispatcher$SchemaChangeEventReceiver.schemaChangeEvent(JdbcSourceEventDispatcher.java:223)
>     at 
> io.debezium.connector.sqlserver.SqlServerSchemaChangeEventEmitter.emitSchemaChangeEvent(SqlServerSchemaChangeEventEmitter.java:47)
>     at 
> com.ververica.cdc.connectors.base.relational.JdbcSourceEventDispatcher.dispatchSchemaChangeEvent(JdbcSourceEventDispatcher.java:147)
>     at 
> com.ververica.cdc.connectors.base.relational.JdbcSourceEventDispatcher.dispatchSchemaChangeEvent(JdbcSourceEventDispatc

[jira] [Created] (FLINK-36224) Add the version mapping between pipeline connectors and flink

2024-09-04 Thread Thorne (Jira)
Thorne created FLINK-36224:
--

 Summary:  Add the version mapping between pipeline connectors and 
flink
 Key: FLINK-36224
 URL: https://issues.apache.org/jira/browse/FLINK-36224
 Project: Flink
  Issue Type: Sub-task
  Components: Flink CDC
Reporter: Thorne
 Fix For: cdc-3.2.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-32483) RescaleCheckpointManuallyITCase.testCheckpointRescalingOutKeyedState fails on AZP

2024-09-04 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17879451#comment-17879451
 ] 

Matthias Pohl edited comment on FLINK-32483 at 9/5/24 6:45 AM:
---

We've observed a similar failure in {{testCheckpointRescalingInKeyedState}} of 
our internal fork of Flink (that's why I cannot share the link). The 
corresponding branch was based on Flink 1.19.
{code}
Sep 04 16:02:51 16:02:51.889 [ERROR] 
org.apache.flink.test.checkpointing.RescaleCheckpointManuallyITCase.testCheckpointRescalingInKeyedState
 -- Time elapsed: 12.33 s <<< FAILURE!
Sep 04 16:02:51 java.lang.AssertionError: expected:<[(0,24000), (1,22500), 
(0,34500), (1,33000), (0,21000), (0,45000), (2,31500), (2,42000), (1,6000), 
(0,28500), (0,52500), (2,15000), (1,3000), (1,51000), (0,1500), (0,49500), 
(2,12000), (2,6), (0,36000), (1,10500), (1,58500), (0,46500), (0,9000), 
(0,57000), (2,19500), (2,43500), (1,7500), (1,55500), (2,3), (1,18000), 
(0,54000), (2,40500), (1,4500), (0,16500), (2,27000), (1,39000), (2,13500), 
(1,25500), (0,37500), (0,61500), (2,0), (2,48000)]> but was:<[(1,22500), 
(0,34500), (1,33000), (0,21000), (0,45000), (2,31500), (0,52500), (0,28500), 
(2,15000), (1,3000), (1,51000), (0,49500), (0,1500), (1,10500), (1,58500), 
(0,46500), (0,57000), (0,9000), (2,19500), (2,43500), (1,7500), (1,55500), 
(2,40500), (1,4500), (0,16500), (2,27000), (1,39000), (2,13500), (1,25500), 
(0,61500), (0,37500)]>
Sep 04 16:02:51 at org.junit.Assert.fail(Assert.java:89)
Sep 04 16:02:51 at org.junit.Assert.failNotEquals(Assert.java:835)
Sep 04 16:02:51 at org.junit.Assert.assertEquals(Assert.java:120)
Sep 04 16:02:51 at org.junit.Assert.assertEquals(Assert.java:146)
Sep 04 16:02:51 at 
org.apache.flink.test.checkpointing.RescaleCheckpointManuallyITCase.restoreAndAssert(RescaleCheckpointManuallyITCase.java:219)
Sep 04 16:02:51 at 
org.apache.flink.test.checkpointing.RescaleCheckpointManuallyITCase.testCheckpointRescalingKeyedState(RescaleCheckpointManuallyITCase.java:138)
Sep 04 16:02:51 at 
org.apache.flink.test.checkpointing.RescaleCheckpointManuallyITCase.testCheckpointRescalingInKeyedState(RescaleCheckpointManuallyITCase.java:111)
Sep 04 16:02:51 at 
java.base/java.lang.reflect.Method.invoke(Method.java:566)
Sep 04 16:02:51 at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
{code}

I'm adding this here rather than creating a new Jira issue because it looks like 
the same issue.

