[jira] [Updated] (FLINK-34543) Support Full Partition Processing On Non-keyed DataStream

2024-07-08 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu updated FLINK-34543:

Release Note: 
Support Full Partition Processing On Non-keyed DataStream
New Features:
fullWindowPartition Method: Introduced in the DataStream class to enable full 
window processing, allowing collection and processing of all records in each 
subtask.
PartitionWindowedStream Class: Extends DataStream, facilitating full window 
processing with several new APIs.
APIs for PartitionWindowedStream:
mapPartition: Processes records using MapPartitionFunction.
sortPartition: Supports sorting records by field or key.
aggregate: Applies aggregation functions incrementally within windows.
reduce: Performs reduction transformations on windowed records.
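
The operations listed above can be illustrated outside of Flink. The following self-contained Java snippet is only a sketch of the semantics (each function sees every record collected in one subtask's full window); it is not the actual PartitionWindowedStream API, and the class and method signatures here are hypothetical:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical stand-alone sketch of the semantics behind the new
// PartitionWindowedStream operations. This is NOT Flink code: it only
// shows that each operation works on ALL records collected in one
// subtask's full window.
public class FullPartitionDemo {

    // mapPartition: the user function receives the whole record
    // collection at once and may emit any number of results.
    static List<String> mapPartition(List<Integer> records) {
        List<String> out = new ArrayList<>();
        out.add("count=" + records.size());
        return out;
    }

    // sortPartition: sorts every record in the window.
    static List<Integer> sortPartition(List<Integer> records) {
        List<Integer> sorted = new ArrayList<>(records);
        sorted.sort(Comparator.naturalOrder());
        return sorted;
    }

    // reduce: pairwise reduction over all windowed records.
    static int reduce(List<Integer> records) {
        return records.stream().reduce(0, Integer::sum);
    }
}
```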

> Support Full Partition Processing On Non-keyed DataStream
> -
>
> Key: FLINK-34543
> URL: https://issues.apache.org/jira/browse/FLINK-34543
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.20.0
>Reporter: Wencong Liu
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> Introduce the PartitionWindowedStream and provide multiple full window 
> operations in it.
> The related motivation and design can be found in 
> [FLIP-380|https://cwiki.apache.org/confluence/display/FLINK/FLIP-380%3A+Support+Full+Partition+Processing+On+Non-keyed+DataStream].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34543) Support Full Partition Processing On Non-keyed DataStream

2024-07-08 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu updated FLINK-34543:

Release Note: 
New Features:
fullWindowPartition Method: Introduced in the DataStream class to enable full 
window processing, allowing collection and processing of all records in each 
subtask.
PartitionWindowedStream Class: Extends DataStream, facilitating full window 
processing with several new APIs.
APIs for PartitionWindowedStream:
1. mapPartition: Processes records using a MapPartitionFunction.
2. sortPartition: Sorts records by field or key within the window.
3. aggregate: Aggregates records within the window.
4. reduce: Reduces records within the window.

  was:
Support Full Partition Processing On Non-keyed DataStream
New Features:
fullWindowPartition Method: Introduced in the DataStream class to enable full 
window processing, allowing collection and processing of all records in each 
subtask.
PartitionWindowedStream Class: Extends DataStream, facilitating full window 
processing with several new APIs.
APIs for PartitionWindowedStream:
mapPartition: Processes records using MapPartitionFunction.
sortPartition: Supports sorting records by field or key.
aggregate: Applies aggregation functions incrementally within windows.
reduce: Performs reduction transformations on windowed records.


> Support Full Partition Processing On Non-keyed DataStream
> -
>
> Key: FLINK-34543
> URL: https://issues.apache.org/jira/browse/FLINK-34543
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.20.0
>Reporter: Wencong Liu
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> Introduce the PartitionWindowedStream and provide multiple full window 
> operations in it.
> The related motivation and design can be found in 
> [FLIP-380|https://cwiki.apache.org/confluence/display/FLINK/FLIP-380%3A+Support+Full+Partition+Processing+On+Non-keyed+DataStream].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35795) FLIP-466: Introduce ProcessFunction Attribute in DataStream API V2

2024-07-09 Thread Wencong Liu (Jira)
Wencong Liu created FLINK-35795:
---

 Summary: FLIP-466: Introduce ProcessFunction Attribute in 
DataStream API V2
 Key: FLINK-35795
 URL: https://issues.apache.org/jira/browse/FLINK-35795
 Project: Flink
  Issue Type: Sub-task
Reporter: Wencong Liu






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34448) ChangelogLocalRecoveryITCase#testRestartTM failed fatally with 127 exit code

2024-02-17 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17818186#comment-17818186
 ] 

Wencong Liu commented on FLINK-34448:
-

Maybe [~Yanfei Lei] could take a look 😄.

> ChangelogLocalRecoveryITCase#testRestartTM failed fatally with 127 exit code
> 
>
> Key: FLINK-34448
> URL: https://issues.apache.org/jira/browse/FLINK-34448
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.20.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
> Attachments: FLINK-34448.head.log.gz
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57550&view=logs&j=2c3cbe13-dee0-5837-cf47-3053da9a8a78&t=b78d9d30-509a-5cea-1fef-db7abaa325ae&l=8897
> {code}
> Feb 16 02:43:47 02:43:47.142 [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.2.2:test (integration-tests) 
> on project flink-tests: 
> Feb 16 02:43:47 02:43:47.142 [ERROR] 
> Feb 16 02:43:47 02:43:47.142 [ERROR] Please refer to 
> /__w/1/s/flink-tests/target/surefire-reports for the individual test results.
> Feb 16 02:43:47 02:43:47.142 [ERROR] Please refer to dump files (if any 
> exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
> Feb 16 02:43:47 02:43:47.142 [ERROR] ExecutionException The forked VM 
> terminated without properly saying goodbye. VM crash or System.exit called?
> Feb 16 02:43:47 02:43:47.142 [ERROR] Command was /bin/sh -c cd 
> '/__w/1/s/flink-tests' && '/usr/lib/jvm/jdk-11.0.19+7/bin/java' 
> '-XX:+UseG1GC' '-Xms256m' '-XX:+IgnoreUnrecognizedVMOptions' 
> '--add-opens=java.base/java.util=ALL-UNNAMED' 
> '--add-opens=java.base/java.io=ALL-UNNAMED' '-Xmx1536m' '-jar' 
> '/__w/1/s/flink-tests/target/surefire/surefirebooter-20240216015747138_560.jar'
>  '/__w/1/s/flink-tests/target/surefire' '2024-02-16T01-57-43_286-jvmRun4' 
> 'surefire-20240216015747138_558tmp' 'surefire_185-20240216015747138_559tmp'
> Feb 16 02:43:47 02:43:47.142 [ERROR] Error occurred in starting fork, check 
> output in log
> Feb 16 02:43:47 02:43:47.142 [ERROR] Process Exit Code: 127
> Feb 16 02:43:47 02:43:47.142 [ERROR] Crashed tests:
> Feb 16 02:43:47 02:43:47.142 [ERROR] 
> org.apache.flink.test.checkpointing.ChangelogLocalRecoveryITCase
> Feb 16 02:43:47 02:43:47.142 [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException The forked VM terminated without properly saying goodbye. 
> VM crash or System.exit called?
> Feb 16 02:43:47 02:43:47.142 [ERROR] Command was /bin/sh -c cd 
> '/__w/1/s/flink-tests' && '/usr/lib/jvm/jdk-11.0.19+7/bin/java' 
> '-XX:+UseG1GC' '-Xms256m' '-XX:+IgnoreUnrecognizedVMOptions' 
> '--add-opens=java.base/java.util=ALL-UNNAMED' 
> '--add-opens=java.base/java.io=ALL-UNNAMED' '-Xmx1536m' '-jar' 
> '/__w/1/s/flink-tests/target/surefire/surefirebooter-20240216015747138_560.jar'
>  '/__w/1/s/flink-tests/target/surefire' '2024-02-16T01-57-43_286-jvmRun4' 
> 'surefire-20240216015747138_558tmp' 'surefire_185-20240216015747138_559tmp'
> Feb 16 02:43:47 02:43:47.142 [ERROR] Error occurred in starting fork, check 
> output in log
> Feb 16 02:43:47 02:43:47.142 [ERROR] Process Exit Code: 127
> Feb 16 02:43:47 02:43:47.142 [ERROR] Crashed tests:
> Feb 16 02:43:47 02:43:47.142 [ERROR] 
> org.apache.flink.test.checkpointing.ChangelogLocalRecoveryITCase
> Feb 16 02:43:47 02:43:47.142 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:456)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34543) Support Full Partition Processing On Non-keyed DataStream

2024-02-28 Thread Wencong Liu (Jira)
Wencong Liu created FLINK-34543:
---

 Summary: Support Full Partition Processing On Non-keyed DataStream
 Key: FLINK-34543
 URL: https://issues.apache.org/jira/browse/FLINK-34543
 Project: Flink
  Issue Type: Improvement
  Components: API / DataStream
Affects Versions: 1.20.0
Reporter: Wencong Liu
 Fix For: 1.20.0


1. Introduce the MapPartition, SortPartition, Aggregate, and Reduce APIs in DataStream.
2. Introduce the SortPartition API in KeyedStream.

The related FLIP can be found in 
[FLIP-380|https://cwiki.apache.org/confluence/display/FLINK/FLIP-380%3A+Support+Full+Partition+Processing+On+Non-keyed+DataStream].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34543) Support Full Partition Processing On Non-keyed DataStream

2024-03-03 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu updated FLINK-34543:

Description: 
1. Introduce the MapPartition, SortPartition, Aggregate, and Reduce APIs in DataStream.
2. Introduce the SortPartition API in KeyedStream.

The related motivation and design can be found in 
[FLIP-380|https://cwiki.apache.org/confluence/display/FLINK/FLIP-380%3A+Support+Full+Partition+Processing+On+Non-keyed+DataStream].

  was:
1. Introduce the MapPartition, SortPartition, Aggregate, and Reduce APIs in DataStream.
2. Introduce the SortPartition API in KeyedStream.

The related FLIP can be found in 
[FLIP-380|https://cwiki.apache.org/confluence/display/FLINK/FLIP-380%3A+Support+Full+Partition+Processing+On+Non-keyed+DataStream].


> Support Full Partition Processing On Non-keyed DataStream
> -
>
> Key: FLINK-34543
> URL: https://issues.apache.org/jira/browse/FLINK-34543
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.20.0
>Reporter: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> 1. Introduce the MapPartition, SortPartition, Aggregate, and Reduce APIs in DataStream.
> 2. Introduce the SortPartition API in KeyedStream.
> The related motivation and design can be found in 
> [FLIP-380|https://cwiki.apache.org/confluence/display/FLINK/FLIP-380%3A+Support+Full+Partition+Processing+On+Non-keyed+DataStream].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34543) Support Full Partition Processing On Non-keyed DataStream

2024-03-12 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu updated FLINK-34543:

Description: 
Introduce the PartitionWindowedStream and provide multiple full window 
operations in it.

The related motivation and design can be found in 
[FLIP-380|https://cwiki.apache.org/confluence/display/FLINK/FLIP-380%3A+Support+Full+Partition+Processing+On+Non-keyed+DataStream].

  was:
1. Introduce the MapPartition, SortPartition, Aggregate, and Reduce APIs in DataStream.
2. Introduce the SortPartition API in KeyedStream.

The related motivation and design can be found in 
[FLIP-380|https://cwiki.apache.org/confluence/display/FLINK/FLIP-380%3A+Support+Full+Partition+Processing+On+Non-keyed+DataStream].


> Support Full Partition Processing On Non-keyed DataStream
> -
>
> Key: FLINK-34543
> URL: https://issues.apache.org/jira/browse/FLINK-34543
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.20.0
>Reporter: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> Introduce the PartitionWindowedStream and provide multiple full window 
> operations in it.
> The related motivation and design can be found in 
> [FLIP-380|https://cwiki.apache.org/confluence/display/FLINK/FLIP-380%3A+Support+Full+Partition+Processing+On+Non-keyed+DataStream].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33356) The navigation bar on Flink’s official website is messed up.

2023-10-24 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu updated FLINK-33356:

Attachment: image-2023-10-25-12-34-22-790.png

> The navigation bar on Flink’s official website is messed up.
> 
>
> Key: FLINK-33356
> URL: https://issues.apache.org/jira/browse/FLINK-33356
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: Junrui Li
>Priority: Major
> Attachments: image-2023-10-25-11-55-52-653.png, 
> image-2023-10-25-12-34-22-790.png
>
>
> The side navigation bar on the Flink official website at the following link: 
> [https://nightlies.apache.org/flink/flink-docs-master/] appears to be messed 
> up, as shown in the attached screenshot.
> !image-2023-10-25-11-55-52-653.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33356) The navigation bar on Flink’s official website is messed up.

2023-10-24 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17779330#comment-17779330
 ] 

Wencong Liu commented on FLINK-33356:
-

Hello [~JunRuiLi], I found that this is caused by commit "30e8b3de05c1d6b75d8f27b9188a1d34f1589ac5", which modified the subproject commit. I think we should revert this change.

!image-2023-10-25-12-34-22-790.png!

> The navigation bar on Flink’s official website is messed up.
> 
>
> Key: FLINK-33356
> URL: https://issues.apache.org/jira/browse/FLINK-33356
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: Junrui Li
>Priority: Major
> Attachments: image-2023-10-25-11-55-52-653.png, 
> image-2023-10-25-12-34-22-790.png
>
>
> The side navigation bar on the Flink official website at the following link: 
> [https://nightlies.apache.org/flink/flink-docs-master/] appears to be messed 
> up, as shown in the attached screenshot.
> !image-2023-10-25-11-55-52-653.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-33356) The navigation bar on Flink’s official website is messed up.

2023-10-24 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17779330#comment-17779330
 ] 

Wencong Liu edited comment on FLINK-33356 at 10/25/23 5:58 AM:
---

Hello [~JunRuiLi], I found that this is caused by commit "30e8b3de05c1d6b75d8f27b9188a1d34f1589ac5", which modified the subproject commit. I think we should revert this change. Could you assign it to me?

!image-2023-10-25-12-34-22-790.png!


was (Author: JIRAUSER281639):
Hello [~JunRuiLi], I found that this is caused by commit "30e8b3de05c1d6b75d8f27b9188a1d34f1589ac5", which modified the subproject commit. I think we should revert this change.

!image-2023-10-25-12-34-22-790.png!

> The navigation bar on Flink’s official website is messed up.
> 
>
> Key: FLINK-33356
> URL: https://issues.apache.org/jira/browse/FLINK-33356
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: Junrui Li
>Priority: Major
> Attachments: image-2023-10-25-11-55-52-653.png, 
> image-2023-10-25-12-34-22-790.png
>
>
> The side navigation bar on the Flink official website at the following link: 
> [https://nightlies.apache.org/flink/flink-docs-master/] appears to be messed 
> up, as shown in the attached screenshot.
> !image-2023-10-25-11-55-52-653.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33356) The navigation bar on Flink’s official website is messed up.

2023-11-01 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17781629#comment-17781629
 ] 

Wencong Liu commented on FLINK-33356:
-

This is due to a recent failure in the documentation build. Once the build issue is resolved, the website will return to normal.

> The navigation bar on Flink’s official website is messed up.
> 
>
> Key: FLINK-33356
> URL: https://issues.apache.org/jira/browse/FLINK-33356
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: Junrui Li
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
> Attachments: image-2023-10-25-11-55-52-653.png, 
> image-2023-10-25-12-34-22-790.png
>
>
> The side navigation bar on the Flink official website at the following link: 
> [https://nightlies.apache.org/flink/flink-docs-master/] appears to be messed 
> up, as shown in the attached screenshot.
> !image-2023-10-25-11-55-52-653.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33445) Translate DataSet migration guideline to Chinese

2023-11-02 Thread Wencong Liu (Jira)
Wencong Liu created FLINK-33445:
---

 Summary: Translate DataSet migration guideline to Chinese
 Key: FLINK-33445
 URL: https://issues.apache.org/jira/browse/FLINK-33445
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.19.0
Reporter: Wencong Liu
 Fix For: 1.19.0


[FLINK-33041|https://issues.apache.org/jira/browse/FLINK-33041], which adds an introduction on how to migrate from the DataSet API to DataStream, has been merged into the master branch. Here is the link on the Flink website: [How to Migrate from DataSet to DataStream | Apache Flink|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration/]

According to the [contribution guidelines|https://flink.apache.org/how-to-contribute/contribute-documentation/#chinese-documentation-translation], we should add an identical markdown file in {{content.zh/}} and translate it to Chinese. Community volunteers are welcome to take this task.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33445) Translate DataSet migration guideline to Chinese

2023-11-02 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu updated FLINK-33445:

Description: 
FLINK-33041, which adds an introduction on how to migrate from the DataSet API to DataStream, has been merged into the master branch. Here is the [link|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration/] on the Flink website.

According to the [contribution guidelines|https://flink.apache.org/how-to-contribute/contribute-documentation/#chinese-documentation-translation], we should add an identical markdown file in {{content.zh/}} and translate it to Chinese. Community volunteers are welcome to take this task.

 

  was:
[FLINK-33041|https://issues.apache.org/jira/browse/FLINK-33041], which adds an introduction on how to migrate from the DataSet API to DataStream, has been merged into the master branch. Here is the link on the Flink website: [How to Migrate from DataSet to DataStream | Apache Flink|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration/]

According to the [contribution guidelines|https://flink.apache.org/how-to-contribute/contribute-documentation/#chinese-documentation-translation], we should add an identical markdown file in {{content.zh/}} and translate it to Chinese. Community volunteers are welcome to take this task.

 


> Translate DataSet migration guideline to Chinese
> 
>
> Key: FLINK-33445
> URL: https://issues.apache.org/jira/browse/FLINK-33445
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Priority: Major
> Fix For: 1.19.0
>
>
> FLINK-33041, which adds an introduction on how to migrate from the DataSet API to DataStream, has been merged into the master branch. Here is the [link|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration/] on the Flink website.
> According to the [contribution guidelines|https://flink.apache.org/how-to-contribute/contribute-documentation/#chinese-documentation-translation], we should add an identical markdown file in {{content.zh/}} and translate it to Chinese. Community volunteers are welcome to take this task.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33445) Translate DataSet migration guideline to Chinese

2023-11-02 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu updated FLINK-33445:

Labels: starter  (was: )

> Translate DataSet migration guideline to Chinese
> 
>
> Key: FLINK-33445
> URL: https://issues.apache.org/jira/browse/FLINK-33445
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Priority: Major
>  Labels: starter
> Fix For: 1.19.0
>
>
> FLINK-33041, which adds an introduction on how to migrate from the DataSet API to DataStream, has been merged into the master branch. Here is the [link|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration/] on the Flink website.
> According to the [contribution guidelines|https://flink.apache.org/how-to-contribute/contribute-documentation/#chinese-documentation-translation], we should add an identical markdown file in {{content.zh/}} and translate it to Chinese. Community volunteers are welcome to take this task.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33445) Translate DataSet migration guideline to Chinese

2023-11-02 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu updated FLINK-33445:

Component/s: chinese-translation
 (was: Documentation)

> Translate DataSet migration guideline to Chinese
> 
>
> Key: FLINK-33445
> URL: https://issues.apache.org/jira/browse/FLINK-33445
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Priority: Major
> Fix For: 1.19.0
>
>
> FLINK-33041, which adds an introduction on how to migrate from the DataSet API to DataStream, has been merged into the master branch. Here is the [link|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration/] on the Flink website.
> According to the [contribution guidelines|https://flink.apache.org/how-to-contribute/contribute-documentation/#chinese-documentation-translation], we should add an identical markdown file in {{content.zh/}} and translate it to Chinese. Community volunteers are welcome to take this task.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33445) Translate DataSet migration guideline to Chinese

2023-11-02 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu updated FLINK-33445:

Description: 
[FLINK-33041|https://issues.apache.org/jira/browse/FLINK-33041], which adds an introduction on how to migrate from the DataSet API to DataStream, has been merged into the master branch. Here is the [link|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration/] on the Flink website.

According to the [contribution guidelines|https://flink.apache.org/how-to-contribute/contribute-documentation/#chinese-documentation-translation], we should add an identical markdown file in {{content.zh/}} and translate it to Chinese. Community volunteers are welcome to take this task.

 

  was:
FLINK-33041, which adds an introduction on how to migrate from the DataSet API to DataStream, has been merged into the master branch. Here is the [link|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration/] on the Flink website.

According to the [contribution guidelines|https://flink.apache.org/how-to-contribute/contribute-documentation/#chinese-documentation-translation], we should add an identical markdown file in {{content.zh/}} and translate it to Chinese. Community volunteers are welcome to take this task.

 


> Translate DataSet migration guideline to Chinese
> 
>
> Key: FLINK-33445
> URL: https://issues.apache.org/jira/browse/FLINK-33445
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Priority: Major
>  Labels: starter
> Fix For: 1.19.0
>
>
> [FLINK-33041|https://issues.apache.org/jira/browse/FLINK-33041], which adds an introduction on how to migrate from the DataSet API to DataStream, has been merged into the master branch. Here is the [link|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration/] on the Flink website.
> According to the [contribution guidelines|https://flink.apache.org/how-to-contribute/contribute-documentation/#chinese-documentation-translation], we should add an identical markdown file in {{content.zh/}} and translate it to Chinese. Community volunteers are welcome to take this task.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33445) Translate DataSet migration guideline to Chinese

2023-11-02 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17782384#comment-17782384
 ] 

Wencong Liu commented on FLINK-33445:
-

Thanks [~liyubin117] ! Assigned to you. Please go ahead.

> Translate DataSet migration guideline to Chinese
> 
>
> Key: FLINK-33445
> URL: https://issues.apache.org/jira/browse/FLINK-33445
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Assignee: Yubin Li
>Priority: Major
>  Labels: starter
> Fix For: 1.19.0
>
>
> [FLINK-33041|https://issues.apache.org/jira/browse/FLINK-33041], which adds an introduction on how to migrate from the DataSet API to DataStream, has been merged into the master branch. Here is the [link|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration/] on the Flink website.
> According to the [contribution guidelines|https://flink.apache.org/how-to-contribute/contribute-documentation/#chinese-documentation-translation], we should add an identical markdown file in {{content.zh/}} and translate it to Chinese. Community volunteers are welcome to take this task.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33323) HybridShuffleITCase fails with produced an uncaught exception in FatalExitExceptionHandler

2023-11-09 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17784350#comment-17784350
 ] 

Wencong Liu commented on FLINK-33323:
-

Thanks for the reminder, [~mapohl]! Could you show me how to obtain the complete logs from a run of HybridShuffleITCase, like the `mvn-3.zip` file attached to this Jira? I've taken a look at your issue, and the symptoms don't seem to match the ones described in this Jira, so I need additional logs to investigate further. 😄

> HybridShuffleITCase fails with produced an uncaught exception in 
> FatalExitExceptionHandler
> --
>
> Key: FLINK-33323
> URL: https://issues.apache.org/jira/browse/FLINK-33323
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Assignee: Wencong Liu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Attachments: mvn-3.zip
>
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=53853&view=logs&j=a596f69e-60d2-5a4b-7d39-dc69e4cdaed3&t=712ade8c-ca16-5b76-3acd-14df33bc1cb1&l=9166
> fails with
> {noformat}
> 01:15:38,516 [blocking-shuffle-io-thread-4] ERROR 
> org.apache.flink.util.FatalExitExceptionHandler  [] - FATAL: 
> Thread 'blocking-shuffle-io-thread-4' produced an uncaught exception. 
> Stopping the process...
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@4275bb45[Not
>  completed, task = 
> java.util.concurrent.Executors$RunnableAdapter@488dd035[Wrapped task = 
> org.apache.fl
> ink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler$$Lambda$2561/0x000801a2f728@464a3754]]
>  rejected from 
> java.util.concurrent.ScheduledThreadPoolExecutor@22747816[Shutting down, pool 
> size = 10, active threads = 9,
>  queued tasks = 1, completed tasks = 1]
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2065)
>  ~[?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:833) 
> ~[?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:340)
>  ~[?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:562)
>  ~[?:?]
> at 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler.run(DiskIOScheduler.java:151)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.tier.disk.DiskIOScheduler.lambda$triggerScheduling$0(DiskIOScheduler.java:308)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
> at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
>  [?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
>  [?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>  [?:?]
> at java.lang.Thread.run(Thread.java:833) [?:?]
> {noformat}
> also logs are attached



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33502) HybridShuffleITCase caused a fatal error

2023-11-17 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17787224#comment-17787224
 ] 

Wencong Liu commented on FLINK-33502:
-

Thank you for the reminder, [~mapohl]. Do you know of a way to obtain the complete runtime logs of this ITCase? In a local IDE, we can configure _log4j2-test.properties_ to output INFO-level logs directly to the console. From the link on GitHub, I can only see that the process exit code is 239; based on this information alone, I cannot identify the root cause. 🤔

> HybridShuffleITCase caused a fatal error
> 
>
> Key: FLINK-33502
> URL: https://issues.apache.org/jira/browse/FLINK-33502
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
>
> [https://github.com/XComp/flink/actions/runs/6789774296/job/18458197040#step:12:9177]
> {code:java}
> Error: 21:21:35 21:21:35.379 [ERROR] Error occurred in starting fork, check 
> output in log
> 9168Error: 21:21:35 21:21:35.379 [ERROR] Process Exit Code: 239
> 9169Error: 21:21:35 21:21:35.379 [ERROR] Crashed tests:
> 9170Error: 21:21:35 21:21:35.379 [ERROR] 
> org.apache.flink.test.runtime.HybridShuffleITCase
> 9171Error: 21:21:35 21:21:35.379 [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException The forked VM terminated without properly saying goodbye. 
> VM crash or System.exit called?
> 9172Error: 21:21:35 21:21:35.379 [ERROR] Command was /bin/sh -c cd 
> /root/flink/flink-tests && /usr/lib/jvm/jdk-11.0.19+7/bin/java -XX:+UseG1GC 
> -Xms256m -XX:+IgnoreUnrecognizedVMOptions 
> --add-opens=java.base/java.util=ALL-UNNAMED 
> --add-opens=java.base/java.io=ALL-UNNAMED -Xmx1536m -jar 
> /root/flink/flink-tests/target/surefire/surefirebooter10811559899200556131.jar
>  /root/flink/flink-tests/target/surefire 2023-11-07T20-32-50_466-jvmRun4 
> surefire6242806641230738408tmp surefire_1603959900047297795160tmp
> 9173Error: 21:21:35 21:21:35.379 [ERROR] Error occurred in starting fork, 
> check output in log
> 9174Error: 21:21:35 21:21:35.379 [ERROR] Process Exit Code: 239
> 9175Error: 21:21:35 21:21:35.379 [ERROR] Crashed tests:
> 9176Error: 21:21:35 21:21:35.379 [ERROR] 
> org.apache.flink.test.runtime.HybridShuffleITCase
> 9177Error: 21:21:35 21:21:35.379 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:532)
> 9178Error: 21:21:35 21:21:35.379 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:479)
> 9179Error: 21:21:35 21:21:35.379 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:322)
> 9180Error: 21:21:35 21:21:35.379 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:266)
> [...] {code}





[jira] [Updated] (FLINK-33502) HybridShuffleITCase caused a fatal error

2023-11-19 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu updated FLINK-33502:

Attachment: image-2023-11-20-14-37-37-321.png

> HybridShuffleITCase caused a fatal error
> 
>
> Key: FLINK-33502
> URL: https://issues.apache.org/jira/browse/FLINK-33502
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
> Attachments: image-2023-11-20-14-37-37-321.png
>
>
> [https://github.com/XComp/flink/actions/runs/6789774296/job/18458197040#step:12:9177]





[jira] [Commented] (FLINK-33502) HybridShuffleITCase caused a fatal error

2023-11-19 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17787828#comment-17787828
 ] 

Wencong Liu commented on FLINK-33502:
-

Thank you for your detailed reply. I am currently trying to download the build 
artifacts for the corresponding stage. 
However, I noticed that the log collection downloaded using the method shown in 
the figure is different from the logs-ci-test_ci_tests-1699014739.zip that you 
mentioned.
!image-2023-11-20-14-37-37-321.png|width=839,height=434!  
Could you please advise me on how to download 
logs-ci-test_ci_tests-1699014739.zip?

> HybridShuffleITCase caused a fatal error
> 
>
> Key: FLINK-33502
> URL: https://issues.apache.org/jira/browse/FLINK-33502
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
> Attachments: image-2023-11-20-14-37-37-321.png
>
>
> [https://github.com/XComp/flink/actions/runs/6789774296/job/18458197040#step:12:9177]





[jira] [Commented] (FLINK-33502) HybridShuffleITCase caused a fatal error

2023-11-22 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17788998#comment-17788998
 ] 

Wencong Liu commented on FLINK-33502:
-

Thanks, [~mapohl], for your help. The fix should be merged soon.

> HybridShuffleITCase caused a fatal error
> 
>
> Key: FLINK-33502
> URL: https://issues.apache.org/jira/browse/FLINK-33502
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available, test-stability
> Attachments: image-2023-11-20-14-37-37-321.png
>
>
> [https://github.com/XComp/flink/actions/runs/6789774296/job/18458197040#step:12:9177]





[jira] [Comment Edited] (FLINK-33502) HybridShuffleITCase caused a fatal error

2023-11-22 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17788998#comment-17788998
 ] 

Wencong Liu edited comment on FLINK-33502 at 11/23/23 6:49 AM:
---

Thanks, [~mapohl], for your help. The fix will be merged soon.


was (Author: JIRAUSER281639):
Thank [~mapohl]  for your help. The fix should be merged soon.

> HybridShuffleITCase caused a fatal error
> 
>
> Key: FLINK-33502
> URL: https://issues.apache.org/jira/browse/FLINK-33502
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available, test-stability
> Attachments: image-2023-11-20-14-37-37-321.png
>
>
> [https://github.com/XComp/flink/actions/runs/6789774296/job/18458197040#step:12:9177]





[jira] [Updated] (FLINK-33626) Wrong style in flink ui

2023-11-23 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu updated FLINK-33626:

Attachment: image-2023-11-23-16-23-57-678.png

> Wrong style in flink ui
> ---
>
> Key: FLINK-33626
> URL: https://issues.apache.org/jira/browse/FLINK-33626
> Project: Flink
>  Issue Type: Bug
>  Components: Travis
>Affects Versions: 1.19.0
>Reporter: Fang Yong
>Priority: Major
> Attachments: image-2023-11-23-16-06-44-000.png, 
> image-2023-11-23-16-23-57-678.png
>
>
> https://nightlies.apache.org/flink/flink-docs-master/
>  !image-2023-11-23-16-06-44-000.png! 





[jira] [Commented] (FLINK-33626) Wrong style in flink ui

2023-11-23 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17789020#comment-17789020
 ] 

Wencong Liu commented on FLINK-33626:
-

This is a similar issue to [FLINK-33356] (The navigation bar on Flink’s 
official website is messed up).

[~snuyanzin], could you revert the modification to the file {*}book{*}?
!image-2023-11-23-16-23-57-678.png!

> Wrong style in flink ui
> ---
>
> Key: FLINK-33626
> URL: https://issues.apache.org/jira/browse/FLINK-33626
> Project: Flink
>  Issue Type: Bug
>  Components: Travis
>Affects Versions: 1.19.0
>Reporter: Fang Yong
>Priority: Major
> Attachments: image-2023-11-23-16-06-44-000.png, 
> image-2023-11-23-16-23-57-678.png
>
>
> https://nightlies.apache.org/flink/flink-docs-master/
>  !image-2023-11-23-16-06-44-000.png! 





[jira] [Comment Edited] (FLINK-33626) Wrong style in flink ui

2023-11-23 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17789020#comment-17789020
 ] 

Wencong Liu edited comment on FLINK-33626 at 11/23/23 8:27 AM:
---

This is a similar issue to FLINK-33356 (The navigation bar on Flink’s official 
website is messed up).

[~Sergey Nuyanzin], could you revert the modification to the file {*}book{*}?
!image-2023-11-23-16-23-57-678.png!


was (Author: JIRAUSER281639):
This is a similar issue with [FLINK-33356] The navigation bar on Flink’s 
official website is messed up. - ASF JIRA (apache.org)

[~snuyanzin]  could you revert the modification to the file {*}book{*}?
!image-2023-11-23-16-23-57-678.png!

> Wrong style in flink ui
> ---
>
> Key: FLINK-33626
> URL: https://issues.apache.org/jira/browse/FLINK-33626
> Project: Flink
>  Issue Type: Bug
>  Components: Travis
>Affects Versions: 1.19.0
>Reporter: Fang Yong
>Priority: Major
> Attachments: image-2023-11-23-16-06-44-000.png, 
> image-2023-11-23-16-23-57-678.png
>
>
> https://nightlies.apache.org/flink/flink-docs-master/
>  !image-2023-11-23-16-06-44-000.png! 





[jira] [Commented] (FLINK-33009) tools/release/update_japicmp_configuration.sh should only enable binary compatibility checks in the release branch

2024-01-10 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17805352#comment-17805352
 ] 

Wencong Liu commented on FLINK-33009:
-

I've opened a pull request and CI has passed. 😄

> tools/release/update_japicmp_configuration.sh should only enable binary 
> compatibility checks in the release branch
> --
>
> Key: FLINK-33009
> URL: https://issues.apache.org/jira/browse/FLINK-33009
> Project: Flink
>  Issue Type: Bug
>  Components: Release System
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
>
> According to [Flink's API compatibility 
> constraints|https://nightlies.apache.org/flink/flink-docs-master/docs/ops/upgrading/],
>  we only support binary compatibility between patch versions. In 
> [apache-flink:pom.xml:2246|https://github.com/apache/flink/blob/aa8d93ea239f5be79066b7e5caad08d966c86ab2/pom.xml#L2246]
> we have binary compatibility enabled even in {{master}}. This doesn't comply 
> with the rules. We should disable this flag in {{master}}. The 
> {{tools/release/update_japicmp_configuration.sh}} should enable this flag in 
> the release branch as part of the release process.
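The intended behavior above — flip the compatibility flag only on a release branch — can be sketched roughly as follows. Everything here is hypothetical: the property name flink.japicmp.binary.check and the branch check are illustrative stand-ins, not the actual contents of tools/release/update_japicmp_configuration.sh or Flink's pom.xml.

```shell
# Illustrative sketch only: the property name and branch detection are
# hypothetical stand-ins for what the real release script edits in pom.xml.
set -eu

pom=$(mktemp)
cat > "$pom" <<'EOF'
<properties>
  <flink.japicmp.binary.check>false</flink.japicmp.binary.check>
</properties>
EOF

# In the real script the branch would come from git; hard-coded for the demo.
branch="release-1.19"

case "$branch" in
  release-*)
    # Release branch: enable binary compatibility checks.
    sed 's|<flink.japicmp.binary.check>false|<flink.japicmp.binary.check>true|' \
      "$pom" > "$pom.tmp" && mv "$pom.tmp" "$pom"
    ;;
  *)
    : # master: leave the flag disabled
    ;;
esac

grep '<flink.japicmp.binary.check>' "$pom"
```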





[jira] [Commented] (FLINK-34237) MongoDB connector compile failed with Flink 1.19-SNAPSHOT

2024-01-25 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17811101#comment-17811101
 ] 

Wencong Liu commented on FLINK-34237:
-

Thanks for the reminder. I'll fix it as soon as possible.

> MongoDB connector compile failed with Flink 1.19-SNAPSHOT
> -
>
> Key: FLINK-34237
> URL: https://issues.apache.org/jira/browse/FLINK-34237
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core, Connectors / MongoDB
>Reporter: Leonard Xu
>Assignee: Wencong Liu
>Priority: Blocker
> Fix For: 1.19.0
>
>
> {code:java}
> Error:  Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.8.0:compile 
> (default-compile) on project flink-connector-mongodb: Compilation failure
> 134Error:  
> /home/runner/work/flink-connector-mongodb/flink-connector-mongodb/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/MongoSourceReaderContext.java:[35,8]
>  org.apache.flink.connector.mongodb.source.reader.MongoSourceReaderContext is 
> not abstract and does not override abstract method getTaskInfo() in 
> org.apache.flink.api.connector.source.SourceReaderContext
> 135{code}
> [https://github.com/apache/flink-connector-mongodb/actions/runs/7657281844/job/20867604084]
> This is related to FLINK-33905
> One point: As 
> [FLIP-382|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs]
>  is accepted, all connectors that implement SourceReaderContext (e.g. 
> MongoSourceReaderContext) should implement the newly introduced method 
> `getTaskInfo()` if they want to compile/work with Flink 1.19.
> Another point: FLIP-382 didn't address connector backward compatibility 
> well; maybe we need to rethink that section. As I have only had a rough look 
> at the FLIP, maybe [~xtsong] and [~Wencong Liu] could comment under this 
> issue.
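The compile failure quoted above comes from a newly introduced abstract method. A minimal sketch of the required change, using stand-in interfaces rather than Flink's real SourceReaderContext/TaskInfo types, is shown below; the delegation pattern (mirroring how a connector's reader context wraps the framework-provided one) is an assumption for illustration only.

```java
// Stand-in interfaces only: NOT Flink's real API. They mirror the compile
// error: a wrapper context stops compiling until it implements the newly
// introduced abstract method.
interface TaskInfo {
    int getIndexOfThisSubtask();
}

interface SourceReaderContext {
    // Newly introduced abstract method (in the spirit of FLIP-382).
    TaskInfo getTaskInfo();
}

// A delegating wrapper, analogous to MongoSourceReaderContext wrapping the
// framework-provided context: it must now forward the new method.
class DelegatingReaderContext implements SourceReaderContext {
    private final SourceReaderContext delegate;

    DelegatingReaderContext(SourceReaderContext delegate) {
        this.delegate = delegate;
    }

    @Override
    public TaskInfo getTaskInfo() {
        return delegate.getTaskInfo(); // simply forward to the wrapped context
    }
}

public class Main {
    public static void main(String[] args) {
        // Lambda-backed stand-in context reporting subtask index 3.
        SourceReaderContext inner = () -> () -> 3;
        SourceReaderContext wrapped = new DelegatingReaderContext(inner);
        System.out.println(wrapped.getTaskInfo().getIndexOfThisSubtask()); // prints 3
    }
}
```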





[jira] [Commented] (FLINK-34246) Allow only archive failed job to history server

2024-01-26 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17811468#comment-17811468
 ] 

Wencong Liu commented on FLINK-34246:
-

Thanks [~qingwei91], for suggesting this. Are you suggesting that we should 
offer an option that allows the HistoryServer to archive only the failed batch 
jobs? This requirement seems quite specific. For instance, we would also need 
to consider archiving the logs of failed streaming jobs.

> Allow only archive failed job to history server
> ---
>
> Key: FLINK-34246
> URL: https://issues.apache.org/jira/browse/FLINK-34246
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission
>Reporter: Lim Qing Wei
>Priority: Minor
>
> Hi, I wonder if we can support archiving only failed jobs to the History 
> Server.
> The History Server is a great tool that allows us to check on previous jobs. 
> We are using Flink batch jobs that can run many times throughout the week, 
> and we only need to check a job on the History Server when it has failed.
> It would be more efficient if we could choose to store only a subset of the 
> data.
>  





[jira] [Reopened] (FLINK-32978) Deprecate RichFunction#open(Configuration parameters)

2024-01-30 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu reopened FLINK-32978:
-

> Deprecate RichFunction#open(Configuration parameters)
> -
>
> Key: FLINK-32978
> URL: https://issues.apache.org/jira/browse/FLINK-32978
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / Core
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> The 
> [FLIP-344|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263425231]
>  has decided that the parameter in RichFunction#open will be removed in the 
> next major version. We should deprecate it now and remove it in Flink 2.0. 
> The removal will be tracked in 
> [FLINK-6912|https://issues.apache.org/jira/browse/FLINK-6912].





[jira] [Closed] (FLINK-32978) Deprecate RichFunction#open(Configuration parameters)

2024-01-30 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu closed FLINK-32978.
---
Release Note: 
The RichFunction#open(Configuration parameters) method has been deprecated and 
will be removed in future versions. Users are encouraged to migrate to the new 
RichFunction#open(OpenContext openContext) method, which provides a more 
comprehensive context for initialization.

Here are the key changes and recommendations for migration:

The open(Configuration parameters) method is now marked as deprecated.
A new method open(OpenContext openContext) has been added as a default method 
to the RichFunction interface.
Users should implement the new open(OpenContext openContext) method for 
function initialization tasks. The new method will be called automatically 
before the execution of any processing methods (map, join, etc.).
If the new open(OpenContext openContext) method is not implemented, Flink will 
fall back to invoking the deprecated open(Configuration parameters) method.
  Resolution: Fixed
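The fallback behavior described in this release note can be illustrated with stand-in classes. These are NOT Flink's real RichFunction/OpenContext/Configuration (which live in flink-core); they are a minimal sketch of how a default new-style open() that delegates to the deprecated one keeps legacy implementations working.

```java
// Stand-in types only: NOT Flink's real API. They mirror the deprecation
// pattern described in the release note.
interface OpenContext {}

class Configuration {}

abstract class RichFunctionSketch {
    /** Deprecated variant, kept for backward compatibility. */
    @Deprecated
    public void open(Configuration parameters) throws Exception {}

    /**
     * New variant. The default implementation falls back to the deprecated
     * method, so legacy functions keep working unchanged.
     */
    public void open(OpenContext openContext) throws Exception {
        open(new Configuration());
    }
}

// A legacy user function that only overrides the deprecated method.
class LegacyFn extends RichFunctionSketch {
    boolean initialized = false;

    @Override
    public void open(Configuration parameters) {
        initialized = true;
    }
}

public class Main {
    public static void main(String[] args) throws Exception {
        LegacyFn fn = new LegacyFn();
        fn.open(new OpenContext() {}); // runtime always calls the new entry point
        System.out.println(fn.initialized); // fallback reached the legacy open(): true
    }
}
```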

> Deprecate RichFunction#open(Configuration parameters)
> -
>
> Key: FLINK-32978
> URL: https://issues.apache.org/jira/browse/FLINK-32978
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / Core
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> The 
> [FLIP-344|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263425231]
>  has decided that the parameter in RichFunction#open will be removed in the 
> next major version. We should deprecate it now and remove it in Flink 2.0. 
> The removal will be tracked in 
> [FLINK-6912|https://issues.apache.org/jira/browse/FLINK-6912].





[jira] [Commented] (FLINK-32978) Deprecate RichFunction#open(Configuration parameters)

2024-01-30 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812213#comment-17812213
 ] 

Wencong Liu commented on FLINK-32978:
-

[~martijnvisser] Thanks for the reminder. I've added the release note 
information.

> Deprecate RichFunction#open(Configuration parameters)
> -
>
> Key: FLINK-32978
> URL: https://issues.apache.org/jira/browse/FLINK-32978
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / Core
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> The 
> [FLIP-344|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263425231]
>  has decided that the parameter in RichFunction#open will be removed in the 
> next major version. We should deprecate it now and remove it in Flink 2.0. 
> The removal will be tracked in 
> [FLINK-6912|https://issues.apache.org/jira/browse/FLINK-6912].





[jira] [Commented] (FLINK-34132) Batch WordCount job fails when run with AdaptiveBatch scheduler

2024-02-01 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17813316#comment-17813316
 ] 

Wencong Liu commented on FLINK-34132:
-

Thanks for the reminder, [~zhuzh]. I will address these issues when I have 
some free time. 😄

> Batch WordCount job fails when run with AdaptiveBatch scheduler
> ---
>
> Key: FLINK-34132
> URL: https://issues.apache.org/jira/browse/FLINK-34132
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.17.1, 1.18.1
>Reporter: Prabhu Joseph
>Assignee: Junrui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Batch WordCount job fails when run with AdaptiveBatch scheduler.
> *Repro Steps*
> {code:java}
> flink-yarn-session -Djobmanager.scheduler=adaptive -d
>  flink run -d /usr/lib/flink/examples/batch/WordCount.jar --input 
> s3://prabhuflinks3/INPUT --output s3://prabhuflinks3/OUT
> {code}
> *Error logs*
> {code:java}
>  The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: 
> org.apache.flink.runtime.client.JobInitializationException: Could not start 
> the JobMaster.
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
>   at 
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:105)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:851)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:245)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1095)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$mainInternal$9(CliFrontend.java:1189)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
>   at 
> org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at 
> org.apache.flink.client.cli.CliFrontend.mainInternal(CliFrontend.java:1189)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1157)
> Caused by: java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> org.apache.flink.runtime.client.JobInitializationException: Could not start 
> the JobMaster.
>   at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:321)
>   at 
> org.apache.flink.api.java.ExecutionEnvironment.executeAsync(ExecutionEnvironment.java:1067)
>   at 
> org.apache.flink.client.program.ContextEnvironment.executeAsync(ContextEnvironment.java:144)
>   at 
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:73)
>   at 
> org.apache.flink.examples.java.wordcount.WordCount.main(WordCount.java:106)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355)
>   ... 12 more
> Caused by: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: 
> org.apache.flink.runtime.client.JobInitializationException: Could not start 
> the JobMaster.
>   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>   at 
> org.apache.flink.api.java.ExecutionEnvironment.executeAsync(ExecutionEnvironment.java:1062)
>   ... 20 more
> Caused by: java.lang.RuntimeException: 
> org.apache.flink.runtime.client.JobInitializationException: Could not start 
> the JobMaster.
>   at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:321)
>   at 
> org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:75)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:457)
>   at java.util.concurren
> [...] {code}

[jira] [Commented] (FLINK-33652) First Steps documentation is having empty page link

2023-11-27 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17790075#comment-17790075
 ] 

Wencong Liu commented on FLINK-33652:
-

Hello [~pranav.sharma], thanks for the careful investigation. Feel free to open 
a pull request!

> First Steps documentation is having empty page link
> ---
>
> Key: FLINK-33652
> URL: https://issues.apache.org/jira/browse/FLINK-33652
> Project: Flink
>  Issue Type: Bug
> Environment: Web
>Reporter: Pranav Sharma
>Priority: Minor
> Attachments: image-2023-11-26-15-23-02-007.png, 
> image-2023-11-26-15-25-04-708.png
>
>
>  
> Under this page URL 
> [link|https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/try-flink/local_installation/],
>  under "Summary" heading, the "concepts" link is pointing to an empty page 
> [link_on_concepts|https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/concepts/].
>  Upon visiting, the tab heading contains HTML as well. (Attached screenshots)
> It may be pointed to concepts/overview instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33502) HybridShuffleITCase caused a fatal error

2023-12-03 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17792610#comment-17792610
 ] 

Wencong Liu commented on FLINK-33502:
-

Thanks [~JunRuiLi]. I have investigated it and found that the root cause is 
different from this issue, but the exception caught in the outermost layer is 
the same. I'll reopen this issue and fix it as soon as possible.

> HybridShuffleITCase caused a fatal error
> 
>
> Key: FLINK-33502
> URL: https://issues.apache.org/jira/browse/FLINK-33502
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
> Attachments: image-2023-11-20-14-37-37-321.png
>
>
> [https://github.com/XComp/flink/actions/runs/6789774296/job/18458197040#step:12:9177]
> {code:java}
> Error: 21:21:35 21:21:35.379 [ERROR] Error occurred in starting fork, check 
> output in log
> 9168Error: 21:21:35 21:21:35.379 [ERROR] Process Exit Code: 239
> 9169Error: 21:21:35 21:21:35.379 [ERROR] Crashed tests:
> 9170Error: 21:21:35 21:21:35.379 [ERROR] 
> org.apache.flink.test.runtime.HybridShuffleITCase
> 9171Error: 21:21:35 21:21:35.379 [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException The forked VM terminated without properly saying goodbye. 
> VM crash or System.exit called?
> 9172Error: 21:21:35 21:21:35.379 [ERROR] Command was /bin/sh -c cd 
> /root/flink/flink-tests && /usr/lib/jvm/jdk-11.0.19+7/bin/java -XX:+UseG1GC 
> -Xms256m -XX:+IgnoreUnrecognizedVMOptions 
> --add-opens=java.base/java.util=ALL-UNNAMED 
> --add-opens=java.base/java.io=ALL-UNNAMED -Xmx1536m -jar 
> /root/flink/flink-tests/target/surefire/surefirebooter10811559899200556131.jar
>  /root/flink/flink-tests/target/surefire 2023-11-07T20-32-50_466-jvmRun4 
> surefire6242806641230738408tmp surefire_1603959900047297795160tmp
> 9173Error: 21:21:35 21:21:35.379 [ERROR] Error occurred in starting fork, 
> check output in log
> 9174Error: 21:21:35 21:21:35.379 [ERROR] Process Exit Code: 239
> 9175Error: 21:21:35 21:21:35.379 [ERROR] Crashed tests:
> 9176Error: 21:21:35 21:21:35.379 [ERROR] 
> org.apache.flink.test.runtime.HybridShuffleITCase
> 9177Error: 21:21:35 21:21:35.379 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:532)
> 9178Error: 21:21:35 21:21:35.379 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:479)
> 9179Error: 21:21:35 21:21:35.379 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:322)
> 9180Error: 21:21:35 21:21:35.379 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:266)
> [...] {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33502) HybridShuffleITCase caused a fatal error

2023-12-18 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17798413#comment-17798413
 ] 

Wencong Liu commented on FLINK-33502:
-

Sorry for the late reply. I've just identified the issue and proposed a fix; it 
should be stable now. [~mapohl] 

> HybridShuffleITCase caused a fatal error
> 
>
> Key: FLINK-33502
> URL: https://issues.apache.org/jira/browse/FLINK-33502
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
> Attachments: image-2023-11-20-14-37-37-321.png
>
>
> [https://github.com/XComp/flink/actions/runs/6789774296/job/18458197040#step:12:9177]
> {code:java}
> Error: 21:21:35 21:21:35.379 [ERROR] Error occurred in starting fork, check 
> output in log
> 9168Error: 21:21:35 21:21:35.379 [ERROR] Process Exit Code: 239
> 9169Error: 21:21:35 21:21:35.379 [ERROR] Crashed tests:
> 9170Error: 21:21:35 21:21:35.379 [ERROR] 
> org.apache.flink.test.runtime.HybridShuffleITCase
> 9171Error: 21:21:35 21:21:35.379 [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException The forked VM terminated without properly saying goodbye. 
> VM crash or System.exit called?
> 9172Error: 21:21:35 21:21:35.379 [ERROR] Command was /bin/sh -c cd 
> /root/flink/flink-tests && /usr/lib/jvm/jdk-11.0.19+7/bin/java -XX:+UseG1GC 
> -Xms256m -XX:+IgnoreUnrecognizedVMOptions 
> --add-opens=java.base/java.util=ALL-UNNAMED 
> --add-opens=java.base/java.io=ALL-UNNAMED -Xmx1536m -jar 
> /root/flink/flink-tests/target/surefire/surefirebooter10811559899200556131.jar
>  /root/flink/flink-tests/target/surefire 2023-11-07T20-32-50_466-jvmRun4 
> surefire6242806641230738408tmp surefire_1603959900047297795160tmp
> 9173Error: 21:21:35 21:21:35.379 [ERROR] Error occurred in starting fork, 
> check output in log
> 9174Error: 21:21:35 21:21:35.379 [ERROR] Process Exit Code: 239
> 9175Error: 21:21:35 21:21:35.379 [ERROR] Crashed tests:
> 9176Error: 21:21:35 21:21:35.379 [ERROR] 
> org.apache.flink.test.runtime.HybridShuffleITCase
> 9177Error: 21:21:35 21:21:35.379 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:532)
> 9178Error: 21:21:35 21:21:35.379 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:479)
> 9179Error: 21:21:35 21:21:35.379 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:322)
> 9180Error: 21:21:35 21:21:35.379 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:266)
> [...] {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33905) FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs

2023-12-20 Thread Wencong Liu (Jira)
Wencong Liu created FLINK-33905:
---

 Summary: FLIP-382: Unify the Provision of Diverse Metadata for 
Context-like APIs
 Key: FLINK-33905
 URL: https://issues.apache.org/jira/browse/FLINK-33905
 Project: Flink
  Issue Type: Improvement
  Components: API / Core
Affects Versions: 1.19.0
Reporter: Wencong Liu


This ticket is proposed for 
[FLIP-382|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33939) Make husky in runtime-web no longer affect git global hooks

2023-12-25 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17800411#comment-17800411
 ] 

Wencong Liu commented on FLINK-33939:
-

Thanks for raising this issue! 😄 I completely agree with your proposal to make 
front-end code detection an optional command execution in our use of husky with 
runtime-web. By doing this, we can preserve the functionality of any globally 
configured git hooks.

> Make husky in runtime-web no longer affect git global hooks
> ---
>
> Key: FLINK-33939
> URL: https://issues.apache.org/jira/browse/FLINK-33939
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Reporter: Jason TANG
>Priority: Minor
>
> Since runtime-web relies on husky to ensure that front-end code changes are 
> detected before `git commit`, husky modifies the global git hooks 
> (core.hooksPath), so a globally configured core.hooksPath won't take effect. I 
> thought it would be a good idea to make the front-end code detection an 
> optional command execution, which ensures that the globally configured hooks 
> are executed correctly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33949) METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible

2023-12-26 Thread Wencong Liu (Jira)
Wencong Liu created FLINK-33949:
---

 Summary: METHOD_ABSTRACT_NOW_DEFAULT should be both source 
compatible and binary compatible
 Key: FLINK-33949
 URL: https://issues.apache.org/jira/browse/FLINK-33949
 Project: Flink
  Issue Type: Bug
  Components: Test Infrastructure
Affects Versions: 1.19.0
Reporter: Wencong Liu
 Fix For: 1.19.0


Currently I'm trying to refactor some APIs annotated by @Public in [FLIP-382: 
Unify the Provision of Diverse Metadata for Context-like APIs - Apache Flink - 
Apache Software 
Foundation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs].
 When an abstract method is changed into a default method, the japicmp maven 
plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it both 
source incompatible and binary incompatible.

The reason may be that if the abstract method becomes default, the logic in the 
default method will be ignored by the previous implementations.

I created a test case in which a job is compiled with the newly changed default 
method and submitted to the previous version. There is no exception thrown. 
Therefore, METHOD_ABSTRACT_NOW_DEFAULT shouldn't be considered incompatible for 
either source or binary.
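As a minimal illustration of that test case (the interface and class names below are made up for illustration, not the actual FLIP-382 API): a class compiled with an override of a then-abstract method keeps working unchanged after the method gains a default body, because the existing override always takes precedence over the interface default.

```java
// Hypothetical names for illustration; not the real FLIP-382 interfaces.
interface MetadataContext {
    // This used to be: String metadata();  (abstract)
    default String metadata() {
        return "default metadata";
    }
}

class UserContext implements MetadataContext {
    // Pre-existing override, written back when metadata() was still abstract.
    @Override
    public String metadata() {
        return "user metadata";
    }
}

public class AbstractToDefaultDemo {
    public static void main(String[] args) {
        // The override wins; the new default body is never invoked.
        System.out.println(new UserContext().metadata());
    }
}
```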

By the way, currently the master branch checks both source compatibility and 
binary compatibility between minor versions. According to Flink's API 
compatibility constraints, the master branch shouldn't check binary 
compatibility. There is already FLINK-33009 to track this, and we should fix it 
as soon as possible.

 

 

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33949) METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible

2023-12-26 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu updated FLINK-33949:

Description: 
Currently I'm trying to refactor some APIs annotated by @Public in [FLIP-382: 
Unify the Provision of Diverse Metadata for Context-like APIs - Apache Flink - 
Apache Software 
Foundation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs].
 When an abstract method is changed into a default method, the japicmp maven 
plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it both 
source incompatible and binary incompatible.

The reason may be that if the abstract method becomes default, the logic in the 
default method will be ignored by the previous implementations.

I created a test case in which a job is compiled with the newly changed default 
method and submitted to the previous version. There is no exception thrown. 
Therefore, METHOD_ABSTRACT_NOW_DEFAULT shouldn't be considered incompatible for 
either source or binary.

By the way, currently the master branch checks both source compatibility and 
binary compatibility between minor versions. According to Flink's API 
compatibility constraints, the master branch shouldn't check binary 
compatibility. There is already jira FLINK-33009 to track it and we should fix 
it as soon as possible.

 

 

 

  was:
Currently  I'm trying to refactor some APIs annotated by @Public in [FLIP-382: 
Unify the Provision of Diverse Metadata for Context-like APIs - Apache Flink - 
Apache Software 
Foundation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs].
 When an abstract method is changed into a default method, the japicmp maven 
plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it as source 
incompatible and binary incompatible.

The reason maybe that if the abstract method becomes default, the logic in the 
default method will be ignored by the previous implementations.

I create a test case in which a job is compiled with newly changed default 
method and submitted to the previous version. There is no exception thrown. 
Therefore, the METHOD_ABSTRACT_NOW_DEFAULT shouldn't be incompatible both for 
source and binary.

By the way, currently the master branch checks both source compatibility and 
binary compatibility between minor versions. According to Flink's API 
compatibility constraints, the master branch shouldn't check binary 
compatibility. There is already a [Jira|[FLINK-33009] 
tools/release/update_japicmp_configuration.sh should only enable binary 
compatibility checks in the release branch - ASF JIRA (apache.org)] to track it 
and we should fix it as soon as possible.

 

 

 


> METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary 
> compatible
> --
>
> Key: FLINK-33949
> URL: https://issues.apache.org/jira/browse/FLINK-33949
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Priority: Major
> Fix For: 1.19.0
>
>
> Currently  I'm trying to refactor some APIs annotated by @Public in 
> [FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs - 
> Apache Flink - Apache Software 
> Foundation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs].
>  When an abstract method is changed into a default method, the japicmp maven 
> plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it as 
> source incompatible and binary incompatible.
> The reason may be that if the abstract method becomes default, the logic in 
> the default method will be ignored by the previous implementations.
> I create a test case in which a job is compiled with newly changed default 
> method and submitted to the previous version. There is no exception thrown. 
> Therefore, the METHOD_ABSTRACT_NOW_DEFAULT shouldn't be incompatible both for 
> source and binary.
> By the way, currently the master branch checks both source compatibility and 
> binary compatibility between minor versions. According to Flink's API 
> compatibility constraints, the master branch shouldn't check binary 
> compatibility. There is already jira FLINK-33009 to track it and we should 
> fix it as soon as possible.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33949) METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible

2023-12-26 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu updated FLINK-33949:

Description: 
Currently I'm trying to refactor some APIs annotated by @Public in [FLIP-382: 
Unify the Provision of Diverse Metadata for Context-like APIs - Apache Flink - 
Apache Software 
Foundation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs].
 When an abstract method is changed into a default method, the japicmp maven 
plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it both 
source incompatible and binary incompatible.

The reason may be that if the abstract method becomes default, the logic in the 
default method will be ignored by the previous implementations.

I created a test case in which a job is compiled with the newly changed default 
method and submitted to the previous version. There is no exception thrown. 
Therefore, METHOD_ABSTRACT_NOW_DEFAULT shouldn't be considered incompatible for 
source or binary. We could add the following settings to override the default 
values for binary and source compatibility, such as:
{code:java}
<overrideCompatibilityChangeParameters>
    <overrideCompatibilityChangeParameter>
        <compatibilityChange>METHOD_ABSTRACT_NOW_DEFAULT</compatibilityChange>
        <binaryCompatible>true</binaryCompatible>
        <sourceCompatible>true</sourceCompatible>
    </overrideCompatibilityChangeParameter>
</overrideCompatibilityChangeParameters>
 {code}
By the way, currently the master branch checks both source compatibility and 
binary compatibility between minor versions. According to Flink's API 
compatibility constraints, the master branch shouldn't check binary 
compatibility. There is already jira FLINK-33009 to track it and we should fix 
it as soon as possible.

 

 

 

  was:
Currently  I'm trying to refactor some APIs annotated by @Public in [FLIP-382: 
Unify the Provision of Diverse Metadata for Context-like APIs - Apache Flink - 
Apache Software 
Foundation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs].
 When an abstract method is changed into a default method, the japicmp maven 
plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it as source 
incompatible and binary incompatible.

The reason maybe that if the abstract method becomes default, the logic in the 
default method will be ignored by the previous implementations.

I create a test case in which a job is compiled with newly changed default 
method and submitted to the previous version. There is no exception thrown. 
Therefore, the METHOD_ABSTRACT_NOW_DEFAULT shouldn't be incompatible both for 
source and binary.

By the way, currently the master branch checks both source compatibility and 
binary compatibility between minor versions. According to Flink's API 
compatibility constraints, the master branch shouldn't check binary 
compatibility. There is already jira FLINK-33009 to track it and we should fix 
it as soon as possible.

 

 

 


> METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary 
> compatible
> --
>
> Key: FLINK-33949
> URL: https://issues.apache.org/jira/browse/FLINK-33949
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Priority: Major
> Fix For: 1.19.0
>
>
> Currently  I'm trying to refactor some APIs annotated by @Public in 
> [FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs - 
> Apache Flink - Apache Software 
> Foundation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs].
>  When an abstract method is changed into a default method, the japicmp maven 
> plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it as 
> source incompatible and binary incompatible.
> The reason may be that if the abstract method becomes default, the logic in 
> the default method will be ignored by the previous implementations.
> I create a test case in which a job is compiled with newly changed default 
> method and submitted to the previous version. There is no exception thrown. 
> Therefore, the METHOD_ABSTRACT_NOW_DEFAULT shouldn't be incompatible both for 
> source and binary. We could add the following settings to override the 
> default values for binary and source compatibility, such as:
> {code:java}
> <overrideCompatibilityChangeParameters>
>     <overrideCompatibilityChangeParameter>
>         <compatibilityChange>METHOD_ABSTRACT_NOW_DEFAULT</compatibilityChange>
>         <binaryCompatible>true</binaryCompatible>
>         <sourceCompatible>true</sourceCompatible>
>     </overrideCompatibilityChangeParameter>
> </overrideCompatibilityChangeParameters>
>  {code}
> By the way, currently the master branch checks both source compatibility and 
> binary compatibility between minor versions. According to Flink's API 
> compatibility constraints, the master branch shouldn't check binary 
> compatibility. There is already jira FLINK-33009 to track it and we should 
> fix it as soon as possible.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33009) tools/release/update_japicmp_configuration.sh should only enable binary compatibility checks in the release branch

2023-12-26 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17800696#comment-17800696
 ] 

Wencong Liu commented on FLINK-33009:
-

Hi [~mapohl] , I've encountered the same issue once more when I'm making some 
code changes considered binary incompatible by japicmp. I'd like to take this 
ticket and fix it. WDYT?

> tools/release/update_japicmp_configuration.sh should only enable binary 
> compatibility checks in the release branch
> --
>
> Key: FLINK-33009
> URL: https://issues.apache.org/jira/browse/FLINK-33009
> Project: Flink
>  Issue Type: Bug
>  Components: Release System
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>
> According to Flink's API compatibility constraints, we only support binary 
> compatibility between versions. In 
> [apache-flink:pom.xml:2246|https://github.com/apache/flink/blob/aa8d93ea239f5be79066b7e5caad08d966c86ab2/pom.xml#L2246]
>  we have binary compatibility enabled even in {{master}}. This doesn't comply 
> with the rules. We should have this flag disabled in {{master}}. The 
> {{tools/release/update_japicmp_configuration.sh}} should enable this flag in 
> the release branch as part of the release process.
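For context, the japicmp-maven-plugin switch being discussed looks roughly like this inside the plugin's configuration (a sketch only; the parameter names come from japicmp-maven-plugin, and the exact layout in Flink's pom.xml may differ):

```xml
<parameter>
    <!-- Sketch: per Flink's constraints this should be false on master;
         the release script would flip it to true on release branches. -->
    <breakBuildOnBinaryIncompatibleModifications>false</breakBuildOnBinaryIncompatibleModifications>
    <breakBuildOnSourceIncompatibleModifications>true</breakBuildOnSourceIncompatibleModifications>
</parameter>
```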



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-33009) tools/release/update_japicmp_configuration.sh should only enable binary compatibility checks in the release branch

2023-12-26 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17800696#comment-17800696
 ] 

Wencong Liu edited comment on FLINK-33009 at 12/27/23 6:42 AM:
---

Hi [~mapohl] , I've encountered the same issue once more in [FLINK-33949] 
METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary 
compatible - ASF JIRA (apache.org)when I'm making some code changes considered 
binary incompatible by japicmp. I'd like to take this ticket and fix it. WDYT?


was (Author: JIRAUSER281639):
Hi [~mapohl] , I've encountered the same issue once more when I'm making some 
code changes considered binary incompatible by japicmp. I'd like to take this 
ticket and fix it. WDYT?

> tools/release/update_japicmp_configuration.sh should only enable binary 
> compatibility checks in the release branch
> --
>
> Key: FLINK-33009
> URL: https://issues.apache.org/jira/browse/FLINK-33009
> Project: Flink
>  Issue Type: Bug
>  Components: Release System
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>
> According to Flink's API compatibility constraints, we only support binary 
> compatibility between versions. In 
> [apache-flink:pom.xml:2246|https://github.com/apache/flink/blob/aa8d93ea239f5be79066b7e5caad08d966c86ab2/pom.xml#L2246]
>  we have binary compatibility enabled even in {{master}}. This doesn't comply 
> with the rules. We should have this flag disabled in {{master}}. The 
> {{tools/release/update_japicmp_configuration.sh}} should enable this flag in 
> the release branch as part of the release process.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-33009) tools/release/update_japicmp_configuration.sh should only enable binary compatibility checks in the release branch

2023-12-26 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17800696#comment-17800696
 ] 

Wencong Liu edited comment on FLINK-33009 at 12/27/23 6:43 AM:
---

Hi [~mapohl] , I've encountered the same issue once more in FLINK-33949 when 
I'm making some code changes considered binary incompatible by japicmp. I'd 
like to take this ticket and fix it. WDYT?


was (Author: JIRAUSER281639):
Hi [~mapohl] , I've encountered the same issue once more in [FLINK-33949] 
METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary 
compatible - ASF JIRA (apache.org)when I'm making some code changes considered 
binary incompatible by japicmp. I'd like to take this ticket and fix it. WDYT?

> tools/release/update_japicmp_configuration.sh should only enable binary 
> compatibility checks in the release branch
> --
>
> Key: FLINK-33009
> URL: https://issues.apache.org/jira/browse/FLINK-33009
> Project: Flink
>  Issue Type: Bug
>  Components: Release System
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>
> According to Flink's API compatibility constraints, we only support binary 
> compatibility between versions. In 
> [apache-flink:pom.xml:2246|https://github.com/apache/flink/blob/aa8d93ea239f5be79066b7e5caad08d966c86ab2/pom.xml#L2246]
>  we have binary compatibility enabled even in {{master}}. This doesn't comply 
> with the rules. We should have this flag disabled in {{master}}. The 
> {{tools/release/update_japicmp_configuration.sh}} should enable this flag in 
> the release branch as part of the release process.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33949) METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible

2023-12-27 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17800787#comment-17800787
 ] 

Wencong Liu commented on FLINK-33949:
-

Thanks [~martijnvisser] for your comments. The implementation classes of the 
@Public API have already overridden the abstract methods. After an abstract 
method becomes default, these implementation classes will not change their 
behavior. WDYT? 😄

> METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary 
> compatible
> --
>
> Key: FLINK-33949
> URL: https://issues.apache.org/jira/browse/FLINK-33949
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Priority: Major
> Fix For: 1.19.0
>
>
> Currently  I'm trying to refactor some APIs annotated by @Public in 
> [FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs - 
> Apache Flink - Apache Software 
> Foundation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs].
>  When an abstract method is changed into a default method, the japicmp maven 
> plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it as 
> source incompatible and binary incompatible.
> The reason may be that if the abstract method becomes default, the logic in 
> the default method will be ignored by the previous implementations.
> I create a test case in which a job is compiled with newly changed default 
> method and submitted to the previous version. There is no exception thrown. 
> Therefore, the METHOD_ABSTRACT_NOW_DEFAULT shouldn't be incompatible both for 
> source and binary. We could add the following settings to override the 
> default values for binary and source compatibility, such as:
> {code:java}
> <overrideCompatibilityChangeParameters>
>     <overrideCompatibilityChangeParameter>
>         <compatibilityChange>METHOD_ABSTRACT_NOW_DEFAULT</compatibilityChange>
>         <binaryCompatible>true</binaryCompatible>
>         <sourceCompatible>true</sourceCompatible>
>     </overrideCompatibilityChangeParameter>
> </overrideCompatibilityChangeParameters>
>  {code}
> By the way, currently the master branch checks both source compatibility and 
> binary compatibility between minor versions. According to Flink's API 
> compatibility constraints, the master branch shouldn't check binary 
> compatibility. There is already jira FLINK-33009 to track it and we should 
> fix it as soon as possible.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33949) METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible

2023-12-27 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17800802#comment-17800802
 ] 

Wencong Liu commented on FLINK-33949:
-

For users who have built implementations themselves, no code changes are needed 
when they upgrade to a new version with abstract-to-default changes. This change 
ensures source compatibility. [~martijnvisser] 

> METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary 
> compatible
> --
>
> Key: FLINK-33949
> URL: https://issues.apache.org/jira/browse/FLINK-33949
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Priority: Major
> Fix For: 1.19.0
>
>
> Currently  I'm trying to refactor some APIs annotated by @Public in 
> [FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs - 
> Apache Flink - Apache Software 
> Foundation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs].
>  When an abstract method is changed into a default method, the japicmp maven 
> plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it as 
> source incompatible and binary incompatible.
> The reason may be that if the abstract method becomes default, the logic in 
> the default method will be ignored by the previous implementations.
> I create a test case in which a job is compiled with newly changed default 
> method and submitted to the previous version. There is no exception thrown. 
> Therefore, the METHOD_ABSTRACT_NOW_DEFAULT shouldn't be incompatible both for 
> source and binary. We could add the following settings to override the 
> default values for binary and source compatibility, such as:
> {code:java}
> <overrideCompatibilityChangeParameters>
>     <overrideCompatibilityChangeParameter>
>         <compatibilityChange>METHOD_ABSTRACT_NOW_DEFAULT</compatibilityChange>
>         <binaryCompatible>true</binaryCompatible>
>         <sourceCompatible>true</sourceCompatible>
>     </overrideCompatibilityChangeParameter>
> </overrideCompatibilityChangeParameters>
>  {code}
> By the way, currently the master branch checks both source compatibility and 
> binary compatibility between minor versions. According to Flink's API 
> compatibility constraints, the master branch shouldn't check binary 
> compatibility. There is already jira FLINK-33009 to track it and we should 
> fix it as soon as possible.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33949) METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible

2023-12-27 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17800913#comment-17800913
 ] 

Wencong Liu commented on FLINK-33949:
-

Suppose we have two completely independent interfaces, I and J, both declaring 
a default method M with the same signature. Now, if there is a class T that 
implements both interfaces I and J but *does not override* the conflicting 
method M, the compiler would not know which interface's default method 
implementation to use, as they both have equal priority. If the code containing 
class T tries to invoke this method at runtime, the JVM would throw an 
{{IncompatibleClassChangeError}} because it is faced with an impossible 
decision: it does not know which interface’s default implementation to call.

However, if M is abstract in I or J, the implementation class T *must* provide 
an explicit implementation of the method. So no matter how interfaces I or J 
change (as long as the signature of their method M does not change), it will 
not affect the behavior of the implementation class T or cause an 
{{{}IncompatibleClassChangeError{}}}. Class T will continue to use its own 
implementation of method M, disregarding any default implementations from the 
two interfaces.
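The conflict described above can be sketched directly (the names I, J, T, and M follow the comment):

```java
// Two unrelated interfaces declaring a default method with the same signature.
interface I {
    default String m() {
        return "from I";
    }
}

interface J {
    default String m() {
        return "from J";
    }
}

// T must override m(): without this override, javac rejects the class with
// "class T inherits unrelated defaults for m() from types I and J".
// (The IncompatibleClassChangeError arises only when I or J gains its default
// after T was already compiled against the abstract signatures.)
class T implements I, J {
    @Override
    public String m() {
        return "from T";
    }
}

public class DiamondDefaultDemo {
    public static void main(String[] args) {
        // T's own implementation wins over both interface defaults.
        System.out.println(new T().m());
    }
}
```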

 

I have created a test case in which StreamingRuntimeContext is extended with a 
method returning a TestObject:
{code:java}
public class TestObject implements TestInterface1, TestInterface2 {
    @Override
    public String getResult() {
        return "777";
    }
}

public interface TestInterface1 {
    String getResult();
}

public interface TestInterface2 {
    default String getResult() {
        return "666";
    }
}{code}
The job code is as follows. The job is compiled against the modified 
StreamingRuntimeContext in Flink.
{code:java}
public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment executionEnvironment =
            StreamExecutionEnvironment.getExecutionEnvironment();
    DataStreamSource<Integer> source =
            executionEnvironment.fromData(3, 2, 1, 4, 5, 6, 7, 8);
    SingleOutputStreamOperator<String> result =
            source.map(new RichMapFunction<Integer, String>() {
                @Override
                public String map(Integer integer) {
                    StreamingRuntimeContext runtimeContext =
                            (StreamingRuntimeContext) getRuntimeContext();
                    return runtimeContext.getTestObject().getResult();
                }
            });
    CloseableIterator<String> jobResult = result.executeAndCollect();
    while (jobResult.hasNext()) {
        System.out.println(jobResult.next());
    }
} {code}
When I changed the abstract method getResult into a default method in 
TestInterface1 and recompiled Flink, the job was still able to finish without 
any code changes or exceptions.

Therefore, I think the METHOD_ABSTRACT_NOW_DEFAULT doesn't break source 
compatibility. WDYT?

 

> METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary 
> compatible
> --
>
> Key: FLINK-33949
> URL: https://issues.apache.org/jira/browse/FLINK-33949
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Priority: Major
> Fix For: 1.19.0
>
>
> Currently I'm trying to refactor some APIs annotated by @Public in 
> [FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs - 
> Apache Flink - Apache Software 
> Foundation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs].
>  When an abstract method is changed into a default method, the japicmp maven 
> plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it source 
> incompatible and binary incompatible.
> The reason may be that if the abstract method becomes default, the logic in 
> the default method will be ignored by the previous implementations.
> I created a test case in which a job is compiled with the newly changed default 
> method and submitted to the previous version. There is no exception thrown. 
> Therefore, METHOD_ABSTRACT_NOW_DEFAULT shouldn't be considered incompatible for 
> either source or binary. We could add the following settings to override the 
> default values for binary and source compatibility, such as:
> {code:xml}
> <overrideCompatibilityChangeParameters>
>    <overrideCompatibilityChangeParameter>
>       <compatibilityChange>METHOD_ABSTRACT_NOW_DEFAULT</compatibilityChange>
>       <binaryCompatible>true</binaryCompatible>
>       <sourceCompatible>true</sourceCompatible>
>    </overrideCompatibilityChangeParameter>
> </overrideCompatibilityChangeParameters>{code}
> By the way, currently the master branch checks both source compatibility and 
> binary compatibility between minor versions. According to Flink's API 
> compatibility constraints, the master branch shouldn't check binary 
> compatibility. There is already jira FLINK-33009 to track it and we should 
> fix it as soon as possible.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-33949) METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible

2023-12-27 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17800913#comment-17800913
 ] 

Wencong Liu edited comment on FLINK-33949 at 12/28/23 3:45 AM:
---

Suppose we have two completely independent interfaces, I and J, both declaring 
a default method M with the same signature. Now, if there is a class T that 
implements both interfaces I and J but *does not override* the conflicting 
method M, the compiler would not know which interface's default method 
implementation to use, as they both have equal priority. If the code containing 
class T tries to invoke this method at runtime, the JVM would throw an 
{{IncompatibleClassChangeError}} because it is faced with an impossible 
decision: it does not know which interface’s default implementation to call.

However, if M is abstract in I or J, the implementation class T *must* provide 
an explicit implementation of the method. So no matter how interfaces I or J 
change (as long as the signature of their method M does not change), it will 
not affect the behavior of the implementation class T or cause an 
{{{}IncompatibleClassChangeError{}}}. Class T will continue to use its own 
method M implementation, disregarding any default implementations from the two 
interfaces.

 

I have created a test case in which StreamingRuntimeContext is extended with a 
method returning a TestObject:
{code:java}
public class TestObject implements TestInterface1, TestInterface2 {
    @Override
    public String getResult() {
        return "777";
    }
}

public interface TestInterface1 {
    String getResult();
}

public interface TestInterface2 {
    default String getResult() {
        return "666";
    }
}{code}
The job code is as follows. The job is compiled against the modified 
StreamingRuntimeContext in Flink.
{code:java}
public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment executionEnvironment =
            StreamExecutionEnvironment.getExecutionEnvironment();
    DataStreamSource<Integer> source =
            executionEnvironment.fromData(3, 2, 1, 4, 5, 6, 7, 8);
    SingleOutputStreamOperator<String> result =
            source.map(new RichMapFunction<Integer, String>() {
                @Override
                public String map(Integer integer) {
                    StreamingRuntimeContext runtimeContext =
                            (StreamingRuntimeContext) getRuntimeContext();
                    return runtimeContext.getTestObject().getResult();
                }
            });
    CloseableIterator<String> jobResult = result.executeAndCollect();
    while (jobResult.hasNext()) {
        System.out.println(jobResult.next());
    }
} {code}
When I changed the abstract method getResult into a default method in 
TestInterface1 and recompiled Flink, the job was still able to finish without 
any code changes or exceptions.

Therefore, I think the METHOD_ABSTRACT_NOW_DEFAULT doesn't break source 
compatibility. WDYT? [~martijnvisser] 

 



[jira] [Commented] (FLINK-33949) METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible

2024-01-02 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17801988#comment-17801988
 ] 

Wencong Liu commented on FLINK-33949:
-

Thanks for the explanation from [~chesnay]. Given that all the actively 
running code might throw related exceptions, it would be unreasonable to 
directly modify the rules of japicmp. If there's a specific interface that 
needs to break this rule, we should simply exclude that interface. This ticket 
can be closed now.

> METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary 
> compatible
> --
>
> Key: FLINK-33949
> URL: https://issues.apache.org/jira/browse/FLINK-33949
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Priority: Major
> Fix For: 1.19.0
>
>
> Currently I'm trying to refactor some APIs annotated by @Public in 
> [FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs - 
> Apache Flink - Apache Software 
> Foundation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs].
>  When an abstract method is changed into a default method, the japicmp maven 
> plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it source 
> incompatible and binary incompatible.
> The reason may be that if the abstract method becomes default, the logic in 
> the default method will be ignored by the previous implementations.
> I created a test case in which a job is compiled with the newly changed default 
> method and submitted to the previous version. There is no exception thrown. 
> Therefore, METHOD_ABSTRACT_NOW_DEFAULT shouldn't be considered incompatible for 
> either source or binary. We could add the following settings to override the 
> default values for binary and source compatibility, such as:
> {code:xml}
> <overrideCompatibilityChangeParameters>
>    <overrideCompatibilityChangeParameter>
>       <compatibilityChange>METHOD_ABSTRACT_NOW_DEFAULT</compatibilityChange>
>       <binaryCompatible>true</binaryCompatible>
>       <sourceCompatible>true</sourceCompatible>
>    </overrideCompatibilityChangeParameter>
> </overrideCompatibilityChangeParameters>{code}
> By the way, currently the master branch checks both source compatibility and 
> binary compatibility between minor versions. According to Flink's API 
> compatibility constraints, the master branch shouldn't check binary 
> compatibility. There is already jira FLINK-33009 to track it and we should 
> fix it as soon as possible.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33949) METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary compatible

2024-01-02 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu closed FLINK-33949.
---
Resolution: Not A Problem

> METHOD_ABSTRACT_NOW_DEFAULT should be both source compatible and binary 
> compatible
> --
>
> Key: FLINK-33949
> URL: https://issues.apache.org/jira/browse/FLINK-33949
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Priority: Major
> Fix For: 1.19.0
>
>
> Currently I'm trying to refactor some APIs annotated by @Public in 
> [FLIP-382: Unify the Provision of Diverse Metadata for Context-like APIs - 
> Apache Flink - Apache Software 
> Foundation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-382%3A+Unify+the+Provision+of+Diverse+Metadata+for+Context-like+APIs].
>  When an abstract method is changed into a default method, the japicmp maven 
> plugin names this change METHOD_ABSTRACT_NOW_DEFAULT and considers it source 
> incompatible and binary incompatible.
> The reason may be that if the abstract method becomes default, the logic in 
> the default method will be ignored by the previous implementations.
> I created a test case in which a job is compiled with the newly changed default 
> method and submitted to the previous version. There is no exception thrown. 
> Therefore, METHOD_ABSTRACT_NOW_DEFAULT shouldn't be considered incompatible for 
> either source or binary. We could add the following settings to override the 
> default values for binary and source compatibility, such as:
> {code:xml}
> <overrideCompatibilityChangeParameters>
>    <overrideCompatibilityChangeParameter>
>       <compatibilityChange>METHOD_ABSTRACT_NOW_DEFAULT</compatibilityChange>
>       <binaryCompatible>true</binaryCompatible>
>       <sourceCompatible>true</sourceCompatible>
>    </overrideCompatibilityChangeParameter>
> </overrideCompatibilityChangeParameters>{code}
> By the way, currently the master branch checks both source compatibility and 
> binary compatibility between minor versions. According to Flink's API 
> compatibility constraints, the master branch shouldn't check binary 
> compatibility. There is already jira FLINK-33009 to track it and we should 
> fix it as soon as possible.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32978) Deprecate RichFunction#open(Configuration parameters)

2024-01-09 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17804660#comment-17804660
 ] 

Wencong Liu commented on FLINK-32978:
-

Thanks for proposing this issue 😄. I will investigate all implementation 
classes annotated by @Public or @PublicEvolving and open a pull request to 
revert the error changes.

> Deprecate RichFunction#open(Configuration parameters)
> -
>
> Key: FLINK-32978
> URL: https://issues.apache.org/jira/browse/FLINK-32978
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / Core
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> The 
> [FLIP-344|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263425231]
>  has decided that the parameter in RichFunction#open will be removed in the 
> next major version. We should deprecate it now and remove it in Flink 2.0. 
> The removal will be tracked in 
> [FLINK-6912|https://issues.apache.org/jira/browse/FLINK-6912].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-32978) Deprecate RichFunction#open(Configuration parameters)

2024-01-09 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17804660#comment-17804660
 ] 

Wencong Liu edited comment on FLINK-32978 at 1/9/24 9:45 AM:
-

Thanks for proposing this issue 😄. I will investigate all modified 
implementation classes annotated by @Public or @PublicEvolving and open a pull 
request to revert the error changes.


was (Author: JIRAUSER281639):
Thanks for proposing this issue 😄. I will investigate all implementation 
classes annotated by @Public or @PublicEvolving and open a pull request to 
revert the error changes.

> Deprecate RichFunction#open(Configuration parameters)
> -
>
> Key: FLINK-32978
> URL: https://issues.apache.org/jira/browse/FLINK-32978
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / Core
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> The 
> [FLIP-344|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263425231]
>  has decided that the parameter in RichFunction#open will be removed in the 
> next major version. We should deprecate it now and remove it in Flink 2.0. 
> The removal will be tracked in 
> [FLINK-6912|https://issues.apache.org/jira/browse/FLINK-6912].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34718) KeyedPartitionWindowedStream and NonPartitionWindowedStream IllegalStateException in AZP

2024-03-18 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17828011#comment-17828011
 ] 

Wencong Liu commented on FLINK-34718:
-

Sure, I'll take a look now. [~mapohl] 

> KeyedPartitionWindowedStream and NonPartitionWindowedStream 
> IllegalStateException in AZP
> 
>
> Key: FLINK-34718
> URL: https://issues.apache.org/jira/browse/FLINK-34718
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.20.0
>Reporter: Ryan Skraba
>Priority: Critical
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58320&view=logs&j=8fd9202e-fd17-5b26-353c-ac1ff76c8f28&t=ea7cf968-e585-52cb-e0fc-f48de023a7ca&l=9646]
> 18 of the KeyedPartitionWindowedStreamITCase and 
> NonKeyedPartitionWindowedStreamITCase unit tests introduced in FLINK-34543 
> are failing in the adaptive scheduler profile, with errors similar to:
> {code:java}
> Mar 15 01:54:12 Caused by: java.lang.IllegalStateException: The adaptive 
> scheduler supports pipelined data exchanges (violated by MapPartition 
> (org.apache.flink.streaming.runtime.tasks.OneInputStreamTask) -> 
> ddb598ad156ed281023ba4eebbe487e3).
> Mar 15 01:54:12   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:215)
> Mar 15 01:54:12   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.assertPreconditions(AdaptiveScheduler.java:438)
> Mar 15 01:54:12   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.<init>(AdaptiveScheduler.java:356)
> Mar 15 01:54:12   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerFactory.createInstance(AdaptiveSchedulerFactory.java:124)
> Mar 15 01:54:12   at 
> org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:121)
> Mar 15 01:54:12   at 
> org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:384)
> Mar 15 01:54:12   at 
> org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:361)
> Mar 15 01:54:12   at 
> org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:128)
> Mar 15 01:54:12   at 
> org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:100)
> Mar 15 01:54:12   at 
> org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112)
> Mar 15 01:54:12   at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
> Mar 15 01:54:12   ... 4 more
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34718) KeyedPartitionWindowedStream and NonPartitionWindowedStream IllegalStateException in AZP

2024-03-18 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17828019#comment-17828019
 ] 

Wencong Liu commented on FLINK-34718:
-

The newly introduced DataStream operators are designed based on the mechanism 
of FLIP-331, which means that the ResultPartitionType for specific operators in 
a streaming job can be BLOCKING. However, the AdaptiveScheduler mandates that 
the ResultPartitionType for all operators must be PIPELINED; therefore, these 
operators are not suitable for execution under the AdaptiveScheduler. The 
default scheduler for IT tests is the {_}DefaultScheduler{_}, and I'm curious 
as to why it would change to the AdaptiveScheduler. 🤔 [~rskraba] 
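For tests whose jobs rely on BLOCKING exchanges, one possible workaround is to pin the scheduler explicitly so a CI profile override cannot switch them to the adaptive scheduler. A minimal config sketch, assuming the standard {{jobmanager.scheduler}} option:

```yaml
# flink-conf.yaml — pin the scheduler for jobs that use BLOCKING data exchanges.
# jobmanager.scheduler accepts Default or Adaptive.
jobmanager.scheduler: Default
```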

> KeyedPartitionWindowedStream and NonPartitionWindowedStream 
> IllegalStateException in AZP
> 
>
> Key: FLINK-34718
> URL: https://issues.apache.org/jira/browse/FLINK-34718
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.20.0
>Reporter: Ryan Skraba
>Priority: Critical
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58320&view=logs&j=8fd9202e-fd17-5b26-353c-ac1ff76c8f28&t=ea7cf968-e585-52cb-e0fc-f48de023a7ca&l=9646]
> 18 of the KeyedPartitionWindowedStreamITCase and 
> NonKeyedPartitionWindowedStreamITCase unit tests introduced in FLINK-34543 
> are failing in the adaptive scheduler profile, with errors similar to:
> {code:java}
> Mar 15 01:54:12 Caused by: java.lang.IllegalStateException: The adaptive 
> scheduler supports pipelined data exchanges (violated by MapPartition 
> (org.apache.flink.streaming.runtime.tasks.OneInputStreamTask) -> 
> ddb598ad156ed281023ba4eebbe487e3).
> Mar 15 01:54:12   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:215)
> Mar 15 01:54:12   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.assertPreconditions(AdaptiveScheduler.java:438)
> Mar 15 01:54:12   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.<init>(AdaptiveScheduler.java:356)
> Mar 15 01:54:12   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerFactory.createInstance(AdaptiveSchedulerFactory.java:124)
> Mar 15 01:54:12   at 
> org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:121)
> Mar 15 01:54:12   at 
> org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:384)
> Mar 15 01:54:12   at 
> org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:361)
> Mar 15 01:54:12   at 
> org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:128)
> Mar 15 01:54:12   at 
> org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:100)
> Mar 15 01:54:12   at 
> org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112)
> Mar 15 01:54:12   at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
> Mar 15 01:54:12   ... 4 more
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35221) Support SQL 2011 reserved keywords as identifiers in Flink HiveParser

2024-04-23 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu updated FLINK-35221:

Description: 
According to Hive user documentation[1], starting from version 0.13.0, Hive 
prohibits the use of reserved keywords as identifiers. Moreover, versions 2.1.0 
and earlier allow using SQL11 reserved keywords as identifiers by setting 
{{hive.support.sql11.reserved.keywords=false}} in hive-site.xml. This 
compatibility feature facilitates jobs that utilize keywords as identifiers.

HiveParser in Flink, relying on Hive version 2.3.9, lacks the option to treat 
SQL11 reserved keywords as identifiers. This poses a challenge for users 
migrating SQL from Hive 1.x to Flink SQL, as they might encounter scenarios 
where keywords are used as identifiers. Addressing this issue is necessary to 
support such cases.

[1] [LanguageManual DDL - Apache Hive - Apache Software 
Foundation|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL]

  was:
According to Hive user documentation[1], starting from version 0.13.0, Hive 
prohibits the use of reserved keywords as identifiers. Moreover, versions 2.1.0 
and earlier allow using SQL11 reserved keywords as identifiers by setting 
{{hive.support.sql11.reserved.keywords=false}} in hive-site.xml. This 
compatibility feature facilitates jobs that utilize keywords as identifiers.

HiveParser in Flink, relying on Hive version 2.3.9, lacks the option to treat 
SQL11 reserved keywords as identifiers. This poses a challenge for users 
migrating SQL from Hive 1.x to Flink SQL, as they might encounter scenarios 
where keywords are used as identifiers. Addressing this issue is necessary to 
support such cases.


> Support SQL 2011 reserved keywords as identifiers in Flink HiveParser 
> --
>
> Key: FLINK-35221
> URL: https://issues.apache.org/jira/browse/FLINK-35221
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Affects Versions: 1.20.0
>Reporter: Wencong Liu
>Priority: Major
>
> According to Hive user documentation[1], starting from version 0.13.0, Hive 
> prohibits the use of reserved keywords as identifiers. Moreover, versions 
> 2.1.0 and earlier allow using SQL11 reserved keywords as identifiers by 
> setting {{hive.support.sql11.reserved.keywords=false}} in hive-site.xml. This 
> compatibility feature facilitates jobs that utilize keywords as identifiers.
> HiveParser in Flink, relying on Hive version 2.3.9, lacks the option to treat 
> SQL11 reserved keywords as identifiers. This poses a challenge for users 
> migrating SQL from Hive 1.x to Flink SQL, as they might encounter scenarios 
> where keywords are used as identifiers. Addressing this issue is necessary to 
> support such cases.
> [1] [LanguageManual DDL - Apache Hive - Apache Software 
> Foundation|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35221) Support SQL 2011 reserved keywords as identifiers in Flink HiveParser

2024-04-23 Thread Wencong Liu (Jira)
Wencong Liu created FLINK-35221:
---

 Summary: Support SQL 2011 reserved keywords as identifiers in 
Flink HiveParser 
 Key: FLINK-35221
 URL: https://issues.apache.org/jira/browse/FLINK-35221
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Hive
Affects Versions: 1.20.0
Reporter: Wencong Liu


According to Hive user documentation[1], starting from version 0.13.0, Hive 
prohibits the use of reserved keywords as identifiers. Moreover, versions 2.1.0 
and earlier allow using SQL11 reserved keywords as identifiers by setting 
{{hive.support.sql11.reserved.keywords=false}} in hive-site.xml. This 
compatibility feature facilitates jobs that utilize keywords as identifiers.

HiveParser in Flink, relying on Hive version 2.3.9, lacks the option to treat 
SQL11 reserved keywords as identifiers. This poses a challenge for users 
migrating SQL from Hive 1.x to Flink SQL, as they might encounter scenarios 
where keywords are used as identifiers. Addressing this issue is necessary to 
support such cases.
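To illustrate the migration pain (a hypothetical sketch; the table and column names are invented), SQL written for Hive 1.x with {{hive.support.sql11.reserved.keywords=false}} may use reserved words unquoted, which the Hive 2.3.x-based parser rejects unless the identifiers are backquoted:

```sql
-- Accepted by Hive <= 2.1.0 with hive.support.sql11.reserved.keywords=false,
-- but rejected by newer parsers: `date` and `user` are SQL11 reserved words.
CREATE TABLE orders (date STRING, user STRING);

-- Portable form: quote reserved words used as identifiers with backticks.
CREATE TABLE orders (`date` STRING, `user` STRING);
```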



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33009) tools/release/update_japicmp_configuration.sh should only enable binary compatibility checks in the release branch

2023-08-31 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17760830#comment-17760830
 ] 

Wencong Liu commented on FLINK-33009:
-

Thanks [~mapohl] for the summary. 

BTW, I found this issue because I'm trying to add a new default method to an 
existing *@Public* interface. Although this behavior shouldn't introduce binary 
or source incompatibility, it cannot pass the check.
I have reported this in the japicmp community. [New static method in interface 
detected as METHOD_NEW_DEFAULT · Issue #289 · siom79/japicmp 
(github.com)|https://github.com/siom79/japicmp/issues/289].
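The scenario being flagged can be sketched in plain Java (a minimal sketch with invented names; `PublicApi`, `OldImpl`, and `added` are illustrative, not Flink APIs):

```java
// Why adding a default method to a published interface leaves existing
// implementations source and binary compatible.
public class NewDefaultMethodDemo {
    interface PublicApi {
        String existing();

        // Newly added in a later release; old implementers inherit it unchanged.
        default String added() { return "fallback"; }
    }

    // Written against the old interface (without added()); it still compiles,
    // links, and runs against the new one.
    static class OldImpl implements PublicApi {
        @Override
        public String existing() { return "old"; }
    }

    public static void main(String[] args) {
        PublicApi api = new OldImpl();
        System.out.println(api.existing() + "/" + api.added()); // prints "old/fallback"
    }
}
```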

> tools/release/update_japicmp_configuration.sh should only enable binary 
> compatibility checks in the release branch
> --
>
> Key: FLINK-33009
> URL: https://issues.apache.org/jira/browse/FLINK-33009
> Project: Flink
>  Issue Type: Bug
>  Components: Release System
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>
> According to Flink's API compatibility constraints, we only support binary 
> compatibility between versions. In 
> [apache-flink:pom.xml:2246|https://github.com/apache/flink/blob/aa8d93ea239f5be79066b7e5caad08d966c86ab2/pom.xml#L2246]
>  we have binary compatibility enabled even in {{master}}. This doesn't comply 
> with the rules. We should have this flag disabled in {{master}}. The 
> {{tools/release/update_japicmp_configuration.sh}} should enable this flag in 
> the release branch as part of the release process.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-33009) tools/release/update_japicmp_configuration.sh should only enable binary compatibility checks in the release branch

2023-08-31 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17760830#comment-17760830
 ] 

Wencong Liu edited comment on FLINK-33009 at 8/31/23 9:49 AM:
--

Thanks [~mapohl] for the summary. 

BTW, I found this issue because I'm trying to add a new default method to an 
existing *@Public* interface. Although this behavior shouldn't introduce 
source (or binary) incompatibility, it cannot pass the check.
I have reported this in the japicmp community. [New static method in interface 
detected as METHOD_NEW_DEFAULT · Issue #289 · siom79/japicmp 
(github.com)|https://github.com/siom79/japicmp/issues/289].


was (Author: JIRAUSER281639):
Thanks [~mapohl] for the summary. 

BTW, I found this issue because I'm trying to add a new default method to an 
existing *@Public* interface. Although this behavior shouldn't introduce binary 
or source incompatibility, it cannot pass the check.
I have reported this in the japicmp community. [New static method in interface 
detected as METHOD_NEW_DEFAULT · Issue #289 · siom79/japicmp 
(github.com)|https://github.com/siom79/japicmp/issues/289].

> tools/release/update_japicmp_configuration.sh should only enable binary 
> compatibility checks in the release branch
> --
>
> Key: FLINK-33009
> URL: https://issues.apache.org/jira/browse/FLINK-33009
> Project: Flink
>  Issue Type: Bug
>  Components: Release System
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>
> According to Flink's API compatibility constraints, we only support binary 
> compatibility between versions. In 
> [apache-flink:pom.xml:2246|https://github.com/apache/flink/blob/aa8d93ea239f5be79066b7e5caad08d966c86ab2/pom.xml#L2246]
>  we have binary compatibility enabled even in {{master}}. This doesn't comply 
> with the rules. We should have this flag disabled in {{master}}. The 
> {{tools/release/update_japicmp_configuration.sh}} should enable this flag in 
> the release branch as part of the release process.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33041) Add an introduction about how to migrate DataSet API to DataStream

2023-09-05 Thread Wencong Liu (Jira)
Wencong Liu created FLINK-33041:
---

 Summary: Add an introduction about how to migrate DataSet API to 
DataStream
 Key: FLINK-33041
 URL: https://issues.apache.org/jira/browse/FLINK-33041
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.18.0
Reporter: Wencong Liu
 Fix For: 1.18.0


The DataSet API has been formally deprecated and will no longer receive active 
maintenance and support. It will be removed in the Flink 2.0 version. Flink 
users are recommended to migrate from the DataSet API to the DataStream API, 
Table API and SQL for their data processing requirements.

Most of the DataSet operators can be implemented using the DataStream API. 
However, we believe it would be beneficial to have an introductory article on 
the Flink website that guides users in migrating their DataSet jobs to 
DataStream.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33041) Add an introduction about how to migrate DataSet API to DataStream

2023-09-06 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762540#comment-17762540
 ] 

Wencong Liu commented on FLINK-33041:
-

Thanks very much for your reminder, [~echauchot]. I have carefully read through 
this blog and it's really good. However, I noticed that the blog only covers a 
limited number of DataSet operators. It does not include other operators like 
MapPartition or GroupReduce on Grouped DataSet. This pull request has provided 
a more comprehensive article on how to migrate all DataSet operators to 
DataStream. I will add some of the content from your blog to this pull request 
such as the difference about ExecutionEnvironment/Source/Sink between DataSet 
and DataStream API. If you're interested, you can review the pull request and 
give your feedback. 😄

> Add an introduction about how to migrate DataSet API to DataStream
> --
>
> Key: FLINK-33041
> URL: https://issues.apache.org/jira/browse/FLINK-33041
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.18.0
>Reporter: Wencong Liu
>Priority: Major
> Fix For: 1.18.0
>
>
> The DataSet API has been formally deprecated and will no longer receive 
> active maintenance and support. It will be removed in the Flink 2.0 version. 
> Flink users are recommended to migrate from the DataSet API to the DataStream 
> API, Table API and SQL for their data processing requirements.
> Most of the DataSet operators can be implemented using the DataStream API. 
> However, we believe it would be beneficial to have an introductory article on 
> the Flink website that guides users in migrating their DataSet jobs to 
> DataStream.





[jira] [Created] (FLINK-33144) Deprecate Iteration API in DataStream

2023-09-24 Thread Wencong Liu (Jira)
Wencong Liu created FLINK-33144:
---

 Summary: Deprecate Iteration API in DataStream
 Key: FLINK-33144
 URL: https://issues.apache.org/jira/browse/FLINK-33144
 Project: Flink
  Issue Type: Technical Debt
  Components: API / DataStream
Affects Versions: 1.19.0
Reporter: Wencong Liu
 Fix For: 1.19.0


Currently, the Iteration API of DataStream is incomplete. For instance, it 
lacks support for iteration in sync mode and exactly-once semantics. 
Additionally, it does not offer the ability to set iteration termination 
conditions. As a result, it's hard for developers to build an iteration 
pipeline with DataStream in practical applications such as machine learning.

[FLIP-176: Unified Iteration to Support 
Algorithms|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=184615300]
 has introduced a unified iteration library in the Flink ML repository. This 
library addresses all the issues present in the Iteration API of DataStream and 
could provide a solution for all iteration use cases. However, maintaining 
two separate implementations of iteration in both the Flink repository and the 
Flink ML repository would introduce unnecessary complexity and make the 
Iteration API difficult to maintain.

FLIP-357 has decided to deprecate the Iteration API of DataStream and remove it 
completely in the next major version. In the future, if other modules in the 
Flink repository require the use of the Iteration API, we can consider 
extracting all Iteration implementations from the Flink ML repository into an 
independent module.





[jira] [Updated] (FLINK-33144) Deprecate Iteration API in DataStream

2023-09-24 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu updated FLINK-33144:

Description: FLIP-357 has decided to deprecate the Iteration API of 
DataStream and remove it completely in the next major version.  (was: FLIP-357 
has decided to deprecate the Iteration API of DataStream and remove it 
completely in the next major version. In the future, if other modules in the 
Flink repository require the use of the Iteration API, we can consider 
extracting all Iteration implementations from the Flink ML repository into an 
independent module.)

> Deprecate Iteration API in DataStream
> -
>
> Key: FLINK-33144
> URL: https://issues.apache.org/jira/browse/FLINK-33144
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / DataStream
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Priority: Major
> Fix For: 1.19.0
>
>
> FLIP-357 has decided to deprecate the Iteration API of DataStream and remove 
> it completely in the next major version.





[jira] [Updated] (FLINK-33144) Deprecate Iteration API in DataStream

2023-09-24 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu updated FLINK-33144:

Description: FLIP-357 has decided to deprecate the Iteration API of 
DataStream and remove it completely in the next major version. In the future, 
if other modules in the Flink repository require the use of the Iteration API, 
we can consider extracting all Iteration implementations from the Flink ML 
repository into an independent module.  (was: Currently, the Iteration API of 
DataStream is incomplete. For instance, it lacks support for iteration in sync 
mode and exactly once semantics. Additionally, it does not offer the ability to 
set iteration termination conditions. As a result, it's hard for developers to 
build an iteration pipeline by DataStream in the practical applications such as 
machine learning.

[FLIP-176: Unified Iteration to Support 
Algorithms|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=184615300]
 has introduced a unified iteration library in the Flink ML repository. This 
library addresses all the issues present in the Iteration API of DataStream and 
could provide solution for all the iteration use-cases. However, maintaining 
two separate implementations of iteration in both the Flink repository and the 
Flink ML repository would introduce unnecessary complexity and make it 
difficult to maintain the Iteration API.

FLIP-357 has decided to deprecate the Iteration API of DataStream and remove it 
completely in the next major version. In the future, if other modules in the 
Flink repository require the use of the Iteration API, we can consider 
extracting all Iteration implementations from the Flink ML repository into an 
independent module.)

> Deprecate Iteration API in DataStream
> -
>
> Key: FLINK-33144
> URL: https://issues.apache.org/jira/browse/FLINK-33144
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / DataStream
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Priority: Major
> Fix For: 1.19.0
>
>
> FLIP-357 has decided to deprecate the Iteration API of DataStream and remove 
> it completely in the next major version. In the future, if other modules in 
> the Flink repository require the use of the Iteration API, we can consider 
> extracting all Iteration implementations from the Flink ML repository into an 
> independent module.





[jira] [Updated] (FLINK-33144) Deprecate Iteration API in DataStream

2023-09-24 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu updated FLINK-33144:

Description: [FLIP-357: Deprecate Iteration API of DataStream - Apache 
Flink - Apache Software 
Foundation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-357%3A+Deprecate+Iteration+API+of+DataStream]
 has decided to deprecate the Iteration API of DataStream and remove it 
completely in the next major version.  (was: FLIP-357 has decided to deprecate 
the Iteration API of DataStream and remove it completely in the next major 
version.)

> Deprecate Iteration API in DataStream
> -
>
> Key: FLINK-33144
> URL: https://issues.apache.org/jira/browse/FLINK-33144
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / DataStream
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Priority: Major
> Fix For: 1.19.0
>
>
> [FLIP-357: Deprecate Iteration API of DataStream - Apache Flink - Apache 
> Software 
> Foundation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-357%3A+Deprecate+Iteration+API+of+DataStream]
>  has decided to deprecate the Iteration API of DataStream and remove it 
> completely in the next major version.





[jira] [Commented] (FLINK-30257) SqlClientITCase#testMatchRecognize failed

2023-02-15 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689133#comment-17689133
 ] 

Wencong Liu commented on FLINK-30257:
-

cc [~martijnvisser] 

> SqlClientITCase#testMatchRecognize failed
> -
>
> Key: FLINK-30257
> URL: https://issues.apache.org/jira/browse/FLINK-30257
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.17.0
>Reporter: Martijn Visser
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available, test-stability
> Attachments: image-2022-12-29-21-47-31-606.png
>
>
> {code:java}
> Nov 30 21:54:41 [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 224.683 s <<< FAILURE! - in SqlClientITCase
> Nov 30 21:54:41 [ERROR] SqlClientITCase.testMatchRecognize  Time elapsed: 
> 50.164 s  <<< FAILURE!
> Nov 30 21:54:41 org.opentest4j.AssertionFailedError: 
> Nov 30 21:54:41 
> Nov 30 21:54:41 expected: 1
> Nov 30 21:54:41  but was: 0
> Nov 30 21:54:41   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Nov 30 21:54:41   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Nov 30 21:54:41   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Nov 30 21:54:41   at 
> SqlClientITCase.verifyNumberOfResultRecords(SqlClientITCase.java:297)
> Nov 30 21:54:41   at 
> SqlClientITCase.testMatchRecognize(SqlClientITCase.java:255)
> Nov 30 21:54:41   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Nov 30 21:54:41   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Nov 30 21:54:41   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Nov 30 21:54:41   at java.lang.reflect.Method.invoke(Method.java:498)
> Nov 30 21:54:41   at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMetho
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43635&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=160c9ae5-96fd-516e-1c91-deb81f59292a&l=14817





[jira] [Commented] (FLINK-31020) Read-only mode for Rest API

2023-02-15 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689172#comment-17689172
 ] 

Wencong Liu commented on FLINK-31020:
-

Thanks [~omkardeshpande8] for the proposal! Allowing only GET operations is 
tricky: we cannot guarantee that REST APIs other than submit/cancel/modify 
avoid POST/PUT operations used by the web UI. If you consider the REST server 
unsafe, you can disable it instead.

> Read-only mode for Rest API
> ---
>
> Key: FLINK-31020
> URL: https://issues.apache.org/jira/browse/FLINK-31020
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / REST
>Affects Versions: 1.16.1
>Reporter: Omkar Deshpande
>Priority: Major
>
> We run Flink jobs on application cluster on Kubernetes. We don't 
> submit/cancel or modify jobs from rest API or web UI. If there was an option 
> to enable only GET operations on the rest service, it would greatly solve the 
> problem of configuring access control and reduce the attack surface.





[jira] [Commented] (FLINK-31020) Read-only mode for Rest API

2023-02-15 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689173#comment-17689173
 ] 

Wencong Liu commented on FLINK-31020:
-

cc [~xtsong] 

> Read-only mode for Rest API
> ---
>
> Key: FLINK-31020
> URL: https://issues.apache.org/jira/browse/FLINK-31020
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / REST
>Affects Versions: 1.16.1
>Reporter: Omkar Deshpande
>Priority: Major
>
> We run Flink jobs on application cluster on Kubernetes. We don't 
> submit/cancel or modify jobs from rest API or web UI. If there was an option 
> to enable only GET operations on the rest service, it would greatly solve the 
> problem of configuring access control and reduce the attack surface.





[jira] [Commented] (FLINK-31092) Hive ITCases fail with OutOfMemoryError

2023-02-15 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689193#comment-17689193
 ] 

Wencong Liu commented on FLINK-31092:
-

Hi [~mapohl], is the heap dump generated before the OOM error occurs?

> Hive ITCases fail with OutOfMemoryError
> ---
>
> Key: FLINK-31092
> URL: https://issues.apache.org/jira/browse/FLINK-31092
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
> Attachments: VisualVM-FLINK-31092.png
>
>
> We're experiencing a OutOfMemoryError where the heap space reaches the upper 
> limit:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=46161&view=logs&j=fc5181b0-e452-5c8f-68de-1097947f6483&t=995c650b-6573-581c-9ce6-7ad4cc038461&l=23142
> {code}
> Feb 15 05:05:14 [INFO] Running 
> org.apache.flink.table.catalog.hive.HiveCatalogITCase
> Feb 15 05:05:17 [INFO] java.lang.OutOfMemoryError: Java heap space
> Feb 15 05:05:17 [INFO] Dumping heap to java_pid9669.hprof ...
> Feb 15 05:05:28 [INFO] Heap dump file created [1957090051 bytes in 11.718 
> secs]
> java.lang.OutOfMemoryError: Java heap space
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.cancelPingScheduler(ForkedBooter.java:209)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.acknowledgedExit(ForkedBooter.java:419)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:186)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
> {code}





[jira] [Commented] (FLINK-31020) Read-only mode for Rest API

2023-02-15 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689458#comment-17689458
 ] 

Wencong Liu commented on FLINK-31020:
-

Sorry [~chesnay], my statement may not have been very accurate. My opinion is 
consistent with yours: directly disabling mutating APIs may affect the normal 
operation of the web UI.

> Read-only mode for Rest API
> ---
>
> Key: FLINK-31020
> URL: https://issues.apache.org/jira/browse/FLINK-31020
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / REST
>Affects Versions: 1.16.1
>Reporter: Omkar Deshpande
>Priority: Major
>
> We run Flink jobs on application cluster on Kubernetes. We don't 
> submit/cancel or modify jobs from rest API or web UI. If there was an option 
> to enable only GET operations on the rest service, it would greatly solve the 
> problem of configuring access control and reduce the attack surface.





[jira] (FLINK-31020) Read-only mode for Rest API

2023-02-15 Thread Wencong Liu (Jira)


[ https://issues.apache.org/jira/browse/FLINK-31020 ]


Wencong Liu deleted comment on FLINK-31020:
-

was (Author: JIRAUSER281639):
cc [~xtsong] 

> Read-only mode for Rest API
> ---
>
> Key: FLINK-31020
> URL: https://issues.apache.org/jira/browse/FLINK-31020
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / REST
>Affects Versions: 1.16.1
>Reporter: Omkar Deshpande
>Priority: Major
>
> We run Flink jobs on application cluster on Kubernetes. We don't 
> submit/cancel or modify jobs from rest API or web UI. If there was an option 
> to enable only GET operations on the rest service, it would greatly solve the 
> problem of configuring access control and reduce the attack surface.





[jira] [Commented] (FLINK-31176) correct the description of sql gateway configuration

2023-02-21 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691919#comment-17691919
 ] 

Wencong Liu commented on FLINK-31176:
-

Thanks [~wangkang] ! I'll take a look.

> correct the description of sql gateway configuration
> 
>
> Key: FLINK-31176
> URL: https://issues.apache.org/jira/browse/FLINK-31176
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / Gateway
>Affects Versions: 1.16.0
>Reporter: wangkang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2023-02-22-11-17-08-611.png
>
>
> Correct the description of the SQL Gateway configuration:
> 1. The sql-gateway.session.idle-timeout and sql-gateway.session.check-interval 
> descriptions in SqlGatewayServiceConfigOptions
> 2. The GetSessionConfigHeaders and TriggerSessionHeartbeatHeaders class 
> descriptions
> !image-2023-02-22-11-17-08-611.png|width=717,height=289!
> When setting sql-gateway.session.idle-timeout to a negative value, SqlGateway 
> throws a NumberFormatException because the TimeUtils.parseDuration method 
> doesn't support negative values, so we should remove the 'or negative 
> value' description.





[jira] [Commented] (FLINK-31208) KafkaSourceReader overrides meaninglessly a method(pauseOrResumeSplits)

2023-02-27 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17694026#comment-17694026
 ] 

Wencong Liu commented on FLINK-31208:
-

It looks like some redundant code. cc [~renqs] WDYT?

> KafkaSourceReader overrides meaninglessly a method(pauseOrResumeSplits)
> ---
>
> Key: FLINK-31208
> URL: https://issues.apache.org/jira/browse/FLINK-31208
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Hongshun Wang
>Priority: Not a Priority
>
> KafkaSourceReader meaninglessly overrides a method (pauseOrResumeSplits) 
> that is no different from its parent class (SourceReaderBase). Why not 
> remove this override method?
>  
> The relevant code is here; as we can see, there is no difference:
> {code:java}
> //org.apache.flink.connector.kafka.source.reader.KafkaSourceReader#pauseOrResumeSplits
> @Override
> public void pauseOrResumeSplits(
> Collection<String> splitsToPause, Collection<String> splitsToResume) {
> splitFetcherManager.pauseOrResumeSplits(splitsToPause, splitsToResume);
> } 
> //org.apache.flink.connector.base.source.reader.SourceReaderBase#pauseOrResumeSplits
> @Override
> public void pauseOrResumeSplits(
> Collection<String> splitsToPause, Collection<String> splitsToResume) {
> splitFetcherManager.pauseOrResumeSplits(splitsToPause, splitsToResume);
> }{code}
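The inheritance argument above can be checked with a minimal sketch. The class names below are illustrative stand-ins, not the actual Flink types: when an override's body is identical to the parent's, deleting the override leaves behavior unchanged because the parent implementation is simply inherited.

```java
// Stand-in classes (not the real Flink readers): an override identical to the
// parent implementation can be removed; the inherited method behaves the same.
class BaseReader {
    String pauseOrResumeSplits() {
        return "delegated to splitFetcherManager";
    }
}

class KafkaReader extends BaseReader {
    // No override here: BaseReader's pauseOrResumeSplits is inherited as-is.
}
```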





[jira] [Commented] (FLINK-30829) Make the backpressure tab could be sort by the backpressure level

2023-02-27 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17694039#comment-17694039
 ] 

Wencong Liu commented on FLINK-30829:
-

I think that if busy/idle/backpressure were displayed in three columns, each 
sortable separately, it would be clearer to users. WDYT? cc 
[~yunta] [~xtsong] 

> Make the backpressure tab could be sort by the backpressure level
> -
>
> Key: FLINK-30829
> URL: https://issues.apache.org/jira/browse/FLINK-30829
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.17.0
>Reporter: Zhanghao Chen
>Priority: Major
>
> [FLINK-29998|https://issues.apache.org/jira/browse/FLINK-29998] enables user 
> to sort the backpressure tab to see which task is busiest. Another common 
> scenario for backpressure analysis is to find which tasks are backpressured. 
> We should add support to sort the backpressure tab by backpressure level as 
> well.
>  
> h4.





[jira] [Commented] (FLINK-31246) Remove PodTemplate description from the SpecChange message

2023-02-28 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17694419#comment-17694419
 ] 

Wencong Liu commented on FLINK-31246:
-

Hello [~pvary], I'm quite interested in this issue. Could you please point me 
to the relevant code?

> Remove PodTemplate description from the SpecChange message
> --
>
> Key: FLINK-31246
> URL: https://issues.apache.org/jira/browse/FLINK-31246
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Peter Vary
>Priority: Major
>
> Currently the Spec Change message contains the full PodTemplate twice.
> This makes the message seriously big and also contains very little useful 
> information.
> We should abbreviate the message





[jira] [Commented] (FLINK-30829) Make the backpressure tab could be sort by the backpressure level

2023-03-01 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17695072#comment-17695072
 ] 

Wencong Liu commented on FLINK-30829:
-

For jobs with high parallelism, it would be very convenient to analyze job 
status using sortable busy/idle/backpressure columns. Would you like to 
continue this? If you don't have time, I can take over. More discussion is 
also needed. cc [~Zhanghao Chen] 

> Make the backpressure tab could be sort by the backpressure level
> -
>
> Key: FLINK-30829
> URL: https://issues.apache.org/jira/browse/FLINK-30829
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.17.0
>Reporter: Zhanghao Chen
>Priority: Major
>
> [FLINK-29998|https://issues.apache.org/jira/browse/FLINK-29998] enables user 
> to sort the backpressure tab to see which task is busiest. Another common 
> scenario for backpressure analysis is to find which tasks are backpressured. 
> We should add support to sort the backpressure tab by backpressure level as 
> well.
>  
> h4.





[jira] [Commented] (FLINK-30829) Make the backpressure tab could be sort by the backpressure level

2023-03-02 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17695548#comment-17695548
 ] 

Wencong Liu commented on FLINK-30829:
-

I think it's necessary to sort the three columns separately. The calculation 
strategy differs among busy/idle/backpressure and may change in the future. 
Therefore, inferring the top backpressure or idle percentage from a sorted 
busy column would be inaccurate. WDYT? [~yunta] 

> Make the backpressure tab could be sort by the backpressure level
> -
>
> Key: FLINK-30829
> URL: https://issues.apache.org/jira/browse/FLINK-30829
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.17.0
>Reporter: Zhanghao Chen
>Priority: Major
>
> [FLINK-29998|https://issues.apache.org/jira/browse/FLINK-29998] enables user 
> to sort the backpressure tab to see which task is busiest. Another common 
> scenario for backpressure analysis is to find which tasks are backpressured. 
> We should add support to sort the backpressure tab by backpressure level as 
> well.
>  
> h4.





[jira] [Commented] (FLINK-31298) ConnectionUtilsTest.testFindConnectingAddressWhenGetLocalHostThrows swallows IllegalArgumentException

2023-03-02 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17695974#comment-17695974
 ] 

Wencong Liu commented on FLINK-31298:
-

Hello [~mapohl] , I'd like to take this ticket. SocketOptions.SO_TIMEOUT should 
be set to 0.
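A minimal sketch of the proposed fix (the helper class below is illustrative, not the actual test code): NetUtils.acceptWithoutTimeout requires the server socket's SO_TIMEOUT to be 0, which in java.net semantics means accept() blocks without a timeout.

```java
import java.io.IOException;
import java.net.ServerSocket;

// Illustrative helper: configure the server socket the way
// NetUtils.acceptWithoutTimeout expects (SO_TIMEOUT == 0, i.e. blocking accept).
class SoTimeoutSketch {
    static int timeoutAfterFix() throws IOException {
        try (ServerSocket serverSocket = new ServerSocket(0)) { // ephemeral port
            serverSocket.setSoTimeout(0); // 0 means infinite timeout
            return serverSocket.getSoTimeout();
        }
    }
}
```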

> ConnectionUtilsTest.testFindConnectingAddressWhenGetLocalHostThrows swallows 
> IllegalArgumentException
> -
>
> Key: FLINK-31298
> URL: https://issues.apache.org/jira/browse/FLINK-31298
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.17.0, 1.15.3, 1.16.1
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: starter, test-stability
>
> FLINK-24156 introduced {{NetUtils.acceptWithoutTimeout}} which caused the 
> test to print a the stacktrace of an {{IllegalArgumentException}}:
> {code}
> Exception in thread "Thread-0" java.lang.IllegalArgumentException: 
> serverSocket SO_TIMEOUT option must be 0
>   at 
> org.apache.flink.util.Preconditions.checkArgument(Preconditions.java:138)
>   at 
> org.apache.flink.util.NetUtils.acceptWithoutTimeout(NetUtils.java:139)
>   at 
> org.apache.flink.runtime.net.ConnectionUtilsTest$1.run(ConnectionUtilsTest.java:83)
>   at java.lang.Thread.run(Thread.java:750)
> {code}
> This is also shown in the Maven output of CI runs and might cause confusion. 
> The test should be fixed.





[jira] [Commented] (FLINK-27051) CompletedCheckpoint.DiscardObject.discard is not idempotent

2023-03-04 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17696419#comment-17696419
 ] 

Wencong Liu commented on FLINK-27051:
-

Hello [~mapohl], I'm quite interested in the issues under this umbrella. For 
this one, do you mean that CompletedCheckpoint.DiscardObject.discard should 
discard the related data only the first time it is invoked, even when it is 
called multiple times?
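One way to picture the idempotency in question, as a hedged sketch with made-up names (not the actual CompletedCheckpoint code): keep the state reference until deletion actually succeeds, so a failed attempt can be retried, and make a repeated successful call a no-op.

```java
// Illustrative sketch only: the state reference is released after a successful
// deletion, so a failed discard can be retried and a second call is a no-op.
class DiscardObjectSketch {
    private Object operatorState = new Object();
    private boolean discarded = false;

    void discard() {
        if (discarded) {
            return; // already discarded: repeated calls have no effect
        }
        deleteStateObjects(operatorState); // may throw; state is kept for retry
        operatorState = null;              // release only after success
        discarded = true;
    }

    boolean isDiscarded() {
        return discarded;
    }

    private void deleteStateObjects(Object state) {
        // e.g. remove checkpoint artifacts from storage
    }
}
```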

> CompletedCheckpoint.DiscardObject.discard is not idempotent
> ---
>
> Key: FLINK-27051
> URL: https://issues.apache.org/jira/browse/FLINK-27051
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Matthias Pohl
>Priority: Major
>
> {{CompletedCheckpoint.DiscardObject.discard}} is not implemented in an 
> idempotent fashion because we're losing the operatorState even in the case of 
> a failure (see 
> [CompletedCheckpoint:328|https://github.com/apache/flink/blob/dc419b5639f68bcb0b773763f24179dd3536d713/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CompletedCheckpoint.java#L328]).
>  This prevents us from retrying the deletion.





[jira] [Commented] (FLINK-27051) CompletedCheckpoint.DiscardObject.discard is not idempotent

2023-03-06 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17696970#comment-17696970
 ] 

Wencong Liu commented on FLINK-27051:
-

Thanks for the explanation, [~mapohl]. I think I can try this issue; could you 
please assign it to me?

> CompletedCheckpoint.DiscardObject.discard is not idempotent
> ---
>
> Key: FLINK-27051
> URL: https://issues.apache.org/jira/browse/FLINK-27051
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Matthias Pohl
>Priority: Major
>
> {{CompletedCheckpoint.DiscardObject.discard}} is not implemented in an 
> idempotent fashion because we're losing the operatorState even in the case of 
> a failure (see 
> [CompletedCheckpoint:328|https://github.com/apache/flink/blob/dc419b5639f68bcb0b773763f24179dd3536d713/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CompletedCheckpoint.java#L328]).
>  This prevents us from retrying the deletion.





[jira] [Commented] (FLINK-32502) Remove AbstractLeaderElectionService

2023-07-05 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740367#comment-17740367
 ] 

Wencong Liu commented on FLINK-32502:
-

Hello [~mapohl], are you suggesting merging the methods of 
AbstractLeaderElectionService into the LeaderElectionService interface? I would 
like to address this issue. 😄

> Remove AbstractLeaderElectionService
> 
>
> Key: FLINK-32502
> URL: https://issues.apache.org/jira/browse/FLINK-32502
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Priority: Major
>
> {{AbstractLeaderElectionService}} doesn't bring much value anymore and can be 
> removed.





[jira] [Commented] (FLINK-32523) NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted fails with timeout on AZP

2023-07-10 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17741813#comment-17741813
 ] 

Wencong Liu commented on FLINK-32523:
-

I think we should remove @Test(timeout = TEST_TIMEOUT) from this test and let 
CI decide whether it has timed out. WDYT?

> NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted fails with timeout 
> on AZP
> ---
>
> Key: FLINK-32523
> URL: https://issues.apache.org/jira/browse/FLINK-32523
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.18.0
>Reporter: Sergey Nuyanzin
>Priority: Critical
>  Labels: test-stability
>
> This build
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50795&view=logs&j=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3&t=0c010d0c-3dec-5bf1-d408-7b18988b1b2b&l=8638
>  fails with timeout
> {noformat}
> Jul 03 01:26:35 org.junit.runners.model.TestTimedOutException: test timed out 
> after 10 milliseconds
> Jul 03 01:26:35   at java.lang.Object.wait(Native Method)
> Jul 03 01:26:35   at java.lang.Object.wait(Object.java:502)
> Jul 03 01:26:35   at 
> org.apache.flink.core.testutils.OneShotLatch.await(OneShotLatch.java:61)
> Jul 03 01:26:35   at 
> org.apache.flink.test.checkpointing.NotifyCheckpointAbortedITCase.verifyAllOperatorsNotifyAborted(NotifyCheckpointAbortedITCase.java:198)
> Jul 03 01:26:35   at 
> org.apache.flink.test.checkpointing.NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted(NotifyCheckpointAbortedITCase.java:189)
> Jul 03 01:26:35   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jul 03 01:26:35   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 03 01:26:35   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 03 01:26:35   at java.lang.reflect.Method.invoke(Method.java:498)
> Jul 03 01:26:35   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Jul 03 01:26:35   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jul 03 01:26:35   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Jul 03 01:26:35   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jul 03 01:26:35   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Jul 03 01:26:35   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Jul 03 01:26:35   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Jul 03 01:26:35   at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32624) TieredStorageConsumerClientTest.testGetNextBufferFromRemoteTier failed on CI

2023-07-18 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17744403#comment-17744403
 ] 

Wencong Liu commented on FLINK-32624:
-

Sorry for the late reply, I'll take a look. [~lincoln.86xy] 

> TieredStorageConsumerClientTest.testGetNextBufferFromRemoteTier failed on CI
> 
>
> Key: FLINK-32624
> URL: https://issues.apache.org/jira/browse/FLINK-32624
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.18.0
>Reporter: lincoln lee
>Priority: Major
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51376&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8]
> errors:
> {code}
> Jul 18 11:18:35 11:18:35.412 [ERROR] 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.netty.TieredStorageConsumerClientTest.testGetNextBufferFromRemoteTier
>   Time elapsed: 0.014 s  <<< FAILURE!
> Jul 18 11:18:35 java.lang.AssertionError: 
> Jul 18 11:18:35 
> Jul 18 11:18:35 Expecting Optional to contain a value but it was empty.
> Jul 18 11:18:35   at 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.netty.TieredStorageConsumerClientTest.testGetNextBufferFromRemoteTier(TieredStorageConsumerClientTest.java:127)
> {code}





[jira] [Commented] (FLINK-32624) TieredStorageConsumerClientTest.testGetNextBufferFromRemoteTier failed on CI

2023-07-18 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17744409#comment-17744409
 ] 

Wencong Liu commented on FLINK-32624:
-

Thanks [~lincoln.86xy]. I've opened a hotfix: 
[apache/flink#23017|https://github.com/apache/flink/pull/23017]

> TieredStorageConsumerClientTest.testGetNextBufferFromRemoteTier failed on CI
> 
>
> Key: FLINK-32624
> URL: https://issues.apache.org/jira/browse/FLINK-32624
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.18.0
>Reporter: lincoln lee
>Priority: Major
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51376&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8]
> errors:
> {code}
> Jul 18 11:18:35 11:18:35.412 [ERROR] 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.netty.TieredStorageConsumerClientTest.testGetNextBufferFromRemoteTier
>   Time elapsed: 0.014 s  <<< FAILURE!
> Jul 18 11:18:35 java.lang.AssertionError: 
> Jul 18 11:18:35 
> Jul 18 11:18:35 Expecting Optional to contain a value but it was empty.
> Jul 18 11:18:35   at 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.netty.TieredStorageConsumerClientTest.testGetNextBufferFromRemoteTier(TieredStorageConsumerClientTest.java:127)
> {code}





[jira] [Commented] (FLINK-5336) Make Path immutable

2023-07-18 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17744452#comment-17744452
 ] 

Wencong Liu commented on FLINK-5336:


Hi all, I have checked all the classes that utilize the *Path* class. I found 
that there are still some classes that de/serialize the *Path* through the 
*IOReadableWritable* interface.
 # {*}FileSourceSplitSerializer{*}: It de/serializes the *Path* during the 
process of de/serializing FileSourceSplit.
 # {*}TestManagedSinkCommittableSerializer{*}: It de/serializes the Path during 
the process of de/serializing TestManagedCommittable.
 # {*}TestManagedFileSourceSplitSerializer{*}: It de/serializes the Path during 
the process of de/serializing TestManagedIterableSourceSplit.

For 1, the Path needs to be serialized to save checkpoint data in source. 
[~sewen] 

For 2/3, the IT case in flink-table-common depends on the Path 
de/serialization. [~qingyue] 

In summary, I think the Path class still needs to implement the 
*IOReadableWritable* interface to support de/serialization. WDYT? 
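For illustration, a dedicated serializer could write a path as its URI string on a plain {{DataOutput}}, so the Path class itself would not need to implement {{IOReadableWritable}}. The sketch below uses only JDK types ({{java.net.URI}} stands in for Flink's Path); since Flink's {{DataOutputView}} extends {{java.io.DataOutput}}, the same pattern would apply there. Names are illustrative, not Flink's actual API.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.URI;

// Hypothetical sketch: de/serialize a path as a length-prefixed UTF-8 string,
// keeping the serialization logic in a serializer instead of the Path class.
public final class PathSerializerSketch {

    public static void writePath(URI path, DataOutput out) throws IOException {
        out.writeUTF(path.toString()); // length-prefixed UTF-8
    }

    public static URI readPath(DataInput in) throws IOException {
        return URI.create(in.readUTF());
    }

    public static void main(String[] args) throws IOException {
        URI original = URI.create("hdfs://namenode:8020/flink/checkpoints/chk-42");
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        writePath(original, new DataOutputStream(bytes));
        URI restored = readPath(
                new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
        System.out.println(original.equals(restored)); // prints "true"
    }
}
```

With this approach the Path could be made immutable, because nothing needs to mutate it in place during deserialization.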

> Make Path immutable
> ---
>
> Key: FLINK-5336
> URL: https://issues.apache.org/jira/browse/FLINK-5336
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataSet
>Reporter: Stephan Ewen
>Priority: Major
> Fix For: 2.0.0
>
>
> The {{Path}} class is currently mutable to support the {{IOReadableWritable}} 
> serialization. Since that serialization is not used any more, I suggest to 
> drop that interface from Path and make the Path's URI final.
> Being immutable, we can store configures paths properly without the chance of 
> them being mutated as side effects.
> Many parts of the code make the assumption that the Path is immutable, being 
> susceptible to subtle errors.





[jira] [Comment Edited] (FLINK-5336) Make Path immutable

2023-07-19 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17744452#comment-17744452
 ] 

Wencong Liu edited comment on FLINK-5336 at 7/20/23 2:32 AM:
-

Hi all, I have checked all the classes that utilize the *Path* class. I found 
that there are still some classes that de/serialize the *Path* through the 
*IOReadableWritable* interface.
 # {*}FileSourceSplitSerializer{*}: It de/serializes the *Path* during the 
process of de/serializing FileSourceSplit.
 # {*}TestManagedSinkCommittableSerializer{*}: It de/serializes the Path during 
the process of de/serializing TestManagedCommittable.
 # {*}TestManagedFileSourceSplitSerializer{*}: It de/serializes the Path during 
the process of de/serializing TestManagedIterableSourceSplit.

For 1, the Path needs to be serialized to save checkpoint data in source.

For 2/3, the IT case in flink-table-common depends on the Path 
de/serialization. [~qingyue] 

In summary, I think the Path class still needs to implement the 
*IOReadableWritable* interface to support de/serialization. WDYT? 


was (Author: JIRAUSER281639):
Hi all, I have checked all the classes that utilize the *Path* class. I found 
that there're still some classes are de/serialize the *Path* through 
*IOReadableWritable* interface.
 # {*}FileSourceSplitSerializer{*}: It de/serializes the *Path* during the 
process of de/serializing FileSourceSplit.
 # {*}TestManagedSinkCommittableSerializer{*}: It de/serializes the Path during 
the process of de/serializing TestManagedCommittable.
 # {*}TestManagedFileSourceSplitSerializer{*}: It de/serializes the Path during 
the process of de/serializing TestManagedIterableSourceSplit.

For 1, the Path needs to be serialized to save checkpoint data in source. 
[~sewen] 

For 2/3, the IT case in flink-table-common depends on the Path 
de/serialization. [~qingyue] 

In summary, I think the Path class should still need to implement the 
*IOReadableWritable* interface to support de/serialization. WDYT? 

> Make Path immutable
> ---
>
> Key: FLINK-5336
> URL: https://issues.apache.org/jira/browse/FLINK-5336
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataSet
>Reporter: Stephan Ewen
>Priority: Major
> Fix For: 2.0.0
>
>
> The {{Path}} class is currently mutable to support the {{IOReadableWritable}} 
> serialization. Since that serialization is not used any more, I suggest to 
> drop that interface from Path and make the Path's URI final.
> Being immutable, we can store configures paths properly without the chance of 
> them being mutated as side effects.
> Many parts of the code make the assumption that the Path is immutable, being 
> susceptible to subtle errors.





[jira] [Comment Edited] (FLINK-5336) Make Path immutable

2023-07-19 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17744452#comment-17744452
 ] 

Wencong Liu edited comment on FLINK-5336 at 7/20/23 4:13 AM:
-

Hi all, I have checked all the classes that utilize the *Path* class. I found 
that there are still some classes that de/serialize the *Path* through the 
*IOReadableWritable* interface.
 # {*}FileSourceSplitSerializer{*}: It de/serializes the *Path* during the 
process of de/serializing FileSourceSplit.
 # {*}TestManagedSinkCommittableSerializer{*}: It de/serializes the Path during 
the process of de/serializing TestManagedCommittable.
 # {*}TestManagedFileSourceSplitSerializer{*}: It de/serializes the Path during 
the process of de/serializing TestManagedIterableSourceSplit.

For 1, the Path needs to be serialized to save checkpoint data in source.

For 2/3, the IT case in flink-table-common depends on the Path 
de/serialization. [~qingyue] 


was (Author: JIRAUSER281639):
Hi all, I have checked all the classes that utilize the *Path* class. I found 
that there're still some classes are de/serializing the *Path* through 
*IOReadableWritable* interface.
 # {*}FileSourceSplitSerializer{*}: It de/serializes the *Path* during the 
process of de/serializing FileSourceSplit.
 # {*}TestManagedSinkCommittableSerializer{*}: It de/serializes the Path during 
the process of de/serializing TestManagedCommittable.
 # {*}TestManagedFileSourceSplitSerializer{*}: It de/serializes the Path during 
the process of de/serializing TestManagedIterableSourceSplit.

For 1, the Path needs to be serialized to save checkpoint data in source.

For 2/3, the IT case in flink-table-common depends on the Path 
de/serialization. [~qingyue] 

In summary, I think the Path class should still need to implement the 
*IOReadableWritable* interface to support de/serialization. WDYT? 

> Make Path immutable
> ---
>
> Key: FLINK-5336
> URL: https://issues.apache.org/jira/browse/FLINK-5336
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataSet
>Reporter: Stephan Ewen
>Priority: Major
> Fix For: 2.0.0
>
>
> The {{Path}} class is currently mutable to support the {{IOReadableWritable}} 
> serialization. Since that serialization is not used any more, I suggest to 
> drop that interface from Path and make the Path's URI final.
> Being immutable, we can store configures paths properly without the chance of 
> them being mutated as side effects.
> Many parts of the code make the assumption that the Path is immutable, being 
> susceptible to subtle errors.





[jira] [Commented] (FLINK-5336) Make Path immutable

2023-07-20 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17744938#comment-17744938
 ] 

Wencong Liu commented on FLINK-5336:


Thanks [~qingyue], I think the *FileSourceSplitSerializer* could be modified so 
that it no longer relies on the *IOReadableWritable* interface and instead 
performs serialization/deserialization directly on 
DataInputView/DataOutputView. I'll propose a FLIP about the specific changes at 
a later time.

> Make Path immutable
> ---
>
> Key: FLINK-5336
> URL: https://issues.apache.org/jira/browse/FLINK-5336
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataSet
>Reporter: Stephan Ewen
>Priority: Major
> Fix For: 2.0.0
>
>
> The {{Path}} class is currently mutable to support the {{IOReadableWritable}} 
> serialization. Since that serialization is not used any more, I suggest to 
> drop that interface from Path and make the Path's URI final.
> Being immutable, we can store configures paths properly without the chance of 
> them being mutated as side effects.
> Many parts of the code make the assumption that the Path is immutable, being 
> susceptible to subtle errors.





[jira] [Commented] (FLINK-32686) Performance regression on startScheduling.BATCH and startScheduling.STREAMING since 2023-07-24

2023-07-26 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17747733#comment-17747733
 ] 

Wencong Liu commented on FLINK-32686:
-

Thanks for reporting this. The second regression since 07.24 is caused by 
commit c9e1833642650e0b1ea162371dd7c6d35f2e21b7. The commit unexpectedly 
disables the fix from FLINK-32094 and causes the regression on 07.24. I'll open 
a pull request to fix this. The first regression on 07.09 is not related to 
FLINK-32094.

> Performance regression on startScheduling.BATCH and startScheduling.STREAMING 
> since 2023-07-24 
> ---
>
> Key: FLINK-32686
> URL: https://issues.apache.org/jira/browse/FLINK-32686
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.0
>Reporter: Martijn Visser
>Priority: Blocker
>
> http://codespeed.dak8s.net:8000/timeline/#/?exe=5&ben=startScheduling.STREAMING&extr=on&quarts=on&equid=off&env=2&revs=200
> http://codespeed.dak8s.net:8000/timeline/#/?exe=5&ben=startScheduling.BATCH&extr=on&quarts=on&equid=off&env=2&revs=200





[jira] [Comment Edited] (FLINK-32686) Performance regression on startScheduling.BATCH and startScheduling.STREAMING since 2023-07-24

2023-07-26 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17747733#comment-17747733
 ] 

Wencong Liu edited comment on FLINK-32686 at 7/27/23 3:42 AM:
--

Thanks for reporting this. The second regression since 07.24 is caused by 
commit c9e1833642650e0b1ea162371dd7c6d35f2e21b7. The commit unexpectedly 
disables the fix from FLINK-32094 and causes the regression on 07.24. I'll open 
a pull request to fix this. The first regression on 07.09 is not related to 
FLINK-32094.


was (Author: JIRAUSER281639):
Thanks for reporting this. The second regression since 07.24 is caused by 
commit id: c9e1833642650e0b1ea162371dd7c6d35f2e21b7. The commit disables the 
repair in [FLINK-32094] startScheduling.BATCH performance regression since May 
11th - ASF JIRA (apache.org) unexpectedly and causes the regression in 07.24. 
I'll open a pull request to fix this. The first regression in 07.09 is not 
related with FLINK-32094.

> Performance regression on startScheduling.BATCH and startScheduling.STREAMING 
> since 2023-07-24 
> ---
>
> Key: FLINK-32686
> URL: https://issues.apache.org/jira/browse/FLINK-32686
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.0
>Reporter: Martijn Visser
>Priority: Blocker
>  Labels: pull-request-available
>
> http://codespeed.dak8s.net:8000/timeline/#/?exe=5&ben=startScheduling.STREAMING&extr=on&quarts=on&equid=off&env=2&revs=200
> http://codespeed.dak8s.net:8000/timeline/#/?exe=5&ben=startScheduling.BATCH&extr=on&quarts=on&equid=off&env=2&revs=200





[jira] [Created] (FLINK-32708) Fix the write logic in remote tier of hybrid shuffle

2023-07-27 Thread Wencong Liu (Jira)
Wencong Liu created FLINK-32708:
---

 Summary: Fix the write logic in remote tier of hybrid shuffle
 Key: FLINK-32708
 URL: https://issues.apache.org/jira/browse/FLINK-32708
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Network
Affects Versions: 1.18.0
Reporter: Wencong Liu
 Fix For: 1.18.0


Currently, on the writer side in the remote tier, the flag file indicating the 
latest segment id is updated first, followed by the creation of the data file. 
This results in an incorrect order of file creation and we should fix it.
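The corrected ordering can be sketched as follows: make the segment data file durable first, and only then update the flag file that advertises the latest segment id, so a reader that sees the flag can rely on the data file already existing. This is a minimal standalone sketch using local files; the file names and layout are illustrative, not Flink's actual remote-tier layout.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch of the fixed write order in a tiered-storage writer:
// data file first, flag file second.
public final class SegmentWriterSketch {

    public static void writeSegment(Path dir, int segmentId, byte[] data) throws IOException {
        // 1. Write the data file for this segment.
        Path dataFile = dir.resolve("seg-" + segmentId);
        Files.write(dataFile, data);

        // 2. Only after the data file exists, publish the new latest segment id.
        //    An atomic move keeps readers from ever observing a half-written flag.
        Path tmpFlag = dir.resolve("latest-segment.tmp");
        Files.write(tmpFlag, Integer.toString(segmentId).getBytes());
        Files.move(tmpFlag, dir.resolve("latest-segment"),
                StandardCopyOption.REPLACE_EXISTING, StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("segments");
        writeSegment(dir, 0, new byte[] {1, 2, 3});
        int latest = Integer.parseInt(
                new String(Files.readAllBytes(dir.resolve("latest-segment"))));
        // The invariant the ordering guarantees: the advertised segment exists.
        System.out.println(Files.exists(dir.resolve("seg-" + latest))); // prints "true"
    }
}
```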





[jira] [Updated] (FLINK-32708) Fix the write logic in remote tier of Hybrid Shuffle

2023-07-28 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu updated FLINK-32708:

Summary: Fix the write logic in remote tier of Hybrid Shuffle  (was: Fix 
the write logic in remote tier of hybrid shuffle)

> Fix the write logic in remote tier of Hybrid Shuffle
> 
>
> Key: FLINK-32708
> URL: https://issues.apache.org/jira/browse/FLINK-32708
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.18.0
>Reporter: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Currently, on the writer side in the remote tier, the flag file indicating 
> the latest segment id is updated first, followed by the creation of the data 
> file. This results in an incorrect order of file creation and we should fix 
> it.





[jira] [Commented] (FLINK-30257) SqlClientITCase#testMatchRecognize failed

2023-01-16 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17677587#comment-17677587
 ] 

Wencong Liu commented on FLINK-30257:
-

Thanks [~martijnvisser], I've opened a pull request.

> SqlClientITCase#testMatchRecognize failed
> -
>
> Key: FLINK-30257
> URL: https://issues.apache.org/jira/browse/FLINK-30257
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.17.0
>Reporter: Martijn Visser
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available, test-stability
> Attachments: image-2022-12-29-21-47-31-606.png
>
>
> {code:java}
> Nov 30 21:54:41 [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 224.683 s <<< FAILURE! - in SqlClientITCase
> Nov 30 21:54:41 [ERROR] SqlClientITCase.testMatchRecognize  Time elapsed: 
> 50.164 s  <<< FAILURE!
> Nov 30 21:54:41 org.opentest4j.AssertionFailedError: 
> Nov 30 21:54:41 
> Nov 30 21:54:41 expected: 1
> Nov 30 21:54:41  but was: 0
> Nov 30 21:54:41   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Nov 30 21:54:41   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Nov 30 21:54:41   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Nov 30 21:54:41   at 
> SqlClientITCase.verifyNumberOfResultRecords(SqlClientITCase.java:297)
> Nov 30 21:54:41   at 
> SqlClientITCase.testMatchRecognize(SqlClientITCase.java:255)
> Nov 30 21:54:41   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Nov 30 21:54:41   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Nov 30 21:54:41   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Nov 30 21:54:41   at java.lang.reflect.Method.invoke(Method.java:498)
> Nov 30 21:54:41   at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMetho
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43635&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=160c9ae5-96fd-516e-1c91-deb81f59292a&l=14817





[jira] [Commented] (FLINK-29125) Placeholder in Apache Flink Web Frontend to display some "tags" to distinguish between frontends of different clusters

2023-01-17 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17678074#comment-17678074
 ] 

Wencong Liu commented on FLINK-29125:
-

Thanks for the reply [~dkrovi] [~martijnvisser]. I also think it needs a 
discussion, and I'll put these options in the proposal. WDYT? cc [~xtsong] [~junhan]

> Placeholder in Apache Flink Web Frontend to display some "tags" to 
> distinguish between frontends of different clusters
> --
>
> Key: FLINK-29125
> URL: https://issues.apache.org/jira/browse/FLINK-29125
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Reporter: Durga Krovi
>Priority: Major
>  Labels: pull-request-available
>
> When there are several Apache Flink clusters running and the corresponding 
> Web Frontend is opened in browser tabs, it would be great if these UIs can be 
> distinguished in a visible way. Port number in the browser location bar might 
> be useful.
> In our use case, we switch among multiple clusters, connect to only one 
> cluster at a time and use the same port for forwarding. In such a case, there 
> is no visible cue to identify the cluster of the UI being accessed on the 
> browser.





[jira] [Commented] (FLINK-30699) Improve the efficiency of the getRandomString method in the StringUtils class

2023-01-17 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17678078#comment-17678078
 ] 

Wencong Liu commented on FLINK-30699:
-

Thanks for your proposal! [~TaoZex]. I'm a little confused about the aim of 
this issue. Do you mean your change will have better efficiency? It would be 
more convincing if you designed some tests and put the results on this issue.
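To make such an efficiency claim concrete, a contributor could benchmark a simplified variant like the one below against the current implementation (e.g. with JMH or repeated timed runs). This is a standalone sketch for illustration only, not Flink's actual StringUtils code; the method name and the lowercase-ASCII alphabet are assumptions.

```java
import java.util.Random;

// Simplified getRandomString-style helper: one char[] allocation per call,
// no per-character object churn.
public final class RandomStrings {

    public static String getRandomString(Random rnd, int minLength, int maxLength) {
        int len = minLength + rnd.nextInt(maxLength - minLength + 1);
        char[] data = new char[len];
        for (int i = 0; i < len; i++) {
            data[i] = (char) ('a' + rnd.nextInt(26)); // lowercase ASCII only
        }
        return new String(data);
    }

    public static void main(String[] args) {
        Random rnd = new Random(7);
        String s = getRandomString(rnd, 5, 10);
        System.out.println(s.length() >= 5 && s.length() <= 10); // prints "true"
    }
}
```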

> Improve the efficiency of the getRandomString method in the StringUtils class
> -
>
> Key: FLINK-30699
> URL: https://issues.apache.org/jira/browse/FLINK-30699
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Reporter: Bingye Chen
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2023-01-16-18-13-56-912.png, 
> image-2023-01-16-18-14-12-939.png
>
>
> This is a util class method that uses data.length to affect efficiency.
> !image-2023-01-16-18-13-56-912.png|width=398,height=148!
> !image-2023-01-16-18-14-12-939.png|width=398,height=114!





[jira] [Comment Edited] (FLINK-30699) Improve the efficiency of the getRandomString method in the StringUtils class

2023-01-17 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17678078#comment-17678078
 ] 

Wencong Liu edited comment on FLINK-30699 at 1/18/23 6:14 AM:
--

Thanks for your proposal! [~TaoZex]. I'm a little confused about the aim of 
this issue. Do you mean your change will have better efficiency? It would be 
more convincing if you designed some tests and put the results on this issue.


was (Author: JIRAUSER281639):
Thanks for your proposal! [~TaoZex] . I'm a little confused about the aim of 
this issue. Do you mean your change will have a better efficiency? It will be 
more convenient that you design some tests and put the results on this issue.

> Improve the efficiency of the getRandomString method in the StringUtils class
> -
>
> Key: FLINK-30699
> URL: https://issues.apache.org/jira/browse/FLINK-30699
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Reporter: Bingye Chen
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2023-01-16-18-13-56-912.png, 
> image-2023-01-16-18-14-12-939.png
>
>
> This is a util class method that uses data.length to affect efficiency.
> !image-2023-01-16-18-13-56-912.png|width=398,height=148!
> !image-2023-01-16-18-14-12-939.png|width=398,height=114!





[jira] [Commented] (FLINK-30774) flink-utils module

2023-01-23 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17680085#comment-17680085
 ] 

Wencong Liu commented on FLINK-30774:
-

Hello [~mapohl]. I think your proposal is reasonable! But I have a question. 
Currently, many utility classes exist under 
"flink-core/src/main/java/org/apache/flink/util". Some of them are only used in 
the flink-core module, while others are used in modules depending on 
flink-core. How do we decide which classes should be moved to the flink-utils 
module? Or should we simply move all of them to flink-utils?

> flink-utils module
> --
>
> Key: FLINK-30774
> URL: https://issues.apache.org/jira/browse/FLINK-30774
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: starter
>
> Currently, utility methods generic utility classes like {{Preconditions}} or 
> {{AbstractAutoCloseableRegistry}} are collected in {{flink-core}}. The flaw 
> of this approach is that we cannot use those classes in modules like 
> {{fink-migration-test-utils}}, {{flink-test-utils-junit}}, 
> {{flink-metrics-core}} or {{flink-annotations}}.
> We might want to have a generic {{flink-utils}} analogously to 
> {{flink-test-utils}} that collects Flink-independent utility functionality 
> that can be access by any module {{flink-core}} is depending on to make this 
> utility functionality available in any Flink-related module.





[jira] [Commented] (FLINK-30739) SqlGatewayRestEndpointStatementITCase failed with NullPointer

2023-01-23 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17680088#comment-17680088
 ] 

Wencong Liu commented on FLINK-30739:
-

It seems like an existing similar issue. cc [~fsk119] 

> SqlGatewayRestEndpointStatementITCase failed with NullPointer
> -
>
> Key: FLINK-30739
> URL: https://issues.apache.org/jira/browse/FLINK-30739
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Yun Tang
>Priority: Major
>
> Instance 
> https://myasuka.visualstudio.com/flink/_build/results?buildId=437&view=logs&j=43a593e7-535d-554b-08cc-244368da36b4&t=82d122c0-8bbf-56f3-4c0d-8e3d69630d0f
> {code:java}
> Jan 18 10:54:20 [ERROR] 
> org.apache.flink.table.gateway.service.SqlGatewayServiceStatementITCase.testFlinkSqlStatements
>   Time elapsed: 1.37 s  <<< FAILURE!
> Jan 18 10:54:20 org.opentest4j.AssertionFailedError:
> Jan 18 10:54:20 
> Jan 18 10:54:20 expected: 
> Jan 18 10:54:20   "# table.q - CREATE/DROP/SHOW/ALTER/DESCRIBE TABLE
> Jan 18 10:54:20   #
> Jan 18 10:54:20   # Licensed to the Apache Software Foundation (ASF) under 
> one or more
> Jan 18 10:54:20   # contributor license agreements.  See the NOTICE file 
> distributed with
> Jan 18 10:54:20   # this work for additional information regarding copyright 
> ownership.
> Jan 18 10:54:20   # The ASF licenses this file to you under the Apache 
> License, Version 2.0
> Jan 18 10:54:20   # (the "License"); you may not use this file except in 
> compliance with
> Jan 18 10:54:20   # the License.  You may obtain a copy of the License at
> Jan 18 10:54:20   #
> Jan 18 10:54:20   # http://www.apache.org/licenses/LICENSE-2.0
> Jan 18 10:54:20   #
> Jan 18 10:54:20   # Unless required by applicable law or agreed to in 
> writing, software
> Jan 18 10:54:20   # distributed under the License is distributed on an "AS 
> IS" BASIS,
> Jan 18 10:54:20   # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either 
> express or implied.
> Jan 18 10:54:20   # See the License for the specific language governing 
> permissions and
> Jan 18 10:54:20   # limitations under the License.
> Jan 18 10:54:20   
> Jan 18 10:54:20   # 
> ==
> Jan 18 10:54:20   # validation test
> Jan 18 10:54:20   # 
> ==
> Jan 18 10:54:20   
> Jan 18 10:54:20   create table tbl(a int, b as invalid_function());
> Jan 18 10:54:20   !output
> Jan 18 10:54:20   org.apache.calcite.sql.validate.SqlValidatorException: No 
> match found for function signature invalid_function()
> Jan 18 10:54:20   !error
> {code}





[jira] [Commented] (FLINK-32065) Got NoSuchFileException when initialize source function.

2023-05-14 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17722638#comment-17722638
 ] 

Wencong Liu commented on FLINK-32065:
-

Hello [~SpongebobZ], did you clean the tmp dir before the error happened? It 
seems that the file was deleted when it needed to be read.

> Got NoSuchFileException when initialize source function.
> 
>
> Key: FLINK-32065
> URL: https://issues.apache.org/jira/browse/FLINK-32065
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.14.4
>Reporter: Spongebob
>Priority: Major
> Attachments: image-2023-05-12-14-07-45-771.png, 
> image-2023-05-12-14-26-46-268.png, image-2023-05-12-17-37-09-002.png
>
>
> When I submit an application to flink standalone cluster, I got a 
> NoSuchFileException. I think it was failed to create the tmp channel file but 
> I am confused about the reason relative to this case.
> I found that this sub-directory `flink-netty-shuffle-xxx` did not exist, so 
> is this directory only used for that step of the application?
> BTW, this issue happens sporadically.
> !image-2023-05-12-14-07-45-771.png!





[jira] (FLINK-32065) Got NoSuchFileException when initialize source function.

2023-05-14 Thread Wencong Liu (Jira)


[ https://issues.apache.org/jira/browse/FLINK-32065 ]


Wencong Liu deleted comment on FLINK-32065:
-

was (Author: JIRAUSER281639):
Hello [~SpongebobZ] , do you clean the tmp dir before the error happens ? It 
seems that the file was deleted when it need to be read.

> Got NoSuchFileException when initialize source function.
> 
>
> Key: FLINK-32065
> URL: https://issues.apache.org/jira/browse/FLINK-32065
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.14.4
>Reporter: Spongebob
>Priority: Major
> Attachments: image-2023-05-12-14-07-45-771.png, 
> image-2023-05-12-14-26-46-268.png, image-2023-05-12-17-37-09-002.png
>
>
> When I submit an application to flink standalone cluster, I got a 
> NoSuchFileException. I think it was failed to create the tmp channel file but 
> I am confused about the reason relative to this case.
> I found that this sub-directory `flink-netty-shuffle-xxx` did not exist, so 
> is this directory only used for that step of the application?
> BTW, this issue happens sporadically.
> !image-2023-05-12-14-07-45-771.png!





[jira] [Commented] (FLINK-32112) Fixed State Backend sample config in zh-doc

2023-05-16 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17723330#comment-17723330
 ] 

Wencong Liu commented on FLINK-32112:
-

Hello [~xmzhou], thanks for your proposal. Currently, the config 
"state.backend: filesystem" is deprecated and should be replaced by 
"state.backend: hashmap"; the detailed logic is here 
[[code|https://github.com/apache/flink/blob/4bd51ce122d03a13cfd6fdf69325630679cd5053/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateBackendLoader.java#L143]].
 If the user sets "state.backend: filesystem", an error will be thrown. The 
Chinese doc should be updated.
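For reference, the updated doc sample would use the non-deprecated name; a minimal illustrative flink-conf.yaml fragment (the checkpoint directory value is an example, not part of the doc in question):

```
# Use "hashmap" instead of the deprecated "filesystem" name.
state.backend: hashmap
# The checkpoint location is configured separately, e.g.:
state.checkpoints.dir: hdfs:///flink/checkpoints
```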

> Fixed State Backend sample config in zh-doc
> ---
>
> Key: FLINK-32112
> URL: https://issues.apache.org/jira/browse/FLINK-32112
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.17.0
>Reporter: Xianming Zhou
>Priority: Critical
> Attachments: Snipaste_2023-05-16_23-45-02.jpg, 
> Snipaste_2023-05-16_23-45-47.jpg
>
>
> Available State Backends in the current version:
>  * _HashMapStateBackend_
>  * _EmbeddedRocksDBStateBackend_
>  
> _But in the Operations/State & Fault Tolerance page of flink v1.17.0,_ _a 
> sample section in the configuration set state.backend: filesystem  in zh-doc._
> _The correct configuration should be:_
>   _state.backend: hashmap_
>  
> _I think it may cause misunderstandings for users._




