[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2024-01-30 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812184#comment-17812184
 ] 

Matthias Pohl commented on FLINK-31472:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57079&view=logs&j=1c002d28-a73d-5309-26ee-10036d8476b4&t=d1c117a6-8f13-5466-55f0-d48dbb767fcd&l=10596

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1, 1.18.0, 1.19.0
>Reporter: Ran Tao
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> When running mvn clean test, this case fails occasionally.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.j
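
The trace shows the failure mode: the TestProcessingTimeService callback 
invokes AsyncSinkWriter#flush, which calls MailboxExecutor#yield from the 
test's main thread instead of the mailbox thread. Schematically, the guard 
that trips looks like this (a simplified sketch, not the exact Flink source):
{code:java}
// Simplified sketch of the mailbox thread guard: mailbox operations must
// run on the thread that owns the mailbox.
final class MailboxThreadGuard {
    private final Thread mailboxThread;

    MailboxThreadGuard(Thread mailboxThread) {
        this.mailboxThread = mailboxThread;
    }

    void checkIsMailboxThread() {
        if (Thread.currentThread() != mailboxThread) {
            throw new IllegalStateException(
                    "Illegal thread detected. This method must be called "
                            + "from inside the mailbox thread!");
        }
    }
}
{code}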

[jira] [Comment Edited] (FLINK-34272) AdaptiveSchedulerClusterITCase failure due to MiniCluster not running

2024-01-30 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812181#comment-17812181
 ] 

Matthias Pohl edited comment on FLINK-34272 at 1/30/24 7:59 AM:


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57079&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8&l=9517]

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57079&view=logs&j=77a9d8e1-d610-59b3-fc2a-4766541e0e33&t=125e07e7-8de0-5c6c-a541-a567415af3ef&l=9323]

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57079&view=logs&j=d89de3df-4600-5585-dadc-9bbc9a5e661c&t=be5a4b15-4b23-56b1-7582-795f58a645a2&l=9604


was (Author: mapohl):
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57079&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8&l=9517]

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57079&view=logs&j=77a9d8e1-d610-59b3-fc2a-4766541e0e33&t=125e07e7-8de0-5c6c-a541-a567415af3ef&l=9323

> AdaptiveSchedulerClusterITCase failure due to MiniCluster not running
> -
>
> Key: FLINK-34272
> URL: https://issues.apache.org/jira/browse/FLINK-34272
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57073&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8&l=9543]
> {code:java}
>  Jan 29 17:21:29 17:21:29.465 [ERROR] Tests run: 3, Failures: 0, Errors: 2, 
> Skipped: 0, Time elapsed: 12.48 s <<< FAILURE! -- in 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase
> Jan 29 17:21:29 17:21:29.465 [ERROR] 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testAutomaticScaleUp
>  -- Time elapsed: 8.599 s <<< ERROR!
> Jan 29 17:21:29 java.lang.IllegalStateException: MiniCluster is not yet 
> running or has already been shut down.
> Jan 29 17:21:29   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:193)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getDispatcherGatewayFuture(MiniCluster.java:1118)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.runDispatcherCommand(MiniCluster.java:991)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getArchivedExecutionGraph(MiniCluster.java:840)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.lambda$waitUntilParallelismForVertexReached$3(AdaptiveSchedulerClusterITCase.java:270)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:151)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.waitUntilParallelismForVertexReached(AdaptiveSchedulerClusterITCase.java:265)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testAutomaticScaleUp(AdaptiveSchedulerClusterITCase.java:146)
> Jan 29 17:21:29   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 29 17:21:29   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jan 29 17:21:29 
> Jan 29 17:21:29 17:21:29.466 [ERROR] 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testCheckpointStatsPersistedAcrossRescale
>  -- Time elapsed: 2.036 s <<< ERROR!
> Jan 29 17:21:29 java.lang.IllegalStateException: MiniCluster is not yet 
> running or has already been shut down.
> Jan 29 17:21:29   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:193)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getDispatcherGatewayFuture(MiniCluster.java:1118)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.runDispatcherCommand(MiniCluster.java:991)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getExecutionGraph(MiniCluster.java:969)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.lambda$testCheckpointStatsPersistedAcrossRescale$1(AdaptiveSchedulerClusterITCase.java:183)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:151)
> Jan 29 17:21:29   at

[jira] [Updated] (FLINK-34272) AdaptiveSchedulerClusterITCase failure due to MiniCluster not running

2024-01-30 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-34272:
--
Priority: Blocker  (was: Critical)

> AdaptiveSchedulerClusterITCase failure due to MiniCluster not running
> -
>
> Key: FLINK-34272
> URL: https://issues.apache.org/jira/browse/FLINK-34272
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Blocker
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57073&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8&l=9543]
> {code:java}
>  Jan 29 17:21:29 17:21:29.465 [ERROR] Tests run: 3, Failures: 0, Errors: 2, 
> Skipped: 0, Time elapsed: 12.48 s <<< FAILURE! -- in 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase
> Jan 29 17:21:29 17:21:29.465 [ERROR] 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testAutomaticScaleUp
>  -- Time elapsed: 8.599 s <<< ERROR!
> Jan 29 17:21:29 java.lang.IllegalStateException: MiniCluster is not yet 
> running or has already been shut down.
> Jan 29 17:21:29   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:193)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getDispatcherGatewayFuture(MiniCluster.java:1118)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.runDispatcherCommand(MiniCluster.java:991)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getArchivedExecutionGraph(MiniCluster.java:840)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.lambda$waitUntilParallelismForVertexReached$3(AdaptiveSchedulerClusterITCase.java:270)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:151)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.waitUntilParallelismForVertexReached(AdaptiveSchedulerClusterITCase.java:265)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testAutomaticScaleUp(AdaptiveSchedulerClusterITCase.java:146)
> Jan 29 17:21:29   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 29 17:21:29   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jan 29 17:21:29 
> Jan 29 17:21:29 17:21:29.466 [ERROR] 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testCheckpointStatsPersistedAcrossRescale
>  -- Time elapsed: 2.036 s <<< ERROR!
> Jan 29 17:21:29 java.lang.IllegalStateException: MiniCluster is not yet 
> running or has already been shut down.
> Jan 29 17:21:29   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:193)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getDispatcherGatewayFuture(MiniCluster.java:1118)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.runDispatcherCommand(MiniCluster.java:991)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getExecutionGraph(MiniCluster.java:969)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.lambda$testCheckpointStatsPersistedAcrossRescale$1(AdaptiveSchedulerClusterITCase.java:183)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:151)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testCheckpointStatsPersistedAcrossRescale(AdaptiveSchedulerClusterITCase.java:180)
> Jan 29 17:21:29   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 29 17:21:29   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45){code}
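
For context, the trace shows the test's polling loop 
(CommonTestUtils#waitUntilCondition) still querying the cluster after it has 
stopped; the precondition that throws is, schematically (a simplified sketch, 
not the exact MiniCluster source):
{code:java}
import org.apache.flink.util.Preconditions;

// Simplified sketch of the guard in MiniCluster#getDispatcherGatewayFuture:
// every dispatcher command first asserts that the cluster is still running.
final class MiniClusterGuard {
    private volatile boolean running;

    void checkRunning() {
        Preconditions.checkState(
                running,
                "MiniCluster is not yet running or has already been shut down.");
    }
}
{code}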



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34272) AdaptiveSchedulerClusterITCase failure due to MiniCluster not running

2024-01-30 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812186#comment-17812186
 ] 

Matthias Pohl commented on FLINK-34272:
---

[~dmvk] can you have a look? It might be that your changes related to 
FLINK-33976 caused these instabilities.

> AdaptiveSchedulerClusterITCase failure due to MiniCluster not running
> -
>
> Key: FLINK-34272
> URL: https://issues.apache.org/jira/browse/FLINK-34272
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57073&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8&l=9543]
> {code:java}
>  Jan 29 17:21:29 17:21:29.465 [ERROR] Tests run: 3, Failures: 0, Errors: 2, 
> Skipped: 0, Time elapsed: 12.48 s <<< FAILURE! -- in 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase
> Jan 29 17:21:29 17:21:29.465 [ERROR] 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testAutomaticScaleUp
>  -- Time elapsed: 8.599 s <<< ERROR!
> Jan 29 17:21:29 java.lang.IllegalStateException: MiniCluster is not yet 
> running or has already been shut down.
> Jan 29 17:21:29   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:193)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getDispatcherGatewayFuture(MiniCluster.java:1118)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.runDispatcherCommand(MiniCluster.java:991)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getArchivedExecutionGraph(MiniCluster.java:840)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.lambda$waitUntilParallelismForVertexReached$3(AdaptiveSchedulerClusterITCase.java:270)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:151)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.waitUntilParallelismForVertexReached(AdaptiveSchedulerClusterITCase.java:265)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testAutomaticScaleUp(AdaptiveSchedulerClusterITCase.java:146)
> Jan 29 17:21:29   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 29 17:21:29   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jan 29 17:21:29 
> Jan 29 17:21:29 17:21:29.466 [ERROR] 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testCheckpointStatsPersistedAcrossRescale
>  -- Time elapsed: 2.036 s <<< ERROR!
> Jan 29 17:21:29 java.lang.IllegalStateException: MiniCluster is not yet 
> running or has already been shut down.
> Jan 29 17:21:29   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:193)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getDispatcherGatewayFuture(MiniCluster.java:1118)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.runDispatcherCommand(MiniCluster.java:991)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getExecutionGraph(MiniCluster.java:969)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.lambda$testCheckpointStatsPersistedAcrossRescale$1(AdaptiveSchedulerClusterITCase.java:183)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:151)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testCheckpointStatsPersistedAcrossRescale(AdaptiveSchedulerClusterITCase.java:180)
> Jan 29 17:21:29   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 29 17:21:29   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34225) HybridShuffleITCase.testHybridSelectiveExchangesRestart failed due to NullPointerException

2024-01-30 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo reassigned FLINK-34225:
--

Assignee: Yunfeng Zhou

> HybridShuffleITCase.testHybridSelectiveExchangesRestart failed due to 
> NullPointerException
> --
>
> Key: FLINK-34225
> URL: https://issues.apache.org/jira/browse/FLINK-34225
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Assignee: Yunfeng Zhou
>Priority: Blocker
>  Labels: github-actions, pull-request-available, test-stability
> Attachments: FLINK-34225.log
>
>
> This test failed in a master nightly workflow run in GitHub Actions 
> ([FLIP-396|https://cwiki.apache.org/confluence/display/FLINK/FLIP-396%3A+Trial+to+test+GitHub+Actions+as+an+alternative+for+Flink's+current+Azure+CI+infrastructure])
>  which is based on 
> master@[fd673a2f4|https://github.com/apache/flink/commit/fd673a2f46206ff65978f05fcb96b525696aead2]
> https://github.com/XComp/flink/actions/runs/7632434859/job/20793612930#step:10:8625
> {code}
> Error: 01:07:53 01:07:53.367 [ERROR] Tests run: 12, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 16.85 s <<< FAILURE! -- in 
> org.apache.flink.test.runtime.HybridShuffleITCase
> Error: 01:07:53 01:07:53.367 [ERROR] 
> org.apache.flink.test.runtime.HybridShuffleITCase.testHybridSelectiveExchangesRestart
>  -- Time elapsed: 1.164 s <<< FAILURE!
> Jan 24 01:07:53 java.lang.AssertionError: 
> org.apache.flink.runtime.JobException: org.apache.flink.runtime.JobException: 
> Recovery is suppressed by 
> FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=2, 
> backoffTimeMS=0)
> Jan 24 01:07:53   at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:59)
> Jan 24 01:07:53   at 
> org.apache.flink.test.runtime.BatchShuffleITCaseBase.executeJob(BatchShuffleITCaseBase.java:137)
> Jan 24 01:07:53   at 
> org.apache.flink.test.runtime.HybridShuffleITCase.testHybridSelectiveExchangesRestart(HybridShuffleITCase.java:91)
> Jan 24 01:07:53   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:580)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
> Jan 24 01:07:53   at 
> java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:1024)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:276)
> Jan 24 01:07:53   at 
> java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1708)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
> Jan 24 01:07:53   at 
> java.base/java.util.concurre

[jira] [Closed] (FLINK-34225) HybridShuffleITCase.testHybridSelectiveExchangesRestart failed due to NullPointerException

2024-01-30 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo closed FLINK-34225.
--
Resolution: Fixed

This was also fixed via 973190e8ca5b7225f18b5c176726ef8680faffca.

> HybridShuffleITCase.testHybridSelectiveExchangesRestart failed due to 
> NullPointerException
> --
>
> Key: FLINK-34225
> URL: https://issues.apache.org/jira/browse/FLINK-34225
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Assignee: Yunfeng Zhou
>Priority: Blocker
>  Labels: github-actions, pull-request-available, test-stability
> Attachments: FLINK-34225.log
>
>
> This test failed in a master nightly workflow run in GitHub Actions 
> ([FLIP-396|https://cwiki.apache.org/confluence/display/FLINK/FLIP-396%3A+Trial+to+test+GitHub+Actions+as+an+alternative+for+Flink's+current+Azure+CI+infrastructure])
>  which is based on 
> master@[fd673a2f4|https://github.com/apache/flink/commit/fd673a2f46206ff65978f05fcb96b525696aead2]
> https://github.com/XComp/flink/actions/runs/7632434859/job/20793612930#step:10:8625
> {code}
> Error: 01:07:53 01:07:53.367 [ERROR] Tests run: 12, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 16.85 s <<< FAILURE! -- in 
> org.apache.flink.test.runtime.HybridShuffleITCase
> Error: 01:07:53 01:07:53.367 [ERROR] 
> org.apache.flink.test.runtime.HybridShuffleITCase.testHybridSelectiveExchangesRestart
>  -- Time elapsed: 1.164 s <<< FAILURE!
> Jan 24 01:07:53 java.lang.AssertionError: 
> org.apache.flink.runtime.JobException: org.apache.flink.runtime.JobException: 
> Recovery is suppressed by 
> FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=2, 
> backoffTimeMS=0)
> Jan 24 01:07:53   at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:59)
> Jan 24 01:07:53   at 
> org.apache.flink.test.runtime.BatchShuffleITCaseBase.executeJob(BatchShuffleITCaseBase.java:137)
> Jan 24 01:07:53   at 
> org.apache.flink.test.runtime.HybridShuffleITCase.testHybridSelectiveExchangesRestart(HybridShuffleITCase.java:91)
> Jan 24 01:07:53   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:580)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
> Jan 24 01:07:53   at 
> java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:1024)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:276)
> Jan 24 01:07:53   at 
> java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1708)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
> Jan 24 

[jira] [Updated] (FLINK-34225) HybridShuffleITCase.testHybridSelectiveExchangesRestart failed due to NullPointerException

2024-01-30 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-34225:
---
Fix Version/s: 1.19.0

> HybridShuffleITCase.testHybridSelectiveExchangesRestart failed due to 
> NullPointerException
> --
>
> Key: FLINK-34225
> URL: https://issues.apache.org/jira/browse/FLINK-34225
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Assignee: Yunfeng Zhou
>Priority: Blocker
>  Labels: github-actions, pull-request-available, test-stability
> Fix For: 1.19.0
>
> Attachments: FLINK-34225.log
>
>
> This test failed in a master nightly workflow run in GitHub Actions 
> ([FLIP-396|https://cwiki.apache.org/confluence/display/FLINK/FLIP-396%3A+Trial+to+test+GitHub+Actions+as+an+alternative+for+Flink's+current+Azure+CI+infrastructure])
>  which is based on 
> master@[fd673a2f4|https://github.com/apache/flink/commit/fd673a2f46206ff65978f05fcb96b525696aead2]
> https://github.com/XComp/flink/actions/runs/7632434859/job/20793612930#step:10:8625
> {code}
> Error: 01:07:53 01:07:53.367 [ERROR] Tests run: 12, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 16.85 s <<< FAILURE! -- in 
> org.apache.flink.test.runtime.HybridShuffleITCase
> Error: 01:07:53 01:07:53.367 [ERROR] 
> org.apache.flink.test.runtime.HybridShuffleITCase.testHybridSelectiveExchangesRestart
>  -- Time elapsed: 1.164 s <<< FAILURE!
> Jan 24 01:07:53 java.lang.AssertionError: 
> org.apache.flink.runtime.JobException: org.apache.flink.runtime.JobException: 
> Recovery is suppressed by 
> FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=2, 
> backoffTimeMS=0)
> Jan 24 01:07:53   at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:59)
> Jan 24 01:07:53   at 
> org.apache.flink.test.runtime.BatchShuffleITCaseBase.executeJob(BatchShuffleITCaseBase.java:137)
> Jan 24 01:07:53   at 
> org.apache.flink.test.runtime.HybridShuffleITCase.testHybridSelectiveExchangesRestart(HybridShuffleITCase.java:91)
> Jan 24 01:07:53   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:580)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
> Jan 24 01:07:53   at 
> java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:1024)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:276)
> Jan 24 01:07:53   at 
> java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1708)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> Jan 24 01:07:53   at 
> java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
> Jan 24 01:07:53   at 
> java

[jira] [Updated] (FLINK-34272) AdaptiveSchedulerClusterITCase failure due to MiniCluster not running

2024-01-30 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-34272:
--
Priority: Critical  (was: Blocker)

> AdaptiveSchedulerClusterITCase failure due to MiniCluster not running
> -
>
> Key: FLINK-34272
> URL: https://issues.apache.org/jira/browse/FLINK-34272
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57073&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8&l=9543]
> {code:java}
>  Jan 29 17:21:29 17:21:29.465 [ERROR] Tests run: 3, Failures: 0, Errors: 2, 
> Skipped: 0, Time elapsed: 12.48 s <<< FAILURE! -- in 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase
> Jan 29 17:21:29 17:21:29.465 [ERROR] 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testAutomaticScaleUp
>  -- Time elapsed: 8.599 s <<< ERROR!
> Jan 29 17:21:29 java.lang.IllegalStateException: MiniCluster is not yet 
> running or has already been shut down.
> Jan 29 17:21:29   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:193)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getDispatcherGatewayFuture(MiniCluster.java:1118)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.runDispatcherCommand(MiniCluster.java:991)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getArchivedExecutionGraph(MiniCluster.java:840)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.lambda$waitUntilParallelismForVertexReached$3(AdaptiveSchedulerClusterITCase.java:270)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:151)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.waitUntilParallelismForVertexReached(AdaptiveSchedulerClusterITCase.java:265)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testAutomaticScaleUp(AdaptiveSchedulerClusterITCase.java:146)
> Jan 29 17:21:29   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 29 17:21:29   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jan 29 17:21:29 
> Jan 29 17:21:29 17:21:29.466 [ERROR] 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testCheckpointStatsPersistedAcrossRescale
>  -- Time elapsed: 2.036 s <<< ERROR!
> Jan 29 17:21:29 java.lang.IllegalStateException: MiniCluster is not yet 
> running or has already been shut down.
> Jan 29 17:21:29   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:193)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getDispatcherGatewayFuture(MiniCluster.java:1118)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.runDispatcherCommand(MiniCluster.java:991)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getExecutionGraph(MiniCluster.java:969)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.lambda$testCheckpointStatsPersistedAcrossRescale$1(AdaptiveSchedulerClusterITCase.java:183)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:151)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testCheckpointStatsPersistedAcrossRescale(AdaptiveSchedulerClusterITCase.java:180)
> Jan 29 17:21:29   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 29 17:21:29   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33495) Add catalog and connector ability API and validation

2024-01-30 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser reassigned FLINK-33495:
--

Assignee: Jim Hughes  (was: Timo Walther)

> Add catalog and connector ability API and validation
> 
>
> Key: FLINK-33495
> URL: https://issues.apache.org/jira/browse/FLINK-33495
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Timo Walther
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
>
> Add API infra before adjusting the parser:
> - CatalogTable
> - CatalogTable.Builder
> - TableDistribution
> - SupportsBucketing
> This includes validation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34271) Fix the unstable test about GroupAggregateRestoreTest#AGG_WITH_STATE_TTL_HINT

2024-01-30 Thread xuyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyang updated FLINK-34271:
---
Description: 
The underlying reason is that a previous PR introduced a test that configures 
state TTL via the following SQL: 
{code:java}
.runSql(
"INSERT INTO sink_t SELECT /*+ STATE_TTL('source_t' = '4d') */"
+ "b, "
+ "COUNT(*) AS cnt, "
+ "AVG(a) FILTER (WHERE a > 1) AS avg_a, "
+ "MIN(c) AS min_c "
+ "FROM source_t GROUP BY b"){code}
When the savepoint metadata was generated for the first time, the metadata 
recorded the time when a certain key was accessed. If the test is rerun after 
the TTL has expired, the state of this key in the metadata will be cleared, 
resulting in an incorrect test outcome.

To rectify this issue, I think the current tests in RestoreTestBase could be 
modified to regenerate the savepoint metadata as needed each time. However, 
this seems to deviate from the original design purpose of RestoreTestBase.

For my test, I will work around this by removing the "consumedBeforeRestore" 
data, as I am only interested in testing the generation of the expected JSON 
plan.
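
For reference, the expiry semantics at play match Flink's state TTL; a sketch 
using the DataStream-level StateTtlConfig (an illustration of the mechanism 
only; the exact TTL settings behind the STATE_TTL hint are an assumption):
{code:java}
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.time.Time;

// State that has not been written for longer than the TTL (4 days here) is
// treated as expired and is no longer visible after a restore.
StateTtlConfig ttlConfig =
        StateTtlConfig.newBuilder(Time.days(4))
                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
                .build();
{code}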

> Fix the unstable test about GroupAggregateRestoreTest#AGG_WITH_STATE_TTL_HINT
> -
>
> Key: FLINK-34271
> URL: https://issues.apache.org/jira/browse/FLINK-34271
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>
> The underlying reason is that a previous PR introduced a test with state TTL 
> as follows in the SQL: 
> {code:java}
> .runSql(
> "INSERT INTO sink_t SELECT /*+ STATE_TTL('source_t' = '4d') */"
> + "b, "
> + "COUNT(*) AS cnt, "
> + "AVG(a) FILTER (WHERE a > 1) AS avg_a, "
> + "MIN(c) AS min_c "
> + "FROM source_t GROUP BY b"){code}
> When the savepoint metadata was generated for the first time, the metadata 
> recorded the time when a certain key was accessed. If the test is rerun after 
> the TTL has expired, the state of this key in the metadata will be cleared, 
> resulting in an incorrect test outcome.
> To rectify this issue, I think the current tests in RestoreTestBase could be 
> modified to regenerate a new savepoint metadata as needed every time. 
> However, this seems to deviate from the original design purpose of 
> RestoreTestBase.
> For my test, I will work around this by removing the data 
> "consumedBeforeRestore", as I am only interested in testing the generation of 
> an expected JSON plan.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34094] Adds documentation for AsyncScalarFunction [flink]

2024-01-30 Thread via GitHub


twalthr commented on code in PR #24224:
URL: https://github.com/apache/flink/pull/24224#discussion_r1470779501


##
docs/content/docs/dev/table/functions/udfs.md:
##
@@ -846,6 +847,119 @@ If you intend to implement or call functions in Python, 
please refer to the [Pyt
 
 {{< top >}}
 
+Asynchronous Scalar Functions
+
+
+A user-defined asynchronous scalar function maps zero, one, or multiple scalar 
values to a new scalar value, but does it asynchronously. Any data type listed 
in the [data types section]({{< ref "docs/dev/table/types" >}}) can be used as 
a parameter or return type of an evaluation method.
+
+In order to define an asynchronous scalar function, one has to extend the base 
class `AsyncScalarFunction` in `org.apache.flink.table.functions` and implement 
one or more evaluation methods named `eval(...)`.  The first argument must be a 
`CompletableFuture<...>` which is used to return the result, with subsequent 
arguments being the parameters passed to the function.
+
+The following example shows how to do work on a thread pool in the background, 
though any libraries exposing an async interface may be directly used to 
complete the `CompletableFuture` from a callback. See the [Implementation 
Guide](#implementation-guide) for more details.

Review Comment:
   Could we go into some runtime details as well? For example, about:
   - the need for Asynchronous I/O Operations
   - Order of Results
   - Event Time
   - Error handling
   
   I would suggest summarizing 
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/operators/asyncio/
 down to what is necessary for `AsyncScalarFunction`. Otherwise we will face 
many questions on the user@ ML. Maybe we can also adopt the diagram from that 
page for clarity. We need to assume that the reader is a beginner whom we 
should guide to the decision of whether `ScalarFunction` or 
`AsyncScalarFunction` is more appropriate.
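   
   For readers of this thread, a minimal sketch of such a function (the class 
name and body are hypothetical, not the PR's own example; only the 
`eval(CompletableFuture, ...)` contract comes from the documentation text 
above):
   
   ```java
   import java.util.concurrent.CompletableFuture;
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;
   
   import org.apache.flink.table.functions.AsyncScalarFunction;
   import org.apache.flink.table.functions.FunctionContext;
   
   // Hypothetical sketch: completes the result future from a thread pool.
   public class EchoAsyncFunction extends AsyncScalarFunction {
   
       private transient ExecutorService executor;
   
       @Override
       public void open(FunctionContext context) {
           executor = Executors.newFixedThreadPool(4);
       }
   
       // First argument: the future used to return the result.
       // Remaining arguments: the SQL parameters.
       public void eval(CompletableFuture<String> result, String input) {
           executor.submit(() -> result.complete("echo: " + input));
       }
   
       @Override
       public void close() {
           executor.shutdown();
       }
   }
   ```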



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34271) Fix the unstable test about GroupAggregateRestoreTest#AGG_WITH_STATE_TTL_HINT

2024-01-30 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812197#comment-17812197
 ] 

xuyang commented on FLINK-34271:


BTW, I've noticed that the old tests for JSON plan changes (diffing the JSON 
plan on a PR) are about to be completely removed due to FLINK-33421, but it 
still seems necessary to check whether a PR's modifications affect the JSON 
plan, because the new RestoreTestBase testing framework does not assess the 
risk of JSON plan modifications.

In the meantime, if some of the tests in RestoreTestBase fail due to explicit 
json plan incompatibility changes, is it possible to directly modify the 
failing tests in RestoreTestBase (by regenerating json plans and recreating 
savepoint metadata)?

> Fix the unstable test about GroupAggregateRestoreTest#AGG_WITH_STATE_TTL_HINT
> -
>
> Key: FLINK-34271
> URL: https://issues.apache.org/jira/browse/FLINK-34271
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>
> The underlying reason is that a previous PR introduced a test with state TTL 
> as follows in the SQL: 
> {code:java}
> .runSql(
> "INSERT INTO sink_t SELECT /*+ STATE_TTL('source_t' = '4d') */"
> + "b, "
> + "COUNT(*) AS cnt, "
> + "AVG(a) FILTER (WHERE a > 1) AS avg_a, "
> + "MIN(c) AS min_c "
> + "FROM source_t GROUP BY b"){code}
> When the savepoint metadata was generated for the first time, the metadata 
> recorded the time when a certain key was accessed. If the test is rerun after 
> the TTL has expired, the state of this key in the metadata will be cleared, 
> resulting in an incorrect test outcome.
> To rectify this issue, I think the current tests in RestoreTestBase could be 
> modified to regenerate a new savepoint metadata as needed every time. 
> However, this seems to deviate from the original design purpose of 
> RestoreTestBase.
> For my test, I will work around this by removing the data 
> "consumedBeforeRestore", as I am only interested in testing the generation of 
> an expected JSON plan.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34122) Deprecate old serialization config methods and options

2024-01-30 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812198#comment-17812198
 ] 

Martijn Visser commented on FLINK-34122:


[~Zhanghao Chen] Can you please include in the release notes information on 
what's deprecated, and what users should be using?

> Deprecate old serialization config methods and options
> --
>
> Key: FLINK-34122
> URL: https://issues.apache.org/jira/browse/FLINK-34122
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Type Serialization System, Runtime / Configuration
>Affects Versions: 1.19.0
>Reporter: Zhanghao Chen
>Assignee: Zhanghao Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34190) Deprecate RestoreMode#LEGACY

2024-01-30 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812199#comment-17812199
 ] 

Martijn Visser commented on FLINK-34190:


[~Zakelly] [~masteryhx] Can you please include in the release notes information 
on what's deprecated, and what users should be using?

> Deprecate RestoreMode#LEGACY
> 
>
> Key: FLINK-34190
> URL: https://issues.apache.org/jira/browse/FLINK-34190
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Reporter: Zakelly Lan
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34271) Fix the unstable test about GroupAggregateRestoreTest#AGG_WITH_STATE_TTL_HINT

2024-01-30 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812204#comment-17812204
 ] 

xuyang commented on FLINK-34271:


cc [~qingyue] [~dwysakowicz] [~bvarghese] 

> Fix the unstable test about GroupAggregateRestoreTest#AGG_WITH_STATE_TTL_HINT
> -
>
> Key: FLINK-34271
> URL: https://issues.apache.org/jira/browse/FLINK-34271
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>
> The underlying reason is that a previous PR introduced a test with state TTL 
> as follows in the SQL: 
> {code:java}
> .runSql(
> "INSERT INTO sink_t SELECT /*+ STATE_TTL('source_t' = '4d') */"
> + "b, "
> + "COUNT(*) AS cnt, "
> + "AVG(a) FILTER (WHERE a > 1) AS avg_a, "
> + "MIN(c) AS min_c "
> + "FROM source_t GROUP BY b"){code}
> When the savepoint metadata was generated for the first time, the metadata 
> recorded the time when a certain key was accessed. If the test is rerun after 
> the TTL has expired, the state of this key in the metadata will be cleared, 
> resulting in an incorrect test outcome.
> To rectify this issue, I think the current tests in RestoreTestBase could be 
> modified to regenerate a new savepoint metadata as needed every time. 
> However, this seems to deviate from the original design purpose of 
> RestoreTestBase.
> For my test, I will work around this by removing the data 
> "consumedBeforeRestore", as I am only interested in testing the generation of 
> an expected JSON plan.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34084) Deprecate unused configuration in BinaryInput/OutputFormat and FileInput/OutputFormat

2024-01-30 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812202#comment-17812202
 ] 

Martijn Visser commented on FLINK-34084:


[~xuannan] [~xtsong] Can you please include in the release notes information on 
what's deprecated, and what users should be using?

> Deprecate unused configuration in BinaryInput/OutputFormat and 
> FileInput/OutputFormat
> -
>
> Key: FLINK-34084
> URL: https://issues.apache.org/jira/browse/FLINK-34084
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Reporter: Xuannan Su
>Assignee: Xuannan Su
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Update FileInputFormat.java, FileOutputFormat.java, BinaryInputFormat.java, 
> and BinaryOutputFormat.java to deprecate unused string configuration keys.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34083) Deprecate string configuration keys and unused constants in ConfigConstants

2024-01-30 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812200#comment-17812200
 ] 

Martijn Visser commented on FLINK-34083:


[~xuannan] [~fanrui] Can you please include in the release notes information on 
what's deprecated, and what users should be using?

> Deprecate string configuration keys and unused constants in ConfigConstants
> ---
>
> Key: FLINK-34083
> URL: https://issues.apache.org/jira/browse/FLINK-34083
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Reporter: Xuannan Su
>Assignee: Xuannan Su
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> * Update ConfigConstants.java to deprecate and replace string configuration 
> keys
>  * Mark unused constants in ConfigConstants.java as deprecated
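
The migration pattern, schematically (hypothetical key and option, shown only 
to illustrate the string-key-to-ConfigOption replacement):
{code:java}
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

// Hypothetical example of the pattern: the raw string key is deprecated in
// favor of a typed ConfigOption carrying the same key.
public final class ExampleOptions {

    /** @deprecated Use {@link #QUERY_TIMEOUT} instead. */
    @Deprecated
    public static final String QUERY_TIMEOUT_KEY = "example.query-timeout";

    public static final ConfigOption<Long> QUERY_TIMEOUT =
            ConfigOptions.key(QUERY_TIMEOUT_KEY)
                    .longType()
                    .defaultValue(30_000L)
                    .withDescription("Query timeout in milliseconds.");

    private ExampleOptions() {}
}
{code}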



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-34121][core] Introduce pipeline.force-kryo-avro to control whe… [flink]

2024-01-30 Thread via GitHub


JunRuiLee opened a new pull request, #24225:
URL: https://github.com/apache/flink/pull/24225

   …ther to force registration of Avro serializer with Kryo
   
   
   
   ## What is the purpose of the change
   
   Currently the Avro serializer is registered with Kryo if flink-avro is on 
the classpath. This also happens if Avro isn't even used by the job, be it due 
to a mistake in the dependency setup, branching, or flink-avro being in lib. 
This forces users to always provide flink-avro going forward for affected 
jobs, because on recovery Flink complains if any of the Kryo serializers can't 
be loaded.
   
   
   ## Brief change log
   
   - Introduce a new config option that controls whether or not the Avro 
serializer is registered with Kryo (see the sketch below).
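   
   A rough usage sketch (the option name comes from the PR title; the string 
key and default value are assumptions until the change is merged):
   
   ```java
   import org.apache.flink.configuration.Configuration;
   import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
   
   // Sketch: opt out of forced Avro-with-Kryo registration via the new option.
   Configuration conf = new Configuration();
   conf.setString("pipeline.force-kryo-avro", "false");
   StreamExecutionEnvironment env =
           StreamExecutionEnvironment.getExecutionEnvironment(conf);
   ```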
   
   
   ## Verifying this change
   
   Basically covered by existing tests; some new unit tests are also added.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (**yes** / no)
 - The serializers: (**yes** / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-32978) Deprecate RichFunction#open(Configuration parameters)

2024-01-30 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812205#comment-17812205
 ] 

Martijn Visser commented on FLINK-32978:


[~Wencong Liu] Can you please include in the release notes information on 
what's deprecated, and what users should be using?

> Deprecate RichFunction#open(Configuration parameters)
> -
>
> Key: FLINK-32978
> URL: https://issues.apache.org/jira/browse/FLINK-32978
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / Core
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> The 
> [FLIP-344|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263425231]
>  has decided that the parameter in RichFunction#open will be removed in the 
> next major version. We should deprecate it now and remove it in Flink 2.0. 
> The removal will be tracked in 
> [FLINK-6912|https://issues.apache.org/jira/browse/FLINK-6912].
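
For illustration, the migration looks roughly like this (a sketch assuming the 
OpenContext-based replacement that FLIP-344 introduces):
{code:java}
import org.apache.flink.api.common.functions.OpenContext;
import org.apache.flink.api.common.functions.RichMapFunction;

// Sketch: initialization moves from the deprecated open(Configuration
// parameters) to the new open(OpenContext) variant.
public class UppercaseMapper extends RichMapFunction<String, String> {

    @Override
    public void open(OpenContext openContext) throws Exception {
        // setup that previously lived in open(Configuration parameters)
    }

    @Override
    public String map(String value) {
        return value.toUpperCase();
    }
}
{code}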



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33581) FLIP-381: Deprecate configuration getters/setters that return/set complex Java objects

2024-01-30 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812206#comment-17812206
 ] 

Martijn Visser commented on FLINK-33581:


[~JunRuiLi] Can you please include in the release notes information on what's 
deprecated, and what users should be using?

> FLIP-381: Deprecate configuration getters/setters that return/set complex 
> Java objects
> --
>
> Key: FLINK-33581
> URL: https://issues.apache.org/jira/browse/FLINK-33581
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / DataStream
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
> Attachments: image-2023-11-30-17-59-42-650.png
>
>
> Deprecate the non-ConfigOption objects in the StreamExecutionEnvironment, 
> CheckpointConfig, and ExecutionConfig, and ultimately remove them in Flink 
> 2.0.
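
As an illustration, the ConfigOption-based style that replaces the complex 
object setters (a sketch; the option constant shown is an existing Flink 
option, but whether it is the designated replacement for a given setter is an 
assumption):
{code:java}
import java.time.Duration;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.ExecutionCheckpointingOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// Sketch: configure checkpointing via ConfigOptions instead of calling
// object setters on env.getCheckpointConfig().
Configuration conf = new Configuration();
conf.set(ExecutionCheckpointingOptions.CHECKPOINTING_INTERVAL, Duration.ofSeconds(30));
StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment(conf);
{code}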



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33781) Cleanup usage of deprecated org.apache.flink.table.api.TableConfig#ctor()

2024-01-30 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser reassigned FLINK-33781:
--

Assignee: Jacky Lau

> Cleanup usage of deprecated org.apache.flink.table.api.TableConfig#ctor()
> -
>
> Key: FLINK-33781
> URL: https://issues.apache.org/jira/browse/FLINK-33781
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33781) Cleanup usage of deprecated org.apache.flink.table.api.TableConfig#ctor()

2024-01-30 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812207#comment-17812207
 ] 

Martijn Visser commented on FLINK-33781:


[~jackylau] Can you please include in the release notes information on what's 
deprecated, and what users should be using?

> Cleanup usage of deprecated org.apache.flink.table.api.TableConfig#ctor()
> -
>
> Key: FLINK-33781
> URL: https://issues.apache.org/jira/browse/FLINK-33781
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34121][core] Introduce pipeline.force-kryo-avro to control whe… [flink]

2024-01-30 Thread via GitHub


flinkbot commented on PR #24225:
URL: https://github.com/apache/flink/pull/24225#issuecomment-1916346113

   
   ## CI report:
   
   * 7bdedd064f860071e5f27557b27b808c65530b53 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33973) Add new interfaces for SinkV2 to synchronize the API with the SourceV2 API

2024-01-30 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812208#comment-17812208
 ] 

Martijn Visser commented on FLINK-33973:


[~pvary] [~gyfora] Can you please add release notes for this ticket?

> Add new interfaces for SinkV2 to synchronize the API with the SourceV2 API
> --
>
> Key: FLINK-33973
> URL: https://issues.apache.org/jira/browse/FLINK-33973
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Create the new interfaces, set inheritance and deprecation to finalize the 
> interface.
> After this change the new interfaces will exist, but they will not be 
> functional.
> The existing interfaces and tests should keep working without issue, to verify 
> that adding the API is backward compatible.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33973) Add new interfaces for SinkV2 to synchronize the API with the SourceV2 API

2024-01-30 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-33973:
---
Fix Version/s: 1.19.0

> Add new interfaces for SinkV2 to synchronize the API with the SourceV2 API
> --
>
> Key: FLINK-33973
> URL: https://issues.apache.org/jira/browse/FLINK-33973
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Create the new interfaces, set inheritance and deprecation to finalize the 
> interface.
> After this change the new interfaces will exist, but they will not be 
> functional.
> The existing interfaces and tests should keep working without issue, to verify 
> that adding the API is backward compatible.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (FLINK-32978) Deprecate RichFunction#open(Configuration parameters)

2024-01-30 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu reopened FLINK-32978:
-

> Deprecate RichFunction#open(Configuration parameters)
> -
>
> Key: FLINK-32978
> URL: https://issues.apache.org/jira/browse/FLINK-32978
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / Core
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> The 
> [FLIP-344|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263425231]
>  has decided that the parameter in RichFunction#open will be removed in the 
> next major version. We should deprecate it now and remove it in Flink 2.0. 
> The removal will be tracked in 
> [FLINK-6912|https://issues.apache.org/jira/browse/FLINK-6912].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-25857) Add committer metrics to track the status of committables

2024-01-30 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812210#comment-17812210
 ] 

Martijn Visser commented on FLINK-25857:


[~pvary] Should we also add release notes for this ticket?

> Add committer metrics to track the status of committables
> -
>
> Key: FLINK-25857
> URL: https://issues.apache.org/jira/browse/FLINK-25857
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common
>Reporter: Fabian Paul
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
> Attachments: image-2023-10-20-17-23-09-595.png, screenshot-1.png
>
>
> With Sink V2 we can now track the progress of a committable during committing 
> and show metrics about the committing status. (i.e. failed, retried, 
> succeeded).
> The voted FLIP 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-371%3A+Provide+initialization+context+for+Committer+creation+in+TwoPhaseCommittingSink



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34121][core] Introduce pipeline.force-kryo-avro to control whe… [flink]

2024-01-30 Thread via GitHub


reswqa commented on PR #24129:
URL: https://github.com/apache/flink/pull/24129#issuecomment-1916353801

   This is superseded by another PR, closing it now. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34121][core] Introduce pipeline.force-kryo-avro to control whe… [flink]

2024-01-30 Thread via GitHub


reswqa closed pull request #24129: [FLINK-34121][core] Introduce 
pipeline.force-kryo-avro to control whe…
URL: https://github.com/apache/flink/pull/24129


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32684][rpc] Introduces RpcOptions and deprecates AkkaOptions [flink]

2024-01-30 Thread via GitHub


XComp commented on PR #24188:
URL: https://github.com/apache/flink/pull/24188#issuecomment-1916358082

   I verified in the release meeting today that this PR is still allowed to 
make it into master even after feature freeze.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32684][rpc] Introduces RpcOptions and deprecates AkkaOptions [flink]

2024-01-30 Thread via GitHub


XComp merged PR #24188:
URL: https://github.com/apache/flink/pull/24188


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-32978) Deprecate RichFunction#open(Configuration parameters)

2024-01-30 Thread Wencong Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wencong Liu closed FLINK-32978.
---
Release Note: 
The RichFunction#open(Configuration parameters) method has been deprecated and 
will be removed in future versions. Users are encouraged to migrate to the new 
RichFunction#open(OpenContext openContext) method, which provides a more 
comprehensive context for initialization.

Here are the key changes and recommendations for migration:

The open(Configuration parameters) method is now marked as deprecated.
A new method open(OpenContext openContext) has been added as a default method 
to the RichFunction interface.
Users should implement the new open(OpenContext openContext) method for 
function initialization tasks. The new method will be called automatically 
before the execution of any processing methods (map, join, etc.).
If the new open(OpenContext openContext) method is not implemented, Flink will 
fall back to invoking the deprecated open(Configuration parameters) method.
  Resolution: Fixed
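
A minimal sketch of the migration described in the release note (only the two 
method signatures are taken from the note; the mapper class and its 
initialization logic are hypothetical):
{code:java}
import org.apache.flink.api.common.functions.OpenContext;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

public class MyMapper extends RichMapFunction<String, String> {

    // Deprecated variant: Flink only falls back to this method
    // if the OpenContext variant below is not implemented.
    @Override
    public void open(Configuration parameters) throws Exception {
        // legacy initialization
    }

    // New variant: called automatically before any processing method (map, join, ...).
    @Override
    public void open(OpenContext openContext) throws Exception {
        // initialization logic goes here
    }

    @Override
    public String map(String value) {
        return value;
    }
}
{code}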

> Deprecate RichFunction#open(Configuration parameters)
> -
>
> Key: FLINK-32978
> URL: https://issues.apache.org/jira/browse/FLINK-32978
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / Core
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> The 
> [FLIP-344|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263425231]
>  has decided that the parameter in RichFunction#open will be removed in the 
> next major version. We should deprecate it now and remove it in Flink 2.0. 
> The removal will be tracked in 
> [FLINK-6912|https://issues.apache.org/jira/browse/FLINK-6912].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32684) Renaming AkkaOptions into RpcOptions

2024-01-30 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-32684:
--
Fix Version/s: 1.19.0
   (was: 2.0.0)

> Renaming AkkaOptions into RpcOptions
> 
>
> Key: FLINK-32684
> URL: https://issues.apache.org/jira/browse/FLINK-32684
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Coordination
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: 2.0-related, pull-request-available
> Fix For: 1.19.0
>
>
> FLINK-32468 introduced Apache Pekko as a replacement for Akka. This involved 
> renaming classes (besides updating comments). {{AkkaOptions}} was the only 
> occurrence that wasn't renamed as it's annotated as {{@PublicEvolving}}.
> This issue is about renaming {{AkkaOptions}} into {{PekkoOptions}} (or a more 
> general term considering FLINK-29281)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-32684) Renaming AkkaOptions into RpcOptions

2024-01-30 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl resolved FLINK-32684.
---
Release Note: AkkaOptions are deprecated and replaced with RpcOptions
  Resolution: Fixed

master: 
[c678244a3890273145a786b9e1bf1a4f96f6dcfd|https://github.com/apache/flink/commit/c678244a3890273145a786b9e1bf1a4f96f6dcfd]
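
As a quick illustration of the rename (a sketch assuming the option fields keep 
their names across the move, as the release note implies; the ask-timeout 
option is used only as an example):
{code:java}
// Sketch: same option, new home. AkkaOptions is deprecated in favor of RpcOptions.
import java.time.Duration;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.RpcOptions;

public class RpcOptionsExample {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // before: conf.set(AkkaOptions.ASK_TIMEOUT_DURATION, Duration.ofSeconds(10));
        conf.set(RpcOptions.ASK_TIMEOUT_DURATION, Duration.ofSeconds(10));
    }
}
{code}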

> Renaming AkkaOptions into RpcOptions
> 
>
> Key: FLINK-32684
> URL: https://issues.apache.org/jira/browse/FLINK-32684
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Coordination
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: 2.0-related, pull-request-available
> Fix For: 2.0.0
>
>
> FLINK-32468 introduced Apache Pekko as a replacement for Akka. This involved 
> renaming classes (besides updating comments). {{AkkaOptions}} was the only 
> occurrence that wasn't renamed as it's annotated as {{@PublicEvolving}}.
> This issue is about renaming {{AkkaOptions}} into {{PekkoOptions}} (or a more 
> general term considering FLINK-29281)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32978) Deprecate RichFunction#open(Configuration parameters)

2024-01-30 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812213#comment-17812213
 ] 

Wencong Liu commented on FLINK-32978:
-

[~martijnvisser] Thanks for the reminder. I've added the release note 
information.

> Deprecate RichFunction#open(Configuration parameters)
> -
>
> Key: FLINK-32978
> URL: https://issues.apache.org/jira/browse/FLINK-32978
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / Core
>Affects Versions: 1.19.0
>Reporter: Wencong Liu
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> The 
> [FLIP-344|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263425231]
>  has decided that the parameter in RichFunction#open will be removed in the 
> next major version. We should deprecate it now and remove it in Flink 2.0. 
> The removal will be tracked in 
> [FLINK-6912|https://issues.apache.org/jira/browse/FLINK-6912].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34121][core] Introduce pipeline.force-kryo-avro to control whe… [flink]

2024-01-30 Thread via GitHub


X-czh commented on PR #24225:
URL: https://github.com/apache/flink/pull/24225#issuecomment-1916370068

   Thanks for taking over this. LGTM


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34233) HybridShuffleITCase.testHybridSelectiveExchangesRestart failed due to a IllegalStateException

2024-01-30 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812216#comment-17812216
 ] 

Matthias Pohl commented on FLINK-34233:
---

The following build failure didn't yet contain the fix mentioned above and is 
only added here for documentation purposes:

* 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57079&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba&l=8355

> HybridShuffleITCase.testHybridSelectiveExchangesRestart failed due to a 
> IllegalStateException
> -
>
> Key: FLINK-34233
> URL: https://issues.apache.org/jira/browse/FLINK-34233
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Assignee: Yunfeng Zhou
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.19.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56791&view=logs&j=a596f69e-60d2-5a4b-7d39-dc69e4cdaed3&t=712ade8c-ca16-5b76-3acd-14df33bc1cb1&l=8357
> {code}
> Jan 24 02:10:03 02:10:03.582 [ERROR] Tests run: 12, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 34.74 s <<< FAILURE! -- in 
> org.apache.flink.test.runtime.HybridShuffleITCase
> Jan 24 02:10:03 02:10:03.582 [ERROR] 
> org.apache.flink.test.runtime.HybridShuffleITCase.testHybridSelectiveExchangesRestart
>  -- Time elapsed: 3.347 s <<< FAILURE!
> Jan 24 02:10:03 java.lang.AssertionError: 
> org.apache.flink.runtime.JobException: org.apache.flink.runtime.JobException: 
> Recovery is suppressed by 
> FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=2, 
> backoffTimeMS=0)
> Jan 24 02:10:03   at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:59)
> Jan 24 02:10:03   at 
> org.apache.flink.test.runtime.BatchShuffleITCaseBase.executeJob(BatchShuffleITCaseBase.java:137)
> Jan 24 02:10:03   at 
> org.apache.flink.test.runtime.HybridShuffleITCase.testHybridSelectiveExchangesRestart(HybridShuffleITCase.java:91)
> Jan 24 02:10:03   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:568)
> Jan 24 02:10:03   at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Jan 24 02:10:03   at 
> java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
> Jan 24 02:10:03   at 
> java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
> Jan 24 02:10:03   at 
> java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
> Jan 24 02:10:03   at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Jan 24 02:10:03   at 
> java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
> Jan 24 02:10:03   at 
> java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
> Jan 24 02:10:03   at 
> java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:992)
> Jan 24 02:10:03   at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
> Jan 24 02:10:03   at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
> Jan 24 02:10:03   at 
> java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> Jan 24 02:10:03   at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> Jan 24 02:10:03   at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> Jan 24 02:10:03   at 
> java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
> Jan 24 02:10:03   at 
> java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:276)
> Jan 24 02:10:03   at 
> java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625)
> Jan 24 02:10:03   at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
> Jan 24 02:10:03   at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
> Jan 24 02:10:03   at 
> java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> Jan 24 02:10:03   at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> Jan 24 02:10:03   at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> Jan 24 02:10:03   at 
> java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
> Jan 24 02:10:03   at 
> java.base/java.

[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2024-01-30 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812217#comment-17812217
 ] 

Matthias Pohl commented on FLINK-31472:
---

1.18: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57080&view=logs&j=1c002d28-a73d-5309-26ee-10036d8476b4&t=d1c117a6-8f13-5466-55f0-d48dbb767fcd&l=10576

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1, 1.18.0, 1.19.0
>Reporter: Ran Tao
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> when run mvn clean test, this case failed occasionally.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchest

[jira] [Created] (FLINK-34273) git fetch fails

2024-01-30 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-34273:
-

 Summary: git fetch fails
 Key: FLINK-34273
 URL: https://issues.apache.org/jira/browse/FLINK-34273
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI, Test Infrastructure
Affects Versions: 1.18.1, 1.19.0
Reporter: Matthias Pohl


We've seen multiple {{git fetch}} failures. I assume this to be an 
infrastructure issue. This Jira issue is for documentation purposes.
{code:java}
error: RPC failed; curl 18 transfer closed with outstanding read data remaining
error: 5211 bytes of body are still expected
fetch-pack: unexpected disconnect while reading sideband packet
fatal: early EOF
fatal: fetch-pack: invalid index-pack output {code}
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57080&view=logs&j=0e7be18f-84f2-53f0-a32d-4a5e4a174679&t=5d6dc3d3-393d-5111-3a40-c6a5a36202e6&l=667



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34273) git fetch fails

2024-01-30 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812218#comment-17812218
 ] 

Matthias Pohl commented on FLINK-34273:
---

* 
[https://dev.azure.com/apache-flink/web/build.aspx?pcguid=2d3c0ac8-fecf-45be-8407-6d87302181a9&builduri=vstfs%3a%2f%2f%2fBuild%2fBuild%2f57036&tracking_data=ew0KICAic291cmNlIjogIlNsYWNrUGlwZWxpbmVzQXBwIiwNCiAgInNvdXJjZV9ldmVudF9uYW1lIjogImJ1aWxkLmNvbXBsZXRlIg0KfQ%3d%3d]
 * 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57022&view=results]
 

> git fetch fails
> ---
>
> Key: FLINK-34273
> URL: https://issues.apache.org/jira/browse/FLINK-34273
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI, Test Infrastructure
>Affects Versions: 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> We've seen multiple {{git fetch}} failures. I assume this to be an 
> infrastructure issue. This Jira issue is for documentation purposes.
> {code:java}
> error: RPC failed; curl 18 transfer closed with outstanding read data 
> remaining
> error: 5211 bytes of body are still expected
> fetch-pack: unexpected disconnect while reading sideband packet
> fatal: early EOF
> fatal: fetch-pack: invalid index-pack output {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57080&view=logs&j=0e7be18f-84f2-53f0-a32d-4a5e4a174679&t=5d6dc3d3-393d-5111-3a40-c6a5a36202e6&l=667



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34202) python tests take suspiciously long in some of the cases

2024-01-30 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-34202:
--
Priority: Critical  (was: Major)

> python tests take suspiciously long in some of the cases
> 
>
> Key: FLINK-34202
> URL: https://issues.apache.org/jira/browse/FLINK-34202
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.17.2, 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> [This release-1.18 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603&view=logs&j=3e4dd1a2-fe2f-5e5d-a581-48087e718d53&t=b4612f28-e3b5-5853-8a8b-610ae894217a]
>  has the python stage running into a timeout without any obvious reason. The 
> [python stage run for 
> JDK17|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603&view=logs&j=b53e1644-5cb4-5a3b-5d48-f523f39bcf06]
>  was also getting close to the 4h timeout.
> I'm creating this issue for documentation purposes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34202) python tests take suspiciously long in some of the cases

2024-01-30 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812220#comment-17812220
 ] 

Matthias Pohl commented on FLINK-34202:
---

[~lincoln.86xy] do we have someone who can look into this?

> python tests take suspiciously long in some of the cases
> 
>
> Key: FLINK-34202
> URL: https://issues.apache.org/jira/browse/FLINK-34202
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.17.2, 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
>
> [This release-1.18 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603&view=logs&j=3e4dd1a2-fe2f-5e5d-a581-48087e718d53&t=b4612f28-e3b5-5853-8a8b-610ae894217a]
>  has the python stage running into a timeout without any obvious reason. The 
> [python stage run for 
> JDK17|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603&view=logs&j=b53e1644-5cb4-5a3b-5d48-f523f39bcf06]
>  was also getting close to the 4h timeout.
> I'm creating this issue for documentation purposes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2024-01-30 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1781#comment-1781
 ] 

Ahmed Hamdy commented on FLINK-31472:
-

> Sure, if we are certain that this is a test issue and not an issue that was 
> introduced with 1.19?!

The stacktrace shows that the timer is triggered by the test itself, so it is 
unlikely to be an issue in the sink writer. 
I will make sure to double-check the impact as well. 

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1, 1.18.0, 1.19.0
>Reporter: Ran Tao
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> when run mvn clean test, this case failed occasionally.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator

[jira] [Commented] (FLINK-34272) AdaptiveSchedulerClusterITCase failure due to MiniCluster not running

2024-01-30 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812226#comment-17812226
 ] 

Matthias Pohl commented on FLINK-34272:
---

FLINK-34274 is most likely due to the same cause?

> AdaptiveSchedulerClusterITCase failure due to MiniCluster not running
> -
>
> Key: FLINK-34272
> URL: https://issues.apache.org/jira/browse/FLINK-34272
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57073&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8&l=9543]
> {code:java}
>  Jan 29 17:21:29 17:21:29.465 [ERROR] Tests run: 3, Failures: 0, Errors: 2, 
> Skipped: 0, Time elapsed: 12.48 s <<< FAILURE! -- in 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase
> Jan 29 17:21:29 17:21:29.465 [ERROR] 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testAutomaticScaleUp
>  -- Time elapsed: 8.599 s <<< ERROR!
> Jan 29 17:21:29 java.lang.IllegalStateException: MiniCluster is not yet 
> running or has already been shut down.
> Jan 29 17:21:29   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:193)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getDispatcherGatewayFuture(MiniCluster.java:1118)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.runDispatcherCommand(MiniCluster.java:991)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getArchivedExecutionGraph(MiniCluster.java:840)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.lambda$waitUntilParallelismForVertexReached$3(AdaptiveSchedulerClusterITCase.java:270)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:151)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.waitUntilParallelismForVertexReached(AdaptiveSchedulerClusterITCase.java:265)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testAutomaticScaleUp(AdaptiveSchedulerClusterITCase.java:146)
> Jan 29 17:21:29   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 29 17:21:29   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jan 29 17:21:29 
> Jan 29 17:21:29 17:21:29.466 [ERROR] 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testCheckpointStatsPersistedAcrossRescale
>  -- Time elapsed: 2.036 s <<< ERROR!
> Jan 29 17:21:29 java.lang.IllegalStateException: MiniCluster is not yet 
> running or has already been shut down.
> Jan 29 17:21:29   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:193)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getDispatcherGatewayFuture(MiniCluster.java:1118)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.runDispatcherCommand(MiniCluster.java:991)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.minicluster.MiniCluster.getExecutionGraph(MiniCluster.java:969)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.lambda$testCheckpointStatsPersistedAcrossRescale$1(AdaptiveSchedulerClusterITCase.java:183)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:151)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
> Jan 29 17:21:29   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testCheckpointStatsPersistedAcrossRescale(AdaptiveSchedulerClusterITCase.java:180)
> Jan 29 17:21:29   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 29 17:21:29   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34148) Potential regression (Jan. 13): stringWrite with Java8

2024-01-30 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812230#comment-17812230
 ] 

Chesnay Schepler commented on FLINK-34148:
--

Just chiming in to point out that the shade-plugin version shouldn't be 
relevant; we already used 3.4.1 in Flink 1.17 and didn't run into issues. We 
only ever had issues due to more recent Maven versions, so I'm questioning the 
conclusions in this ticket a bit.

We should be able to just bump the shade-plugin and call it a day.

> Potential regression (Jan. 13): stringWrite with Java8
> --
>
> Key: FLINK-34148
> URL: https://issues.apache.org/jira/browse/FLINK-34148
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Type Serialization System
>Reporter: Zakelly Lan
>Priority: Blocker
> Fix For: 1.19.0
>
>
> Significant drop of performance in stringWrite with Java8 from commit 
> [881062f352|https://github.com/apache/flink/commit/881062f352f8bf8c21ab7cbea95e111fd82fdf20]
>  to 
> [5d9d8748b6|https://github.com/apache/flink/commit/5d9d8748b64ff1a75964a5cd2857ab5061312b51]
>  . It only involves strings not so long (128 or 4).
> stringWrite.128.ascii(Java8) baseline=1089.107756 current_value=754.52452
> stringWrite.128.chinese(Java8) baseline=504.244575 current_value=295.358989
> stringWrite.128.russian(Java8) baseline=655.582639 current_value=421.030188
> stringWrite.4.chinese(Java8) baseline=9598.791964 current_value=6627.929927
> stringWrite.4.russian(Java8) baseline=11070.666415 current_value=8289.95767



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34202) python tests take suspiciously long in some of the cases

2024-01-30 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812234#comment-17812234
 ] 

lincoln lee commented on FLINK-34202:
-

[~dianfu] Sorry for the ping, but considering you're the expert in this area, 
could you help take a look at this issue?

> python tests take suspiciously long in some of the cases
> 
>
> Key: FLINK-34202
> URL: https://issues.apache.org/jira/browse/FLINK-34202
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.17.2, 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> [This release-1.18 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603&view=logs&j=3e4dd1a2-fe2f-5e5d-a581-48087e718d53&t=b4612f28-e3b5-5853-8a8b-610ae894217a]
>  has the python stage running into a timeout without any obvious reason. The 
> [python stage run for 
> JDK17|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603&view=logs&j=b53e1644-5cb4-5a3b-5d48-f523f39bcf06]
>  was also getting close to the 4h timeout.
> I'm creating this issue for documentation purposes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34202) python tests take suspiciously long in some of the cases

2024-01-30 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812219#comment-17812219
 ] 

Matthias Pohl commented on FLINK-34202:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57080&view=logs&j=3e4dd1a2-fe2f-5e5d-a581-48087e718d53&t=b4612f28-e3b5-5853-8a8b-610ae894217a

> python tests take suspiciously long in some of the cases
> 
>
> Key: FLINK-34202
> URL: https://issues.apache.org/jira/browse/FLINK-34202
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.17.2, 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
>
> [This release-1.18 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603&view=logs&j=3e4dd1a2-fe2f-5e5d-a581-48087e718d53&t=b4612f28-e3b5-5853-8a8b-610ae894217a]
>  has the python stage running into a timeout without any obvious reason. The 
> [python stage run for 
> JDK17|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603&view=logs&j=b53e1644-5cb4-5a3b-5d48-f523f39bcf06]
>  was also getting close to the 4h timeout.
> I'm creating this issue for documentation purposes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34274) AdaptiveSchedulerTest.testRequirementLowerBoundDecreaseAfterResourceScarcityBelowAvailableSlots times out

2024-01-30 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-34274:
-

 Summary: 
AdaptiveSchedulerTest.testRequirementLowerBoundDecreaseAfterResourceScarcityBelowAvailableSlots
 times out
 Key: FLINK-34274
 URL: https://issues.apache.org/jira/browse/FLINK-34274
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.19.0
Reporter: Matthias Pohl


{code:java}
Jan 30 03:15:46 "ForkJoinPool-420-worker-25" #9746 daemon prio=5 os_prio=0 
tid=0x7fdfbb635800 nid=0x2dbd waiting on condition [0x7fdf39528000]
Jan 30 03:15:46java.lang.Thread.State: WAITING (parking)
Jan 30 03:15:46 at sun.misc.Unsafe.park(Native Method)
Jan 30 03:15:46 - parking to wait for  <0xfe642548> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
Jan 30 03:15:46 at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
Jan 30 03:15:46 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
Jan 30 03:15:46 at 
java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
Jan 30 03:15:46 at 
org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerTest$SubmissionBufferingTaskManagerGateway.waitForSubmissions(AdaptiveSchedulerTest.java:2225)
Jan 30 03:15:46 at 
org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerTest.awaitJobReachingParallelism(AdaptiveSchedulerTest.java:1333)
Jan 30 03:15:46 at 
org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerTest.testRequirementLowerBoundDecreaseAfterResourceScarcityBelowAvailableSlots(AdaptiveSchedulerTest.java:1273)
Jan 30 03:15:46 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
[...] {code}
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57086&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8&l=9893



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33132] Flink Connector Redshift TableSink Implementation [flink-connector-aws]

2024-01-30 Thread via GitHub


Samrat002 commented on PR #114:
URL: 
https://github.com/apache/flink-connector-aws/pull/114#issuecomment-1916403746

   > 1. The truncate table parameter is supported in the batch import scenario. 
If data exists in a table, duplicate data will be generated and the table must 
be cleared first.
   
   If the record exists in the table and the created Redshift table contains a 
primary key or composite key, the sink carries out a MERGE INTO operation. If 
you check the code, we are doing a MERGE INTO operation if the DDL contains a 
primary key.
   
   
   > 2. Can upsert write data in batch data import scenarios? 
https://docs.aws.amazon.com/redshift/latest/dg/t_updating-inserting-using-staging-tables-.html
   
   Can you please elaborate more? As per my understanding, your concern is how 
the staged data gets merged; in the code we are using 
https://docs.aws.amazon.com/redshift/latest/dg/t_updating-inserting-using-staging-tables-.html#merge-method-specify-column-list
 .
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-34275) Prepare Flink 1.19 Release

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34275:
---

 Summary: Prepare Flink 1.19 Release
 Key: FLINK-34275
 URL: https://issues.apache.org/jira/browse/FLINK-34275
 Project: Flink
  Issue Type: New Feature
  Components: Release System
Affects Versions: 1.17.0
Reporter: lincoln lee
Assignee: Leonard Xu
 Fix For: 1.17.0


This umbrella issue is meant as a test balloon for moving the [release 
documentation|https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release]
 into Jira.
h3. Prerequisites
h4. Environment Variables

Commands in the subtasks might expect some of the following environment 
variables to be set according to the version that is about to be released:
{code:bash}
RELEASE_VERSION="1.5.0"
SHORT_RELEASE_VERSION="1.5"
CURRENT_SNAPSHOT_VERSION="$SHORT_RELEASE_VERSION-SNAPSHOT"
NEXT_SNAPSHOT_VERSION="1.6-SNAPSHOT"
SHORT_NEXT_SNAPSHOT_VERSION="1.6"
{code}
h4. Build Tools

All of the following steps require Maven 3.2.5 and Java 8. Modify your 
PATH environment variable accordingly if needed.
h4. Flink Source
 * Create a new directory for this release and clone the Flink repository from 
Github to ensure you have a clean workspace (this step is optional).
 * Run {{mvn -Prelease clean install}} to ensure that the build processes that 
are specific to that profile are in good shape (this step is optional).

The rest of these instructions assumes that commands are run in the root (or 
{{./tools}} directory) of a repository on the branch of the release version, 
with the above environment variables set.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34200) AutoRescalingITCase#testCheckpointRescalingInKeyedState fails

2024-01-30 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812233#comment-17812233
 ] 

Matthias Pohl commented on FLINK-34200:
---

I did a local test run with the following diff to check whether the failure is 
still reproducible:
{code:java}
diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/PrioritizedOperatorSubtaskState.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/PrioritizedOperatorSubtaskState.java
index e41bcfe7338..676e738ff45 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/PrioritizedOperatorSubtaskState.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/PrioritizedOperatorSubtaskState.java
@@ -290,14 +290,14 @@ public class PrioritizedOperatorSubtaskState {
 }
 
 return new PrioritizedOperatorSubtaskState(
-computePrioritizedAlternatives(
+resolvePrioritizedAlternatives(
 jobManagerState.getManagedKeyedState(),
 managedKeyedAlternatives,
-KeyedStateHandle::getKeyGroupRange),
-computePrioritizedAlternatives(
+
eqStateApprover(KeyedStateHandle::getKeyGroupRange)),
+resolvePrioritizedAlternatives(
 jobManagerState.getRawKeyedState(),
 rawKeyedAlternatives,
-KeyedStateHandle::getKeyGroupRange),
+
eqStateApprover(KeyedStateHandle::getKeyGroupRange)),
 resolvePrioritizedAlternatives(
 jobManagerState.getManagedOperatorState(),
 managedOperatorAlternatives, {code}
Even with the above change, the error appeared in the 2nd repetition. According 
to [~srichter], that reveals that it must be either a test setup issue or a 
hidden issue that was only revealed by introducing the 
{{{}AutoRescalingITCase{}}}.

[~srichter] do we have someone who can look into it in more detail? I don't 
have the capacity right now.

> AutoRescalingITCase#testCheckpointRescalingInKeyedState fails
> -
>
> Key: FLINK-34200
> URL: https://issues.apache.org/jira/browse/FLINK-34200
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
> Attachments: FLINK-34200.failure.log.gz
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56601&view=logs&j=8fd9202e-fd17-5b26-353c-ac1ff76c8f28&t=ea7cf968-e585-52cb-e0fc-f48de023a7ca&l=8200]
> {code:java}
> Jan 19 02:31:53 02:31:53.954 [ERROR] Tests run: 32, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 1050 s <<< FAILURE! -- in 
> org.apache.flink.test.checkpointing.AutoRescalingITCase
> Jan 19 02:31:53 02:31:53.954 [ERROR] 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingInKeyedState[backend
>  = rocksdb, buffersPerChannel = 2] -- Time elapsed: 59.10 s <<< FAILURE!
> Jan 19 02:31:53 java.lang.AssertionError: expected:<[(0,8000), (0,32000), 
> (0,48000), (0,72000), (1,78000), (1,3), (1,54000), (0,2000), (0,1), 
> (0,5), (0,66000), (0,74000), (0,82000), (1,8), (1,0), (1,16000), 
> (1,24000), (1,4), (1,56000), (1,64000), (0,12000), (0,28000), (0,52000), 
> (0,6), (0,68000), (0,76000), (1,18000), (1,26000), (1,34000), (1,42000), 
> (1,58000), (0,6000), (0,14000), (0,22000), (0,38000), (0,46000), (0,62000), 
> (0,7), (1,4000), (1,2), (1,36000), (1,44000)]> but was:<[(0,8000), 
> (0,32000), (0,48000), (0,72000), (1,78000), (1,3), (1,54000), (0,2000), 
> (0,1), (0,5), (0,66000), (0,74000), (0,82000), (1,8), (1,0), 
> (1,16000), (1,24000), (1,4), (1,56000), (1,64000), (0,12000), (0,28000), 
> (0,52000), (0,6), (0,68000), (0,76000), (0,1000), (0,25000), (0,33000), 
> (0,41000), (1,18000), (1,26000), (1,34000), (1,42000), (1,58000), (0,6000), 
> (0,14000), (0,22000), (0,38000), (0,46000), (0,62000), (0,7), (1,4000), 
> (1,2), (1,36000), (1,44000)]>
> Jan 19 02:31:53   at org.junit.Assert.fail(Assert.java:89)
> Jan 19 02:31:53   at org.junit.Assert.failNotEquals(Assert.java:835)
> Jan 19 02:31:53   at org.junit.Assert.assertEquals(Assert.java:120)
> Jan 19 02:31:53   at org.junit.Assert.assertEquals(Assert.java:146)
> Jan 19 02:31:53   at 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingKeyedState(AutoRescalingITCase.java:296)
> Jan 19 02:31:53   at 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheck

[jira] [Created] (FLINK-34278) CLONE - Review and update documentation

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34278:
---

 Summary: CLONE - Review and update documentation
 Key: FLINK-34278
 URL: https://issues.apache.org/jira/browse/FLINK-34278
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.17.0
Reporter: lincoln lee
Assignee: Qingsheng Ren
 Fix For: 1.17.0


There are a few pages in the documentation that need to be reviewed and updated 
for each release.
 * Ensure that there exists a release notes page for each non-bugfix release 
(e.g., 1.5.0) in {{{}./docs/release-notes/{}}}, that it is up-to-date, and 
linked from the start page of the documentation.
 * Upgrading Applications and Flink Versions: 
[https://ci.apache.org/projects/flink/flink-docs-master/ops/upgrading.html]
 * ...

 

h3. Expectations
 * Update upgrade compatibility table 
([apache-flink:./docs/content/docs/ops/upgrading.md|https://github.com/apache/flink/blob/master/docs/content/docs/ops/upgrading.md#compatibility-table]
 and 
[apache-flink:./docs/content.zh/docs/ops/upgrading.md|https://github.com/apache/flink/blob/master/docs/content.zh/docs/ops/upgrading.md#compatibility-table]).
 * Update [Release Overview in 
Confluence|https://cwiki.apache.org/confluence/display/FLINK/Release+Management+and+Feature+Plan]
 * (minor only) The documentation for the new major release is visible under 
[https://nightlies.apache.org/flink/flink-docs-release-$SHORT_RELEASE_VERSION] 
(after at least one [doc 
build|https://github.com/apache/flink/actions/workflows/docs.yml] succeeded).
 * (minor only) The documentation for the new major release does not contain 
"-SNAPSHOT" in its version title, and all links refer to the corresponding 
version docs instead of {{{}master{}}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-31472] Disable Intermittently failing throttling test [flink]

2024-01-30 Thread via GitHub


vahmed-hamdy commented on code in PR #24175:
URL: https://github.com/apache/flink/pull/24175#discussion_r1470844725


##
flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriterThrottlingTest.java:
##
@@ -36,6 +37,7 @@
 import java.util.stream.LongStream;
 
 /** Test class for rate limiting functionalities of {@link AsyncSinkWriter}. */
+@Disabled("FLINK-31472")

Review Comment:
   yes good catch! 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-31472] Disable Intermittently failing throttling test [flink]

2024-01-30 Thread via GitHub


XComp commented on code in PR #24175:
URL: https://github.com/apache/flink/pull/24175#discussion_r1470860598


##
flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriterThrottlingTest.java:
##
@@ -36,6 +37,7 @@
 import java.util.stream.LongStream;
 
 /** Test class for rate limiting functionalities of {@link AsyncSinkWriter}. */
+@Disabled("FLINK-31472")

Review Comment:
   You also want to rebase, I guess, to have a stable master base again for CI



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34271) Fix the potential failure test about GroupAggregateRestoreTest#AGG_WITH_STATE_TTL_HINT

2024-01-30 Thread xuyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyang updated FLINK-34271:
---
Summary: Fix the potential failure test about 
GroupAggregateRestoreTest#AGG_WITH_STATE_TTL_HINT  (was: Fix the unstable test 
about GroupAggregateRestoreTest#AGG_WITH_STATE_TTL_HINT)

> Fix the potential failure test about 
> GroupAggregateRestoreTest#AGG_WITH_STATE_TTL_HINT
> --
>
> Key: FLINK-34271
> URL: https://issues.apache.org/jira/browse/FLINK-34271
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>
> The underlying reason is that a previous PR introduced a test that uses a 
> state TTL hint in the SQL, as follows: 
> {code:java}
> .runSql(
> "INSERT INTO sink_t SELECT /*+ STATE_TTL('source_t' = '4d') */"
> + "b, "
> + "COUNT(*) AS cnt, "
> + "AVG(a) FILTER (WHERE a > 1) AS avg_a, "
> + "MIN(c) AS min_c "
> + "FROM source_t GROUP BY b"){code}
> When the savepoint metadata was generated for the first time, the metadata 
> recorded the time when a certain key was accessed. If the test is rerun after 
> the TTL has expired, the state of this key in the metadata will be cleared, 
> resulting in an incorrect test outcome.
> To rectify this issue, I think the current tests in RestoreTestBase could be 
> modified to regenerate new savepoint metadata as needed every time. 
> However, this seems to deviate from the original design purpose of 
> RestoreTestBase.
> For my test, I will work around this by removing the data 
> "consumedBeforeRestore", as I am only interested in testing the generation of 
> an expected JSON plan.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34280) CLONE - Review Release Notes in JIRA

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34280:
---

 Summary: CLONE - Review Release Notes in JIRA
 Key: FLINK-34280
 URL: https://issues.apache.org/jira/browse/FLINK-34280
 Project: Flink
  Issue Type: Sub-task
Reporter: lincoln lee
Assignee: Qingsheng Ren


JIRA automatically generates Release Notes based on the {{Fix Version}} field 
applied to issues. Release Notes are intended for Flink users (not Flink 
committers/contributors). You should ensure that Release Notes are informative 
and useful.

Open the release notes from the version status page by choosing the release 
underway and clicking Release Notes.

You should verify that the issues listed automatically by JIRA are appropriate 
to appear in the Release Notes. Specifically, issues should:
 * Be appropriately classified as {{{}Bug{}}}, {{{}New Feature{}}}, 
{{{}Improvement{}}}, etc.
 * Represent noteworthy user-facing changes, such as new functionality, 
backward-incompatible API changes, or performance improvements.
 * Have occurred since the previous release; an issue that was introduced and 
fixed between releases should not appear in the Release Notes.
 * Have an issue title that makes sense when read on its own.

Adjust any of the above properties to improve the clarity and presentation of 
the Release Notes.

Ensure that the JIRA release notes are also included in the release notes of 
the documentation (see section "Review and update documentation").
h4. Content of Release Notes field from JIRA tickets 

To get the list of "release notes" fields from JIRA, you can run the following 
script using the JIRA REST API (note that {{maxResults}} limits the number of entries):
{code:bash}
curl -s 
https://issues.apache.org/jira//rest/api/2/search?maxResults=200&jql=project%20%3D%20FLINK%20AND%20%22Release%20Note%22%20is%20not%20EMPTY%20and%20fixVersion%20%3D%20${RELEASE_VERSION}
 | jq '.issues[]|.key,.fields.summary,.fields.customfield_12310192' | paste - - 
-
{code}
{{jq}} is present in most Linux distributions and on macOS can be installed 
via brew.
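For example, an illustrative invocation (the version value is a placeholder; 
note that the URL must be quoted, otherwise the shell would interpret the 
{{&}} characters):
{code:bash}
export RELEASE_VERSION="1.19.0"   # illustrative value
curl -s "https://issues.apache.org/jira//rest/api/2/search?maxResults=200&jql=project%20%3D%20FLINK%20AND%20%22Release%20Note%22%20is%20not%20EMPTY%20and%20fixVersion%20%3D%20${RELEASE_VERSION}" \
  | jq '.issues[]|.key,.fields.summary,.fields.customfield_12310192' | paste - - -
{code}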

 

h3. Expectations
 * Release Notes in JIRA have been audited and adjusted



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34277) CLONE - Triage release-blocking issues in JIRA

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34277:
---

 Summary: CLONE - Triage release-blocking issues in JIRA
 Key: FLINK-34277
 URL: https://issues.apache.org/jira/browse/FLINK-34277
 Project: Flink
  Issue Type: Sub-task
Reporter: lincoln lee
Assignee: Qingsheng Ren


There could be outstanding release-blocking issues, which should be triaged 
before proceeding to build a release candidate. We track them by assigning a 
specific Fix Version field even before the issue is resolved.

The list of release-blocking issues is available at the version status page 
(see the JQL sketch after the steps below). Triage each unresolved issue with 
one of the following resolutions:
 * If the issue has been resolved and JIRA was not updated, resolve it 
accordingly.
 * If the issue has not been resolved and it is acceptable to defer this until 
the next release, update the Fix Version field to the new version you just 
created. Please consider discussing this with stakeholders and the dev@ mailing 
list, as appropriate.
 ** When using the "Bulk Change" functionality of Jira:
 *** First, add the newly created version to Fix Version for all unresolved 
tickets that have the old version among their Fix Versions.
 *** Afterwards, remove the old version from the Fix Version field.
 * If the issue has not been resolved and it is not acceptable to release until 
it is fixed, the release cannot proceed. Instead, work with the Flink community 
to resolve the issue.
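For reference, a JQL query along these lines lists the unresolved tickets 
still tagged with the version underway (an illustrative sketch, not part of 
the official checklist; substitute the actual version):
{code}
project = FLINK AND resolution = Unresolved AND fixVersion = 1.19.0 ORDER BY priority DESC
{code}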

 

h3. Expectations
 * There are no release blocking JIRA issues



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34282) CLONE - Create a release branch

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34282:
---

 Summary: CLONE - Create a release branch
 Key: FLINK-34282
 URL: https://issues.apache.org/jira/browse/FLINK-34282
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.17.0
Reporter: lincoln lee
Assignee: Leonard Xu
 Fix For: 1.17.0


If you are doing a new minor release, you need to update the Flink version in 
the following repositories and the [AzureCI project 
configuration|https://dev.azure.com/apache-flink/apache-flink/]:
 * [apache/flink|https://github.com/apache/flink]
 * [apache/flink-docker|https://github.com/apache/flink-docker]
 * [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks]

Patch releases don't require these repositories to be touched. Simply check 
out the already existing branch for that version:
{code:java}
$ git checkout release-$SHORT_RELEASE_VERSION
{code}
h4. Flink repository

Create a branch for the new version that we want to release before updating the 
master branch to the next development version:
{code:bash}
$ cd ./tools
tools $ releasing/create_snapshot_branch.sh
tools $ git checkout master
tools $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION 
NEW_VERSION=$NEXT_SNAPSHOT_VERSION releasing/update_branch_version.sh
{code}
In the {{master}} branch, add a new value (e.g. {{v1_16("1.16")}}) to 
[apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
 as the last entry:
{code:java}
// ...
v1_12("1.12"),
v1_13("1.13"),
v1_14("1.14"),
v1_15("1.15"),
v1_16("1.16");
{code}
The newly created branch and updated {{master}} branch need to be pushed to the 
official repository.
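A sketch of those pushes ({{origin}} as the remote name is an assumption; use 
whatever remote points at the official apache/flink repository):
{code:bash}
$ git push origin release-$SHORT_RELEASE_VERSION
$ git push origin master
{code}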
h4. Flink Docker Repository

Afterwards, fork off a {{dev-x.y}} branch from {{dev-master}} in the 
[apache/flink-docker|https://github.com/apache/flink-docker] repository. Make 
sure that 
[apache/flink-docker:.github/workflows/ci.yml|https://github.com/apache/flink-docker/blob/dev-master/.github/workflows/ci.yml]
 points to the correct snapshot version; for {{dev-x.y}} it should point to 
{{{}x.y-SNAPSHOT{}}}, while for {{dev-master}} it should point to the most 
recent snapshot version ({{$NEXT_SNAPSHOT_VERSION}}).

After pushing the new minor release branch, as the last step you should also 
update the documentation workflow so that it also builds the documentation for 
the new release branch. Check [Managing 
Documentation|https://cwiki.apache.org/confluence/display/FLINK/Managing+Documentation]
 for details on how to do that. You may also want to manually trigger a build 
to make the changes visible as soon as possible.

h4. Flink Benchmark Repository
First of all, fork off the {{master}} branch into a {{dev-x.y}} branch in 
[apache/flink-benchmarks|https://github.com/apache/flink-benchmarks], so that 
we have a branch named {{dev-x.y}} which builds on top of 
${{CURRENT_SNAPSHOT_VERSION}}.

Then, inside the repository you need to manually update the {{flink.version}} 
property inside the parent *pom.xml* file. It should point to the most recent 
snapshot version ($NEXT_SNAPSHOT_VERSION). For example:
{code:xml}
<flink.version>1.18-SNAPSHOT</flink.version>
{code}
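If you prefer not to edit the file by hand, the property can also be bumped 
with the versions-maven-plugin (a sketch under that assumption; the guide 
itself only describes the manual edit):
{code:bash}
$ mvn versions:set-property -Dproperty=flink.version -DnewVersion=1.18-SNAPSHOT
{code}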

h4. AzureCI Project Configuration
The new release branch needs to be configured within AzureCI to make Azure 
aware of the new release branch. This matter can only be handled by Ververica 
employees, since they own the AzureCI setup.
 

h3. Expectations (Minor Version only if not stated otherwise)
 * Release branch has been created and pushed
 * Changes on the new release branch are picked up by [Azure 
CI|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary]
 * {{master}} branch has the version information updated to the new version 
(check pom.xml files and the 
[apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
 enum)
 * New version is added to the 
[apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
 enum.
 * Make sure [flink-docker|https://github.com/apache/flink-docker/] has 
{{dev-x.y}} branch and docker e2e tests run against this branch in the 
corresponding Apache Flink release branch (see 
[apache/flink:flink-end-to-end-tests/test-scripts/common_docker.sh:51|https://github.com/apache/flink/blob/master/flink-end-to-end-tests/test-scripts/common_docker.sh#L51])
 * 
[apache-flink:docs/config.toml|https://github.com/apache/flink/blob/release-1.17/docs/config.toml]
 has been updated appropriately in the new Apache Flink release branch.
 * The {{flink.version}} property (see 
[apache/flink-benchmarks:pom.xml|https://github.com/apache/flink-benchmark

[jira] [Assigned] (FLINK-34275) Prepare Flink 1.19 Release

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee reassigned FLINK-34275:
---

Assignee: lincoln lee  (was: Leonard Xu)

> Prepare Flink 1.19 Release
> --
>
> Key: FLINK-34275
> URL: https://issues.apache.org/jira/browse/FLINK-34275
> Project: Flink
>  Issue Type: New Feature
>  Components: Release System
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: lincoln lee
>Priority: Major
> Fix For: 1.19.0
>
>
> This umbrella issue is meant as a test balloon for moving the [release 
> documentation|https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release]
>  into Jira.
> h3. Prerequisites
> h4. Environment Variables
> Commands in the subtasks might expect some of the following environment 
> variables to be set according to the version that is about to be released:
> {code:bash}
> RELEASE_VERSION="1.5.0"
> SHORT_RELEASE_VERSION="1.5"
> CURRENT_SNAPSHOT_VERSION="$SHORT_RELEASE_VERSION-SNAPSHOT"
> NEXT_SNAPSHOT_VERSION="1.6-SNAPSHOT"
> SHORT_NEXT_SNAPSHOT_VERSION="1.6"
> {code}
> h4. Build Tools
> All of the following steps require Maven 3.2.5 and Java 8. Modify your 
> PATH environment variable accordingly if needed.
> h4. Flink Source
>  * Create a new directory for this release and clone the Flink repository 
> from GitHub to ensure you have a clean workspace (this step is optional).
>  * Run {{mvn -Prelease clean install}} to ensure that the build processes 
> that are specific to that profile are in good shape (this step is optional).
> The rest of these instructions assumes that commands are run in the root (or 
> {{./tools}} directory) of a repository on the branch of the release version 
> with the above environment variables set.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34275) Prepare Flink 1.19 Release

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34275:

Affects Version/s: 1.19.0
   (was: 1.17.0)

> Prepare Flink 1.19 Release
> --
>
> Key: FLINK-34275
> URL: https://issues.apache.org/jira/browse/FLINK-34275
> Project: Flink
>  Issue Type: New Feature
>  Components: Release System
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Leonard Xu
>Priority: Major
> Fix For: 1.17.0
>
>
> This umbrella issue is meant as a test balloon for moving the [release 
> documentation|https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release]
>  into Jira.
> h3. Prerequisites
> h4. Environment Variables
> Commands in the subtasks might expect some of the following environment 
> variables to be set according to the version that is about to be released:
> {code:bash}
> RELEASE_VERSION="1.5.0"
> SHORT_RELEASE_VERSION="1.5"
> CURRENT_SNAPSHOT_VERSION="$SHORT_RELEASE_VERSION-SNAPSHOT"
> NEXT_SNAPSHOT_VERSION="1.6-SNAPSHOT"
> SHORT_NEXT_SNAPSHOT_VERSION="1.6"
> {code}
> h4. Build Tools
> All of the following steps require Maven 3.2.5 and Java 8. Modify your 
> PATH environment variable accordingly if needed.
> h4. Flink Source
>  * Create a new directory for this release and clone the Flink repository 
> from GitHub to ensure you have a clean workspace (this step is optional).
>  * Run {{mvn -Prelease clean install}} to ensure that the build processes 
> that are specific to that profile are in good shape (this step is optional).
> The rest of these instructions assumes that commands are run in the root (or 
> {{./tools}} directory) of a repository on the branch of the release version 
> with the above environment variables set.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34275) Prepare Flink 1.19 Release

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34275:

Fix Version/s: 1.19.0
   (was: 1.17.0)

> Prepare Flink 1.19 Release
> --
>
> Key: FLINK-34275
> URL: https://issues.apache.org/jira/browse/FLINK-34275
> Project: Flink
>  Issue Type: New Feature
>  Components: Release System
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Leonard Xu
>Priority: Major
> Fix For: 1.19.0
>
>
> This umbrella issue is meant as a test balloon for moving the [release 
> documentation|https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release]
>  into Jira.
> h3. Prerequisites
> h4. Environment Variables
> Commands in the subtasks might expect some of the following environment 
> variables to be set according to the version that is about to be released:
> {code:bash}
> RELEASE_VERSION="1.5.0"
> SHORT_RELEASE_VERSION="1.5"
> CURRENT_SNAPSHOT_VERSION="$SHORT_RELEASE_VERSION-SNAPSHOT"
> NEXT_SNAPSHOT_VERSION="1.6-SNAPSHOT"
> SHORT_NEXT_SNAPSHOT_VERSION="1.6"
> {code}
> h4. Build Tools
> All of the following steps require Maven 3.2.5 and Java 8. Modify your 
> PATH environment variable accordingly if needed.
> h4. Flink Source
>  * Create a new directory for this release and clone the Flink repository 
> from GitHub to ensure you have a clean workspace (this step is optional).
>  * Run {{mvn -Prelease clean install}} to ensure that the build processes 
> that are specific to that profile are in good shape (this step is optional).
> The rest of these instructions assumes that commands are run in the root (or 
> {{./tools}} directory) of a repository on the branch of the release version 
> with the above environment variables set.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34281) CLONE - Select executing Release Manager

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34281:
---

 Summary: CLONE - Select executing Release Manager
 Key: FLINK-34281
 URL: https://issues.apache.org/jira/browse/FLINK-34281
 Project: Flink
  Issue Type: Sub-task
  Components: Release System
Affects Versions: 1.17.0
Reporter: lincoln lee
Assignee: Qingsheng Ren
 Fix For: 1.17.0


h4. GPG Key

You need to have a GPG key to sign the release artifacts. Please be aware of 
the ASF-wide [release signing 
guidelines|https://www.apache.org/dev/release-signing.html]. If you don’t have 
a GPG key associated with your Apache account, please create one according to 
the guidelines.

Determine your Apache GPG Key and Key ID, as follows:
{code:java}
$ gpg --list-keys
{code}
This will list your GPG keys. One of these should reflect your Apache account, 
for example:
{code:java}
--
pub   2048R/845E6689 2016-02-23
uid  Nomen Nescio 
sub   2048R/BA4D50BE 2016-02-23
{code}
In the example above, the key ID is the 8-digit hex string in the {{pub}} line: 
{{{}845E6689{}}}.
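GnuPG can also print the long form of the key IDs, which is less prone to 
collisions (an aside, not part of the original checklist):
{code:bash}
$ gpg --list-keys --keyid-format=long
{code}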

Now, add your Apache GPG key to Flink’s {{KEYS}} file in the [Apache Flink 
release KEYS file|https://dist.apache.org/repos/dist/release/flink/KEYS] 
repository at [dist.apache.org|http://dist.apache.org/]. Follow the 
instructions listed at the top of these files. (Note: Only PMC members have 
write access to the release repository. If you end up getting 403 errors ask on 
the mailing list for assistance.)

Configure {{git}} to use this key when signing code by giving it your key ID, 
as follows:
{code:java}
$ git config --global user.signingkey 845E6689
{code}
You may drop the {{--global}} option if you’d prefer to use this key for the 
current repository only.
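To double-check that the key is picked up (a minimal verification sketch):
{code:bash}
$ git config --global --get user.signingkey
845E6689
{code}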

You may wish to start {{gpg-agent}} to unlock your GPG key only once using your 
passphrase. Otherwise, you may need to enter this passphrase hundreds of times. 
The setup for {{gpg-agent}} varies based on operating system, but may be 
something like this:
{code:bash}
$ eval $(gpg-agent --daemon --no-grab --write-env-file $HOME/.gpg-agent-info)
$ export GPG_TTY=$(tty)
$ export GPG_AGENT_INFO
{code}
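(Note: on GnuPG 2.1 and newer the agent is started automatically and the 
{{--write-env-file}} option no longer exists, so this step may be unnecessary 
on modern systems.)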
h4. Access to Apache Nexus repository

Configure access to the [Apache Nexus 
repository|https://repository.apache.org/], which enables final deployment of 
releases to the Maven Central Repository.
 # You log in with your Apache account.
 # Confirm you have appropriate access by finding {{org.apache.flink}} under 
{{{}Staging Profiles{}}}.
 # Navigate to your {{Profile}} (top right drop-down menu of the page).
 # Choose {{User Token}} from the dropdown, then click {{{}Access User 
Token{}}}. Copy a snippet of the Maven XML configuration block.
 # Insert this snippet twice into your global Maven {{settings.xml}} file, 
typically {{{}${HOME}/.m2/settings.xml{}}}. The end result should look like 
this, where {{TOKEN_NAME}} and {{TOKEN_PASSWORD}} are your secret tokens:
{code:xml}
<settings>
   <servers>
     <server>
       <id>apache.releases.https</id>
       <username>TOKEN_NAME</username>
       <password>TOKEN_PASSWORD</password>
     </server>
     <server>
       <id>apache.snapshots.https</id>
       <username>TOKEN_NAME</username>
       <password>TOKEN_PASSWORD</password>
     </server>
   </servers>
</settings>
{code}
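To confirm that Maven picks the tokens up, you can render the effective 
settings; {{help:effective-settings}} is a standard Maven goal and masks the 
passwords in its output:
{code:bash}
$ mvn help:effective-settings | grep -A 2 'apache.releases'
{code}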

h4. Website development setup

Get ready for updating the Flink website by following the [website development 
instructions|https://flink.apache.org/contributing/improve-website.html].
h4. GNU Tar Setup for Mac (Skip this step if you are not using a Mac)

The default tar application on Mac does not support GNU archive format and 
defaults to Pax. This bloats the archive with unnecessary metadata that can 
result in additional files when decompressing (see [1.15.2-RC2 vote 
thread|https://lists.apache.org/thread/mzbgsb7y9vdp9bs00gsgscsjv2ygy58q]). 
Install gnu-tar and create a symbolic link so it is used in preference to the 
default tar program.
{code:bash}
$ brew install gnu-tar
$ ln -s /usr/local/bin/gtar /usr/local/bin/tar
$ which tar
{code}
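To confirm the symlink took effect, the reported tar should now be GNU tar 
(the version number below is illustrative):
{code:bash}
$ tar --version | head -n1
tar (GNU tar) 1.35
{code}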
 

h3. Expectations
 * Release Manager’s GPG key is published to 
[dist.apache.org|http://dist.apache.org/]
 * Release Manager’s GPG key is configured in git configuration
 * Release Manager's GPG key is configured as the default gpg key.
 * Release Manager has {{org.apache.flink}} listed under Staging Profiles in 
Nexus
 * Release Manager’s Nexus User Token is configured in settings.xml



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-34271][table-planner] fix the potential failure test about GroupAggregateRestoreTest#AGG_WITH_STATE_TTL_HINT [flink]

2024-01-30 Thread via GitHub


xuyangzhong opened a new pull request, #24226:
URL: https://github.com/apache/flink/pull/24226

   ## What is the purpose of the change
   
   This PR tries to fix the potentially failing test 
GroupAggregateRestoreTest#AGG_WITH_STATE_TTL_HINT. Due to state TTL, the state 
in the metadata savepoint may expire. Therefore, we should not rely on the 
state data in the savepoint when testing.
   
   ## Brief change log
   
 - *Remove the dependency on the state data in the savepoint when testing*
   
   ## Verifying this change
   
   The modified test can cover this pr.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? 
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-34276) CLONE - Create a new version in JIRA

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee reassigned FLINK-34276:
---

Assignee: (was: Martijn Visser)

> CLONE - Create a new version in JIRA
> 
>
> Key: FLINK-34276
> URL: https://issues.apache.org/jira/browse/FLINK-34276
> Project: Flink
>  Issue Type: Sub-task
>Reporter: lincoln lee
>Priority: Major
>
> When contributors resolve an issue in JIRA, they are tagging it with a 
> release that will contain their changes. With the release currently underway, 
> new issues should be resolved against a subsequent future release. Therefore, 
> you should create a release item for this subsequent release, as follows:
>  # In JIRA, navigate to the [Flink > Administration > 
> Versions|https://issues.apache.org/jira/plugins/servlet/project-config/FLINK/versions].
>  # Add a new release: choose the next minor version number compared to the 
> one currently underway, select today’s date as the Start Date, and choose Add.
> (Note: Only PMC members have access to the project administration. If you do 
> not have access, ask on the mailing list for assistance.)
>  
> 
> h3. Expectations
>  * The new version should be listed in the dropdown menu of {{fixVersion}} or 
> {{affectedVersion}} under "unreleased versions" when creating a new Jira 
> issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34271) Fix the potential failure test about GroupAggregateRestoreTest#AGG_WITH_STATE_TTL_HINT

2024-01-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34271:
---
Labels: pull-request-available  (was: )

> Fix the potential failure test about 
> GroupAggregateRestoreTest#AGG_WITH_STATE_TTL_HINT
> --
>
> Key: FLINK-34271
> URL: https://issues.apache.org/jira/browse/FLINK-34271
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
>
> The underlying reason is that a previous PR introduced a test that uses a 
> state TTL hint in the SQL, as follows: 
> {code:java}
> .runSql(
> "INSERT INTO sink_t SELECT /*+ STATE_TTL('source_t' = '4d') */"
> + "b, "
> + "COUNT(*) AS cnt, "
> + "AVG(a) FILTER (WHERE a > 1) AS avg_a, "
> + "MIN(c) AS min_c "
> + "FROM source_t GROUP BY b"){code}
> When the savepoint metadata was generated for the first time, the metadata 
> recorded the time when a certain key was accessed. If the test is rerun after 
> the TTL has expired, the state of this key in the metadata will be cleared, 
> resulting in an incorrect test outcome.
> To rectify this issue, I think the current tests in RestoreTestBase could be 
> modified to regenerate new savepoint metadata as needed every time. 
> However, this seems to deviate from the original design purpose of 
> RestoreTestBase.
> For my test, I will work around this by removing the data 
> "consumedBeforeRestore", as I am only interested in testing the generation of 
> an expected JSON plan.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34277) Triage release-blocking issues in JIRA

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee reassigned FLINK-34277:
---

Assignee: (was: Qingsheng Ren)

> Triage release-blocking issues in JIRA
> --
>
> Key: FLINK-34277
> URL: https://issues.apache.org/jira/browse/FLINK-34277
> Project: Flink
>  Issue Type: Sub-task
>Reporter: lincoln lee
>Priority: Major
>
> There could be outstanding release-blocking issues, which should be triaged 
> before proceeding to build a release candidate. We track them by assigning a 
> specific Fix Version field even before the issue is resolved.
> The list of release-blocking issues is available at the version status page. 
> Triage each unresolved issue with one of the following resolutions:
>  * If the issue has been resolved and JIRA was not updated, resolve it 
> accordingly.
>  * If the issue has not been resolved and it is acceptable to defer this 
> until the next release, update the Fix Version field to the new version you 
> just created. Please consider discussing this with stakeholders and the dev@ 
> mailing list, as appropriate.
>  ** When using the "Bulk Change" functionality of Jira:
>  *** First, add the newly created version to Fix Version for all unresolved 
> tickets that have the old version among their Fix Versions.
>  *** Afterwards, remove the old version from the Fix Version field.
>  * If the issue has not been resolved and it is not acceptable to release 
> until it is fixed, the release cannot proceed. Instead, work with the Flink 
> community to resolve the issue.
>  
> 
> h3. Expectations
>  * There are no release blocking JIRA issues



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34276) Create a new version in JIRA

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34276:

Summary: Create a new version in JIRA  (was: CLONE - Create a new version 
in JIRA)

> Create a new version in JIRA
> 
>
> Key: FLINK-34276
> URL: https://issues.apache.org/jira/browse/FLINK-34276
> Project: Flink
>  Issue Type: Sub-task
>Reporter: lincoln lee
>Priority: Major
>
> When contributors resolve an issue in JIRA, they are tagging it with a 
> release that will contain their changes. With the release currently underway, 
> new issues should be resolved against a subsequent future release. Therefore, 
> you should create a release item for this subsequent release, as follows:
>  # In JIRA, navigate to the [Flink > Administration > 
> Versions|https://issues.apache.org/jira/plugins/servlet/project-config/FLINK/versions].
>  # Add a new release: choose the next minor version number compared to the 
> one currently underway, select today’s date as the Start Date, and choose Add.
> (Note: Only PMC members have access to the project administration. If you do 
> not have access, ask on the mailing list for assistance.)
>  
> 
> h3. Expectations
>  * The new version should be listed in the dropdown menu of {{fixVersion}} or 
> {{affectedVersion}} under "unreleased versions" when creating a new Jira 
> issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34278) Review and update documentation

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee reassigned FLINK-34278:
---

Assignee: (was: Qingsheng Ren)

> Review and update documentation
> ---
>
> Key: FLINK-34278
> URL: https://issues.apache.org/jira/browse/FLINK-34278
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> There are a few pages in the documentation that need to be reviewed and 
> updated for each release.
>  * Ensure that there exists a release notes page for each non-bugfix release 
> (e.g., 1.5.0) in {{{}./docs/release-notes/{}}}, that it is up-to-date, and 
> linked from the start page of the documentation.
>  * Upgrading Applications and Flink Versions: 
> [https://ci.apache.org/projects/flink/flink-docs-master/ops/upgrading.html]
>  * ...
>  
> 
> h3. Expectations
>  * Update upgrade compatibility table 
> ([apache-flink:./docs/content/docs/ops/upgrading.md|https://github.com/apache/flink/blob/master/docs/content/docs/ops/upgrading.md#compatibility-table]
>  and 
> [apache-flink:./docs/content.zh/docs/ops/upgrading.md|https://github.com/apache/flink/blob/master/docs/content.zh/docs/ops/upgrading.md#compatibility-table]).
>  * Update [Release Overview in 
> Confluence|https://cwiki.apache.org/confluence/display/FLINK/Release+Management+and+Feature+Plan]
>  * (minor only) The documentation for the new major release is visible under 
> [https://nightlies.apache.org/flink/flink-docs-release-$SHORT_RELEASE_VERSION]
>  (after at least one [doc 
> build|https://github.com/apache/flink/actions/workflows/docs.yml] succeeded).
>  * (minor only) The documentation for the new major release does not contain 
> "-SNAPSHOT" in its version title, and all links refer to the corresponding 
> version docs instead of {{{}master{}}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34280) Review Release Notes in JIRA

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee reassigned FLINK-34280:
---

Assignee: (was: Qingsheng Ren)

> Review Release Notes in JIRA
> 
>
> Key: FLINK-34280
> URL: https://issues.apache.org/jira/browse/FLINK-34280
> Project: Flink
>  Issue Type: Sub-task
>Reporter: lincoln lee
>Priority: Major
>
> JIRA automatically generates Release Notes based on the {{Fix Version}} field 
> applied to issues. Release Notes are intended for Flink users (not Flink 
> committers/contributors). You should ensure that Release Notes are 
> informative and useful.
> Open the release notes from the version status page by choosing the release 
> underway and clicking Release Notes.
> You should verify that the issues listed automatically by JIRA are 
> appropriate to appear in the Release Notes. Specifically, issues should:
>  * Be appropriately classified as {{{}Bug{}}}, {{{}New Feature{}}}, 
> {{{}Improvement{}}}, etc.
>  * Represent noteworthy user-facing changes, such as new functionality, 
> backward-incompatible API changes, or performance improvements.
>  * Have occurred since the previous release; an issue that was introduced and 
> fixed between releases should not appear in the Release Notes.
>  * Have an issue title that makes sense when read on its own.
> Adjust any of the above properties to improve the clarity and presentation of 
> the Release Notes.
> Ensure that the JIRA release notes are also included in the release notes of 
> the documentation (see section "Review and update documentation").
> h4. Content of Release Notes field from JIRA tickets 
> To get the list of "release notes" fields from JIRA, you can run the following 
> script using the JIRA REST API (note that {{maxResults}} limits the number of 
> entries):
> {code:bash}
> curl -s 
> https://issues.apache.org/jira//rest/api/2/search?maxResults=200&jql=project%20%3D%20FLINK%20AND%20%22Release%20Note%22%20is%20not%20EMPTY%20and%20fixVersion%20%3D%20${RELEASE_VERSION}
>  | jq '.issues[]|.key,.fields.summary,.fields.customfield_12310192' | paste - 
> - -
> {code}
> {{jq}} is present in most Linux distributions and on macOS can be installed 
> via brew.
>  
> 
> h3. Expectations
>  * Release Notes in JIRA have been audited and adjusted



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34280) Review Release Notes in JIRA

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34280:

Summary: Review Release Notes in JIRA  (was: CLONE - Review Release Notes 
in JIRA)

> Review Release Notes in JIRA
> 
>
> Key: FLINK-34280
> URL: https://issues.apache.org/jira/browse/FLINK-34280
> Project: Flink
>  Issue Type: Sub-task
>Reporter: lincoln lee
>Assignee: Qingsheng Ren
>Priority: Major
>
> JIRA automatically generates Release Notes based on the {{Fix Version}} field 
> applied to issues. Release Notes are intended for Flink users (not Flink 
> committers/contributors). You should ensure that Release Notes are 
> informative and useful.
> Open the release notes from the version status page by choosing the release 
> underway and clicking Release Notes.
> You should verify that the issues listed automatically by JIRA are 
> appropriate to appear in the Release Notes. Specifically, issues should:
>  * Be appropriately classified as {{{}Bug{}}}, {{{}New Feature{}}}, 
> {{{}Improvement{}}}, etc.
>  * Represent noteworthy user-facing changes, such as new functionality, 
> backward-incompatible API changes, or performance improvements.
>  * Have occurred since the previous release; an issue that was introduced and 
> fixed between releases should not appear in the Release Notes.
>  * Have an issue title that makes sense when read on its own.
> Adjust any of the above properties to improve the clarity and presentation of 
> the Release Notes.
> Ensure that the JIRA release notes are also included in the release notes of 
> the documentation (see section "Review and update documentation").
> h4. Content of Release Notes field from JIRA tickets 
> To get the list of "release notes" fields from JIRA, you can run the following 
> script using the JIRA REST API (note that {{maxResults}} limits the number of 
> entries):
> {code:bash}
> curl -s 
> https://issues.apache.org/jira//rest/api/2/search?maxResults=200&jql=project%20%3D%20FLINK%20AND%20%22Release%20Note%22%20is%20not%20EMPTY%20and%20fixVersion%20%3D%20${RELEASE_VERSION}
>  | jq '.issues[]|.key,.fields.summary,.fields.customfield_12310192' | paste - 
> - -
> {code}
> {{jq}} is present in most Linux distributions and on macOS can be installed 
> via brew.
>  
> 
> h3. Expectations
>  * Release Notes in JIRA have been audited and adjusted



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34276) CLONE - Create a new version in JIRA

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34276:
---

 Summary: CLONE - Create a new version in JIRA
 Key: FLINK-34276
 URL: https://issues.apache.org/jira/browse/FLINK-34276
 Project: Flink
  Issue Type: Sub-task
Reporter: lincoln lee
Assignee: Martijn Visser


When contributors resolve an issue in JIRA, they are tagging it with a release 
that will contain their changes. With the release currently underway, new 
issues should be resolved against a subsequent future release. Therefore, you 
should create a release item for this subsequent release, as follows:
 # In JIRA, navigate to the [Flink > Administration > 
Versions|https://issues.apache.org/jira/plugins/servlet/project-config/FLINK/versions].
 # Add a new release: choose the next minor version number compared to the one 
currently underway, select today’s date as the Start Date, and choose Add.
(Note: Only PMC members have access to the project administration. If you do 
not have access, ask on the mailing list for assistance.)

 

h3. Expectations
 * The new version should be listed in the dropdown menu of {{fixVersion}} or 
{{affectedVersion}} under "unreleased versions" when creating a new Jira issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34281) Select executing Release Manager

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34281:

Summary: Select executing Release Manager  (was: CLONE - Select executing 
Release Manager)

> Select executing Release Manager
> 
>
> Key: FLINK-34281
> URL: https://issues.apache.org/jira/browse/FLINK-34281
> Project: Flink
>  Issue Type: Sub-task
>  Components: Release System
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Qingsheng Ren
>Priority: Major
> Fix For: 1.19.0
>
>
> h4. GPG Key
> You need to have a GPG key to sign the release artifacts. Please be aware of 
> the ASF-wide [release signing 
> guidelines|https://www.apache.org/dev/release-signing.html]. If you don’t 
> have a GPG key associated with your Apache account, please create one 
> according to the guidelines.
> Determine your Apache GPG Key and Key ID, as follows:
> {code:java}
> $ gpg --list-keys
> {code}
> This will list your GPG keys. One of these should reflect your Apache 
> account, for example:
> {code:java}
> --
> pub   2048R/845E6689 2016-02-23
> uid  Nomen Nescio 
> sub   2048R/BA4D50BE 2016-02-23
> {code}
> In the example above, the key ID is the 8-digit hex string in the {{pub}} 
> line: {{{}845E6689{}}}.
> Now, add your Apache GPG key to Flink’s {{KEYS}} file in the [Apache 
> Flink release KEYS 
> file|https://dist.apache.org/repos/dist/release/flink/KEYS] repository at 
> [dist.apache.org|http://dist.apache.org/]. Follow the instructions listed at 
> the top of these files. (Note: Only PMC members have write access to the 
> release repository. If you end up getting 403 errors ask on the mailing list 
> for assistance.)
> Configure {{git}} to use this key when signing code by giving it your key ID, 
> as follows:
> {code:java}
> $ git config --global user.signingkey 845E6689
> {code}
> You may drop the {{--global}} option if you’d prefer to use this key for the 
> current repository only.
> You may wish to start {{gpg-agent}} to unlock your GPG key only once using 
> your passphrase. Otherwise, you may need to enter this passphrase hundreds of 
> times. The setup for {{gpg-agent}} varies based on operating system, but may 
> be something like this:
> {code:bash}
> $ eval $(gpg-agent --daemon --no-grab --write-env-file $HOME/.gpg-agent-info)
> $ export GPG_TTY=$(tty)
> $ export GPG_AGENT_INFO
> {code}
> h4. Access to Apache Nexus repository
> Configure access to the [Apache Nexus 
> repository|https://repository.apache.org/], which enables final deployment of 
> releases to the Maven Central Repository.
>  # You log in with your Apache account.
>  # Confirm you have appropriate access by finding {{org.apache.flink}} under 
> {{{}Staging Profiles{}}}.
>  # Navigate to your {{Profile}} (top right drop-down menu of the page).
>  # Choose {{User Token}} from the dropdown, then click {{{}Access User 
> Token{}}}. Copy a snippet of the Maven XML configuration block.
>  # Insert this snippet twice into your global Maven {{settings.xml}} file, 
> typically {{{}${HOME}/.m2/settings.xml{}}}. The end result should look like 
> this, where {{TOKEN_NAME}} and {{TOKEN_PASSWORD}} are your secret tokens:
> {code:xml}
> <settings>
>    <servers>
>      <server>
>        <id>apache.releases.https</id>
>        <username>TOKEN_NAME</username>
>        <password>TOKEN_PASSWORD</password>
>      </server>
>      <server>
>        <id>apache.snapshots.https</id>
>        <username>TOKEN_NAME</username>
>        <password>TOKEN_PASSWORD</password>
>      </server>
>    </servers>
> </settings>
> {code}
> h4. Website development setup
> Get ready for updating the Flink website by following the [website 
> development 
> instructions|https://flink.apache.org/contributing/improve-website.html].
> h4. GNU Tar Setup for Mac (Skip this step if you are not using a Mac)
> The default tar application on Mac does not support GNU archive format and 
> defaults to Pax. This bloats the archive with unnecessary metadata that can 
> result in additional files when decompressing (see [1.15.2-RC2 vote 
> thread|https://lists.apache.org/thread/mzbgsb7y9vdp9bs00gsgscsjv2ygy58q]). 
> Install gnu-tar and create a symbolic link so it is used in preference to 
> the default tar program.
> {code:bash}
> $ brew install gnu-tar
> $ ln -s /usr/local/bin/gtar /usr/local/bin/tar
> $ which tar
> {code}
>  
> 
> h3. Expectations
>  * Release Manager’s GPG key is published to 
> [dist.apache.org|http://dist.apache.org/]
>  * Release Manager’s GPG key is configured in git configuration
>  * Release Manager's GPG key is configured as the default gpg key.
>  * Release Manager has {{org.apache.flink}} listed under Staging Profiles in 
> Nexus
>  * Release Manager’s Nexus User Token is configured in settings.xml



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34281) CLONE - Select executing Release Manager

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34281:

Fix Version/s: 1.19.0
   (was: 1.17.0)

> CLONE - Select executing Release Manager
> 
>
> Key: FLINK-34281
> URL: https://issues.apache.org/jira/browse/FLINK-34281
> Project: Flink
>  Issue Type: Sub-task
>  Components: Release System
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Qingsheng Ren
>Priority: Major
> Fix For: 1.19.0
>
>
> h4. GPG Key
> You need to have a GPG key to sign the release artifacts. Please be aware of 
> the ASF-wide [release signing 
> guidelines|https://www.apache.org/dev/release-signing.html]. If you don’t 
> have a GPG key associated with your Apache account, please create one 
> according to the guidelines.
> Determine your Apache GPG Key and Key ID, as follows:
> {code:java}
> $ gpg --list-keys
> {code}
> This will list your GPG keys. One of these should reflect your Apache 
> account, for example:
> {code:java}
> --
> pub   2048R/845E6689 2016-02-23
> uid  Nomen Nescio 
> sub   2048R/BA4D50BE 2016-02-23
> {code}
> In the example above, the key ID is the 8-digit hex string in the {{pub}} 
> line: {{{}845E6689{}}}.
> Now, add your Apache GPG key to Flink’s {{KEYS}} file in the [Apache 
> Flink release KEYS 
> file|https://dist.apache.org/repos/dist/release/flink/KEYS] repository at 
> [dist.apache.org|http://dist.apache.org/]. Follow the instructions listed at 
> the top of these files. (Note: Only PMC members have write access to the 
> release repository. If you end up getting 403 errors ask on the mailing list 
> for assistance.)
> Configure {{git}} to use this key when signing code by giving it your key ID, 
> as follows:
> {code:java}
> $ git config --global user.signingkey 845E6689
> {code}
> You may drop the {{--global}} option if you’d prefer to use this key for the 
> current repository only.
> You may wish to start {{gpg-agent}} to unlock your GPG key only once using 
> your passphrase. Otherwise, you may need to enter this passphrase hundreds of 
> times. The setup for {{gpg-agent}} varies based on operating system, but may 
> be something like this:
> {code:bash}
> $ eval $(gpg-agent --daemon --no-grab --write-env-file $HOME/.gpg-agent-info)
> $ export GPG_TTY=$(tty)
> $ export GPG_AGENT_INFO
> {code}
> h4. Access to Apache Nexus repository
> Configure access to the [Apache Nexus 
> repository|https://repository.apache.org/], which enables final deployment of 
> releases to the Maven Central Repository.
>  # You log in with your Apache account.
>  # Confirm you have appropriate access by finding {{org.apache.flink}} under 
> {{{}Staging Profiles{}}}.
>  # Navigate to your {{Profile}} (top right drop-down menu of the page).
>  # Choose {{User Token}} from the dropdown, then click {{{}Access User 
> Token{}}}. Copy a snippet of the Maven XML configuration block.
>  # Insert this snippet twice into your global Maven {{settings.xml}} file, 
> typically {{{}${HOME}/.m2/settings.xml{}}}. The end result should look like 
> this, where {{TOKEN_NAME}} and {{TOKEN_PASSWORD}} are your secret tokens:
> {code:xml}
> <settings>
>    <servers>
>      <server>
>        <id>apache.releases.https</id>
>        <username>TOKEN_NAME</username>
>        <password>TOKEN_PASSWORD</password>
>      </server>
>      <server>
>        <id>apache.snapshots.https</id>
>        <username>TOKEN_NAME</username>
>        <password>TOKEN_PASSWORD</password>
>      </server>
>    </servers>
> </settings>
> {code}
> h4. Website development setup
> Get ready for updating the Flink website by following the [website 
> development 
> instructions|https://flink.apache.org/contributing/improve-website.html].
> h4. GNU Tar Setup for Mac (Skip this step if you are not using a Mac)
> The default tar application on Mac does not support GNU archive format and 
> defaults to Pax. This bloats the archive with unnecessary metadata that can 
> result in additional files when decompressing (see [1.15.2-RC2 vote 
> thread|https://lists.apache.org/thread/mzbgsb7y9vdp9bs00gsgscsjv2ygy58q]). 
> Install gnu-tar and create a symbolic link so it is used in preference to 
> the default tar program.
> {code:bash}
> $ brew install gnu-tar
> $ ln -s /usr/local/bin/gtar /usr/local/bin/tar
> $ which tar
> {code}
>  
> 
> h3. Expectations
>  * Release Manager’s GPG key is published to 
> [dist.apache.org|http://dist.apache.org/]
>  * Release Manager’s GPG key is configured in git configuration
>  * Release Manager's GPG key is configured as the default gpg key.
>  * Release Manager has {{org.apache.flink}} listed under Staging Profiles in 
> Nexus
>  * Release Manager’s Nexus User Token is configured in settings.xml



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34282) CLONE - Create a release branch

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34282:

Fix Version/s: 1.19.0
   (was: 1.17.0)

> CLONE - Create a release branch
> ---
>
> Key: FLINK-34282
> URL: https://issues.apache.org/jira/browse/FLINK-34282
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Leonard Xu
>Priority: Major
> Fix For: 1.19.0
>
>
> If you are doing a new minor release, you need to update the Flink version in 
> following repositories and the [AzureCI project 
> configuration|https://dev.azure.com/apache-flink/apache-flink/]:
>  * [apache/flink|https://github.com/apache/flink]
>  * [apache/flink-docker|https://github.com/apache/flink-docker]
>  * [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks]
> Patch releases don't require these repositories to be touched. Simply check 
> out the already existing branch for that version:
> {code:java}
> $ git checkout release-$SHORT_RELEASE_VERSION
> {code}
> h4. Flink repository
> Create a branch for the new version that we want to release before updating 
> the master branch to the next development version:
> {code:bash}
> $ cd ./tools
> tools $ releasing/create_snapshot_branch.sh
> tools $ git checkout master
> tools $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION 
> NEW_VERSION=$NEXT_SNAPSHOT_VERSION releasing/update_branch_version.sh
> {code}
> In the {{master}} branch, add a new value (e.g. {{v1_16("1.16")}}) to 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  as the last entry:
> {code:java}
> // ...
> v1_12("1.12"),
> v1_13("1.13"),
> v1_14("1.14"),
> v1_15("1.15"),
> v1_16("1.16");
> {code}
> The newly created branch and updated {{master}} branch need to be pushed to 
> the official repository.
> h4. Flink Docker Repository
> Afterwards, fork off a {{dev-x.y}} branch from {{dev-master}} in the 
> [apache/flink-docker|https://github.com/apache/flink-docker] repository. Make 
> sure that 
> [apache/flink-docker:.github/workflows/ci.yml|https://github.com/apache/flink-docker/blob/dev-master/.github/workflows/ci.yml]
>  points to the correct snapshot version; for {{dev-x.y}} it should point to 
> {{{}x.y-SNAPSHOT{}}}, while for {{dev-master}} it should point to the most 
> recent snapshot version ({{$NEXT_SNAPSHOT_VERSION}}).
> After pushing the new minor release branch, as the last step you should also 
> update the documentation workflow so that it also builds the documentation 
> release branch. Check [Managing 
> Documentation|https://cwiki.apache.org/confluence/display/FLINK/Managing+Documentation]
>  for details on how to do that. You may also want to manually trigger a build 
> to make the changes visible as soon as possible.
> h4. Flink Benchmark Repository
> First of all, fork off the {{master}} branch into a {{dev-x.y}} branch in 
> [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks], so that 
> we have a branch named {{dev-x.y}} which builds on top of 
> ${{CURRENT_SNAPSHOT_VERSION}}.
> Then, inside the repository you need to manually update the {{flink.version}} 
> property inside the parent *pom.xml* file. It should point to the most 
> recent snapshot version ($NEXT_SNAPSHOT_VERSION). For example:
> {code:xml}
> <flink.version>1.18-SNAPSHOT</flink.version>
> {code}
> h4. AzureCI Project Configuration
> The new release branch needs to be configured within AzureCI to make Azure 
> aware of the new release branch. This matter can only be handled by Ververica 
> employees, since they own the AzureCI setup.
>  
> 
> h3. Expectations (Minor Version only if not stated otherwise)
>  * Release branch has been created and pushed
>  * Changes on the new release branch are picked up by [Azure 
> CI|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary]
>  * {{master}} branch has the version information updated to the new version 
> (check pom.xml files and the 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  enum)
>  * New version is added to the 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  enum.
>  * Make sure [flink-docker|https://github.com/apache/flink-docker/] has 
> {{dev-x.y}} branch and docker e2e tests run against this branch in the 
> corresponding Apache Flink release branch (see 
> [apache/flink:flink-end-to-end-tests/test-scripts/common_docke

[jira] [Assigned] (FLINK-34282) Create a release branch

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee reassigned FLINK-34282:
---

Assignee: (was: Leonard Xu)

> Create a release branch
> ---
>
> Key: FLINK-34282
> URL: https://issues.apache.org/jira/browse/FLINK-34282
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Priority: Major
> Fix For: 1.19.0
>
>
> If you are doing a new minor release, you need to update the Flink version in 
> following repositories and the [AzureCI project 
> configuration|https://dev.azure.com/apache-flink/apache-flink/]:
>  * [apache/flink|https://github.com/apache/flink]
>  * [apache/flink-docker|https://github.com/apache/flink-docker]
>  * [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks]
> Patch releases don't require these repositories to be touched. Simply check 
> out the already existing branch for that version:
> {code:java}
> $ git checkout release-$SHORT_RELEASE_VERSION
> {code}
> h4. Flink repository
> Create a branch for the new version that we want to release before updating 
> the master branch to the next development version:
> {code:bash}
> $ cd ./tools
> tools $ releasing/create_snapshot_branch.sh
> tools $ git checkout master
> tools $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION 
> NEW_VERSION=$NEXT_SNAPSHOT_VERSION releasing/update_branch_version.sh
> {code}
> In the {{master}} branch, add a new value (e.g. {{v1_16("1.16")}}) to 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  as the last entry:
> {code:java}
> // ...
> v1_12("1.12"),
> v1_13("1.13"),
> v1_14("1.14"),
> v1_15("1.15"),
> v1_16("1.16");
> {code}
> The newly created branch and updated {{master}} branch need to be pushed to 
> the official repository.
> h4. Flink Docker Repository
> Afterwards, fork off a {{dev-x.y}} branch from {{dev-master}} in the 
> [apache/flink-docker|https://github.com/apache/flink-docker] repository. Make 
> sure that 
> [apache/flink-docker:.github/workflows/ci.yml|https://github.com/apache/flink-docker/blob/dev-master/.github/workflows/ci.yml]
>  points to the correct snapshot version; for {{dev-x.y}} it should point to 
> {{{}x.y-SNAPSHOT{}}}, while for {{dev-master}} it should point to the most 
> recent snapshot version ({{$NEXT_SNAPSHOT_VERSION}}).
> After pushing the new minor release branch, as the last step you should also 
> update the documentation workflow so that it also builds the documentation 
> release branch. Check [Managing 
> Documentation|https://cwiki.apache.org/confluence/display/FLINK/Managing+Documentation]
>  for details on how to do that. You may also want to manually trigger a build 
> to make the changes visible as soon as possible.
> h4. Flink Benchmark Repository
> First of all, fork off the {{master}} branch into a {{dev-x.y}} branch in 
> [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks], so that 
> we have a branch named {{dev-x.y}} which builds on top of 
> ${{CURRENT_SNAPSHOT_VERSION}}.
> Then, inside the repository you need to manually update the {{flink.version}} 
> property inside the parent *pom.xml* file. It should point to the most 
> recent snapshot version ($NEXT_SNAPSHOT_VERSION). For example:
> {code:xml}
> <flink.version>1.18-SNAPSHOT</flink.version>
> {code}
> h4. AzureCI Project Configuration
> The new release branch needs to be configured within AzureCI to make Azure 
> aware of the new release branch. This matter can only be handled by Ververica 
> employees, since they own the AzureCI setup.
>  
> 
> h3. Expectations (Minor Version only if not stated otherwise)
>  * Release branch has been created and pushed
>  * Changes on the new release branch are picked up by [Azure 
> CI|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary]
>  * {{master}} branch has the version information updated to the new version 
> (check pom.xml files and the 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  enum)
>  * New version is added to the 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  enum.
>  * Make sure [flink-docker|https://github.com/apache/flink-docker/] has 
> {{dev-x.y}} branch and docker e2e tests run against this branch in the 
> corresponding Apache Flink release branch (see 
> [apache/flink:flink-end-to-end-tests/test-scripts/common_docker.sh:51|https://github.com/apache/flink/blob/master/flink-end-to-end-t

Re: [PR] [FLINK-34271][table-planner] fix the potential failure test about GroupAggregateRestoreTest#AGG_WITH_STATE_TTL_HINT [flink]

2024-01-30 Thread via GitHub


flinkbot commented on PR #24226:
URL: https://github.com/apache/flink/pull/24226#issuecomment-1916472032

   
   ## CI report:
   
   * 36e9b87d95a386244649e4fdc45711803ba12a63 UNKNOWN
   
   
   Bot commands:
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-34279) Cross team testing

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee reassigned FLINK-34279:
---

Assignee: lincoln lee

> Cross team testing
> --
>
> Key: FLINK-34279
> URL: https://issues.apache.org/jira/browse/FLINK-34279
> Project: Flink
>  Issue Type: Sub-task
>Reporter: lincoln lee
>Assignee: lincoln lee
>Priority: Major
>
> For user facing features that go into the release we'd like to ensure they 
> can actually _be used_ by Flink users. To achieve this the release managers 
> ensure that an issue for cross team testing is created in the Apache Flink 
> Jira. This can and should be picked up by other community members to verify 
> the functionality and usability of the feature.
> The issue should contain some entry points which enable other community 
> members to test it. It should not contain documentation on how to use the 
> feature as this should be part of the actual documentation. The cross team 
> tests are performed after the feature freeze. Documentation should be in 
> place before that. Those tests are manual tests, so do not confuse them with 
> automated tests.
> To sum that up:
>  * User facing features should be tested by other contributors
>  * The scope is usability and sanity of the feature
>  * The feature needs to be already documented
>  * The contributor creates an issue containing some pointers on how to get 
> started (e.g. link to the documentation, suggested targets of verification)
>  * Other community members pick those issues up and provide feedback
>  * Cross team testing happens right after the feature freeze
>  
> 
> h3. Expectations
>  * Jira issues for each expected release task according to the release plan 
> are created and labeled as {{{}release-testing{}}}.
>  * All the created release-testing-related Jira issues are resolved and the 
> corresponding blocker issues are fixed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34279) Cross team testing

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee reassigned FLINK-34279:
---

Assignee: (was: Qingsheng Ren)

> Cross team testing
> --
>
> Key: FLINK-34279
> URL: https://issues.apache.org/jira/browse/FLINK-34279
> Project: Flink
>  Issue Type: Sub-task
>Reporter: lincoln lee
>Priority: Major
>
> For user facing features that go into the release we'd like to ensure they 
> can actually _be used_ by Flink users. To achieve this the release managers 
> ensure that an issue for cross team testing is created in the Apache Flink 
> Jira. This can and should be picked up by other community members to verify 
> the functionality and usability of the feature.
> The issue should contain some entry points which enable other community 
> members to test it. It should not contain documentation on how to use the 
> feature as this should be part of the actual documentation. The cross team 
> tests are performed after the feature freeze. Documentation should be in 
> place before that. Those tests are manual tests, so do not confuse them with 
> automated tests.
> To sum that up:
>  * User facing features should be tested by other contributors
>  * The scope is usability and sanity of the feature
>  * The feature needs to be already documented
>  * The contributor creates an issue containing some pointers on how to get 
> started (e.g. link to the documentation, suggested targets of verification)
>  * Other community members pick those issues up and provide feedback
>  * Cross team testing happens right after the feature freeze
>  
> 
> h3. Expectations
>  * Jira issues for each expected release task according to the release plan 
> are created and labeled as {{{}release-testing{}}}.
>  * All the created release-testing-related Jira issues are resolved and the 
> corresponding blocker issues are fixed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34278) Review and update documentation

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34278:

Affects Version/s: 1.19.0
   (was: 1.17.0)

> Review and update documentation
> ---
>
> Key: FLINK-34278
> URL: https://issues.apache.org/jira/browse/FLINK-34278
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> There are a few pages in the documentation that need to be reviewed and 
> updated for each release.
>  * Ensure that there exists a release notes page for each non-bugfix release 
> (e.g., 1.5.0) in {{{}./docs/release-notes/{}}}, that it is up-to-date, and 
> linked from the start page of the documentation.
>  * Upgrading Applications and Flink Versions: 
> [https://ci.apache.org/projects/flink/flink-docs-master/ops/upgrading.html]
>  * ...
>  
> 
> h3. Expectations
>  * Update upgrade compatibility table 
> ([apache-flink:./docs/content/docs/ops/upgrading.md|https://github.com/apache/flink/blob/master/docs/content/docs/ops/upgrading.md#compatibility-table]
>  and 
> [apache-flink:./docs/content.zh/docs/ops/upgrading.md|https://github.com/apache/flink/blob/master/docs/content.zh/docs/ops/upgrading.md#compatibility-table]).
>  * Update [Release Overview in 
> Confluence|https://cwiki.apache.org/confluence/display/FLINK/Release+Management+and+Feature+Plan]
>  * (minor only) The documentation for the new major release is visible under 
> [https://nightlies.apache.org/flink/flink-docs-release-$SHORT_RELEASE_VERSION]
>  (after at least one [doc 
> build|https://github.com/apache/flink/actions/workflows/docs.yml] succeeded).
>  * (minor only) The documentation for the new major release does not contain 
> "-SNAPSHOT" in its version title, and all links refer to the corresponding 
> version docs instead of {{{}master{}}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34278) Review and update documentation

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34278:

Summary: Review and update documentation  (was: CLONE - Review and update 
documentation)

> Review and update documentation
> ---
>
> Key: FLINK-34278
> URL: https://issues.apache.org/jira/browse/FLINK-34278
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.17.0
>Reporter: lincoln lee
>Assignee: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> There are a few pages in the documentation that need to be reviewed and 
> updated for each release.
>  * Ensure that there exists a release notes page for each non-bugfix release 
> (e.g., 1.5.0) in {{{}./docs/release-notes/{}}}, that it is up-to-date, and 
> linked from the start page of the documentation.
>  * Upgrading Applications and Flink Versions: 
> [https://ci.apache.org/projects/flink/flink-docs-master/ops/upgrading.html]
>  * ...
>  
> 
> h3. Expectations
>  * Update upgrade compatibility table 
> ([apache-flink:./docs/content/docs/ops/upgrading.md|https://github.com/apache/flink/blob/master/docs/content/docs/ops/upgrading.md#compatibility-table]
>  and 
> [apache-flink:./docs/content.zh/docs/ops/upgrading.md|https://github.com/apache/flink/blob/master/docs/content.zh/docs/ops/upgrading.md#compatibility-table]).
>  * Update [Release Overview in 
> Confluence|https://cwiki.apache.org/confluence/display/FLINK/Release+Management+and+Feature+Plan]
>  * (minor only) The documentation for the new major release is visible under 
> [https://nightlies.apache.org/flink/flink-docs-release-$SHORT_RELEASE_VERSION]
>  (after at least one [doc 
> build|https://github.com/apache/flink/actions/workflows/docs.yml] succeeded).
>  * (minor only) The documentation for the new major release does not contain 
> "-SNAPSHOT" in its version title, and all links refer to the corresponding 
> version docs instead of {{{}master{}}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34279) CLONE - Cross team testing

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34279:
---

 Summary: CLONE - Cross team testing
 Key: FLINK-34279
 URL: https://issues.apache.org/jira/browse/FLINK-34279
 Project: Flink
  Issue Type: Sub-task
Reporter: lincoln lee
Assignee: Qingsheng Ren


For user facing features that go into the release we'd like to ensure they can 
actually _be used_ by Flink users. To achieve this the release managers ensure 
that an issue for cross team testing is created in the Apache Flink Jira. This 
can and should be picked up by other community members to verify the 
functionality and usability of the feature.
The issue should contain some entry points which enable other community 
members to test it. It should not contain documentation on how to use the 
feature as this should be part of the actual documentation. The cross team 
tests are performed after the feature freeze. Documentation should be in place 
before that. Those tests are manual tests, so do not confuse them with 
automated tests.
To sum that up:
 * User facing features should be tested by other contributors
 * The scope is usability and sanity of the feature
 * The feature needs to be already documented
 * The contributor creates an issue containing some pointers on how to get 
started (e.g. link to the documentation, suggested targets of verification)
 * Other community members pick those issues up and provide feedback
 * Cross team testing happens right after the feature freeze

 

h3. Expectations
 * Jira issues for each expected release task according to the release plan are 
created and labeled as {{{}release-testing{}}}.
 * All the created release-testing-related Jira issues are resolved and the 
corresponding blocker issues are fixed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34275) Prepare Flink 1.19 Release

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34275:

Description: 
This umbrella issue is meant as a test balloon for moving the [release 
documentation|https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release]
 into Jira.
h3. Prerequisites
h4. Environment Variables

Commands in the subtasks might expect some of the following environment 
variables to be set according to the version that is about to be released:
{code:bash}
RELEASE_VERSION="1.5.0"
SHORT_RELEASE_VERSION="1.5"
CURRENT_SNAPSHOT_VERSION="$SHORT_RELEASE_VERSION-SNAPSHOT"
NEXT_SNAPSHOT_VERSION="1.6-SNAPSHOT"
SHORT_NEXT_SNAPSHOT_VERSION="1.6"
{code}
h4. Build Tools

All of the following steps require Maven 3.8.6 and Java 8. Modify your 
PATH environment variable accordingly if needed.
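A quick sanity check of the toolchain before starting (both commands should 
report the expected versions):
{code:bash}
$ mvn -version
$ java -version
{code}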
h4. Flink Source
 * Create a new directory for this release and clone the Flink repository from 
Github to ensure you have a clean workspace (this step is optional).
 * Run {{mvn -Prelease clean install}} to ensure that the build processes that 
are specific to that profile are in good shape (this step is optional).

The rest of these instructions assumes that commands are run in the root (or 
{{./tools}} directory) of a repository on the branch of the release version 
with the above environment variables set.
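For example, a clean workspace could be prepared like this (a sketch using the 
variables above):
{code:bash}
$ git clone https://github.com/apache/flink.git
$ cd flink
$ git checkout release-$SHORT_RELEASE_VERSION
$ cd tools
{code}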

  was:
This umbrella issue is meant as a test balloon for moving the [release 
documentation|https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release]
 into Jira.
h3. Prerequisites
h4. Environment Variables

Commands in the subtasks might expect some of the following environment 
variables to be set according to the version that is about to be released:
{code:bash}
RELEASE_VERSION="1.5.0"
SHORT_RELEASE_VERSION="1.5"
CURRENT_SNAPSHOT_VERSION="$SHORT_RELEASE_VERSION-SNAPSHOT"
NEXT_SNAPSHOT_VERSION="1.6-SNAPSHOT"
SHORT_NEXT_SNAPSHOT_VERSION="1.6"
{code}
h4. Build Tools

All of the following steps require Maven 3.2.5 and Java 8. Modify your 
PATH environment variable accordingly if needed.
h4. Flink Source
 * Create a new directory for this release and clone the Flink repository from 
Github to ensure you have a clean workspace (this step is optional).
 * Run {{mvn -Prelease clean install}} to ensure that the build processes that 
are specific to that profile are in good shape (this step is optional).

The rest of these instructions assumes that commands are run in the root (or 
{{./tools}} directory) of a repository on the branch of the release version 
with the above environment variables set.


> Prepare Flink 1.19 Release
> --
>
> Key: FLINK-34275
> URL: https://issues.apache.org/jira/browse/FLINK-34275
> Project: Flink
>  Issue Type: New Feature
>  Components: Release System
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: lincoln lee
>Priority: Major
> Fix For: 1.19.0
>
>
> This umbrella issue is meant as a test balloon for moving the [release 
> documentation|https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release]
>  into Jira.
> h3. Prerequisites
> h4. Environment Variables
> Commands in the subtasks might expect some of the following environment 
> variables to be set according to the version that is about to be released:
> {code:bash}
> RELEASE_VERSION="1.5.0"
> SHORT_RELEASE_VERSION="1.5"
> CURRENT_SNAPSHOT_VERSION="$SHORT_RELEASE_VERSION-SNAPSHOT"
> NEXT_SNAPSHOT_VERSION="1.6-SNAPSHOT"
> SHORT_NEXT_SNAPSHOT_VERSION="1.6"
> {code}
> h4. Build Tools
> All of the following steps require Maven 3.8.6 and Java 8. Modify your 
> PATH environment variable accordingly if needed.
> h4. Flink Source
>  * Create a new directory for this release and clone the Flink repository 
> from Github to ensure you have a clean workspace (this step is optional).
>  * Run {{mvn -Prelease clean install}} to ensure that the build processes 
> that are specific to that profile are in good shape (this step is optional).
> The rest of these instructions assumes that commands are run in the root (or 
> {{./tools}} directory) of a repository on the branch of the release version 
> with the above environment variables set.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34279) Cross team testing

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34279:

Summary: Cross team testing  (was: CLONE - Cross team testing)

> Cross team testing
> --
>
> Key: FLINK-34279
> URL: https://issues.apache.org/jira/browse/FLINK-34279
> Project: Flink
>  Issue Type: Sub-task
>Reporter: lincoln lee
>Assignee: Qingsheng Ren
>Priority: Major
>
> For user facing features that go into the release we'd like to ensure they 
> can actually _be used_ by Flink users. To achieve this the release managers 
> ensure that an issue for cross team testing is created in the Apache Flink 
> Jira. This can and should be picked up by other community members to verify 
> the functionality and usability of the feature.
> The issue should contain some entry points which enable other community 
> members to test it. It should not contain documentation on how to use the 
> feature as this should be part of the actual documentation. The cross team 
> tests are performed after the feature freeze. Documentation should be in 
> place before that. Those tests are manual tests, so do not confuse them with 
> automated tests.
> To sum that up:
>  * User facing features should be tested by other contributors
>  * The scope is usability and sanity of the feature
>  * The feature needs to be already documented
>  * The contributor creates an issue containing some pointers on how to get 
> started (e.g. link to the documentation, suggested targets of verification)
>  * Other community members pick those issues up and provide feedback
>  * Cross team testing happens right after the feature freeze
>  
> 
> h3. Expectations
>  * Jira issues for each expected release task according to the release plan 
> are created and labeled as {{{}release-testing{}}}.
>  * All the created release-testing-related Jira issues are resolved and the 
> corresponding blocker issues are fixed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34281) CLONE - Select executing Release Manager

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34281:

Affects Version/s: 1.19.0
   (was: 1.17.0)

> CLONE - Select executing Release Manager
> 
>
> Key: FLINK-34281
> URL: https://issues.apache.org/jira/browse/FLINK-34281
> Project: Flink
>  Issue Type: Sub-task
>  Components: Release System
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Qingsheng Ren
>Priority: Major
> Fix For: 1.17.0
>
>
> h4. GPG Key
> You need to have a GPG key to sign the release artifacts. Please be aware of 
> the ASF-wide [release signing 
> guidelines|https://www.apache.org/dev/release-signing.html]. If you don’t 
> have a GPG key associated with your Apache account, please create one 
> according to the guidelines.
> Determine your Apache GPG Key and Key ID, as follows:
> {code:java}
> $ gpg --list-keys
> {code}
> This will list your GPG keys. One of these should reflect your Apache 
> account, for example:
> {code:java}
> --
> pub   2048R/845E6689 2016-02-23
> uid  Nomen Nescio 
> sub   2048R/BA4D50BE 2016-02-23
> {code}
> In the example above, the key ID is the 8-digit hex string in the {{pub}} 
> line: {{{}845E6689{}}}.
> Now, add your Apache GPG key to Flink’s {{KEYS}} file in the [Apache 
> Flink release KEYS 
> file|https://dist.apache.org/repos/dist/release/flink/KEYS] repository at 
> [dist.apache.org|http://dist.apache.org/]. Follow the instructions listed at 
> the top of these files. (Note: Only PMC members have write access to the 
> release repository. If you end up getting 403 errors ask on the mailing list 
> for assistance.)
> Configure {{git}} to use this key when signing code by giving it your key ID, 
> as follows:
> {code:java}
> $ git config --global user.signingkey 845E6689
> {code}
> You may drop the {{--global}} option if you’d prefer to use this key for the 
> current repository only.
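> To verify the signing setup end to end, you can create, check, and delete a 
> signed test tag (the tag name is just an example):
> {code:bash}
> $ git tag -s signing-test -m "test gpg signing"
> $ git tag -v signing-test
> $ git tag -d signing-test
> {code}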
> You may wish to start {{gpg-agent}} to unlock your GPG key only once using 
> your passphrase. Otherwise, you may need to enter this passphrase hundreds of 
> times. The setup for {{gpg-agent}} varies based on operating system, but may 
> be something like this:
> {code:bash}
> $ eval $(gpg-agent --daemon --no-grab --write-env-file $HOME/.gpg-agent-info)
> $ export GPG_TTY=$(tty)
> $ export GPG_AGENT_INFO
> {code}
> h4. Access to Apache Nexus repository
> Configure access to the [Apache Nexus 
> repository|https://repository.apache.org/], which enables final deployment of 
> releases to the Maven Central Repository.
>  # You log in with your Apache account.
>  # Confirm you have appropriate access by finding {{org.apache.flink}} under 
> {{{}Staging Profiles{}}}.
>  # Navigate to your {{Profile}} (top right drop-down menu of the page).
>  # Choose {{User Token}} from the dropdown, then click {{{}Access User 
> Token{}}}. Copy a snippet of the Maven XML configuration block.
>  # Insert this snippet twice into your global Maven {{settings.xml}} file, 
> typically {{{}${HOME}/.m2/settings.xml{}}}. The end result should look like 
> this, where {{TOKEN_NAME}} and {{TOKEN_PASSWORD}} are your secret tokens:
> {code:xml}
> <settings>
>   <servers>
>     <server>
>       <id>apache.releases.https</id>
>       <username>TOKEN_NAME</username>
>       <password>TOKEN_PASSWORD</password>
>     </server>
>     <server>
>       <id>apache.snapshots.https</id>
>       <username>TOKEN_NAME</username>
>       <password>TOKEN_PASSWORD</password>
>     </server>
>   </servers>
> </settings>
> {code}
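> To confirm that Maven picks up the token, you can render the effective 
> settings (a sketch using the standard maven-help-plugin):
> {code:bash}
> $ mvn help:effective-settings | grep -A 2 "apache.releases.https"
> {code}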
> h4. Website development setup
> Get ready for updating the Flink website by following the [website 
> development 
> instructions|https://flink.apache.org/contributing/improve-website.html].
> h4. GNU Tar Setup for Mac (Skip this step if you are not using a Mac)
> The default tar application on Mac does not support GNU archive format and 
> defaults to Pax. This bloats the archive with unnecessary metadata that can 
> result in additional files when decompressing (see [1.15.2-RC2 vote 
> thread|https://lists.apache.org/thread/mzbgsb7y9vdp9bs00gsgscsjv2ygy58q]). 
> Install gnu-tar and create a symbolic link so that it is used in preference 
> to the default tar program.
> {code:bash}
> $ brew install gnu-tar
> $ ln -s /usr/local/bin/gtar /usr/local/bin/tar
> $ which tar
> {code}
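> To double-check that the symlinked binary really is GNU tar (the exact 
> version number will differ):
> {code:bash}
> $ tar --version | head -n 1
> tar (GNU tar) 1.35
> {code}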
>  
> 
> h3. Expectations
>  * Release Manager’s GPG key is published to 
> [dist.apache.org|http://dist.apache.org/]
>  * Release Manager’s GPG key is configured in git configuration
>  * Release Manager's GPG key is configured as the default gpg key.
>  * Release Manager has {{org.apache.flink}} listed under Staging Profiles in 
> Nexus
>  * Release Manager’s Nexus User Token is configured in settings.xml



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34283) CLONE - Verify that no exclusions were erroneously added to the japicmp plugin

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34283:
---

 Summary: CLONE - Verify that no exclusions were erroneously added 
to the japicmp plugin
 Key: FLINK-34283
 URL: https://issues.apache.org/jira/browse/FLINK-34283
 Project: Flink
  Issue Type: Sub-task
Reporter: lincoln lee
Assignee: Matthias Pohl


Verify that no exclusions were erroneously added to the japicmp plugin that 
break compatibility guarantees. Check the exclusions for the 
japicmp-maven-plugin in the root pom (see 
[apache/flink:pom.xml:2175ff|https://github.com/apache/flink/blob/3856c49af77601cf7943a5072d8c932279ce46b4/pom.xml#L2175]
) for exclusions that:
* For minor releases: break source compatibility for {{@Public}} APIs
* For patch releases: break source/binary compatibility for 
{{@Public}}/{{@PublicEvolving}}  APIs
Any such exclusion must be properly justified, in advance.
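One way to list the current exclusions without opening the full pom (a plain 
grep sketch over the japicmp-maven-plugin configuration):
{code:bash}
$ grep -n "<exclude>" pom.xml
{code}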



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34277) Triage release-blocking issues in JIRA

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34277:

Summary: Triage release-blocking issues in JIRA  (was: CLONE - Triage 
release-blocking issues in JIRA)

> Triage release-blocking issues in JIRA
> --
>
> Key: FLINK-34277
> URL: https://issues.apache.org/jira/browse/FLINK-34277
> Project: Flink
>  Issue Type: Sub-task
>Reporter: lincoln lee
>Assignee: Qingsheng Ren
>Priority: Major
>
> There could be outstanding release-blocking issues, which should be triaged 
> before proceeding to build a release candidate. We track them by assigning a 
> specific Fix version field even before the issue is resolved.
> The list of release-blocking issues is available at the version status page. 
> Triage each unresolved issue with one of the following resolutions:
>  * If the issue has been resolved and JIRA was not updated, resolve it 
> accordingly.
>  * If the issue has not been resolved and it is acceptable to defer this 
> until the next release, update the Fix Version field to the new version you 
> just created. Please consider discussing this with stakeholders and the dev@ 
> mailing list, as appropriate.
>  ** When using the "Bulk Change" functionality of Jira:
>  *** First, add the newly created version to Fix Version for all unresolved 
> tickets that have the old version among their Fix Versions.
>  *** Afterwards, remove the old version from the Fix Version.
>  * If the issue has not been resolved and it is not acceptable to release 
> until it is fixed, the release cannot proceed. Instead, work with the Flink 
> community to resolve the issue.
>  
> 
> h3. Expectations
>  * There are no release blocking JIRA issues



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34278) Review and update documentation

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34278:

Fix Version/s: 1.19.0
   (was: 1.17.0)

> Review and update documentation
> ---
>
> Key: FLINK-34278
> URL: https://issues.apache.org/jira/browse/FLINK-34278
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> There are a few pages in the documentation that need to be reviewed and 
> updated for each release.
>  * Ensure that there exists a release notes page for each non-bugfix release 
> (e.g., 1.5.0) in {{{}./docs/release-notes/{}}}, that it is up-to-date, and 
> linked from the start page of the documentation.
>  * Upgrading Applications and Flink Versions: 
> [https://ci.apache.org/projects/flink/flink-docs-master/ops/upgrading.html]
>  * ...
>  
> 
> h3. Expectations
>  * Update upgrade compatibility table 
> ([apache-flink:./docs/content/docs/ops/upgrading.md|https://github.com/apache/flink/blob/master/docs/content/docs/ops/upgrading.md#compatibility-table]
>  and 
> [apache-flink:./docs/content.zh/docs/ops/upgrading.md|https://github.com/apache/flink/blob/master/docs/content.zh/docs/ops/upgrading.md#compatibility-table]).
>  * Update [Release Overview in 
> Confluence|https://cwiki.apache.org/confluence/display/FLINK/Release+Management+and+Feature+Plan]
>  * (minor only) The documentation for the new major release is visible under 
> [https://nightlies.apache.org/flink/flink-docs-release-$SHORT_RELEASE_VERSION]
>  (after at least one [doc 
> build|https://github.com/apache/flink/actions/workflows/docs.yml] succeeded).
>  * (minor only) The documentation for the new major release does not contain 
> "-SNAPSHOT" in its version title, and all links refer to the corresponding 
> version docs instead of {{{}master{}}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34283) Verify that no exclusions were erroneously added to the japicmp plugin

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee reassigned FLINK-34283:
---

Assignee: (was: Matthias Pohl)

> Verify that no exclusions were erroneously added to the japicmp plugin
> --
>
> Key: FLINK-34283
> URL: https://issues.apache.org/jira/browse/FLINK-34283
> Project: Flink
>  Issue Type: Sub-task
>Reporter: lincoln lee
>Priority: Major
>
> Verify that no exclusions were erroneously added to the japicmp plugin that 
> break compatibility guarantees. Check the exclusions for the 
> japicmp-maven-plugin in the root pom (see 
> [apache/flink:pom.xml:2175ff|https://github.com/apache/flink/blob/3856c49af77601cf7943a5072d8c932279ce46b4/pom.xml#L2175]
> ) for exclusions that:
> * For minor releases: break source compatibility for {{@Public}} APIs
> * For patch releases: break source/binary compatibility for 
> {{@Public}}/{{@PublicEvolving}}  APIs
> Any such exclusion must be properly justified, in advance.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34148) Potential regression (Jan. 13): stringWrite with Java8

2024-01-30 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812249#comment-17812249
 ] 

Chesnay Schepler commented on FLINK-34148:
--

So it's not really Maven or the shade-plugin itself, but some interplay with 
the flatten plugin. The dependency-reduced pom is written correctly, but then 
the flatten-plugin comes along and works on the original pom.

Upgrading the flatten-plugin to 1.2.7 resolved the issue for me locally.
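For reference, a way to spot the mismatch locally (a sketch; the module is an 
example, and the exact pom file locations depend on the shade- and 
flatten-plugin configuration):
{code:bash}
$ mvn -pl flink-core clean install -DskipTests
# compare the shade-plugin output against what the flatten-plugin produced
$ diff flink-core/dependency-reduced-pom.xml flink-core/.flattened-pom.xml
{code}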

> Potential regression (Jan. 13): stringWrite with Java8
> --
>
> Key: FLINK-34148
> URL: https://issues.apache.org/jira/browse/FLINK-34148
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Type Serialization System
>Reporter: Zakelly Lan
>Priority: Blocker
> Fix For: 1.19.0
>
>
> Significant drop of performance in stringWrite with Java8 from commit 
> [881062f352|https://github.com/apache/flink/commit/881062f352f8bf8c21ab7cbea95e111fd82fdf20]
>  to 
> [5d9d8748b6|https://github.com/apache/flink/commit/5d9d8748b64ff1a75964a5cd2857ab5061312b51]
> . It only involves relatively short strings (of length 128 or 4).
> stringWrite.128.ascii(Java8) baseline=1089.107756 current_value=754.52452
> stringWrite.128.chinese(Java8) baseline=504.244575 current_value=295.358989
> stringWrite.128.russian(Java8) baseline=655.582639 current_value=421.030188
> stringWrite.4.chinese(Java8) baseline=9598.791964 current_value=6627.929927
> stringWrite.4.russian(Java8) baseline=11070.666415 current_value=8289.95767
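> A rough way to reproduce such numbers locally with the JMH harness in 
> [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks] (a 
> sketch; the benchmark name pattern is taken from the report above):
> {code:bash}
> $ mvn clean package -DskipTests
> $ java -jar target/benchmarks.jar "stringWrite" -rf csv -rff stringWrite.csv
> {code}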



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34281) Select executing Release Manager

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee reassigned FLINK-34281:
---

Assignee: (was: Qingsheng Ren)

> Select executing Release Manager
> 
>
> Key: FLINK-34281
> URL: https://issues.apache.org/jira/browse/FLINK-34281
> Project: Flink
>  Issue Type: Sub-task
>  Components: Release System
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Priority: Major
> Fix For: 1.19.0
>
>
> h4. GPG Key
> You need to have a GPG key to sign the release artifacts. Please be aware of 
> the ASF-wide [release signing 
> guidelines|https://www.apache.org/dev/release-signing.html]. If you don’t 
> have a GPG key associated with your Apache account, please create one 
> according to the guidelines.
> Determine your Apache GPG Key and Key ID, as follows:
> {code:java}
> $ gpg --list-keys
> {code}
> This will list your GPG keys. One of these should reflect your Apache 
> account, for example:
> {code:java}
> --
> pub   2048R/845E6689 2016-02-23
> uid  Nomen Nescio 
> sub   2048R/BA4D50BE 2016-02-23
> {code}
> In the example above, the key ID is the 8-digit hex string in the {{pub}} 
> line: {{{}845E6689{}}}.
> Now, add your Apache GPG key to Flink’s {{KEYS}} file in the [Apache 
> Flink release KEYS 
> file|https://dist.apache.org/repos/dist/release/flink/KEYS] repository at 
> [dist.apache.org|http://dist.apache.org/]. Follow the instructions listed at 
> the top of these files. (Note: Only PMC members have write access to the 
> release repository. If you end up getting 403 errors ask on the mailing list 
> for assistance.)
> Configure {{git}} to use this key when signing code by giving it your key ID, 
> as follows:
> {code:java}
> $ git config --global user.signingkey 845E6689
> {code}
> You may drop the {{--global}} option if you’d prefer to use this key for the 
> current repository only.
> You may wish to start {{gpg-agent}} to unlock your GPG key only once using 
> your passphrase. Otherwise, you may need to enter this passphrase hundreds of 
> times. The setup for {{gpg-agent}} varies based on operating system, but may 
> be something like this:
> {code:bash}
> $ eval $(gpg-agent --daemon --no-grab --write-env-file $HOME/.gpg-agent-info)
> $ export GPG_TTY=$(tty)
> $ export GPG_AGENT_INFO
> {code}
> h4. Access to Apache Nexus repository
> Configure access to the [Apache Nexus 
> repository|https://repository.apache.org/], which enables final deployment of 
> releases to the Maven Central Repository.
>  # You log in with your Apache account.
>  # Confirm you have appropriate access by finding {{org.apache.flink}} under 
> {{{}Staging Profiles{}}}.
>  # Navigate to your {{Profile}} (top right drop-down menu of the page).
>  # Choose {{User Token}} from the dropdown, then click {{{}Access User 
> Token{}}}. Copy a snippet of the Maven XML configuration block.
>  # Insert this snippet twice into your global Maven {{settings.xml}} file, 
> typically {{{}${HOME}/.m2/settings.xml{}}}. The end result should look like 
> this, where {{TOKEN_NAME}} and {{TOKEN_PASSWORD}} are your secret tokens:
> {code:xml}
> <settings>
>   <servers>
>     <server>
>       <id>apache.releases.https</id>
>       <username>TOKEN_NAME</username>
>       <password>TOKEN_PASSWORD</password>
>     </server>
>     <server>
>       <id>apache.snapshots.https</id>
>       <username>TOKEN_NAME</username>
>       <password>TOKEN_PASSWORD</password>
>     </server>
>   </servers>
> </settings>
> {code}
> h4. Website development setup
> Get ready for updating the Flink website by following the [website 
> development 
> instructions|https://flink.apache.org/contributing/improve-website.html].
> h4. GNU Tar Setup for Mac (Skip this step if you are not using a Mac)
> The default tar application on Mac does not support GNU archive format and 
> defaults to Pax. This bloats the archive with unnecessary metadata that can 
> result in additional files when decompressing (see [1.15.2-RC2 vote 
> thread|https://lists.apache.org/thread/mzbgsb7y9vdp9bs00gsgscsjv2ygy58q]). 
> Install gnu-tar and create a symbolic link so that it is used in preference 
> to the default tar program.
> {code:bash}
> $ brew install gnu-tar
> $ ln -s /usr/local/bin/gtar /usr/local/bin/tar
> $ which tar
> {code}
>  
> 
> h3. Expectations
>  * Release Manager’s GPG key is published to 
> [dist.apache.org|http://dist.apache.org/]
>  * Release Manager’s GPG key is configured in git configuration
>  * Release Manager's GPG key is configured as the default gpg key.
>  * Release Manager has {{org.apache.flink}} listed under Staging Profiles in 
> Nexus
>  * Release Manager’s Nexus User Token is configured in settings.xml



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34282) Create a release branch

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34282:

Summary: Create a release branch  (was: CLONE - Create a release branch)

> Create a release branch
> ---
>
> Key: FLINK-34282
> URL: https://issues.apache.org/jira/browse/FLINK-34282
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Leonard Xu
>Priority: Major
> Fix For: 1.19.0
>
>
> If you are doing a new minor release, you need to update Flink version in the 
> following repositories and the [AzureCI project 
> configuration|https://dev.azure.com/apache-flink/apache-flink/]:
>  * [apache/flink|https://github.com/apache/flink]
>  * [apache/flink-docker|https://github.com/apache/flink-docker]
>  * [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks]
> Patch releases don't require these repositories to be touched. Simply 
> check out the already existing branch for that version:
> {code:java}
> $ git checkout release-$SHORT_RELEASE_VERSION
> {code}
> h4. Flink repository
> Create a branch for the new version that we want to release before updating 
> the master branch to the next development version:
> {code:bash}
> $ cd ./tools
> tools $ releasing/create_snapshot_branch.sh
> tools $ git checkout master
> tools $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION 
> NEW_VERSION=$NEXT_SNAPSHOT_VERSION releasing/update_branch_version.sh
> {code}
> In the {{master}} branch, add a new value (e.g. {{v1_16("1.16")}}) to 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  as the last entry:
> {code:java}
> // ...
> v1_12("1.12"),
> v1_13("1.13"),
> v1_14("1.14"),
> v1_15("1.15"),
> v1_16("1.16");
> {code}
> The newly created branch and updated {{master}} branch need to be pushed to 
> the official repository.
> h4. Flink Docker Repository
> Afterwards, fork off a {{dev-x.y}} branch from {{dev-master}} in the 
> [apache/flink-docker|https://github.com/apache/flink-docker] repository. Make 
> sure that 
> [apache/flink-docker:.github/workflows/ci.yml|https://github.com/apache/flink-docker/blob/dev-master/.github/workflows/ci.yml]
>  points to the correct snapshot version; for {{dev-x.y}} it should point to 
> {{{}x.y-SNAPSHOT{}}}, while for {{dev-master}} it should point to the most 
> recent snapshot version ({{$NEXT_SNAPSHOT_VERSION}}).
> After pushing the new minor release branch, as the last step you should also 
> update the documentation workflow so that it builds the documentation for the 
> new release branch. Check [Managing 
> Documentation|https://cwiki.apache.org/confluence/display/FLINK/Managing+Documentation]
>  for details on how to do that. You may also want to manually trigger a build 
> to make the changes visible as soon as possible.
> h4. Flink Benchmark Repository
> First of all, fork off the {{master}} branch into a {{dev-x.y}} branch in 
> [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks], so that 
> we have a branch named {{dev-x.y}} which builds on top of 
> {{$CURRENT_SNAPSHOT_VERSION}}.
> Then, inside the repository you need to manually update the {{flink.version}} 
> property inside the parent *pom.xml* file. It should be pointing to the most 
> recent snapshot version ($NEXT_SNAPSHOT_VERSION). For example:
> {code:xml}
> <flink.version>1.18-SNAPSHOT</flink.version>
> {code}
> h4. AzureCI Project Configuration
> The new release branch needs to be configured within AzureCI to make Azure 
> aware of the new release branch. This can only be handled by Ververica 
> employees since they own the AzureCI setup.
>  
> 
> h3. Expectations (Minor Version only if not stated otherwise)
>  * Release branch has been created and pushed
>  * Changes on the new release branch are picked up by [Azure 
> CI|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary]
>  * {{master}} branch has the version information updated to the new version 
> (check pom.xml files and the 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  enum)
>  * New version is added to the 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  enum.
>  * Make sure [flink-docker|https://github.com/apache/flink-docker/] has 
> {{dev-x.y}} branch and docker e2e tests run against this branch in the 
> corresponding Apache Flink release branch (see 
> [apache/flink:flink-end-to-end-tests/test-scripts/common_docker.sh:51|https://github.com/apache/flink/blob/master/flink-end-to-end-tests/test-scripts/common_docker.sh#L51]).

[jira] [Updated] (FLINK-34283) Verify that no exclusions were erroneously added to the japicmp plugin

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34283:

Summary: Verify that no exclusions were erroneously added to the japicmp 
plugin  (was: CLONE - Verify that no exclusions were erroneously added to the 
japicmp plugin)

> Verify that no exclusions were erroneously added to the japicmp plugin
> --
>
> Key: FLINK-34283
> URL: https://issues.apache.org/jira/browse/FLINK-34283
> Project: Flink
>  Issue Type: Sub-task
>Reporter: lincoln lee
>Assignee: Matthias Pohl
>Priority: Major
>
> Verify that no exclusions were erroneously added to the japicmp plugin that 
> break compatibility guarantees. Check the exclusions for the 
> japicmp-maven-plugin in the root pom (see 
> [apache/flink:pom.xml:2175ff|https://github.com/apache/flink/blob/3856c49af77601cf7943a5072d8c932279ce46b4/pom.xml#L2175]
> ) for exclusions that:
> * For minor releases: break source compatibility for {{@Public}} APIs
> * For patch releases: break source/binary compatibility for 
> {{@Public}}/{{@PublicEvolving}}  APIs
> Any such exclusion must be properly justified, in advance.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34282) CLONE - Create a release branch

2024-01-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34282:

Affects Version/s: 1.19.0
   (was: 1.17.0)

> CLONE - Create a release branch
> ---
>
> Key: FLINK-34282
> URL: https://issues.apache.org/jira/browse/FLINK-34282
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Leonard Xu
>Priority: Major
> Fix For: 1.17.0
>
>
> If you are doing a new minor release, you need to update Flink version in the 
> following repositories and the [AzureCI project 
> configuration|https://dev.azure.com/apache-flink/apache-flink/]:
>  * [apache/flink|https://github.com/apache/flink]
>  * [apache/flink-docker|https://github.com/apache/flink-docker]
>  * [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks]
> Patch releases don't require these repositories to be touched. Simply 
> check out the already existing branch for that version:
> {code:java}
> $ git checkout release-$SHORT_RELEASE_VERSION
> {code}
> h4. Flink repository
> Create a branch for the new version that we want to release before updating 
> the master branch to the next development version:
> {code:bash}
> $ cd ./tools
> tools $ releasing/create_snapshot_branch.sh
> tools $ git checkout master
> tools $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION 
> NEW_VERSION=$NEXT_SNAPSHOT_VERSION releasing/update_branch_version.sh
> {code}
> In the {{master}} branch, add a new value (e.g. {{v1_16("1.16")}}) to 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  as the last entry:
> {code:java}
> // ...
> v1_12("1.12"),
> v1_13("1.13"),
> v1_14("1.14"),
> v1_15("1.15"),
> v1_16("1.16");
> {code}
> The newly created branch and updated {{master}} branch need to be pushed to 
> the official repository.
> h4. Flink Docker Repository
> Afterwards, fork off a {{dev-x.y}} branch from {{dev-master}} in the 
> [apache/flink-docker|https://github.com/apache/flink-docker] repository. Make 
> sure that 
> [apache/flink-docker:.github/workflows/ci.yml|https://github.com/apache/flink-docker/blob/dev-master/.github/workflows/ci.yml]
>  points to the correct snapshot version; for {{dev-x.y}} it should point to 
> {{{}x.y-SNAPSHOT{}}}, while for {{dev-master}} it should point to the most 
> recent snapshot version ({{$NEXT_SNAPSHOT_VERSION}}).
> After pushing the new minor release branch, as the last step you should also 
> update the documentation workflow so that it builds the documentation for the 
> new release branch. Check [Managing 
> Documentation|https://cwiki.apache.org/confluence/display/FLINK/Managing+Documentation]
>  for details on how to do that. You may also want to manually trigger a build 
> to make the changes visible as soon as possible.
> h4. Flink Benchmark Repository
> First of all, fork off the {{master}} branch into a {{dev-x.y}} branch in 
> [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks], so that 
> we have a branch named {{dev-x.y}} which builds on top of 
> {{$CURRENT_SNAPSHOT_VERSION}}.
> Then, inside the repository you need to manually update the {{flink.version}} 
> property inside the parent *pom.xml* file. It should be pointing to the most 
> recent snapshot version ($NEXT_SNAPSHOT_VERSION). For example:
> {code:xml}
> <flink.version>1.18-SNAPSHOT</flink.version>
> {code}
> h4. AzureCI Project Configuration
> The new release branch needs to be configured within AzureCI to make Azure 
> aware of the new release branch. This can only be handled by Ververica 
> employees since they own the AzureCI setup.
>  
> 
> h3. Expectations (Minor Version only if not stated otherwise)
>  * Release branch has been created and pushed
>  * Changes on the new release branch are picked up by [Azure 
> CI|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary]
>  * {{master}} branch has the version information updated to the new version 
> (check pom.xml files and the 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  enum)
>  * New version is added to the 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  enum.
>  * Make sure [flink-docker|https://github.com/apache/flink-docker/] has 
> {{dev-x.y}} branch and docker e2e tests run against this branch in the 
> corresponding Apache Flink release branch (see 
> [apache/flink:flink-end-to-end-tests/test-scripts/common_docker.sh:51|https://github.com/apache/flink/blob/master/flink-end-to-end-tests/test-scripts/common_docker.sh#L51]).

[jira] [Commented] (FLINK-34229) Duplicate entry in InnerClasses attribute in class file FusionStreamOperator

2024-01-30 Thread Dan Zou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812246#comment-17812246
 ] 

Dan Zou commented on FLINK-34229:
-

[~lincoln.86xy] I am working on it and expect to submit a CR before tomorrow.

> Duplicate entry in InnerClasses attribute in class file FusionStreamOperator
> 
>
> Key: FLINK-34229
> URL: https://issues.apache.org/jira/browse/FLINK-34229
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.19.0
>Reporter: xingbe
>Priority: Major
> Attachments: image-2024-01-24-17-05-47-883.png, taskmanager_log.txt
>
>
> I noticed a runtime error happening in the 10TB TPC-DS (q35.sql) benchmarks in 
> 1.19; the problem did not happen in 1.18.0. This issue may have been 
> introduced recently. !image-2024-01-24-17-05-47-883.png|width=589,height=279!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34284) Submit Software License Grant to ASF

2024-01-30 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reassigned FLINK-34284:
--

Assignee: Leonard Xu

> Submit Software License Grant to ASF
> 
>
> Key: FLINK-34284
> URL: https://issues.apache.org/jira/browse/FLINK-34284
> Project: Flink
>  Issue Type: Sub-task
>  Components: Flink CDC
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
>
> As required by the ASF software license grant process [1], we need to submit 
> the Software Grant Agreement.
> [1] https://www.apache.org/licenses/contributor-agreements.html#grants



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34178][autoscaler] Fix the bug that observed scaling restart time is always great than `stabilization.interval` [flink-kubernetes-operator]

2024-01-30 Thread via GitHub


afedulov commented on code in PR #759:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/759#discussion_r1470917165


##
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/ScalingMetricCollector.java:
##
@@ -181,9 +186,12 @@ private static Instant getWindowFullTime(
 }
 
 @VisibleForTesting
-protected Instant getJobUpdateTs(JobDetailsInfo jobDetailsInfo) {
-return Instant.ofEpochMilli(
-jobDetailsInfo.getTimestamps().values().stream().max(Long::compare).get());
+protected Instant getJobSwitchToRunningTs(JobDetailsInfo jobDetailsInfo) {
+final Map<JobStatus, Long> timestamps = jobDetailsInfo.getTimestamps();
+
+final Long runningTs = timestamps.get(JobStatus.RUNNING);
+checkState(runningTs != null, "Unable to find when the job was switched to RUNNING.");

Review Comment:
   >We can add this invariant here but it would only defend against a Byzantine 
behavior.
   
   I don't quite agree - I find it a bit one-sided to look at public classes 
and methods only from the perspective of the implicit context in which they are 
**currently** called. 
   I believe the check is appropriate - nothing in the contract or comments of 
`updateMetrics` states the scope in which it is supposed to be executed. The 
class is also open for extension - someone can later override methods partially 
without being clear about the implicit assumptions. An alternative is to seal 
everything and make this class only available to `JobAutoscalerImpl`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-34284) Submit Software License Grant to ASF

2024-01-30 Thread Leonard Xu (Jira)
Leonard Xu created FLINK-34284:
--

 Summary: Submit Software License Grant to ASF
 Key: FLINK-34284
 URL: https://issues.apache.org/jira/browse/FLINK-34284
 Project: Flink
  Issue Type: Sub-task
  Components: Flink CDC
Reporter: Leonard Xu


As required by the ASF software license grant process [1], we need to submit 
the Software Grant Agreement.

[1] https://www.apache.org/licenses/contributor-agreements.html#grants



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34190) Deprecate RestoreMode#LEGACY

2024-01-30 Thread Zakelly Lan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812255#comment-17812255
 ] 

Zakelly Lan commented on FLINK-34190:
-

[~martijnvisser] Thanks for the reminder, will do~

> Deprecate RestoreMode#LEGACY
> 
>
> Key: FLINK-34190
> URL: https://issues.apache.org/jira/browse/FLINK-34190
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Reporter: Zakelly Lan
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34095] Adds restore tests for StreamExecAsyncCalc [flink]

2024-01-30 Thread via GitHub


twalthr commented on code in PR #24220:
URL: https://github.com/apache/flink/pull/24220#discussion_r1470990008


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/AsyncCalcTestPrograms.java:
##
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.stream;
+
+import org.apache.flink.table.planner.runtime.utils.JavaUserDefinedScalarFunctions;
+import org.apache.flink.table.test.program.SinkTestStep;
+import org.apache.flink.table.test.program.SourceTestStep;
+import org.apache.flink.table.test.program.TableTestProgram;
+import org.apache.flink.types.Row;
+
+import java.time.LocalDateTime;
+
+public class AsyncCalcTestPrograms {
+
+    static final TableTestProgram ASYNC_CALC_UDF_SIMPLE =
+            TableTestProgram.of("async-calc-simple", "validates async calc node with simple UDF")
+                    .setupTemporaryCatalogFunction(
+                            "udf1", JavaUserDefinedScalarFunctions.AsyncJavaFunc0.class)
+                    .setupTableSource(
+                            SourceTestStep.newBuilder("source_t")
+                                    .addSchema("a INT")
+                                    .producedBeforeRestore(Row.of(5))
+                                    .producedAfterRestore(Row.of(5))
+                                    .build())
+                    .setupTableSink(
+                            SinkTestStep.newBuilder("sink_t")
+                                    .addSchema("a INT", "a1 BIGINT")
+                                    .consumedBeforeRestore(Row.of(5, 6L))
+                                    .consumedAfterRestore(Row.of(5, 6L))
+                                    .build())
+                    .runSql("INSERT INTO sink_t SELECT a, udf1(a) FROM source_t")
+                    .build();
+
+    static final TableTestProgram ASYNC_CALC_UDF_COMPLEX =
+            TableTestProgram.of("async-calc-complex", "validates calc node with complex UDFs")
+                    .setupTemporaryCatalogFunction(
+                            "udf1", JavaUserDefinedScalarFunctions.AsyncJavaFunc0.class)
+                    .setupTemporaryCatalogFunction(
+                            "udf2", JavaUserDefinedScalarFunctions.AsyncJavaFunc1.class)
+                    .setupTemporarySystemFunction(
+                            "udf3", JavaUserDefinedScalarFunctions.AsyncJavaFunc2.class)
+                    .setupTemporarySystemFunction(
+                            "udf4", JavaUserDefinedScalarFunctions.AsyncUdfWithOpen.class)
+                    .setupCatalogFunction(
+                            "udf5", JavaUserDefinedScalarFunctions.AsyncJavaFunc5.class)
+                    .setupTableSource(
+                            SourceTestStep.newBuilder("source_t")
+                                    .addSchema("a BIGINT, b INT NOT NULL, c VARCHAR, d TIMESTAMP(3)")
+                                    .producedBeforeRestore(
+                                            Row.of(5L, 11, "hello world", LocalDateTime.of(2023, 12, 16, 1, 1, 1, 123)))
+                                    .producedAfterRestore(
+                                            Row.of(5L, 11, "hello world", LocalDateTime.of(2023, 12, 16, 1, 1, 1, 123)))
+                                    .build())
+                    .setupTableSink(
+                            SinkTestStep.newBuilder("sink_t")
+                                    .addSchema(
+                                            "a BIGINT",
+                                            "a1 VARCHAR",
+                                            "b INT NOT NULL",
+

[PR] Revert "[FLINK-33705] Upgrade to flink-shaded 18.0" [flink]

2024-01-30 Thread via GitHub


snuyanzin opened a new pull request, #24227:
URL: https://github.com/apache/flink/pull/24227

   
   ## What is the purpose of the change
   
   After the upgrade of flink-shaded to 18.0, a performance regression appeared 
(FLINK-34148).
   As decided during the release meeting, it should be reverted for 1.19.x and 
then re-applied for the next release together with a fix.
   
   
   ## Brief change log
   
   Revert of 5d9d8748b64ff1a75964a5cd2857ab5061312b51
   + `guava32` -> `guava31` for some new imports
   
   
   ## Verifying this change
   
   
   This change is already covered by existing tests.
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes )
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: ( no)
 - The serializers: ( no)
 - The runtime per-record code paths (performance sensitive): ( no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: ( no )
 - The S3 file system connector: ( no )
   
   ## Documentation
   
 - Does this pull request introduce a new feature? ( no)
 - If yes, how is the feature documented? (not applicable )
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


