[jira] [Resolved] (FLINK-25431) Implement file-based JobResultStore

2022-01-31 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl resolved FLINK-25431.
---
Fix Version/s: 1.15.0
   Resolution: Fixed

master: 
[2770acee1bc4a82a2f4223d4a4cd6073181dc840|https://github.com/apache/flink/commit/2770acee1bc4a82a2f4223d4a4cd6073181dc840]

> Implement file-based JobResultStore
> ---
>
> Key: FLINK-25431
> URL: https://issues.apache.org/jira/browse/FLINK-25431
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Matthias Pohl
>Assignee: Mika Naylor
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> The implementation should comply with what's described in 
> [FLIP-194|https://cwiki.apache.org/confluence/display/FLINK/FLIP-194%3A+Introduce+the+JobResultStore]
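A minimal sketch of the idea behind a file-based JobResultStore as FLIP-194 describes it: a job result is first persisted as a "dirty" entry when the job reaches a terminal state, and marked "clean" once all cleanup has completed, so recovery can tell which jobs still need work. The class and method names below are illustrative only and are not Flink's actual API; the file-naming scheme is an assumption.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch, not Flink's implementation: dirty entries are written
// atomically (write to temp file, then rename) and renamed again when clean.
public class FileJobResultStoreSketch {
    private final Path storeDir;

    public FileJobResultStoreSketch(Path storeDir) throws IOException {
        this.storeDir = Files.createDirectories(storeDir);
    }

    /** Persist the result of a finished job; write-then-rename keeps it atomic. */
    public void createDirtyResult(String jobId, String resultJson) throws IOException {
        Path tmp = storeDir.resolve(jobId + ".tmp");
        Files.write(tmp, resultJson.getBytes("UTF-8"));
        Files.move(tmp, storeDir.resolve(jobId + "_DIRTY.json"),
                StandardCopyOption.ATOMIC_MOVE);
    }

    /** Mark the result clean once all job cleanup has completed. */
    public void markResultAsClean(String jobId) throws IOException {
        Files.move(storeDir.resolve(jobId + "_DIRTY.json"),
                storeDir.resolve(jobId + ".json"),
                StandardCopyOption.ATOMIC_MOVE);
    }

    /** On recovery, a job needs cleanup iff its dirty marker still exists. */
    public boolean hasDirtyResult(String jobId) {
        return Files.exists(storeDir.resolve(jobId + "_DIRTY.json"));
    }

    public boolean hasCleanResult(String jobId) {
        return Files.exists(storeDir.resolve(jobId + ".json"));
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("jrs");
        FileJobResultStoreSketch store = new FileJobResultStoreSketch(dir);
        store.createDirtyResult("job-1", "{\"state\":\"FINISHED\"}");
        System.out.println(store.hasDirtyResult("job-1")); // true
        store.markResultAsClean("job-1");
        System.out.println(store.hasCleanResult("job-1")); // true
    }
}
```

The atomic rename is the important part of the sketch: a crash between write and rename leaves at most a temp file, never a half-written result entry.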



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-25885) ClusterEntrypointTest.testWorkingDirectoryIsDeletedIfApplicationCompletes failed on azure

2022-01-31 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann reassigned FLINK-25885:
-

Assignee: Till Rohrmann

> ClusterEntrypointTest.testWorkingDirectoryIsDeletedIfApplicationCompletes  
> failed on azure
> --
>
> Key: FLINK-25885
> URL: https://issues.apache.org/jira/browse/FLINK-25885
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> 2022-01-31T05:00:07.3113870Z Jan 31 05:00:07 
> java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.rpc.akka.exceptions.AkkaRpcException: Discard 
> message, because the rpc endpoint 
> akka.tcp://flink@127.0.0.1:6123/user/rpc/resourcemanager_2 has not been 
> started yet.
> 2022-01-31T05:00:07.3115008Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
> 2022-01-31T05:00:07.3115778Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
> 2022-01-31T05:00:07.3116527Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:607)
> 2022-01-31T05:00:07.3117267Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2022-01-31T05:00:07.3118011Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-01-31T05:00:07.3118770Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
> 2022-01-31T05:00:07.3119608Z Jan 31 05:00:07  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:251)
> 2022-01-31T05:00:07.3120425Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-01-31T05:00:07.3121199Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-01-31T05:00:07.3121957Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-01-31T05:00:07.3122716Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
> 2022-01-31T05:00:07.3123457Z Jan 31 05:00:07  at 
> org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1387)
> 2022-01-31T05:00:07.3124241Z Jan 31 05:00:07  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
> 2022-01-31T05:00:07.3125106Z Jan 31 05:00:07  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
> 2022-01-31T05:00:07.3126063Z Jan 31 05:00:07  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
> 2022-01-31T05:00:07.3127207Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-01-31T05:00:07.3127982Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-01-31T05:00:07.3128741Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-01-31T05:00:07.3129497Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
> 2022-01-31T05:00:07.3130385Z Jan 31 05:00:07  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:45)
> 2022-01-31T05:00:07.3131092Z Jan 31 05:00:07  at 
> akka.dispatch.OnComplete.internal(Future.scala:299)
> 2022-01-31T05:00:07.3131695Z Jan 31 05:00:07  at 
> akka.dispatch.OnComplete.internal(Future.scala:297)
> 2022-01-31T05:00:07.3132310Z Jan 31 05:00:07  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
> 2022-01-31T05:00:07.3132943Z Jan 31 05:00:07  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
> 2022-01-31T05:00:07.3133577Z Jan 31 05:00:07  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
> 2022-01-31T05:00:07.3134340Z Jan 31 05:00:07  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65)
> 2022-01-31T05:00:07.3135149Z Jan 31 05:00:07  at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
> 2022-01-31T05:00:07.3135898Z Jan 31 05:00:07  at 
> scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284)
> 2022-01
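The trace above points at a race: the test invoked the resourcemanager RPC endpoint before it had been started, so Akka discarded the message. A generic way to harden such tests, sketched below with a hypothetical helper (this is not Flink's test utility, just an illustration of the pattern), is to poll a readiness condition with a deadline before issuing the call.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

// Illustrative await-until helper: poll a readiness condition (e.g. "has the
// RPC endpoint been started?") until it holds or a deadline passes.
public final class Await {
    private Await() {}

    public static void until(BooleanSupplier condition, long timeoutMillis)
            throws InterruptedException, TimeoutException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
        while (!condition.getAsBoolean()) {
            if (System.nanoTime() >= deadline) {
                throw new TimeoutException("condition not met within " + timeoutMillis + " ms");
            }
            Thread.sleep(10); // back off briefly between polls
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~50 ms, simulating delayed endpoint startup.
        Await.until(() -> System.currentTimeMillis() - start > 50, 1_000);
        System.out.println("endpoint ready");
    }
}
```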

[GitHub] [flink] flinkbot edited a comment on pull request #18564: [FLINK-24345] [docs] Translate "File Systems" page of "Internals" int…

2022-01-31 Thread GitBox


flinkbot edited a comment on pull request #18564:
URL: https://github.com/apache/flink/pull/18564#issuecomment-1025426056


   
   ## CI report:
   
   * 35abc632db9a1244bab14ed9188e0e7a88fe40e9 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30493)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24844) CassandraConnectorITCase.testCassandraBatchPojoFormat fails on AZP

2022-01-31 Thread Etienne Chauchot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17484562#comment-17484562
 ] 

Etienne Chauchot commented on FLINK-24844:
--

[~gaoyunhaii] of course!

> CassandraConnectorITCase.testCassandraBatchPojoFormat fails on AZP
> --
>
> Key: FLINK-24844
> URL: https://issues.apache.org/jira/browse/FLINK-24844
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.14.0
>Reporter: Till Rohrmann
>Assignee: Etienne Chauchot
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0, 1.13.6, 1.14.4
>
>
> The test {{CassandraConnectorITCase.testCassandraBatchPojoFormat}} fails on 
> AZP with
> {code}
> 2021-11-09T00:32:42.1369473Z Nov 09 00:32:42 [ERROR] Tests run: 17, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 152.962 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase
> 2021-11-09T00:32:42.1371638Z Nov 09 00:32:42 [ERROR] 
> testCassandraBatchPojoFormat  Time elapsed: 20.378 s  <<< ERROR!
> 2021-11-09T00:32:42.1372881Z Nov 09 00:32:42 
> com.datastax.driver.core.exceptions.AlreadyExistsException: Table 
> flink.batches already exists
> 2021-11-09T00:32:42.1373913Z Nov 09 00:32:42  at 
> com.datastax.driver.core.exceptions.AlreadyExistsException.copy(AlreadyExistsException.java:111)
> 2021-11-09T00:32:42.1374921Z Nov 09 00:32:42  at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
> 2021-11-09T00:32:42.1379615Z Nov 09 00:32:42  at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
> 2021-11-09T00:32:42.1380668Z Nov 09 00:32:42  at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
> 2021-11-09T00:32:42.1381523Z Nov 09 00:32:42  at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
> 2021-11-09T00:32:42.1382552Z Nov 09 00:32:42  at 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.testCassandraBatchPojoFormat(CassandraConnectorITCase.java:543)
> 2021-11-09T00:32:42.1383487Z Nov 09 00:32:42  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-11-09T00:32:42.1384433Z Nov 09 00:32:42  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-11-09T00:32:42.1385336Z Nov 09 00:32:42  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-11-09T00:32:42.1386119Z Nov 09 00:32:42  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-11-09T00:32:42.1387204Z Nov 09 00:32:42  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2021-11-09T00:32:42.1388225Z Nov 09 00:32:42  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-11-09T00:32:42.1389101Z Nov 09 00:32:42  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2021-11-09T00:32:42.1400913Z Nov 09 00:32:42  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-11-09T00:32:42.1401588Z Nov 09 00:32:42  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2021-11-09T00:32:42.1402487Z Nov 09 00:32:42  at 
> org.apache.flink.testutils.junit.RetryRule$RetryOnExceptionStatement.evaluate(RetryRule.java:192)
> 2021-11-09T00:32:42.1403055Z Nov 09 00:32:42  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2021-11-09T00:32:42.1403556Z Nov 09 00:32:42  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2021-11-09T00:32:42.1404008Z Nov 09 00:32:42  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2021-11-09T00:32:42.1404650Z Nov 09 00:32:42  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2021-11-09T00:32:42.1405151Z Nov 09 00:32:42  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2021-11-09T00:32:42.1405632Z Nov 09 00:32:42  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2021-11-09T00:32:42.1406166Z Nov 09 00:32:42  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2021-11-09T00:32:42.1406670Z Nov 09 00:32:42  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2021-11-09T00:32:42.1407125Z Nov 09 00:32:42  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2021-11-09T00:32:42.1407599Z Nov 09 00:32:42  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2021-11-09T00:32:42.1408258Z Nov 09 00:32:42  at 
> org.junit.runners.ParentR
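The failure above is an isolation problem: a `flink.batches` table left behind by an earlier (likely timed-out) run makes the table creation in the next run throw `AlreadyExistsException`. One common tactic for this class of flakiness, shown here as a hypothetical helper and not necessarily the fix applied for this ticket, is to give each test run a unique table name.

```java
import java.util.UUID;

// Hypothetical test-isolation helper: derive a per-run table name so leftovers
// from a previous run cannot collide with the current one.
public final class TestTableNames {
    private TestTableNames() {}

    /** e.g. "batches_3f2a9c1d"; keeps to alphanumeric/underscore identifiers. */
    public static String unique(String base) {
        String suffix = UUID.randomUUID().toString().substring(0, 8);
        return base + "_" + suffix;
    }

    public static void main(String[] args) {
        String name = unique("batches");
        System.out.println(name.startsWith("batches_")); // true
    }
}
```

Alternatively the setup can issue `CREATE TABLE IF NOT EXISTS` and truncate before each test, at the cost of sharing state between runs.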

[GitHub] [flink] flinkbot edited a comment on pull request #18353: [FLINK-25129][docs]project configuation changes in docs

2022-01-31 Thread GitBox


flinkbot edited a comment on pull request #18353:
URL: https://github.com/apache/flink/pull/18353#issuecomment-1012981377


   
   ## CI report:
   
   * 2e955fd5db3754a069a1ed8c48ce2e581aaef51b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30417)
 
   * 50e221c6373f2774eeb8ec134df27caef83b2d40 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] echauchot commented on pull request #18509: [FLINK-25771][connectors][Cassandra][test] Raise all read/write/miscellaneous requests timeouts

2022-01-31 Thread GitBox


echauchot commented on pull request #18509:
URL: https://github.com/apache/flink/pull/18509#issuecomment-1025491113


   @fapaul can we merge this PR ?






[GitHub] [flink] flinkbot edited a comment on pull request #18353: [FLINK-25129][docs]project configuation changes in docs

2022-01-31 Thread GitBox


flinkbot edited a comment on pull request #18353:
URL: https://github.com/apache/flink/pull/18353#issuecomment-1012981377


   
   ## CI report:
   
   * 2e955fd5db3754a069a1ed8c48ce2e581aaef51b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30417)
 
   * 50e221c6373f2774eeb8ec134df27caef83b2d40 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30495)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] fapaul merged pull request #18509: [FLINK-25771][connectors][Cassandra][test] Raise all read/write/miscellaneous requests timeouts

2022-01-31 Thread GitBox


fapaul merged pull request #18509:
URL: https://github.com/apache/flink/pull/18509


   






[jira] [Commented] (FLINK-25882) Translate updated privacy policy to Chinese

2022-01-31 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17484565#comment-17484565
 ] 

Martijn Visser commented on FLINK-25882:


[~liyubin117] I've assigned it to you, thank you

> Translate updated privacy policy to Chinese
> ---
>
> Key: FLINK-25882
> URL: https://issues.apache.org/jira/browse/FLINK-25882
> Project: Flink
>  Issue Type: Sub-task
>  Components: Project Website
>Reporter: Martijn Visser
>Assignee: Yubin Li
>Priority: Major
>  Labels: chinese-translation
>
> After https://github.com/apache/flink-web/pull/503 is merged, it requires 
> {{privacy-policy.zh.md}} to be translated to Chinese. 





[jira] [Assigned] (FLINK-25882) Translate updated privacy policy to Chinese

2022-01-31 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser reassigned FLINK-25882:
--

Assignee: Yubin Li

> Translate updated privacy policy to Chinese
> ---
>
> Key: FLINK-25882
> URL: https://issues.apache.org/jira/browse/FLINK-25882
> Project: Flink
>  Issue Type: Sub-task
>  Components: Project Website
>Reporter: Martijn Visser
>Assignee: Yubin Li
>Priority: Major
>  Labels: chinese-translation
>
> After https://github.com/apache/flink-web/pull/503 is merged, it requires 
> {{privacy-policy.zh.md}} to be translated to Chinese. 





[jira] [Resolved] (FLINK-25771) CassandraConnectorITCase.testRetrialAndDropTables timeouts on AZP

2022-01-31 Thread Fabian Paul (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Paul resolved FLINK-25771.
-
Resolution: Fixed

> CassandraConnectorITCase.testRetrialAndDropTables timeouts on AZP
> -
>
> Key: FLINK-25771
> URL: https://issues.apache.org/jira/browse/FLINK-25771
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.15.0, 1.13.5
>Reporter: Till Rohrmann
>Assignee: Etienne Chauchot
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
>
> The test {{CassandraConnectorITCase.testRetrialAndDropTables}} fails on AZP 
> with
> {code}
> Jan 23 01:02:52 com.datastax.driver.core.exceptions.NoHostAvailableException: 
> All host(s) tried for query failed (tried: /172.17.0.1:59220 
> (com.datastax.driver.core.exceptions.OperationTimedOutException: 
> [/172.17.0.1] Timed out waiting for server response))
> Jan 23 01:02:52   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
> Jan 23 01:02:52   at 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.testRetrialAndDropTables(CassandraConnectorITCase.java:554)
> Jan 23 01:02:52   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jan 23 01:02:52   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jan 23 01:02:52   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jan 23 01:02:52   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 23 01:02:52   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jan 23 01:02:52   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jan 23 01:02:52   at 
> org.apache.flink.testutils.junit.RetryRule$RetryOnExceptionStatement.evaluate(RetryRule.java:196)
> Jan 23 01:02:52   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jan 23 01:02:52   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Jan 23 01:02:52   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Jan 23 01:02:52   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Jan 23 01:02:52   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jan 23 01:02:52   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
> Jan 23 01:02:52   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> Jan 23 01:02:52   at org.junit.runners.Suite.runChild(Suite.java:128)
> Jan 23 01:02:52   at org.junit.runners.Suite.runChild(Suite.java:27)
> Jan 23 01:02:52   a
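The merged fix (PR #18509) raises the Cassandra driver's read/write/miscellaneous request timeouts so the test no longer trips `OperationTimedOutException` on slow CI machines. As a generic illustration of coping with timeout-prone calls, and explicitly not the driver configuration the PR changes, a call can also be wrapped in a retry loop with a growing backoff:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.TimeoutException;

// Illustrative retry-with-backoff wrapper, not the Cassandra driver API:
// each timed-out attempt sleeps exponentially longer before retrying.
public final class RetryWithBackoff {
    private RetryWithBackoff() {}

    public static <T> T call(Callable<T> task, int attempts) throws Exception {
        Exception last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return task.call();
            } catch (TimeoutException e) {
                last = e; // remember the failure, back off, and try again
                Thread.sleep((long) (100 * Math.pow(2, i)));
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Fails twice with a timeout, then succeeds on the third attempt.
        String result = call(() -> {
            if (++calls[0] < 3) throw new TimeoutException("simulated");
            return "ok";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts"); // ok after 3 attempts
    }
}
```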

[jira] [Commented] (FLINK-25771) CassandraConnectorITCase.testRetrialAndDropTables timeouts on AZP

2022-01-31 Thread Fabian Paul (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17484566#comment-17484566
 ] 

Fabian Paul commented on FLINK-25771:
-

Merged in master: 3144fae0dc8f3ef4b2ed6a8da4cdff920bcc4128

> CassandraConnectorITCase.testRetrialAndDropTables timeouts on AZP
> -
>
> Key: FLINK-25771
> URL: https://issues.apache.org/jira/browse/FLINK-25771
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.15.0, 1.13.5
>Reporter: Till Rohrmann
>Assignee: Etienne Chauchot
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
>
> The test {{CassandraConnectorITCase.testRetrialAndDropTables}} fails on AZP 
> with
> {code}
> Jan 23 01:02:52 com.datastax.driver.core.exceptions.NoHostAvailableException: 
> All host(s) tried for query failed (tried: /172.17.0.1:59220 
> (com.datastax.driver.core.exceptions.OperationTimedOutException: 
> [/172.17.0.1] Timed out waiting for server response))
> Jan 23 01:02:52   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
> Jan 23 01:02:52   at 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.testRetrialAndDropTables(CassandraConnectorITCase.java:554)
> Jan 23 01:02:52   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jan 23 01:02:52   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jan 23 01:02:52   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jan 23 01:02:52   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 23 01:02:52   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jan 23 01:02:52   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jan 23 01:02:52   at 
> org.apache.flink.testutils.junit.RetryRule$RetryOnExceptionStatement.evaluate(RetryRule.java:196)
> Jan 23 01:02:52   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jan 23 01:02:52   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Jan 23 01:02:52   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Jan 23 01:02:52   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Jan 23 01:02:52   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jan 23 01:02:52   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
> Jan 23 01:02:52   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> Jan 23 01:02:52   at org.junit.runners.Suite.runChild(Suite.java:128)
> Jan 2

[jira] [Resolved] (FLINK-25864) Implement Matomo on Flink project website

2022-01-31 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser resolved FLINK-25864.

Fix Version/s: 1.15.0
   Resolution: Fixed

Fixed via:

[FLINK-25864] Remove Google Analytics implementation
6285f5bed9625772cc7cef67cc46c61f8c717f63

[FLINK-25864] Add Matomo tracking code to base layout
96e4427c00fe4b29228413028024cc7861072430

> Implement Matomo on Flink project website
> -
>
> Key: FLINK-25864
> URL: https://issues.apache.org/jira/browse/FLINK-25864
> Project: Flink
>  Issue Type: Sub-task
>  Components: Project Website
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>






[jira] (FLINK-25432) Implement cleanup strategy

2022-01-31 Thread Matthias Pohl (Jira)


[ https://issues.apache.org/jira/browse/FLINK-25432 ]


Matthias Pohl deleted comment on FLINK-25432:
---

was (Author: mapohl):
The work is split up into multiple PRs:
 * [Github PR|https://github.com/apache/flink/pull/18536] about introducing the 
CleanableResource and ResourceCleaner interface
 * ...

> Implement cleanup strategy
> --
>
> Key: FLINK-25432
> URL: https://issues.apache.org/jira/browse/FLINK-25432
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.15.0
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
>
> We want to combine the job-specific cleanup of the different resources and 
> provide a common {{ResourceCleaner}} taking care of the actual cleanup of all 
> resources.
> This needs to be integrated into the {{Dispatcher}}.
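The composition described above can be sketched as follows. The interface and class names mirror the ticket's wording (CleanableResource, ResourceCleaner) but are assumptions for illustration, not Flink's actual signatures; the real integration point would be the {{Dispatcher}}.

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch: a ResourceCleaner fans one job-specific cleanup request
// out to every registered resource, collecting failures instead of aborting.
public class ResourceCleanerSketch {

    /** One resource that knows how to clean up after a finished job. */
    interface CleanableResource {
        void cleanupJob(String jobId) throws Exception;
    }

    static final class ResourceCleaner {
        private final List<CleanableResource> resources = new ArrayList<>();

        ResourceCleaner with(CleanableResource r) {
            resources.add(r);
            return this;
        }

        /** Attempts all cleanups; one failing resource does not skip the rest. */
        List<Exception> cleanup(String jobId) {
            List<Exception> failures = new ArrayList<>();
            for (CleanableResource r : resources) {
                try {
                    r.cleanupJob(jobId);
                } catch (Exception e) {
                    failures.add(e);
                }
            }
            return failures;
        }
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        ResourceCleaner cleaner = new ResourceCleaner()
                .with(jobId -> log.add("ha-services:" + jobId))
                .with(jobId -> log.add("blob-store:" + jobId));
        List<Exception> failures = cleaner.cleanup("job-42");
        System.out.println(log + " failures=" + failures.size());
    }
}
```

Collecting failures rather than failing fast matters for cleanup: every resource gets a chance to release its state even if an earlier one throws.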





[jira] [Created] (FLINK-25886) Hard-coded ZK version in FlinkContainersBuilder#buildZookeeperContainer

2022-01-31 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-25886:


 Summary: Hard-coded ZK version in 
FlinkContainersBuilder#buildZookeeperContainer
 Key: FLINK-25886
 URL: https://issues.apache.org/jira/browse/FLINK-25886
 Project: Flink
  Issue Type: Technical Debt
  Components: Tests
Affects Versions: 1.15.0
Reporter: Chesnay Schepler


The Zookeeper version is hard-coded in 
FlinkContainersBuilder#buildZookeeperContainer, when it ideally should be tied 
to the {{zookeeper.version}} maven property.
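One way to tie the container to the build's {{zookeeper.version}} property, sketched below, is to have Maven pass the property through to the test JVM (e.g. via surefire's systemPropertyVariables) and read it at runtime. The property name matches the ticket; the fallback value and class are assumptions, not Flink's actual configuration.

```java
// Illustrative only: resolve the ZK version from a build-supplied system
// property instead of hard-coding it in buildZookeeperContainer.
public final class ZkVersionResolver {
    private ZkVersionResolver() {}

    // Assumed default for local runs where the build did not set the property.
    static final String FALLBACK = "3.5.9";

    /** Prefer the build-supplied property; fall back only when it is absent. */
    public static String resolve() {
        return System.getProperty("zookeeper.version", FALLBACK);
    }

    public static void main(String[] args) {
        System.out.println("zookeeper:" + resolve());
    }
}
```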





[jira] [Updated] (FLINK-25886) Hard-coded ZK version in FlinkContainersBuilder#buildZookeeperContainer

2022-01-31 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-25886:
-
Priority: Minor  (was: Major)

> Hard-coded ZK version in FlinkContainersBuilder#buildZookeeperContainer
> ---
>
> Key: FLINK-25886
> URL: https://issues.apache.org/jira/browse/FLINK-25886
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Tests
>Affects Versions: 1.15.0
>Reporter: Chesnay Schepler
>Priority: Minor
>
> The Zookeeper version is hard-coded in 
> FlinkContainersBuilder#buildZookeeperContainer, when it ideally should be 
> tied to the {{zookeeper.version}} maven property.





[GitHub] [flink] zentol opened a new pull request #18565: [FLINK-25145][build] Drop ZK 3.4 / Support ZK 3.6

2022-01-31 Thread GitBox


zentol opened a new pull request #18565:
URL: https://github.com/apache/flink/pull/18565


   - drop zookeeper 3.4
   - zookeeper 3.5 is new default
   - zookeeper 3.6 is new opt-in version
   - exclude all optional netty dependencies of zookeeper as we don't need them
   - HBase pinned to 3.4 because of HBase limitations






[jira] [Comment Edited] (FLINK-24844) CassandraConnectorITCase.testCassandraBatchPojoFormat fails on AZP

2022-01-31 Thread Etienne Chauchot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17484562#comment-17484562
 ] 

Etienne Chauchot edited comment on FLINK-24844 at 1/31/22, 8:39 AM:


[~gaoyunhaii] it was already backported to 1.13
cf: 26fb7a269f5fe3fc5dd0d52d88afb2915e452d1b
 
 


was (Author: echauchot):
[~gaoyunhaii] of course !

> CassandraConnectorITCase.testCassandraBatchPojoFormat fails on AZP
> --
>
> Key: FLINK-24844
> URL: https://issues.apache.org/jira/browse/FLINK-24844
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.14.0
>Reporter: Till Rohrmann
>Assignee: Etienne Chauchot
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0, 1.13.6, 1.14.4
>
>
> The test {{CassandraConnectorITCase.testCassandraBatchPojoFormat}} fails on 
> AZP with
> {code}
> 2021-11-09T00:32:42.1369473Z Nov 09 00:32:42 [ERROR] Tests run: 17, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 152.962 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase
> 2021-11-09T00:32:42.1371638Z Nov 09 00:32:42 [ERROR] 
> testCassandraBatchPojoFormat  Time elapsed: 20.378 s  <<< ERROR!
> 2021-11-09T00:32:42.1372881Z Nov 09 00:32:42 
> com.datastax.driver.core.exceptions.AlreadyExistsException: Table 
> flink.batches already exists
> 2021-11-09T00:32:42.1373913Z Nov 09 00:32:42  at 
> com.datastax.driver.core.exceptions.AlreadyExistsException.copy(AlreadyExistsException.java:111)
> 2021-11-09T00:32:42.1374921Z Nov 09 00:32:42  at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
> 2021-11-09T00:32:42.1379615Z Nov 09 00:32:42  at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
> 2021-11-09T00:32:42.1380668Z Nov 09 00:32:42  at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
> 2021-11-09T00:32:42.1381523Z Nov 09 00:32:42  at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
> 2021-11-09T00:32:42.1382552Z Nov 09 00:32:42  at 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.testCassandraBatchPojoFormat(CassandraConnectorITCase.java:543)
> 2021-11-09T00:32:42.1383487Z Nov 09 00:32:42  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-11-09T00:32:42.1384433Z Nov 09 00:32:42  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-11-09T00:32:42.1385336Z Nov 09 00:32:42  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-11-09T00:32:42.1386119Z Nov 09 00:32:42  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-11-09T00:32:42.1387204Z Nov 09 00:32:42  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2021-11-09T00:32:42.1388225Z Nov 09 00:32:42  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-11-09T00:32:42.1389101Z Nov 09 00:32:42  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2021-11-09T00:32:42.1400913Z Nov 09 00:32:42  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-11-09T00:32:42.1401588Z Nov 09 00:32:42  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2021-11-09T00:32:42.1402487Z Nov 09 00:32:42  at 
> org.apache.flink.testutils.junit.RetryRule$RetryOnExceptionStatement.evaluate(RetryRule.java:192)
> 2021-11-09T00:32:42.1403055Z Nov 09 00:32:42  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2021-11-09T00:32:42.1403556Z Nov 09 00:32:42  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2021-11-09T00:32:42.1404008Z Nov 09 00:32:42  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2021-11-09T00:32:42.1404650Z Nov 09 00:32:42  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2021-11-09T00:32:42.1405151Z Nov 09 00:32:42  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2021-11-09T00:32:42.1405632Z Nov 09 00:32:42  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2021-11-09T00:32:42.1406166Z Nov 09 00:32:42  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2021-11-09T00:32:42.1406670Z Nov 09 00:32:42  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2021-11-09T00:32:42.1407125Z Nov 09 00:32:42  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2021-11-09T00:32:42.14

[GitHub] [flink] flinkbot commented on pull request #18565: [FLINK-25145][build] Drop ZK 3.4 / Support ZK 3.6

2022-01-31 Thread GitBox


flinkbot commented on pull request #18565:
URL: https://github.com/apache/flink/pull/18565#issuecomment-1025498628


   
   ## CI report:
   
   * 4a53bc843f607a866c795cd983d71e811e513ef2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Created] (FLINK-25887) FLIP-193: Snapshots ownership follow ups

2022-01-31 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-25887:


 Summary: FLIP-193: Snapshots ownership follow ups
 Key: FLINK-25887
 URL: https://issues.apache.org/jira/browse/FLINK-25887
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Checkpointing
Reporter: Dawid Wysakowicz
 Fix For: 1.16.0






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25201) Implement duplicating for gcs

2022-01-31 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-25201:
-
Parent Issue: FLINK-25887  (was: FLINK-25154)

> Implement duplicating for gcs
> -
>
> Key: FLINK-25201
> URL: https://issues.apache.org/jira/browse/FLINK-25201
> Project: Flink
>  Issue Type: Sub-task
>  Components: FileSystems
>Reporter: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.15.0
>
>
> We can use https://cloud.google.com/storage/docs/json_api/v1/objects/rewrite





[jira] [Updated] (FLINK-25200) Implement duplicating for s3 filesystem

2022-01-31 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-25200:
-
Parent Issue: FLINK-25887  (was: FLINK-25154)

> Implement duplicating for s3 filesystem
> ---
>
> Key: FLINK-25200
> URL: https://issues.apache.org/jira/browse/FLINK-25200
> Project: Flink
>  Issue Type: Sub-task
>  Components: FileSystems
>Reporter: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.15.0
>
>
> We can use https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html
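
The server-side copy APIs referenced in these sub-tasks (S3 CopyObject, GCS rewrite, and similar) all avoid streaming the object's bytes through the client. The sketch below illustrates the idea with a local-filesystem analogue; the `duplicate` method name is hypothetical and `Files.copy` merely stands in for the remote copy call — this is not Flink's actual duplicating interface.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch: duplicating a checkpoint artifact without reading its bytes into
// the client, analogous to S3 CopyObject / GCS rewrite. Files.copy stands in
// for the remote server-side copy API; duplicate() is a hypothetical name.
public class DuplicateSketch {
    static Path duplicate(Path source, Path target) throws IOException {
        // Locally, Files.copy is the closest analogue to a server-side copy.
        return Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("checkpoint", ".bin");
        Files.write(src, new byte[] {1, 2, 3});
        Path dst = duplicate(src, src.resolveSibling("checkpoint-copy.bin"));
        System.out.println(Files.size(dst)); // prints 3
    }
}
```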





[GitHub] [flink] flinkbot commented on pull request #18565: [FLINK-25145][build] Drop ZK 3.4 / Support ZK 3.6

2022-01-31 Thread GitBox


flinkbot commented on pull request #18565:
URL: https://github.com/apache/flink/pull/18565#issuecomment-1025500042


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 4a53bc843f607a866c795cd983d71e811e513ef2 (Mon Jan 31 
08:44:16 UTC 2022)
   
   **Warnings:**
* **6 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[jira] [Updated] (FLINK-25322) Support no-claim mode in changelog state backend

2022-01-31 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-25322:
-
Parent Issue: FLINK-25887  (was: FLINK-25154)

> Support no-claim mode in changelog state backend
> 
>
> Key: FLINK-25322
> URL: https://issues.apache.org/jira/browse/FLINK-25322
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Reporter: Dawid Wysakowicz
>Assignee: Yuan Mei
>Priority: Major
> Fix For: 1.15.0
>
>






[jira] [Commented] (FLINK-25670) StateFun: Unable to handle oversize HTTP message if state size is large

2022-01-31 Thread Gabor Somogyi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17484569#comment-17484569
 ] 

Gabor Somogyi commented on FLINK-25670:
---

I personally can't see which endpoint within Flink throws that exception, 
because the stack trace only shows Netty entries.
If you could be a bit more specific about the use case and about which 
endpoint statefun-flink-distribution tries to send a request to, then maybe we 
can help. ChunkedWriteHandler adds support for writing a large data stream 
asynchronously without spending a lot of memory or risking an 
OutOfMemoryError. Without knowing the actual endpoint, it is hard to tell 
where to add it.

If you can't answer this, no problem, but then I suggest opening a Jira 
against statefun-flink-distribution to find out the details.
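
The idea behind Netty's ChunkedWriteHandler mentioned above — streaming a large payload in fixed-size pieces instead of materialising it in one buffer — can be sketched with plain JDK streams. The 8 KiB chunk size and the stream types are illustrative assumptions, not StateFun's actual configuration.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of chunked writing: only one small chunk is ever held in memory,
// regardless of how large the overall payload is.
public class ChunkedCopy {
    static long copyInChunks(InputStream in, OutputStream out) throws IOException {
        byte[] chunk = new byte[8 * 1024]; // one chunk in memory at a time
        long total = 0;
        int n;
        while ((n = in.read(chunk)) != -1) {
            out.write(chunk, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[100_000]; // stands in for a large state blob
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long written = copyInChunks(new ByteArrayInputStream(payload), sink);
        System.out.println(written); // prints 100000
    }
}
```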


> StateFun: Unable to handle oversize HTTP message if state size is large
> ---
>
> Key: FLINK-25670
> URL: https://issues.apache.org/jira/browse/FLINK-25670
> Project: Flink
>  Issue Type: Bug
>  Components: Stateful Functions
>Affects Versions: statefun-3.1.1
>Reporter: Kyle
>Priority: Major
> Attachments: 00-module.yaml, functions.py
>
>
> Per requirement, we need to handle state that is about 500MB in size (72MB 
> of state is allocated in the commented code, as attached). However, the HTTP 
> message size limit prevents us from sending large state back to the StateFun 
> cluster after saving state in a Stateful Function.
> Another question is whether large data may be sent to a Stateful Function 
> from an ingress.
>  
> 2022-01-17 07:57:18,416 WARN  
> org.apache.flink.statefun.flink.core.nettyclient.NettyRequest [] - Exception 
> caught while trying to deliver a message: (attempt 
> #10)ToFunctionRequestSummary(address=Address(example, hello, ), 
> batchSize=1, totalSizeInBytes=80, numberOfStates=2)
> org.apache.flink.shaded.netty4.io.netty.handler.codec.TooLongFrameException: 
> Response entity too large: DefaultHttpResponse(decodeResult: success, 
> version: HTTP/1.1)
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> Content-Length: 40579630
> Date: Mon, 17 Jan 2022 07:57:18 GMT
> Server: Python/3.9 aiohttp/3.8.1
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.http.HttpObjectAggregator.handleOversizedMessage(HttpObjectAggregator.java:276)
>  ~[statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.http.HttpObjectAggregator.handleOversizedMessage(HttpObjectAggregator.java:87)
>  ~[statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageAggregator.invokeHandleOversizedMessage(MessageAggregator.java:404)
>  ~[statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageAggregator.decode(MessageAggregator.java:254)
>  ~[statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
>  ~[statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(By

[jira] [Updated] (FLINK-25202) Implement duplicating for azure

2022-01-31 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-25202:
-
Parent Issue: FLINK-25887  (was: FLINK-25154)

> Implement duplicating for azure
> ---
>
> Key: FLINK-25202
> URL: https://issues.apache.org/jira/browse/FLINK-25202
> Project: Flink
>  Issue Type: Sub-task
>  Components: FileSystems
>Reporter: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.15.0
>
>
> We can use: 
> https://docs.microsoft.com/en-us/rest/api/storageservices/copy-blob





[jira] [Updated] (FLINK-25203) Implement duplicating for aliyun

2022-01-31 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-25203:
-
Parent Issue: FLINK-25887  (was: FLINK-25154)

> Implement duplicating for aliyun
> 
>
> Key: FLINK-25203
> URL: https://issues.apache.org/jira/browse/FLINK-25203
> Project: Flink
>  Issue Type: Sub-task
>  Components: FileSystems
>Reporter: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.15.0
>
>
> We can use: https://www.alibabacloud.com/help/doc-detail/31979.htm





[GitHub] [flink] flinkbot edited a comment on pull request #18565: [FLINK-25145][build] Drop ZK 3.4 / Support ZK 3.6

2022-01-31 Thread GitBox


flinkbot edited a comment on pull request #18565:
URL: https://github.com/apache/flink/pull/18565#issuecomment-1025498628


   
   ## CI report:
   
   * 4a53bc843f607a866c795cd983d71e811e513ef2 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30497)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Updated] (FLINK-25201) Implement duplicating for gcs

2022-01-31 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-25201:
-
Fix Version/s: 1.16.0
   (was: 1.15.0)

> Implement duplicating for gcs
> -
>
> Key: FLINK-25201
> URL: https://issues.apache.org/jira/browse/FLINK-25201
> Project: Flink
>  Issue Type: Sub-task
>  Components: FileSystems
>Reporter: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.16.0
>
>
> We can use https://cloud.google.com/storage/docs/json_api/v1/objects/rewrite





[jira] [Updated] (FLINK-25203) Implement duplicating for aliyun

2022-01-31 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-25203:
-
Fix Version/s: 1.16.0
   (was: 1.15.0)

> Implement duplicating for aliyun
> 
>
> Key: FLINK-25203
> URL: https://issues.apache.org/jira/browse/FLINK-25203
> Project: Flink
>  Issue Type: Sub-task
>  Components: FileSystems
>Reporter: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.16.0
>
>
> We can use: https://www.alibabacloud.com/help/doc-detail/31979.htm





[jira] [Updated] (FLINK-25200) Implement duplicating for s3 filesystem

2022-01-31 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-25200:
-
Fix Version/s: 1.16.0
   (was: 1.15.0)

> Implement duplicating for s3 filesystem
> ---
>
> Key: FLINK-25200
> URL: https://issues.apache.org/jira/browse/FLINK-25200
> Project: Flink
>  Issue Type: Sub-task
>  Components: FileSystems
>Reporter: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.16.0
>
>
> We can use https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html





[jira] [Updated] (FLINK-25202) Implement duplicating for azure

2022-01-31 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-25202:
-
Fix Version/s: 1.16.0
   (was: 1.15.0)

> Implement duplicating for azure
> ---
>
> Key: FLINK-25202
> URL: https://issues.apache.org/jira/browse/FLINK-25202
> Project: Flink
>  Issue Type: Sub-task
>  Components: FileSystems
>Reporter: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.16.0
>
>
> We can use: 
> https://docs.microsoft.com/en-us/rest/api/storageservices/copy-blob





[jira] [Updated] (FLINK-25322) Support no-claim mode in changelog state backend

2022-01-31 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-25322:
-
Fix Version/s: 1.16.0
   (was: 1.15.0)

> Support no-claim mode in changelog state backend
> 
>
> Key: FLINK-25322
> URL: https://issues.apache.org/jira/browse/FLINK-25322
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Reporter: Dawid Wysakowicz
>Assignee: Yuan Mei
>Priority: Major
> Fix For: 1.16.0
>
>






[GitHub] [flink] flinkbot edited a comment on pull request #18518: [FLINK-24229][connectors/dynamodb] Added DynamoDB connector

2022-01-31 Thread GitBox


flinkbot edited a comment on pull request #18518:
URL: https://github.com/apache/flink/pull/18518#issuecomment-1022043829


   
   ## CI report:
   
   * 1ed40f830db68d828612207897418a868e6b55f7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30476)
 
   * bce4406f01127a6956ef00592014e9c5b9d00aa4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] matriv commented on a change in pull request #18479: [FLINK-25387] Introduce ExecNodeMetadata

2022-01-31 Thread GitBox


matriv commented on a change in pull request #18479:
URL: https://github.com/apache/flink/pull/18479#discussion_r795447587



##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/ExecNodeContext.java
##
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec;
+
+import org.apache.flink.table.planner.plan.utils.ExecNodeMetadataUtil;
+
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonCreator;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonValue;
+
+/**
+ * Helper Pojo that holds the necessary identifier fields that are used for 
JSON plan serialisation
+ * and de-serialisation.
+ */
+public class ExecNodeContext {
+
+private final int id;
+private final String name;
+private final Integer version;
+
+public ExecNodeContext(int id, String name, Integer version) {
+this.id = id;
+this.name = name;
+this.version = version;
+}
+
+public ExecNodeContext(int id) {

Review comment:
   It has been refactored, and now we have a static method `withId` for 
when we want to construct the context from the annotation.








[jira] [Commented] (FLINK-25885) ClusterEntrypointTest.testWorkingDirectoryIsDeletedIfApplicationCompletes failed on azure

2022-01-31 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17484571#comment-17484571
 ] 

Till Rohrmann commented on FLINK-25885:
---

I think the problem is that 
{{ResourceManagerServiceImpl.deregisterApplication}} does not properly wait 
for the {{ResourceManager}} to be started. This was introduced with 
FLINK-21667.
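
The race described here — an RPC arriving before the endpoint has started — can be sketched with plain CompletableFutures. The class and method names below are illustrative only, not the actual ResourceManagerServiceImpl code.

```java
import java.util.concurrent.CompletableFuture;

// Sketch: gate an incoming call on a "started" future so it is only handled
// once the endpoint is up, instead of being discarded by a not-yet-started
// rpc endpoint. Names are illustrative; the real fix lives elsewhere.
public class StartGateSketch {
    private final CompletableFuture<Void> started = new CompletableFuture<>();

    void markStarted() {
        started.complete(null);
    }

    CompletableFuture<String> deregisterApplication(String diagnostics) {
        // Chain on the start future instead of forwarding immediately.
        return started.thenApply(ignored -> "deregistered: " + diagnostics);
    }

    public static void main(String[] args) {
        StartGateSketch svc = new StartGateSketch();
        CompletableFuture<String> result = svc.deregisterApplication("finished");
        System.out.println(result.isDone()); // prints false: not started yet
        svc.markStarted();
        System.out.println(result.join()); // prints deregistered: finished
    }
}
```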

> ClusterEntrypointTest.testWorkingDirectoryIsDeletedIfApplicationCompletes  
> failed on azure
> --
>
> Key: FLINK-25885
> URL: https://issues.apache.org/jira/browse/FLINK-25885
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> 2022-01-31T05:00:07.3113870Z Jan 31 05:00:07 
> java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.rpc.akka.exceptions.AkkaRpcException: Discard 
> message, because the rpc endpoint 
> akka.tcp://flink@127.0.0.1:6123/user/rpc/resourcemanager_2 has not been 
> started yet.
> 2022-01-31T05:00:07.3115008Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
> 2022-01-31T05:00:07.3115778Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
> 2022-01-31T05:00:07.3116527Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:607)
> 2022-01-31T05:00:07.3117267Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2022-01-31T05:00:07.3118011Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-01-31T05:00:07.3118770Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
> 2022-01-31T05:00:07.3119608Z Jan 31 05:00:07  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:251)
> 2022-01-31T05:00:07.3120425Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-01-31T05:00:07.3121199Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-01-31T05:00:07.3121957Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-01-31T05:00:07.3122716Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
> 2022-01-31T05:00:07.3123457Z Jan 31 05:00:07  at 
> org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1387)
> 2022-01-31T05:00:07.3124241Z Jan 31 05:00:07  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
> 2022-01-31T05:00:07.3125106Z Jan 31 05:00:07  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
> 2022-01-31T05:00:07.3126063Z Jan 31 05:00:07  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
> 2022-01-31T05:00:07.3127207Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-01-31T05:00:07.3127982Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-01-31T05:00:07.3128741Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-01-31T05:00:07.3129497Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
> 2022-01-31T05:00:07.3130385Z Jan 31 05:00:07  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:45)
> 2022-01-31T05:00:07.3131092Z Jan 31 05:00:07  at 
> akka.dispatch.OnComplete.internal(Future.scala:299)
> 2022-01-31T05:00:07.3131695Z Jan 31 05:00:07  at 
> akka.dispatch.OnComplete.internal(Future.scala:297)
> 2022-01-31T05:00:07.3132310Z Jan 31 05:00:07  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
> 2022-01-31T05:00:07.3132943Z Jan 31 05:00:07  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
> 2022-01-31T05:00:07.3133577Z Jan 31 05:00:07  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
> 2022-01-31T05:00:07.3134340Z Jan 31 05:00:07  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65)
> 2022-01-31T05:00:07.3135149Z Jan 31 05:00:07  at 
> scal

[jira] [Commented] (FLINK-24844) CassandraConnectorITCase.testCassandraBatchPojoFormat fails on AZP

2022-01-31 Thread Etienne Chauchot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17484570#comment-17484570
 ] 

Etienne Chauchot commented on FLINK-24844:
--

The fix was merged a bit more than 7 days ago. I can check the stats through 
[https://dev.azure.com/apache-flink/apache-flink/_test/analytics?definitionId=1&contextType=build.|https://dev.azure.com/apache-flink/apache-flink/_test/analytics?definitionId=1&contextType=build]

This is for the flink-ci.flink-master-mirror job. I guess that is on the 
master branch, right? How can I get the same analytics for the 1.13 and 1.14 
branches? 

> CassandraConnectorITCase.testCassandraBatchPojoFormat fails on AZP
> --
>
> Key: FLINK-24844
> URL: https://issues.apache.org/jira/browse/FLINK-24844
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.14.0
>Reporter: Till Rohrmann
>Assignee: Etienne Chauchot
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0, 1.13.6, 1.14.4
>
>
> The test {{CassandraConnectorITCase.testCassandraBatchPojoFormat}} fails on 
> AZP with
> {code}
> 2021-11-09T00:32:42.1369473Z Nov 09 00:32:42 [ERROR] Tests run: 17, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 152.962 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase
> 2021-11-09T00:32:42.1371638Z Nov 09 00:32:42 [ERROR] 
> testCassandraBatchPojoFormat  Time elapsed: 20.378 s  <<< ERROR!
> 2021-11-09T00:32:42.1372881Z Nov 09 00:32:42 
> com.datastax.driver.core.exceptions.AlreadyExistsException: Table 
> flink.batches already exists
> 2021-11-09T00:32:42.1373913Z Nov 09 00:32:42  at 
> com.datastax.driver.core.exceptions.AlreadyExistsException.copy(AlreadyExistsException.java:111)
> 2021-11-09T00:32:42.1374921Z Nov 09 00:32:42  at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
> 2021-11-09T00:32:42.1379615Z Nov 09 00:32:42  at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
> 2021-11-09T00:32:42.1380668Z Nov 09 00:32:42  at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
> 2021-11-09T00:32:42.1381523Z Nov 09 00:32:42  at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
> 2021-11-09T00:32:42.1382552Z Nov 09 00:32:42  at 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.testCassandraBatchPojoFormat(CassandraConnectorITCase.java:543)
> 2021-11-09T00:32:42.1383487Z Nov 09 00:32:42  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-11-09T00:32:42.1384433Z Nov 09 00:32:42  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-11-09T00:32:42.1385336Z Nov 09 00:32:42  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-11-09T00:32:42.1386119Z Nov 09 00:32:42  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-11-09T00:32:42.1387204Z Nov 09 00:32:42  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2021-11-09T00:32:42.1388225Z Nov 09 00:32:42  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-11-09T00:32:42.1389101Z Nov 09 00:32:42  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2021-11-09T00:32:42.1400913Z Nov 09 00:32:42  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-11-09T00:32:42.1401588Z Nov 09 00:32:42  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2021-11-09T00:32:42.1402487Z Nov 09 00:32:42  at 
> org.apache.flink.testutils.junit.RetryRule$RetryOnExceptionStatement.evaluate(RetryRule.java:192)
> 2021-11-09T00:32:42.1403055Z Nov 09 00:32:42  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2021-11-09T00:32:42.1403556Z Nov 09 00:32:42  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2021-11-09T00:32:42.1404008Z Nov 09 00:32:42  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2021-11-09T00:32:42.1404650Z Nov 09 00:32:42  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2021-11-09T00:32:42.1405151Z Nov 09 00:32:42  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2021-11-09T00:32:42.1405632Z Nov 09 00:32:42  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2021-11-09T00:32:42.1406166Z Nov 09 00:32:42  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2021-11-09T00:32:42.1406670Z No

[jira] [Comment Edited] (FLINK-24844) CassandraConnectorITCase.testCassandraBatchPojoFormat fails on AZP

2022-01-31 Thread Etienne Chauchot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17484570#comment-17484570
 ] 

Etienne Chauchot edited comment on FLINK-24844 at 1/31/22, 8:50 AM:


[~gaoyunhaii] The fix was merged a bit more than 7 days ago. I can check the 
stats through 
[https://dev.azure.com/apache-flink/apache-flink/_test/analytics?definitionId=1&contextType=build]

This is for the flink-ci.flink-master-mirror job. I guess this is on the master 
branch, right? How can I get the same analytics for the 1.13 and 1.14 branches? 


was (Author: echauchot):
The fix was merged a bit more than 7 days ago. I can check the stats through 
[https://dev.azure.com/apache-flink/apache-flink/_test/analytics?definitionId=1&contextType=build]

This is for the flink-ci.flink-master-mirror job. I guess this is on the master 
branch, right? How can I get the same analytics for the 1.13 and 1.14 branches? 

> CassandraConnectorITCase.testCassandraBatchPojoFormat fails on AZP
> --
>
> Key: FLINK-24844
> URL: https://issues.apache.org/jira/browse/FLINK-24844
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.14.0
>Reporter: Till Rohrmann
>Assignee: Etienne Chauchot
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0, 1.13.6, 1.14.4
>
>
> The test {{CassandraConnectorITCase.testCassandraBatchPojoFormat}} fails on 
> AZP with
> {code}
> 2021-11-09T00:32:42.1369473Z Nov 09 00:32:42 [ERROR] Tests run: 17, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 152.962 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase
> 2021-11-09T00:32:42.1371638Z Nov 09 00:32:42 [ERROR] 
> testCassandraBatchPojoFormat  Time elapsed: 20.378 s  <<< ERROR!
> 2021-11-09T00:32:42.1372881Z Nov 09 00:32:42 
> com.datastax.driver.core.exceptions.AlreadyExistsException: Table 
> flink.batches already exists
> 2021-11-09T00:32:42.1373913Z Nov 09 00:32:42  at 
> com.datastax.driver.core.exceptions.AlreadyExistsException.copy(AlreadyExistsException.java:111)
> 2021-11-09T00:32:42.1374921Z Nov 09 00:32:42  at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
> 2021-11-09T00:32:42.1379615Z Nov 09 00:32:42  at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
> 2021-11-09T00:32:42.1380668Z Nov 09 00:32:42  at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
> 2021-11-09T00:32:42.1381523Z Nov 09 00:32:42  at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
> 2021-11-09T00:32:42.1382552Z Nov 09 00:32:42  at 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.testCassandraBatchPojoFormat(CassandraConnectorITCase.java:543)
> 2021-11-09T00:32:42.1383487Z Nov 09 00:32:42  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-11-09T00:32:42.1384433Z Nov 09 00:32:42  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-11-09T00:32:42.1385336Z Nov 09 00:32:42  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-11-09T00:32:42.1386119Z Nov 09 00:32:42  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-11-09T00:32:42.1387204Z Nov 09 00:32:42  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2021-11-09T00:32:42.1388225Z Nov 09 00:32:42  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-11-09T00:32:42.1389101Z Nov 09 00:32:42  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2021-11-09T00:32:42.1400913Z Nov 09 00:32:42  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-11-09T00:32:42.1401588Z Nov 09 00:32:42  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2021-11-09T00:32:42.1402487Z Nov 09 00:32:42  at 
> org.apache.flink.testutils.junit.RetryRule$RetryOnExceptionStatement.evaluate(RetryRule.java:192)
> 2021-11-09T00:32:42.1403055Z Nov 09 00:32:42  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2021-11-09T00:32:42.1403556Z Nov 09 00:32:42  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2021-11-09T00:32:42.1404008Z Nov 09 00:32:42  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2021-11-09T00:32:42.1404650Z Nov 09 00:32:

[GitHub] [flink] flinkbot edited a comment on pull request #18518: [FLINK-24229][connectors/dynamodb] Added DynamoDB connector

2022-01-31 Thread GitBox


flinkbot edited a comment on pull request #18518:
URL: https://github.com/apache/flink/pull/18518#issuecomment-1022043829


   
   ## CI report:
   
   * 1ed40f830db68d828612207897418a868e6b55f7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30476)
 
   * bce4406f01127a6956ef00592014e9c5b9d00aa4 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30498)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] matriv commented on a change in pull request #18479: [FLINK-25387] Introduce ExecNodeMetadata

2022-01-31 Thread GitBox


matriv commented on a change in pull request #18479:
URL: https://github.com/apache/flink/pull/18479#discussion_r795451478



##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/utils/ExecNodeMetadataUtil.java
##
@@ -0,0 +1,262 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.utils;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecNode;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecNodeMetadata;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecNodeMetadatas;
+import org.apache.flink.table.planner.plan.nodes.exec.serde.JsonSerdeUtil;
+import org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecCalc;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecChangelogNormalize;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecCorrelate;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecDeduplicate;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecDropUpdateBefore;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecExchange;
+import org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecExpand;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecGlobalGroupAggregate;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecGlobalWindowAggregate;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecGroupAggregate;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecGroupWindowAggregate;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecIncrementalGroupAggregate;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecIntervalJoin;
+import org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecJoin;
+import org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecLimit;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecLocalGroupAggregate;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecLocalWindowAggregate;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecLookupJoin;
+import org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecMatch;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecMiniBatchAssigner;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecOverAggregate;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecPythonCalc;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecPythonCorrelate;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecPythonGroupAggregate;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecPythonGroupWindowAggregate;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecPythonOverAggregate;
+import org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecRank;
+import org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSink;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSortLimit;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecTableSourceScan;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecTemporalJoin;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecTemporalSort;
+import org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecUnion;
+import org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecValues;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecWatermarkAssigner;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecWindowAggregate;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecWindowDeduplicate;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecWindowJoin;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecWindowRank;
+import 
org.apache.flink.table.planner.plan.nodes.exec.stream.S

[jira] [Assigned] (FLINK-25827) Potential memory leaks in SourceOperator

2022-01-31 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski reassigned FLINK-25827:
--

Assignee: Piotr Nowojski

> Potential memory leaks in SourceOperator
> 
>
> Key: FLINK-25827
> URL: https://issues.apache.org/jira/browse/FLINK-25827
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.15.0, 1.14.3
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>Priority: Critical
> Fix For: 1.15.0, 1.14.4
>
>
> {{SourceOperator.SourceOperatorAvailabilityHelper}} is prone to the same type 
> of memory leak as FLINK-25728. Every time a new CompletableFuture.anyOf is 
> created:
> {code:java}
> currentCombinedFuture =
>         CompletableFuture.anyOf(forcedStopFuture, sourceReaderFuture);
> return currentCombinedFuture;
> {code} 
> Such a future is never garbage collected, because {{forcedStopFuture}} keeps 
> a reference to it. This eventually leads to a memory leak, and force 
> stopping might take a very long time to complete.  
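The accumulation can be reproduced outside of Flink with a plain JDK sketch
(class and method names below are made up for illustration, not Flink code):
each {{anyOf}} call pushes a completion node onto the long-lived future's
internal stack, and those nodes are only released once that future itself
completes.

```java
import java.util.concurrent.CompletableFuture;

public class AnyOfLeakDemo {

    /**
     * Simulates {@code calls} availability checks against one long-lived
     * stop future and returns the (estimated) number of dependents that
     * have piled up on it.
     */
    static int dependentsAfter(int calls) {
        // Long-lived future, analogous to forcedStopFuture above.
        CompletableFuture<Void> forcedStop = new CompletableFuture<>();
        for (int i = 0; i < calls; i++) {
            // Fresh per-check future, analogous to sourceReaderFuture.
            CompletableFuture<Void> reader = new CompletableFuture<>();
            CompletableFuture.anyOf(forcedStop, reader);
            // Completing the reader fires the combined future, but the
            // completion node pushed onto forcedStop is not unlinked.
            reader.complete(null);
        }
        return forcedStop.getNumberOfDependents();
    }

    public static void main(String[] args) {
        // The estimate grows with the number of calls instead of staying flat.
        System.out.println("dependents after 10000 calls: " + dependentsAfter(10_000));
    }
}
```

A fix along the lines of FLINK-25728 registers a single callback on the
long-lived future once, instead of combining it anew on every check.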



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-24844) CassandraConnectorITCase.testCassandraBatchPojoFormat fails on AZP

2022-01-31 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17484575#comment-17484575
 ] 

Yun Gao commented on FLINK-24844:
-

Hi [~echauchot] the failure happens on 1.13: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=30465&view=results.
 On the master mirror there are also cron jobs for 1.13 & 1.14. 

> CassandraConnectorITCase.testCassandraBatchPojoFormat fails on AZP
> --
>
> Key: FLINK-24844
> URL: https://issues.apache.org/jira/browse/FLINK-24844
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.14.0
>Reporter: Till Rohrmann
>Assignee: Etienne Chauchot
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0, 1.13.6, 1.14.4
>
>
> The test {{CassandraConnectorITCase.testCassandraBatchPojoFormat}} fails on 
> AZP with
> {code}
> 2021-11-09T00:32:42.1369473Z Nov 09 00:32:42 [ERROR] Tests run: 17, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 152.962 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase
> 2021-11-09T00:32:42.1371638Z Nov 09 00:32:42 [ERROR] 
> testCassandraBatchPojoFormat  Time elapsed: 20.378 s  <<< ERROR!
> 2021-11-09T00:32:42.1372881Z Nov 09 00:32:42 
> com.datastax.driver.core.exceptions.AlreadyExistsException: Table 
> flink.batches already exists

[GitHub] [flink] flinkbot edited a comment on pull request #18479: [FLINK-25387] Introduce ExecNodeMetadata

2022-01-31 Thread GitBox


flinkbot edited a comment on pull request #18479:
URL: https://github.com/apache/flink/pull/18479#issuecomment-1020172062


   
   ## CI report:
   
   * 1c866a4904ff05c59f38cf4e873f5523186605b9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30426)
 
   * 8eedf31a5b616a1fcd730452f554ef262a75e5eb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Airblader merged pull request #18333: [FLINK-25220][test] Write an architectural rule for all IT cases w.r.t. the MiniCluster

2022-01-31 Thread GitBox


Airblader merged pull request #18333:
URL: https://github.com/apache/flink/pull/18333


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-benchmarks] pnowojski merged pull request #46: [hotfix] Corrected java11 build name

2022-01-31 Thread GitBox


pnowojski merged pull request #46:
URL: https://github.com/apache/flink-benchmarks/pull/46


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18479: [FLINK-25387] Introduce ExecNodeMetadata

2022-01-31 Thread GitBox


flinkbot edited a comment on pull request #18479:
URL: https://github.com/apache/flink/pull/18479#issuecomment-1020172062


   
   ## CI report:
   
   * 1c866a4904ff05c59f38cf4e873f5523186605b9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30426)
 
   * 8eedf31a5b616a1fcd730452f554ef262a75e5eb Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30499)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-24844) CassandraConnectorITCase.testCassandraBatchPojoFormat fails on AZP

2022-01-31 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17484575#comment-17484575
 ] 

Yun Gao edited comment on FLINK-24844 at 1/31/22, 9:04 AM:
---

Hi [~echauchot] the failure happens on 1.13: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=30465&view=results.
 On the master mirror there are also cron jobs for 1.13 & 1.14. 

And for the analysis, it seems we could choose the branch from the menu on the 
top right~?


was (Author: gaoyunhaii):
Hi [~echauchot] the failure happens on 1.13: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=30465&view=results.
 On the master mirror there are also cron jobs for 1.13 & 1.14. 

> CassandraConnectorITCase.testCassandraBatchPojoFormat fails on AZP
> --
>
> Key: FLINK-24844
> URL: https://issues.apache.org/jira/browse/FLINK-24844
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.14.0
>Reporter: Till Rohrmann
>Assignee: Etienne Chauchot
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0, 1.13.6, 1.14.4
>
>
> The test {{CassandraConnectorITCase.testCassandraBatchPojoFormat}} fails on 
> AZP with
> {code}
> 2021-11-09T00:32:42.1369473Z Nov 09 00:32:42 [ERROR] Tests run: 17, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 152.962 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase
> 2021-11-09T00:32:42.1371638Z Nov 09 00:32:42 [ERROR] 
> testCassandraBatchPojoFormat  Time elapsed: 20.378 s  <<< ERROR!
> 2021-11-09T00:32:42.1372881Z Nov 09 00:32:42 
> com.datastax.driver.core.exceptions.AlreadyExistsException: Table 
> flink.batches already exists

[jira] [Commented] (FLINK-25220) Develop the ArchUnit Infra for test code and write an architectural rule for all IT cases w.r.t. MiniCluster

2022-01-31 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-25220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17484579#comment-17484579
 ] 

Ingo Bürk commented on FLINK-25220:
---

Fixed on master

commit c132595205130622352821bb95c77399d54ba2e0
[test] update flink pom for ArchUnit test infra

commit bde48e96d8f99a011fe11d3127af76db72488f3f
[test] Split architecture-tests and build ArchUnit extension

commit d7c0753afb747099cfa2e1cec53a38b14504b2ee
[test] develop ArchUnit rules and test base for ITCase

commit 8a0b1c034b3f540666874bf6a0dfa5dc8b4f26aa
[test] ITCase ArchUnit test for HBase-2.2 connector

commit 2dae76e135eb076d007b193bbf61a32ca4d0e838
[test] ITCase ArchUnit test for files connector

commit 4f6240848ed8b2cb2548239b49b7bcb392ee909e
[test] update README.md

[~jingge] I'll leave the issue open for the follow-up PR.

> Develop the ArchUnit Infra for test code and write an architectural rule for 
> all IT cases w.r.t. MiniCluster
> 
>
> Key: FLINK-25220
> URL: https://issues.apache.org/jira/browse/FLINK-25220
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>  Labels: pull-request-available
>
> The original idea of this issue was to build architectural rules for IT cases. 
> It quickly turned out that the current architecture test submodule only covers 
> production code. In order to check architecture violations in test code, the 
> following needs to be done:
>  * build the architecture test infra for test code
>  * isolate the classpaths of production code and test code, i.e. separation 
> of concerns
>  * define architectural rules for ITCase
>  * create ArchUnit tests for some submodules
> The first attempt used the test jars of the submodules and checked 
> architectural violations centrally. This solution has some drawbacks. First, 
> every submodule would need to produce standard test jars, which conflicts with 
> submodules that need extra exclusion filters for their test jars. Second, 
> production code and test code are mixed up, which makes it very difficult to 
> define the scope of analyzed classes for each rule, because some rules should 
> only apply to production code and others only to test code. As a second 
> attempt, a distributed solution is used: the architecture-test module is split 
> into three submodules, base for common ArchUnit extensions, production for 
> ArchUnit tests of production code, and test for centrally defining the rules 
> for test code. The actual ArchUnit tests are distributed across, and developed 
> within, the submodules where architecture violation checks are required.
> Architectural rules are required to verify that every IT case has:
>  * for JUnit 4
> a public, static, final member of type MiniClusterWithClientResource 
> annotated with ClassRule,
> or
> a public, non-static, final member of type MiniClusterWithClientResource 
> annotated with Rule;
>  * for JUnit 5
> a public, static, final member of type MiniClusterExtension,
> and
> a public, static, final member of type AllCallbackWrapper annotated with 
> RegisterExtension.
> The inheritance hierarchy needs to be checked, because the member of 
> MiniCluster could be defined in a superclass.
>  
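The hierarchy walk described in the quoted issue can be sketched with plain
reflection. The nested annotation and resource class below are stand-ins for
JUnit's @ClassRule and Flink's MiniClusterWithClientResource, so the sketch
runs without either dependency; the real rule is expressed declaratively with
ArchUnit's API.

```java
import java.lang.annotation.Annotation;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

public class MiniClusterRuleCheck {

    // Stand-in for JUnit's @ClassRule (RUNTIME retention so reflection sees it).
    @Retention(RetentionPolicy.RUNTIME)
    @interface ClassRule {}

    // Stand-in for Flink's MiniClusterWithClientResource.
    static class MiniClusterWithClientResource {}

    /**
     * True if {@code clazz} or any superclass declares a public static final
     * field of {@code fieldType} carrying {@code annotation}.
     */
    static boolean hasRuleField(Class<?> clazz,
                                Class<?> fieldType,
                                Class<? extends Annotation> annotation) {
        // Walk the inheritance hierarchy: the member may live in a base class.
        for (Class<?> c = clazz; c != null; c = c.getSuperclass()) {
            for (Field f : c.getDeclaredFields()) {
                int m = f.getModifiers();
                if (fieldType.isAssignableFrom(f.getType())
                        && Modifier.isPublic(m)
                        && Modifier.isStatic(m)
                        && Modifier.isFinal(m)
                        && f.isAnnotationPresent(annotation)) {
                    return true;
                }
            }
        }
        return false;
    }

    // A base IT case declaring the shared resource...
    static class BaseITCase {
        @ClassRule
        public static final MiniClusterWithClientResource MINI_CLUSTER =
                new MiniClusterWithClientResource();
    }

    // ...and a subclass inheriting it: the hierarchy walk must accept it.
    static class ConcreteITCase extends BaseITCase {}

    static class MissingRuleITCase {}

    public static void main(String[] args) {
        System.out.println(hasRuleField(ConcreteITCase.class,
                MiniClusterWithClientResource.class, ClassRule.class)); // true
        System.out.println(hasRuleField(MissingRuleITCase.class,
                MiniClusterWithClientResource.class, ClassRule.class)); // false
    }
}
```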



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] fapaul commented on a change in pull request #18428: [FLINK-25575] Add Sink V2 operators and translation

2022-01-31 Thread GitBox


fapaul commented on a change in pull request #18428:
URL: https://github.com/apache/flink/pull/18428#discussion_r795465944



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/sink/committables/CheckpointCommittableManagerImpl.java
##
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.runtime.operators.sink.committables;
+
+import org.apache.flink.api.connector.sink2.Committer;
+import org.apache.flink.streaming.api.connector.sink2.CommittableSummary;
+import org.apache.flink.streaming.api.connector.sink2.CommittableWithLineage;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+class CheckpointCommittableManagerImpl<CommT>
+        implements CheckpointCommittableManager<CommT> {
+    /** Mapping of subtask id to {@link SubtaskCommittableManager}. */
+    private final Map<Integer, SubtaskCommittableManager<CommT>>
+            subtasksCommittableManagers;
+
+    private final CommittableCollector<CommT> collector;

Review comment:
   I think I can just pass the subtaskId and numberOfTasks to the 
`CheckpointCommittableManager` and remove the reverse dependency this way.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-25888) Capture time that the job spends on deploying tasks

2022-01-31 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-25888:


 Summary: Capture time that the job spends on deploying tasks
 Key: FLINK-25888
 URL: https://issues.apache.org/jira/browse/FLINK-25888
 Project: Flink
  Issue Type: New Feature
  Components: Runtime / Coordination, Runtime / Metrics
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.15.0


FLINK-23976 added standardized metrics for capturing how much time we spend in 
each {{JobStatus}}. However, certain states in practice consist of several 
stages; for example the RUNNING state also includes the deployment of tasks.

To get a better picture on where time is spent I propose to add new metrics 
that capture the deployingTime based on the execution states. This will 
additionally get us closer to a proper uptime metric, which ideally will be 
runningTime - various stage time metrics.

A job is considered to be deploying,
* for batch jobs, if no task is running and at least one task is being deployed
* for streaming jobs, if at least one task is being deployed

The semantics are different for batch/streaming jobs because they differ in 
terms of how they make progress. For a streaming job, all tasks need to be 
deployed for checkpointing to work. For batch jobs, any deployed task 
immediately starts progressing the job.
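The two deploying conditions above can be sketched as predicates over the tasks' execution states. This is an illustrative sketch with an assumed, simplified ExecutionState enum, not Flink's actual scheduler code:

```java
import java.util.List;

// Assumed, simplified execution states for illustration only.
enum ExecutionState { CREATED, DEPLOYING, RUNNING, FINISHED }

class DeployingPredicate {
    // Batch: no task is running yet and at least one task is being deployed.
    static boolean isDeployingBatch(List<ExecutionState> states) {
        return states.stream().noneMatch(s -> s == ExecutionState.RUNNING)
                && states.stream().anyMatch(s -> s == ExecutionState.DEPLOYING);
    }

    // Streaming: any task being deployed counts, since all tasks must be
    // running before checkpointing can work.
    static boolean isDeployingStreaming(List<ExecutionState> states) {
        return states.stream().anyMatch(s -> s == ExecutionState.DEPLOYING);
    }
}
```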



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] dannycranmer commented on a change in pull request #18553: [FLINK-25846][FLINK-25848] Async Sink does not gracefully shutdown on Cancel, KDS Sink does not fast fail when invalid confi

2022-01-31 Thread GitBox


dannycranmer commented on a change in pull request #18553:
URL: https://github.com/apache/flink/pull/18553#discussion_r795462214



##
File path: 
flink-connectors/flink-connector-aws-kinesis-data-streams/src/main/java/org/apache/flink/connector/kinesis/sink/KinesisDataStreamsSinkWriter.java
##
@@ -176,11 +179,35 @@ private void handlePartiallyFailedRequest(
 
 private boolean isRetryable(Throwable err) {
 if (err instanceof CompletionException
-&& err.getCause() instanceof ResourceNotFoundException) {
+&& 
isInterruptingSignalException(ExceptionUtils.stripCompletionException(err))) {
+getFatalExceptionCons().accept(new FlinkException("Running job was 
cancelled"));
+return false;
+}
+if (err instanceof CompletionException

Review comment:
   This code is complex and hard to read. Suggest we use a helper; would 
this work?
   
   ```
   if (ExceptionUtils.findThrowable(ex, 
ResourceNotFoundException.class).isPresent()) {

   }
   
   ```
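   A helper of this shape walks the cause chain looking for a given exception type. A minimal standalone sketch of what such a findThrowable-style helper does (illustrative only; Flink's real `ExceptionUtils.findThrowable` may differ in details such as cycle handling):

```java
import java.util.Optional;

// Walk the cause chain and return the first throwable of the requested type.
class Throwables {
    static <T extends Throwable> Optional<T> findThrowable(Throwable t, Class<T> type) {
        while (t != null) {
            if (type.isInstance(t)) {
                return Optional.of(type.cast(t));
            }
            t = t.getCause();
        }
        return Optional.empty();
    }
}
```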

##
File path: 
flink-connectors/flink-connector-aws-kinesis-data-streams/src/main/java/org/apache/flink/connector/kinesis/sink/KinesisDataStreamsException.java
##
@@ -39,12 +39,12 @@ public KinesisDataStreamsException(final String message, 
final Throwable cause)
 
 public KinesisDataStreamsFailFastException() {
 super(
-"Encountered an exception while persisting records, not 
retrying due to {failOnError} being set.");

Review comment:
   Instead of duplicating this string, either call `this(null)` or extract 
to constant 

##
File path: 
flink-connectors/flink-connector-aws-kinesis-data-streams/src/main/java/org/apache/flink/connector/kinesis/sink/KinesisDataStreamsSinkWriter.java
##
@@ -176,11 +179,35 @@ private void handlePartiallyFailedRequest(
 
 private boolean isRetryable(Throwable err) {
 if (err instanceof CompletionException
-&& err.getCause() instanceof ResourceNotFoundException) {
+&& 
isInterruptingSignalException(ExceptionUtils.stripCompletionException(err))) {
+getFatalExceptionCons().accept(new FlinkException("Running job was 
cancelled"));
+return false;
+}
+if (err instanceof CompletionException
+&& ExceptionUtils.stripCompletionException(err)
+instanceof ResourceNotFoundException) {
 getFatalExceptionCons()
 .accept(
 new KinesisDataStreamsException(
-"Encountered non-recoverable exception", 
err));
+"Encountered non-recoverable exception 
relating to not being able to find the specified resources",
+err));
+return false;
+}
+if (err instanceof CompletionException
+&& ExceptionUtils.stripCompletionException(err) instanceof 
StsException) {
+getFatalExceptionCons()
+.accept(
+new KinesisDataStreamsException(
+"Encountered non-recoverable exception 
relating to the provided credentials.",
+err));
+return false;
+}
+if (err instanceof Error) {
+getFatalExceptionCons()
+.accept(
+new KinesisDataStreamsException(
+"Encountered non-recoverable exception 
relating to not being able to find the specified resources",

Review comment:
   Is this message correct for all subclasses of `Error`? I think we 
should make this more generic, something like `Encountered non-recoverable 
exception`.

##
File path: 
flink-connectors/flink-connector-aws-kinesis-data-streams/src/main/java/org/apache/flink/connector/kinesis/sink/KinesisDataStreamsException.java
##
@@ -39,12 +39,12 @@ public KinesisDataStreamsException(final String message, 
final Throwable cause)
 
 public KinesisDataStreamsFailFastException() {
 super(
-"Encountered an exception while persisting records, not 
retrying due to {failOnError} being set.");
+"Encountered an exception while persisting records, not 
retrying due to either: {failOnError} being set or the exception should not be 
retried.");
 }
 
 public KinesisDataStreamsFailFastException(final Throwable cause) {
 super(
-"Encountered an exception while persisting records, not 
retrying due to {failOnError} being set.",
+"Encountered an exception while persisting records, not 
retrying due to either: {failOnError} being set or the exception should not be 
retried.",

Review comment:
   On second thoughts is this even correct? S

[GitHub] [flink] fapaul commented on a change in pull request #18428: [FLINK-25575] Add Sink V2 operators and translation

2022-01-31 Thread GitBox


fapaul commented on a change in pull request #18428:
URL: https://github.com/apache/flink/pull/18428#discussion_r795469019



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/sink/committables/CommittableManager.java
##
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.runtime.operators.sink.committables;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.connector.sink2.Committer;
+import org.apache.flink.streaming.api.connector.sink2.CommittableSummary;
+import org.apache.flink.streaming.api.connector.sink2.CommittableWithLineage;
+
+import java.io.IOException;
+import java.util.Collection;
+
+/**
+ * Internal wrapper to handle the committing of committables.
+ *
+ * @param <CommT> type of the committable
+ */
+@Internal
+public interface CommittableManager<CommT> {

Review comment:
   The idea here is to make the difference between batch and streaming 
execution mode clear. The `CommittableManager` interface is intended to be used 
on `endOfInput`, and the `CheckpointCommittableManager` interface is used for 
`notifyCheckpointComplete`. I agree that currently both are implemented by the 
same class but in the end, from an operator perspective, it does not matter.
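   The split described here can be sketched as two interfaces backed by one class. The names follow the quoted code, but the method shapes are assumptions for illustration, not Flink's actual API:

```java
// The base view is what the operator uses on endOfInput (batch); the
// checkpoint-aware sub-interface is what notifyCheckpointComplete uses.
interface CommittableManager<CommT> {
    void commit() throws Exception;
}

interface CheckpointCommittableManager<CommT> extends CommittableManager<CommT> {
    long getCheckpointId();
}

// One class can back both views, as in CheckpointCommittableManagerImpl.
class SimpleManager implements CheckpointCommittableManager<String> {
    private final long checkpointId;
    SimpleManager(long checkpointId) { this.checkpointId = checkpointId; }
    @Override public void commit() { /* no-op for the sketch */ }
    @Override public long getCheckpointId() { return checkpointId; }
}
```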








[GitHub] [flink] slinkydeveloper commented on a change in pull request #18479: [FLINK-25387] Introduce ExecNodeMetadata

2022-01-31 Thread GitBox


slinkydeveloper commented on a change in pull request #18479:
URL: https://github.com/apache/flink/pull/18479#discussion_r795474727



##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/MultipleExecNodeMetadata.java
##
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec;
+
+import org.apache.flink.annotation.PublicEvolving;
+
+import java.lang.annotation.Documented;
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+/**
+ * Helper annotation to enable multiple {@link ExecNodeMetadata} annotations 
on an {@link ExecNode}
+ * class.
+ */
+@Documented
+@Target(ElementType.TYPE)
+@Retention(RetentionPolicy.RUNTIME)
+@PublicEvolving

Review comment:
   Experimental?
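   The helper annotation quoted above relies on Java's repeatable-annotation container pattern. A minimal standalone sketch of that pattern (illustrative names, not the actual ExecNodeMetadata/MultipleExecNodeMetadata classes):

```java
import java.lang.annotation.Repeatable;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Container annotation: holds the repeated annotations.
@Retention(RetentionPolicy.RUNTIME)
@interface Metas {
    Meta[] value();
}

// Repeatable annotation, pointing at its container.
@Retention(RetentionPolicy.RUNTIME)
@Repeatable(Metas.class)
@interface Meta {
    String name();
    int version();
}

// A class annotated with two versions, as an ExecNode would be.
@Meta(name = "demo-node", version = 1)
@Meta(name = "demo-node", version = 2)
class Annotated {}
```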

##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/ExecNodeMetadata.java
##
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec;
+
+import org.apache.flink.FlinkVersion;
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.table.api.config.ExecutionConfigOptions;
+
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.lang.annotation.Documented;
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Repeatable;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+/**
+ * Annotation to be used for {@link ExecNode}s to keep necessary metadata when
+ * serialising/deserializing them in a plan.
+ *
+ * Each {@link ExecNode} needs to be annotated and provide the necessary 
metadata info so that it
+ * can be correctly serialised and later on instantiated from a string (JSON) 
plan.
+ *
+ * It's possible for one {@link ExecNode} class to use multiple 
annotations to denote the ability to
+ * upgrade to more versions.
+ */
+@Documented
+@Target(ElementType.TYPE)
+@Retention(RetentionPolicy.RUNTIME)
+@Repeatable(value = MultipleExecNodeMetadata.class)
+@PublicEvolving

Review comment:
   Experimental? 

##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/ExecNodeMetadata.java
##
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec;
+
+import org.apache.flink.FlinkVersion;
+import org.apache.flink.annotati

[GitHub] [flink] flinkbot edited a comment on pull request #18539: [FLINK-25745] Support RocksDB incremental native savepoints

2022-01-31 Thread GitBox


flinkbot edited a comment on pull request #18539:
URL: https://github.com/apache/flink/pull/18539#issuecomment-1023172522


   
   ## CI report:
   
   * 528758ad9508f9fec5630b747fe9f454fbc30171 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30310)
 
   * 5886c2da10de2ef3e31c94b4bb56e1aceb6deceb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] fapaul commented on a change in pull request #18428: [FLINK-25575] Add Sink V2 operators and translation

2022-01-31 Thread GitBox


fapaul commented on a change in pull request #18428:
URL: https://github.com/apache/flink/pull/18428#discussion_r795477843



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/sink/CommitterOperator.java
##
@@ -17,95 +17,173 @@
 
 package org.apache.flink.streaming.runtime.operators.sink;
 
-import org.apache.flink.core.io.SimpleVersionedSerialization;
+import org.apache.flink.api.common.state.ListState;
+import org.apache.flink.api.common.state.ListStateDescriptor;
+import 
org.apache.flink.api.common.typeutils.base.array.BytePrimitiveArraySerializer;
+import org.apache.flink.api.connector.sink2.Committer;
 import org.apache.flink.core.io.SimpleVersionedSerializer;
 import org.apache.flink.runtime.state.StateInitializationContext;
 import org.apache.flink.runtime.state.StateSnapshotContext;
+import org.apache.flink.streaming.api.connector.sink2.CommittableMessage;
+import org.apache.flink.streaming.api.connector.sink2.CommittableWithLineage;
+import org.apache.flink.streaming.api.graph.StreamConfig;
 import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
 import org.apache.flink.streaming.api.operators.BoundedOneInput;
 import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
+import org.apache.flink.streaming.api.operators.Output;
+import org.apache.flink.streaming.api.operators.util.SimpleVersionedListState;
+import 
org.apache.flink.streaming.runtime.operators.sink.committables.CheckpointCommittableManager;
+import 
org.apache.flink.streaming.runtime.operators.sink.committables.CommittableCollector;
+import 
org.apache.flink.streaming.runtime.operators.sink.committables.CommittableCollectorSerializer;
+import 
org.apache.flink.streaming.runtime.operators.sink.committables.CommittableManager;
 import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
 import org.apache.flink.streaming.runtime.tasks.ProcessingTimeService;
+import org.apache.flink.streaming.runtime.tasks.StreamTask;
 
+import java.io.IOException;
+import java.util.Collection;
 import java.util.Collections;
+import java.util.OptionalLong;
 
 import static org.apache.flink.util.IOUtils.closeAll;
 import static org.apache.flink.util.Preconditions.checkNotNull;
 
 /**
  * An operator that processes committables of a {@link 
org.apache.flink.api.connector.sink.Sink}.
  *
- * The operator may be part of a sink pipeline but usually is the last 
operator. There are
- * currently two ways this operator is used:
+ * The operator may be part of a sink pipeline, and it always follows 
{@link SinkWriterOperator},
+ * which initially outputs the committables.
  *
- * 
- *   In streaming mode, there is a {@link SinkOperator} with parallelism p 
containing {@link
- *   org.apache.flink.api.connector.sink.SinkWriter} and {@link
- *   org.apache.flink.api.connector.sink.Committer} and this operator 
containing the {@link
- *   org.apache.flink.api.connector.sink.GlobalCommitter} with parallelism 
1.
- *   In batch mode, there is a {@link SinkOperator} with parallelism p 
containing {@link
- *   org.apache.flink.api.connector.sink.SinkWriter} and this operator 
containing the {@link
- *   org.apache.flink.api.connector.sink.Committer} and {@link
- *   org.apache.flink.api.connector.sink.GlobalCommitter} with parallelism 
1.
- * 
- *
- * @param <InputT> the type of the committable
- * @param <CommT> the type of the committable to send to downstream operators
+ * @param <CommT> the type of the committable
  */
-class CommitterOperator<InputT, CommT> extends AbstractStreamOperator<CommT>
-implements OneInputStreamOperator<InputT, CommT>, BoundedOneInput {
+class CommitterOperator<CommT> extends 
AbstractStreamOperator<CommittableMessage<CommT>>
+implements OneInputStreamOperator<CommittableMessage<CommT>, 
CommittableMessage<CommT>>,
+BoundedOneInput {
+
+private static final long RETRY_DELAY = 1000;
+private final SimpleVersionedSerializer<CommT> committableSerializer;
+private final Committer<CommT> committer;
+private final boolean emitDownstream;
+private CommittableCollector<CommT> committableCollector;
+private long lastCompletedCheckpointId = -1;
 
-private final SimpleVersionedSerializer inputSerializer;
-private final CommitterHandler committerHandler;
-private final CommitRetrier commitRetrier;
+/** The operator's state descriptor. */
+private static final ListStateDescriptor<byte[]> 
STREAMING_COMMITTER_RAW_STATES_DESC =
+new ListStateDescriptor<>(
+"streaming_committer_raw_states", 
BytePrimitiveArraySerializer.INSTANCE);
+
+/** The operator's state. */
+private ListState<CommittableCollector<CommT>> committableCollectorState;
 
 public CommitterOperator(
 ProcessingTimeService processingTimeService,
-SimpleVersionedSerializer inputSerializer,
-CommitterHandler committerHandler) {
-this.inputSerializer = checkNotNull(inputSerializer);
-this.committerHandler = checkNotNull(committerHandler);
-this.processingTimeService = pr

[GitHub] [flink] flinkbot edited a comment on pull request #18536: [FLINK-25432] Introduces ResourceCleaner to be used in the Dispatcher

2022-01-31 Thread GitBox


flinkbot edited a comment on pull request #18536:
URL: https://github.com/apache/flink/pull/18536#issuecomment-1023152110


   
   ## CI report:
   
   * 6c5e8781c20bcee5fb18d0f8524dd970c5ff2d9d Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30427)
 
   * 0321a661fff58fa8feac5463096b4d5fe4e401b1 UNKNOWN
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #18539: [FLINK-25745] Support RocksDB incremental native savepoints

2022-01-31 Thread GitBox


flinkbot edited a comment on pull request #18539:
URL: https://github.com/apache/flink/pull/18539#issuecomment-1023172522


   
   ## CI report:
   
   * 528758ad9508f9fec5630b747fe9f454fbc30171 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30310)
 
   * 5886c2da10de2ef3e31c94b4bb56e1aceb6deceb Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30501)
 
   
   






[GitHub] [flink] matriv commented on a change in pull request #18479: [FLINK-25387] Introduce ExecNodeMetadata

2022-01-31 Thread GitBox


matriv commented on a change in pull request #18479:
URL: https://github.com/apache/flink/pull/18479#discussion_r795484145



##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/ExecNodeMetadata.java
##
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec;
+
+import org.apache.flink.FlinkVersion;
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.table.api.config.ExecutionConfigOptions;
+
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.lang.annotation.Documented;
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Repeatable;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+/**
+ * Annotation to be used for {@link ExecNode}s to keep necessary metadata when
+ * serialising/deserializing them in a plan.
+ *
+ * Each {@link ExecNode} needs to be annotated and provide the necessary 
metadata info so that it
+ * can be correctly serialised and later on instantiated from a string (JSON) 
plan.
+ *
+ * It's possible for one {@link ExecNode} class to use multiple 
annotations to denote the ability to
+ * upgrade to more versions.
+ */
+@Documented
+@Target(ElementType.TYPE)
+@Retention(RetentionPolicy.RUNTIME)
+@Repeatable(value = MultipleExecNodeMetadata.class)
+@PublicEvolving
+public @interface ExecNodeMetadata {
+// main information
+
+/**
+ * Unique name of the {@link ExecNode} for serialization/deserialization 
and uid() generation.
+ * Together with version, uniquely identifies the {@link ExecNode} class.
+ */
+String name();
+
+/**
+ * A positive integer denoting the evolving version of an {@link 
ExecNode}, used for
+ * serialization/deserialization and uid() generation. Together with 
{@link #name()}, uniquely
+ * identifies the {@link ExecNode} class.
+ */
+@JsonProperty("version")

Review comment:
   Nope, left over, I'm removing them
   








[GitHub] [flink] matriv commented on a change in pull request #18479: [FLINK-25387] Introduce ExecNodeMetadata

2022-01-31 Thread GitBox


matriv commented on a change in pull request #18479:
URL: https://github.com/apache/flink/pull/18479#discussion_r795484618



##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/ExecNodeMetadata.java
##
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec;
+
+import org.apache.flink.FlinkVersion;
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.table.api.config.ExecutionConfigOptions;
+
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.lang.annotation.Documented;
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Repeatable;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+/**
+ * Annotation to be used for {@link ExecNode}s to keep necessary metadata when
+ * serialising/deserializing them in a plan.
+ *
+ * Each {@link ExecNode} needs to be annotated and provide the necessary 
metadata info so that it
+ * can be correctly serialised and later on instantiated from a string (JSON) 
plan.
+ *
+ * It's possible for one {@link ExecNode} class to use multiple 
annotations to denote the ability to
+ * upgrade to more versions.
+ */
+@Documented
+@Target(ElementType.TYPE)
+@Retention(RetentionPolicy.RUNTIME)
+@Repeatable(value = MultipleExecNodeMetadata.class)
+@PublicEvolving

Review comment:
   Probably yes, thx, @twalthr right?








[GitHub] [flink] flinkbot edited a comment on pull request #18479: [FLINK-25387] Introduce ExecNodeMetadata

2022-01-31 Thread GitBox


flinkbot edited a comment on pull request #18479:
URL: https://github.com/apache/flink/pull/18479#issuecomment-1020172062


   
   ## CI report:
   
   * 8eedf31a5b616a1fcd730452f554ef262a75e5eb Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30499)
 
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #18536: [FLINK-25432] Introduces ResourceCleaner to be used in the Dispatcher

2022-01-31 Thread GitBox


flinkbot edited a comment on pull request #18536:
URL: https://github.com/apache/flink/pull/18536#issuecomment-1023152110


   
   ## CI report:
   
   * 6c5e8781c20bcee5fb18d0f8524dd970c5ff2d9d Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30427)
 
   * 0321a661fff58fa8feac5463096b4d5fe4e401b1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30502)
 
   
   






[GitHub] [flink] matriv commented on a change in pull request #18479: [FLINK-25387] Introduce ExecNodeMetadata

2022-01-31 Thread GitBox


matriv commented on a change in pull request #18479:
URL: https://github.com/apache/flink/pull/18479#discussion_r795485514



##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecGroupWindowAggregate.java
##
@@ -136,30 +142,30 @@ public StreamExecGroupWindowAggregate(
 RowType outputType,
 String description) {
 this(
+
ExecNodeContext.newMetadata(StreamExecGroupWindowAggregate.class),

Review comment:
   Done!








[GitHub] [flink] zentol opened a new pull request #18566: [FLINK-25888] Capture time that the job spends on deploying tasks

2022-01-31 Thread GitBox


zentol opened a new pull request #18566:
URL: https://github.com/apache/flink/pull/18566


   FLINK-23976 added standardized metrics for capturing how much time we spend 
in each JobStatus. However, certain states in practice consist of several 
stages; for example the RUNNING state also includes the deployment of tasks.
   
   To get a better picture on where time is spent I propose to add new metrics 
that capture the deployingTime based on the execution states. This will 
additionally get us closer to a proper uptime metric, which ideally will be 
runningTime - various stage time metrics.
   
   A job is considered to be deploying,
   
   * for batch jobs, if no task is running and at least one task is being 
deployed
   * for streaming jobs, if at least one task is being deployed
   
   The semantics are different for batch/streaming jobs because they differ in 
terms of how they make progress. For a streaming job, all tasks need to be 
deployed for checkpointing to work. For batch jobs, any deployed task 
immediately starts progressing the job.
   
   
   I will add documentation later once we have agreed on the semantics.
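   The two rules above can be sketched as a single predicate over task counts. This is only an illustration of the proposed semantics; the method and parameter names are hypothetical, not taken from the PR:

```java
// Sketch of the proposed "deploying" predicate; it encodes only the two
// rules from the description above. All names here are hypothetical.
public class Main {
    static boolean isDeploying(boolean batchJob, int deployingTasks, int runningTasks) {
        if (batchJob) {
            // Batch: a single running task already progresses the job,
            // so the job only counts as deploying while nothing runs yet.
            return runningTasks == 0 && deployingTasks > 0;
        }
        // Streaming: checkpointing needs all tasks deployed, so any
        // pending deployment keeps the job in the deploying phase.
        return deployingTasks > 0;
    }

    public static void main(String[] args) {
        System.out.println(isDeploying(true, 2, 1));   // batch, one task running  -> false
        System.out.println(isDeploying(true, 2, 0));   // batch, nothing running   -> true
        System.out.println(isDeploying(false, 2, 1));  // streaming, one deploying -> true
    }
}
```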






[GitHub] [flink] matriv commented on a change in pull request #18479: [FLINK-25387] Introduce ExecNodeMetadata

2022-01-31 Thread GitBox


matriv commented on a change in pull request #18479:
URL: https://github.com/apache/flink/pull/18479#discussion_r795486022



##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/ExecNodeContext.java
##
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec;
+
+import org.apache.flink.table.planner.plan.utils.ExecNodeMetadataUtil;
+
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonCreator;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonValue;
+
+/**
+ * Helper Pojo that holds the necessary identifier fields that are used for 
JSON plan serialisation
+ * and de-serialisation.
+ */
+public class ExecNodeContext {
+
+private final int id;
+private final String name;
+private final Integer version;
+
+public ExecNodeContext(int id, String name, Integer version) {
+this.id = id;
+this.name = name;
+this.version = version;
+}
+
+public ExecNodeContext(int id) {
+this(id, null, null);
+}
+
+@JsonCreator
+public ExecNodeContext(String value) {
+String[] split = value.split("_");
+this.id = Integer.parseInt(split[0]);
+this.name = split[1];
+this.version = Integer.valueOf(split[2]);
+}
+
+/** The unique identifier for each ExecNode in the JSON plan. */
+public int getId() {
+return id;
+}
+
+/** The type identifying an ExecNode in the JSON plan. See {@link 
ExecNodeMetadata#name()}. */
+public String getName() {
+return name;
+}
+
+/** The version of the ExecNode in the JSON plan. See {@link 
ExecNodeMetadata#version()}. */
+public Integer getVersion() {
+return version;
+}
+
+@JsonValue
+@Override
+public String toString() {
+return id + "_" + name + "_" + version;
+}
+
+@SuppressWarnings("rawtypes")
+public static ExecNodeContext newMetadata(Class 
execNode) {
+return newMetadata(execNode, ExecNodeBase.getNewNodeId());
+}
+
+@SuppressWarnings("rawtypes")
+static ExecNodeContext newMetadata(Class execNode, int 
id) {
+ExecNodeMetadata metadata = 
ExecNodeMetadataUtil.latestAnnotation(execNode);
+// Some StreamExecNodes like StreamExecMultipleInput
+// still don't support the ExecNodeMetadata annotation.

Review comment:
   It's in the unsupported list, the code here has been changed to add 
validations.
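   For reference, the `@JsonValue`/`@JsonCreator` pair in the `ExecNodeContext` diff above round-trips the context through a single underscore-joined token. Below is a minimal standalone sketch of that format with Jackson stripped out; the class is a simplification for illustration, not the actual planner code:

```java
// Simplified stand-in for ExecNodeContext from the diff above: the JSON
// plan stores the triple as one "id_name_version" string token.
// (Jackson annotations omitted; parse() mirrors the @JsonCreator
// constructor and toString() mirrors the @JsonValue method.)
public class Main {
    private final int id;
    private final String name;
    private final int version;

    Main(int id, String name, int version) {
        this.id = id;
        this.name = name;
        this.version = version;
    }

    static Main parse(String value) {
        // Assumes the name itself contains no underscores; ExecNode names
        // in the PR use hyphens, e.g. "stream-exec-sink".
        String[] split = value.split("_");
        return new Main(Integer.parseInt(split[0]), split[1], Integer.parseInt(split[2]));
    }

    @Override
    public String toString() {
        return id + "_" + name + "_" + version;
    }

    public static void main(String[] args) {
        System.out.println(parse("4_stream-exec-sink_1")); // prints 4_stream-exec-sink_1
    }
}
```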








[jira] [Updated] (FLINK-25888) Capture time that the job spends on deploying tasks

2022-01-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-25888:
---
Labels: pull-request-available  (was: )

> Capture time that the job spends on deploying tasks
> ---
>
> Key: FLINK-25888
> URL: https://issues.apache.org/jira/browse/FLINK-25888
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination, Runtime / Metrics
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> FLINK-23976 added standardized metrics for capturing how much time we spend 
> in each {{JobStatus}}. However, certain states in practice consist of several 
> stages; for example the RUNNING state also includes the deployment of tasks.
> To get a better picture of where time is spent, I propose to add new metrics 
> that capture the deployingTime based on the execution states. This will 
> additionally get us closer to a proper uptime metric, which ideally will be 
> runningTime - various stage time metrics.
> A job is considered to be deploying,
> * for batch jobs, if no task is running and at least one task is being 
> deployed
> * for streaming jobs, if at least one task is being deployed
> The semantics are different for batch/streaming jobs because they differ in 
> terms of how they make progress. For a streaming job, all tasks need to be 
> deployed for checkpointing to work. For batch jobs, any deployed task 
> immediately starts progressing the job.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] zentol commented on a change in pull request #18542: [FLINK-25855] Allow JobMaster to accept excess slots when restarting the job

2022-01-31 Thread GitBox


zentol commented on a change in pull request #18542:
URL: https://github.com/apache/flink/pull/18542#discussion_r795487730



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/JobMasterTest.java
##
@@ -1816,6 +1819,60 @@ public void 
testJobMasterOnlyTerminatesAfterTheSchedulerHasClosed() throws Excep
 jobMasterTerminationFuture.get();
 }
 
+@Test
+public void testJobMasterAcceptsExcessSlotsWhenJobIsRestarting() throws 
Exception {
+configuration.set(RestartStrategyOptions.RESTART_STRATEGY, 
"fixed-delay");
+configuration.set(
+RestartStrategyOptions.RESTART_STRATEGY_FIXED_DELAY_DELAY, 
Duration.ofDays(1));
+final JobMaster jobMaster =
+new JobMasterBuilder(jobGraph, rpcService)
+.withConfiguration(configuration)
+.createJobMaster();
+
+try {
+jobMaster.start();
+
+final JobMasterGateway jobMasterGateway =
+jobMaster.getSelfGateway(JobMasterGateway.class);
+
+assertThat(
+jobMasterGateway.requestJobStatus(testingTimeout).get(), 
is(JobStatus.RUNNING));
+
+final LocalUnresolvedTaskManagerLocation 
unresolvedTaskManagerLocation =
+new LocalUnresolvedTaskManagerLocation();
+registerSlotsAtJobMaster(
+1,
+jobMasterGateway,
+jobGraph.getJobID(),
+new TestingTaskExecutorGatewayBuilder()
+.setAddress("firstTaskManager")
+.createTestingTaskExecutorGateway(),
+unresolvedTaskManagerLocation);
+
+jobMasterGateway.disconnectTaskManager(
+unresolvedTaskManagerLocation.getResourceID(),
+new FlinkException("Test exception."));
+
+assertThat(
+jobMasterGateway.requestJobStatus(testingTimeout).get(),
+is(JobStatus.RESTARTING));

Review comment:
   > in this specific case there is no race condition
   
   Why is that? I thought that since the JM runs in an actual actor system 
without a DirectExecutor the processing of the disconnect can happen at some 
point in the future.








[GitHub] [flink] flinkbot commented on pull request #18566: [FLINK-25888] Capture time that the job spends on deploying tasks

2022-01-31 Thread GitBox


flinkbot commented on pull request #18566:
URL: https://github.com/apache/flink/pull/18566#issuecomment-1025547072


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 05c8b566defa4127f8d704c3e1343dcff9898056 (Mon Jan 31 
09:41:42 UTC 2022)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[GitHub] [flink] zentol commented on pull request #18542: [FLINK-25855] Allow JobMaster to accept excess slots when restarting the job

2022-01-31 Thread GitBox


zentol commented on pull request #18542:
URL: https://github.com/apache/flink/pull/18542#issuecomment-1025547972


   > [the slot provider allocates] from the set of available slots before it 
requests a new slot
   
   Yes, then we shouldn't need it.






[GitHub] [flink] flinkbot commented on pull request #18566: [FLINK-25888] Capture time that the job spends on deploying tasks

2022-01-31 Thread GitBox


flinkbot commented on pull request #18566:
URL: https://github.com/apache/flink/pull/18566#issuecomment-1025548729


   
   ## CI report:
   
   * 05c8b566defa4127f8d704c3e1343dcff9898056 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Created] (FLINK-25889) Implement scroll tracking on Flink project website

2022-01-31 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-25889:
--

 Summary: Implement scroll tracking on Flink project website
 Key: FLINK-25889
 URL: https://issues.apache.org/jira/browse/FLINK-25889
 Project: Flink
  Issue Type: Sub-task
  Components: Project Website
Reporter: Martijn Visser
Assignee: Martijn Visser








[jira] [Created] (FLINK-25890) Implement scroll tracking on Flink documentation

2022-01-31 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-25890:
--

 Summary: Implement scroll tracking on Flink documentation
 Key: FLINK-25890
 URL: https://issues.apache.org/jira/browse/FLINK-25890
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation
Reporter: Martijn Visser
Assignee: Martijn Visser








[GitHub] [flink] flinkbot edited a comment on pull request #18566: [FLINK-25888] Capture time that the job spends on deploying tasks

2022-01-31 Thread GitBox


flinkbot edited a comment on pull request #18566:
URL: https://github.com/apache/flink/pull/18566#issuecomment-1025548729


   
   ## CI report:
   
   * 05c8b566defa4127f8d704c3e1343dcff9898056 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30503)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] dawidwys commented on pull request #18482: [FLINK-25744] Support native savepoints

2022-01-31 Thread GitBox


dawidwys commented on pull request #18482:
URL: https://github.com/apache/flink/pull/18482#issuecomment-1025562743


   Hey, @akalash I added a single commit for documentation of the savepoint 
format. Could you take a look at #cdc389d as well?






[GitHub] [flink] flinkbot edited a comment on pull request #18482: [FLINK-25744] Support native savepoints

2022-01-31 Thread GitBox


flinkbot edited a comment on pull request #18482:
URL: https://github.com/apache/flink/pull/18482#issuecomment-1020278176


   
   ## CI report:
   
   * 91675405ae82d0e945f46caaa685d65763b296c3 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30070)
 
   * cdc389d9c5827d45c922a10dc63b7c1ef3b6df3f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] gaoyunhaii commented on a change in pull request #18428: [FLINK-25575] Add Sink V2 operators and translation

2022-01-31 Thread GitBox


gaoyunhaii commented on a change in pull request #18428:
URL: https://github.com/apache/flink/pull/18428#discussion_r795511412



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/sink/CommitterOperator.java
##
@@ -17,95 +17,173 @@
 
 package org.apache.flink.streaming.runtime.operators.sink;
 
-import org.apache.flink.core.io.SimpleVersionedSerialization;
+import org.apache.flink.api.common.state.ListState;
+import org.apache.flink.api.common.state.ListStateDescriptor;
+import 
org.apache.flink.api.common.typeutils.base.array.BytePrimitiveArraySerializer;
+import org.apache.flink.api.connector.sink2.Committer;
 import org.apache.flink.core.io.SimpleVersionedSerializer;
 import org.apache.flink.runtime.state.StateInitializationContext;
 import org.apache.flink.runtime.state.StateSnapshotContext;
+import org.apache.flink.streaming.api.connector.sink2.CommittableMessage;
+import org.apache.flink.streaming.api.connector.sink2.CommittableWithLineage;
+import org.apache.flink.streaming.api.graph.StreamConfig;
 import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
 import org.apache.flink.streaming.api.operators.BoundedOneInput;
 import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
+import org.apache.flink.streaming.api.operators.Output;
+import org.apache.flink.streaming.api.operators.util.SimpleVersionedListState;
+import 
org.apache.flink.streaming.runtime.operators.sink.committables.CheckpointCommittableManager;
+import 
org.apache.flink.streaming.runtime.operators.sink.committables.CommittableCollector;
+import 
org.apache.flink.streaming.runtime.operators.sink.committables.CommittableCollectorSerializer;
+import 
org.apache.flink.streaming.runtime.operators.sink.committables.CommittableManager;
 import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
 import org.apache.flink.streaming.runtime.tasks.ProcessingTimeService;
+import org.apache.flink.streaming.runtime.tasks.StreamTask;
 
+import java.io.IOException;
+import java.util.Collection;
 import java.util.Collections;
+import java.util.OptionalLong;
 
 import static org.apache.flink.util.IOUtils.closeAll;
 import static org.apache.flink.util.Preconditions.checkNotNull;
 
 /**
  * An operator that processes committables of a {@link 
org.apache.flink.api.connector.sink.Sink}.
  *
- * The operator may be part of a sink pipeline but usually is the last 
operator. There are
- * currently two ways this operator is used:
+ * The operator may be part of a sink pipeline, and it always follows 
{@link SinkWriterOperator},
+ * which initially outputs the committables.
  *
- * 
- *   In streaming mode, there is a {@link SinkOperator} with parallelism p 
containing {@link
- *   org.apache.flink.api.connector.sink.SinkWriter} and {@link
- *   org.apache.flink.api.connector.sink.Committer} and this operator 
containing the {@link
- *   org.apache.flink.api.connector.sink.GlobalCommitter} with parallelism 
1.
- *   In batch mode, there is a {@link SinkOperator} with parallelism p 
containing {@link
- *   org.apache.flink.api.connector.sink.SinkWriter} and this operator 
containing the {@link
- *   org.apache.flink.api.connector.sink.Committer} and {@link
- *   org.apache.flink.api.connector.sink.GlobalCommitter} with parallelism 
1.
- * 
- *
- * @param  the type of the committable
- * @param  the type of the committable to send to downstream operators
+ * @param  the type of the committable
  */
-class CommitterOperator extends AbstractStreamOperator
-implements OneInputStreamOperator, BoundedOneInput {
+class CommitterOperator extends 
AbstractStreamOperator>
+implements OneInputStreamOperator, 
CommittableMessage>,
+BoundedOneInput {
+
+private static final long RETRY_DELAY = 1000;
+private final SimpleVersionedSerializer committableSerializer;
+private final Committer committer;
+private final boolean emitDownstream;
+private CommittableCollector committableCollector;
+private long lastCompletedCheckpointId = -1;
 
-private final SimpleVersionedSerializer inputSerializer;
-private final CommitterHandler committerHandler;
-private final CommitRetrier commitRetrier;
+/** The operator's state descriptor. */
+private static final ListStateDescriptor 
STREAMING_COMMITTER_RAW_STATES_DESC =
+new ListStateDescriptor<>(
+"streaming_committer_raw_states", 
BytePrimitiveArraySerializer.INSTANCE);
+
+/** The operator's state. */
+private ListState> committableCollectorState;
 
 public CommitterOperator(
 ProcessingTimeService processingTimeService,
-SimpleVersionedSerializer inputSerializer,
-CommitterHandler committerHandler) {
-this.inputSerializer = checkNotNull(inputSerializer);
-this.committerHandler = checkNotNull(committerHandler);
-this.processingTimeService 

[GitHub] [flink] flinkbot edited a comment on pull request #18482: [FLINK-25744] Support native savepoints

2022-01-31 Thread GitBox


flinkbot edited a comment on pull request #18482:
URL: https://github.com/apache/flink/pull/18482#issuecomment-1020278176


   
   ## CI report:
   
   * 91675405ae82d0e945f46caaa685d65763b296c3 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30070)
 
   * cdc389d9c5827d45c922a10dc63b7c1ef3b6df3f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30504)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] matriv commented on a change in pull request #18479: [FLINK-25387] Introduce ExecNodeMetadata

2022-01-31 Thread GitBox


matriv commented on a change in pull request #18479:
URL: https://github.com/apache/flink/pull/18479#discussion_r795518199



##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/ExecNodeMetadata.java
##
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec;
+
+import org.apache.flink.FlinkVersion;
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.table.api.config.ExecutionConfigOptions;
+
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.lang.annotation.Documented;
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+/**
+ * Annotation to be used for {@link ExecNode}s to keep necessary metadata when
+ * serialising/deserialising them in a plan.
+ *
+ * Each {@link ExecNode} needs to be annotated and provide the necessary 
metadata info so that it
+ * can be correctly serialised and later on instantiated from a string (JSON) 
plan.
+ *
+ * It's possible for one {@link ExecNode} class to use multiple annotations to denote the ability to
+ * upgrade to more versions.
+ */
+@Documented
+@Target(ElementType.TYPE)
+@Retention(RetentionPolicy.RUNTIME)
+@PublicEvolving
+public @interface ExecNodeMetadata {
+// main information
+
+/**
+ * Unique name of the {@link ExecNode} for serialization/deserialization 
and uid() generation.
+ * Together with version, uniquely identifies the {@link ExecNode} class.
+ */
+String name();
+
+/**
+ * A positive integer denoting the evolving version of an {@link 
ExecNode}, used for
+ * serialization/deserialization and uid() generation. Together with 
{@link #name()}, uniquely
+ * identifies the {@link ExecNode} class.
+ */
+@JsonProperty("version")
+int version();
+
+// maintenance information for internal/community/test usage
+
+/**
+ * Hard coded list of {@link ExecutionConfigOptions} keys in the Flink version when the
+ * ExecNode was added. Does not reference instances in the {@link 
ExecutionConfigOptions} class
+ * in case those get refactored.
+ *
+ * Completeness tests can verify that every option is set once in 
restore and change
+ * detection tests.
+ *
+ * Completeness tests can verify that the ExecutionConfigOptions class 
still contains an
+ * option (via key or fallback key) for the given key.
+ *
+ * Restore can verify whether the restored ExecNode config map contains 
only options of the
+ * given keys.
+ */
+@JsonProperty("consumedOptions")
+String[] consumedOptions() default {};
+
+/**
+ * Set of operator names that can be part of the resulting Transformations.
+ *
+ * Restore and completeness tests can verify there exists at least one 
test that adds each
+ * operator and that the created Transformations contain only operators 
with `uid`s containing
+ * the given operator names.
+ *
+ * The concrete combinations or existence of these operators in the 
final pipeline depends on
+ * various parameters (both configuration and ExecNode-specific arguments 
such as interval size
+ * etc.).
+ */
+@JsonProperty("producedOperators")
+String[] producedOperators() default {};
+
+/**
+ * Used for plan validation and potentially plan migration.
+ *
+ * Needs to be updated when the JSON for the ExecNode changes: e.g. 
after adding an attribute
+ * to the JSON spec of the ExecNode.
+ *
+ * The annotation does not need to be updated for every Flink version. 
As the name suggests
+ * it is about the "minimum" version for a restore. If the minimum version 
is higher than the
+ * current Flink version, plan migration is necessary.
+ *
+ * Changing this version will always result in a new ExecNode {@link 
#version()}.
+ *
+ * Plan migration tests can use this information.
+ *
+ * Completeness tests can verify that restore tests exist for all JSON 
plan variations.
+ */
+@JsonProper
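   Putting the pieces of the (truncated) diff above together, an annotated node would look roughly like the sketch below. It is self-contained, using a minimal stand-in annotation with only the attributes visible in the diff; every concrete name and value on the sample class is illustrative, not taken from the PR:

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Minimal stand-in for the ExecNodeMetadata annotation in the diff above.
@Documented
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@interface ExecNodeMetadataSketch {
    String name();
    int version();
    String[] consumedOptions() default {};
    String[] producedOperators() default {};
}

// Hypothetical node class showing how the annotation would be applied;
// the name, version, option key, and operator name are invented examples.
@ExecNodeMetadataSketch(
        name = "stream-exec-example",
        version = 1,
        consumedOptions = {"table.exec.example-option"},
        producedOperators = {"example-operator"})
public class Main {
    public static void main(String[] args) {
        ExecNodeMetadataSketch m = Main.class.getAnnotation(ExecNodeMetadataSketch.class);
        // The planner would combine name and version into the plan token:
        System.out.println(m.name() + "_" + m.version()); // prints stream-exec-example_1
    }
}
```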

[jira] [Created] (FLINK-25891) NoClassDefFoundError AsyncSSLPrivateKeyMethod in benchmark

2022-01-31 Thread Anton Kalashnikov (Jira)
Anton Kalashnikov created FLINK-25891:
-

 Summary: NoClassDefFoundError AsyncSSLPrivateKeyMethod in benchmark
 Key: FLINK-25891
 URL: https://issues.apache.org/jira/browse/FLINK-25891
 Project: Flink
  Issue Type: Bug
  Components: Benchmarks
Affects Versions: 1.15.0
Reporter: Anton Kalashnikov








[jira] [Updated] (FLINK-25891) NoClassDefFoundError AsyncSSLPrivateKeyMethod in benchmark

2022-01-31 Thread Anton Kalashnikov (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Kalashnikov updated FLINK-25891:
--
Description: 
After FLINK-25016 (upgrading Netty), the StreamNetworkThroughputBenchmark 
benchmark fails when it tries to connect via SSL with the error:

{noformat}
java.lang.NoClassDefFoundError: 
org/apache/flink/shaded/netty4/io/netty/internal/tcnative/AsyncSSLPrivateKeyMethod
at 
org.apache.flink.shaded.netty4.io.netty.handler.ssl.OpenSslX509KeyManagerFactory$OpenSslKeyManagerFactorySpi.engineInit(OpenSslX509KeyManagerFactory.java:129)
at javax.net.ssl.KeyManagerFactory.init(KeyManagerFactory.java:256)
at 
org.apache.flink.runtime.net.SSLUtils.getKeyManagerFactory(SSLUtils.java:279)
at 
org.apache.flink.runtime.net.SSLUtils.createInternalNettySSLContext(SSLUtils.java:324)
at 
org.apache.flink.runtime.net.SSLUtils.createInternalNettySSLContext(SSLUtils.java:303)
at 
org.apache.flink.runtime.net.SSLUtils.createInternalClientSSLEngineFactory(SSLUtils.java:119)
at 
org.apache.flink.runtime.io.network.netty.NettyConfig.createClientSSLEngineFactory(NettyConfig.java:147)
at 
org.apache.flink.runtime.io.network.netty.NettyClient.init(NettyClient.java:115)
at 
org.apache.flink.runtime.io.network.netty.NettyConnectionManager.start(NettyConnectionManager.java:64)
at 
org.apache.flink.runtime.io.network.NettyShuffleEnvironment.start(NettyShuffleEnvironment.java:329)
at 
org.apache.flink.streaming.runtime.io.benchmark.StreamNetworkBenchmarkEnvironment.setUp(StreamNetworkBenchmarkEnvironment.java:133)
at 
org.apache.flink.streaming.runtime.io.benchmark.StreamNetworkThroughputBenchmark.setUp(StreamNetworkThroughputBenchmark.java:108)
at 
org.apache.flink.benchmark.StreamNetworkThroughputBenchmarkExecutor$MultiEnvironment.setUp(StreamNetworkThroughputBenchmarkExecutor.java:117)
at 
org.apache.flink.benchmark.generated.StreamNetworkThroughputBenchmarkExecutor_networkThroughput_jmhTest._jmh_tryInit_f_multienvironment1_1(StreamNetworkThroughputBenchmarkExecutor_networkThroughput_jmhTest.java:351)
at 
org.apache.flink.benchmark.generated.StreamNetworkThroughputBenchmarkExecutor_networkThroughput_jmhTest.networkThroughput_Throughput(StreamNetworkThroughputBenchmarkExecutor_networkThroughput_jmhTest.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:453)
at 
org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:437)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: 
org.apache.flink.shaded.netty4.io.netty.internal.tcnative.AsyncSSLPrivateKeyMethod
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 27 more
{noformat}
 

  was:
After FLINK-25016(upgrading Netty) StreamNetworkThroughputBenchmark benchmark 
fails when it tries to connect via SSL with error:

 
java.lang.NoClassDefFoundError: 
org/apache/flink/shaded/netty4/io/netty/internal/tcnative/AsyncSSLPrivateKeyMethod
at 
org.apache.flink.shaded.netty4.io.netty.handler.ssl.OpenSslX509KeyManagerFactory$OpenSslKeyManagerFactorySpi.engineInit(OpenSslX509KeyManagerFactory.java:129)
at javax.net.ssl.KeyManagerFactory.init(KeyManagerFactory.java:256)
at 
org.apache.flink.runtime.net.SSLUtils.getKeyManagerFactory(SSLUtils.java:279)
at 
org.apache.flink.runtime.net.SSLUtils.createInternalNettySSLContext(SSLUtils.java:324)
at 
org.apache.flink.runtime.net.SSLUtils.createInternalNettySSLContext(SSLUtils.java:303)
at 
org.apache.flink.runtime.net.SSLUtils.createInternalClientSSLEngineFactory(SSLUtils.java:119)
at 
org.apache.flink.runtime.io.network.netty.NettyConfig.createClientSSLEngineFactory(NettyConfig.java:147)
at 
org.apache.flink.runtime.io.network.netty.NettyClient.init(NettyClient.java:115)
at 
org.apache.flink.runtime.

[jira] [Updated] (FLINK-25891) NoClassDefFoundError AsyncSSLPrivateKeyMethod in benchmark

2022-01-31 Thread Anton Kalashnikov (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Kalashnikov updated FLINK-25891:
--
Description: 
After FLINK-25016 (upgrading Netty), the StreamNetworkThroughputBenchmark fails when it tries to connect via SSL with the following error:

 
java.lang.NoClassDefFoundError: 
org/apache/flink/shaded/netty4/io/netty/internal/tcnative/AsyncSSLPrivateKeyMethod
at 
org.apache.flink.shaded.netty4.io.netty.handler.ssl.OpenSslX509KeyManagerFactory$OpenSslKeyManagerFactorySpi.engineInit(OpenSslX509KeyManagerFactory.java:129)
at javax.net.ssl.KeyManagerFactory.init(KeyManagerFactory.java:256)
at 
org.apache.flink.runtime.net.SSLUtils.getKeyManagerFactory(SSLUtils.java:279)
at 
org.apache.flink.runtime.net.SSLUtils.createInternalNettySSLContext(SSLUtils.java:324)
at 
org.apache.flink.runtime.net.SSLUtils.createInternalNettySSLContext(SSLUtils.java:303)
at 
org.apache.flink.runtime.net.SSLUtils.createInternalClientSSLEngineFactory(SSLUtils.java:119)
at 
org.apache.flink.runtime.io.network.netty.NettyConfig.createClientSSLEngineFactory(NettyConfig.java:147)
at 
org.apache.flink.runtime.io.network.netty.NettyClient.init(NettyClient.java:115)
at 
org.apache.flink.runtime.io.network.netty.NettyConnectionManager.start(NettyConnectionManager.java:64)
at 
org.apache.flink.runtime.io.network.NettyShuffleEnvironment.start(NettyShuffleEnvironment.java:329)
at 
org.apache.flink.streaming.runtime.io.benchmark.StreamNetworkBenchmarkEnvironment.setUp(StreamNetworkBenchmarkEnvironment.java:133)
at 
org.apache.flink.streaming.runtime.io.benchmark.StreamNetworkThroughputBenchmark.setUp(StreamNetworkThroughputBenchmark.java:108)
at 
org.apache.flink.benchmark.StreamNetworkThroughputBenchmarkExecutor$MultiEnvironment.setUp(StreamNetworkThroughputBenchmarkExecutor.java:117)
at 
org.apache.flink.benchmark.generated.StreamNetworkThroughputBenchmarkExecutor_networkThroughput_jmhTest._jmh_tryInit_f_multienvironment1_1(StreamNetworkThroughputBenchmarkExecutor_networkThroughput_jmhTest.java:351)
at 
org.apache.flink.benchmark.generated.StreamNetworkThroughputBenchmarkExecutor_networkThroughput_jmhTest.networkThroughput_Throughput(StreamNetworkThroughputBenchmarkExecutor_networkThroughput_jmhTest.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:453)
at 
org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:437)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: 
org.apache.flink.shaded.netty4.io.netty.internal.tcnative.AsyncSSLPrivateKeyMethod
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 27 more
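As a quick diagnostic for a {{ClassNotFoundException}} like the one above, one can probe whether the shaded tcnative class is actually visible on the classpath in question. This is a hypothetical helper, not part of the benchmark code; a missing shaded-netty-tcnative artifact is one plausible cause, but the exact packaging is an assumption here.

```java
// Hypothetical classpath probe: checks whether a class name resolves without
// initializing it. Running this on a plain classpath (without the shaded
// Flink netty jars) prints "missing" for the tcnative class from the trace.
public class ClassPresenceCheck {
    public static void main(String[] args) {
        String name = args.length > 0
                ? args[0]
                : "org.apache.flink.shaded.netty4.io.netty.internal.tcnative.AsyncSSLPrivateKeyMethod";
        try {
            // initialize=false: only resolve the class, do not run static init
            Class.forName(name, false, ClassPresenceCheck.class.getClassLoader());
            System.out.println("present");
        } catch (ClassNotFoundException e) {
            System.out.println("missing");
        }
    }
}
```

Running it with a different fully-qualified name as the first argument checks any other suspect class the same way.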
 

> NoClassDefFoundError AsyncSSLPrivateKeyMethod in benchmark
> --
>
> Key: FLINK-25891
> URL: https://issues.apache.org/jira/browse/FLINK-25891
> Project: Flink
>  Issue Type: Bug
>  Components: Benchmarks
>Affects Versions: 1.15.0
>Reporter: Anton Kalashnikov
>Priority: Major
>
> After FLINK-25016 (upgrading Netty), the StreamNetworkThroughputBenchmark fails when it tries to connect via SSL with the following error:
>  
> java.lang.NoClassDefFoundError: 
> org/apache/flink/shaded/netty4/io/netty/internal/tcnative/AsyncSSLPrivateKeyMethod
>   at 
> org.apache.flink.shaded.netty4.io.netty.handler.ssl.OpenSslX509KeyManagerFactory$OpenSslKeyManagerFactorySpi.engineInit(OpenSslX509KeyManagerFactory.java:129)
>   at javax.net.ssl.KeyManagerFactory.init(KeyManagerFactory.java:256)
>   at 
> org.apache.flink.runtime.net.SSLUtils.getKeyManagerFactory(SSLUtils.java:279)
>   at 
> org.apache.flink.runtime.net.SSLUtils.createInternalNettySSLContext(SSLUtils.java:324)
>   at 
> org.apache.flink.runtime.n

[jira] [Updated] (FLINK-25891) NoClassDefFoundError AsyncSSLPrivateKeyMethod in benchmark

2022-01-31 Thread Anton Kalashnikov (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Kalashnikov updated FLINK-25891:
--
Priority: Blocker  (was: Major)

> NoClassDefFoundError AsyncSSLPrivateKeyMethod in benchmark
> --
>
> Key: FLINK-25891
> URL: https://issues.apache.org/jira/browse/FLINK-25891
> Project: Flink
>  Issue Type: Bug
>  Components: Benchmarks
>Affects Versions: 1.15.0
>Reporter: Anton Kalashnikov
>Priority: Blocker
>
> After FLINK-25016 (upgrading Netty), the StreamNetworkThroughputBenchmark fails when it tries to connect via SSL with the following error:
> {noformat}
> java.lang.NoClassDefFoundError: 
> org/apache/flink/shaded/netty4/io/netty/internal/tcnative/AsyncSSLPrivateKeyMethod
>   at 
> org.apache.flink.shaded.netty4.io.netty.handler.ssl.OpenSslX509KeyManagerFactory$OpenSslKeyManagerFactorySpi.engineInit(OpenSslX509KeyManagerFactory.java:129)
>   at javax.net.ssl.KeyManagerFactory.init(KeyManagerFactory.java:256)
>   at 
> org.apache.flink.runtime.net.SSLUtils.getKeyManagerFactory(SSLUtils.java:279)
>   at 
> org.apache.flink.runtime.net.SSLUtils.createInternalNettySSLContext(SSLUtils.java:324)
>   at 
> org.apache.flink.runtime.net.SSLUtils.createInternalNettySSLContext(SSLUtils.java:303)
>   at 
> org.apache.flink.runtime.net.SSLUtils.createInternalClientSSLEngineFactory(SSLUtils.java:119)
>   at 
> org.apache.flink.runtime.io.network.netty.NettyConfig.createClientSSLEngineFactory(NettyConfig.java:147)
>   at 
> org.apache.flink.runtime.io.network.netty.NettyClient.init(NettyClient.java:115)
>   at 
> org.apache.flink.runtime.io.network.netty.NettyConnectionManager.start(NettyConnectionManager.java:64)
>   at 
> org.apache.flink.runtime.io.network.NettyShuffleEnvironment.start(NettyShuffleEnvironment.java:329)
>   at 
> org.apache.flink.streaming.runtime.io.benchmark.StreamNetworkBenchmarkEnvironment.setUp(StreamNetworkBenchmarkEnvironment.java:133)
>   at 
> org.apache.flink.streaming.runtime.io.benchmark.StreamNetworkThroughputBenchmark.setUp(StreamNetworkThroughputBenchmark.java:108)
>   at 
> org.apache.flink.benchmark.StreamNetworkThroughputBenchmarkExecutor$MultiEnvironment.setUp(StreamNetworkThroughputBenchmarkExecutor.java:117)
>   at 
> org.apache.flink.benchmark.generated.StreamNetworkThroughputBenchmarkExecutor_networkThroughput_jmhTest._jmh_tryInit_f_multienvironment1_1(StreamNetworkThroughputBenchmarkExecutor_networkThroughput_jmhTest.java:351)
>   at 
> org.apache.flink.benchmark.generated.StreamNetworkThroughputBenchmarkExecutor_networkThroughput_jmhTest.networkThroughput_Throughput(StreamNetworkThroughputBenchmarkExecutor_networkThroughput_jmhTest.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:453)
>   at 
> org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:437)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.shaded.netty4.io.netty.internal.tcnative.AsyncSSLPrivateKeyMethod
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
>   ... 27 more
> {noformat}
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] CrynetLogistics commented on a change in pull request #18553: [FLINK-25846][FLINK-25848] Async Sink does not gracefully shutdown on Cancel, KDS Sink does not fast fail when invalid co

2022-01-31 Thread GitBox


CrynetLogistics commented on a change in pull request #18553:
URL: https://github.com/apache/flink/pull/18553#discussion_r795522964



##
File path: 
flink-connectors/flink-connector-aws-kinesis-data-streams/src/main/java/org/apache/flink/connector/kinesis/sink/KinesisDataStreamsException.java
##
@@ -39,12 +39,12 @@ public KinesisDataStreamsException(final String message, 
final Throwable cause)
 
 public KinesisDataStreamsFailFastException() {
 super(
-"Encountered an exception while persisting records, not 
retrying due to {failOnError} being set.");
+"Encountered an exception while persisting records, not 
retrying due to either: {failOnError} being set or the exception should not be 
retried.");
 }
 
 public KinesisDataStreamsFailFastException(final Throwable cause) {
 super(
-"Encountered an exception while persisting records, not 
retrying due to {failOnError} being set.",
+"Encountered an exception while persisting records, not 
retrying due to either: {failOnError} being set or the exception should not be 
retried.",

Review comment:
   I'm currently wrapping any exception that I think is not retryable in `KinesisDataStreamsException` when we encounter it in the KDS sink.
   
   I hear you; maybe it would be better to throw any `Error`s and `RuntimeException`s directly without wrapping. That way these unchecked exceptions would surface without a KDSException wrapper confusing things, but obviously they must still be accepted by the consumer, because even unchecked exceptions aren't fatal until they are consumed by the mailbox.
   
   I also agree we can split the failOnError case from the non-retryable-exception case.
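The alternative discussed above can be sketched as follows. All names here are illustrative, not the actual connector API: `Error`s and unchecked exceptions are surfaced directly, and only everything else gets wrapped once.

```java
// Hypothetical sketch: surface Errors and RuntimeExceptions without wrapping;
// wrap only checked/other throwables in a single sink-specific exception.
final class ExceptionSurfacing {
    static class SinkWrapperException extends RuntimeException {
        SinkWrapperException(Throwable cause) {
            super("non-retryable sink failure", cause);
        }
    }

    static RuntimeException surface(Throwable err) {
        if (err instanceof Error) {
            throw (Error) err;                 // fatal JVM errors: never wrap
        }
        if (err instanceof RuntimeException) {
            return (RuntimeException) err;     // unchecked: pass through unwrapped
        }
        return new SinkWrapperException(err);  // checked/other: wrap once
    }

    public static void main(String[] args) {
        // An unchecked exception comes back unwrapped...
        System.out.println(surface(new IllegalStateException("boom"))
                instanceof IllegalStateException);
        // ...while a checked exception is wrapped exactly once.
        System.out.println(surface(new java.io.IOException("io"))
                instanceof SinkWrapperException);
    }
}
```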




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-25892) Develop ArchUnit test for connectors

2022-01-31 Thread Jing Ge (Jira)
Jing Ge created FLINK-25892:
---

 Summary: Develop ArchUnit test for connectors
 Key: FLINK-25892
 URL: https://issues.apache.org/jira/browse/FLINK-25892
 Project: Flink
  Issue Type: Improvement
  Components: Tests
Reporter: Jing Ge
Assignee: Jing Ge


ArchUnit tests should be developed for the connector submodules once the ArchUnit infrastructure for test code has been built.





[jira] [Created] (FLINK-25893) ResourceManagerServiceImpl's lifecycle can lead to exceptions

2022-01-31 Thread Till Rohrmann (Jira)
Till Rohrmann created FLINK-25893:
-

 Summary: ResourceManagerServiceImpl's lifecycle can lead to 
exceptions
 Key: FLINK-25893
 URL: https://issues.apache.org/jira/browse/FLINK-25893
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.14.3, 1.15.0
Reporter: Till Rohrmann


The {{ResourceManagerServiceImpl}} lifecycle can lead to exceptions when calling {{ResourceManagerServiceImpl.deregisterApplication}}. The problem arises when the {{DispatcherResourceManagerComponent}} is shut down before the {{ResourceManagerServiceImpl}} gains leadership, or while it is starting the {{ResourceManager}}.

One problem is that {{deregisterApplication}} returns an exceptionally completed future if there is no leading {{ResourceManager}}.

Another problem is that even if there is a leading {{ResourceManager}}, it may not have been started yet. In that case, [ResourceManagerGateway.deregisterApplication|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManagerServiceImpl.java#L143] will be discarded. The reason for this behaviour is that we create a {{ResourceManager}} in one {{Runnable}} and only start it in another, so a {{deregisterApplication}} call can acquire the {{lock}} in between.

I'd suggest correcting the lifecycle and contract of {{ResourceManagerServiceImpl.deregisterApplication}}.

Please note that due to this problem, the error reporting of this method has 
been suppressed. See FLINK-25885 for more details.
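The two-step lifecycle described above can be sketched with a simplified, hypothetical model (names like {{ResourceManagerStub}} are illustrative, not Flink's actual classes): creation happens in one {{Runnable}}, starting in a second one, and a deregister call that takes the lock between the two observes a created-but-not-started instance.

```java
import java.util.concurrent.locks.ReentrantLock;

// Simplified, hypothetical model of the two-step lifecycle: the resource
// manager is *created* in one step and only *started* in a second one; a
// deregister call acquiring the lock in between sees an unstarted instance
// and would be discarded.
class LifecycleRaceSketch {
    static final class ResourceManagerStub {
        volatile boolean started = false;
    }

    private final ReentrantLock lock = new ReentrantLock();
    private ResourceManagerStub leader;

    void createStep() {            // step 1: create the instance under the lock
        lock.lock();
        try {
            leader = new ResourceManagerStub();
        } finally {
            lock.unlock();
        }
    }

    void startStep() {             // step 2: start it in a separate runnable
        lock.lock();
        try {
            leader.started = true;
        } finally {
            lock.unlock();
        }
    }

    // True iff a deregister call arriving now would be discarded:
    // a leader exists but has not been started yet.
    boolean deregisterWouldBeDiscarded() {
        lock.lock();
        try {
            return leader != null && !leader.started;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        LifecycleRaceSketch s = new LifecycleRaceSketch();
        s.createStep();
        System.out.println(s.deregisterWouldBeDiscarded()); // inside the window
        s.startStep();
        System.out.println(s.deregisterWouldBeDiscarded()); // after start
    }
}
```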





[GitHub] [flink] pnowojski commented on pull request #18475: [FLINK-25728] Protential memory leaks in StreamMultipleInputProcessor

2022-01-31 Thread GitBox


pnowojski commented on pull request #18475:
URL: https://github.com/apache/flink/pull/18475#issuecomment-1025585234


   Your efforts didn't go to waste. We deeply appreciate that you reported and analysed this bug. Half a year ago another user reported similar symptoms, but neither he nor we were able to track it down back then. Analysing it was definitely the most valuable and important part of this issue.
   
   Apart from the things I've already commented on, there are a couple of other smaller (stylistic) issues. We will also potentially need to deduplicate this code with a fix for FLINK-25827. To speed things up, I will take over your commit, drop the tests (we will need to reimplement them in https://issues.apache.org/jira/browse/FLINK-25869), and merge most of your production code as is.
   
   >  I couldn't see how can the callback get the obsolete object. Dirty 
inconsistence between CPU cache cannot last that long. The callback will see 
the correct object.
   
   I'm afraid you still don't quite get it. If you are further curious about 
the subject, please search around for some other resources. 
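For readers curious about the visibility point raised here: the Java memory model only guarantees that a write becomes visible to another thread through a happens-before edge, such as a volatile write/read pair; without one, a stale value can legally persist indefinitely, not just for a brief cache-inconsistency window. A minimal generic illustration, unrelated to the actual Flink code:

```java
// Generic Java memory model sketch (not Flink code): the volatile write to
// `ready` publishes the preceding plain write to `payload`. If `ready` were
// not volatile, the reader could legally spin forever on a stale value.
class VisibilitySketch {
    static volatile boolean ready = false;
    static int payload = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) { }           // busy-wait until the volatile write is visible
            System.out.println(payload); // the happens-before edge makes 42 visible
        });
        reader.start();
        payload = 42;  // plain write, published by the volatile write below
        ready = true;  // volatile write: creates the happens-before edge
        reader.join();
    }
}
```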






[jira] [Commented] (FLINK-25893) ResourceManagerServiceImpl's lifecycle can lead to exceptions

2022-01-31 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17484597#comment-17484597
 ] 

Till Rohrmann commented on FLINK-25893:
---

[~xtsong] could you take a look at this problem? I think you know this part of the code best.

> ResourceManagerServiceImpl's lifecycle can lead to exceptions
> -
>
> Key: FLINK-25893
> URL: https://issues.apache.org/jira/browse/FLINK-25893
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0, 1.14.3
>Reporter: Till Rohrmann
>Priority: Critical
>
> The {{ResourceManagerServiceImpl}} lifecycle can lead to exceptions when 
> calling {{ResourceManagerServiceImpl.deregisterApplication}}. The problem 
> arises when the {{DispatcherResourceManagerComponent}} is shutdown before the 
> {{ResourceManagerServiceImpl}} gains leadership or while it is starting the 
> {{ResourceManager}}.
> One problem is that {{deregisterApplication}} returns an exceptionally 
> completed future if there is no leading {{ResourceManager}}.
> Another problem is that if there is a leading {{ResourceManager}}, then it 
> can still be the case that it has not been started yet. If this is the case, 
> then 
> [ResourceManagerGateway.deregisterApplication|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManagerServiceImpl.java#L143]
>  will be discarded. The reason for this behaviour is that we create a 
> {{ResourceManager}} in one {{Runnable}} and only start it in another. Due to 
> this there can be the {{deregisterApplication}} call that gets the {{lock}} 
> in between.
> I'd suggest to correct the lifecycle and contract of the 
> {{ResourceManagerServiceImpl.deregisterApplication}}.
> Please note that due to this problem, the error reporting of this method has 
> been suppressed. See FLINK-25885 for more details.





[GitHub] [flink] CrynetLogistics commented on a change in pull request #18553: [FLINK-25846][FLINK-25848] Async Sink does not gracefully shutdown on Cancel, KDS Sink does not fast fail when invalid co

2022-01-31 Thread GitBox


CrynetLogistics commented on a change in pull request #18553:
URL: https://github.com/apache/flink/pull/18553#discussion_r795529758



##
File path: 
flink-connectors/flink-connector-aws-kinesis-data-streams/src/main/java/org/apache/flink/connector/kinesis/sink/KinesisDataStreamsException.java
##
@@ -39,12 +39,12 @@ public KinesisDataStreamsException(final String message, 
final Throwable cause)
 
 public KinesisDataStreamsFailFastException() {
 super(
-"Encountered an exception while persisting records, not 
retrying due to {failOnError} being set.");
+"Encountered an exception while persisting records, not 
retrying due to either: {failOnError} being set or the exception should not be 
retried.");
 }
 
 public KinesisDataStreamsFailFastException(final Throwable cause) {
 super(
-"Encountered an exception while persisting records, not 
retrying due to {failOnError} being set.",
+"Encountered an exception while persisting records, not 
retrying due to either: {failOnError} being set or the exception should not be 
retried.",

Review comment:
   Ah nevermind, I understand now.








[jira] [Commented] (FLINK-25885) ClusterEntrypointTest.testWorkingDirectoryIsDeletedIfApplicationCompletes failed on azure

2022-01-31 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17484598#comment-17484598
 ] 

Till Rohrmann commented on FLINK-25885:
---

I think there are some deeper problems with the lifecycle of the 
{{ResourceManagerServiceImpl}}. I'd suggest disabling the error reporting for 
{{ResourceManagerServiceImpl.deregisterApplication}} in order to stabilize 
the test, and fixing the underlying problem as part of FLINK-25893.

> ClusterEntrypointTest.testWorkingDirectoryIsDeletedIfApplicationCompletes  
> failed on azure
> --
>
> Key: FLINK-25885
> URL: https://issues.apache.org/jira/browse/FLINK-25885
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> 2022-01-31T05:00:07.3113870Z Jan 31 05:00:07 
> java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.rpc.akka.exceptions.AkkaRpcException: Discard 
> message, because the rpc endpoint 
> akka.tcp://flink@127.0.0.1:6123/user/rpc/resourcemanager_2 has not been 
> started yet.
> 2022-01-31T05:00:07.3115008Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
> 2022-01-31T05:00:07.3115778Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
> 2022-01-31T05:00:07.3116527Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:607)
> 2022-01-31T05:00:07.3117267Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2022-01-31T05:00:07.3118011Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-01-31T05:00:07.3118770Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
> 2022-01-31T05:00:07.3119608Z Jan 31 05:00:07  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:251)
> 2022-01-31T05:00:07.3120425Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-01-31T05:00:07.3121199Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-01-31T05:00:07.3121957Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-01-31T05:00:07.3122716Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
> 2022-01-31T05:00:07.3123457Z Jan 31 05:00:07  at 
> org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1387)
> 2022-01-31T05:00:07.3124241Z Jan 31 05:00:07  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
> 2022-01-31T05:00:07.3125106Z Jan 31 05:00:07  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
> 2022-01-31T05:00:07.3126063Z Jan 31 05:00:07  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
> 2022-01-31T05:00:07.3127207Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-01-31T05:00:07.3127982Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-01-31T05:00:07.3128741Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-01-31T05:00:07.3129497Z Jan 31 05:00:07  at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
> 2022-01-31T05:00:07.3130385Z Jan 31 05:00:07  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:45)
> 2022-01-31T05:00:07.3131092Z Jan 31 05:00:07  at 
> akka.dispatch.OnComplete.internal(Future.scala:299)
> 2022-01-31T05:00:07.3131695Z Jan 31 05:00:07  at 
> akka.dispatch.OnComplete.internal(Future.scala:297)
> 2022-01-31T05:00:07.3132310Z Jan 31 05:00:07  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
> 2022-01-31T05:00:07.3132943Z Jan 31 05:00:07  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
> 2022-01-31T05:00:07.3133577Z Jan 31 05:00:07  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
> 2022-01-31T05:00:07.3134340Z Jan 31 05:00:07  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute

[GitHub] [flink] tillrohrmann opened a new pull request #18567: [FLINK-25893] Suppress error reporting for ResourceManagerServiceImpl.deregisterApplication

2022-01-31 Thread GitBox


tillrohrmann opened a new pull request #18567:
URL: https://github.com/apache/flink/pull/18567


   This commit suppresses the error reporting for 
ResourceManagerServiceImpl.deregisterApplication
   in order to harden the 
ClusterEntrypointTest.testWorkingDirectoryIsDeletedIfApplicationCompletes.
   This is a temporary fix until FLINK-25893 has been fixed.






[jira] [Updated] (FLINK-25893) ResourceManagerServiceImpl's lifecycle can lead to exceptions

2022-01-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-25893:
---
Labels: pull-request-available  (was: )

> ResourceManagerServiceImpl's lifecycle can lead to exceptions
> -
>
> Key: FLINK-25893
> URL: https://issues.apache.org/jira/browse/FLINK-25893
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0, 1.14.3
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: pull-request-available
>
> The {{ResourceManagerServiceImpl}} lifecycle can lead to exceptions when 
> calling {{ResourceManagerServiceImpl.deregisterApplication}}. The problem 
> arises when the {{DispatcherResourceManagerComponent}} is shutdown before the 
> {{ResourceManagerServiceImpl}} gains leadership or while it is starting the 
> {{ResourceManager}}.
> One problem is that {{deregisterApplication}} returns an exceptionally 
> completed future if there is no leading {{ResourceManager}}.
> Another problem is that if there is a leading {{ResourceManager}}, then it 
> can still be the case that it has not been started yet. If this is the case, 
> then 
> [ResourceManagerGateway.deregisterApplication|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManagerServiceImpl.java#L143]
>  will be discarded. The reason for this behaviour is that we create a 
> {{ResourceManager}} in one {{Runnable}} and only start it in another. Due to 
> this there can be the {{deregisterApplication}} call that gets the {{lock}} 
> in between.
> I'd suggest to correct the lifecycle and contract of the 
> {{ResourceManagerServiceImpl.deregisterApplication}}.
> Please note that due to this problem, the error reporting of this method has 
> been suppressed. See FLINK-25885 for more details.





[jira] [Commented] (FLINK-25891) NoClassDefFoundError AsyncSSLPrivateKeyMethod in benchmark

2022-01-31 Thread Anton Kalashnikov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17484601#comment-17484601
 ] 

Anton Kalashnikov commented on FLINK-25891:
---

[~chesnay], do you maybe have an idea how to fix this quickly?

> NoClassDefFoundError AsyncSSLPrivateKeyMethod in benchmark
> --
>
> Key: FLINK-25891
> URL: https://issues.apache.org/jira/browse/FLINK-25891
> Project: Flink
>  Issue Type: Bug
>  Components: Benchmarks
>Affects Versions: 1.15.0
>Reporter: Anton Kalashnikov
>Priority: Blocker
>
> After FLINK-25016 (upgrading Netty), the StreamNetworkThroughputBenchmark fails when it tries to connect via SSL with the following error:
> {noformat}
> java.lang.NoClassDefFoundError: 
> org/apache/flink/shaded/netty4/io/netty/internal/tcnative/AsyncSSLPrivateKeyMethod
>   at 
> org.apache.flink.shaded.netty4.io.netty.handler.ssl.OpenSslX509KeyManagerFactory$OpenSslKeyManagerFactorySpi.engineInit(OpenSslX509KeyManagerFactory.java:129)
>   at javax.net.ssl.KeyManagerFactory.init(KeyManagerFactory.java:256)
>   at 
> org.apache.flink.runtime.net.SSLUtils.getKeyManagerFactory(SSLUtils.java:279)
>   at 
> org.apache.flink.runtime.net.SSLUtils.createInternalNettySSLContext(SSLUtils.java:324)
>   at 
> org.apache.flink.runtime.net.SSLUtils.createInternalNettySSLContext(SSLUtils.java:303)
>   at 
> org.apache.flink.runtime.net.SSLUtils.createInternalClientSSLEngineFactory(SSLUtils.java:119)
>   at 
> org.apache.flink.runtime.io.network.netty.NettyConfig.createClientSSLEngineFactory(NettyConfig.java:147)
>   at 
> org.apache.flink.runtime.io.network.netty.NettyClient.init(NettyClient.java:115)
>   at 
> org.apache.flink.runtime.io.network.netty.NettyConnectionManager.start(NettyConnectionManager.java:64)
>   at 
> org.apache.flink.runtime.io.network.NettyShuffleEnvironment.start(NettyShuffleEnvironment.java:329)
>   at 
> org.apache.flink.streaming.runtime.io.benchmark.StreamNetworkBenchmarkEnvironment.setUp(StreamNetworkBenchmarkEnvironment.java:133)
>   at 
> org.apache.flink.streaming.runtime.io.benchmark.StreamNetworkThroughputBenchmark.setUp(StreamNetworkThroughputBenchmark.java:108)
>   at 
> org.apache.flink.benchmark.StreamNetworkThroughputBenchmarkExecutor$MultiEnvironment.setUp(StreamNetworkThroughputBenchmarkExecutor.java:117)
>   at 
> org.apache.flink.benchmark.generated.StreamNetworkThroughputBenchmarkExecutor_networkThroughput_jmhTest._jmh_tryInit_f_multienvironment1_1(StreamNetworkThroughputBenchmarkExecutor_networkThroughput_jmhTest.java:351)
>   at 
> org.apache.flink.benchmark.generated.StreamNetworkThroughputBenchmarkExecutor_networkThroughput_jmhTest.networkThroughput_Throughput(StreamNetworkThroughputBenchmarkExecutor_networkThroughput_jmhTest.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:453)
>   at 
> org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:437)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.shaded.netty4.io.netty.internal.tcnative.AsyncSSLPrivateKeyMethod
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
>   ... 27 more
> {noformat}
>  





[GitHub] [flink] vahmed-hamdy commented on a change in pull request #18553: [FLINK-25846][FLINK-25848] Async Sink does not gracefully shutdown on Cancel, KDS Sink does not fast fail when invalid confi

2022-01-31 Thread GitBox


vahmed-hamdy commented on a change in pull request #18553:
URL: https://github.com/apache/flink/pull/18553#discussion_r795534589



##
File path: 
flink-connectors/flink-connector-aws-kinesis-data-streams/src/main/java/org/apache/flink/connector/kinesis/sink/KinesisDataStreamsSinkWriter.java
##
@@ -176,11 +179,35 @@ private void handlePartiallyFailedRequest(
 
 private boolean isRetryable(Throwable err) {
 if (err instanceof CompletionException
-&& err.getCause() instanceof ResourceNotFoundException) {
+&& 
isInterruptingSignalException(ExceptionUtils.stripCompletionException(err))) {
+getFatalExceptionCons().accept(new FlinkException("Running job was 
cancelled"));
+return false;
+}
+if (err instanceof CompletionException
+&& ExceptionUtils.stripCompletionException(err)
+instanceof ResourceNotFoundException) {
 getFatalExceptionCons()
 .accept(
 new KinesisDataStreamsException(
-"Encountered non-recoverable exception", 
err));
+"Encountered non-recoverable exception 
relating to not being able to find the specified resources",
+err));
+return false;
+}
+if (err instanceof CompletionException
+&& ExceptionUtils.stripCompletionException(err) instanceof 
StsException) {
+getFatalExceptionCons()
+.accept(
+new KinesisDataStreamsException(
+"Encountered non-recoverable exception 
relating to the provided credentials.",
+err));
+return false;
+}
+if (err instanceof Error) {
+getFatalExceptionCons()

Review comment:
   The parent (`AsyncSinkWriter`) doesn't offer a retry strategy; it leaves the
request submission implementation to the concrete classes. Is it better to
enforce a retry strategy on all implementations, or to move the common logic
into a validator/classifier class that all implementations can reuse without
changing the base class?
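
The "validator/classifier class" idea floated above could be sketched roughly as follows. This is a hypothetical illustration, not the actual Flink API: the class name `FatalExceptionClassifier`, the use of `IllegalStateException` as the wrapper, and the `Consumer<Throwable>` callback are all assumptions standing in for the sink's real fatal-exception consumer.

```java
import java.util.concurrent.CompletionException;
import java.util.function.Consumer;

// Hypothetical sketch of a shared classifier: each instance knows one fatal
// cause type, so concrete sink writers could chain a list of these instead
// of repeating long instanceof cascades in isRetryable().
public class FatalExceptionClassifier {
    private final Class<? extends Throwable> fatalCauseType;
    private final String message;

    public FatalExceptionClassifier(Class<? extends Throwable> fatalCauseType, String message) {
        this.fatalCauseType = fatalCauseType;
        this.message = message;
    }

    /** Unwraps a CompletionException, reports a fatal error if the cause matches. */
    public boolean isRetryable(Throwable err, Consumer<Throwable> fatalConsumer) {
        Throwable cause =
                (err instanceof CompletionException && err.getCause() != null)
                        ? err.getCause()
                        : err;
        if (fatalCauseType.isInstance(cause)) {
            fatalConsumer.accept(new IllegalStateException(message, cause));
            return false; // fatal, do not retry
        }
        return true; // not recognized as fatal, caller may retry or consult the next classifier
    }
}
```

A writer would then walk its list of classifiers and stop at the first one that reports the error as fatal, keeping the per-service classifiers (resource-not-found, credentials, `Error`) out of the base class.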




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18567: [FLINK-25893] Suppress error reporting for ResourceManagerServiceImpl.deregisterApplication

2022-01-31 Thread GitBox


flinkbot commented on pull request #18567:
URL: https://github.com/apache/flink/pull/18567#issuecomment-1025592946


   
   ## CI report:
   
   * ba1818d9a60b997a27b971c7a1b8dd348509d2a8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot commented on pull request #18567: [FLINK-25893] Suppress error reporting for ResourceManagerServiceImpl.deregisterApplication

2022-01-31 Thread GitBox


flinkbot commented on pull request #18567:
URL: https://github.com/apache/flink/pull/18567#issuecomment-1025594692


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit ba1818d9a60b997a27b971c7a1b8dd348509d2a8 (Mon Jan 31 
10:33:38 UTC 2022)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-25893).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied
according to the order of the review items. For consensus, approval by a Flink
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[jira] [Commented] (FLINK-25847) KubernetesHighAvailabilityRecoverFromSavepointITCase. testRecoverFromSavepoint failed on azure

2022-01-31 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17484603#comment-17484603
 ] 

Till Rohrmann commented on FLINK-25847:
---

Another instance: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=30494&view=logs&j=bea52777-eaf8-5663-8482-18fbc3630e81&t=b2642e3a-5b86-574d-4c8a-f7e2842bfb14&l=5342

> KubernetesHighAvailabilityRecoverFromSavepointITCase. 
> testRecoverFromSavepoint failed on azure
> --
>
> Key: FLINK-25847
> URL: https://issues.apache.org/jira/browse/FLINK-25847
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-01-27T06:08:57.7214748Z Jan 27 06:08:57 [INFO] Running 
> org.apache.flink.kubernetes.highavailability.KubernetesHighAvailabilityRecoverFromSavepointITCase
> 2022-01-27T06:10:23.2568324Z Jan 27 06:10:23 [ERROR] Tests run: 1, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 85.553 s <<< FAILURE! - in 
> org.apache.flink.kubernetes.highavailability.KubernetesHighAvailabilityRecoverFromSavepointITCase
> 2022-01-27T06:10:23.2572289Z Jan 27 06:10:23 [ERROR] 
> org.apache.flink.kubernetes.highavailability.KubernetesHighAvailabilityRecoverFromSavepointITCase.testRecoverFromSavepoint
>   Time elapsed: 84.078 s  <<< ERROR!
> 2022-01-27T06:10:23.2573945Z Jan 27 06:10:23 
> java.util.concurrent.TimeoutException
> 2022-01-27T06:10:23.2574625Z Jan 27 06:10:23  at 
> java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
> 2022-01-27T06:10:23.2575381Z Jan 27 06:10:23  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
> 2022-01-27T06:10:23.2576428Z Jan 27 06:10:23  at 
> org.apache.flink.kubernetes.highavailability.KubernetesHighAvailabilityRecoverFromSavepointITCase.testRecoverFromSavepoint(KubernetesHighAvailabilityRecoverFromSavepointITCase.java:104)
> 2022-01-27T06:10:23.2578437Z Jan 27 06:10:23  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-01-27T06:10:23.2579141Z Jan 27 06:10:23  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-01-27T06:10:23.2579893Z Jan 27 06:10:23  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-01-27T06:10:23.2594686Z Jan 27 06:10:23  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-01-27T06:10:23.2595622Z Jan 27 06:10:23  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-01-27T06:10:23.2596397Z Jan 27 06:10:23  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-01-27T06:10:23.2597158Z Jan 27 06:10:23  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-01-27T06:10:23.2597900Z Jan 27 06:10:23  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-01-27T06:10:23.2598630Z Jan 27 06:10:23  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-01-27T06:10:23.2599335Z Jan 27 06:10:23  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-01-27T06:10:23.2600044Z Jan 27 06:10:23  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-01-27T06:10:23.2600736Z Jan 27 06:10:23  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-01-27T06:10:23.2601408Z Jan 27 06:10:23  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-01-27T06:10:23.2602124Z Jan 27 06:10:23  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-01-27T06:10:23.2602831Z Jan 27 06:10:23  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-01-27T06:10:23.2603531Z Jan 27 06:10:23  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-01-27T06:10:23.2604270Z Jan 27 06:10:23  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-01-27T06:10:23.2604975Z Jan 27 06:10:23  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-01-27T06:10:23.2605641Z Jan 27 06:10:23  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-01-27T06:10:23.2606313Z Jan 27 06:10:23  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-01-27T06:10:23.2607713Z Jan 27 06:10:23  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-01-27T06:10:23.2608497Z Jan 27 06:10:23  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)

[GitHub] [flink] flinkbot edited a comment on pull request #18567: [FLINK-25893] Suppress error reporting for ResourceManagerServiceImpl.deregisterApplication

2022-01-31 Thread GitBox


flinkbot edited a comment on pull request #18567:
URL: https://github.com/apache/flink/pull/18567#issuecomment-1025592946


   
   ## CI report:
   
   * ba1818d9a60b997a27b971c7a1b8dd348509d2a8 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30505)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Assigned] (FLINK-25891) NoClassDefFoundError AsyncSSLPrivateKeyMethod in benchmark

2022-01-31 Thread Anton Kalashnikov (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Kalashnikov reassigned FLINK-25891:
-

Assignee: Anton Kalashnikov

> NoClassDefFoundError AsyncSSLPrivateKeyMethod in benchmark
> --
>
> Key: FLINK-25891
> URL: https://issues.apache.org/jira/browse/FLINK-25891
> Project: Flink
>  Issue Type: Bug
>  Components: Benchmarks
>Affects Versions: 1.15.0
>Reporter: Anton Kalashnikov
>Assignee: Anton Kalashnikov
>Priority: Blocker
>
> After FLINK-25016(upgrading Netty) StreamNetworkThroughputBenchmark benchmark 
> fails when it tries to connect via SSL with error:
> {noformat}
> java.lang.NoClassDefFoundError: 
> org/apache/flink/shaded/netty4/io/netty/internal/tcnative/AsyncSSLPrivateKeyMethod
>   at 
> org.apache.flink.shaded.netty4.io.netty.handler.ssl.OpenSslX509KeyManagerFactory$OpenSslKeyManagerFactorySpi.engineInit(OpenSslX509KeyManagerFactory.java:129)
>   at javax.net.ssl.KeyManagerFactory.init(KeyManagerFactory.java:256)
>   at 
> org.apache.flink.runtime.net.SSLUtils.getKeyManagerFactory(SSLUtils.java:279)
>   at 
> org.apache.flink.runtime.net.SSLUtils.createInternalNettySSLContext(SSLUtils.java:324)
>   at 
> org.apache.flink.runtime.net.SSLUtils.createInternalNettySSLContext(SSLUtils.java:303)
>   at 
> org.apache.flink.runtime.net.SSLUtils.createInternalClientSSLEngineFactory(SSLUtils.java:119)
>   at 
> org.apache.flink.runtime.io.network.netty.NettyConfig.createClientSSLEngineFactory(NettyConfig.java:147)
>   at 
> org.apache.flink.runtime.io.network.netty.NettyClient.init(NettyClient.java:115)
>   at 
> org.apache.flink.runtime.io.network.netty.NettyConnectionManager.start(NettyConnectionManager.java:64)
>   at 
> org.apache.flink.runtime.io.network.NettyShuffleEnvironment.start(NettyShuffleEnvironment.java:329)
>   at 
> org.apache.flink.streaming.runtime.io.benchmark.StreamNetworkBenchmarkEnvironment.setUp(StreamNetworkBenchmarkEnvironment.java:133)
>   at 
> org.apache.flink.streaming.runtime.io.benchmark.StreamNetworkThroughputBenchmark.setUp(StreamNetworkThroughputBenchmark.java:108)
>   at 
> org.apache.flink.benchmark.StreamNetworkThroughputBenchmarkExecutor$MultiEnvironment.setUp(StreamNetworkThroughputBenchmarkExecutor.java:117)
>   at 
> org.apache.flink.benchmark.generated.StreamNetworkThroughputBenchmarkExecutor_networkThroughput_jmhTest._jmh_tryInit_f_multienvironment1_1(StreamNetworkThroughputBenchmarkExecutor_networkThroughput_jmhTest.java:351)
>   at 
> org.apache.flink.benchmark.generated.StreamNetworkThroughputBenchmarkExecutor_networkThroughput_jmhTest.networkThroughput_Throughput(StreamNetworkThroughputBenchmarkExecutor_networkThroughput_jmhTest.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:453)
>   at 
> org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:437)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.shaded.netty4.io.netty.internal.tcnative.AsyncSSLPrivateKeyMethod
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
>   ... 27 more
> {noformat}
>  
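
The first thing to verify for a `NoClassDefFoundError` like the one above is whether the shaded tcnative class is actually on the benchmark's classpath after the Netty upgrade. A minimal, hypothetical diagnostic (plain Java, no Flink dependency; the helper class name is illustrative) could look like:

```java
// Probe the classpath for the shaded tcnative class that the benchmark
// fails to load. Class.forName with initialize=false only checks presence,
// it does not run static initializers.
public class ShadedDependencyCheck {
    public static boolean isOnClasspath(String className) {
        try {
            Class.forName(className, false, ShadedDependencyCheck.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String tcnative =
                "org.apache.flink.shaded.netty4.io.netty.internal.tcnative.AsyncSSLPrivateKeyMethod";
        if (!isOnClasspath(tcnative)) {
            // Assumption: a missing class here points at a version mismatch
            // between the shaded Netty and the shaded tcnative artifact.
            System.out.println("tcnative class missing: align the shaded tcnative version with the Netty upgrade");
        }
    }
}
```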





[jira] [Updated] (FLINK-21232) Kerberos delegation token framework

2022-01-31 Thread Gabor Somogyi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Somogyi updated FLINK-21232:
--
Summary: Kerberos delegation token framework  (was: Introduce pluggable 
Hadoop delegation token providers)

> Kerberos delegation token framework
> ---
>
> Key: FLINK-21232
> URL: https://issues.apache.org/jira/browse/FLINK-21232
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common, Deployment / Kubernetes, Deployment 
> / YARN
>Affects Versions: 1.15.0
>Reporter: jackwangcs
>Assignee: Gabor Somogyi
>Priority: Major
>  Labels: security
>
> Introduce a pluggable delegation provider via SPI. 
> The delegation token provider could be placed in connector-related code and
> is more extensible than using reflection to obtain DTs.
> Email discussion thread:
> [https://lists.apache.org/thread.html/rbedb6e769358a10c6426c4c42b3b51cdbed48a3b6537e4ebde912bc0%40%3Cdev.flink.apache.org%3E]
>  
> [https://lists.apache.org/thread.html/r20d4be431ff2f6faff94129b5321a047fcbb0c71c8e092504cd91183%40%3Cdev.flink.apache.org%3E]
>  
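
What a pluggable provider discovered via SPI could look like, as an illustrative sketch only: the interface and manager below are not the actual Flink API, and the method names (`serviceName`, `delegationTokensRequired`, `obtainDelegationTokens`) are assumptions.

```java
import java.util.ServiceLoader;

// Sketch of SPI-based provider discovery: connectors ship an implementation
// plus a META-INF/services/...DelegationTokenProvider entry in their own jar,
// so no reflection on hard-coded class names is needed.
public class DelegationTokenSpiSketch {
    public interface DelegationTokenProvider {
        String serviceName();
        boolean delegationTokensRequired();
        byte[] obtainDelegationTokens();
    }

    /** Loads all registered providers and obtains tokens where required. */
    public static int obtainAll() {
        int obtained = 0;
        for (DelegationTokenProvider provider :
                ServiceLoader.load(DelegationTokenProvider.class)) {
            if (provider.delegationTokensRequired()) {
                provider.obtainDelegationTokens(); // tokens would then be shipped to TaskManagers
                obtained++;
            }
        }
        return obtained;
    }
}
```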





[jira] [Updated] (FLINK-21232) Kerberos delegation token framework

2022-01-31 Thread Gabor Somogyi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Somogyi updated FLINK-21232:
--
Description: 
Proper delegation token handling is completely missing from Flink. This jira is 
an umbrella to track this effort.
Please see the attached FLIP and discussion threads for more details.

Email discussion threads:

[https://lists.apache.org/thread.html/rbedb6e769358a10c6426c4c42b3b51cdbed48a3b6537e4ebde912bc0%40%3Cdev.flink.apache.org%3E]
[https://lists.apache.org/thread.html/r20d4be431ff2f6faff94129b5321a047fcbb0c71c8e092504cd91183%40%3Cdev.flink.apache.org%3E]
[https://lists.apache.org/thread/cvwknd5fhohj0wfv8mfwn70jwpjvxrjj]

 

  was:
Introduce a pluggable delegation token provider via SPI.

The delegation token provider could be placed in connector-related code and is
more extensible than using reflection to obtain DTs.

Email discussion thread:

[https://lists.apache.org/thread.html/rbedb6e769358a10c6426c4c42b3b51cdbed48a3b6537e4ebde912bc0%40%3Cdev.flink.apache.org%3E]
 
[https://lists.apache.org/thread.html/r20d4be431ff2f6faff94129b5321a047fcbb0c71c8e092504cd91183%40%3Cdev.flink.apache.org%3E]

 


> Kerberos delegation token framework
> ---
>
> Key: FLINK-21232
> URL: https://issues.apache.org/jira/browse/FLINK-21232
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common, Deployment / Kubernetes, Deployment 
> / YARN
>Affects Versions: 1.15.0
>Reporter: jackwangcs
>Assignee: Gabor Somogyi
>Priority: Major
>  Labels: security
>
> Proper delegation token handling is completely missing from Flink. This jira 
> is an umbrella to track this effort.
> Please see the attached FLIP and discussion threads for more details.
> Email discussion threads:
> [https://lists.apache.org/thread.html/rbedb6e769358a10c6426c4c42b3b51cdbed48a3b6537e4ebde912bc0%40%3Cdev.flink.apache.org%3E]
> [https://lists.apache.org/thread.html/r20d4be431ff2f6faff94129b5321a047fcbb0c71c8e092504cd91183%40%3Cdev.flink.apache.org%3E]
> [https://lists.apache.org/thread/cvwknd5fhohj0wfv8mfwn70jwpjvxrjj]
>  





[GitHub] [flink-benchmarks] akalash opened a new pull request #47: [FLINK-25891] Upgraded version of flink.shaded.version and netty.tcna…

2022-01-31 Thread GitBox


akalash opened a new pull request #47:
URL: https://github.com/apache/flink-benchmarks/pull/47


   …tive.version





