[jira] [Updated] (FLINK-26637) Document Basic Concepts and Architecture
[ https://issues.apache.org/jira/browse/FLINK-26637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-26637:
-----------------------------------
    Labels: pull-request-available  (was: )

> Document Basic Concepts and Architecture
> ----------------------------------------
>
>                 Key: FLINK-26637
>                 URL: https://issues.apache.org/jira/browse/FLINK-26637
>             Project: Flink
>          Issue Type: Sub-task
>            Reporter: Matyas Orhidi
>            Assignee: Matyas Orhidi
>            Priority: Major
>              Labels: pull-request-available

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
[jira] [Commented] (FLINK-26639) Publish flink-kubernetes-operator maven artifacts
[ https://issues.apache.org/jira/browse/FLINK-26639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506735#comment-17506735 ]

Yang Wang commented on FLINK-26639:
-----------------------------------
Is this a step in "Creating a Flink Kubernetes Operator Release"?

> Publish flink-kubernetes-operator maven artifacts
> -------------------------------------------------
>
>                 Key: FLINK-26639
>                 URL: https://issues.apache.org/jira/browse/FLINK-26639
>             Project: Flink
>          Issue Type: Sub-task
>            Reporter: Thomas Weise
>            Priority: Major
>
> We should publish the Maven artifacts in addition to the Docker images so
> that downstream Java projects can utilize the CRD classes directly.
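[Editor's note] Once such artifacts are published, a downstream Java project could pull in the CRD classes with an ordinary Maven dependency. The coordinates below are illustrative only, assuming the conventional `org.apache.flink` group and an artifact named after the repository; the thread does not fix the final coordinates or version.

```xml
<!-- Hypothetical coordinates; the final groupId/artifactId/version
     were not decided in this thread. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-kubernetes-operator</artifactId>
  <version>1.0.0</version>
</dependency>
```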
[GitHub] [flink] XComp merged pull request #19075: [FLINK-26500][runtime][test] Increases the deadline to wait for parallelism
XComp merged pull request #19075:
URL: https://github.com/apache/flink/pull/19075

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[jira] [Commented] (FLINK-25233) UpsertKafkaTableITCase.testAggregate fails on AZP
[ https://issues.apache.org/jira/browse/FLINK-25233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506736#comment-17506736 ]

Yun Gao commented on FLINK-25233:
---------------------------------
I'll first move this issue to 1.16 since it seems to be related to the environment setups. We'll continue investigating this issue.

> UpsertKafkaTableITCase.testAggregate fails on AZP
> -------------------------------------------------
>
>                 Key: FLINK-25233
>                 URL: https://issues.apache.org/jira/browse/FLINK-25233
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Ecosystem
>    Affects Versions: 1.15.0
>            Reporter: Till Rohrmann
>            Priority: Critical
>              Labels: test-stability
>             Fix For: 1.15.0
>
> {{UpsertKafkaTableITCase.testAggregate}} fails on AZP with
> {code}
> 2021-12-09T01:41:49.8038402Z Dec 09 01:41:49 [ERROR] UpsertKafkaTableITCase.testAggregate  Time elapsed: 90.624 s  <<< ERROR!
> 2021-12-09T01:41:49.8039372Z Dec 09 01:41:49 java.util.concurrent.ExecutionException: org.apache.flink.table.api.TableException: Failed to wait job finish
> 2021-12-09T01:41:49.8040303Z Dec 09 01:41:49 	at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2021-12-09T01:41:49.8040956Z Dec 09 01:41:49 	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2021-12-09T01:41:49.8041862Z Dec 09 01:41:49 	at org.apache.flink.table.api.internal.TableResultImpl.awaitInternal(TableResultImpl.java:118)
> 2021-12-09T01:41:49.8042939Z Dec 09 01:41:49 	at org.apache.flink.table.api.internal.TableResultImpl.await(TableResultImpl.java:81)
> 2021-12-09T01:41:49.8044130Z Dec 09 01:41:49 	at org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaTableITCase.wordCountToUpsertKafka(UpsertKafkaTableITCase.java:436)
> 2021-12-09T01:41:49.8045308Z Dec 09 01:41:49 	at org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaTableITCase.testAggregate(UpsertKafkaTableITCase.java:79)
> 2021-12-09T01:41:49.8045940Z Dec 09 01:41:49 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-12-09T01:41:49.8052892Z Dec 09 01:41:49 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-12-09T01:41:49.8053812Z Dec 09 01:41:49 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-12-09T01:41:49.8054458Z Dec 09 01:41:49 	at java.lang.reflect.Method.invoke(Method.java:498)
> 2021-12-09T01:41:49.8055027Z Dec 09 01:41:49 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2021-12-09T01:41:49.8055649Z Dec 09 01:41:49 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-12-09T01:41:49.8056644Z Dec 09 01:41:49 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2021-12-09T01:41:49.8057911Z Dec 09 01:41:49 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-12-09T01:41:49.8058858Z Dec 09 01:41:49 	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2021-12-09T01:41:49.8059907Z Dec 09 01:41:49 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2021-12-09T01:41:49.8060871Z Dec 09 01:41:49 	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2021-12-09T01:41:49.8061847Z Dec 09 01:41:49 	at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2021-12-09T01:41:49.8062898Z Dec 09 01:41:49 	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2021-12-09T01:41:49.8063804Z Dec 09 01:41:49 	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2021-12-09T01:41:49.8064963Z Dec 09 01:41:49 	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2021-12-09T01:41:49.8065992Z Dec 09 01:41:49 	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2021-12-09T01:41:49.8066940Z Dec 09 01:41:49 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2021-12-09T01:41:49.8067939Z Dec 09 01:41:49 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2021-12-09T01:41:49.8068904Z Dec 09 01:41:49 	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2021-12-09T01:41:49.8069837Z Dec 09 01:41:49 	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2021-12-09T01:41:49.8070715Z Dec 09 01:41:49 	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2021-12-09T01:41:49.8071587Z Dec 09 01:41:49 	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2021-12-09T01:41:49.8072582Z Dec 09 01:41:49 	at org.junit.runners.ParentRunner$2
[jira] [Updated] (FLINK-25233) UpsertKafkaTableITCase.testAggregate fails on AZP
[ https://issues.apache.org/jira/browse/FLINK-25233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yun Gao updated FLINK-25233:
----------------------------
    Affects Version/s: 1.15.0
                           (was: 1.16.0)
[jira] [Updated] (FLINK-25233) UpsertKafkaTableITCase.testAggregate fails on AZP
[ https://issues.apache.org/jira/browse/FLINK-25233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yun Gao updated FLINK-25233:
----------------------------
    Affects Version/s: 1.16.0
[jira] [Updated] (FLINK-25233) UpsertKafkaTableITCase.testAggregate fails on AZP
[ https://issues.apache.org/jira/browse/FLINK-25233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yun Gao updated FLINK-25233:
----------------------------
    Fix Version/s: 1.16.0
                       (was: 1.15.0)
[jira] [Updated] (FLINK-25233) UpsertKafkaTableITCase.testAggregate fails on AZP
[ https://issues.apache.org/jira/browse/FLINK-25233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yun Gao updated FLINK-25233:
----------------------------
    Affects Version/s: 1.16.0
                           (was: 1.15.0)
[GitHub] [flink] XComp merged pull request #19088: [FLINK-26500][BP-1.15][runtime][test] Increases the deadline to wait for parallelism
XComp merged pull request #19088:
URL: https://github.com/apache/flink/pull/19088
[GitHub] [flink] XComp merged pull request #19076: [BP-1.14][FLINK-26500][runtime][test] Increases the deadline to wait for parallelism
XComp merged pull request #19076:
URL: https://github.com/apache/flink/pull/19076
[jira] [Resolved] (FLINK-26500) AdaptiveSchedulerClusterITCase.testAutomaticScaleUp failed on azure
[ https://issues.apache.org/jira/browse/FLINK-26500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matthias Pohl resolved FLINK-26500.
-----------------------------------
    Fix Version/s: 1.15.0
                   1.16.0
                   1.14.5
       Resolution: Fixed

master: 1d2f1e73e250fc5a4a11e82fa6a6e77fb2637e37
1.15: 6d4d50da7b46155d4ffdb58d4b61eb1742a98932
1.14: cf1f70ee9d61b8d6eced9c77ddd6c9e3f5e492f5

> AdaptiveSchedulerClusterITCase.testAutomaticScaleUp failed on azure
> -------------------------------------------------------------------
>
>                 Key: FLINK-26500
>                 URL: https://issues.apache.org/jira/browse/FLINK-26500
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Coordination
>    Affects Versions: 1.15.0, 1.14.4, 1.16.0
>            Reporter: Yun Gao
>            Assignee: Matthias Pohl
>            Priority: Critical
>              Labels: pull-request-available, test-stability
>             Fix For: 1.15.0, 1.16.0, 1.14.5
>
>         Attachments: test-failure.log, test-success.log
>
> {code:java}
> Mar 03 13:38:24 [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 21.854 s <<< FAILURE! - in org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase
> Mar 03 13:38:24 [ERROR] org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testAutomaticScaleUp  Time elapsed: 16.035 s  <<< ERROR!
> Mar 03 13:38:24 java.util.concurrent.TimeoutException: Condition was not met in given timeout.
> Mar 03 13:38:24 	at org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:167)
> Mar 03 13:38:24 	at org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
> Mar 03 13:38:24 	at org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:137)
> Mar 03 13:38:24 	at org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.waitUntilParallelismForVertexReached(AdaptiveSchedulerClusterITCase.java:267)
> Mar 03 13:38:24 	at org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerClusterITCase.testAutomaticScaleUp(AdaptiveSchedulerClusterITCase.java:147)
> Mar 03 13:38:24 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Mar 03 13:38:24 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Mar 03 13:38:24 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Mar 03 13:38:24 	at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 03 13:38:24 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Mar 03 13:38:24 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Mar 03 13:38:24 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Mar 03 13:38:24 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Mar 03 13:38:24 	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Mar 03 13:38:24 	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Mar 03 13:38:24 	at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Mar 03 13:38:24 	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Mar 03 13:38:24 	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Mar 03 13:38:24 	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Mar 03 13:38:24 	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Mar 03 13:38:24 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Mar 03 13:38:24 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Mar 03 13:38:24 	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Mar 03 13:38:24 	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Mar 03 13:38:24 	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Mar 03 13:38:24 	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Mar 03 13:38:24 	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Mar 03 13:38:24 	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Mar 03 13:38:24 	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Mar 03 13:38:24 	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> Mar 03 13:38:24 	at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> Mar 03 13:38:24 	at org.junit.vintage.engine.execution.RunnerExecutor.execute(Runn
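[Editor's note] The failure above boils down to polling a cluster condition (the job vertex reaching the expected parallelism) until a deadline expires, and the merged fix simply gives that deadline more headroom. A simplified sketch of the poll-with-deadline pattern behind `CommonTestUtils.waitUntilCondition` follows; the class and method names here are illustrative, not Flink's actual test utilities.

```java
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

public class ConditionWaiter {

    /**
     * Polls {@code condition} every {@code pollIntervalMillis} until it holds,
     * throwing if {@code timeoutMillis} elapses first. A too-tight timeout on a
     * loaded CI machine produces exactly the TimeoutException seen above.
     */
    public static void waitUntilCondition(
            BooleanSupplier condition, long timeoutMillis, long pollIntervalMillis)
            throws InterruptedException, TimeoutException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                throw new TimeoutException("Condition was not met in given timeout.");
            }
            Thread.sleep(pollIntervalMillis);
        }
    }
}
```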
[GitHub] [flink] flinkbot edited a comment on pull request #18386: [FLINK-25684][table] Support enhanced `show databases` syntax
flinkbot edited a comment on pull request #18386:
URL: https://github.com/apache/flink/pull/18386#issuecomment-1015100174

## CI report:

* afdaf43f43e70634a9df988f9a141144fb9b0918 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33062)
* fd8d787ed420aea973486d1e98af4670e266f30f Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33069)
* c53142c8b103fd6da55bb95d600d59a01e1f3198 UNKNOWN

Bot commands
  The @flinkbot bot supports the following commands:
  - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] twalthr closed pull request #18980: [FLINK-26421] Use only EnvironmentSettings to configure the environment
twalthr closed pull request #18980: URL: https://github.com/apache/flink/pull/18980
[jira] [Commented] (FLINK-26640) Consider changing flinkVersion to enum type or removing it completely
[ https://issues.apache.org/jira/browse/FLINK-26640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506742#comment-17506742 ] Yang Wang commented on FLINK-26640: --- Given that we cannot guarantee the internal interfaces in the upstream project never change, I think we could make flinkVersion an enum type. And we need to run all the e2e tests with the supported Flink versions. The major.minor schema also makes sense to me. > Consider changing flinkVersion to enum type or removing it completely > - > > Key: FLINK-26640 > URL: https://issues.apache.org/jira/browse/FLINK-26640 > Project: Flink > Issue Type: Sub-task >Reporter: Gyula Fora >Priority: Major > > Currently the flinkVersion is a string field that we do not use anywhere. > There might be some cases in the future where knowing the flink version might > be valuable but an optional string field will not work. > We should either make this a required enum of the supported flink versions (I > suggest only major.minor -> 1.14, 1.15...) or remove it completely for now. > -- This message was sent by Atlassian Jira (v8.20.1#820001)
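The required-enum option discussed above could be sketched as follows. This is a minimal illustration, not the actual operator CRD code; the type name `FlinkVersion`, its values, and the `fromString` helper are assumptions:

```java
// Hypothetical sketch of a major.minor version enum for the CRD spec.
// Names are illustrative, not the real flink-kubernetes-operator API.
enum FlinkVersion {
    V1_13("1.13"),
    V1_14("1.14"),
    V1_15("1.15");

    private final String versionString;

    FlinkVersion(String versionString) {
        this.versionString = versionString;
    }

    /** Parses "1.14" (or a patch version like "1.14.4" -> "1.14") into an enum value, if supported. */
    static java.util.Optional<FlinkVersion> fromString(String raw) {
        String majorMinor = raw.matches("\\d+\\.\\d+\\.\\d+")
                ? raw.substring(0, raw.lastIndexOf('.'))
                : raw;
        for (FlinkVersion v : values()) {
            if (v.versionString.equals(majorMinor)) {
                return java.util.Optional.of(v);
            }
        }
        return java.util.Optional.empty();
    }

    @Override
    public String toString() {
        return versionString;
    }
}
```

A required enum along these lines would make an unsupported version fail validation up front, instead of being silently carried along as an unused string field.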
[GitHub] [flink] flinkbot edited a comment on pull request #18386: [FLINK-25684][table] Support enhanced `show databases` syntax
flinkbot edited a comment on pull request #18386: URL: https://github.com/apache/flink/pull/18386#issuecomment-1015100174 ## CI report: * fd8d787ed420aea973486d1e98af4670e266f30f Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33069) * c53142c8b103fd6da55bb95d600d59a01e1f3198 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33077)
[jira] [Created] (FLINK-26645) Pulsar Source subscribe to a single topic partition will consume all partitions from that topic
Yufei Zhang created FLINK-26645: --- Summary: Pulsar Source subscribe to a single topic partition will consume all partitions from that topic Key: FLINK-26645 URL: https://issues.apache.org/jira/browse/FLINK-26645 Project: Flink Issue Type: Bug Components: Connectors / Pulsar Affects Versions: 1.14.4, 1.15.0 Reporter: Yufei Zhang Fix For: 1.15.0, 1.14.4 Say users subscribe to 4 partitions of a topic with 16 partitions; the current Pulsar source will actually consume from all 16 partitions. It is expected to consume from the 4 specified partitions only.
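The expected behavior can be illustrated with a self-contained sketch. This is plain Java, not the actual Pulsar connector code; the class and method names are hypothetical, and only the Pulsar convention that the i-th partition of a topic is named `<topic>-partition-<i>` is assumed:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for the subscriber logic discussed in FLINK-26645.
// Pulsar names the i-th partition of a topic "<topic>-partition-<i>".
class PartitionSubscriptionSketch {

    /** Returns only the partitions the user explicitly asked for. */
    static List<String> subscribedPartitions(List<String> requested, List<String> existing) {
        List<String> result = new ArrayList<>();
        for (String partition : requested) {
            // The buggy behavior effectively expands each requested partition to
            // every partition of its parent topic; the expected behavior keeps
            // exact partition matches only.
            if (existing.contains(partition)) {
                result.add(partition);
            }
        }
        return result;
    }
}
```

With 16 existing partitions and 4 requested, this returns exactly the 4 requested entries, which is the behavior the issue asks for.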
[GitHub] [flink] flinkbot edited a comment on pull request #19077: drop flink-sql-parser-hive module
flinkbot edited a comment on pull request #19077: URL: https://github.com/apache/flink/pull/19077#issuecomment-1066739670 ## CI report: * f4b532a0eef9aa597cc994446822a25f5ad158c9 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33059)
[jira] [Created] (FLINK-26646) Flink kubernetes operator helm template is broken
Yang Wang created FLINK-26646: - Summary: Flink kubernetes operator helm template is broken Key: FLINK-26646 URL: https://issues.apache.org/jira/browse/FLINK-26646 Project: Flink Issue Type: Bug Components: Kubernetes Operator Reporter: Yang Wang {code:java} wangyang-pc:flink-kubernetes-operator danrtsey.wy$ helm install flink-operator helm/flink-operator --set image.repository=wangyang09180523/flink-java-operator --set image.tag=latest --set metrics.port= Error: template: flink-operator/templates/flink-operator.yaml:143:12: executing "flink-operator/templates/flink-operator.yaml" at : error calling eq: incompatible types for comparison {code}
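The quoted error comes from Go templates' `eq`, which fails with "incompatible types for comparison" when its operands have different types, e.g. an integer default from values.yaml compared against the empty string produced by `--set metrics.port=`. The failing expression itself is stripped from the quote above, so the following is only a hypothetical illustration of the usual guard, coercing the value to a string with Sprig's `toString` before comparing:

```yaml
# Hypothetical sketch, not the actual flink-operator template.
# Coercing to string keeps 'eq' from comparing an int against a string.
{{- if eq (.Values.metrics.port | toString) "" }}
# metrics disabled
{{- end }}
```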
[jira] [Commented] (FLINK-26220) KafkaSourceITCase$KafkaSpecificTests.testTimestamp fails on Azure Pipelines
[ https://issues.apache.org/jira/browse/FLINK-26220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506744#comment-17506744 ] Matthias Pohl commented on FLINK-26220: --- This one also failed on a 1.14 backport: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=33030&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=15a22db7-8faa-5b34-3920-d33c9f0ca23c&l=7687 I'll update the affected versions accordingly. > KafkaSourceITCase$KafkaSpecificTests.testTimestamp fails on Azure Pipelines > --- > > Key: FLINK-26220 > URL: https://issues.apache.org/jira/browse/FLINK-26220 > Project: Flink > Issue Type: Technical Debt > Components: Connectors / Kafka >Affects Versions: 1.15.0 >Reporter: Alexander Preuss >Priority: Major > > {code:java} > Feb 16 18:10:37 [ERROR] Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, > Time elapsed: 60.963 s <<< FAILURE! - in > org.apache.flink.connector.kafka.source.KafkaSourceITCase$KafkaSpecificTests > Feb 16 18:10:37 [ERROR] > org.apache.flink.connector.kafka.source.KafkaSourceITCase$KafkaSpecificTests.testTimestamp(boolean)[1] > Time elapsed: 11.21 s <<< FAILURE! > Feb 16 18:10:37 java.lang.AssertionError: Create test topic : > testTimestamp-3028462271882246016 failed, The topic metadata failed to > propagate to Kafka broker. 
> Feb 16 18:10:37 at org.junit.Assert.fail(Assert.java:89) > Feb 16 18:10:37 at > org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:223) > Feb 16 18:10:37 at > org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:98) > Feb 16 18:10:37 at > org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:216) > Feb 16 18:10:37 at > org.apache.flink.connector.kafka.source.KafkaSourceITCase$KafkaSpecificTests.testTimestamp(KafkaSourceITCase.java:108) > Feb 16 18:10:37 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Feb 16 18:10:37 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Feb 16 18:10:37 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Feb 16 18:10:37 at java.lang.reflect.Method.invoke(Method.java:498) > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31686&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=15a22db7-8faa-5b34-3920-d33c9f0ca23c&l=35870 -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (FLINK-26220) KafkaSourceITCase$KafkaSpecificTests.testTimestamp fails on Azure Pipelines
[ https://issues.apache.org/jira/browse/FLINK-26220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias Pohl updated FLINK-26220: -- Affects Version/s: 1.14.4 > KafkaSourceITCase$KafkaSpecificTests.testTimestamp fails on Azure Pipelines > --- > > Key: FLINK-26220 > URL: https://issues.apache.org/jira/browse/FLINK-26220 > Project: Flink > Issue Type: Technical Debt > Components: Connectors / Kafka >Affects Versions: 1.15.0, 1.14.4 >Reporter: Alexander Preuss >Priority: Major > > {code:java} > Feb 16 18:10:37 [ERROR] Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, > Time elapsed: 60.963 s <<< FAILURE! - in > org.apache.flink.connector.kafka.source.KafkaSourceITCase$KafkaSpecificTests > Feb 16 18:10:37 [ERROR] > org.apache.flink.connector.kafka.source.KafkaSourceITCase$KafkaSpecificTests.testTimestamp(boolean)[1] > Time elapsed: 11.21 s <<< FAILURE! > Feb 16 18:10:37 java.lang.AssertionError: Create test topic : > testTimestamp-3028462271882246016 failed, The topic metadata failed to > propagate to Kafka broker. 
> Feb 16 18:10:37 at org.junit.Assert.fail(Assert.java:89) > Feb 16 18:10:37 at > org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:223) > Feb 16 18:10:37 at > org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:98) > Feb 16 18:10:37 at > org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:216) > Feb 16 18:10:37 at > org.apache.flink.connector.kafka.source.KafkaSourceITCase$KafkaSpecificTests.testTimestamp(KafkaSourceITCase.java:108) > Feb 16 18:10:37 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Feb 16 18:10:37 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Feb 16 18:10:37 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Feb 16 18:10:37 at java.lang.reflect.Method.invoke(Method.java:498) > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31686&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=15a22db7-8faa-5b34-3920-d33c9f0ca23c&l=35870 -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (FLINK-26646) Flink kubernetes operator helm template is broken
[ https://issues.apache.org/jira/browse/FLINK-26646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506745#comment-17506745 ] Yang Wang commented on FLINK-26646: --- cc [~gyfora] > Flink kubernetes operator helm template is broken > - > > Key: FLINK-26646 > URL: https://issues.apache.org/jira/browse/FLINK-26646 > Project: Flink > Issue Type: Bug > Components: Kubernetes Operator >Reporter: Yang Wang >Priority: Major > > {code:java} > wangyang-pc:flink-kubernetes-operator danrtsey.wy$ helm install > flink-operator helm/flink-operator --set > image.repository=wangyang09180523/flink-java-operator --set image.tag=latest > --set metrics.port= > Error: template: flink-operator/templates/flink-operator.yaml:143:12: > executing "flink-operator/templates/flink-operator.yaml" at (.Values.operatorConfiguration).append false>: error calling eq: incompatible > types for comparison {code}
[GitHub] [flink] flinkbot edited a comment on pull request #19091: [BP-1.15][docs] Update wrong links in the datastream/execution_mode.md page.
flinkbot edited a comment on pull request #19091: URL: https://github.com/apache/flink/pull/19091#issuecomment-1067583598 ## CI report: * f55e9d61327da4033c0f45b13cbdfbe7a84417fa Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33071)
[jira] [Comment Edited] (FLINK-26646) Flink kubernetes operator helm template is broken
[ https://issues.apache.org/jira/browse/FLINK-26646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506745#comment-17506745 ] Yang Wang edited comment on FLINK-26646 at 3/15/22, 7:28 AM: - cc [~gyfora] I am not sure whether it is related to my Helm version (v3.6.3). was (Author: fly_in_gis): cc [~gyfora] > Flink kubernetes operator helm template is broken > - > > Key: FLINK-26646 > URL: https://issues.apache.org/jira/browse/FLINK-26646 > Project: Flink > Issue Type: Bug > Components: Kubernetes Operator >Reporter: Yang Wang >Priority: Major > > {code:java} > wangyang-pc:flink-kubernetes-operator danrtsey.wy$ helm install > flink-operator helm/flink-operator --set > image.repository=wangyang09180523/flink-java-operator --set image.tag=latest > --set metrics.port= > Error: template: flink-operator/templates/flink-operator.yaml:143:12: > executing "flink-operator/templates/flink-operator.yaml" at (.Values.operatorConfiguration).append false>: error calling eq: incompatible > types for comparison {code}
[GitHub] [flink] imaffe opened a new pull request #19092: WIP: [FLINK-26645][Connector/Pulsar] Fix Pulsar source subscriber consume from all partitions when only subscribed to 1 partition
imaffe opened a new pull request #19092: URL: https://github.com/apache/flink/pull/19092 ## What is the purpose of the change Fix FLINK-26645: when users specify consuming from only 1 partition, the source consumes from all partitions of that topic. ## Brief change log - changed the TopicListSubscriber implementation. - added a test case to validate the fix. ## Verifying this change This change added tests and can be verified as follows: - run PulsarSubscriberTest#topicPartitionSubscribe() ## Does this pull request potentially affect one of the following parts: No ## Documentation No
[jira] [Commented] (FLINK-25188) Cannot install PyFlink on MacOS with M1 chip
[ https://issues.apache.org/jira/browse/FLINK-25188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506747#comment-17506747 ] xuxiang commented on FLINK-25188: - We need to run PyFlink on a Linux ARM server; looking forward to this too > Cannot install PyFlink on MacOS with M1 chip > > > Key: FLINK-25188 > URL: https://issues.apache.org/jira/browse/FLINK-25188 > Project: Flink > Issue Type: Improvement > Components: API / Python >Affects Versions: 1.14.0 >Reporter: Luning Wang >Priority: Major > Labels: pull-request-available > Attachments: image-2022-01-04-11-36-20-090.png > > > Need to update dependencies: numpy>=1.20.3, pyarrow>=5.0.0, pandas>=1.3.0, apache-beam==2.36.0 > The following is some information on the dependency versions that support the M1 chip > Numpy version: > [https://stackoverflow.com/questions/65336789/numpy-build-fail-in-m1-big-sur-11-1] > [https://github.com/numpy/numpy/releases/tag/v1.21.4] > pyarrow version: > [https://stackoverflow.com/questions/68385728/installing-pyarrow-cant-copy-build-lib-macosx-11-arm64-3-9-pyarrow-include-ar] > pandas version: > [https://github.com/pandas-dev/pandas/issues/40611#issuecomment-901569655] > Apache beam: > https://issues.apache.org/jira/browse/BEAM-12957 > https://issues.apache.org/jira/browse/BEAM-11703 > The following is the dependency tree after a successful install > Although Beam needs numpy<1.21.0 and M1 needs numpy>=1.21.4, installing with numpy 1.20.3 succeeded on the M1 chip. 
> {code:java} > apache-flink==1.14.dev0 > - apache-beam [required: ==2.34.0, installed: 2.34.0] > - avro-python3 [required: >=1.8.1,<1.10.0,!=1.9.2, installed: 1.9.2.1] > - crcmod [required: >=1.7,<2.0, installed: 1.7] > - dill [required: >=0.3.1.1,<0.3.2, installed: 0.3.1.1] > - fastavro [required: >=0.21.4,<2, installed: 0.23.6] > - pytz [required: Any, installed: 2021.3] > - future [required: >=0.18.2,<1.0.0, installed: 0.18.2] > - grpcio [required: >=1.29.0,<2, installed: 1.42.0] > - six [required: >=1.5.2, installed: 1.16.0] > - hdfs [required: >=2.1.0,<3.0.0, installed: 2.6.0] > - docopt [required: Any, installed: 0.6.2] > - requests [required: >=2.7.0, installed: 2.26.0] > - certifi [required: >=2017.4.17, installed: 2021.10.8] > - charset-normalizer [required: ~=2.0.0, installed: 2.0.9] > - idna [required: >=2.5,<4, installed: 3.3] > - urllib3 [required: >=1.21.1,<1.27, installed: 1.26.7] > - six [required: >=1.9.0, installed: 1.16.0] > - httplib2 [required: >=0.8,<0.20.0, installed: 0.19.1] > - pyparsing [required: >=2.4.2,<3, installed: 2.4.7] > - numpy [required: >=1.14.3,<1.21.0, installed: 1.20.3] > - oauth2client [required: >=2.0.1,<5, installed: 4.1.3] > - httplib2 [required: >=0.9.1, installed: 0.19.1] > - pyparsing [required: >=2.4.2,<3, installed: 2.4.7] > - pyasn1 [required: >=0.1.7, installed: 0.4.8] > - pyasn1-modules [required: >=0.0.5, installed: 0.2.8] > - pyasn1 [required: >=0.4.6,<0.5.0, installed: 0.4.8] > - rsa [required: >=3.1.4, installed: 4.8] > - pyasn1 [required: >=0.1.3, installed: 0.4.8] > - six [required: >=1.6.1, installed: 1.16.0] > - orjson [required: <4.0, installed: 3.6.5] > - protobuf [required: >=3.12.2,<4, installed: 3.17.3] > - six [required: >=1.9, installed: 1.16.0] > - pyarrow [required: >=0.15.1,<6.0.0, installed: 5.0.0] > - numpy [required: >=1.16.6, installed: 1.20.3] > - pydot [required: >=1.2.0,<2, installed: 1.4.2] > - pyparsing [required: >=2.1.4, installed: 2.4.7] > - pymongo [required: >=3.8.0,<4.0.0, 
installed: 3.12.2] > - python-dateutil [required: >=2.8.0,<3, installed: 2.8.0] > - six [required: >=1.5, installed: 1.16.0] > - pytz [required: >=2018.3, installed: 2021.3] > - requests [required: >=2.24.0,<3.0.0, installed: 2.26.0] > - certifi [required: >=2017.4.17, installed: 2021.10.8] > - charset-normalizer [required: ~=2.0.0, installed: 2.0.9] > - idna [required: >=2.5,<4, installed: 3.3] > - urllib3 [required: >=1.21.1,<1.27, installed: 1.26.7] > - typing-extensions [required: >=3.7.0,<4, installed: 3.10.0.2] > - apache-flink-libraries [required: ==1.14.dev0, installed: 1.14.dev0] > - avro-python3 [required: >=1.8.1,<1.10.0,!=1.9.2, installed: 1.9.2.1] > - cloudpickle [required: ==1.2.2, installed: 1.2.2] > - fastavro [required: >=0.21.4,<0.24, installed: 0.23.6] > - pytz [required: Any, installed: 2021.3] > - numpy [required: >=1.20.3, installed: 1.20.3] > - pandas [required: >=1.3.0, installed: 1.3.0] > - numpy [required: >=1.17.3, installed: 1.20.3] > - python-dateutil [required: >=2.7.3, installed: 2.8.0] >
[jira] [Updated] (FLINK-26645) Pulsar Source subscribe to a single topic partition will consume all partitions from that topic
[ https://issues.apache.org/jira/browse/FLINK-26645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-26645: --- Labels: pull-request-available (was: ) > Pulsar Source subscribe to a single topic partition will consume all > partitions from that topic > > > Key: FLINK-26645 > URL: https://issues.apache.org/jira/browse/FLINK-26645 > Project: Flink > Issue Type: Bug > Components: Connectors / Pulsar >Affects Versions: 1.15.0, 1.14.4 >Reporter: Yufei Zhang >Priority: Major > Labels: pull-request-available > Fix For: 1.15.0, 1.14.4 > > > Say users subscribe to 4 partitions of a topic with 16 partitions, current > Pulsar source > will actually consume from all 16 partitions. Expect to consume from 4 > partitions only.
[jira] [Commented] (FLINK-26568) BlockingShuffleITCase.testDeletePartitionFileOfBoundedBlockingShuffle timing out on Azure
[ https://issues.apache.org/jira/browse/FLINK-26568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506748#comment-17506748 ] Yingjie Cao commented on FLINK-26568: - I cannot find any problem in the stack trace; I will try to reproduce the issue locally. > BlockingShuffleITCase.testDeletePartitionFileOfBoundedBlockingShuffle timing > out on Azure > - > > Key: FLINK-26568 > URL: https://issues.apache.org/jira/browse/FLINK-26568 > Project: Flink > Issue Type: Bug > Components: Runtime / Network, Runtime / Task >Affects Versions: 1.15.0 >Reporter: Matthias Pohl >Priority: Critical > Labels: test-stability > Fix For: 1.15.0 > > > [This > build|https://dev.azure.com/mapohl/flink/_build/results?buildId=845&view=logs&j=0a15d512-44ac-5ba5-97ab-13a5d066c22c&t=9a028d19-6c4b-5a4e-d378-03fca149d0b1&l=12865] > timed out due to the test > {{BlockingShuffleITCase.testDeletePartitionFileOfBoundedBlockingShuffle}} not > finishing.
[GitHub] [flink] paul8263 closed pull request #16109: [hotfix][connector-kafka] Fixed typo of the method name constructKafk…
paul8263 closed pull request #16109: URL: https://github.com/apache/flink/pull/16109
[jira] [Commented] (FLINK-26621) flink-tests failed on azure due to Error occurred in starting fork
[ https://issues.apache.org/jira/browse/FLINK-26621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506749#comment-17506749 ] Matthias Pohl commented on FLINK-26621: --- The same error was observed on a 1.14 backport: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=33030&view=logs&j=a57e0635-3fad-5b08-57c7-a4142d7d6fa9&t=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7&l=11052 I'm going to update the affected versions accordingly > flink-tests failed on azure due to Error occurred in starting fork > -- > > Key: FLINK-26621 > URL: https://issues.apache.org/jira/browse/FLINK-26621 > Project: Flink > Issue Type: Bug > Components: Build System / Azure Pipelines >Affects Versions: 1.15.0 >Reporter: Yun Gao >Priority: Major > Labels: test-stability > > {code:java} > 2022-03-11T16:20:12.6929558Z Mar 11 16:20:12 [WARNING] The requested profile > "skip-webui-build" could not be activated because it does not exist. > 2022-03-11T16:20:12.6939269Z Mar 11 16:20:12 [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test > (integration-tests) on project flink-tests: There are test failures. > 2022-03-11T16:20:12.6940062Z Mar 11 16:20:12 [ERROR] > 2022-03-11T16:20:12.6940954Z Mar 11 16:20:12 [ERROR] Please refer to > /__w/2/s/flink-tests/target/surefire-reports for the individual test results. > 2022-03-11T16:20:12.6941875Z Mar 11 16:20:12 [ERROR] Please refer to dump > files (if any exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream. 
> 2022-03-11T16:20:12.6942966Z Mar 11 16:20:12 [ERROR] ExecutionException Error > occurred in starting fork, check output in log > 2022-03-11T16:20:12.6943919Z Mar 11 16:20:12 [ERROR] > org.apache.maven.surefire.booter.SurefireBooterForkException: > ExecutionException Error occurred in starting fork, check output in log > 2022-03-11T16:20:12.6945023Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:532) > 2022-03-11T16:20:12.6945878Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:479) > 2022-03-11T16:20:12.6946761Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:322) > 2022-03-11T16:20:12.6947532Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:266) > 2022-03-11T16:20:12.6953051Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1314) > 2022-03-11T16:20:12.6954035Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1159) > 2022-03-11T16:20:12.6954917Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:932) > 2022-03-11T16:20:12.6955749Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132) > 2022-03-11T16:20:12.6956542Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208) > 2022-03-11T16:20:12.6957456Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) > 2022-03-11T16:20:12.6958232Z Mar 11 16:20:12 [ERROR] at > 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) > 2022-03-11T16:20:12.6959038Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116) > 2022-03-11T16:20:12.6960553Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80) > 2022-03-11T16:20:12.6962116Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51) > 2022-03-11T16:20:12.6963009Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120) > 2022-03-11T16:20:12.6963737Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:355) > 2022-03-11T16:20:12.6964644Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155) > 2022-03-11T16:20:12.6965647Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.cli.MavenCli.execute(MavenCli.java:584) > 2022-03-11T16:20:12.6966732Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.cli.MavenCli.doMain(MavenCli.java:216) > 2022-03-11T16:20:12.6967818Z Mar 11 16:20:12 [ERROR] at > org.apache.m
[jira] [Updated] (FLINK-26621) flink-tests failed on azure due to Error occurred in starting fork
[ https://issues.apache.org/jira/browse/FLINK-26621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias Pohl updated FLINK-26621: -- Affects Version/s: 1.14.4 > flink-tests failed on azure due to Error occurred in starting fork > -- > > Key: FLINK-26621 > URL: https://issues.apache.org/jira/browse/FLINK-26621 > Project: Flink > Issue Type: Bug > Components: Build System / Azure Pipelines >Affects Versions: 1.15.0, 1.14.4 >Reporter: Yun Gao >Priority: Major > Labels: test-stability > > {code:java} > 2022-03-11T16:20:12.6929558Z Mar 11 16:20:12 [WARNING] The requested profile > "skip-webui-build" could not be activated because it does not exist. > 2022-03-11T16:20:12.6939269Z Mar 11 16:20:12 [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test > (integration-tests) on project flink-tests: There are test failures. > 2022-03-11T16:20:12.6940062Z Mar 11 16:20:12 [ERROR] > 2022-03-11T16:20:12.6940954Z Mar 11 16:20:12 [ERROR] Please refer to > /__w/2/s/flink-tests/target/surefire-reports for the individual test results. > 2022-03-11T16:20:12.6941875Z Mar 11 16:20:12 [ERROR] Please refer to dump > files (if any exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream. 
> 2022-03-11T16:20:12.6942966Z Mar 11 16:20:12 [ERROR] ExecutionException Error > occurred in starting fork, check output in log > 2022-03-11T16:20:12.6943919Z Mar 11 16:20:12 [ERROR] > org.apache.maven.surefire.booter.SurefireBooterForkException: > ExecutionException Error occurred in starting fork, check output in log > 2022-03-11T16:20:12.6945023Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:532) > 2022-03-11T16:20:12.6945878Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:479) > 2022-03-11T16:20:12.6946761Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:322) > 2022-03-11T16:20:12.6947532Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:266) > 2022-03-11T16:20:12.6953051Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1314) > 2022-03-11T16:20:12.6954035Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1159) > 2022-03-11T16:20:12.6954917Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:932) > 2022-03-11T16:20:12.6955749Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132) > 2022-03-11T16:20:12.6956542Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208) > 2022-03-11T16:20:12.6957456Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) > 2022-03-11T16:20:12.6958232Z Mar 11 16:20:12 [ERROR] at > 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) > 2022-03-11T16:20:12.6959038Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116) > 2022-03-11T16:20:12.6960553Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80) > 2022-03-11T16:20:12.6962116Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51) > 2022-03-11T16:20:12.6963009Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120) > 2022-03-11T16:20:12.6963737Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:355) > 2022-03-11T16:20:12.6964644Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155) > 2022-03-11T16:20:12.6965647Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.cli.MavenCli.execute(MavenCli.java:584) > 2022-03-11T16:20:12.6966732Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.cli.MavenCli.doMain(MavenCli.java:216) > 2022-03-11T16:20:12.6967818Z Mar 11 16:20:12 [ERROR] at > org.apache.maven.cli.MavenCli.main(MavenCli.java:160) > 2022-03-11T16:20:12.6968857Z Mar 11 16:20:12 [ERROR] at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2022-03-11T16:20:12.6969986Z Mar 11 16:20:12 [ERROR] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java
[GitHub] [flink] XComp commented on pull request #19082: [FLINK-26596][BP-1.14][runtime][test] Adds leadership loss handling
XComp commented on pull request #19082: URL: https://github.com/apache/flink/pull/19082#issuecomment-1067659895 Errors in CI are unrelated: * FLINK-26220 * FLINK-26621
[GitHub] [flink] XComp merged pull request #19080: [FLINK-26121][BP-1.14][runtime] Adds clearUnhandledEvents() method
XComp merged pull request #19080: URL: https://github.com/apache/flink/pull/19080
[jira] [Commented] (FLINK-26646) Flink kubernetes operator helm template is broken
[ https://issues.apache.org/jira/browse/FLINK-26646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506750#comment-17506750 ] Gyula Fora commented on FLINK-26646: when do you get this error? > Flink kubernetes operator helm template is broken > - > > Key: FLINK-26646 > URL: https://issues.apache.org/jira/browse/FLINK-26646 > Project: Flink > Issue Type: Bug > Components: Kubernetes Operator >Reporter: Yang Wang >Priority: Major > > {code:java} > wangyang-pc:flink-kubernetes-operator danrtsey.wy$ helm install > flink-operator helm/flink-operator --set > image.repository=wangyang09180523/flink-java-operator --set image.tag=latest > --set metrics.port= > Error: template: flink-operator/templates/flink-operator.yaml:143:12: > executing "flink-operator/templates/flink-operator.yaml" at (.Values.operatorConfiguration).append false>: error calling eq: incompatible > types for comparison {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] flinkbot commented on pull request #19092: WIP: [FLINK-26645][Connector/Pulsar] Fix Pulsar source subscriber consume from all partitions when only subscribed to 1 partition
flinkbot commented on pull request #19092: URL: https://github.com/apache/flink/pull/19092#issuecomment-1067660495 ## CI report: * 7294f936a7361d9d3e5fdb2ddb87bd323a97ed2b UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[jira] [Updated] (FLINK-26121) ZooKeeperLeaderRetrievalConnectionHandlingTest.testNewLeaderAfterReconnectTriggersListenerNotification failed on azure
[ https://issues.apache.org/jira/browse/FLINK-26121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias Pohl updated FLINK-26121: -- Affects Version/s: 1.14.4 > ZooKeeperLeaderRetrievalConnectionHandlingTest.testNewLeaderAfterReconnectTriggersListenerNotification > failed on azure > -- > > Key: FLINK-26121 > URL: https://issues.apache.org/jira/browse/FLINK-26121 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.15.0, 1.14.4 >Reporter: Yun Gao >Assignee: Matthias Pohl >Priority: Major > Labels: pull-request-available, test-stability > Fix For: 1.15.0 > > > {code:java} > 2022-02-11T21:43:35.4936452Z Feb 11 21:43:35 java.lang.AssertionError: The > TestingFatalErrorHandler caught an exception. > 2022-02-11T21:43:35.4940444Z Feb 11 21:43:35 at > org.apache.flink.runtime.util.TestingFatalErrorHandlerResource.after(TestingFatalErrorHandlerResource.java:81) > 2022-02-11T21:43:35.4941937Z Feb 11 21:43:35 at > org.apache.flink.runtime.util.TestingFatalErrorHandlerResource.access$300(TestingFatalErrorHandlerResource.java:36) > 2022-02-11T21:43:35.4943249Z Feb 11 21:43:35 at > org.apache.flink.runtime.util.TestingFatalErrorHandlerResource$1.evaluate(TestingFatalErrorHandlerResource.java:60) > 2022-02-11T21:43:35.4944745Z Feb 11 21:43:35 at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > 2022-02-11T21:43:35.4945682Z Feb 11 21:43:35 at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) > 2022-02-11T21:43:35.4946655Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > 2022-02-11T21:43:35.4947847Z Feb 11 21:43:35 at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > 2022-02-11T21:43:35.4948876Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > 2022-02-11T21:43:35.4949842Z Feb 11 21:43:35 at > 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > 2022-02-11T21:43:35.4951142Z Feb 11 21:43:35 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > 2022-02-11T21:43:35.4952153Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > 2022-02-11T21:43:35.4953115Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > 2022-02-11T21:43:35.4954068Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > 2022-02-11T21:43:35.4955003Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > 2022-02-11T21:43:35.4955981Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > 2022-02-11T21:43:35.4956930Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > 2022-02-11T21:43:35.4958008Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner.run(ParentRunner.java:413) > 2022-02-11T21:43:35.4958899Z Feb 11 21:43:35 at > org.junit.runner.JUnitCore.run(JUnitCore.java:137) > 2022-02-11T21:43:35.4959774Z Feb 11 21:43:35 at > org.junit.runner.JUnitCore.run(JUnitCore.java:115) > 2022-02-11T21:43:35.4960911Z Feb 11 21:43:35 at > org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42) > 2022-02-11T21:43:35.4962095Z Feb 11 21:43:35 at > org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80) > 2022-02-11T21:43:35.4963136Z Feb 11 21:43:35 at > org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72) > 2022-02-11T21:43:35.4964275Z Feb 11 21:43:35 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107) > 2022-02-11T21:43:35.4965527Z Feb 11 21:43:35 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) > 2022-02-11T21:43:35.4966787Z Feb 11 21:43:35 
at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) > 2022-02-11T21:43:35.4968228Z Feb 11 21:43:35 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) > 2022-02-11T21:43:35.4969485Z Feb 11 21:43:35 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) > 2022-02-11T21:43:35.4970753Z Feb 11 21:43:35 at > org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114) > 2022-02-11T21:43:35.4971842Z Feb 11 21:43:35 at > org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86) > 202
[jira] [Commented] (FLINK-26646) Flink kubernetes operator helm template is broken
[ https://issues.apache.org/jira/browse/FLINK-26646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506753#comment-17506753 ] Yang Wang commented on FLINK-26646: --- If you run the same command, I think you can reproduce it. It is strange that the CI passed. {code:java} helm install flink-operator helm/flink-operator --set image.repository=wangyang09180523/flink-java-operator --set image.tag=latest --set metrics.port= {code} > Flink kubernetes operator helm template is broken > - > > Key: FLINK-26646 > URL: https://issues.apache.org/jira/browse/FLINK-26646 > Project: Flink > Issue Type: Bug > Components: Kubernetes Operator >Reporter: Yang Wang >Priority: Major > > {code:java} > wangyang-pc:flink-kubernetes-operator danrtsey.wy$ helm install > flink-operator helm/flink-operator --set > image.repository=wangyang09180523/flink-java-operator --set image.tag=latest > --set metrics.port= > Error: template: flink-operator/templates/flink-operator.yaml:143:12: > executing "flink-operator/templates/flink-operator.yaml" at (.Values.operatorConfiguration).append false>: error calling eq: incompatible > types for comparison {code}
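The "incompatible types for comparison" error typically arises when a Helm template compares an empty (nil) value, as produced by `--set metrics.port=`, against a typed value. The actual contents of `flink-operator.yaml` are not quoted in this thread, so the fragment below is only a hypothetical sketch of the failure mode and one possible guard; the `metrics.port` comparison and the literal `9999` are assumptions, not the chart's real template.

```
# Hypothetical Helm template fragment (NOT the actual flink-operator.yaml).
# `--set metrics.port=` yields a nil value, so a bare comparison such as
#   {{ if eq .Values.metrics.port 9999 }}
# can fail with "error calling eq: incompatible types for comparison".
# Checking for emptiness first (or normalizing with Sprig's `int`) avoids
# comparing nil against an int:
{{- if and .Values.metrics.port (eq (int .Values.metrics.port) 9999) }}
ports:
  - containerPort: {{ .Values.metrics.port }}
{{- end }}
```

Rendering locally with `helm template` instead of `helm install` is a quick way to check whether a given Helm version reproduces the error without touching a cluster.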
[jira] [Comment Edited] (FLINK-26121) ZooKeeperLeaderRetrievalConnectionHandlingTest.testNewLeaderAfterReconnectTriggersListenerNotification failed on azure
[ https://issues.apache.org/jira/browse/FLINK-26121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506752#comment-17506752 ] Matthias Pohl edited comment on FLINK-26121 at 3/15/22, 7:38 AM: - 1.14: 2ecc3614201526a1cbb8bd3e5cee44d3f1f5b42c master/1.15 was already updated earlier (see comment above): {quote} master: 8d86133d0619a4c3a94bc0657aa21d87292963ea {quote} was (Author: mapohl): 1.14: 2ecc3614201526a1cbb8bd3e5cee44d3f1f5b42c > ZooKeeperLeaderRetrievalConnectionHandlingTest.testNewLeaderAfterReconnectTriggersListenerNotification > failed on azure > -- > > Key: FLINK-26121 > URL: https://issues.apache.org/jira/browse/FLINK-26121 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.15.0, 1.14.4 >Reporter: Yun Gao >Assignee: Matthias Pohl >Priority: Major > Labels: pull-request-available, test-stability > Fix For: 1.15.0, 1.14.5 > > > {code:java} > 2022-02-11T21:43:35.4936452Z Feb 11 21:43:35 java.lang.AssertionError: The > TestingFatalErrorHandler caught an exception. 
> 2022-02-11T21:43:35.4940444Z Feb 11 21:43:35 at > org.apache.flink.runtime.util.TestingFatalErrorHandlerResource.after(TestingFatalErrorHandlerResource.java:81) > 2022-02-11T21:43:35.4941937Z Feb 11 21:43:35 at > org.apache.flink.runtime.util.TestingFatalErrorHandlerResource.access$300(TestingFatalErrorHandlerResource.java:36) > 2022-02-11T21:43:35.4943249Z Feb 11 21:43:35 at > org.apache.flink.runtime.util.TestingFatalErrorHandlerResource$1.evaluate(TestingFatalErrorHandlerResource.java:60) > 2022-02-11T21:43:35.4944745Z Feb 11 21:43:35 at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > 2022-02-11T21:43:35.4945682Z Feb 11 21:43:35 at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) > 2022-02-11T21:43:35.4946655Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > 2022-02-11T21:43:35.4947847Z Feb 11 21:43:35 at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > 2022-02-11T21:43:35.4948876Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > 2022-02-11T21:43:35.4949842Z Feb 11 21:43:35 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > 2022-02-11T21:43:35.4951142Z Feb 11 21:43:35 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > 2022-02-11T21:43:35.4952153Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > 2022-02-11T21:43:35.4953115Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > 2022-02-11T21:43:35.4954068Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > 2022-02-11T21:43:35.4955003Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > 2022-02-11T21:43:35.4955981Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > 2022-02-11T21:43:35.4956930Z Feb 11 
21:43:35 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > 2022-02-11T21:43:35.4958008Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner.run(ParentRunner.java:413) > 2022-02-11T21:43:35.4958899Z Feb 11 21:43:35 at > org.junit.runner.JUnitCore.run(JUnitCore.java:137) > 2022-02-11T21:43:35.4959774Z Feb 11 21:43:35 at > org.junit.runner.JUnitCore.run(JUnitCore.java:115) > 2022-02-11T21:43:35.4960911Z Feb 11 21:43:35 at > org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42) > 2022-02-11T21:43:35.4962095Z Feb 11 21:43:35 at > org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80) > 2022-02-11T21:43:35.4963136Z Feb 11 21:43:35 at > org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72) > 2022-02-11T21:43:35.4964275Z Feb 11 21:43:35 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107) > 2022-02-11T21:43:35.4965527Z Feb 11 21:43:35 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) > 2022-02-11T21:43:35.4966787Z Feb 11 21:43:35 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) > 2022-02-11T21:43:35.4968228Z Feb 11 21:43:35 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) > 2022-02-11T21:43:35.4969485Z Feb 11 21:43:35 at > org.junit.platform.launcher.core.EngineExecutionOrch
[jira] [Resolved] (FLINK-26121) ZooKeeperLeaderRetrievalConnectionHandlingTest.testNewLeaderAfterReconnectTriggersListenerNotification failed on azure
[ https://issues.apache.org/jira/browse/FLINK-26121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias Pohl resolved FLINK-26121. --- Fix Version/s: 1.14.5 Resolution: Fixed 1.14: 2ecc3614201526a1cbb8bd3e5cee44d3f1f5b42c > ZooKeeperLeaderRetrievalConnectionHandlingTest.testNewLeaderAfterReconnectTriggersListenerNotification > failed on azure > -- > > Key: FLINK-26121 > URL: https://issues.apache.org/jira/browse/FLINK-26121 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.15.0, 1.14.4 >Reporter: Yun Gao >Assignee: Matthias Pohl >Priority: Major > Labels: pull-request-available, test-stability > Fix For: 1.15.0, 1.14.5 > > > {code:java} > 2022-02-11T21:43:35.4936452Z Feb 11 21:43:35 java.lang.AssertionError: The > TestingFatalErrorHandler caught an exception. > 2022-02-11T21:43:35.4940444Z Feb 11 21:43:35 at > org.apache.flink.runtime.util.TestingFatalErrorHandlerResource.after(TestingFatalErrorHandlerResource.java:81) > 2022-02-11T21:43:35.4941937Z Feb 11 21:43:35 at > org.apache.flink.runtime.util.TestingFatalErrorHandlerResource.access$300(TestingFatalErrorHandlerResource.java:36) > 2022-02-11T21:43:35.4943249Z Feb 11 21:43:35 at > org.apache.flink.runtime.util.TestingFatalErrorHandlerResource$1.evaluate(TestingFatalErrorHandlerResource.java:60) > 2022-02-11T21:43:35.4944745Z Feb 11 21:43:35 at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > 2022-02-11T21:43:35.4945682Z Feb 11 21:43:35 at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) > 2022-02-11T21:43:35.4946655Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > 2022-02-11T21:43:35.4947847Z Feb 11 21:43:35 at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > 2022-02-11T21:43:35.4948876Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > 2022-02-11T21:43:35.4949842Z Feb 11 21:43:35 at > 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > 2022-02-11T21:43:35.4951142Z Feb 11 21:43:35 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > 2022-02-11T21:43:35.4952153Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > 2022-02-11T21:43:35.4953115Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > 2022-02-11T21:43:35.4954068Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > 2022-02-11T21:43:35.4955003Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > 2022-02-11T21:43:35.4955981Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > 2022-02-11T21:43:35.4956930Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > 2022-02-11T21:43:35.4958008Z Feb 11 21:43:35 at > org.junit.runners.ParentRunner.run(ParentRunner.java:413) > 2022-02-11T21:43:35.4958899Z Feb 11 21:43:35 at > org.junit.runner.JUnitCore.run(JUnitCore.java:137) > 2022-02-11T21:43:35.4959774Z Feb 11 21:43:35 at > org.junit.runner.JUnitCore.run(JUnitCore.java:115) > 2022-02-11T21:43:35.4960911Z Feb 11 21:43:35 at > org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42) > 2022-02-11T21:43:35.4962095Z Feb 11 21:43:35 at > org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80) > 2022-02-11T21:43:35.4963136Z Feb 11 21:43:35 at > org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72) > 2022-02-11T21:43:35.4964275Z Feb 11 21:43:35 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107) > 2022-02-11T21:43:35.4965527Z Feb 11 21:43:35 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) > 2022-02-11T21:43:35.4966787Z Feb 11 21:43:35 
at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) > 2022-02-11T21:43:35.4968228Z Feb 11 21:43:35 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) > 2022-02-11T21:43:35.4969485Z Feb 11 21:43:35 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) > 2022-02-11T21:43:35.4970753Z Feb 11 21:43:35 at > org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114) > 2022-02-11T21:43:35.4971842Z Feb 11 21:43:35 at > org.juni
[jira] [Updated] (FLINK-26596) ZooKeeperLeaderRetrievalConnectionHandlingTest.testNewLeaderAfterReconnectTriggersListenerNotification failed on azure
[ https://issues.apache.org/jira/browse/FLINK-26596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias Pohl updated FLINK-26596: -- Affects Version/s: 1.14.4 > ZooKeeperLeaderRetrievalConnectionHandlingTest.testNewLeaderAfterReconnectTriggersListenerNotification > failed on azure > -- > > Key: FLINK-26596 > URL: https://issues.apache.org/jira/browse/FLINK-26596 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.15.0, 1.14.4 >Reporter: Yun Gao >Assignee: Matthias Pohl >Priority: Major > Labels: pull-request-available, test-stability > Fix For: 1.15.0 > > > {code:java} > Mar 10 09:16:30 [ERROR] > org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalConnectionHandlingTest.testNewLeaderAfterReconnectTriggersListenerNotification > Time elapsed: 20.752 s <<< ERROR! > Mar 10 09:16:30 java.lang.NullPointerException > Mar 10 09:16:30 at > org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalConnectionHandlingTest.lambda$null$9(ZooKeeperLeaderRetrievalConnectionHandlingTest.java:292) > Mar 10 09:16:30 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:161) > Mar 10 09:16:30 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145) > Mar 10 09:16:30 at > org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:137) > Mar 10 09:16:30 at > org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalConnectionHandlingTest.lambda$testNewLeaderAfterReconnectTriggersListenerNotification$10(ZooKeeperLeaderRetrievalConnectionHandlingTest.java:288) > Mar 10 09:16:30 at > org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalConnectionHandlingTest.testWithQueueLeaderElectionListener(ZooKeeperLeaderRetrievalConnectionHandlingTest.java:313) > Mar 10 09:16:30 at > 
org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalConnectionHandlingTest.testNewLeaderAfterReconnectTriggersListenerNotification(ZooKeeperLeaderRetrievalConnectionHandlingTest.java:250) > Mar 10 09:16:30 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Mar 10 09:16:30 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Mar 10 09:16:30 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Mar 10 09:16:30 at java.lang.reflect.Method.invoke(Method.java:498) > Mar 10 09:16:30 at > org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725) > Mar 10 09:16:30 at > org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) > Mar 10 09:16:30 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) > Mar 10 09:16:30 at > org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) > Mar 10 09:16:30 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) > Mar 10 09:16:30 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) > Mar 10 09:16:30 at > org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) > Mar 10 09:16:30 at > org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) > Mar 10 09:16:30 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) > Mar 10 09:16:30 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) > Mar 10 09:16:30 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) > Mar 10 09:16:30 
at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) > Mar 10 09:16:30 at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) > Mar 10 09:16:30 at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) > Mar 10 09:16:30 at > org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214) > Mar 10 09:16:30 at > org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) > Mar 10 09:16:3
[GitHub] [flink-table-store] leonardBang opened a new pull request #44: [hotfix][docs] Remove unnecessary html link tags and fix typo
leonardBang opened a new pull request #44: URL: https://github.com/apache/flink-table-store/pull/44 * 1 Remove unnecessary html link tags * 2 Fix typo
[GitHub] [flink] XComp commented on pull request #19082: [FLINK-26596][BP-1.14][runtime][test] Adds leadership loss handling
XComp commented on pull request #19082: URL: https://github.com/apache/flink/pull/19082#issuecomment-1067663904 rebased `release-1.14` after the [FLINK-26121 backport PR](https://github.com/apache/flink/pull/19080) was merged
[jira] [Commented] (FLINK-26568) BlockingShuffleITCase.testDeletePartitionFileOfBoundedBlockingShuffle timing out on Azure
[ https://issues.apache.org/jira/browse/FLINK-26568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506755#comment-17506755 ] Yingjie Cao commented on FLINK-26568: - BTW, since 1.15 the sort-shuffle implementation has become the default blocking shuffle implementation, so the BoundedBlocking implementation is no longer used by default. Besides, since we did not change the BoundedBlocking implementation in 1.15, this may not be a 1.15-only issue. > BlockingShuffleITCase.testDeletePartitionFileOfBoundedBlockingShuffle timing > out on Azure > - > > Key: FLINK-26568 > URL: https://issues.apache.org/jira/browse/FLINK-26568 > Project: Flink > Issue Type: Bug > Components: Runtime / Network, Runtime / Task >Affects Versions: 1.15.0 >Reporter: Matthias Pohl >Priority: Critical > Labels: test-stability > Fix For: 1.15.0 > > > [This > build|https://dev.azure.com/mapohl/flink/_build/results?buildId=845&view=logs&j=0a15d512-44ac-5ba5-97ab-13a5d066c22c&t=9a028d19-6c4b-5a4e-d378-03fca149d0b1&l=12865] > timed out due the test > {{BlockingShuffleITCase.testDeletePartitionFileOfBoundedBlockingShuffle}} not > finishing.
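For context on the default change mentioned above: the choice between sort-shuffle and the older hash-based (BoundedBlocking) shuffle is governed by the `taskmanager.network.sort-shuffle.min-parallelism` option, and in 1.15 its default was lowered to 1 so that sort-shuffle applies at every parallelism. The fragment below is an illustrative flink-conf.yaml sketch, not a quote from this thread; verify the option name and default against the Flink configuration docs for your version.

```
# flink-conf.yaml (illustrative)

# Jobs whose parallelism is at or above this threshold use the
# sort-shuffle implementation for blocking data exchanges.
# Since 1.15 the default is 1, i.e. sort-shuffle everywhere;
# setting a very large value would effectively fall back to the
# BoundedBlocking (hash) shuffle at all parallelisms.
taskmanager.network.sort-shuffle.min-parallelism: 1
```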
[GitHub] [flink] flinkbot edited a comment on pull request #19092: WIP: [FLINK-26645][Connector/Pulsar] Fix Pulsar source subscriber consume from all partitions when only subscribed to 1 partition
flinkbot edited a comment on pull request #19092: URL: https://github.com/apache/flink/pull/19092#issuecomment-1067660495 ## CI report: * 7294f936a7361d9d3e5fdb2ddb87bd323a97ed2b Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33079) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[jira] [Commented] (FLINK-26646) Flink kubernetes operator helm template is broken
[ https://issues.apache.org/jira/browse/FLINK-26646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506757#comment-17506757 ] Gyula Fora commented on FLINK-26646: For me this works correctly; it could indeed be a Helm version issue. My Helm version is "v3.8.0". > Flink kubernetes operator helm template is broken > - > > Key: FLINK-26646 > URL: https://issues.apache.org/jira/browse/FLINK-26646 > Project: Flink > Issue Type: Bug > Components: Kubernetes Operator >Reporter: Yang Wang >Priority: Major > > {code:java} > wangyang-pc:flink-kubernetes-operator danrtsey.wy$ helm install > flink-operator helm/flink-operator --set > image.repository=wangyang09180523/flink-java-operator --set image.tag=latest > --set metrics.port= > Error: template: flink-operator/templates/flink-operator.yaml:143:12: > executing "flink-operator/templates/flink-operator.yaml" at (.Values.operatorConfiguration).append false>: error calling eq: incompatible > types for comparison {code}
[jira] [Closed] (FLINK-26421) Cleanup EnvironmentSettings and only use ConfigOptions
[ https://issues.apache.org/jira/browse/FLINK-26421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timo Walther closed FLINK-26421. Fix Version/s: 1.15.0 Release Note: flink-conf.yaml and other configuration from outer layers (e.g. CLI) are now propagated into TableConfig. Even though configuration set directly in TableConfig still has precedence, this change can have side effects if table configuration was accidentally set in other layers. Resolution: Fixed Fixed in master: commit c16e4b4ce20704a0ad4387591894f13105d5e530 [table-api-java] Improve JavaDocs of TableConfig commit 57742b85095147711070c566069244c40ed8e77c Use only EnvironmentSettings to configure the environment commit 9e3e51d3549f033bc0e1836bb5f5f6c831740b5f [table] Remove planner & executor string identifiers Fixed in release-1.15: commit f14f63623056d8ea2c3aba6d0bc017e0418a5ea2 [table-api-java] Improve JavaDocs of TableConfig commit 567b28e26b65601759c51908c10fe362e48e1a89 Use only EnvironmentSettings to configure the environment commit 5feff4a46f7c4c9d395347e42b9d68b8fe00c8a5 [table] Remove planner & executor string identifiers > Cleanup EnvironmentSettings and only use ConfigOptions > -- > > Key: FLINK-26421 > URL: https://issues.apache.org/jira/browse/FLINK-26421 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / API >Reporter: Marios Trivyzas >Assignee: Marios Trivyzas >Priority: Major > Labels: pull-request-available > Fix For: 1.15.0 > > > * integrate Configuration into EnvironmentSettings > * EnvironmentSettings should only contain a Configuration, not other members > (create config options for all members) > * Remove `TableEnvironmentImpl.create(EnvSettings, Configuration)` -> > create(EnvSettings)
[jira] [Updated] (FLINK-26421) Propagate executor config to TableConfig
[ https://issues.apache.org/jira/browse/FLINK-26421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timo Walther updated FLINK-26421: - Summary: Propagate executor config to TableConfig (was: Cleanup EnvironmentSettings and only use ConfigOptions) > Propagate executor config to TableConfig > > > Key: FLINK-26421 > URL: https://issues.apache.org/jira/browse/FLINK-26421 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / API >Reporter: Marios Trivyzas >Assignee: Marios Trivyzas >Priority: Major > Labels: pull-request-available > Fix For: 1.15.0 > > > * integrate Configuration into EnvironmentSettings > * EnvironmentSettings should only contain a Configuration, not other members > (create config options for all members) > * Remove `TableEnvironmentImpl.create(EnvSettings, Configuration)` -> > create(EnvSettings)
[GitHub] [flink] wangyang0918 commented on a change in pull request #18854: [FLINK-25648][Kubernetes] Avoid redundant to query Kubernetes deployment when creating task manager pods
wangyang0918 commented on a change in pull request #18854: URL: https://github.com/apache/flink/pull/18854#discussion_r826665717

File path: flink-kubernetes/src/test/java/org/apache/flink/kubernetes/kubeclient/Fabric8FlinkKubeClientTest.java

@@ -250,6 +243,32 @@ public void testCreateFlinkTaskManagerPod() throws Exception {
                 resultTaskManagerPod.getMetadata().getOwnerReferences().get(0).getUid());
     }

+    @Test
+    public void testCreateTwoTaskManagerPods() throws Exception {
+        flinkKubeClient.createJobManagerComponent(this.kubernetesJobManagerSpecification);
+        flinkKubeClient.createTaskManagerPod(buildKubernetesPod("mock-task-manager-pod1")).get();
+        mockGetDeploymentWithError();
+        try {
+            flinkKubeClient
+                    .createTaskManagerPod(buildKubernetesPod("mock-task-manager-pod2"))
+                    .get();
+        } catch (Exception e) {
+            fail("should only get the master deployment once");
+        }
+    }
+
+    @NotNull

Review comment: Unnecessary `NotNull`.
[jira] [Updated] (FLINK-26421) Propagate executor config to TableConfig
[ https://issues.apache.org/jira/browse/FLINK-26421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timo Walther updated FLINK-26421: - Description: * integrate Configuration into EnvironmentSettings * EnvironmentSettings should only contain a Configuration, not other members (create config options for all members) * Remove `TableEnvironmentImpl.create(EnvSettings, Configuration)` -> create(EnvSettings) * Introduce a root configuration in TableConfig that contains the executor config for defaults and fallback was: * integrate Configuration into EnvironmentSettings * EnvironmentSettings should only contain a Configuration, not other members (create config options for all members) * Remove `TableEnvironmentImpl.create(EnvSettings, Configuration)` -> create(EnvSettings) > Propagate executor config to TableConfig > > > Key: FLINK-26421 > URL: https://issues.apache.org/jira/browse/FLINK-26421 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / API >Reporter: Marios Trivyzas >Assignee: Marios Trivyzas >Priority: Major > Labels: pull-request-available > Fix For: 1.15.0 > > > * integrate Configuration into EnvironmentSettings > * EnvironmentSettings should only contain a Configuration, not other members > (create config options for all members) > * Remove `TableEnvironmentImpl.create(EnvSettings, Configuration)` -> > create(EnvSettings) > * Introduce a root configuration in TableConfig that contains the executor > config for defaults and fallback
[GitHub] [flink] flinkbot edited a comment on pull request #19082: [FLINK-26596][BP-1.14][runtime][test] Adds leadership loss handling
flinkbot edited a comment on pull request #19082: URL: https://github.com/apache/flink/pull/19082#issuecomment-1066919434 ## CI report: * 8e513a60edab58ca4b28ea3dd0c686a48f38f8c4 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33030) * 749c4b387cdfdaeca158f33706b51de842e1796f UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #19089: [FLINK-26607][python] There are multiple MAX_LONG_VALUE value errors in pyflink code
flinkbot edited a comment on pull request #19089: URL: https://github.com/apache/flink/pull/19089#issuecomment-1067544609 ## CI report: * 146e605acf068b9dcdcb4913fc7fc6e805cfd8af Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33068) * 103331d5386317a184e7f363fa490a43842b8915 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33073) * 36faf315257032ac88e6bbb103e22b60409dff7e UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #19090: [BP-1.14][docs] Update wrong links in the datastream/execution_mode.md page.
flinkbot edited a comment on pull request #19090: URL: https://github.com/apache/flink/pull/19090#issuecomment-1067583371 ## CI report: * 11ef66762336f4bb29cc8a9d47df859aa0cab1f9 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33070) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-26647) Can not add extra config files on native Kubernetes
Zhe Wang created FLINK-26647: Summary: Can not add extra config files on native Kubernetes Key: FLINK-26647 URL: https://issues.apache.org/jira/browse/FLINK-26647 Project: Flink Issue Type: Bug Components: Deployment / Kubernetes Affects Versions: 1.13.5 Reporter: Zhe Wang When using native Kubernetes mode (both session and application) with the FLINK_CONF_DIR environment variable predefined to point at a directory of config files, only two files ( *flink-conf.yaml and log4j-console.properties* ) are populated into the configmap, which means other config files (like sql-client-defaults.yaml, zoo.cfg etc.) are missing. Tried these, neither worked out: 1) After native Kubernetes startup, change both the configmap and the deployment: 1. add all my config files to the configmap. 2. add the config file to deployment.spec.template.spec.volumes[] 3. The Flink job pod fails to start (log: lost leadership) 2) Using a *pod-template-file.taskmanager* file: 1. add config files to the created configmap. 2. add my config files to the template (others can be merged by Flink as the guide says) 3. The Flink task pod fails to start, log: Duplicated volume name -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Closed] (FLINK-26528) Trigger the updateControl when the FlinkDeployment have changed
[ https://issues.apache.org/jira/browse/FLINK-26528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gyula Fora closed FLINK-26528. -- Resolution: Fixed merged: ec2e0a11eda5e05619e683b4a661ac4875142bc2 > Trigger the updateControl when the FlinkDeployment have changed > --- > > Key: FLINK-26528 > URL: https://issues.apache.org/jira/browse/FLINK-26528 > Project: Flink > Issue Type: Sub-task > Components: Kubernetes Operator >Reporter: Aitozi >Assignee: Aitozi >Priority: Major > Labels: pull-request-available > > If the CR has not changed since the last reconcile, we could create an > UpdateControl with {{UpdateControl#noUpdate}}; this is meant to reduce > unnecessary updates to the resource -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] Myasuka commented on a change in pull request #19051: [FLINK-26063][state/changelog] Compute keys of the removed PQ elements
Myasuka commented on a change in pull request #19051: URL: https://github.com/apache/flink/pull/19051#discussion_r826677119

## File path: flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/AbstractStateChangeLogger.java

@@ -145,9 +145,17 @@
 protected void log(
         @Nullable ThrowingConsumer dataWriter, Ns ns)
         throws IOException {
+    log(op, dataWriter, ns, keyContext.getCurrentKeyGroupIndex());
+}
+
+protected void log(
+        StateChangeOperation op,
+        @Nullable ThrowingConsumer dataWriter,
+        Ns ns,
+        int keyGroup)

Review comment: I think the root cause is that we don't have the correct `currentKey` during pqState#poll. If we could modify `InternalTimerServiceImpl` like below, to set the current key before calling poll:

~~~ java
private void onProcessingTime(long time) throws Exception {
    // null out the timer in case the Triggerable calls registerProcessingTimeTimer()
    // inside the callback.
    nextTimer = null;
    InternalTimer timer;
    while ((timer = processingTimeTimersQueue.peek()) != null
            && timer.getTimestamp() <= time) {
        keyContext.setCurrentKey(timer.getKey());
        processingTimeTimersQueue.poll();
        triggerTarget.onProcessingTime(timer);
    }
    if (timer != null && nextTimer == null) {
        nextTimer =
                processingTimeService.registerTimer(
                        timer.getTimestamp(), this::onProcessingTime);
    }
}
~~~

and

~~~ java
public void advanceWatermark(long time) throws Exception {
    currentWatermark = time;
    InternalTimer timer;
    while ((timer = eventTimeTimersQueue.peek()) != null
            && timer.getTimestamp() <= time) {
        keyContext.setCurrentKey(timer.getKey());
        eventTimeTimersQueue.poll();
        triggerTarget.onEventTime(timer);
    }
~~~

The test could also pass. I think this change looks better: it is a smaller change and avoids computing the key group again.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
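The pattern proposed in the review — setting the current key on the key context *before* polling the timer queue, so that anything triggered by the poll observes the key of the timer being removed — can be illustrated with a self-contained sketch (hypothetical names, not Flink's actual classes):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Minimal sketch of the fix discussed above: set the current key before
// polling, so side effects of poll() (e.g. changelog logging) see the right key.
class TimerDrainSketch {
    static class Timer {
        final String key;
        final long timestamp;
        Timer(String key, long timestamp) { this.key = key; this.timestamp = timestamp; }
    }

    static class KeyContext {
        String currentKey;
        void setCurrentKey(String key) { this.currentKey = key; }
    }

    // Stand-in for InternalTimerServiceImpl#advanceWatermark: drain all timers
    // up to `time`, recording the key context's value at each poll.
    static List<String> drainUpTo(PriorityQueue<Timer> queue, KeyContext ctx, long time) {
        List<String> observedKeys = new ArrayList<>();
        Timer timer;
        while ((timer = queue.peek()) != null && timer.timestamp <= time) {
            ctx.setCurrentKey(timer.key); // key is correct before poll() fires side effects
            queue.poll();
            observedKeys.add(ctx.currentKey); // stands in for triggerTarget.onEventTime(timer)
        }
        return observedKeys;
    }

    public static void main(String[] args) {
        PriorityQueue<Timer> q =
                new PriorityQueue<>((a, b) -> Long.compare(a.timestamp, b.timestamp));
        q.add(new Timer("a", 1L));
        q.add(new Timer("c", 10L));
        q.add(new Timer("b", 2L));
        KeyContext ctx = new KeyContext();
        System.out.println(drainUpTo(q, ctx, 5L)); // prints [a, b]
    }
}
```

The point of the ordering is visible in `observedKeys`: each logged key matches the timer just removed, which is what makes recomputing the key group unnecessary in the changelog logger.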
[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #44: [hotfix][docs] Remove unnecessary html link tags and fix typo
JingsongLi commented on a change in pull request #44: URL: https://github.com/apache/flink-table-store/pull/44#discussion_r826676368 ## File path: README.md ## @@ -1,19 +1,19 @@ -# FLink Table Store +# Flink Table Store Flink Table Store is a unified streaming and batch store for building dynamic tables on Apache Flink. Flink Table Store is developed under the umbrella of [Apache Flink](https://flink.apache.org/). -## Building the Project Review comment: It is better to leave this, the title can be `build`. ## File path: README.md ## @@ -1,19 +1,19 @@ -# FLink Table Store +# Flink Table Store Flink Table Store is a unified streaming and batch store for building dynamic tables on Apache Flink. Flink Table Store is developed under the umbrella of [Apache Flink](https://flink.apache.org/). -## Building the Project +## Building the Project Run the `mvn clean package` command. Then you will find a JAR file that contains your application, plus any libraries that you may have added as dependencies to the application: `target/-.jar`. -## Contributing +## Contributing You can learn more about how to contribute on the [Apache Flink website](https://flink.apache.org/contributing/how-to-contribute.html). For code contributions, please read carefully the [Contributing Code](https://flink.apache.org/contributing/contribute-code.html) section for an overview of ongoing community work. -## License +## License -The code in this repository is licensed under the [Apache Software License 2](LICENSE). Review comment: `Apache Software License` is ASL. https://wiki.debian.org/DFSGLicenses#The_Apache_Software_License_.28ASL.29 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-26647) Can not add extra config files on native Kubernetes
[ https://issues.apache.org/jira/browse/FLINK-26647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Wang updated FLINK-26647: - Description: When using native Kubernetes mode (both session and application), predefine FLINK_CONF_DIR environment with config files in. Only two files( *flink-conf.yaml and log4j-console.properties* ) are populated to configmap which means missing of other config files(like sql-client-defaults.yaml, zoo.cfg etc.) Tried these, neither worked out: 1) After native Kubernetes startup, change both configmap and deployment: 1. add all my config files to configmap. 2. add config file to deployment.spec.template.spec.volumes[] 3. Flink job pod startups fail(log: lost leadership ) 2) Using a *pod-template-file.taskmanager* file: 1. add config files to created configmap. 2. add my config files to template(others can be merged by Flink as guide says) 3. Flink task pod startup fail, log: Duplicated volume name was: When using native Kubernetes mode (both session and application), predefine FLINK_CONF_DIR environment with config files in. Only two files( *flink-conf.yaml and log4j-console.properties* ) are populated to configmap which means missing of other config files(like sql-client-defaults.yaml, zoo.cfg etc.) Tried these, neither worked out: 1) After native Kubernetes startup, change both configmap and deployment: 1. add all my config files to configmap. 2. add config file to deployment.spec.template.spec.volumes[] 3. Flink job pod startup failed(log: lost leadership ) 2) Using a *pod-template-file.taskmanager* file: 1. add config files to created configmap. 2. add my config files to template(others can be merged by Flink as guide says) 3. Flink task pod fail to startup, log: Duplicated volume name > Can not add extra config files on native Kubernetes > > > Key: FLINK-26647 > URL: https://issues.apache.org/jira/browse/FLINK-26647 > Project: Flink > Issue Type: Bug > Components: Deployment / Kubernetes >Affects Versions: 1.13.5 >Reporter: Zhe Wang >Priority: Critical > > When using native Kubernetes mode (both session and application), predefine > FLINK_CONF_DIR environment with config files in. Only two files( > *flink-conf.yaml and log4j-console.properties* ) are populated to configmap > which means missing of other config files(like sql-client-defaults.yaml, > zoo.cfg etc.) > Tried these, neither worked out: > 1) After native Kubernetes startup, change both configmap and deployment: > 1. add all my config files to configmap. > 2. add config file to deployment.spec.template.spec.volumes[] > 3. Flink job pod startups fail(log: lost leadership ) > > 2) Using a *pod-template-file.taskmanager* file: > 1. add config files to created configmap. > 2. add my config files to template(others can be merged by Flink as guide > says) > 3. Flink task pod startup fail, log: Duplicated volume name -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] metaswirl opened a new pull request #19093: [FLINK-24960][yarn]enable log line of YarnClusterDescriptor
metaswirl opened a new pull request #19093: URL: https://github.com/apache/flink/pull/19093 Enable log line to debug future instances of YARNSessionCapacitySchedulerITCase.testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots timing out. See https://issues.apache.org/jira/browse/FLINK-24960 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-24960) YARNSessionCapacitySchedulerITCase.testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots hang
[ https://issues.apache.org/jira/browse/FLINK-24960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-24960: --- Labels: pull-request-available test-stability (was: test-stability) > YARNSessionCapacitySchedulerITCase.testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots > hangs on azure > --- > > Key: FLINK-24960 > URL: https://issues.apache.org/jira/browse/FLINK-24960 > Project: Flink > Issue Type: Bug > Components: Deployment / YARN >Affects Versions: 1.15.0, 1.14.3 >Reporter: Yun Gao >Assignee: Niklas Semmler >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.15.0 > > > {code:java} > Nov 18 22:37:08 > > Nov 18 22:37:08 Test > testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots(org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase) > is running. > Nov 18 22:37:08 > > Nov 18 22:37:25 22:37:25,470 [main] INFO > org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase [] - Extracted > hostname:port: 5718b812c7ab:38622 > Nov 18 22:52:36 > == > Nov 18 22:52:36 Process produced no output for 900 seconds. > Nov 18 22:52:36 > == > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=26722&view=logs&j=f450c1a5-64b1-5955-e215-49cb1ad5ec88&t=cc452273-9efa-565d-9db8-ef62a38a0c10&l=36395 -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (FLINK-26634) Update Chinese version of Elasticsearch connector docs
[ https://issues.apache.org/jira/browse/FLINK-26634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506768#comment-17506768 ] Alexander Preuss commented on FLINK-26634: -- Hi [~chenzihao], you can open the PR already; the 19035 PR will likely be merged today > Update Chinese version of Elasticsearch connector docs > -- > > Key: FLINK-26634 > URL: https://issues.apache.org/jira/browse/FLINK-26634 > Project: Flink > Issue Type: Improvement > Components: chinese-translation, Documentation >Affects Versions: 1.15.0 >Reporter: Alexander Preuss >Assignee: chenzihao >Priority: Major > > In [https://github.com/apache/flink/pull/19035] we made some smaller changes > to the documentation for the Elasticsearch connector with regards to the > delivery guarantee. These changes still need to be ported over to the Chinese > docs. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] ZhangChaoming commented on pull request #18386: [FLINK-25684][table] Support enhanced `show databases` syntax
ZhangChaoming commented on pull request #18386: URL: https://github.com/apache/flink/pull/18386#issuecomment-1067685766 @flinkbot run azure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-26646) Flink kubernetes operator helm template is broken
[ https://issues.apache.org/jira/browse/FLINK-26646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506769#comment-17506769 ] Yang Wang commented on FLINK-26646: --- Hmm... It works with v3.8.0. > Flink kubernetes operator helm template is broken > - > > Key: FLINK-26646 > URL: https://issues.apache.org/jira/browse/FLINK-26646 > Project: Flink > Issue Type: Bug > Components: Kubernetes Operator >Reporter: Yang Wang >Priority: Major > > {code:java} > wangyang-pc:flink-kubernetes-operator danrtsey.wy$ helm install > flink-operator helm/flink-operator --set > image.repository=wangyang09180523/flink-java-operator --set image.tag=latest > --set metrics.port= > Error: template: flink-operator/templates/flink-operator.yaml:143:12: > executing "flink-operator/templates/flink-operator.yaml" at <eq > (.Values.operatorConfiguration).append false>: error calling eq: incompatible > types for comparison {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] MartijnVisser merged pull request #19090: [BP-1.14][docs] Update wrong links in the datastream/execution_mode.md page.
MartijnVisser merged pull request #19090: URL: https://github.com/apache/flink/pull/19090 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] MartijnVisser merged pull request #19091: [BP-1.15][docs] Update wrong links in the datastream/execution_mode.md page.
MartijnVisser merged pull request #19091: URL: https://github.com/apache/flink/pull/19091 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Closed] (FLINK-25800) Update wrong links in the datastream/execution_mode.md page.
[ https://issues.apache.org/jira/browse/FLINK-25800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Martijn Visser closed FLINK-25800. -- Fix Version/s: 1.15.0 1.16.0 1.14.5 Resolution: Fixed > Update wrong links in the datastream/execution_mode.md page. > > > Key: FLINK-25800 > URL: https://issues.apache.org/jira/browse/FLINK-25800 > Project: Flink > Issue Type: Sub-task > Components: Documentation >Reporter: RocMarshal >Assignee: RocMarshal >Priority: Minor > Labels: pull-request-available > Fix For: 1.15.0, 1.16.0, 1.14.5 > > > * flink/docs/content/docs/dev/datastream/execution_mode.md > * flink/docs/content.zh/docs/dev/datastream/execution_mode.md -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] Myasuka merged pull request #18840: [FLINK-25528][state-processor-api] state processor api do not support increment checkpoint
Myasuka merged pull request #18840: URL: https://github.com/apache/flink/pull/18840 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-25800) Update wrong links in the datastream/execution_mode.md page.
[ https://issues.apache.org/jira/browse/FLINK-25800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506771#comment-17506771 ] Martijn Visser commented on FLINK-25800: Fixed in release-1.15: 3b7eac6ca52d4dc019ee83a41a42b71109406514 Fixed in release-1.14: 0d9809fedcc7daffdc851106cdb8dd1363cffc4a > Update wrong links in the datastream/execution_mode.md page. > > > Key: FLINK-25800 > URL: https://issues.apache.org/jira/browse/FLINK-25800 > Project: Flink > Issue Type: Sub-task > Components: Documentation >Reporter: RocMarshal >Assignee: RocMarshal >Priority: Minor > Labels: pull-request-available > Fix For: 1.15.0, 1.16.0, 1.14.5 > > > * flink/docs/content/docs/dev/datastream/execution_mode.md > * flink/docs/content.zh/docs/dev/datastream/execution_mode.md -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (FLINK-26646) Flink kubernetes operator helm template is broken
[ https://issues.apache.org/jira/browse/FLINK-26646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506772#comment-17506772 ] Gyula Fora commented on FLINK-26646: Maybe there is another way to do this check correctly. I wanted to have append as optional with default true, and this was the only thing that seemed to have worked correctly. The error you are getting is probably due to comparing a nil with false. > Flink kubernetes operator helm template is broken > - > > Key: FLINK-26646 > URL: https://issues.apache.org/jira/browse/FLINK-26646 > Project: Flink > Issue Type: Bug > Components: Kubernetes Operator >Reporter: Yang Wang >Priority: Major > > {code:java} > wangyang-pc:flink-kubernetes-operator danrtsey.wy$ helm install > flink-operator helm/flink-operator --set > image.repository=wangyang09180523/flink-java-operator --set image.tag=latest > --set metrics.port= > Error: template: flink-operator/templates/flink-operator.yaml:143:12: > executing "flink-operator/templates/flink-operator.yaml" at <eq > (.Values.operatorConfiguration).append false>: error calling eq: incompatible > types for comparison {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (FLINK-26320) Update hive doc for 1 m->1 min
[ https://issues.apache.org/jira/browse/FLINK-26320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Martijn Visser updated FLINK-26320: --- Fix Version/s: 1.15.0 (was: 1.16.0) > Update hive doc for 1 m->1 min > -- > > Key: FLINK-26320 > URL: https://issues.apache.org/jira/browse/FLINK-26320 > Project: Flink > Issue Type: Improvement > Components: Connectors / Hive, Documentation >Reporter: hehuiyuan >Assignee: hehuiyuan >Priority: Minor > Labels: pull-request-available > Fix For: 1.15.0 > > Attachments: image-2022-02-23-15-36-54-649.png > > > {{Time interval unit label 'm' does not match any of the recognized units: > DAYS: (d | day | days), HOURS: (h | hour | hours), MINUTES: (min | minute | > minutes), SECONDS: (s | sec | secs | second | seconds), MILLISECONDS: (ms | > milli | millis | millisecond | milliseconds), MICROSECONDS: (µs | micro | > micros | microsecond | microseconds), NANOSECONDS: (ns | nano | nanos | > nanosecond | nanoseconds)}} > '1 m' is misleading when used in the docs, since it is not a recognized > unit. > > !image-2022-02-23-15-36-54-649.png|width=491,height=219! -- This message was sent by Atlassian Jira (v8.20.1#820001)
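The point behind the doc fix can be checked against the unit table quoted in the error message above: "m" is not among the recognized minute labels, while "min" is. A small illustrative sketch (the class is not Flink's actual parser; the label sets are copied from the error message):

```java
import java.util.Map;
import java.util.Set;

// Illustrates why "1 m" fails while "1 min" works: "m" matches none of the
// recognized unit labels listed in the parser's error message.
class TimeUnitLabels {
    static final Map<String, Set<String>> RECOGNIZED = Map.of(
            "MINUTES", Set.of("min", "minute", "minutes"),
            "MILLISECONDS", Set.of("ms", "milli", "millis", "millisecond", "milliseconds"));

    static boolean isRecognized(String label) {
        return RECOGNIZED.values().stream().anyMatch(labels -> labels.contains(label));
    }
}
```

Note the ambiguity the parser avoids: a bare "m" could plausibly mean minutes or milliseconds, which is presumably why only the unambiguous labels are accepted.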
[jira] [Updated] (FLINK-25761) Translate Avro format page into Chinese.
[ https://issues.apache.org/jira/browse/FLINK-25761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Martijn Visser updated FLINK-25761: --- Fix Version/s: 1.15.0 (was: 1.16.0) > Translate Avro format page into Chinese. > > > Key: FLINK-25761 > URL: https://issues.apache.org/jira/browse/FLINK-25761 > Project: Flink > Issue Type: Sub-task > Components: Documentation >Reporter: RocMarshal >Assignee: baisike >Priority: Minor > Labels: chinese-translation, pull-request-available > Fix For: 1.15.0 > > > file location: > flink/docs/content.zh/docs/connectors/datastream/formats/avro.md -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] MartijnVisser commented on pull request #18982: [FLINK-26334][datastream] Fix getWindowStartWithOffset in TimeWindow.java
MartijnVisser commented on pull request #18982: URL: https://github.com/apache/flink/pull/18982#issuecomment-1067689211 @zjuwangg Unfortunately this is not my expertise, so I can't review and merge it. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-26646) Flink kubernetes operator helm template is broken
[ https://issues.apache.org/jira/browse/FLINK-26646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506776#comment-17506776 ] Yang Wang commented on FLINK-26646: --- Yes. Maybe we could explicitly set the {{.Values.operatorConfiguration.append}} to false. > Flink kubernetes operator helm template is broken > - > > Key: FLINK-26646 > URL: https://issues.apache.org/jira/browse/FLINK-26646 > Project: Flink > Issue Type: Bug > Components: Kubernetes Operator >Reporter: Yang Wang >Priority: Major > > {code:java} > wangyang-pc:flink-kubernetes-operator danrtsey.wy$ helm install > flink-operator helm/flink-operator --set > image.repository=wangyang09180523/flink-java-operator --set image.tag=latest > --set metrics.port= > Error: template: flink-operator/templates/flink-operator.yaml:143:12: > executing "flink-operator/templates/flink-operator.yaml" at <eq > (.Values.operatorConfiguration).append false>: error calling eq: incompatible > types for comparison {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
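The nil-vs-bool comparison discussed in this thread can be avoided by giving the flag an explicit boolean default in the chart's values file. A minimal sketch — the `operatorConfiguration.append` key comes from the error message above; the file path and surrounding structure are assumptions:

```yaml
# helm/flink-operator/values.yaml (sketch)
operatorConfiguration:
  # Declared explicitly so the template's `eq ... false` compares a bool
  # to a bool instead of failing on nil when the key is unset.
  append: false
```

Note that working around this in the template with Sprig's `default` function is subtle, since `default` treats an explicit `false` as empty and would silently flip it back to the default — which is why an explicit value in values.yaml is the safer fix.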
[GitHub] [flink] golden-yang opened a new pull request #19094: [FLINK-26642][connector/pulsar] Fix support for non-partitioned topic
golden-yang opened a new pull request #19094: URL: https://github.com/apache/flink/pull/19094 ## What is the purpose of the change *(For example: This pull request makes task deployment go through the blob server, rather than through RPC. That way we avoid re-transferring them on each deployment (during recovery).)* ## Brief change log *(for example:)* - *The TaskInfo is stored in the blob store on job creation time as a persistent artifact* - *Deployments RPC transmits only the blob storage reference* - *TaskManagers retrieve the TaskInfo from the blob cache* ## Verifying this change Please make sure both new and modified tests in this PR follows the conventions defined in our code quality guide: https://flink.apache.org/contributing/code-style-and-quality-common.html#testing *(Please pick either of the following options)* This change is a trivial rework / code cleanup without any test coverage. *(or)* This change is already covered by existing tests, such as *(please describe tests)*. 
*(or)* This change added tests and can be verified as follows: *(example:)* - *Added integration tests for end-to-end deployment with large payloads (100MB)* - *Extended integration test for recovery after master (JobManager) failure* - *Added test that validates that TaskInfo is transferred only once across recoveries* - *Manually verified the change by running a 4 node cluster with 2 JobManagers and 4 TaskManagers, a stateful streaming program, and killing one JobManager and two TaskManagers during the execution, verifying that recovery happens correctly.* ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / no) - The serializers: (yes / no / don't know) - The runtime per-record code paths (performance sensitive): (yes / no / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know) - The S3 file system connector: (yes / no / don't know) ## Documentation - Does this pull request introduce a new feature? (yes / no) - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-26642) Pulsar sink fails with non-partitioned topic
[ https://issues.apache.org/jira/browse/FLINK-26642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-26642: --- Labels: pull-request-available (was: ) > Pulsar sink fails with non-partitioned topic > > > Key: FLINK-26642 > URL: https://issues.apache.org/jira/browse/FLINK-26642 > Project: Flink > Issue Type: Bug > Components: Connectors / Pulsar >Affects Versions: 1.15.0 >Reporter: goldenyang >Priority: Major > Labels: pull-request-available > Original Estimate: 72h > Remaining Estimate: 72h > > Flink support pulsar sink now in > [FLINK-20732|https://issues.apache.org/jira/browse/FLINK-20732]. I > encountered a problem when using pulsar sink in master branch, when I use > non-partitioned topic. > The current test found that both partitioned topics and non-partitioned > topics ending with -partition-i can be supported, but ordinary > non-partitioned topics without -partition-i will have problems, such as > 'test_topic'. > Reproducing the problem requires writing to a non-partitioned topic. Below is > the stack information when the exception is encountered. I briefly > communicated with [~Jianyun Zhao] , this may be a bug. > > {code:java} > 2022-03-08 21:39:13,622 - INFO - > [flink-akka.actor.default-dispatcher-13:Execution@1419] - Source: Pulsar > Source -> (Sink: Writer -> Sink: Committer, Sink: Print to Std. 
> Out) (1/6) (44af5e8a2b9d553952c7ed3e5d40e672) switched from RUNNING to FAILED on 54284e57-42a9-4e2e-9c41-54b0ad559832 @ 127.0.0.1 (dataPort=-1).
> java.lang.IllegalArgumentException: You should provide topics for routing topic by message key hash.
>     at org.apache.flink.shaded.guava30.com.google.common.base.Preconditions.checkArgument(Preconditions.java:144)
>     at org.apache.flink.connector.pulsar.sink.writer.router.RoundRobinTopicRouter.route(RoundRobinTopicRouter.java:54)
>     at org.apache.flink.connector.pulsar.sink.writer.PulsarWriter.write(PulsarWriter.java:138)
>     at org.apache.flink.streaming.runtime.operators.sink.SinkWriterOperator.processElement(SinkWriterOperator.java:124)
>     at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:82)
>     at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:57)
>     at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:29)
>     at org.apache.flink.streaming.runtime.tasks.BroadcastingOutputCollector.collect(BroadcastingOutputCollector.java:77)
>     at org.apache.flink.streaming.runtime.tasks.BroadcastingOutputCollector.collect(BroadcastingOutputCollector.java:32)
>     at org.apache.flink.streaming.runtime.tasks.SourceOperatorStreamTask$AsyncDataOutputToOutput.emitRecord(SourceOperatorStreamTask.java:205)
>     at org.apache.flink.streaming.api.operators.source.SourceOutputWithWatermarks.collect(SourceOutputWithWatermarks.java:110)
>     at org.apache.flink.connector.pulsar.source.reader.emitter.PulsarRecordEmitter.emitRecord(PulsarRecordEmitter.java:41)
>     at org.apache.flink.connector.pulsar.source.reader.emitter.PulsarRecordEmitter.emitRecord(PulsarRecordEmitter.java:33)
>     at org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:143)
>     at org.apache.flink.connector.pulsar.source.reader.source.PulsarOrderedSourceReader.pollNext(PulsarOrderedSourceReader.java:106)
>     at org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:382)
>     at org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
>     at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:519)
>     at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:804)
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:753)
>     at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
>     at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
>     at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
>     at java.lang.Thread.run(Thread.java:748) {code}
> 
-- This message was sent by Atlassian Jira (v8.20.1#820001)
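The check that fires here can be illustrated with a minimal, self-contained sketch. Note this is a hypothetical simplification, not the actual `RoundRobinTopicRouter` class: when topic metadata yields no partitions for an ordinary non-partitioned topic, the router is handed an empty topic list and rejects it up front.

```java
import java.util.Collections;
import java.util.List;

// Minimal sketch (hypothetical, not Flink code) of the failure mode: a
// non-partitioned topic such as 'test_topic' yields no "-partition-i"
// entries, so the router sees an empty list and throws.
class RoundRobinRouterSketch {

    static String route(List<String> partitions, int counter) {
        if (partitions.isEmpty()) {
            throw new IllegalArgumentException(
                    "You should provide topics for routing topic by message key hash.");
        }
        // Round-robin over the discovered partition names.
        return partitions.get(counter % partitions.size());
    }

    public static void main(String[] args) {
        // Partitioned topic: partitions were enumerated, routing works.
        List<String> partitioned = List.of("t-partition-0", "t-partition-1");
        System.out.println(route(partitioned, 3)); // counter 3 % 2 -> t-partition-1

        // Non-partitioned topic: empty partition list, router throws.
        try {
            route(Collections.emptyList(), 0);
        } catch (IllegalArgumentException e) {
            System.out.println("Failed: " + e.getMessage());
        }
    }
}
```

Under this reading, a fix would treat a topic without `-partition-i` suffixes as a single routable topic instead of an empty set.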
[GitHub] [flink] flinkbot edited a comment on pull request #18386: [FLINK-25684][table] Support enhanced `show databases` syntax
flinkbot edited a comment on pull request #18386: URL: https://github.com/apache/flink/pull/18386#issuecomment-1015100174 ## CI report: * fd8d787ed420aea973486d1e98af4670e266f30f Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33069) * c53142c8b103fd6da55bb95d600d59a01e1f3198 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33077) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-table-store] leonardBang commented on a change in pull request #44: [hotfix][docs] Remove unnecessary html link tags and fix typo
leonardBang commented on a change in pull request #44: URL: https://github.com/apache/flink-table-store/pull/44#discussion_r826693355 ## File path: README.md ## @@ -1,19 +1,19 @@ -# FLink Table Store +# Flink Table Store Flink Table Store is a unified streaming and batch store for building dynamic tables on Apache Flink. Flink Table Store is developed under the umbrella of [Apache Flink](https://flink.apache.org/). -## Building the Project Review comment: In fact you do not need this; `Building the Project` is not too long. Alternatively, you could use `Build` as the title for consistency with the other titles. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-26646) Flink kubernetes operator helm template is broken
[ https://issues.apache.org/jira/browse/FLINK-26646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506778#comment-17506778 ] Gyula Fora commented on FLINK-26646: yes, let's do that and then we can avoid using eq > Flink kubernetes operator helm template is broken > - > > Key: FLINK-26646 > URL: https://issues.apache.org/jira/browse/FLINK-26646 > Project: Flink > Issue Type: Bug > Components: Kubernetes Operator >Reporter: Yang Wang >Priority: Major > > {code:java} > wangyang-pc:flink-kubernetes-operator danrtsey.wy$ helm install > flink-operator helm/flink-operator --set > image.repository=wangyang09180523/flink-java-operator --set image.tag=latest > --set metrics.port= > Error: template: flink-operator/templates/flink-operator.yaml:143:12: > executing "flink-operator/templates/flink-operator.yaml" at (.Values.operatorConfiguration).append false>: error calling eq: incompatible > types for comparison {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink-table-store] leonardBang commented on a change in pull request #44: [hotfix][docs] Remove unnecessary html link tags and fix typo
leonardBang commented on a change in pull request #44: URL: https://github.com/apache/flink-table-store/pull/44#discussion_r826694081 ## File path: README.md ## @@ -1,19 +1,19 @@ -# FLink Table Store +# Flink Table Store Flink Table Store is a unified streaming and batch store for building dynamic tables on Apache Flink. Flink Table Store is developed under the umbrella of [Apache Flink](https://flink.apache.org/). -## Building the Project +## Building the Project Run the `mvn clean package` command. Then you will find a JAR file that contains your application, plus any libraries that you may have added as dependencies to the application: `target/-.jar`. -## Contributing +## Contributing You can learn more about how to contribute on the [Apache Flink website](https://flink.apache.org/contributing/how-to-contribute.html). For code contributions, please read carefully the [Contributing Code](https://flink.apache.org/contributing/contribute-code.html) section for an overview of ongoing community work. -## License +## License -The code in this repository is licensed under the [Apache Software License 2](LICENSE). Review comment: Let's just keep the README consistent with our license.  -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] rkhachatryan commented on a change in pull request #19079: [FLINK-26573][test] Do not resolve the metadata file which is in prog…
rkhachatryan commented on a change in pull request #19079: URL: https://github.com/apache/flink/pull/19079#discussion_r826697410 ## File path: flink-tests/src/test/java/org/apache/flink/test/checkpointing/ChangelogPeriodicMaterializationTestBase.java ## @@ -234,6 +235,25 @@ protected JobID generateJobID() { return Collections.emptySet(); } +private static Optional getMostRecentCompletedCheckpointMetadata( +File checkpointFolder) throws IOException { +try { Review comment: Thanks for the fix @masteryhx. I think it's correct. However, it only solves the problem for the current test. There are a couple more tests using the same code. Maybe it makes sense to fix it in `TestUtils`? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-26642) Pulsar sink fails with non-partitioned topic
[ https://issues.apache.org/jira/browse/FLINK-26642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506782#comment-17506782 ] goldenyang commented on FLINK-26642: [~affe] Thanks. We tried to use this Pulsar sink and prepared a fix after encountering this problem. Please take a look and see whether the modification is acceptable. > Pulsar sink fails with non-partitioned topic > > > Key: FLINK-26642 > URL: https://issues.apache.org/jira/browse/FLINK-26642 > Project: Flink > Issue Type: Bug > Components: Connectors / Pulsar >Affects Versions: 1.15.0 >Reporter: goldenyang >Priority: Major > Labels: pull-request-available > Original Estimate: 72h > Remaining Estimate: 72h > > Flink now supports a Pulsar sink ([FLINK-20732|https://issues.apache.org/jira/browse/FLINK-20732]). I encountered a problem on the master branch when using the Pulsar sink with a non-partitioned topic. > Testing so far shows that partitioned topics, and non-partitioned topics whose names end with -partition-i, both work; ordinary non-partitioned topics without the -partition-i suffix, such as 'test_topic', fail. > Reproducing the problem requires writing to a non-partitioned topic. Below is the stack trace from the failure. I briefly discussed this with [~Jianyun Zhao]; this may be a bug. > > {code:java} > 2022-03-08 21:39:13,622 - INFO - [flink-akka.actor.default-dispatcher-13:Execution@1419] - Source: Pulsar Source -> (Sink: Writer -> Sink: Committer, Sink: Print to Std.
> Out) (1/6) (44af5e8a2b9d553952c7ed3e5d40e672) switched from RUNNING to FAILED on 54284e57-42a9-4e2e-9c41-54b0ad559832 @ 127.0.0.1 (dataPort=-1).
> java.lang.IllegalArgumentException: You should provide topics for routing topic by message key hash.
> [stack trace identical to the one quoted in the issue description above] {code}
> 
-- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (FLINK-24960) YARNSessionCapacitySchedulerITCase.testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots ha
[ https://issues.apache.org/jira/browse/FLINK-24960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506780#comment-17506780 ] Niklas Semmler commented on FLINK-24960: [~mapohl] and I debugged this some more. It looks like the external address of the rest server is set by the [YarnClusterDescriptor|https://github.com/apache/flink/blob/c16e4b4ce20704a0ad4387591894f13105d5e530/flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java#L1801]. In short, the MiniYarnCluster propagates the YarnClusterDescriptor into the execution process of the RestClusterClient. The address is then set via leader retrieval, but there is no actual leader retrieval taking place. Instead, the StandaloneHaServices returns the preconfigured rest server address. In principle, this should never return a "localhost" address. To better debug future scenarios of this bug, we are adding a PR that ensures that the log line in the code above is always printed. If this returns "localhost", then something is going wrong with the address the YARN application report includes. If, instead, it returns an external address but the RestClusterClient still cannot connect, then we missed another place where this property is set. Finally, if the log line does not appear at all, then we need to figure out whether there is yet another code path. 
> YARNSessionCapacitySchedulerITCase.testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots > hangs on azure > --- > > Key: FLINK-24960 > URL: https://issues.apache.org/jira/browse/FLINK-24960 > Project: Flink > Issue Type: Bug > Components: Deployment / YARN >Affects Versions: 1.15.0, 1.14.3 >Reporter: Yun Gao >Assignee: Niklas Semmler >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.15.0 > > > {code:java} > Nov 18 22:37:08 > > Nov 18 22:37:08 Test > testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots(org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase) > is running. > Nov 18 22:37:08 > > Nov 18 22:37:25 22:37:25,470 [main] INFO > org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase [] - Extracted > hostname:port: 5718b812c7ab:38622 > Nov 18 22:52:36 > == > Nov 18 22:52:36 Process produced no output for 900 seconds. > Nov 18 22:52:36 > == > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=26722&view=logs&j=f450c1a5-64b1-5955-e215-49cb1ad5ec88&t=cc452273-9efa-565d-9db8-ef62a38a0c10&l=36395 -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink-table-store] leonardBang commented on a change in pull request #44: [hotfix][docs] Remove unnecessary html link tags and fix typo
leonardBang commented on a change in pull request #44: URL: https://github.com/apache/flink-table-store/pull/44#discussion_r826700940 ## File path: README.md ## @@ -1,19 +1,19 @@ -# FLink Table Store +# Flink Table Store Flink Table Store is a unified streaming and batch store for building dynamic tables on Apache Flink. Flink Table Store is developed under the umbrella of [Apache Flink](https://flink.apache.org/). -## Building the Project Review comment: I make it shorter, sir -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18956: [FLINK-26394] CheckpointCoordinator.isTriggering can not be reset in a particular case.
flinkbot edited a comment on pull request #18956: URL: https://github.com/apache/flink/pull/18956#issuecomment-1056194247 ## CI report: * 0563fbb7c06b4e3026bedd9786cb8abf97428211 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33063) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18983: [FLINK-25543][flink-yarn] [JUnit5 Migration] Module: flink-yarn
flinkbot edited a comment on pull request #18983: URL: https://github.com/apache/flink/pull/18983#issuecomment-1059740356 ## CI report: * 3cfb40cab5eb51b3a53f758e91455c0fea3017d5 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33061) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18854: [FLINK-25648][Kubernetes] Avoid redundant to query Kubernetes deployment when creating task manager pods
flinkbot edited a comment on pull request #18854: URL: https://github.com/apache/flink/pull/18854#issuecomment-1046285232 ## CI report: * 44decf8e7f841b433e542467d81996ee8a8daa84 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33048) * 05d17ba7cd8bed907cb632c186e3dd420be1db13 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-table-store] leonardBang commented on a change in pull request #44: [hotfix][docs] Remove unnecessary html link tags and fix typo
leonardBang commented on a change in pull request #44: URL: https://github.com/apache/flink-table-store/pull/44#discussion_r826701551 ## File path: README.md ## @@ -1,19 +1,19 @@ -# FLink Table Store +# Flink Table Store Flink Table Store is a unified streaming and batch store for building dynamic tables on Apache Flink. Flink Table Store is developed under the umbrella of [Apache Flink](https://flink.apache.org/). -## Building the Project +## Building the Project Run the `mvn clean package` command. Then you will find a JAR file that contains your application, plus any libraries that you may have added as dependencies to the application: `target/-.jar`. -## Contributing +## Contributing You can learn more about how to contribute on the [Apache Flink website](https://flink.apache.org/contributing/how-to-contribute.html). For code contributions, please read carefully the [Contributing Code](https://flink.apache.org/contributing/contribute-code.html) section for an overview of ongoing community work. -## License +## License -The code in this repository is licensed under the [Apache Software License 2](LICENSE). Review comment: Well, I think ASL is fine with me, too. I'll revert the change. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-25916) Using upsert-kafka with a flush buffer results in Null Pointer Exception
[ https://issues.apache.org/jira/browse/FLINK-25916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506786#comment-17506786 ] Yao Zhang commented on FLINK-25916: --- Hi all, I think one potential cause might be that the timestamp of a record in the reduce buffer is null. A null value is legal for the Long type, but unboxing it throws an NPE. ReducingUpsertWriter::setTimestamp accepts a timestamp of primitive long type:
{code:java}
public void setTimestamp(long timestamp) {
    this.timestamp = timestamp;
}
{code}
I tracked the call stack and finally reached SinkWriterOperator.Context::timestamp:
{code:java}
@Override
public Long timestamp() {
    if (element.hasTimestamp()
            && element.getTimestamp() != TimestampAssigner.NO_TIMESTAMP) {
        return element.getTimestamp();
    }
    return null;
}
{code}
The timestamp can be null if the element (StreamRecord) does not have a timestamp. Correct me if I am wrong. > Using upsert-kafka with a flush buffer results in Null Pointer Exception > > > Key: FLINK-25916 > URL: https://issues.apache.org/jira/browse/FLINK-25916 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka, Table SQL / Runtime >Affects Versions: 1.15.0, 1.14.3 > Environment: CentOS 7.9 x64 > Intel Xeon Gold 6140 CPU >Reporter: Corey Shaw >Priority: Critical > > Flink Version: 1.14.3 > upsert-kafka version: 1.14.3 > > I have been trying to buffer output from the upsert-kafka connector using the > documented parameters {{sink.buffer-flush.max-rows}} and > {{sink.buffer-flush.interval}} > Whenever I attempt to run an INSERT query with buffering, I receive the > following error (shortened for brevity): > {code:java} > Caused by: java.lang.NullPointerException > at > org.apache.flink.streaming.connectors.kafka.table.ReducingUpsertWriter.flush(ReducingUpsertWriter.java:145) > > at > org.apache.flink.streaming.connectors.kafka.table.ReducingUpsertWriter.lambda$registerFlush$3(ReducingUpsertWriter.java:124) > > at > 
org.apache.flink.streaming.runtime.tasks.StreamTask.invokeProcessingTimeCallback(StreamTask.java:1693) > > at > org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$null$22(StreamTask.java:1684) > > at > org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50) > > at > org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90) > at > org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMailsWhenDefaultActionUnavailable(MailboxProcessor.java:338) > > at > org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:324) > > at > org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:201) > > at > org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:809) > > at > org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:761) > > at > org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958) > > at > org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937) > at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766) > at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575) > at java.lang.Thread.run(Thread.java:829) [?:?] {code} > > If I remove the parameters related to flush buffering, then everything works > as expected with no problems at all. For reference, here is the full setup > with source, destination, and queries. Yes, I realize the INSERT could use > an overhaul, but that's not the issue at hand :). 
> {code:java} > CREATE TABLE `source_topic` ( > `timeGMT` INT, > `eventtime` AS TO_TIMESTAMP(FROM_UNIXTIME(`timeGMT`)), > `visIdHigh` BIGINT, > `visIdLow` BIGINT, > `visIdStr` AS CONCAT(IF(`visIdHigh` IS NULL, '', CAST(`visIdHigh` AS > STRING)), IF(`visIdLow` IS NULL, '', CAST(`visIdLow` AS STRING))), > WATERMARK FOR eventtime AS eventtime - INTERVAL '25' SECONDS > ) WITH ( > 'connector' = 'kafka', > 'properties.group.id' = 'flink_metrics', > 'properties.bootstrap.servers' = 'brokers.example.com:9093', > 'topic' = 'source_topic', > 'scan.startup.mode' = 'earliest-offset', > 'value.format' = 'avro-confluent', > 'value.avro-confluent.url' = 'http://schema.example.com', > 'value.fields-include' = 'EXCEPT_KEY' > ); > CREATE TABLE dest_topic ( > `messageType` VARCHAR, > `observationID` BIGINT, > `obsYear` BIGINT, > `obsMonth` BIGINT, > `obsDay` BIGINT, > `obsHour` BIGI
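The unboxing hazard described in the comment above can be reproduced in isolation. This is a generic Java sketch with hypothetical class names, not Flink code: a nullable `Long` returned by a timestamp accessor is auto-unboxed into a primitive `long` parameter, which throws an NPE at the call site when the value is null.

```java
// Generic sketch (hypothetical, not Flink code) of the suspected failure:
// auto-unboxing a null Long into a primitive long throws a NullPointerException.
class UnboxingNpeSketch {
    private long timestamp;

    // Primitive parameter: any Long argument is auto-unboxed here.
    void setTimestamp(long ts) {
        this.timestamp = ts;
    }

    // Mirrors the shape of the Context::timestamp() accessor: may return null.
    static Long timestampOf(boolean hasTimestamp) {
        return hasTimestamp ? Long.valueOf(1647333553622L) : null;
    }

    public static void main(String[] args) {
        UnboxingNpeSketch writer = new UnboxingNpeSketch();
        writer.setTimestamp(timestampOf(true)); // fine: non-null Long is unboxed

        try {
            writer.setTimestamp(timestampOf(false)); // null unboxed -> NPE
        } catch (NullPointerException e) {
            System.out.println("NPE on unboxing a null timestamp");
        }
    }
}
```

A defensive fix along these lines would keep the timestamp boxed (a `Long` field) or null-check before calling the primitive-typed setter.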
[GitHub] [flink] flinkbot edited a comment on pull request #19040: [FLINK-26548][runtime] set the source parallelism correctly when using legacy file sources with AdaptiveBatcheScheduler
flinkbot edited a comment on pull request #19040: URL: https://github.com/apache/flink/pull/19040#issuecomment-1064026493 ## CI report: * 5d2fa164cd0bc5b51729c96f47e5a2bc1dbdd703 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33060) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #19082: [FLINK-26596][BP-1.14][runtime][test] Adds leadership loss handling
flinkbot edited a comment on pull request #19082: URL: https://github.com/apache/flink/pull/19082#issuecomment-1066919434 ## CI report: * 8e513a60edab58ca4b28ea3dd0c686a48f38f8c4 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33030) * 749c4b387cdfdaeca158f33706b51de842e1796f Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33081) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #19089: [FLINK-26607][python] There are multiple MAX_LONG_VALUE value errors in pyflink code
flinkbot edited a comment on pull request #19089: URL: https://github.com/apache/flink/pull/19089#issuecomment-1067544609 ## CI report: * 103331d5386317a184e7f363fa490a43842b8915 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33073) * 36faf315257032ac88e6bbb103e22b60409dff7e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33082) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] link3280 commented on pull request #19070: [FLINK-26618][sql-client] Fix Remove-Jar statement not effective
link3280 commented on pull request #19070: URL: https://github.com/apache/flink/pull/19070#issuecomment-1067705775 @flinkbot run azure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-25528) support state processor api to create native savepoint
[ https://issues.apache.org/jira/browse/FLINK-25528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Tang updated FLINK-25528: - Summary: support state processor api to create native savepoint (was: support state processor api to create checkpoint) > support state processor api to create native savepoint > -- > > Key: FLINK-25528 > URL: https://issues.apache.org/jira/browse/FLINK-25528 > Project: Flink > Issue Type: Improvement > Components: API / State Processor, Runtime / State Backends >Reporter: 刘方奇 >Assignee: 刘方奇 >Priority: Major > Labels: pull-request-available > Fix For: 1.16.0 > > > As the title says, in the state processor API we snapshot state with the savepoint option by default in org.apache.flink.state.api.output.SnapshotUtils. > But in many cases we may need to snapshot state incrementally, which has better performance than a savepoint. > Shall we add a config option to choose the checkpoint type, just like the Flink stream backend? -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (FLINK-25528) support state processor api to create checkpoint
[ https://issues.apache.org/jira/browse/FLINK-25528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Tang resolved FLINK-25528. -- Resolution: Fixed Merged in master: a0d31c5e0914d8e759917a72ca7b667d3db2f1d2 > support state processor api to create checkpoint > > > Key: FLINK-25528 > URL: https://issues.apache.org/jira/browse/FLINK-25528 > Project: Flink > Issue Type: Improvement > Components: API / State Processor, Runtime / State Backends >Reporter: 刘方奇 >Assignee: 刘方奇 >Priority: Major > Labels: pull-request-available > Fix For: 1.16.0 > > > As the title says, in the state processor API we snapshot state with the savepoint option by default in org.apache.flink.state.api.output.SnapshotUtils. > But in many cases we may need to snapshot state incrementally, which has better performance than a savepoint. > Shall we add a config option to choose the checkpoint type, just like the Flink stream backend? -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] rkhachatryan commented on a change in pull request #19051: [FLINK-26063][state/changelog] Compute keys of the removed PQ elements
rkhachatryan commented on a change in pull request #19051: URL: https://github.com/apache/flink/pull/19051#discussion_r826709921 ## File path: flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/AbstractStateChangeLogger.java ## @@ -145,9 +145,17 @@ protected void log( @Nullable ThrowingConsumer dataWriter, Ns ns) throws IOException { +log(op, dataWriter, ns, keyContext.getCurrentKeyGroupIndex()); +} + +protected void log( +StateChangeOperation op, +@Nullable ThrowingConsumer dataWriter, +Ns ns, +int keyGroup) Review comment: Indeed, your proposal is much simpler. However, I think it has several drawbacks: 1. There could be other places where `eventTimeTimersQueue.poll()` or `remove` is called. So it must be a (javadoc) contract that `setCurrentKey` is required before `poll`. 2. Following the above contract, `setCurrentKey` should also be called on recovery (by `PriorityQueueStateChangeApplier` - similar to `KvStateChangeApplier`). -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18386: [FLINK-25684][table] Support enhanced `show databases` syntax
flinkbot edited a comment on pull request #18386: URL: https://github.com/apache/flink/pull/18386#issuecomment-1015100174 ## CI report: * c53142c8b103fd6da55bb95d600d59a01e1f3198 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33077) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #18854: [FLINK-25648][Kubernetes] Avoid redundant to query Kubernetes deployment when creating task manager pods
flinkbot edited a comment on pull request #18854: URL: https://github.com/apache/flink/pull/18854#issuecomment-1046285232 ## CI report: * 44decf8e7f841b433e542467d81996ee8a8daa84 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33048) * 05d17ba7cd8bed907cb632c186e3dd420be1db13 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33086) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[jira] [Comment Edited] (FLINK-25916) Using upsert-kafka with a flush buffer results in Null Pointer Exception
[ https://issues.apache.org/jira/browse/FLINK-25916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506786#comment-17506786 ] Yao Zhang edited comment on FLINK-25916 at 3/15/22, 8:41 AM: - Hi all, I think one potential cause might be that the timestamp of a record in reduceBuffer is null. A null value is legal for the Long type, but if it were unboxed an NPE would be thrown. ReducingUpsertWriter::setTimestamp accepts a timestamp with the primitive long type. {code:java} public void setTimestamp(long timestamp) { this.timestamp = timestamp; } {code} I tracked the call stack and finally reached SinkWriterOperator.Context::getTimestamp: {code:java} @Override public Long timestamp() { if (element.hasTimestamp() && element.getTimestamp() != TimestampAssigner.NO_TIMESTAMP) { return element.getTimestamp(); } return null; } {code} The timestamp could be null if the element (StreamRecord) did not have a timestamp. Correct me if I am wrong. was (Author: paul8263): Hi all, I think one potential point might be the timestamp of one record in reduceBuffer is null. Null value is legal for Long type but if it was unboxed a NPE would be thrown. The ReducingUpsertWriter::setTimestamp accepts a timestamp with primitive long type. {code:java} public void setTimestamp(long timestamp) { this.timestamp = timestamp; } {code} And I tracked the call stack and finally got SinkWriterOperator.Context::getTimestamp: {code:java} @Override public Long timestamp() { if (element.hasTimestamp() && element.getTimestamp() != TimestampAssigner.NO_TIMESTAMP) { return element.getTimestamp(); } return null; } {code} Timestamp could be null if the element(StreamRecord) dids not have a timestamp. Correct me if I am wrong. 
> Using upsert-kafka with a flush buffer results in Null Pointer Exception > > > Key: FLINK-25916 > URL: https://issues.apache.org/jira/browse/FLINK-25916 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka, Table SQL / Runtime >Affects Versions: 1.15.0, 1.14.3 > Environment: CentOS 7.9 x64 > Intel Xeon Gold 6140 CPU >Reporter: Corey Shaw >Priority: Critical > > Flink Version: 1.14.3 > upsert-kafka version: 1.14.3 > > I have been trying to buffer output from the upsert-kafka connector using the > documented parameters {{sink.buffer-flush.max-rows}} and > {{sink.buffer-flush.interval}} > Whenever I attempt to run an INSERT query with buffering, I receive the > following error (shortened for brevity): > {code:java} > Caused by: java.lang.NullPointerException > at > org.apache.flink.streaming.connectors.kafka.table.ReducingUpsertWriter.flush(ReducingUpsertWriter.java:145) > > at > org.apache.flink.streaming.connectors.kafka.table.ReducingUpsertWriter.lambda$registerFlush$3(ReducingUpsertWriter.java:124) > > at > org.apache.flink.streaming.runtime.tasks.StreamTask.invokeProcessingTimeCallback(StreamTask.java:1693) > > at > org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$null$22(StreamTask.java:1684) > > at > org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50) > > at > org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90) > at > org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMailsWhenDefaultActionUnavailable(MailboxProcessor.java:338) > > at > org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:324) > > at > org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:201) > > at > org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:809) > > at > 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:761) > > at > org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958) > > at > org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937) > at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766) > at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575) > at java.lang.Thread.run(Thread.java:829) [?:?] {code} > > If I remove the parameters related to flush buffering, then everything works > as expected with no problems at all. For reference, here is the full setup > with source, destination, and queries. Yes, I realize the INSERT could use > an overhaul, but that's not the issue at hand :). > {code:java} > CREATE TABLE `source_topic` ( > `timeGMT` INT, > `eventtime` AS TO_TIMESTA
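The failure mode described in the comment above can be reproduced in isolation. The following is a minimal standalone sketch, not Flink code: it only demonstrates the Java auto-unboxing behavior the analysis points at, namely that passing a null `Long` where a primitive `long` is expected throws a `NullPointerException`.

```java
// Minimal standalone demonstration (not Flink code) of the suspected failure
// mode: a null Long auto-unboxed into a primitive long parameter throws an NPE.
public class UnboxingNpeDemo {

    private long timestamp;

    // Mirrors the shape of a setter taking a primitive long,
    // like the setTimestamp method quoted above.
    public void setTimestamp(long timestamp) {
        this.timestamp = timestamp;
    }

    // Returns true if calling the primitive-long setter with this value throws an NPE.
    public static boolean throwsNpe(Long value) {
        UnboxingNpeDemo demo = new UnboxingNpeDemo();
        try {
            demo.setTimestamp(value); // auto-unboxing happens here
            return false;
        } catch (NullPointerException e) {
            return true;
        }
    }
}
```

This matches the quoted `Context::timestamp()` returning `null` for a `StreamRecord` without a timestamp: the NPE surfaces only at the unboxing call site, which is consistent with the stack trace in the issue description.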
[jira] [Comment Edited] (FLINK-25916) Using upsert-kafka with a flush buffer results in Null Pointer Exception
[ https://issues.apache.org/jira/browse/FLINK-25916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506786#comment-17506786 ] Yao Zhang edited comment on FLINK-25916 at 3/15/22, 8:41 AM: - Hi all, I think one potential cause might be that the timestamp of a record in reduceBuffer is null. A null value is legal for the Long type, but if it were unboxed an NPE would be thrown. ReducingUpsertWriter::setTimestamp accepts a timestamp with the primitive long type. {code:java} public void setTimestamp(long timestamp) { this.timestamp = timestamp; } {code} I tracked the call stack and finally reached SinkWriterOperator.Context::getTimestamp: {code:java} @Override public Long timestamp() { if (element.hasTimestamp() && element.getTimestamp() != TimestampAssigner.NO_TIMESTAMP) { return element.getTimestamp(); } return null; } {code} The timestamp could be null if the element (StreamRecord) did not have a timestamp. Correct me if I am wrong. was (Author: paul8263): Hi all, I think one potential point might be the timestamp of one record in reduceBuffer is null. Null value is legal for Long type but if it was unboxed a NPE would be thrown. The ReducingUpsertWriter::setTimestamp accepts a timestamp with primitive long type. {code:java} public void setTimestamp(long timestamp) { this.timestamp = timestamp; } {code} And I tracked the call stack and finally got SinkWriterOperator.Context::getTimestamp: {code:java} @Override public Long timestamp() { if (element.hasTimestamp() && element.getTimestamp() != TimestampAssigner.NO_TIMESTAMP) { return element.getTimestamp(); } return null; } {code} Timestamp could be null if the element(StreamRecord) does not has a timestamp. Correct me if I am wrong. 
[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #44: [hotfix][docs] Remove unnecessary html link tags and fix typo
JingsongLi commented on a change in pull request #44: URL: https://github.com/apache/flink-table-store/pull/44#discussion_r826710166 ## File path: README.md ## @@ -1,19 +1,19 @@ -# FLink Table Store +# Flink Table Store Flink Table Store is a unified streaming and batch store for building dynamic tables on Apache Flink. Flink Table Store is developed under the umbrella of [Apache Flink](https://flink.apache.org/). -## Building the Project Review comment: shorter is good (:
[jira] [Assigned] (FLINK-26646) Flink kubernetes operator helm template is broken
[ https://issues.apache.org/jira/browse/FLINK-26646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gyula Fora reassigned FLINK-26646: -- Assignee: Gyula Fora > Flink kubernetes operator helm template is broken > - > > Key: FLINK-26646 > URL: https://issues.apache.org/jira/browse/FLINK-26646 > Project: Flink > Issue Type: Bug > Components: Kubernetes Operator >Reporter: Yang Wang >Assignee: Gyula Fora >Priority: Major > > {code:java} > wangyang-pc:flink-kubernetes-operator danrtsey.wy$ helm install > flink-operator helm/flink-operator --set > image.repository=wangyang09180523/flink-java-operator --set image.tag=latest > --set metrics.port= > Error: template: flink-operator/templates/flink-operator.yaml:143:12: > executing "flink-operator/templates/flink-operator.yaml" at (.Values.operatorConfiguration).append false>: error calling eq: incompatible > types for comparison {code}
[GitHub] [flink-table-store] leonardBang merged pull request #44: [hotfix][docs] Remove unnecessary html link tags and fix typo
leonardBang merged pull request #44: URL: https://github.com/apache/flink-table-store/pull/44