[jira] [Commented] (FLINK-26166) flink-runtime-web fails to compile if newline is cr lf
[ https://issues.apache.org/jira/browse/FLINK-26166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493739#comment-17493739 ]

Márton Balassi commented on FLINK-26166:
----------------------------------------

[~chesnay] unless you see any harm in it, I would prefer to have [~gaborgsomogyi]'s proposed change in, to aid the out-of-the-box experience as you suggested.

> flink-runtime-web fails to compile if newline is cr lf
> ------------------------------------------------------
>
>                 Key: FLINK-26166
>                 URL: https://issues.apache.org/jira/browse/FLINK-26166
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Web Frontend
>    Affects Versions: 1.16.0
>            Reporter: Gabor Somogyi
>            Assignee: Gabor Somogyi
>            Priority: Minor
>              Labels: pull-request-available
>
> Normally I develop on a Linux-based system but sometimes review on
> Windows-based machines. There, compilation blows up in the following way:
> {code:java}
> [INFO] d:\projects\flink\flink-runtime-web\web-dashboard\src\@types\d3-flame-graph\index.d.ts
> [INFO]   1:3  error  Delete `␍`  prettier/prettier
> ...
> {code}

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
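For context, the associated pull request (#18796) adds newline auto-detection to the Prettier setup. A hedged sketch of the kind of configuration change involved, using Prettier's `endOfLine` option; whether the PR uses exactly this setting is an assumption:

```json
{
  "endOfLine": "auto"
}
```

With `endOfLine: "auto"`, Prettier keeps whatever line ending the first line of each file already uses instead of enforcing LF, so a CRLF checkout on Windows no longer trips the `Delete ␍` lint error shown above.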
[GitHub] [flink] flinkbot edited a comment on pull request #18718: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chi…
flinkbot edited a comment on pull request #18718:
URL: https://github.com/apache/flink/pull/18718#issuecomment-1035898573

## CI report:

* 69d42893d3874932d77329e758818b743c528489 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31718)
* b263f91c0e716e39a9fd6ee6999f4d9b8fbe40b9 UNKNOWN

Bot commands

The @flinkbot bot supports the following commands:
- `@flinkbot run azure` re-run the last Azure build

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[jira] [Closed] (FLINK-26187) Chinese docs override english aliases
[ https://issues.apache.org/jira/browse/FLINK-26187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chesnay Schepler closed FLINK-26187.
------------------------------------
    Resolution: Fixed

master: de220ba263a3512673336a91e8f439696945a6f5
1.14: 03883ebb690b497d2ec421515d3f0b788323655c

> Chinese docs override english aliases
> -------------------------------------
>
>                 Key: FLINK-26187
>                 URL: https://issues.apache.org/jira/browse/FLINK-26187
>             Project: Flink
>          Issue Type: Bug
>          Components: Documentation
>            Reporter: Chesnay Schepler
>            Assignee: Chesnay Schepler
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.15.0, 1.14.4
>
> Various Chinese pages define an alias for a URL to an English page. This
> results in redirects being set up that point to the Chinese version of the
> docs.
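For context on how such a clash arises: the Flink docs are built with Hugo, where a page's front matter can declare `aliases` that generate redirect stubs at the listed URLs. A hedged sketch of the failure mode (the path and title are illustrative, not the actual offending page): if a Chinese page under `content.zh/` declares the same alias as its English counterpart, the redirect stub generated for that URL ends up targeting the Chinese page.

```yaml
# content.zh/docs/some/page.md (illustrative path)
---
title: "某个页面"
aliases:
  - /ops/some-old-url.html   # same alias as the English page, so the
                             # generated redirect now points to /zh/...
---
```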
[GitHub] [flink] zentol commented on pull request #18806: [FLINK-26105][e2e] Fixes log file extension
zentol commented on pull request #18806:
URL: https://github.com/apache/flink/pull/18806#issuecomment-1042670525

@flinkbot run azure
[GitHub] [flink-table-store] LadyForest commented on a change in pull request #20: [FLINK-26103] Introduce log store
LadyForest commented on a change in pull request #20:
URL: https://github.com/apache/flink-table-store/pull/20#discussion_r808767370

## File path: flink-table-store-core/src/main/java/org/apache/flink/table/store/log/LogOptions.java

## @@ -0,0 +1,192 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.log;
+
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+import org.apache.flink.configuration.DescribedEnum;
+import org.apache.flink.configuration.description.Description;
+import org.apache.flink.configuration.description.InlineElement;
+
+import java.time.Duration;
+
+import static org.apache.flink.configuration.description.TextElement.text;
+import static org.apache.flink.table.store.utils.OptionsUtils.formatEnumOption;
+
+/** Options for log store. */
+public class LogOptions {
+
+    public static final ConfigOption<LogStartupMode> SCAN =
+            ConfigOptions.key("scan")
+                    .enumType(LogStartupMode.class)
+                    .defaultValue(LogStartupMode.FULL)
+                    .withDescription(
+                            Description.builder()
+                                    .text("Specifies the startup mode for log consumer.")
+                                    .linebreak()
+                                    .list(formatEnumOption(LogStartupMode.FULL))
+                                    .list(formatEnumOption(LogStartupMode.LATEST))
+                                    .list(formatEnumOption(LogStartupMode.FROM_TIMESTAMP))
+                                    .build());
+
+    public static final ConfigOption<Long> SCAN_TIMESTAMP_MILLS =
+            ConfigOptions.key("scan.timestamp-millis")
+                    .longType()
+                    .noDefaultValue()
+                    .withDescription(
+                            "Optional timestamp used in case of \"from-timestamp\" scan mode");
+
+    public static final ConfigOption<Duration> RETENTION =
+            ConfigOptions.key("retention")
+                    .durationType()
+                    .noDefaultValue()
+                    .withDescription(
+                            "It means how long changes log will be kept. The default value is from the log system cluster.");
+
+    public static final ConfigOption<LogConsistency> CONSISTENCY =
+            ConfigOptions.key("consistency")
+                    .enumType(LogConsistency.class)
+                    .defaultValue(LogConsistency.TRANSACTIONAL)
+                    .withDescription(
+                            Description.builder()
+                                    .text("Specifies the log consistency mode for table.")
+                                    .linebreak()
+                                    .list(
+                                            formatEnumOption(LogConsistency.TRANSACTIONAL),
+                                            formatEnumOption(LogConsistency.EVENTUAL))
+                                    .build());
+
+    public static final ConfigOption<LogChangelogMode> CHANGELOG_MODE =
+            ConfigOptions.key("changelog-mode")
+                    .enumType(LogChangelogMode.class)
+                    .defaultValue(LogChangelogMode.AUTO)
+                    .withDescription(
+                            Description.builder()
+                                    .text("Specifies the log changelog mode for table.")
+                                    .linebreak()
+                                    .list(
+                                            formatEnumOption(LogChangelogMode.AUTO),
+                                            formatEnumOption(LogChangelogMode.ALL),
+                                            formatEnumOption(LogChangelogMode.UPSERT))
+                                    .build());
+
+    public static final ConfigOption<String> KEY_FORMAT =
+            ConfigOptions.key("key.format")
+                    .stringType()
+                    .defaultValue("json")
+                    .withDescription(
+                            "Specifies the key message format of log system with primary key.");
+
+    public static final ConfigOption FORMAT =
+            Conf
[GitHub] [flink] infoverload commented on a change in pull request #18746: [FLINK-26162][docs]revamp security pages
infoverload commented on a change in pull request #18746:
URL: https://github.com/apache/flink/pull/18746#discussion_r808767412

## File path: docs/content/docs/deployment/security/ssl.md

## @@ -0,0 +1,243 @@
+---
+title: "Encryption and Authentication using SSL"
+weight: 3
+type: docs
+aliases:
+  - /deployment/security/ssl.html
+  - /ops/security-ssl.html
+---
+
+# Encryption and Authentication using SSL
+
+Flink supports mutual authentication (when two parties authenticate each other at the same time) and
+encryption of network communication with SSL for internal and external communication.
+
+**By default, SSL/TLS authentication and encryption are not enabled** (so that the defaults work out of the box).
+
+This guide explains internal vs. external connectivity and provides instructions on how to enable
+SSL/TLS authentication and encryption for network communication with and between Flink processes. We
+will go through steps such as generating certificates, setting up TrustStores and KeyStores, and
+configuring cipher suites.
+
+For how-tos and tips for different deployment environments (i.e. standalone clusters, Kubernetes, YARN),
+check out the section on [Incorporating Security Features in a Running Cluster](#).
+
+## Internal and External Communication
+
+There are two types of network connections to authenticate and encrypt: internal and external.
+
+{{< img src="/fig/ssl_internal_external.svg" alt="Internal and External Connectivity" width=75% >}}
+
+For more flexibility, security for internal and external connectivity can be enabled and configured
+separately.
+
+### Internal Connectivity
+
+Flink internal communication refers to all connections made between Flink processes. These include:
+
+- Control messages: RPC between JobManager / TaskManager / Dispatcher / ResourceManager
+- Transfers on the data plane: connections between TaskManagers to exchange data during shuffles,
+  broadcasts, redistribution, etc.
+- Blob service communication: distribution of libraries and other artifacts
+
+All internal connections are SSL authenticated and encrypted. The connections use **mutual authentication**,
+meaning both the server and the client side of each connection need to present a certificate to each other.
+The certificate acts as a shared secret and can be embedded into container images or attached to your
+deployment setup. These connections run Flink custom protocols. Users never connect directly to internal
+connectivity endpoints.
+
+### External Connectivity
+
+Flink external communication refers to all connections made from the outside to Flink processes.
+This includes:
+
+- communication with the Dispatcher to submit Flink jobs (session clusters)
+- communication of the Flink CLI with the JobManager to inspect and modify a running Flink job/application
+
+Most of these connections are exposed via REST/HTTP endpoints (and used by the web UI). Some external
+services used as sources or sinks may use some other network protocol.
+
+The server will, by default, accept connections from any client, meaning that the REST endpoint does
+not authenticate the client. These REST endpoints, however, can be configured to require SSL encryption
+and mutual authentication.
+
+However, the recommended approach is setting up and configuring a dedicated proxy service (a "sidecar
+proxy") that controls access to the REST endpoint. This involves binding the REST endpoint to the
+loopback interface (or the pod-local interface in Kubernetes) and starting a REST proxy that authenticates
+and forwards the requests to Flink.
+
+Examples of proxies that Flink users have deployed are [Envoy Proxy](https://www.envoyproxy.io/)
+or [NGINX with MOD_AUTH](http://nginx.org/en/docs/http/ngx_http_auth_request_module.html).
+
+The rationale behind delegating authentication to a proxy is that such proxies offer a wide variety
+of authentication options and thus better integration into existing infrastructures.
+
+## Queryable State
+
+Connections to the [queryable state]({{< ref "docs/dev/datastream/fault-tolerance/queryable_state" >}})
+endpoints are currently not authenticated or encrypted.
+
+## SSL Setups
+
+{{< img src="/fig/ssl_mutual_auth.svg" alt="SSL Mutual Authentication" width=75% >}}

Review comment:
I experienced this issue locally as well and I just could not figure out what was wrong.
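The internal/external split described in the reviewed page is mirrored in Flink's configuration, where each side is enabled separately. A minimal sketch of a `flink-conf.yaml` enabling both (keystore/truststore paths and passwords are placeholders; option names follow Flink's SSL configuration reference, but verify them against the docs for your Flink version):

```yaml
# Internal connectivity (RPC, data plane, blob service): always mutual auth
security.ssl.internal.enabled: true
security.ssl.internal.keystore: /path/to/internal.keystore
security.ssl.internal.truststore: /path/to/internal.truststore
security.ssl.internal.keystore-password: changeit
security.ssl.internal.key-password: changeit
security.ssl.internal.truststore-password: changeit

# External (REST) endpoint: server-side SSL, optionally mutual auth
security.ssl.rest.enabled: true
security.ssl.rest.keystore: /path/to/rest.keystore
security.ssl.rest.truststore: /path/to/rest.truststore
security.ssl.rest.keystore-password: changeit
security.ssl.rest.key-password: changeit
security.ssl.rest.truststore-password: changeit
security.ssl.rest.authentication-enabled: true
```

For the sidecar-proxy setup described above, one would instead leave `security.ssl.rest.enabled` off and bind the REST endpoint to loopback (e.g. `rest.bind-address: 127.0.0.1`), letting the proxy handle authentication.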
[GitHub] [flink] flinkbot edited a comment on pull request #18718: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chi…
flinkbot edited a comment on pull request #18718:
URL: https://github.com/apache/flink/pull/18718#issuecomment-1035898573

## CI report:

* 69d42893d3874932d77329e758818b743c528489 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31718)
[jira] [Created] (FLINK-26210) PulsarSourceUnorderedE2ECase failed on azure due to multiple causes
Yun Gao created FLINK-26210:
-------------------------------

             Summary: PulsarSourceUnorderedE2ECase failed on azure due to multiple causes
                 Key: FLINK-26210
                 URL: https://issues.apache.org/jira/browse/FLINK-26210
             Project: Flink
          Issue Type: Bug
          Components: Connectors / Pulsar
    Affects Versions: 1.15.0
            Reporter: Yun Gao

{code:java}
Feb 17 04:58:33 [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 85.664 s <<< FAILURE! - in org.apache.flink.tests.util.pulsar.PulsarSourceUnorderedE2ECase
Feb 17 04:58:33 [ERROR] org.apache.flink.tests.util.pulsar.PulsarSourceUnorderedE2ECase.testOneSplitWithMultipleConsumers(TestEnvironment, DataStreamSourceExternalContext)[1]  Time elapsed: 0.571 s  <<< ERROR!
Feb 17 04:58:33 org.apache.pulsar.client.admin.PulsarAdminException$GettingAuthenticationDataException:
Feb 17 04:58:33 java.util.concurrent.ExecutionException: org.apache.pulsar.client.admin.PulsarAdminException$GettingAuthenticationDataException: A MultiException has 2 exceptions. They are:
Feb 17 04:58:33 1. java.lang.NoClassDefFoundError: javax/xml/bind/annotation/XmlElement
Feb 17 04:58:33 2. java.lang.IllegalStateException: Unable to perform operation: create on org.apache.pulsar.shade.org.glassfish.jersey.jackson.internal.DefaultJacksonJaxbJsonProvider
Feb 17 04:58:33
Feb 17 04:58:33 	at org.apache.pulsar.client.admin.internal.BaseResource.request(BaseResource.java:70)
Feb 17 04:58:33 	at org.apache.pulsar.client.admin.internal.BaseResource.asyncPutRequest(BaseResource.java:120)
Feb 17 04:58:33 	at org.apache.pulsar.client.admin.internal.TopicsImpl.createPartitionedTopicAsync(TopicsImpl.java:430)
Feb 17 04:58:33 	at org.apache.pulsar.client.admin.internal.TopicsImpl.createPartitionedTopicAsync(TopicsImpl.java:421)
Feb 17 04:58:33 	at org.apache.pulsar.client.admin.internal.TopicsImpl.createPartitionedTopic(TopicsImpl.java:373)
Feb 17 04:58:33 	at org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.lambda$createPartitionedTopic$11(PulsarRuntimeOperator.java:504)
Feb 17 04:58:33 	at org.apache.flink.connector.pulsar.common.utils.PulsarExceptionUtils.sneaky(PulsarExceptionUtils.java:60)
Feb 17 04:58:33 	at org.apache.flink.connector.pulsar.common.utils.PulsarExceptionUtils.sneakyAdmin(PulsarExceptionUtils.java:50)
Feb 17 04:58:33 	at org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.createPartitionedTopic(PulsarRuntimeOperator.java:504)
Feb 17 04:58:33 	at org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.createTopic(PulsarRuntimeOperator.java:184)
Feb 17 04:58:33 	at org.apache.flink.tests.util.pulsar.cases.KeySharedSubscriptionContext.createSourceSplitDataWriter(KeySharedSubscriptionContext.java:111)
Feb 17 04:58:33 	at org.apache.flink.tests.util.pulsar.common.UnorderedSourceTestSuiteBase.testOneSplitWithMultipleConsumers(UnorderedSourceTestSuiteBase.java:73)
Feb 17 04:58:33 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Feb 17 04:58:33 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Feb 17 04:58:33 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Feb 17 04:58:33 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
Feb 17 04:58:33 	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
Feb 17 04:58:33 	at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
Feb 17 04:58:33 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
Feb 17 04:58:33 	at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
Feb 17 04:58:33 	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
{code}

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31702&view=logs&j=6e8542d7-de38-5a33-4aca-458d6c87066d&t=5846934b-7a4f-545b-e5b0-eb4d8bda32e1&l=15537
[jira] [Created] (FLINK-26211) PulsarSourceUnorderedE2ECase failed on azure due to multiple causes
Yun Gao created FLINK-26211:
-------------------------------

             Summary: PulsarSourceUnorderedE2ECase failed on azure due to multiple causes
                 Key: FLINK-26211
                 URL: https://issues.apache.org/jira/browse/FLINK-26211
             Project: Flink
          Issue Type: Bug
          Components: Connectors / Pulsar
    Affects Versions: 1.15.0
            Reporter: Yun Gao

{code:java}
Feb 17 04:58:33 [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 85.664 s <<< FAILURE! - in org.apache.flink.tests.util.pulsar.PulsarSourceUnorderedE2ECase
Feb 17 04:58:33 [ERROR] org.apache.flink.tests.util.pulsar.PulsarSourceUnorderedE2ECase.testOneSplitWithMultipleConsumers(TestEnvironment, DataStreamSourceExternalContext)[1]  Time elapsed: 0.571 s  <<< ERROR!
Feb 17 04:58:33 org.apache.pulsar.client.admin.PulsarAdminException$GettingAuthenticationDataException:
Feb 17 04:58:33 java.util.concurrent.ExecutionException: org.apache.pulsar.client.admin.PulsarAdminException$GettingAuthenticationDataException: A MultiException has 2 exceptions. They are:
Feb 17 04:58:33 1. java.lang.NoClassDefFoundError: javax/xml/bind/annotation/XmlElement
Feb 17 04:58:33 2. java.lang.IllegalStateException: Unable to perform operation: create on org.apache.pulsar.shade.org.glassfish.jersey.jackson.internal.DefaultJacksonJaxbJsonProvider
Feb 17 04:58:33
Feb 17 04:58:33 	at org.apache.pulsar.client.admin.internal.BaseResource.request(BaseResource.java:70)
Feb 17 04:58:33 	at org.apache.pulsar.client.admin.internal.BaseResource.asyncPutRequest(BaseResource.java:120)
Feb 17 04:58:33 	at org.apache.pulsar.client.admin.internal.TopicsImpl.createPartitionedTopicAsync(TopicsImpl.java:430)
Feb 17 04:58:33 	at org.apache.pulsar.client.admin.internal.TopicsImpl.createPartitionedTopicAsync(TopicsImpl.java:421)
Feb 17 04:58:33 	at org.apache.pulsar.client.admin.internal.TopicsImpl.createPartitionedTopic(TopicsImpl.java:373)
Feb 17 04:58:33 	at org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.lambda$createPartitionedTopic$11(PulsarRuntimeOperator.java:504)
Feb 17 04:58:33 	at org.apache.flink.connector.pulsar.common.utils.PulsarExceptionUtils.sneaky(PulsarExceptionUtils.java:60)
Feb 17 04:58:33 	at org.apache.flink.connector.pulsar.common.utils.PulsarExceptionUtils.sneakyAdmin(PulsarExceptionUtils.java:50)
Feb 17 04:58:33 	at org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.createPartitionedTopic(PulsarRuntimeOperator.java:504)
Feb 17 04:58:33 	at org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.createTopic(PulsarRuntimeOperator.java:184)
Feb 17 04:58:33 	at org.apache.flink.tests.util.pulsar.cases.KeySharedSubscriptionContext.createSourceSplitDataWriter(KeySharedSubscriptionContext.java:111)
Feb 17 04:58:33 	at org.apache.flink.tests.util.pulsar.common.UnorderedSourceTestSuiteBase.testOneSplitWithMultipleConsumers(UnorderedSourceTestSuiteBase.java:73)
Feb 17 04:58:33 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Feb 17 04:58:33 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Feb 17 04:58:33 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Feb 17 04:58:33 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
Feb 17 04:58:33 	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
Feb 17 04:58:33 	at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
Feb 17 04:58:33 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
Feb 17 04:58:33 	at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
Feb 17 04:58:33 	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
{code}

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31702&view=logs&j=6e8542d7-de38-5a33-4aca-458d6c87066d&t=5846934b-7a4f-545b-e5b0-eb4d8bda32e1&l=15537
[GitHub] [flink] mbalassi commented on pull request #18796: [FLINK-26166][runtime-web] Add auto newline detection to prettier formatter
mbalassi commented on pull request #18796:
URL: https://github.com/apache/flink/pull/18796#issuecomment-1042673130

Hi @gaborgsomogyi! The output on Linux via the Azure CI looks good. I will run a build on Mac; however, I do not have Windows handy, so I might give you a call so you can show me how it looks there.
[GitHub] [flink] flinkbot edited a comment on pull request #18746: [FLINK-26162][docs]revamp security pages
flinkbot edited a comment on pull request #18746:
URL: https://github.com/apache/flink/pull/18746#issuecomment-1038829702

## CI report:

* c4f76a841937d005508c859a643c6ac4c0fcbf7f Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31547)
* bc4c3c9c52616fe913872e5396bcef703c9ce45e UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #18769: [FLINK-25188][python][build] Support m1 chip.
flinkbot edited a comment on pull request #18769:
URL: https://github.com/apache/flink/pull/18769#issuecomment-1039977246

## CI report:

* 7b021db85f9d61abd8e550d9dbaaf26d42b82e56 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31716)
[GitHub] [flink-table-store] LadyForest commented on a change in pull request #20: [FLINK-26103] Introduce log store
LadyForest commented on a change in pull request #20:
URL: https://github.com/apache/flink-table-store/pull/20#discussion_r808768117

## File path: flink-table-store-core/src/main/java/org/apache/flink/table/store/log/LogOptions.java

## @@ -0,0 +1,192 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.log;
+
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+import org.apache.flink.configuration.DescribedEnum;
+import org.apache.flink.configuration.description.Description;
+import org.apache.flink.configuration.description.InlineElement;
+
+import java.time.Duration;
+
+import static org.apache.flink.configuration.description.TextElement.text;
+import static org.apache.flink.table.store.utils.OptionsUtils.formatEnumOption;
+
+/** Options for log store. */
+public class LogOptions {
+
+    public static final ConfigOption<LogStartupMode> SCAN =
+            ConfigOptions.key("scan")
+                    .enumType(LogStartupMode.class)
+                    .defaultValue(LogStartupMode.FULL)
+                    .withDescription(
+                            Description.builder()
+                                    .text("Specifies the startup mode for log consumer.")

Review comment:
Nit: use third-person singular pronouns?
[GitHub] [flink] flinkbot edited a comment on pull request #18806: [FLINK-26105][e2e] Fixes log file extension
flinkbot edited a comment on pull request #18806:
URL: https://github.com/apache/flink/pull/18806#issuecomment-1041656779

## CI report:

* ea2f81891557df0fa6ce6cba818a37a18c011c6f UNKNOWN
[GitHub] [flink] fapaul commented on pull request #18784: [FLINK-25941][streaming] Only emit committables with Long.MAX_VALUE as checkpoint id in batch mode
fapaul commented on pull request #18784:
URL: https://github.com/apache/flink/pull/18784#issuecomment-1042674007

@flinkbot run azure
[jira] [Updated] (FLINK-26211) PulsarSourceUnorderedE2ECase failed on azure due to multiple causes
[ https://issues.apache.org/jira/browse/FLINK-26211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yun Gao updated FLINK-26211:
----------------------------
    Labels: test-stability  (was: )

> PulsarSourceUnorderedE2ECase failed on azure due to multiple causes
> -------------------------------------------------------------------
>
>                 Key: FLINK-26211
>                 URL: https://issues.apache.org/jira/browse/FLINK-26211
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Pulsar
>    Affects Versions: 1.15.0
>            Reporter: Yun Gao
>            Priority: Major
>              Labels: test-stability
>
> {code:java}
> Feb 17 04:58:33 [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 85.664 s <<< FAILURE! - in org.apache.flink.tests.util.pulsar.PulsarSourceUnorderedE2ECase
> Feb 17 04:58:33 [ERROR] org.apache.flink.tests.util.pulsar.PulsarSourceUnorderedE2ECase.testOneSplitWithMultipleConsumers(TestEnvironment, DataStreamSourceExternalContext)[1]  Time elapsed: 0.571 s  <<< ERROR!
> Feb 17 04:58:33 org.apache.pulsar.client.admin.PulsarAdminException$GettingAuthenticationDataException:
> Feb 17 04:58:33 java.util.concurrent.ExecutionException: org.apache.pulsar.client.admin.PulsarAdminException$GettingAuthenticationDataException: A MultiException has 2 exceptions. They are:
> Feb 17 04:58:33 1. java.lang.NoClassDefFoundError: javax/xml/bind/annotation/XmlElement
> Feb 17 04:58:33 2. java.lang.IllegalStateException: Unable to perform operation: create on org.apache.pulsar.shade.org.glassfish.jersey.jackson.internal.DefaultJacksonJaxbJsonProvider
> Feb 17 04:58:33
> Feb 17 04:58:33 	at org.apache.pulsar.client.admin.internal.BaseResource.request(BaseResource.java:70)
> Feb 17 04:58:33 	at org.apache.pulsar.client.admin.internal.BaseResource.asyncPutRequest(BaseResource.java:120)
> Feb 17 04:58:33 	at org.apache.pulsar.client.admin.internal.TopicsImpl.createPartitionedTopicAsync(TopicsImpl.java:430)
> Feb 17 04:58:33 	at org.apache.pulsar.client.admin.internal.TopicsImpl.createPartitionedTopicAsync(TopicsImpl.java:421)
> Feb 17 04:58:33 	at org.apache.pulsar.client.admin.internal.TopicsImpl.createPartitionedTopic(TopicsImpl.java:373)
> Feb 17 04:58:33 	at org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.lambda$createPartitionedTopic$11(PulsarRuntimeOperator.java:504)
> Feb 17 04:58:33 	at org.apache.flink.connector.pulsar.common.utils.PulsarExceptionUtils.sneaky(PulsarExceptionUtils.java:60)
> Feb 17 04:58:33 	at org.apache.flink.connector.pulsar.common.utils.PulsarExceptionUtils.sneakyAdmin(PulsarExceptionUtils.java:50)
> Feb 17 04:58:33 	at org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.createPartitionedTopic(PulsarRuntimeOperator.java:504)
> Feb 17 04:58:33 	at org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.createTopic(PulsarRuntimeOperator.java:184)
> Feb 17 04:58:33 	at org.apache.flink.tests.util.pulsar.cases.KeySharedSubscriptionContext.createSourceSplitDataWriter(KeySharedSubscriptionContext.java:111)
> Feb 17 04:58:33 	at org.apache.flink.tests.util.pulsar.common.UnorderedSourceTestSuiteBase.testOneSplitWithMultipleConsumers(UnorderedSourceTestSuiteBase.java:73)
> Feb 17 04:58:33 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Feb 17 04:58:33 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Feb 17 04:58:33 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Feb 17 04:58:33 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> Feb 17 04:58:33 	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
> Feb 17 04:58:33 	at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> Feb 17 04:58:33 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
> Feb 17 04:58:33 	at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
> Feb 17 04:58:33 	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31702&view=logs&j=6e8542d7-de38-5a33-4aca-458d6c87066d&t=5846934b-7a4f-545b-e5b0-eb4d8bda32e1&l=15537
[GitHub] [flink] XComp commented on a change in pull request #18806: [FLINK-26105][e2e] Fixes log file extension
XComp commented on a change in pull request #18806: URL: https://github.com/apache/flink/pull/18806#discussion_r808769310 ## File path: flink-end-to-end-tests/test-scripts/common_ha.sh ## @@ -49,7 +49,7 @@ function verify_num_occurences_in_logs() { local text="$2" local expected_no="$3" -local actual_no=$(grep -r --include "*${log_pattern}*.log" -e "${text}" "$FLINK_LOG_DIR/" | cut -d ":" -f 1 | uniq | wc -l) +local actual_no=$(grep -r --include "*${log_pattern}*.log*" -e "${text}" "$FLINK_LOG_DIR/" | cut -d ":" -f 1 | sed "s/\.log\.[0-9]\{1,\}$/.log/g" | uniq | wc -l) Review comment: You're right. Initially, my intention was to be more strict on which file names to adjust. But that's not necessary due to the preceding `grep` call which only allows `*.log*` files to be included -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
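The fix above widens the `grep` include pattern to also match rotated log files (`*.log.1`, `*.log.2`, …) and then uses `sed` to fold each rotated name back to its base `.log` name before `uniq | wc -l`, so rotated segments of the same log count as one file. The normalization step can be sketched in Python (a hypothetical re-implementation for illustration, not part of the Flink test scripts):

```python
import re

def normalize_log_names(paths):
    """Map rotated log file names (e.g. 'taskmanager.log.3') back to their
    base '.log' name, mirroring the sed expression
    s/\\.log\\.[0-9]\\{1,\\}$/.log/ from common_ha.sh, then deduplicate."""
    return sorted({re.sub(r"\.log\.[0-9]+$", ".log", p) for p in paths})

# Rotated segments of the same log collapse to a single entry.
files = ["jobmanager.log", "taskmanager.log", "taskmanager.log.1", "taskmanager.log.2"]
print(normalize_log_names(files))  # → ['jobmanager.log', 'taskmanager.log']
```

Note the `$` anchor: only a trailing numeric rotation suffix is rewritten, so a name like `foo.log.bak` is left untouched — the same strictness the `sed` expression provides.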
[jira] [Updated] (FLINK-26211) PulsarSourceUnorderedE2ECase failed on azure due to multiple causes
[ https://issues.apache.org/jira/browse/FLINK-26211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Gao updated FLINK-26211: Priority: Critical (was: Major) > PulsarSourceUnorderedE2ECase failed on azure due to multiple causes > --- > > Key: FLINK-26211 > URL: https://issues.apache.org/jira/browse/FLINK-26211 > Project: Flink > Issue Type: Bug > Components: Connectors / Pulsar >Affects Versions: 1.15.0 >Reporter: Yun Gao >Priority: Critical > Labels: test-stability > > {code:java} > Feb 17 04:58:33 [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, > Time elapsed: 85.664 s <<< FAILURE! - in > org.apache.flink.tests.util.pulsar.PulsarSourceUnorderedE2ECase > Feb 17 04:58:33 [ERROR] > org.apache.flink.tests.util.pulsar.PulsarSourceUnorderedE2ECase.testOneSplitWithMultipleConsumers(TestEnvironment, > DataStreamSourceExternalContext)[1] Time elapsed: 0.571 s <<< ERROR! > Feb 17 04:58:33 > org.apache.pulsar.client.admin.PulsarAdminException$GettingAuthenticationDataException: > > Feb 17 04:58:33 java.util.concurrent.ExecutionException: > org.apache.pulsar.client.admin.PulsarAdminException$GettingAuthenticationDataException: > A MultiException has 2 exceptions. They are: > Feb 17 04:58:33 1. java.lang.NoClassDefFoundError: > javax/xml/bind/annotation/XmlElement > Feb 17 04:58:33 2. 
java.lang.IllegalStateException: Unable to perform > operation: create on > org.apache.pulsar.shade.org.glassfish.jersey.jackson.internal.DefaultJacksonJaxbJsonProvider > Feb 17 04:58:33 > Feb 17 04:58:33 at > org.apache.pulsar.client.admin.internal.BaseResource.request(BaseResource.java:70) > Feb 17 04:58:33 at > org.apache.pulsar.client.admin.internal.BaseResource.asyncPutRequest(BaseResource.java:120) > Feb 17 04:58:33 at > org.apache.pulsar.client.admin.internal.TopicsImpl.createPartitionedTopicAsync(TopicsImpl.java:430) > Feb 17 04:58:33 at > org.apache.pulsar.client.admin.internal.TopicsImpl.createPartitionedTopicAsync(TopicsImpl.java:421) > Feb 17 04:58:33 at > org.apache.pulsar.client.admin.internal.TopicsImpl.createPartitionedTopic(TopicsImpl.java:373) > Feb 17 04:58:33 at > org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.lambda$createPartitionedTopic$11(PulsarRuntimeOperator.java:504) > Feb 17 04:58:33 at > org.apache.flink.connector.pulsar.common.utils.PulsarExceptionUtils.sneaky(PulsarExceptionUtils.java:60) > Feb 17 04:58:33 at > org.apache.flink.connector.pulsar.common.utils.PulsarExceptionUtils.sneakyAdmin(PulsarExceptionUtils.java:50) > Feb 17 04:58:33 at > org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.createPartitionedTopic(PulsarRuntimeOperator.java:504) > Feb 17 04:58:33 at > org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.createTopic(PulsarRuntimeOperator.java:184) > Feb 17 04:58:33 at > org.apache.flink.tests.util.pulsar.cases.KeySharedSubscriptionContext.createSourceSplitDataWriter(KeySharedSubscriptionContext.java:111) > Feb 17 04:58:33 at > org.apache.flink.tests.util.pulsar.common.UnorderedSourceTestSuiteBase.testOneSplitWithMultipleConsumers(UnorderedSourceTestSuiteBase.java:73) > Feb 17 04:58:33 at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > Feb 17 04:58:33 at > 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Feb 17 04:58:33 at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Feb 17 04:58:33 at > java.base/java.lang.reflect.Method.invoke(Method.java:566) > Feb 17 04:58:33 at > org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725) > Feb 17 04:58:33 at > org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) > Feb 17 04:58:33 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) > Feb 17 04:58:33 at > org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) > Feb 17 04:58:33 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31702&view=logs&j=6e8542d7-de38-5a33-4aca-458d6c87066d&t=5846934b-7a4f-545b-e5b0-eb4d8bda32e1&l=15537 -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (FLINK-25851) CassandraConnectorITCase.testRetrialAndDropTables shows table already exists errors on AZP
[ https://issues.apache.org/jira/browse/FLINK-25851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493744#comment-17493744 ] Yun Gao commented on FLINK-25851: - https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31702&view=logs&j=e9af9cde-9a65-5281-a58e-2c8511d36983&t=c520d2c3-4d17-51f1-813b-4b0b74a0c307&l=13310 > CassandraConnectorITCase.testRetrialAndDropTables shows table already exists > errors on AZP > -- > > Key: FLINK-25851 > URL: https://issues.apache.org/jira/browse/FLINK-25851 > Project: Flink > Issue Type: Bug > Components: Connectors / Cassandra >Affects Versions: 1.15.0 >Reporter: Etienne Chauchot >Assignee: Etienne Chauchot >Priority: Critical > Labels: pull-request-available, test-stability > > It happens even if the whole keyspace is dropped in a BeforeClass method and > the table noticed in the stacktrace is dropped in an After method and this > after method is executed even in case of retrials through the Rule. 
> Jan 24 20:21:33 com.datastax.driver.core.exceptions.AlreadyExistsException: > Table flink.batches already exists > Jan 24 20:21:33 at > com.datastax.driver.core.exceptions.AlreadyExistsException.copy(AlreadyExistsException.java:111) > > Jan 24 20:21:33 at > com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37) > > Jan 24 20:21:33 at > com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245) > > Jan 24 20:21:33 at > com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63) > Jan 24 20:21:33 at > com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39) > Jan 24 20:21:33 at > org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.testRetrialAndDropTables(CassandraConnectorITCase.java:554) > > Jan 24 20:21:33 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Jan 24 20:21:33 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Jan 24 20:21:33 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > > Jan 24 20:21:33 at java.lang.reflect.Method.invoke(Method.java:498) > Jan 24 20:21:33 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > > Jan 24 20:21:33 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > > Jan 24 20:21:33 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > > Jan 24 20:21:33 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > > Jan 24 20:21:33 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > Jan 24 20:21:33 at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > Jan 24 20:21:33 at > org.apache.flink.testutils.junit.RetryRule$RetryOnExceptionStatement.evaluate(RetryRule.java:192) > > Jan 24 20:21:33 at > 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > Jan 24 20:21:33 at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) > Jan 24 20:21:33 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > Jan 24 20:21:33 at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > > Jan 24 20:21:33 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > Jan 24 20:21:33 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > > Jan 24 20:21:33 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > > Jan 24 20:21:33 at > org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > Jan 24 20:21:33 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > Jan 24 20:21:33 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > Jan 24 20:21:33 at > org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > Jan 24 20:21:33 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > Jan 24 20:21:33 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > > cf: > [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=30050&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&s=ae4f8708-9994-57d3-c2d7-b892156e7812&t=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d&l=11999] -- This message was sent by Atlassian Jira (v8.20.1#820001)
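The failure mode described above hinges on cleanup (`@After` dropping the table) having to run after every attempt, including the failed ones that `RetryRule` re-executes. A minimal Python sketch of that retry-with-cleanup contract (hypothetical, for illustration only — not the Flink `RetryRule` implementation):

```python
def run_with_retries(test_fn, cleanup_fn, max_attempts=3):
    """Run test_fn up to max_attempts times, invoking cleanup_fn after
    every attempt (success or failure), the way a JUnit @After method
    wrapped by a retry rule is expected to behave."""
    last_error = None
    for attempt in range(max_attempts):
        try:
            test_fn()
            return attempt + 1  # number of attempts actually used
        except Exception as e:  # retry on any test failure
            last_error = e
        finally:
            cleanup_fn()  # drop tables even when the attempt failed
    raise last_error
```

If `cleanup_fn` were skipped on failed attempts, the next attempt would find the table still present and fail with exactly the `AlreadyExistsException` seen in the trace — which is why the report notes the cleanup *is* executed on retrials and the leftover table is therefore surprising.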
[jira] [Updated] (FLINK-25851) CassandraConnectorITCase.testRetrialAndDropTables shows table already exists errors on AZP
[ https://issues.apache.org/jira/browse/FLINK-25851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Gao updated FLINK-25851: Priority: Critical (was: Major) > CassandraConnectorITCase.testRetrialAndDropTables shows table already exists > errors on AZP > -- > > Key: FLINK-25851 > URL: https://issues.apache.org/jira/browse/FLINK-25851 > Project: Flink > Issue Type: Bug > Components: Connectors / Cassandra >Affects Versions: 1.15.0 >Reporter: Etienne Chauchot >Assignee: Etienne Chauchot >Priority: Critical > Labels: pull-request-available, test-stability > > It happens even if the whole keyspace is dropped in a BeforeClass method and > the table noticed in the stacktrace is dropped in an After method and this > after method is executed even in case of retrials through the Rule. > Jan 24 20:21:33 com.datastax.driver.core.exceptions.AlreadyExistsException: > Table flink.batches already exists > Jan 24 20:21:33 at > com.datastax.driver.core.exceptions.AlreadyExistsException.copy(AlreadyExistsException.java:111) > > Jan 24 20:21:33 at > com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37) > > Jan 24 20:21:33 at > com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245) > > Jan 24 20:21:33 at > com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63) > Jan 24 20:21:33 at > com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39) > Jan 24 20:21:33 at > org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.testRetrialAndDropTables(CassandraConnectorITCase.java:554) > > Jan 24 20:21:33 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Jan 24 20:21:33 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Jan 24 20:21:33 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > > Jan 24 20:21:33 at java.lang.reflect.Method.invoke(Method.java:498) > 
Jan 24 20:21:33 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > > Jan 24 20:21:33 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > > Jan 24 20:21:33 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > > Jan 24 20:21:33 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > > Jan 24 20:21:33 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > Jan 24 20:21:33 at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > Jan 24 20:21:33 at > org.apache.flink.testutils.junit.RetryRule$RetryOnExceptionStatement.evaluate(RetryRule.java:192) > > Jan 24 20:21:33 at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > Jan 24 20:21:33 at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) > Jan 24 20:21:33 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > Jan 24 20:21:33 at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > > Jan 24 20:21:33 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > Jan 24 20:21:33 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > > Jan 24 20:21:33 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > > Jan 24 20:21:33 at > org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > Jan 24 20:21:33 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > Jan 24 20:21:33 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > Jan 24 20:21:33 at > org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > Jan 24 20:21:33 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > Jan 24 20:21:33 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > > cf: > 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=30050&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&s=ae4f8708-9994-57d3-c2d7-b892156e7812&t=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d&l=11999] -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink-table-store] JingsongLi merged pull request #19: [FLINK-26066] Introduce FileStoreRead
JingsongLi merged pull request #19: URL: https://github.com/apache/flink-table-store/pull/19 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-table-store] LadyForest commented on a change in pull request #20: [FLINK-26103] Introduce log store
LadyForest commented on a change in pull request #20: URL: https://github.com/apache/flink-table-store/pull/20#discussion_r808770287 ## File path: flink-table-store-core/src/main/java/org/apache/flink/table/store/log/LogOptions.java ## @@ -0,0 +1,192 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.store.log; + +import org.apache.flink.configuration.ConfigOption; +import org.apache.flink.configuration.ConfigOptions; +import org.apache.flink.configuration.DescribedEnum; +import org.apache.flink.configuration.description.Description; +import org.apache.flink.configuration.description.InlineElement; + +import java.time.Duration; + +import static org.apache.flink.configuration.description.TextElement.text; +import static org.apache.flink.table.store.utils.OptionsUtils.formatEnumOption; + +/** Options for log store. 
*/ +public class LogOptions { + +public static final ConfigOption&lt;LogStartupMode&gt; SCAN = +ConfigOptions.key("scan") +.enumType(LogStartupMode.class) +.defaultValue(LogStartupMode.FULL) +.withDescription( +Description.builder() +.text("Specifies the startup mode for log consumer.") +.linebreak() + .list(formatEnumOption(LogStartupMode.FULL)) + .list(formatEnumOption(LogStartupMode.LATEST)) + .list(formatEnumOption(LogStartupMode.FROM_TIMESTAMP)) +.build()); + +public static final ConfigOption&lt;Long&gt; SCAN_TIMESTAMP_MILLS = +ConfigOptions.key("scan.timestamp-millis") +.longType() +.noDefaultValue() +.withDescription( +"Optional timestamp used in case of \"from-timestamp\" scan mode"); + +public static final ConfigOption&lt;Duration&gt; RETENTION = +ConfigOptions.key("retention") +.durationType() +.noDefaultValue() +.withDescription( +"It means how long changes log will be kept. The default value is from the log system cluster."); + +public static final ConfigOption&lt;LogConsistency&gt; CONSISTENCY = +ConfigOptions.key("consistency") +.enumType(LogConsistency.class) +.defaultValue(LogConsistency.TRANSACTIONAL) +.withDescription( +Description.builder() +.text("Specifies the log consistency mode for table.") +.linebreak() +.list( + formatEnumOption(LogConsistency.TRANSACTIONAL), + formatEnumOption(LogConsistency.EVENTUAL)) +.build()); + +public static final ConfigOption&lt;LogChangelogMode&gt; CHANGELOG_MODE = +ConfigOptions.key("changelog-mode") +.enumType(LogChangelogMode.class) +.defaultValue(LogChangelogMode.AUTO) +.withDescription( +Description.builder() +.text("Specifies the log changelog mode for table.") +.linebreak() +.list( + formatEnumOption(LogChangelogMode.AUTO), + formatEnumOption(LogChangelogMode.ALL), + formatEnumOption(LogChangelogMode.UPSERT)) +.build()); + +public static final ConfigOption&lt;String&gt; KEY_FORMAT = +ConfigOptions.key("key.format") +.stringType() +.defaultValue("json") +.withDescription( +"Specifies the key message format of log system with primary key."); + +public static final ConfigOption&lt;String&gt; FORMAT = +Conf
[jira] [Commented] (FLINK-26165) SavepointFormatITCase fails on azure
[ https://issues.apache.org/jira/browse/FLINK-26165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493746#comment-17493746 ] Yun Gao commented on FLINK-26165: - https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31702&view=logs&j=8fd9202e-fd17-5b26-353c-ac1ff76c8f28&t=ea7cf968-e585-52cb-e0fc-f48de023a7ca&l=5706 > SavepointFormatITCase fails on azure > > > Key: FLINK-26165 > URL: https://issues.apache.org/jira/browse/FLINK-26165 > Project: Flink > Issue Type: Bug > Components: Tests >Affects Versions: 1.15.0 >Reporter: Roman Khachatryan >Assignee: Roman Khachatryan >Priority: Blocker > Labels: pull-request-available > Fix For: 1.15.0 > > > [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31474&view=logs&j=a57e0635-3fad-5b08-57c7-a4142d7d6fa9&t=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7&l=13116] > {code} > [ERROR] > org.apache.flink.test.checkpointing.SavepointFormatITCase.testTriggerSavepointAndResumeWithFileBasedCheckpointsAndRelocateBasePath(SavepointFormatType, > StateBackendConfig)[2] Time elapsed: 14.209 s <<< ERROR! 
> java.util.concurrent.ExecutionException: java.io.IOException: Unknown > implementation of StreamStateHandle: class > org.apache.flink.runtime.state.PlaceholderStreamStateHandle > at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) > at > org.apache.flink.test.checkpointing.SavepointFormatITCase.submitJobAndTakeSavepoint(SavepointFormatITCase.java:328) > at > org.apache.flink.test.checkpointing.SavepointFormatITCase.testTriggerSavepointAndResumeWithFileBasedCheckpointsAndRelocateBasePath(SavepointFormatITCase.java:248) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725) > at > org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) > at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) > at > org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) > at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) > at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92) > at > org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) > at > org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) > at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) > at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) > at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) > at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) > at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) > at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) > at > org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214) > at > org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.ja > {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] zentol commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe
zentol commented on a change in pull request #18496: URL: https://github.com/apache/flink/pull/18496#discussion_r808770920 ## File path: flink-test-utils-parent/flink-connector-test-utils/pom.xml ## @@ -95,4 +95,30 @@ compile + + + + + org.apache.maven.plugins + maven-shade-plugin + + + package + + shade + + + true + source + + + **/connector/testframe/source/** Review comment: Thank you! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18718: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chi…
flinkbot edited a comment on pull request #18718: URL: https://github.com/apache/flink/pull/18718#issuecomment-1035898573 ## CI report: * 69d42893d3874932d77329e758818b743c528489 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31718) * b263f91c0e716e39a9fd6ee6999f4d9b8fbe40b9 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18746: [FLINK-26162][docs]revamp security pages
flinkbot edited a comment on pull request #18746: URL: https://github.com/apache/flink/pull/18746#issuecomment-1038829702 ## CI report: * bc4c3c9c52616fe913872e5396bcef703c9ce45e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31720) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18784: [FLINK-25941][streaming] Only emit committables with Long.MAX_VALUE as checkpoint id in batch mode
flinkbot edited a comment on pull request #18784: URL: https://github.com/apache/flink/pull/18784#issuecomment-1040430875 ## CI report: * f690f34e7826d733e6feca4044dc1a0cbc194458 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31662) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Closed] (FLINK-26066) Introduce FileStoreRead
[ https://issues.apache.org/jira/browse/FLINK-26066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingsong Lee closed FLINK-26066. Resolution: Fixed master: b9e30553eaa46cf51635337e5c439fc8634167b6 > Introduce FileStoreRead > --- > > Key: FLINK-26066 > URL: https://issues.apache.org/jira/browse/FLINK-26066 > Project: Flink > Issue Type: Sub-task > Components: Table Store >Reporter: Caizhi Weng >Assignee: Caizhi Weng >Priority: Major > Labels: pull-request-available > Fix For: 1.15.0 > > > Apart from {{FileStoreWrite}}, we also need a {{FileStoreRead}} operation to > read actual key-values for a specific partition and bucket. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (FLINK-26031) Support projection pushdown on keys and values in sst file readers
[ https://issues.apache.org/jira/browse/FLINK-26031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingsong Lee updated FLINK-26031: - Fix Version/s: table-store-0.1.0 (was: 1.15.0) > Support projection pushdown on keys and values in sst file readers > -- > > Key: FLINK-26031 > URL: https://issues.apache.org/jira/browse/FLINK-26031 > Project: Flink > Issue Type: Sub-task > Components: Table Store >Reporter: Caizhi Weng >Assignee: Caizhi Weng >Priority: Major > Labels: pull-request-available > Fix For: table-store-0.1.0 > > > Projection pushdown is an optimization for sources. With this optimization, > we can avoid reading useless columns and thus improve performance. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (FLINK-26066) Introduce FileStoreRead
[ https://issues.apache.org/jira/browse/FLINK-26066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingsong Lee updated FLINK-26066: - Fix Version/s: table-store-0.1.0 (was: 1.15.0) > Introduce FileStoreRead > --- > > Key: FLINK-26066 > URL: https://issues.apache.org/jira/browse/FLINK-26066 > Project: Flink > Issue Type: Sub-task > Components: Table Store >Reporter: Caizhi Weng >Assignee: Caizhi Weng >Priority: Major > Labels: pull-request-available > Fix For: table-store-0.1.0 > > > Apart from {{FileStoreWrite}}, we also need a {{FileStoreRead}} operation to > read actual key-values for a specific partition and bucket. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink-table-store] LadyForest commented on a change in pull request #20: [FLINK-26103] Introduce log store
LadyForest commented on a change in pull request #20: URL: https://github.com/apache/flink-table-store/pull/20#discussion_r808771586 ## File path: flink-table-store-core/src/main/java/org/apache/flink/table/store/log/LogOptions.java ## @@ -0,0 +1,192 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.store.log; + +import org.apache.flink.configuration.ConfigOption; +import org.apache.flink.configuration.ConfigOptions; +import org.apache.flink.configuration.DescribedEnum; +import org.apache.flink.configuration.description.Description; +import org.apache.flink.configuration.description.InlineElement; + +import java.time.Duration; + +import static org.apache.flink.configuration.description.TextElement.text; +import static org.apache.flink.table.store.utils.OptionsUtils.formatEnumOption; + +/** Options for log store. 
*/ +public class LogOptions { + +public static final ConfigOption&lt;LogStartupMode&gt; SCAN = +ConfigOptions.key("scan") +.enumType(LogStartupMode.class) +.defaultValue(LogStartupMode.FULL) +.withDescription( +Description.builder() +.text("Specifies the startup mode for log consumer.") +.linebreak() + .list(formatEnumOption(LogStartupMode.FULL)) + .list(formatEnumOption(LogStartupMode.LATEST)) + .list(formatEnumOption(LogStartupMode.FROM_TIMESTAMP)) +.build()); + +public static final ConfigOption&lt;Long&gt; SCAN_TIMESTAMP_MILLS = +ConfigOptions.key("scan.timestamp-millis") +.longType() +.noDefaultValue() +.withDescription( +"Optional timestamp used in case of \"from-timestamp\" scan mode"); + +public static final ConfigOption&lt;Duration&gt; RETENTION = +ConfigOptions.key("retention") +.durationType() +.noDefaultValue() +.withDescription( +"It means how long changes log will be kept. The default value is from the log system cluster."); + +public static final ConfigOption&lt;LogConsistency&gt; CONSISTENCY = +ConfigOptions.key("consistency") +.enumType(LogConsistency.class) +.defaultValue(LogConsistency.TRANSACTIONAL) +.withDescription( +Description.builder() +.text("Specifies the log consistency mode for table.") +.linebreak() +.list( + formatEnumOption(LogConsistency.TRANSACTIONAL), + formatEnumOption(LogConsistency.EVENTUAL)) +.build()); + +public static final ConfigOption&lt;LogChangelogMode&gt; CHANGELOG_MODE = +ConfigOptions.key("changelog-mode") +.enumType(LogChangelogMode.class) +.defaultValue(LogChangelogMode.AUTO) +.withDescription( +Description.builder() +.text("Specifies the log changelog mode for table.") +.linebreak() +.list( + formatEnumOption(LogChangelogMode.AUTO), + formatEnumOption(LogChangelogMode.ALL), + formatEnumOption(LogChangelogMode.UPSERT)) +.build()); + +public static final ConfigOption&lt;String&gt; KEY_FORMAT = +ConfigOptions.key("key.format") +.stringType() +.defaultValue("json") +.withDescription( +"Specifies the key message format of log system with primary key."); + +public static final ConfigOption&lt;String&gt; FORMAT = +Conf
[GitHub] [flink-table-store] LadyForest commented on a change in pull request #20: [FLINK-26103] Introduce log store
LadyForest commented on a change in pull request #20: URL: https://github.com/apache/flink-table-store/pull/20#discussion_r808771943

## File path: flink-table-store-core/src/main/java/org/apache/flink/table/store/log/LogOptions.java

## @@ -0,0 +1,192 @@ (same `LogOptions.java` hunk as in the previous comment)
[jira] [Commented] (FLINK-16419) Avoid to recommit transactions which are known committed successfully to Kafka upon recovery
[ https://issues.apache.org/jira/browse/FLINK-16419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493749#comment-17493749 ] Jun Qin commented on FLINK-16419:

My understanding is that [~fpaul] meant that the problem mentioned by [~qzhzm173227] at 16/Nov/21 20:53 is unrelated to the issue originally described in this Jira.

> Avoid to recommit transactions which are known committed successfully to Kafka upon recovery
> --
>
> Key: FLINK-16419
> URL: https://issues.apache.org/jira/browse/FLINK-16419
> Project: Flink
> Issue Type: Improvement
> Components: Connectors / Kafka, Runtime / Checkpointing
> Reporter: Jun Qin
> Priority: Not a Priority
> Labels: auto-deprioritized-major, auto-deprioritized-minor, usability
>
> When recovering from a snapshot (checkpoint/savepoint), FlinkKafkaProducer tries to recommit all pre-committed transactions which are in the snapshot, even if those transactions were successfully committed before (i.e., the call to {{kafkaProducer.commitTransaction()}} via {{notifyCheckpointComplete()}} returns OK). This may lead to recovery failures when recovering from a very old snapshot, because the transactional IDs in that snapshot may have expired and been removed from Kafka. For example, the following scenario:
> # Start a Flink job with FlinkKafkaProducer sink with exactly-once
> # Suspend the Flink job with a savepoint A
> # Wait for time longer than {{transactional.id.expiration.ms}} + {{transaction.remove.expired.transaction.cleanup.interval.ms}}
> # Recover the job with savepoint A.
> # The recovery will fail with the following error: > {noformat} > 2020-02-26 14:33:25,817 INFO > org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaInternalProducer > - Attempting to resume transaction Source: Custom Source -> Sink: > Unnamed-7df19f87deec5680128845fd9a6ca18d-1 with producerId 2001 and epoch > 1202020-02-26 14:33:25,914 INFO org.apache.kafka.clients.Metadata > - Cluster ID: RN0aqiOwTUmF5CnHv_IPxA > 2020-02-26 14:33:26,017 INFO org.apache.kafka.clients.producer.KafkaProducer > - [Producer clientId=producer-1, transactionalId=Source: Custom > Source -> Sink: Unnamed-7df19f87deec5680128845fd9a6ca18d-1] Closing the Kafka > producer with timeoutMillis = 92233720 > 36854775807 ms. > 2020-02-26 14:33:26,019 INFO org.apache.flink.runtime.taskmanager.Task > - Source: Custom Source -> Sink: Unnamed (1/1) > (a77e457941f09cd0ebbd7b982edc0f02) switched from RUNNING to FAILED. > org.apache.kafka.common.KafkaException: Unhandled error in EndTxnResponse: > The producer attempted to use a producer id which is not currently assigned > to its transactional id. > at > org.apache.kafka.clients.producer.internals.TransactionManager$EndTxnHandler.handleResponse(TransactionManager.java:1191) > at > org.apache.kafka.clients.producer.internals.TransactionManager$TxnRequestHandler.onComplete(TransactionManager.java:909) > at > org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109) > at > org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:557) > at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549) > at > org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:288) > at > org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235) > at java.lang.Thread.run(Thread.java:748) > {noformat} > For now, the workaround is to call > {{producer.ignoreFailuresAfterTransactionTimeout()}}. This is a bit risky, as > it may hide real transaction timeout errors. 
> After discussing with [~becket_qin], [~pnowojski] and [~aljoscha], a possible way is to let the JobManager, after it successfully notifies all operators of the completion of a snapshot (via {{notifyCheckpointComplete}}), record the success, e.g., write the successful transactional IDs somewhere in the snapshot. Then those transactions need not be recommitted upon recovery.
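The proposal in the quoted description can be sketched as follows. This is an illustration only, not the actual FlinkKafkaProducer API: all names (`TransactionRecovery`, `shouldRecommit`, the transactional-ID strings) are hypothetical.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the proposed recovery logic: remember which transactions were
// confirmed committed, and skip recommitting those on recovery. All names
// are hypothetical, not the actual FlinkKafkaProducer code.
public class TransactionRecovery {

    private final Set<String> committedTransactionalIds = new HashSet<>();

    // Called when the JobManager confirms checkpoint completion for a transaction.
    public void onCheckpointComplete(String transactionalId) {
        committedTransactionalIds.add(transactionalId);
    }

    // Called for each pre-committed transaction found in the snapshot.
    // Recommitting a known-committed transaction may fail if its
    // transactional ID has already expired on the Kafka side.
    public boolean shouldRecommit(String transactionalId) {
        return !committedTransactionalIds.contains(transactionalId);
    }

    public static void main(String[] args) {
        TransactionRecovery r = new TransactionRecovery();
        r.onCheckpointComplete("txn-0-1");
        System.out.println(r.shouldRecommit("txn-0-1")); // prints false
        System.out.println(r.shouldRecommit("txn-0-2")); // prints true
    }
}
```

The key design point is that the set of committed IDs must itself be stored in the snapshot, so the decision survives a restart.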
[GitHub] [flink] imaffe commented on a change in pull request #18406: WIP: [FLINK-25686][pulsar]: add schema evolution support for pulsar source connector
imaffe commented on a change in pull request #18406: URL: https://github.com/apache/flink/pull/18406#discussion_r808773157

## File path: flink-connectors/flink-connector-pulsar/src/main/java/org/apache/flink/connector/pulsar/source/PulsarSourceBuilder.java

## @@ -460,6 +471,21 @@
         }
     }

+        // Schema evolution check.
+        if (deserializationSchema instanceof PulsarSchemaWrapper
+                && !Boolean.TRUE.equals(configBuilder.get(PULSAR_READ_SCHEMA_EVOLUTION))) {
+            LOG.info(
+                    "It seems like you want to send message in Pulsar Schema."

Review comment: typo: read
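As an aside on the guard in the hunk above: `!Boolean.TRUE.equals(...)` is the null-safe way to test a possibly-unset `Boolean` option, since unboxing a null flag in a plain `if (!flag)` would throw. A minimal standalone sketch (the class and method names here are illustrative, not Pulsar connector code):

```java
// Demonstrates why Boolean.TRUE.equals(value) is preferred over unboxing:
// an unset (null) option is treated the same as false, with no risk of a
// NullPointerException.
public class NullSafeFlag {

    public static boolean isEnabled(Boolean flag) {
        return Boolean.TRUE.equals(flag);
    }

    public static void main(String[] args) {
        System.out.println(isEnabled(null));          // prints false
        System.out.println(isEnabled(Boolean.TRUE));  // prints true
        System.out.println(isEnabled(Boolean.FALSE)); // prints false
    }
}
```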
[GitHub] [flink] flinkbot edited a comment on pull request #18406: WIP: [FLINK-25686][pulsar]: add schema evolution support for pulsar source connector
flinkbot edited a comment on pull request #18406: URL: https://github.com/apache/flink/pull/18406#issuecomment-1017085273

## CI report:
* cc7f4cf450c7e8b88877b5a276aecc843bef039c Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31677)
* 8264c715c42f1489b9ce53c57b20b4d07e189870 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #18718: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chi…
flinkbot edited a comment on pull request #18718: URL: https://github.com/apache/flink/pull/18718#issuecomment-1035898573

## CI report:
* 69d42893d3874932d77329e758818b743c528489 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31718)
[GitHub] [flink] flinkbot edited a comment on pull request #18784: [FLINK-25941][streaming] Only emit committables with Long.MAX_VALUE as checkpoint id in batch mode
flinkbot edited a comment on pull request #18784: URL: https://github.com/apache/flink/pull/18784#issuecomment-1040430875

## CI report:
* f690f34e7826d733e6feca4044dc1a0cbc194458 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31662)
[GitHub] [flink-table-store] LadyForest commented on a change in pull request #20: [FLINK-26103] Introduce log store
LadyForest commented on a change in pull request #20: URL: https://github.com/apache/flink-table-store/pull/20#discussion_r808775969

## File path: flink-table-store-core/src/main/java/org/apache/flink/table/store/sink/SinkRecord.java

## @@ -31,20 +31,15 @@
     private final int bucket;

-    private final RowKind rowKind;
-
     private final BinaryRowData key;

     private final RowData row;

-    public SinkRecord(
-            BinaryRowData partition, int bucket, RowKind rowKind, BinaryRowData key, RowData row) {
+    public SinkRecord(BinaryRowData partition, int bucket, BinaryRowData key, RowData row) {

Review comment: Nit: If the `rowKind` is removed from `SinkRecord`, we'd better update the comment as well.
```
/** A sink record contains key, value and partition, bucket information. */
```
[jira] [Closed] (FLINK-26198) ArchitectureTest fails on AZP (table.api.StatementSet)
[ https://issues.apache.org/jira/browse/FLINK-26198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler closed FLINK-26198.
Fix Version/s: (was: 1.15.0)
Resolution: Duplicate

> ArchitectureTest fails on AZP (table.api.StatementSet)
> --
>
> Key: FLINK-26198
> URL: https://issues.apache.org/jira/browse/FLINK-26198
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / API
> Affects Versions: 1.15.0
> Reporter: Roman Khachatryan
> Priority: Blocker
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31681&view=logs&j=fc5181b0-e452-5c8f-68de-1097947f6483&t=995c650b-6573-581c-9ce6-7ad4cc038461&l=26849
> {code}
> [INFO] Running org.apache.flink.architecture.rules.ApiAnnotationRules
> [ERROR] Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 48.583 s <<< FAILURE! - in org.apache.flink.architecture.rules.ApiAnnotationRules
> [ERROR] ApiAnnotationRules.PUBLIC_EVOLVING_API_METHODS_USE_ONLY_PUBLIC_EVOLVING_API_TYPES  Time elapsed: 0.282 s <<< FAILURE!
> java.lang.AssertionError:
> Architecture Violation [Priority: MEDIUM] - Rule 'Return and argument types of methods annotated with @PublicEvolving must be annotated with @Public(Evolving).' was violated (1 times):
> org.apache.flink.table.api.StatementSet.compilePlan(): Returned leaf type org.apache.flink.table.api.CompiledPlan does not satisfy: reside outside of package 'org.apache.flink..' or annotated with @Public or annotated with @PublicEvolving or annotated with @Deprecated
> {code}
[GitHub] [flink] flinkbot edited a comment on pull request #18406: WIP: [FLINK-25686][pulsar]: add schema evolution support for pulsar source connector
flinkbot edited a comment on pull request #18406: URL: https://github.com/apache/flink/pull/18406#issuecomment-1017085273

## CI report:
* cc7f4cf450c7e8b88877b5a276aecc843bef039c Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31677)
* 8264c715c42f1489b9ce53c57b20b4d07e189870 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31722)
[GitHub] [flink] flinkbot edited a comment on pull request #18718: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chi…
flinkbot edited a comment on pull request #18718: URL: https://github.com/apache/flink/pull/18718#issuecomment-1035898573

## CI report:
* 69d42893d3874932d77329e758818b743c528489 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31718)
* b263f91c0e716e39a9fd6ee6999f4d9b8fbe40b9 UNKNOWN
[jira] [Created] (FLINK-26212) UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint failed due to java.nio.file.NoSuchFileException
Yun Gao created FLINK-26212: --- Summary: UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint failed due to java.nio.file.NoSuchFileException Key: FLINK-26212 URL: https://issues.apache.org/jira/browse/FLINK-26212 Project: Flink Issue Type: Bug Components: Runtime / Checkpointing Affects Versions: 1.14.3 Reporter: Yun Gao {code:java} Feb 17 01:59:30 [ERROR] Tests run: 36, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 203.954 s <<< FAILURE! - in org.apache.flink.test.checkpointing.UnalignedCheckpointRescaleITCase Feb 17 01:59:30 [ERROR] shouldRescaleUnalignedCheckpoint[downscale multi_input from 2 to 1](org.apache.flink.test.checkpointing.UnalignedCheckpointRescaleITCase) Time elapsed: 1.154 s <<< ERROR! Feb 17 01:59:30 java.io.UncheckedIOException: java.nio.file.NoSuchFileException: /tmp/junit9158163965206615901/junit1958406566349108348/eec2fe5565487e3ce4c95764a842f712/chk-6/7ab2b240-a313-4d38-9806-8a97334abbc8 Feb 17 01:59:30 at java.base/java.nio.file.FileTreeIterator.fetchNextIfNeeded(FileTreeIterator.java:87) Feb 17 01:59:30 at java.base/java.nio.file.FileTreeIterator.hasNext(FileTreeIterator.java:103) Feb 17 01:59:30 at java.base/java.util.Spliterators$IteratorSpliterator.tryAdvance(Spliterators.java:1811) Feb 17 01:59:30 at java.base/java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:127) Feb 17 01:59:30 at java.base/java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:502) Feb 17 01:59:30 at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:488) Feb 17 01:59:30 at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) Feb 17 01:59:30 at java.base/java.util.stream.FindOps$FindOp.evaluateSequential(FindOps.java:150) Feb 17 01:59:30 at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) Feb 17 01:59:30 at java.base/java.util.stream.ReferencePipeline.findAny(ReferencePipeline.java:548) Feb 17 01:59:30 at 
org.apache.flink.test.checkpointing.UnalignedCheckpointTestBase.hasMetadata(UnalignedCheckpointTestBase.java:189) Feb 17 01:59:30 at org.apache.flink.test.checkpointing.UnalignedCheckpointTestBase.isCompletedCheckpoint(UnalignedCheckpointTestBase.java:179) Feb 17 01:59:30 at java.base/java.nio.file.Files.lambda$find$2(Files.java:3948) Feb 17 01:59:30 at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:176) Feb 17 01:59:30 at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133) Feb 17 01:59:30 at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801) Feb 17 01:59:30 at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) Feb 17 01:59:30 at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) Feb 17 01:59:30 at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) Feb 17 01:59:30 at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) Feb 17 01:59:30 at java.base/java.util.stream.ReferencePipeline.reduce(ReferencePipeline.java:558) Feb 17 01:59:30 at java.base/java.util.stream.ReferencePipeline.max(ReferencePipeline.java:594) Feb 17 01:59:30 at org.apache.flink.test.checkpointing.UnalignedCheckpointTestBase.execute(UnalignedCheckpointTestBase.java:169) Feb 17 01:59:30 at org.apache.flink.test.checkpointing.UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint(UnalignedCheckpointRescaleITCase.java:515) Feb 17 01:59:30 at jdk.internal.reflect.GeneratedMethodAccessor98.invoke(Unknown Source) Feb 17 01:59:30 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) Feb 17 01:59:30 at java.base/java.lang.reflect.Method.invoke(Method.java:566) Feb 17 01:59:30 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) Feb 17 01:59:30 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) Feb 17 01:59:30 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) Feb 17 01:59:30 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) Feb 17 01:59:30 at org.junit.rules.Verifier$1.evaluate(Verifier.java:35) Feb 17 01:59:30 at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) Feb 17 01:59:30 at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) Feb 17 01:59:30 at org.junit.rules.TestWatcher$1.evaluate(TestWatc
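The `NoSuchFileException` above is a generic hazard of walking a directory tree that another thread may be deleting: the `Files.find`/`Files.walk` iterator throws `NoSuchFileException`, wrapped in `UncheckedIOException`, when an entry disappears between listing and visiting. A defensive sketch of that pattern follows; it is an illustration, not the actual fix for this test.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.util.stream.Stream;

public class SafeWalk {

    // Counts regular files under dir, treating a directory that is deleted
    // concurrently (mid-walk) as empty instead of failing the whole walk.
    public static long countFiles(Path dir) {
        try (Stream<Path> s = Files.walk(dir)) {
            return s.filter(Files::isRegularFile).count();
        } catch (UncheckedIOException e) {
            if (e.getCause() instanceof NoSuchFileException) {
                return 0; // an entry vanished between listing and visiting
            }
            throw e;
        } catch (IOException e) {
            throw new UncheckedIOException(e); // opening the root failed
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("walk-demo");
        Files.createFile(dir.resolve("metadata"));
        System.out.println(countFiles(dir)); // prints 1
    }
}
```

Retrying the walk (or checking `Files.exists` and accepting the inherent race) are alternative mitigations, depending on whether the caller needs an exact answer.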
[GitHub] [flink] rkhachatryan commented on pull request #18787: [FLINK-26165][tests] Don't test NATIVE savepoints with Changelog enabled
rkhachatryan commented on pull request #18787: URL: https://github.com/apache/flink/pull/18787#issuecomment-1042683826

Rebased.
[GitHub] [flink-table-store] LadyForest commented on a change in pull request #20: [FLINK-26103] Introduce log store
LadyForest commented on a change in pull request #20: URL: https://github.com/apache/flink-table-store/pull/20#discussion_r808779315

## File path: flink-table-store-core/src/main/java/org/apache/flink/table/store/sink/SinkRecordConverter.java

## @@ -21,36 +21,49 @@
 import org.apache.flink.table.data.RowData;
 import org.apache.flink.table.data.binary.BinaryRowData;
 import org.apache.flink.table.runtime.generated.Projection;
-import org.apache.flink.table.runtime.typeutils.RowDataSerializer;
 import org.apache.flink.table.store.utils.ProjectionUtils;
 import org.apache.flink.table.types.logical.RowType;
 import org.apache.flink.types.RowKind;

+import java.util.stream.IntStream;
+
 /** Converter for converting {@link RowData} to {@link SinkRecord}. */
 public class SinkRecordConverter {

     private final int numBucket;

-    private final RowDataSerializer rowSerializer;
+    private final Projection allProjection;

     private final Projection partProjection;

     private final Projection keyProjection;

     public SinkRecordConverter(int numBucket, RowType inputType, int[] partitions, int[] keys) {
         this.numBucket = numBucket;
-        this.rowSerializer = new RowDataSerializer(inputType);
+        this.allProjection =
+                ProjectionUtils.newProjection(
+                        inputType, IntStream.range(0, inputType.getFieldCount()).toArray());
         this.partProjection = ProjectionUtils.newProjection(inputType, partitions);
         this.keyProjection = ProjectionUtils.newProjection(inputType, keys);
     }

     public SinkRecord convert(RowData row) {
-        RowKind rowKind = row.getRowKind();
-        row.setRowKind(RowKind.INSERT);
         BinaryRowData partition = partProjection.apply(row);
         BinaryRowData key = keyProjection.apply(row);
-        int hash = key.getArity() == 0 ? rowSerializer.toBinaryRow(row).hashCode() : key.hashCode();
+        int hash = key.getArity() == 0 ? hashRow(row) : key.hashCode();
         int bucket = Math.abs(hash % numBucket);
-        return new SinkRecord(partition, bucket, rowKind, key, row);
+        return new SinkRecord(partition, bucket, key, row);
+    }
+
+    private int hashRow(RowData row) {
+        if (row instanceof BinaryRowData) {
+            RowKind rowKind = row.getRowKind();
+            row.setRowKind(RowKind.INSERT);
+            int hash = row.hashCode();
+            row.setRowKind(rowKind);

Review comment: Why set row kind twice?
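The double `setRowKind` in the hunk is presumably a save/normalize/restore: the row's hash code includes the row kind, so the kind is forced to `INSERT` before hashing (so that insert/update versions of the same row land in the same bucket) and the caller's original kind is restored afterwards. A standalone sketch of the pattern, using a simplified `Row` class rather than Flink's actual `RowData` API:

```java
import java.util.Objects;

// Standalone sketch of the save/normalize/restore pattern: the kind is
// forced to a canonical value so that different changelog kinds of the same
// row hash identically, then the caller's original kind is put back.
public class Row {
    private String kind; // e.g. "INSERT", "UPDATE_AFTER"
    private final String payload;

    public Row(String kind, String payload) {
        this.kind = kind;
        this.payload = payload;
    }

    public String getKind() { return kind; }
    public void setKind(String kind) { this.kind = kind; }

    @Override
    public int hashCode() {
        return Objects.hash(kind, payload); // the hash depends on the kind
    }

    public static int kindInsensitiveHash(Row row) {
        String saved = row.getKind();  // first set: remember the caller's kind...
        row.setKind("INSERT");         // ...and normalize it for hashing
        int hash = row.hashCode();
        row.setKind(saved);            // second set: restore the caller's kind
        return hash;
    }

    public static void main(String[] args) {
        Row a = new Row("INSERT", "payload");
        Row b = new Row("UPDATE_AFTER", "payload");
        System.out.println(kindInsensitiveHash(a) == kindInsensitiveHash(b)); // prints true
        System.out.println(b.getKind()); // prints UPDATE_AFTER
    }
}
```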
[jira] [Commented] (FLINK-16419) Avoid to recommit transactions which are known committed successfully to Kafka upon recovery
[ https://issues.apache.org/jira/browse/FLINK-16419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493758#comment-17493758 ] Fabian Paul commented on FLINK-16419:

The KafkaSink is a bit more relaxed when it comes to errors during committing and allows skipping certain exceptions. [https://github.com/apache/flink/blob/54b21e87a5931f764631add1d495ad2e961bca35/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/KafkaCommitter.java#L79] It should definitely mitigate the problem, but users may still lose data; we plan to add a metric in 1.16 to count these cases so that users are aware of the problem.

> Avoid to recommit transactions which are known committed successfully to Kafka upon recovery
> --
>
> Key: FLINK-16419
> URL: https://issues.apache.org/jira/browse/FLINK-16419
> Project: Flink
> Issue Type: Improvement
> Components: Connectors / Kafka, Runtime / Checkpointing
> Reporter: Jun Qin
> Priority: Not a Priority
> Labels: auto-deprioritized-major, auto-deprioritized-minor, usability
>
> When recovering from a snapshot (checkpoint/savepoint), FlinkKafkaProducer tries to recommit all pre-committed transactions which are in the snapshot, even if those transactions were successfully committed before (i.e., the call to {{kafkaProducer.commitTransaction()}} via {{notifyCheckpointComplete()}} returns OK). This may lead to recovery failures when recovering from a very old snapshot, because the transactional IDs in that snapshot may have expired and been removed from Kafka. For example, the following scenario:
> # Start a Flink job with FlinkKafkaProducer sink with exactly-once
> # Suspend the Flink job with a savepoint A
> # Wait for time longer than {{transactional.id.expiration.ms}} + {{transaction.remove.expired.transaction.cleanup.interval.ms}}
> # Recover the job with savepoint A.
> # The recovery will fail with the following error: > {noformat} > 2020-02-26 14:33:25,817 INFO > org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaInternalProducer > - Attempting to resume transaction Source: Custom Source -> Sink: > Unnamed-7df19f87deec5680128845fd9a6ca18d-1 with producerId 2001 and epoch > 1202020-02-26 14:33:25,914 INFO org.apache.kafka.clients.Metadata > - Cluster ID: RN0aqiOwTUmF5CnHv_IPxA > 2020-02-26 14:33:26,017 INFO org.apache.kafka.clients.producer.KafkaProducer > - [Producer clientId=producer-1, transactionalId=Source: Custom > Source -> Sink: Unnamed-7df19f87deec5680128845fd9a6ca18d-1] Closing the Kafka > producer with timeoutMillis = 92233720 > 36854775807 ms. > 2020-02-26 14:33:26,019 INFO org.apache.flink.runtime.taskmanager.Task > - Source: Custom Source -> Sink: Unnamed (1/1) > (a77e457941f09cd0ebbd7b982edc0f02) switched from RUNNING to FAILED. > org.apache.kafka.common.KafkaException: Unhandled error in EndTxnResponse: > The producer attempted to use a producer id which is not currently assigned > to its transactional id. > at > org.apache.kafka.clients.producer.internals.TransactionManager$EndTxnHandler.handleResponse(TransactionManager.java:1191) > at > org.apache.kafka.clients.producer.internals.TransactionManager$TxnRequestHandler.onComplete(TransactionManager.java:909) > at > org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109) > at > org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:557) > at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549) > at > org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:288) > at > org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235) > at java.lang.Thread.run(Thread.java:748) > {noformat} > For now, the workaround is to call > {{producer.ignoreFailuresAfterTransactionTimeout()}}. This is a bit risky, as > it may hide real transaction timeout errors. 
> After discussing with [~becket_qin], [~pnowojski] and [~aljoscha], a possible way is to let the JobManager, after it successfully notifies all operators of the completion of a snapshot (via {{notifyCheckpointComplete}}), record the success, e.g., write the successful transactional IDs somewhere in the snapshot. Then those transactions need not be recommitted upon recovery.
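The relaxed committer behavior described in the comment above could be sketched like this; the exception type, interface, and counter are illustrative only, not the actual `KafkaCommitter` code.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: treat "transaction state is gone on the broker" errors
// as skippable during recovery-time commit, but count them so that potential
// data loss is visible to operators. All names are hypothetical.
public class TolerantCommitter {

    private final AtomicLong skippedCommits = new AtomicLong();

    interface Transaction {
        void commit() throws Exception;
    }

    // Hypothetical marker for errors indicating the transaction is unknown
    // on the broker side (e.g. an expired transactional ID).
    static class UnknownTransactionException extends Exception {}

    public void commitOrSkip(Transaction txn) throws Exception {
        try {
            txn.commit();
        } catch (UnknownTransactionException e) {
            // Likely committed before, or expired: skip, but record it.
            skippedCommits.incrementAndGet();
        }
    }

    public long getSkippedCommits() {
        return skippedCommits.get();
    }

    public static void main(String[] args) throws Exception {
        TolerantCommitter c = new TolerantCommitter();
        c.commitOrSkip(() -> { throw new UnknownTransactionException(); });
        System.out.println(c.getSkippedCommits()); // prints 1
    }
}
```

The counter corresponds to the metric idea in the comment: skipping silently would hide data loss, while counting makes it observable.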
[jira] [Reopened] (FLINK-25129) Update docs to use flink-table-planner-loader instead of flink-table-planner
[ https://issues.apache.org/jira/browse/FLINK-25129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francesco Guardiani reopened FLINK-25129:

I'm reopening as I have a PR to clarify some details, improve wording and add more info.

> Update docs to use flink-table-planner-loader instead of flink-table-planner
> --
>
> Key: FLINK-25129
> URL: https://issues.apache.org/jira/browse/FLINK-25129
> Project: Flink
> Issue Type: Sub-task
> Components: Documentation, Examples, Table SQL / API
> Reporter: Francesco Guardiani
> Assignee: Francesco Guardiani
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.15.0
>
> For more details:
> https://docs.google.com/document/d/12yDUCnvcwU2mODBKTHQ1xhfOq1ujYUrXltiN_rbhT34/edit?usp=sharing
[GitHub] [flink] flinkbot edited a comment on pull request #18718: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chi…
flinkbot edited a comment on pull request #18718: URL: https://github.com/apache/flink/pull/18718#issuecomment-1035898573

## CI report:
* 69d42893d3874932d77329e758818b743c528489 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31718)
* 70cdf760be9de80eaf3bc353b9ff3bb7e8fe80c1 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #18780: [FLINK-26160][pulsar][doc] update the doc of setUnboundedStopCursor()
flinkbot edited a comment on pull request #18780: URL: https://github.com/apache/flink/pull/18780#issuecomment-1040253405

## CI report:
* da76073dfc386e3159e933945d3bdb0ab8c824a9 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31538)
* 894bb5879180ef988b2448c0053f51b0aba2ac92 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #18787: [FLINK-26165][tests] Don't test NATIVE savepoints with Changelog enabled
flinkbot edited a comment on pull request #18787: URL: https://github.com/apache/flink/pull/18787#issuecomment-1040578372

## CI report:
* 853f0fd7bfd9d7d31cadf08b7742ed2ddf81253a Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31681)
* 37833966bc60ef06fcd5f97de8374d7d7e107f8d UNKNOWN
[GitHub] [flink] slinkydeveloper opened a new pull request #18812: [FLINK-25129][docs] Improvements to the table-planner-loader related docs
slinkydeveloper opened a new pull request #18812: URL: https://github.com/apache/flink/pull/18812 This is a follow-up to https://github.com/apache/flink/pull/18353. It improves the documentation for the table-planner-loader story in several parts of the docs, and also improves other related project configuration docs, such as the new `Running and packaging` section and changes to the connectors page. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-24677) JdbcBatchingOutputFormat should not generate circulate chaining of exceptions when flushing fails in timer thread
[ https://issues.apache.org/jira/browse/FLINK-24677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493762#comment-17493762 ] 张健 commented on FLINK-24677: [~TsReaper] I see class _JdbcBatchingOutputFormat_ was refactored into class _JdbcOutputFormat_ in flink release-1.14. Should I just fix this bug in _JdbcOutputFormat_ on the master branch, or also fix _JdbcBatchingOutputFormat_ in the branches before flink release-1.13? > JdbcBatchingOutputFormat should not generate circulate chaining of exceptions > when flushing fails in timer thread > - > > Key: FLINK-24677 > URL: https://issues.apache.org/jira/browse/FLINK-24677 > Project: Flink > Issue Type: Bug > Components: Connectors / JDBC >Affects Versions: 1.15.0 >Reporter: Caizhi Weng >Priority: Major > > This is reported from the [user mailing > list|https://lists.apache.org/thread.html/r3e725f52e4f325b9dcb790635cc642bd6018c4bca39f86c71b8a60f4%40%3Cuser.flink.apache.org%3E]. > In the timer thread created in {{JdbcBatchingOutputFormat#open}}, the > {{flushException}} field is recorded if the call to {{flush}} throws an > exception. This exception is used to fail the job in the main thread. > However {{JdbcBatchingOutputFormat#flush}} will also check for this exception > and will wrap it with a new layer of runtime exception. This will cause a > super long stack when the main thread finally discovers the exception and > fails. -- This message was sent by Atlassian Jira (v8.20.1#820001)
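The failure mode described in FLINK-24677 can be reproduced with a minimal, self-contained sketch. This is plain Java, not the actual Flink code: the field and method names (`flushException`, `flush`) mirror the report but are illustrative assumptions.

```java
// Minimal sketch (not the actual Flink connector code) of the circular
// exception chaining described in FLINK-24677.
public class CirculateChainingDemo {

    // Recorded by the timer thread when a periodic flush fails.
    volatile Exception flushException =
            new RuntimeException("Writing records to JDBC failed.");

    // Simplified flush(): it re-checks the recorded exception and wraps it
    // in a fresh RuntimeException, adding one cause layer per invocation.
    void flush() {
        if (flushException != null) {
            flushException =
                    new RuntimeException("Writing records to JDBC failed.", flushException);
        }
    }

    // Counts how many nested causes the exception has accumulated.
    static int causeDepth(Throwable t) {
        int depth = 0;
        for (Throwable c = t.getCause(); c != null; c = c.getCause()) {
            depth++;
        }
        return depth;
    }

    public static void main(String[] args) {
        CirculateChainingDemo format = new CirculateChainingDemo();
        // 100 timer ticks, each wrapping the same root failure once more...
        for (int i = 0; i < 100; i++) {
            format.flush();
        }
        // ...so the stack the main thread finally reports is 100 layers deep.
        System.out.println(causeDepth(format.flushException)); // prints 100
    }
}
```

A fix along the lines the ticket suggests would surface the recorded exception as-is (or wrap it at most once) rather than wrapping it again on every check.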
[GitHub] [flink] flinkbot edited a comment on pull request #18718: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chi…
flinkbot edited a comment on pull request #18718: URL: https://github.com/apache/flink/pull/18718#issuecomment-1035898573 ## CI report: * 69d42893d3874932d77329e758818b743c528489 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31718) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18780: [FLINK-26160][pulsar][doc] update the doc of setUnboundedStopCursor()
flinkbot edited a comment on pull request #18780: URL: https://github.com/apache/flink/pull/18780#issuecomment-1040253405 ## CI report: * da76073dfc386e3159e933945d3bdb0ab8c824a9 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31538) * 894bb5879180ef988b2448c0053f51b0aba2ac92 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31724) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18787: [FLINK-26165][tests] Don't test NATIVE savepoints with Changelog enabled
flinkbot edited a comment on pull request #18787: URL: https://github.com/apache/flink/pull/18787#issuecomment-1040578372 ## CI report: * 853f0fd7bfd9d7d31cadf08b7742ed2ddf81253a Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31681) * 37833966bc60ef06fcd5f97de8374d7d7e107f8d Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31725) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18806: [FLINK-26105][e2e] Fixes log file extension
flinkbot edited a comment on pull request #18806: URL: https://github.com/apache/flink/pull/18806#issuecomment-1041656779 ## CI report: * ea2f81891557df0fa6ce6cba818a37a18c011c6f UNKNOWN * b1c883f8c2fd5e038b06b9c20eb41f23fbbf6f63 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #18812: [FLINK-25129][docs] Improvements to the table-planner-loader related docs
flinkbot commented on pull request #18812: URL: https://github.com/apache/flink/pull/18812#issuecomment-1042692210 ## CI report: * 3594fc6828daaf7a8bae4dedff82d9a73e81d1cc UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #18812: [FLINK-25129][docs] Improvements to the table-planner-loader related docs
flinkbot commented on pull request #18812: URL: https://github.com/apache/flink/pull/18812#issuecomment-1042692615 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 3594fc6828daaf7a8bae4dedff82d9a73e81d1cc (Thu Feb 17 08:30:07 UTC 2022) **Warnings:** * Documentation files were touched, but no `docs/content.zh/` files: Update Chinese documentation or file Jira ticket. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] zentol commented on a change in pull request #18808: [FLINK-26014][docs] Add documentation for how to configure the working directory
zentol commented on a change in pull request #18808: URL: https://github.com/apache/flink/pull/18808#discussion_r808781782 ## File path: docs/content/docs/deployment/resource-providers/standalone/kubernetes.md ## @@ -246,6 +246,17 @@ To use Reactive Mode on Kubernetes, follow the same steps as for [deploying a jo Once you have deployed the *Application Cluster*, you can scale your job up or down by changing the replica count in the `flink-taskmanager` deployment. +### Enabling Local Recovery Across Pod Restarts + +In order to speed up recoveries in case of pod failures, you can leverage Flink's [working directory]({{< ref "docs/deployment/resource-providers/standalone/working_directory" >}}) feature together with local recovery. +If the working directory is configured to reside on a persistent volume that gets remounted to a restarted TaskManager pod, then Flink is able to recover state locally. +With the [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/), Kubernetes gives you the exact tool you need to map a pod to a persistent volume. + +So instead of deploying the TaskManagers as a Deployment, you need to configure a StatefulSet for the TaskManagers. +The StatefulSet allows to configure a volume claim template that you use to mount persistent volumes to the TaskManagers. Review comment: ```suggestion This requires to deploy the TaskManagers as a StatefulSet, which allows you to configure a volume claim template that is used to mount persistent volumes to the TaskManagers. ``` The leading "So instead" is bothering me. ## File path: docs/content/docs/deployment/resource-providers/standalone/kubernetes.md ## @@ -246,6 +246,17 @@ To use Reactive Mode on Kubernetes, follow the same steps as for [deploying a jo Once you have deployed the *Application Cluster*, you can scale your job up or down by changing the replica count in the `flink-taskmanager` deployment. 
+### Enabling Local Recovery Across Pod Restarts + +In order to speed up recoveries in case of pod failures, you can leverage Flink's [working directory]({{< ref "docs/deployment/resource-providers/standalone/working_directory" >}}) feature together with local recovery. +If the working directory is configured to reside on a persistent volume that gets remounted to a restarted TaskManager pod, then Flink is able to recover state locally. +With the [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/), Kubernetes gives you the exact tool you need to map a pod to a persistent volume. + +So instead of deploying the TaskManagers as a Deployment, you need to configure a StatefulSet for the TaskManagers. +The StatefulSet allows to configure a volume claim template that you use to mount persistent volumes to the TaskManagers. Review comment: Aren't volume claim templates also usable for deployments? It seems we need them primarily to have the stable ID, but this line reads differently. ## File path: docs/content/docs/deployment/resource-providers/standalone/kubernetes.md ## @@ -246,6 +246,17 @@ To use Reactive Mode on Kubernetes, follow the same steps as for [deploying a jo Once you have deployed the *Application Cluster*, you can scale your job up or down by changing the replica count in the `flink-taskmanager` deployment. +### Enabling Local Recovery Across Pod Restarts + +In order to speed up recoveries in case of pod failures, you can leverage Flink's [working directory]({{< ref "docs/deployment/resource-providers/standalone/working_directory" >}}) feature together with local recovery. +If the working directory is configured to reside on a persistent volume that gets remounted to a restarted TaskManager pod, then Flink is able to recover state locally. +With the [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/), Kubernetes gives you the exact tool you need to map a pod to a persistent volume. 
+ +So instead of deploying the TaskManagers as a Deployment, you need to configure a StatefulSet for the TaskManagers. +The StatefulSet allows to configure a volume claim template that you use to mount persistent volumes to the TaskManagers. Review comment: So maybe change the order a bit; first saying that we need a deterministic ID, and for that leverage the pod name of a statefulset. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
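The setup discussed in this review thread can be sketched as a Kubernetes manifest. This is an illustrative assumption, not text from the PR: the image tag, mount path, labels, and storage size are all made up. Only the StatefulSet-plus-`volumeClaimTemplates` shape is the point: each pod's stable identity remounts the same persistent volume after a restart, which is what makes local recovery possible.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: flink-taskmanager
spec:
  serviceName: taskmanager
  replicas: 2
  selector:
    matchLabels:
      app: flink
      component: taskmanager
  template:
    metadata:
      labels:
        app: flink
        component: taskmanager
    spec:
      containers:
        - name: taskmanager
          image: apache/flink:latest   # tag is an assumption
          args: ["taskmanager"]
          volumeMounts:
            - name: working-directory
              mountPath: /opt/flink/workdir   # assumed working-directory path
  # One persistent volume per pod; the StatefulSet's stable pod identity
  # remounts the same volume to a restarted pod, enabling local recovery.
  volumeClaimTemplates:
    - metadata:
        name: working-directory
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

A plain Deployment cannot give this guarantee, since replacement pods get fresh identities; that matches zentol's point about needing a deterministic ID.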
[GitHub] [flink] slinkydeveloper opened a new pull request #18813: [FLINK-26125][docs][table] Add new documentation for the CAST changes in 1.15
slinkydeveloper opened a new pull request #18813: URL: https://github.com/apache/flink/pull/18813 ## What is the purpose of the change This PR adds documentation for the new TRY_CAST and describes the various changes related to casting in 1.15, including a matrix of the supported cast tuples. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-26125) Doc overhaul for the CAST behaviour
[ https://issues.apache.org/jira/browse/FLINK-26125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-26125: --- Labels: pull-request-available (was: ) > Doc overhaul for the CAST behaviour > --- > > Key: FLINK-26125 > URL: https://issues.apache.org/jira/browse/FLINK-26125 > Project: Flink > Issue Type: Sub-task > Components: Documentation, Table SQL / API >Reporter: Francesco Guardiani >Assignee: Francesco Guardiani >Priority: Major > Labels: pull-request-available > > This includes: > * Proper documentation of the new TRY_CAST > * Add a CAST matrix to document which CAST tuples are supported -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (FLINK-26105) Rolling log filenames cause running HA (hashmap, async) end-to-end test to fail on azure
[ https://issues.apache.org/jira/browse/FLINK-26105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias Pohl updated FLINK-26105: -- Summary: Rolling log filenames cause running HA (hashmap, async) end-to-end test to fail on azure (was: Running HA (hashmap, async) end-to-end test failed on azure) > Rolling log filenames cause running HA (hashmap, async) end-to-end test to > fail on azure > > > Key: FLINK-26105 > URL: https://issues.apache.org/jira/browse/FLINK-26105 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.15.0 >Reporter: Yun Gao >Assignee: Matthias Pohl >Priority: Critical > Labels: pull-request-available, test-stability > > {code:java} > Feb 14 01:31:29 Killed TM @ 255483 > Feb 14 01:31:29 Starting new TM. > Feb 14 01:31:42 Killed TM @ 258722 > Feb 14 01:31:42 Starting new TM. > Feb 14 01:32:00 Checking for non-empty .out files... > Feb 14 01:32:00 No non-empty .out files. > Feb 14 01:32:00 FAILURE: A JM did not take over. > Feb 14 01:32:00 One or more tests FAILED. > Feb 14 01:32:00 Stopping job timeout watchdog (with pid=250820) > Feb 14 01:32:00 Killing JM watchdog @ 252644 > Feb 14 01:32:00 Killing TM watchdog @ 253262 > Feb 14 01:32:00 [FAIL] Test script contains errors. > Feb 14 01:32:00 Checking of logs skipped. > Feb 14 01:32:00 > Feb 14 01:32:00 [FAIL] 'Running HA (hashmap, async) end-to-end test' failed > after 2 minutes and 51 seconds! Test exited with exit code 1 > Feb 14 01:32:00 > 01:32:00 ##[group]Environment Information > Feb 14 01:32:01 Searching for .dump, .dumpstream and related files in > '/home/vsts/work/1/s' > dmesg: read kernel buffer failed: Operation not permitted > Feb 14 01:32:06 Stopping taskexecutor daemon (pid: 259377) on host > fv-az313-602. > Feb 14 01:32:07 Stopping standalonesession daemon (pid: 256528) on host > fv-az313-602. > Feb 14 01:32:08 Stopping zookeeper... > Feb 14 01:32:08 Stopping zookeeper daemon (pid: 251023) on host fv-az313-602. 
> Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 251636), because it is not > running anymore on fv-az313-602. > Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 255483), because it is not > running anymore on fv-az313-602. > Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 258722), because it is not > running anymore on fv-az313-602. > The STDIO streams did not close within 10 seconds of the exit event from > process '/usr/bin/bash'. This may indicate a child process inherited the > STDIO streams and has not yet exited. > ##[error]Bash exited with code '1'. > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31347&view=logs&j=e9d3d34f-3d15-59f4-0e3e-35067d100dfe&t=f8a6d3eb-38cf-5cca-9a99-d0badeb5fe62&l=8020 -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] zentol commented on a change in pull request #18807: [FLINK-26195][connectors/kafka] hotfix logging issues due to mixed junit versions
zentol commented on a change in pull request #18807: URL: https://github.com/apache/flink/pull/18807#discussion_r808789959 ## File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/source/reader/KafkaSourceReaderTest.java ## @@ -86,7 +86,7 @@ public class KafkaSourceReaderTest extends SourceReaderTestBase { private static final String TOPIC = "KafkaSourceReaderTest"; -@BeforeAll +@BeforeClass Review comment: Given that SourceReaderTestBase also uses junit5 this doesn't seem correct. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] MartijnVisser commented on a change in pull request #18780: [FLINK-26160][pulsar][doc] update the doc of setUnboundedStopCursor()
MartijnVisser commented on a change in pull request #18780: URL: https://github.com/apache/flink/pull/18780#discussion_r808789593 ## File path: flink-connectors/flink-connector-pulsar/src/main/java/org/apache/flink/connector/pulsar/source/PulsarSourceBuilder.java ## @@ -311,6 +311,11 @@ * return {@link Boundedness#CONTINUOUS_UNBOUNDED} even though it will stop at the stopping * offsets specified by the stopping offsets {@link StopCursor}. * + * However. to stop the connector user has to disable the auto partition discovery. As auto + * partition discovery always expected new splits to come and not exiting. To disable auto + * partition discovery, use builder.setConfig({@link + * PulsarSourceOptions.PULSAR_PARTITION_DISCOVERY_INTERVAL_MS}, -1). + * Review comment: This is probably also worthwhile to include below line 93? ## File path: flink-connectors/flink-connector-pulsar/src/main/java/org/apache/flink/connector/pulsar/source/PulsarSourceBuilder.java ## @@ -311,6 +311,11 @@ * return {@link Boundedness#CONTINUOUS_UNBOUNDED} even though it will stop at the stopping Review comment: For consistency purposes, I think it would be great if the text from line 89-93 and 303-307 can be copied over, except for the different methods of course. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] zentol commented on a change in pull request #18807: [FLINK-26195][connectors/kafka] hotfix logging issues due to mixed junit versions
zentol commented on a change in pull request #18807: URL: https://github.com/apache/flink/pull/18807#discussion_r808789959 ## File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/source/reader/KafkaSourceReaderTest.java ## @@ -86,7 +86,7 @@ public class KafkaSourceReaderTest extends SourceReaderTestBase { private static final String TOPIC = "KafkaSourceReaderTest"; -@BeforeAll +@BeforeClass Review comment: Given that SourceReaderTestBase also uses junit5 (but uses TestLogger) this doesn't seem correct. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-26105) Rolling log filenames cause end-to-end test to fail (example test failure "Running HA (hashmap, async)")
[ https://issues.apache.org/jira/browse/FLINK-26105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias Pohl updated FLINK-26105: -- Summary: Rolling log filenames cause end-to-end test to fail (example test failure "Running HA (hashmap, async)") (was: Rolling log filenames cause running HA (hashmap, async) end-to-end test to fail on azure) > Rolling log filenames cause end-to-end test to fail (example test failure > "Running HA (hashmap, async)") > > > Key: FLINK-26105 > URL: https://issues.apache.org/jira/browse/FLINK-26105 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.15.0 >Reporter: Yun Gao >Assignee: Matthias Pohl >Priority: Critical > Labels: pull-request-available, test-stability > > {code:java} > Feb 14 01:31:29 Killed TM @ 255483 > Feb 14 01:31:29 Starting new TM. > Feb 14 01:31:42 Killed TM @ 258722 > Feb 14 01:31:42 Starting new TM. > Feb 14 01:32:00 Checking for non-empty .out files... > Feb 14 01:32:00 No non-empty .out files. > Feb 14 01:32:00 FAILURE: A JM did not take over. > Feb 14 01:32:00 One or more tests FAILED. > Feb 14 01:32:00 Stopping job timeout watchdog (with pid=250820) > Feb 14 01:32:00 Killing JM watchdog @ 252644 > Feb 14 01:32:00 Killing TM watchdog @ 253262 > Feb 14 01:32:00 [FAIL] Test script contains errors. > Feb 14 01:32:00 Checking of logs skipped. > Feb 14 01:32:00 > Feb 14 01:32:00 [FAIL] 'Running HA (hashmap, async) end-to-end test' failed > after 2 minutes and 51 seconds! Test exited with exit code 1 > Feb 14 01:32:00 > 01:32:00 ##[group]Environment Information > Feb 14 01:32:01 Searching for .dump, .dumpstream and related files in > '/home/vsts/work/1/s' > dmesg: read kernel buffer failed: Operation not permitted > Feb 14 01:32:06 Stopping taskexecutor daemon (pid: 259377) on host > fv-az313-602. > Feb 14 01:32:07 Stopping standalonesession daemon (pid: 256528) on host > fv-az313-602. > Feb 14 01:32:08 Stopping zookeeper... 
> Feb 14 01:32:08 Stopping zookeeper daemon (pid: 251023) on host fv-az313-602. > Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 251636), because it is not > running anymore on fv-az313-602. > Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 255483), because it is not > running anymore on fv-az313-602. > Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 258722), because it is not > running anymore on fv-az313-602. > The STDIO streams did not close within 10 seconds of the exit event from > process '/usr/bin/bash'. This may indicate a child process inherited the > STDIO streams and has not yet exited. > ##[error]Bash exited with code '1'. > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31347&view=logs&j=e9d3d34f-3d15-59f4-0e3e-35067d100dfe&t=f8a6d3eb-38cf-5cca-9a99-d0badeb5fe62&l=8020 -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] flinkbot edited a comment on pull request #18785: [FLINK-26167][table-planner] Explicitly set the partitioner for the sql operators whose shuffle and sort are removed
flinkbot edited a comment on pull request #18785: URL: https://github.com/apache/flink/pull/18785#issuecomment-1040524147 ## CI report: * 15dc3494e8378bd67fd2ca859eeb317d88d45c95 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31646) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18806: [FLINK-26105][e2e] Fixes log file extension
flinkbot edited a comment on pull request #18806: URL: https://github.com/apache/flink/pull/18806#issuecomment-1041656779 ## CI report: * ea2f81891557df0fa6ce6cba818a37a18c011c6f UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18810: [FLINK-25289][hotfix] use the normal jar in flink-end-to-end-tests instead of a separate one
flinkbot edited a comment on pull request #18810: URL: https://github.com/apache/flink/pull/18810#issuecomment-1042612648 ## CI report: * 5ec04ad6ed6a32bb3e6b7167fbacda8511f12b48 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31712) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] (FLINK-25246) Make benchmarks runnable on Java 11
[ https://issues.apache.org/jira/browse/FLINK-25246 ] Yun Gao deleted comment on FLINK-25246: - was (Author: gaoyunhaii): Hi [~akalashnikov]~ The previous tests were based on the Alibaba JDK, thus the results might not be exactly the same. It also changed some JDK parameters; the ones that also apply to OpenJDK / Oracle JDK are _-XX:-CompactStrings -XX:-UseBiasedLocking -XX:-UseCountedLoopSafepoint -XX:FreqInlineSize=500 -XX:InlineSmallCode=3000_ . The above option changes might be related to the G1 garbage collector, so if convenient could you first try changing the default GC back to CMS for JDK 11 to see if it makes a difference? > Make benchmarks runnable on Java 11 > --- > > Key: FLINK-25246 > URL: https://issues.apache.org/jira/browse/FLINK-25246 > Project: Flink > Issue Type: Sub-task > Components: Benchmarks >Reporter: Chesnay Schepler >Assignee: Anton Kalashnikov >Priority: Major > Labels: pull-request-available > Attachments: benchmarksResult.csv, java8vsjava11.png > > > We should find a way to run our benchmarks on Java 11 to prepare for an > eventual migration. Whether this means running Java 8/11 side-by-side, or > only Java 11 is up for debate. > To clarify, this ticket is only about setting up the infrastructure, not > about resolving any performance issues. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] flinkbot edited a comment on pull request #18812: [FLINK-25129][docs] Improvements to the table-planner-loader related docs
flinkbot edited a comment on pull request #18812: URL: https://github.com/apache/flink/pull/18812#issuecomment-1042692210 ## CI report: * 3594fc6828daaf7a8bae4dedff82d9a73e81d1cc Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31726) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #18813: [FLINK-26125][docs][table] Add new documentation for the CAST changes in 1.15
flinkbot commented on pull request #18813: URL: https://github.com/apache/flink/pull/18813#issuecomment-1042696330 ## CI report: * 5b42c60afe85ce7cf91a95d57c2a736a81677af8 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #18813: [FLINK-26125][docs][table] Add new documentation for the CAST changes in 1.15
flinkbot commented on pull request #18813: URL: https://github.com/apache/flink/pull/18813#issuecomment-1042697082 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 5b42c60afe85ce7cf91a95d57c2a736a81677af8 (Thu Feb 17 08:36:06 UTC 2022) **Warnings:** * Documentation files were touched, but no `docs/content.zh/` files: Update Chinese documentation or file Jira ticket. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18718: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chi…
flinkbot edited a comment on pull request #18718: URL: https://github.com/apache/flink/pull/18718#issuecomment-1035898573 ## CI report: * 69d42893d3874932d77329e758818b743c528489 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31718) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #18797: [FLINK-26180] Update docs to introduce the compaction for FileSink.
flinkbot edited a comment on pull request #18797: URL: https://github.com/apache/flink/pull/18797#issuecomment-1041240844 ## CI report: * 270a217fe153986cc08b9e0b2cc9a1327e20b7cf Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31715) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #18806: [FLINK-26105][e2e] Fixes log file extension
flinkbot edited a comment on pull request #18806: URL: https://github.com/apache/flink/pull/18806#issuecomment-1041656779 ## CI report: * ea2f81891557df0fa6ce6cba818a37a18c011c6f UNKNOWN * b1c883f8c2fd5e038b06b9c20eb41f23fbbf6f63 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #18812: [FLINK-25129][docs] Improvements to the table-planner-loader related docs
flinkbot edited a comment on pull request #18812: URL: https://github.com/apache/flink/pull/18812#issuecomment-1042692210 ## CI report: * 3594fc6828daaf7a8bae4dedff82d9a73e81d1cc Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31726) * c7273cc2faa20382e0b9e182ae6d3a64b9ba5e48 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #18813: [FLINK-26125][docs][table] Add new documentation for the CAST changes in 1.15
flinkbot edited a comment on pull request #18813: URL: https://github.com/apache/flink/pull/18813#issuecomment-1042696330 ## CI report: * 5b42c60afe85ce7cf91a95d57c2a736a81677af8 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31727) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] hackergin commented on pull request #18058: [FLINK-24571][connectors/elasticsearch] Supports a system time function(now() and current_timestamp) in index pattern
hackergin commented on pull request #18058: URL: https://github.com/apache/flink/pull/18058#issuecomment-1042703294 @flinkbot run azure
[jira] [Updated] (FLINK-26105) Rolling log filenames cause end-to-end test to fail (example test failure "Running HA (hashmap, async)")
[ https://issues.apache.org/jira/browse/FLINK-26105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias Pohl updated FLINK-26105: -- Affects Version/s: 1.14.3 1.13.6 > Rolling log filenames cause end-to-end test to fail (example test failure > "Running HA (hashmap, async)") > > > Key: FLINK-26105 > URL: https://issues.apache.org/jira/browse/FLINK-26105 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.15.0, 1.13.6, 1.14.3 >Reporter: Yun Gao >Assignee: Matthias Pohl >Priority: Critical > Labels: pull-request-available, test-stability > > {code:java} > Feb 14 01:31:29 Killed TM @ 255483 > Feb 14 01:31:29 Starting new TM. > Feb 14 01:31:42 Killed TM @ 258722 > Feb 14 01:31:42 Starting new TM. > Feb 14 01:32:00 Checking for non-empty .out files... > Feb 14 01:32:00 No non-empty .out files. > Feb 14 01:32:00 FAILURE: A JM did not take over. > Feb 14 01:32:00 One or more tests FAILED. > Feb 14 01:32:00 Stopping job timeout watchdog (with pid=250820) > Feb 14 01:32:00 Killing JM watchdog @ 252644 > Feb 14 01:32:00 Killing TM watchdog @ 253262 > Feb 14 01:32:00 [FAIL] Test script contains errors. > Feb 14 01:32:00 Checking of logs skipped. > Feb 14 01:32:00 > Feb 14 01:32:00 [FAIL] 'Running HA (hashmap, async) end-to-end test' failed > after 2 minutes and 51 seconds! Test exited with exit code 1 > Feb 14 01:32:00 > 01:32:00 ##[group]Environment Information > Feb 14 01:32:01 Searching for .dump, .dumpstream and related files in > '/home/vsts/work/1/s' > dmesg: read kernel buffer failed: Operation not permitted > Feb 14 01:32:06 Stopping taskexecutor daemon (pid: 259377) on host > fv-az313-602. > Feb 14 01:32:07 Stopping standalonesession daemon (pid: 256528) on host > fv-az313-602. > Feb 14 01:32:08 Stopping zookeeper... > Feb 14 01:32:08 Stopping zookeeper daemon (pid: 251023) on host fv-az313-602. 
> Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 251636), because it is not > running anymore on fv-az313-602. > Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 255483), because it is not > running anymore on fv-az313-602. > Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 258722), because it is not > running anymore on fv-az313-602. > The STDIO streams did not close within 10 seconds of the exit event from > process '/usr/bin/bash'. This may indicate a child process inherited the > STDIO streams and has not yet exited. > ##[error]Bash exited with code '1'. > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31347&view=logs&j=e9d3d34f-3d15-59f4-0e3e-35067d100dfe&t=f8a6d3eb-38cf-5cca-9a99-d0badeb5fe62&l=8020 -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] flinkbot edited a comment on pull request #18058: [FLINK-24571][connectors/elasticsearch] Supports a system time function(now() and current_timestamp) in index pattern
flinkbot edited a comment on pull request #18058: URL: https://github.com/apache/flink/pull/18058#issuecomment-988949831 ## CI report: * 93c33001cf55690369281de939bd79bb3727ad9a UNKNOWN * 47b985560b3efeb6b084c62e67ab56c2eb2cd7a9 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31705) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] gaborgsomogyi commented on pull request #18796: [FLINK-26166][runtime-web] Add auto newline detection to prettier formatter
gaborgsomogyi commented on pull request #18796: URL: https://github.com/apache/flink/pull/18796#issuecomment-1042704830 @mbalassi I've started a build on my Windows box and am gathering the logs. When it's done I can hopefully attach them here.
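For context on the change being verified above: Prettier exposes an `endOfLine` option, and setting it to `auto` makes the formatter keep whatever line endings a file already uses (detected from the first line) instead of reporting CR LF as an error, which is what FLINK-26166 describes. A minimal `.prettierrc` sketch; whether the Flink PR configures exactly this option (rather than, say, a CLI flag) is an assumption here:

```json
{
  "endOfLine": "auto"
}
```

With `auto`, a checkout using Windows line endings formats cleanly, at the cost of allowing mixed line endings across the repository.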
[jira] [Commented] (FLINK-26105) Rolling log filenames cause end-to-end test to fail (example test failure "Running HA (hashmap, async)")
[ https://issues.apache.org/jira/browse/FLINK-26105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493772#comment-17493772 ] Matthias Pohl commented on FLINK-26105: --- I updated the title and added additional affected versions because this issue is also present in older versions of Flink. > Rolling log filenames cause end-to-end test to fail (example test failure > "Running HA (hashmap, async)") > > > Key: FLINK-26105 > URL: https://issues.apache.org/jira/browse/FLINK-26105 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.15.0, 1.13.6, 1.14.3 >Reporter: Yun Gao >Assignee: Matthias Pohl >Priority: Critical > Labels: pull-request-available, test-stability > > {code:java} > Feb 14 01:31:29 Killed TM @ 255483 > Feb 14 01:31:29 Starting new TM. > Feb 14 01:31:42 Killed TM @ 258722 > Feb 14 01:31:42 Starting new TM. > Feb 14 01:32:00 Checking for non-empty .out files... > Feb 14 01:32:00 No non-empty .out files. > Feb 14 01:32:00 FAILURE: A JM did not take over. > Feb 14 01:32:00 One or more tests FAILED. > Feb 14 01:32:00 Stopping job timeout watchdog (with pid=250820) > Feb 14 01:32:00 Killing JM watchdog @ 252644 > Feb 14 01:32:00 Killing TM watchdog @ 253262 > Feb 14 01:32:00 [FAIL] Test script contains errors. > Feb 14 01:32:00 Checking of logs skipped. > Feb 14 01:32:00 > Feb 14 01:32:00 [FAIL] 'Running HA (hashmap, async) end-to-end test' failed > after 2 minutes and 51 seconds! Test exited with exit code 1 > Feb 14 01:32:00 > 01:32:00 ##[group]Environment Information > Feb 14 01:32:01 Searching for .dump, .dumpstream and related files in > '/home/vsts/work/1/s' > dmesg: read kernel buffer failed: Operation not permitted > Feb 14 01:32:06 Stopping taskexecutor daemon (pid: 259377) on host > fv-az313-602. > Feb 14 01:32:07 Stopping standalonesession daemon (pid: 256528) on host > fv-az313-602. > Feb 14 01:32:08 Stopping zookeeper... 
> Feb 14 01:32:08 Stopping zookeeper daemon (pid: 251023) on host fv-az313-602. > Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 251636), because it is not > running anymore on fv-az313-602. > Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 255483), because it is not > running anymore on fv-az313-602. > Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 258722), because it is not > running anymore on fv-az313-602. > The STDIO streams did not close within 10 seconds of the exit event from > process '/usr/bin/bash'. This may indicate a child process inherited the > STDIO streams and has not yet exited. > ##[error]Bash exited with code '1'. > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31347&view=logs&j=e9d3d34f-3d15-59f4-0e3e-35067d100dfe&t=f8a6d3eb-38cf-5cca-9a99-d0badeb5fe62&l=8020
[GitHub] [flink] flinkbot edited a comment on pull request #18718: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chi…
flinkbot edited a comment on pull request #18718: URL: https://github.com/apache/flink/pull/18718#issuecomment-1035898573 ## CI report: * 69d42893d3874932d77329e758818b743c528489 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31718) * 70cdf760be9de80eaf3bc353b9ff3bb7e8fe80c1 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #18806: [FLINK-26105][e2e] Fixes log file extension
flinkbot edited a comment on pull request #18806: URL: https://github.com/apache/flink/pull/18806#issuecomment-1041656779 ## CI report: * ea2f81891557df0fa6ce6cba818a37a18c011c6f UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #18812: [FLINK-25129][docs] Improvements to the table-planner-loader related docs
flinkbot edited a comment on pull request #18812: URL: https://github.com/apache/flink/pull/18812#issuecomment-1042692210 ## CI report: * 3594fc6828daaf7a8bae4dedff82d9a73e81d1cc Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31726) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[jira] [Commented] (FLINK-26192) PulsarOrderedSourceReaderTest fails with exit code 255
[ https://issues.apache.org/jira/browse/FLINK-26192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493777#comment-17493777 ] Yufei Zhang commented on FLINK-26192: - This looks to me like the test VM crashed; let's keep observing it for a while. If it does not happen again for a month, can we close it? > PulsarOrderedSourceReaderTest fails with exit code 255 > -- > > Key: FLINK-26192 > URL: https://issues.apache.org/jira/browse/FLINK-26192 > Project: Flink > Issue Type: Bug > Components: Connectors / Pulsar >Affects Versions: 1.15.0 >Reporter: Dawid Wysakowicz >Priority: Major > > https://dev.azure.com/wysakowiczdawid/Flink/_build/results?buildId=1367&view=logs&j=f3dc9b18-b77a-55c1-591e-264c46fe44d1&t=2d3cd81e-1c37-5c31-0ee4-f5d5cdb9324d&l=26787 > {code} > Feb 16 13:49:46 [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test (default-test) > on project flink-connector-pulsar: There are test failures. > Feb 16 13:49:46 [ERROR] > Feb 16 13:49:46 [ERROR] Please refer to > /__w/1/s/flink-connectors/flink-connector-pulsar/target/surefire-reports for > the individual test results. > Feb 16 13:49:46 [ERROR] Please refer to dump files (if any exist) > [date].dump, [date]-jvmRun[N].dump and [date].dumpstream. > Feb 16 13:49:46 [ERROR] The forked VM terminated without properly saying > goodbye. VM crash or System.exit called? 
> Feb 16 13:49:46 [ERROR] Command was /bin/sh -c cd > /__w/1/s/flink-connectors/flink-connector-pulsar && > /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m > -Dmvn.forkNumber=1 > -XX:-UseGCOverheadLimit -Duser.country=US -Duser.language=en -jar > /__w/1/s/flink-connectors/flink-connector-pulsar/target/surefire/surefirebooter3139517882560779643.jar > /__w/1/s/flink-connectors/flink-connector-pulsar/target/surefire > 2022-02-16T13-48-34_435-jvmRun1 surefire3358354372075396323tmp > surefire_08509996975514960300tmp > Feb 16 13:49:46 [ERROR] Error occurred in starting fork, check output in log > Feb 16 13:49:46 [ERROR] Process Exit Code: 255 > Feb 16 13:49:46 [ERROR] > org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM > terminated without properly saying goodbye. VM crash or System.exit called? > Feb 16 13:49:46 [ERROR] Command was /bin/sh -c cd > /__w/1/s/flink-connectors/flink-connector-pulsar && > /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m > -Dmvn.forkNumber=1 > -XX:-UseGCOverheadLimit -Duser.country=US -Duser.language=en -jar > /__w/1/s/flink-connectors/flink-connector-pulsar/target/surefire/surefirebooter3139517882560779643.jar > /__w/1/s/flink-connectors/flink-connector-pulsar/target/surefire > 2022-02-16T13-48-34_435-jvmRun1 surefire3358354372075396323tmp > surefire_08509996975514960300tmp > Feb 16 13:49:46 [ERROR] Error occurred in starting fork, check output in log > Feb 16 13:49:46 [ERROR] Process Exit Code: 255 > Feb 16 13:49:46 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:748) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:305) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:265) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1314) > Feb 16 
13:49:46 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1159) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:932) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120) > Feb
[GitHub] [flink] matriv commented on a change in pull request #18804: [hotfix][docs][table] Document the new `TablePipeline` API object
matriv commented on a change in pull request #18804: URL: https://github.com/apache/flink/pull/18804#discussion_r808802789 ## File path: docs/content/docs/dev/table/tableApi.md ## @@ -1454,21 +1454,23 @@ result3 = table.order_by(table.a.asc).offset(10).fetch(5) {{< label Batch >}} {{< label Streaming >}} -Similar to the `INSERT INTO` clause in a SQL query, the method performs an insertion into a registered output table. The `executeInsert()` method will immediately submit a Flink job which execute the insert operation. +Similar to the `INSERT INTO` clause in a SQL query, the method performs an insertion into a registered output table. +The `insertInto()` method will translate the `INSERT INTO` to a `TablePipeline`. Review comment: maybe `convert` is better than `translate`? ## File path: docs/content/docs/dev/table/common.md ## @@ -750,9 +763,9 @@ A query is internally represented as a logical query plan and is translated in t A Table API or SQL query is translated when: * `TableEnvironment.executeSql()` is called. This method is used for executing a given statement, and the sql query is translated immediately once this method is called. -* `Table.executeInsert()` is called. This method is used for inserting the table content to the given sink path, and the Table API is translated immediately once this method is called. +* `Table.insertInto()` is called. This method is used for translating an insertion of the table content to the given sink path into a `TablePipeline`, and the Table API is translated immediately once this method is called. Using `TablePipeline.execute()` will execute the pipeline. Review comment: `translating` -> `converting`, but not insisting, it's personal opinion.
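The review comments above concern the 1.15 split of `Table.executeInsert()` into `Table.insertInto()`, which only builds a `TablePipeline`, plus an explicit `TablePipeline.execute()` that actually submits the job. A rough sketch of the two styles (illustrative only, not compilable on its own: it assumes a Flink `TableEnvironment` named `tEnv` and a registered sink table called `OutputTable`):

```java
// Pre-1.15 style: executeInsert() immediately submits a Flink job
// that performs the insertion.
TableResult result = tEnv.from("Orders").executeInsert("OutputTable");

// 1.15 style discussed in the docs above: insertInto() only converts
// the insertion into a TablePipeline; nothing runs until execute().
TablePipeline pipeline = tEnv.from("Orders").insertInto("OutputTable");
pipeline.execute();
```

The separation lets callers inspect or explain the pipeline before triggering execution, which is why the docs distinguish "translation" from "execution".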
[GitHub] [flink] MartijnVisser commented on pull request #18790: [FLINK-26018][connector/common] Create per-split output on split addition in SourceOperator
MartijnVisser commented on pull request #18790: URL: https://github.com/apache/flink/pull/18790#issuecomment-1042708445 @flinkbot run azure
[GitHub] [flink] flinkbot edited a comment on pull request #18718: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chi…
flinkbot edited a comment on pull request #18718: URL: https://github.com/apache/flink/pull/18718#issuecomment-1035898573 ## CI report: * 69d42893d3874932d77329e758818b743c528489 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31718) * 70cdf760be9de80eaf3bc353b9ff3bb7e8fe80c1 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31728) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[jira] [Created] (FLINK-26213) Translate "Deduplication" page into Chinese
Edmond Wang created FLINK-26213: --- Summary: Translate "Deduplication" page into Chinese Key: FLINK-26213 URL: https://issues.apache.org/jira/browse/FLINK-26213 Project: Flink Issue Type: Improvement Components: chinese-translation, Documentation Affects Versions: 1.14.4 Reporter: Edmond Wang The page url is https://nightlies.apache.org/flink/flink-docs-release-1.14/zh/docs/dev/table/sql/queries/deduplication/ The markdown file is located in *docs/content.zh/docs/dev/table/sql/queries/deduplication.md*
[GitHub] [flink] flinkbot edited a comment on pull request #18790: [FLINK-26018][connector/common] Create per-split output on split addition in SourceOperator
flinkbot edited a comment on pull request #18790: URL: https://github.com/apache/flink/pull/18790#issuecomment-1041025841 ## CI report: * f11bc4e5c50b90d5410db10c4ae532ca888a3a48 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31590) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #18812: [FLINK-25129][docs] Improvements to the table-planner-loader related docs
flinkbot edited a comment on pull request #18812: URL: https://github.com/apache/flink/pull/18812#issuecomment-1042692210 ## CI report: * 3594fc6828daaf7a8bae4dedff82d9a73e81d1cc Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31726) * c7273cc2faa20382e0b9e182ae6d3a64b9ba5e48 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[jira] [Commented] (FLINK-26213) Translate "Deduplication" page into Chinese
[ https://issues.apache.org/jira/browse/FLINK-26213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493780#comment-17493780 ] Edmond Wang commented on FLINK-26213: - Hi [~pnowojski], I found that this page needs to be translated and I'm happy to do it. Please assign this ticket to me, thanks. > Translate "Deduplication" page into Chinese > --- > > Key: FLINK-26213 > URL: https://issues.apache.org/jira/browse/FLINK-26213 > Project: Flink > Issue Type: Improvement > Components: chinese-translation, Documentation >Affects Versions: 1.14.4 >Reporter: Edmond Wang >Priority: Major > > The page url is > https://nightlies.apache.org/flink/flink-docs-release-1.14/zh/docs/dev/table/sql/queries/deduplication/ > The markdown file is located in > *docs/content.zh/docs/dev/table/sql/queries/deduplication.md*
[jira] [Commented] (FLINK-26192) PulsarOrderedSourceReaderTest fails with exit code 255
[ https://issues.apache.org/jira/browse/FLINK-26192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493781#comment-17493781 ] Chesnay Schepler commented on FLINK-26192: -- Given the amount of issues that pulsar is causing on CI, I'd say no to that. > PulsarOrderedSourceReaderTest fails with exit code 255 > -- > > Key: FLINK-26192 > URL: https://issues.apache.org/jira/browse/FLINK-26192 > Project: Flink > Issue Type: Bug > Components: Connectors / Pulsar >Affects Versions: 1.15.0 >Reporter: Dawid Wysakowicz >Priority: Major > > https://dev.azure.com/wysakowiczdawid/Flink/_build/results?buildId=1367&view=logs&j=f3dc9b18-b77a-55c1-591e-264c46fe44d1&t=2d3cd81e-1c37-5c31-0ee4-f5d5cdb9324d&l=26787 > {code} > Feb 16 13:49:46 [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test (default-test) > on project flink-connector-pulsar: There are test failures. > Feb 16 13:49:46 [ERROR] > Feb 16 13:49:46 [ERROR] Please refer to > /__w/1/s/flink-connectors/flink-connector-pulsar/target/surefire-reports for > the individual test results. > Feb 16 13:49:46 [ERROR] Please refer to dump files (if any exist) > [date].dump, [date]-jvmRun[N].dump and [date].dumpstream. > Feb 16 13:49:46 [ERROR] The forked VM terminated without properly saying > goodbye. VM crash or System.exit called? 
> Feb 16 13:49:46 [ERROR] Command was /bin/sh -c cd > /__w/1/s/flink-connectors/flink-connector-pulsar && > /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m > -Dmvn.forkNumber=1 > -XX:-UseGCOverheadLimit -Duser.country=US -Duser.language=en -jar > /__w/1/s/flink-connectors/flink-connector-pulsar/target/surefire/surefirebooter3139517882560779643.jar > /__w/1/s/flink-connectors/flink-connector-pulsar/target/surefire > 2022-02-16T13-48-34_435-jvmRun1 surefire3358354372075396323tmp > surefire_08509996975514960300tmp > Feb 16 13:49:46 [ERROR] Error occurred in starting fork, check output in log > Feb 16 13:49:46 [ERROR] Process Exit Code: 255 > Feb 16 13:49:46 [ERROR] > org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM > terminated without properly saying goodbye. VM crash or System.exit called? > Feb 16 13:49:46 [ERROR] Command was /bin/sh -c cd > /__w/1/s/flink-connectors/flink-connector-pulsar && > /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m > -Dmvn.forkNumber=1 > -XX:-UseGCOverheadLimit -Duser.country=US -Duser.language=en -jar > /__w/1/s/flink-connectors/flink-connector-pulsar/target/surefire/surefirebooter3139517882560779643.jar > /__w/1/s/flink-connectors/flink-connector-pulsar/target/surefire > 2022-02-16T13-48-34_435-jvmRun1 surefire3358354372075396323tmp > surefire_08509996975514960300tmp > Feb 16 13:49:46 [ERROR] Error occurred in starting fork, check output in log > Feb 16 13:49:46 [ERROR] Process Exit Code: 255 > Feb 16 13:49:46 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:748) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:305) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:265) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1314) > Feb 16 
13:49:46 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1159) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:932) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120) > Feb 16 13:49:46 [ERROR] at > org.apache.maven.Defau
[jira] [Commented] (FLINK-26211) PulsarSourceUnorderedE2ECase failed on azure due to multiple causes
[ https://issues.apache.org/jira/browse/FLINK-26211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493786#comment-17493786 ] Yufei Zhang commented on FLINK-26211: - This seems to be a duplicate of FLINK-26210. > PulsarSourceUnorderedE2ECase failed on azure due to multiple causes > --- > > Key: FLINK-26211 > URL: https://issues.apache.org/jira/browse/FLINK-26211 > Project: Flink > Issue Type: Bug > Components: Connectors / Pulsar >Affects Versions: 1.15.0 >Reporter: Yun Gao >Priority: Critical > Labels: test-stability > > {code:java} > Feb 17 04:58:33 [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, > Time elapsed: 85.664 s <<< FAILURE! - in > org.apache.flink.tests.util.pulsar.PulsarSourceUnorderedE2ECase > Feb 17 04:58:33 [ERROR] > org.apache.flink.tests.util.pulsar.PulsarSourceUnorderedE2ECase.testOneSplitWithMultipleConsumers(TestEnvironment, > DataStreamSourceExternalContext)[1] Time elapsed: 0.571 s <<< ERROR! > Feb 17 04:58:33 > org.apache.pulsar.client.admin.PulsarAdminException$GettingAuthenticationDataException: > > Feb 17 04:58:33 java.util.concurrent.ExecutionException: > org.apache.pulsar.client.admin.PulsarAdminException$GettingAuthenticationDataException: > A MultiException has 2 exceptions. They are: > Feb 17 04:58:33 1. java.lang.NoClassDefFoundError: > javax/xml/bind/annotation/XmlElement > Feb 17 04:58:33 2. 
java.lang.IllegalStateException: Unable to perform > operation: create on > org.apache.pulsar.shade.org.glassfish.jersey.jackson.internal.DefaultJacksonJaxbJsonProvider > Feb 17 04:58:33 > Feb 17 04:58:33 at > org.apache.pulsar.client.admin.internal.BaseResource.request(BaseResource.java:70) > Feb 17 04:58:33 at > org.apache.pulsar.client.admin.internal.BaseResource.asyncPutRequest(BaseResource.java:120) > Feb 17 04:58:33 at > org.apache.pulsar.client.admin.internal.TopicsImpl.createPartitionedTopicAsync(TopicsImpl.java:430) > Feb 17 04:58:33 at > org.apache.pulsar.client.admin.internal.TopicsImpl.createPartitionedTopicAsync(TopicsImpl.java:421) > Feb 17 04:58:33 at > org.apache.pulsar.client.admin.internal.TopicsImpl.createPartitionedTopic(TopicsImpl.java:373) > Feb 17 04:58:33 at > org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.lambda$createPartitionedTopic$11(PulsarRuntimeOperator.java:504) > Feb 17 04:58:33 at > org.apache.flink.connector.pulsar.common.utils.PulsarExceptionUtils.sneaky(PulsarExceptionUtils.java:60) > Feb 17 04:58:33 at > org.apache.flink.connector.pulsar.common.utils.PulsarExceptionUtils.sneakyAdmin(PulsarExceptionUtils.java:50) > Feb 17 04:58:33 at > org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.createPartitionedTopic(PulsarRuntimeOperator.java:504) > Feb 17 04:58:33 at > org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.createTopic(PulsarRuntimeOperator.java:184) > Feb 17 04:58:33 at > org.apache.flink.tests.util.pulsar.cases.KeySharedSubscriptionContext.createSourceSplitDataWriter(KeySharedSubscriptionContext.java:111) > Feb 17 04:58:33 at > org.apache.flink.tests.util.pulsar.common.UnorderedSourceTestSuiteBase.testOneSplitWithMultipleConsumers(UnorderedSourceTestSuiteBase.java:73) > Feb 17 04:58:33 at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > Feb 17 04:58:33 at > 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Feb 17 04:58:33 at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Feb 17 04:58:33 at > java.base/java.lang.reflect.Method.invoke(Method.java:566) > Feb 17 04:58:33 at > org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725) > Feb 17 04:58:33 at > org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) > Feb 17 04:58:33 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) > Feb 17 04:58:33 at > org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) > Feb 17 04:58:33 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31702&view=logs&j=6e8542d7-de38-5a33-4aca-458d6c87066d&t=5846934b-7a4f-545b-e5b0-eb4d8bda32e1&l=15537 -- This message was sent by Atlassian Jira (v8.20.1#820001)
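The `NoClassDefFoundError: javax/xml/bind/annotation/XmlElement` above typically indicates the test JVM is Java 11 or newer, where the JAXB (`java.xml.bind`) module was removed from the JDK (JEP 320), so Pulsar's shaded Jersey/Jackson provider can no longer load it. A minimal classpath probe (illustrative names, not part of this thread) can confirm which side of the removal a given JVM is on:

```java
// Minimal sketch: probe whether a class is loadable on the current JVM.
// On Java 8 the javax.xml.bind check prints true; on Java 11+ it prints
// false unless a JAXB implementation (e.g. jaxb-api) is on the classpath.
public class ClasspathProbe {
    static boolean onClasspath(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("javax.xml.bind.annotation.XmlElement present: "
                + onClasspath("javax.xml.bind.annotation.XmlElement"));
    }
}
```

If the probe reports false, the usual remedies are adding an explicit JAXB dependency to the test classpath or running the suite on Java 8 (as the CI command lines in this thread do); which fix, if any, the Flink build adopted is not stated here.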
[jira] [Commented] (FLINK-26192) PulsarOrderedSourceReaderTest fails with exit code 255
[ https://issues.apache.org/jira/browse/FLINK-26192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493785#comment-17493785 ] Chesnay Schepler commented on FLINK-26192: -- The PulsarEmbeddedRuntime has a codepath for shutting down the JVM, see #startPulsarService. > PulsarOrderedSourceReaderTest fails with exit code 255 > -- > > Key: FLINK-26192 > URL: https://issues.apache.org/jira/browse/FLINK-26192 > Project: Flink > Issue Type: Bug > Components: Connectors / Pulsar >Affects Versions: 1.15.0 >Reporter: Dawid Wysakowicz >Priority: Major
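An exit code of 255 from the forked VM is consistent with code under test calling `System.exit(-1)` (process exit status is taken mod 256 on Unix), which matches a shutdown codepath like the one in PulsarEmbeddedRuntime#startPulsarService. A sketch of the safer pattern (illustrative names, not Flink's actual classes): record fatal startup errors for the caller instead of terminating the JVM, so the surefire fork survives to report the failure normally.

```java
// Illustrative sketch (not Flink's actual code): an embedded test service
// that signals fatal startup errors instead of calling System.exit(...),
// which would kill the whole surefire fork with a nonzero exit code.
import java.util.concurrent.atomic.AtomicReference;

public class EmbeddedRuntimeSketch {
    private final AtomicReference<Throwable> fatalError = new AtomicReference<>();

    public void startService(boolean simulateFailure) {
        if (simulateFailure) {
            // BAD:  System.exit(-1) here terminates the test JVM (exit code 255).
            // GOOD: record the error and let the test harness fail the test.
            fatalError.compareAndSet(null,
                    new IllegalStateException("broker start failed"));
        }
    }

    public Throwable fatalError() {
        return fatalError.get();
    }

    public static void main(String[] args) {
        EmbeddedRuntimeSketch runtime = new EmbeddedRuntimeSketch();
        runtime.startService(true);
        System.out.println("fatal error recorded: "
                + runtime.fatalError().getMessage());
    }
}
```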
[GitHub] [flink] flinkbot edited a comment on pull request #18806: [FLINK-26105][e2e] Fixes log file extension
flinkbot edited a comment on pull request #18806: URL: https://github.com/apache/flink/pull/18806#issuecomment-1041656779 ## CI report: * ea2f81891557df0fa6ce6cba818a37a18c011c6f UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org